url: string (13 to 4.35k chars)
tag: string (1 class)
text: string (109 to 628k chars)
file_path: string (109 to 155 chars)
dump: string (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://localsearchforum.com/threads/new-wordpress-plugin-to-manage-gmb-posts.53383/
code
- Oct 23, 2018 - Has anyone had success with this fairly new plugin for WordPress to manage and schedule GMB posts based on WordPress posts? The plugin is Post to Google My Business, by tyCoon Media. With the new social aspect of "following" a company's GMB posts, this might be nice for semi-automating some content for the posts. I have looked in the past for something like this and didn't find anything, but it seems like a nice concept for managing clients' GMB posts that aren't offers or events.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100632.0/warc/CC-MAIN-20231207022257-20231207052257-00478.warc.gz
CC-MAIN-2023-50
549
7
https://forums.sifive.com/t/how-to-check-transmitting-data-over-apin/1649
code
I suppose there are a lot of different ways, depending on the nature of your signal and what standard you are trying to verify it to. The simplest would be to connect an LED (preferably in series with a, say, 10k resistor) to the pin. An old analogue oscilloscope might work well. I have one about 40 years old with I think 10 MHz bandwidth that I got for free. It’s pretty much fast enough for traditional AVR Arduino signals. Or you could connect that pin to another pin configured as an input and each time you write the output pin wait a short time (1 us should be plenty, if you can afford that) and then read the other pin and see if the correct data is there. Or you could use another Arduino-style board or a Raspberry Pi to make your own digital oscilloscope. There are a lot of ideas for that here https://www.instructables.com/id/PiScope-Raspberry-Pi-based-Oscilloscope/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390437.8/warc/CC-MAIN-20200525223929-20200526013929-00159.warc.gz
CC-MAIN-2020-24
883
5
https://drupal.stackexchange.com/questions/121011/custom-field-formatter-to-create-byline
code
I'm using the taxonomy module for authors on a site, which is sometimes recommended as a flexible way to store and display author information, since it makes it easy to list content by author. For some content types, I want to create a byline that looks something like this: by [author] on [date] In a first pass at a solution, I've created a custom field formatter that pretty much does what I want, and allows the user to choose author, or author + date, for term reference fields. It does show up as an option on all term reference fields; I'm not sure whether there's a way to restrict that. Before I go much farther with the solution, I wanted to check: with Drupal 7, is a field formatter the right way to approach this problem? Are there other approaches I should consider that are easier, more flexible, etc.?
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00186.warc.gz
CC-MAIN-2020-05
819
5
https://www.warriorforum.com/main-internet-marketing-discussion-forum/663511-nice-blog-marketing-next-steps.html
code
As a relative newbie to niche blog marketing, my question is: how many blog posts are sufficient to build traffic? I currently have 16 posts and I'm writing a total of 60 (5 per week). Half are outsourced and half I've done myself. The word count is right around the 450 mark. I've focused on linking posts together as well. My question is: what next? I realize that one niche blog isn't going to put food on the table and gas in the car. But I am looking to get a passive income out of this one (with plans to build a few more). Any ideas or best practices? I appreciate all of the great resources here. I'd love to hear what some of you pros think. Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00029.warc.gz
CC-MAIN-2022-21
654
3
https://direct-invoice.com/en/blog/brand-new-notification-system
code
We are really happy to announce our brand new notification system! Get the latest relevant information directly from the Direct Invoice interface. Our new in-app notification system will inform you of the following events: - Documents sent; - Documents marked as sent; - Documents viewed by the client; - New comments on a document; - Estimates accepted or refused; - Recurring template occurrences created; - Recurring template occurrences sent; - Reminders sent. Who will receive those notifications? Every user involved in the document's life cycle: - The creator; - The senders; - The users who left a comment; - The user who marked an estimate as accepted/refused; - The user who marked an invoice as paid/unpaid. By default, with Direct Invoice, you receive email notifications when something happens in your account. You can now choose not to receive those emails by simply unchecking the corresponding checkboxes in your profile. The notification system will be especially useful if you have a Direct Invoice account with multiple users. You will never miss important information again.
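The recipient rule described above ("every user involved in the document's life cycle") can be sketched as a simple set computation. The event names and record layout below are invented for illustration and are not Direct Invoice's actual data model:

```python
# Illustrative sketch: compute the notification recipient set for a
# document from its event history. The event kinds and dict fields are
# assumptions made for this example, not Direct Invoice's real schema.

def recipients(events):
    """Union of every user that touched the document."""
    involved = set()
    for event in events:
        kind, user = event["kind"], event["user"]
        if kind in {"created", "sent", "commented",
                    "estimate_decided", "payment_marked"}:
            involved.add(user)
    return involved

history = [
    {"kind": "created", "user": "alice"},
    {"kind": "sent", "user": "bob"},
    {"kind": "commented", "user": "carol"},
    {"kind": "payment_marked", "user": "bob"},
]
print(sorted(recipients(history)))  # ['alice', 'bob', 'carol']
```

In a multi-user account this set, minus each recipient's unchecked notification preferences, would be the delivery list.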
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00776.warc.gz
CC-MAIN-2023-50
1,089
20
http://www.freevistafiles.com/Core-Lab-Software-Development+MySQL-Developer-Studio.html
code
MyDeveloper Studio 3.00 beta - Convenient server browsing - Stored routine and SQL script debugger - Database project support - Advanced database administration tools - Enhanced SQL editor with code completion, parameter information, and code navigation - Visual query builder - Advanced SQL execution features - Easy database object manipulation - Powerful grid-based data editor - Easy data export - Database Export and Import support - Flexible connectivity - Convenient user interface - Comprehensive help system MyDeveloper Studio is a standalone application that runs with .NET Framework 1.1 or 2.0. MySQL Developer Studio supports all features of MySQL server versions 3.23 and higher. Related Products: MyDeveloper Tools and MyDirect .NET. What's new in this version: - SQL syntax check - Brand new Query Builder - Extended Data Export - Support of new table features and storage engines - UTF-8 supported in database export - Template system usability improved - Changes in Database Explorer: navigation history added, displayable column type and size - Added: user comments for table columns, SQL Log output customization
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164447901/warc/CC-MAIN-20131204134047-00031-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
2,727
30
https://www.spider.com/contact
code
5 Penn Plaza, 23rd Floor New York, NY 10001 (800) 713-7278 The Internet is the largest database in the world. Spider allows real-time access to this database. Captchas and IP blocking can't limit your crawling freedom when you use us. With millions of residential IP addresses, we get data.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00528.warc.gz
CC-MAIN-2021-49
343
3
https://www.i2m.univ-amu.fr/events/gabor-multipliers-applied-and-theoretical-aspects/
code
Hans G. Feichtinger, Institute of Mathematics, University of Vienna Date(s) : 11/12/2014 10 h 00 min - 11 h 00 min Gabor multipliers are linear operators arising similarly to Fourier multipliers: given an input signal, its Gabor expansion is obtained; after multiplication with a sequence of numbers, the synthesis operator is applied. From an engineering point of view they are like the actions of an audio engineer who decides, in a time-variant manner, how the different frequency bands of a signal are amplified or damped. In the mathematical description one deals with function spaces, classes of operators, symbols, etc. For example, the question of best approximation of a given Hilbert-Schmidt operator by Gabor multipliers (in the Hilbert-Schmidt norm) is translated into an approximation problem for spline-type functions (comparable to the question of approximating an L2-function on R by a cubic spline function). Gabor multipliers are easily implemented, and even the theory of discrete Gabor multipliers provides a non-trivial and interesting chapter of linear algebra.
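The analysis-multiplication-synthesis pipeline in the abstract can be illustrated numerically. The sketch below uses a degenerate Gabor system (non-overlapping rectangular windows with the DFT as the frequency transform) purely to make the structure concrete; real Gabor multipliers use overlapping smooth windows and proper frame-theoretic synthesis:

```python
import cmath

# Sketch of a Gabor multiplier under simplifying assumptions:
# non-overlapping rectangular windows of length N, DFT as the
# time-frequency transform. Analysis -> multiply by a symbol -> synthesis.
# Assumes len(signal) is a multiple of N.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def gabor_multiplier(signal, symbol, N):
    """Apply symbol[k] to the k-th frequency coefficient of each block."""
    out = []
    for start in range(0, len(signal), N):
        coeffs = dft(signal[start:start + N])             # analysis
        shaped = [m * c for m, c in zip(symbol, coeffs)]  # multiplication
        out.extend(idft(shaped))                          # synthesis
    return out

signal = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
identity = [1.0] * 4   # all-ones symbol: the operator is the identity
restored = gabor_multiplier(signal, identity, 4)
print(all(abs(r - s) < 1e-9 for r, s in zip(restored, signal)))  # True
```

Zeroing part of the symbol (e.g. `[1.0, 0.0, 0.0, 0.0]`) turns the same operator into a crude band filter, which is exactly the audio-engineer picture in the abstract.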
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948234904.99/warc/CC-MAIN-20240305092259-20240305122259-00214.warc.gz
CC-MAIN-2024-10
1,076
5
https://nowakdamian.com/portfolio/dpdk-open-source/
code
The Data Plane Development Kit is an open source project started by Intel that builds on the idea of using dedicated CPU cores that poll the network hardware, instead of interrupt-driven networking, to speed up packet processing enormously. For more information I'd suggest taking a look at the project's main page: https://www.dpdk.org/ My role in the project was Network Software Developer. I was mostly working on a small part of DPDK called "QAT", which is an API for Intel hardware called QuickAssist Technology, supporting on-chip cryptography and compression. To see my contributions to the project (including code I wrote), check out the Patchwork webpage:
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00070.warc.gz
CC-MAIN-2023-50
625
3
https://www.trailvalleycreek.ca/opportunities
code
Interested in working with us? Check out these great opportunities! Postdoctoral Fellowship: Linking hydrological and permafrost/groundwater models for improved estimates of climate impacts on northern waters Changes to northern catchments related to climate warming affect the quantity and quality of downstream waters through complex interactions among physical and biological processes. Across the Northern Water Futures (NWF) study domain there are ongoing changes in the spatial and temporal variability in active layer thickness, increasing occurrence of taliks and winter flows, changes in vegetation and snowcover, and complex changes in streamflow. As permafrost continues to thaw, the role of increasing taliks and sub- and supra-permafrost groundwater flow on streamflow is expected to be enhanced. However, the links between permafrost, groundwater and streamflow are poorly known, and few hydrological models include sophisticated permafrost/groundwater model components. There is growing evidence that lateral flows of water at fine horizontal scales play an important role in controlling permafrost thaw and streamflow. Through this postdoctoral position, we will improve our understanding of, and ability to model, the interactions between surface and subsurface hydrology under conditions of thawing permafrost. NWF has developed hydrological, geophysical, and remote sensing datasets, and we will use these to test and improve a suite of legacy and next-generation hydrologic models including the semi-distributed Cold Regions Hydrological Model-Arctic and the multi-scale, multi-extent, variable complexity Canadian Hydrological Model. We will consider, and test, a variety of key permafrost/groundwater processes not currently included in CRHM-A and CHM. These could include: the SUTRA-Ice groundwater model, the subsurface components of GEOtop or a permafrost model such as CryoGrid. 
This effort will determine the strengths and weaknesses of these models as related to the interactions between surface hydrology, sub-surface hydrology, and permafrost, and assess a wide range of future hydrological changes within this rapidly changing environment. We invite applications for a Postdoctoral Fellow interested in coupled surface hydrology and permafrost/groundwater modelling who will make use of the extensive suite of NWF measurements to support this effort to better understand the implications of climate warming on water across the NWT. The Fellow will: - Conduct an extensive review of existing GWF/NWF hydrological models and existing groundwater/permafrost models and make recommendations on the best approach to couple such models for climate impact studies; - Test GWF/NWF surface hydrology models at key NWF study sites in the NWT; - Test appropriate groundwater/permafrost models at key NWF study sites in the NWT; - Couple surface hydrology and groundwater/permafrost models, and test them at NWF study sites. The candidate will be advised by Dr. Philip Marsh (Wilfrid Laurier University) and will work closely with an advisory group including Drs. Dave Rudolph (University of Waterloo), Jeff McKenzie (McGill University), Chris Spence (Environment and Climate Change Canada), Oliver Sonnentag (Université de Montréal), and Aaron Berg (University of Guelph). The ideal candidate should have a PhD in a relevant discipline (e.g. geography, environmental science, engineering, physics, atmospheric science) and experience in high resolution, spatially distributed hydrologic, groundwater, or permafrost models. The candidate should possess aptitude and enthusiasm for developing and applying high resolution, physics-based hydrological models in order to understand past changes in hydrology and to consider future changes under a rapidly changing climate. Proficiency with appropriate modelling tools is essential. Experience in northern environments is an asset. 
A salary of $55,000 per year including benefits, plus a stipend of $2,000/year to cover direct research expenses. This position currently has funding for one year. How to Apply: Please send i) a cover letter highlighting relevant experience and your interest in the position; ii) a curriculum vitae; and iii) names and contact information for two referees. Email inquiries or application materials to Philip Marsh ([email protected]) with the subject line “NWF PDF Hydrology Application.” We will begin reviewing applications on December 15th, 2020. We anticipate an April 1, 2021 start date, but there is flexibility in this. International and remote candidates will be considered. Equity, Diversity, and Inclusion The impact of leaves (e.g. parental leave, extended leaves due to illness, etc.) will be carefully considered when reviewing candidates’ eligibility and record of research achievement. Candidates are encouraged to explain in their cover letter how career interruptions may have impacted them. Diversity and creating a culture of inclusion is a key pillar of Wilfrid Laurier University’s Strategic Academic Plan and is one of Laurier’s core values. Laurier is committed to increasing the diversity of students and postdocs and welcomes applications from candidates who identify as Indigenous, racialized, having disabilities, and from persons of any sexual identities and gender identities. Indigenous candidates who would like to learn more about equity and inclusive programming at Laurier are welcome to contact the Office of Indigenous Initiatives at [email protected]. Candidates from other equity-seeking groups who would like to learn more about equity and inclusive programming at Laurier are welcome to contact Equity and Accessibility at [email protected]. Graduate Student Research Opportunities We are always interested in bringing new members onto the team! 
Please contact us for more information about ongoing or future research opportunities at the undergraduate, Masters and Doctoral level Masters and Doctoral research opportunities in hydrological change in the Canadian Arctic, Wilfrid Laurier University, Waterloo, Ontario, Canada. Professor Philip Marsh, Climate warming affects the hydrology of the Arctic through complex interactions between the climate; snow; surface and groundwater runoff; lakes, ponds and wetlands; soil moisture; permafrost; evapotranspiration; beavers; and vegetation for example. Understanding the controlling processes, as well as understanding past changes in hydrology and the range of possible future scenarios of change requires the convergence and integration of field observations; process studies; hydrologic and climate data sets; remote sensing; and high-resolution hydrologic modelling. Professor Marsh has been building such a research program in the Inuvik, NWT region over the past decades. As a main component of this effort, research has been continuously carried out at the Trail Valley Creek (TVC) Research Station (Trailvalleycreek.ca) and the Havikpak Creek watershed for the last 30 years. This research has allowed the development of a unique, long term dataset, and the testing and development of hydrologic models. Examples of past research in these watersheds are listed in Professor Marsh’s Google Scholar profile. We invite graduate student applications for MSc and PhD positions in understanding and predicting Arctic hydrologic change under a rapidly changing climate. 
Potential research could include: Analysis of long-term climate and hydrologic data sets at TVC and nearby areas to understand past changes in hydrology, Hydrologic process studies of snow accumulation and melt; hillslope hydrology; and development of taliks and effects on suprapermafrost groundwater flow, Testing and improvement of high-resolution hydrologic models to consider past changes in hydrology, and/or Applying these improved hydrologic models to understand the effects of climate change scenarios on future hydrology. Ideal candidates should have previous degrees in relevant disciplines (e.g. geography, environmental science, engineering, physics, atmospheric science), and should possess aptitude and enthusiasm for understanding the impacts of climate change on Arctic hydrology. We especially encourage applicants with an interest in high-resolution hydrologic modelling. Proficiency with appropriate modelling tools is essential. Experience in northern environments is an asset, but not required. Graduate students receive competitive funding packages that come from a combination of teaching assistantships, internal scholarships, and research assistantships for example. All students are strongly encouraged to apply for a variety of external scholarships. Dr. Marsh’s students have been very successful in receiving such awards over the past years. International PhD applicants may apply for awards to offset the fee differential between Canadian and International student fees. Funding for Arctic field research is provided by external research grants. Wilfrid Laurier University Geography and Environmental Studies Department has a joint graduate program with the University of Waterloo. This is the second largest Geography graduate program in Canada, and the sixth largest in North America. 
You will find a large number of students, research associates, post doctoral fellows, and faculty exploring a wide range of research interests and offering a challenging and stimulating research environment. For admission in September 2021, candidates are encouraged to contact Dr. Philip Marsh. Please submit a cover letter highlighting relevant experience and your interest in joining our research team, a list of courses taken and marks, and a curriculum vitae to Philip Marsh ([email protected]) with the subject line “AHRG Graduate Student”. Dr. Philip Marsh, Professor and Canada Research Chair, Wilfrid Laurier University. Philipmarsh.ca
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703496947.2/warc/CC-MAIN-20210115194851-20210115224851-00112.warc.gz
CC-MAIN-2021-04
9,828
35
https://www.delftstack.com/howto/python/xlrd-python/
code
This article introduces how to install xlrd in Python. xlrd in Python Sometimes while working in Python, we need to work with spreadsheets. Python has many solutions for working with spreadsheets, and the xlrd module is one of them. Using the xlrd module, we can retrieve information and read data from spreadsheets, and we can go through various sheets to retrieve data from multiple spreadsheets. (Note that xlrd is a reading library; writing spreadsheets is handled by companion packages such as xlwt.) We can easily install the xlrd module using pip:

pip install xlrd

If your pip is tied to Python 2, use the Python 3 variant instead:

pip3 install xlrd

We will go through an example in which we will access a spreadsheet and get its content using xlrd. So let's create a new spreadsheet with columns such as the name, email, and roll number, and add some data. Now, let's try to get data from the spreadsheet:

import xlrd

# Location of the file
location = "spreadsheet.xls"

# Open the workbook and select the first sheet
workB = xlrd.open_workbook(location)
worksheet = workB.sheet_by_index(0)

# Read the cell at row 0, column 0
print(worksheet.cell_value(0, 0))

When we run the above code, the output is the contents of the first cell. So, in this way, we can easily install the xlrd module and use it to work with spreadsheets. The xlrd module provides many more functions for different spreadsheet-related tasks.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474360.86/warc/CC-MAIN-20240223021632-20240223051632-00093.warc.gz
CC-MAIN-2024-10
1,291
25
https://www.thasler.com/blog/blog/VectorMaps-Intro
code
As vector maps have been really successful (see the maptiler example) in the last few years, I want to upgrade my own map and toolchain to the fancy new vector tiles technology. This series of blog posts serves mainly to sort out my thoughts and hopefully make life easier for the next guy to try this. This is what the map looks like at the moment: As the process of presenting a vector tile map to an end-user requires a whole set of tools (aka a toolchain), I will try to cover all parts of the design and implementation process. All project files will be stored on github.com/henrythasler/vectortiles. Create an issue there if you have any questions, hints or whatevers. The following goals MUST be reached by the new toolchain/setup: - A free source for spatial data (e.g. openstreetmap) - Means of pre-processing and storing the spatial data (e.g. in a database) - Content and quality of the resulting map is roughly the same as with the current toolchain - Use open source software whenever possible - Dockerize the whole toolchain for portability - Support for high-DPI screens The following items would be nice-to-have: - Existing style definitions (CartoCSS, Mapnik XML) can be reused/converted - Map design process is supported by some GUI/editor - Toolchain can also generate raster tiles for offline use with low-end devices (smartphones) Sifting the web for useful information During my initial research I found these pages on various related topics helpful (random order): - How to make mvt with PostGIS by Parlons Tech - Vector tiles, PostGIS and OpenLayers by Giovanni Allegri - awesome-vector-tiles by mapbox - Using the new MVT function in PostGIS by Chris Whong - Vector Tiles - Introduction & Usage with QGIS by Pirmin Kalberer - MVT generation: Mapnik vs PostGIS by Rafa de la Torre We have set some preliminary goals for this whole project. Some may be revised as the project proceeds. The next post will present some results of the initial research regarding tools and methods.
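Since the planned toolchain revolves around tiled spatial data, one small piece of the underlying arithmetic is worth making concrete: converting slippy-map z/x/y tile coordinates into a Web Mercator (EPSG:3857) envelope, which is what MVT generators (e.g. PostGIS's ST_TileEnvelope) compute for every tile. The function below is an illustrative sketch, not code from the post's toolchain:

```python
# Sketch of the tile-envelope arithmetic behind vector-tile generation:
# slippy-map z/x/y -> Web Mercator (EPSG:3857) bounding box.

WEBMERC_MAX = 20037508.342789244  # half the Web Mercator world width, metres

def tile_envelope(z, x, y):
    """Return (xmin, ymin, xmax, ymax) in EPSG:3857 for tile z/x/y."""
    world = 2 * WEBMERC_MAX
    size = world / (2 ** z)        # tile edge length at this zoom level
    xmin = -WEBMERC_MAX + x * size
    ymax = WEBMERC_MAX - y * size  # slippy y counts down from the top edge
    return (xmin, ymax - size, xmin + size, ymax)

# Tile 0/0/0 covers the whole Web Mercator world:
print(tile_envelope(0, 0, 0))
```

A tile server plugs such an envelope into a spatial query (clip geometries to the box, then encode as MVT), so getting this mapping right is the first step of any of the PostGIS-based approaches linked above.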
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201329.40/warc/CC-MAIN-20190318132220-20190318153622-00009.warc.gz
CC-MAIN-2019-13
1,997
24
http://careers.aarviencon.com/?industryTypeId%5B0%5D=23&functionAreaId%5B0%5D=1&qp=dummyvar
code
2 - 5 yrs. Visiting sites / client office as and when required; Education: B.Com with Accounting, Income tax, I... 5 - 10 yrs. Requires an undergraduate degree from an accredited college or university and 5+ years of related wo... We will consider your Profile for future Jobs © 2015 Aarvi | All Rights Reserved
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946578.68/warc/CC-MAIN-20180424080851-20180424100851-00151.warc.gz
CC-MAIN-2018-17
329
6
https://www.jingqiao.design/about
code
I am a UX designer and research-based strategic thinker from Art Center College of Design. I love to be a detective, understanding how people live: their interests, their culture, their struggles. I use these insights to create solutions that can be shared visually with my team, using motion, storyboards and VR/AR technology. I speak English, Chinese & Japanese. Pro-gaming & making friends all around the world. Immersing myself in nature and breathing the fresh air with my cat. I am working with a musician on music production for BGM. Since I love playing games during break time, I attended USC Games jams and participated in USC VR game productions with game engineers. Link to USC VR games.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178377821.94/warc/CC-MAIN-20210307135518-20210307165518-00517.warc.gz
CC-MAIN-2021-10
688
8
https://community.livejournal.com/-sicksadworld/profile
code
Feel free to join and add some of your own WTF factor. This can come from random journal babble, the news, or wherever you find it. • Please credit your sources. • Please put pictures/long posts/videos behind an lj-cut. • Please copy+paste news articles (and place them behind an lj-cut) as well as linking to them, as sometimes they expire. • We'd prefer if you'd tag your entries to make the community easy to browse, but if you choose not to, a moderator will tag it for you. • Flaming and spamming will not be tolerated. One of the mods will first give you a warning, and if you continue to violate this rule you will be banned. • Please do not post any illegal content or anything that violates Livejournal's TOS. Depending upon the severity of the violation this could result in an immediate ban. • The moderators reserve the right to immediately delete any posts or ban any person(s) that they believe are disruptive to the community.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00507.warc.gz
CC-MAIN-2020-40
955
2
https://superuser.com/questions/10716/how-to-make-home-or-end-keys-work-in-mc-running-on-os-x-ssh/471611
code
I installed MacPorts on OS X 10.5 and I found out that when I connect to the computer using SSH and use mc (Midnight Commander), the Home and End keys do not work. I should mention that I'm using PuTTY, and I am able to use the keyboard very well on Linux machines like Fedora, Ubuntu, etc. Here is my PuTTY keyboard configuration (a configuration I found to be optimal over time): - Backspace key: 127 - Home/End keys: Standard - Function keys: Xterm R6 - Cursor keys: Normal - Numpad: Normal - Terminal type string: xterm-color I'm looking for a command-line solution/script that makes these changes; that would make it much easier to create an OS-preparation script for configuring a new system.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00257.warc.gz
CC-MAIN-2021-31
666
12
http://www.linuxquestions.org/questions/slackware-14/qemu-and-kqemu-on-slackware-465123/page3.html
code
my fstab before running this is slightly different, I have my fstab setup the way ATI wants it: tmpfs /dev/shm tmpfs defaults 0 0 Maybe qemu wants me to leave out the leftmost 'tmpfs' and leave it blank? As long as there is a mounted tmpfs it's OK for the kqemu kernel module and it will use it. 
When such is unavailable and you do start QEMU with the kqemu kernel module loaded, you will receive a warning about kqemu not being able to use a memory filesystem, and it will revert to the MUCH slower /tmp directory to create its virtual machine's RAM image. I tried qemu too, and then after I saw nykey's post about vmware I tried that too and got networking, unlike in qemu. My only problem is the sound: I don't have /dev/dsp. Why is that? I have the latest kernel 188.8.131.52 compiled by hand using the config from Pat's 184.108.40.206 + some modifications. I have the latest alsa-1.0.11. How can I activate /dev/dsp? After I add a Sound Device at the drop-down list I only have Autodetect, no /dev/dsp, and when I start the virtual server I get an error that no sound device exists. Sound is working on the host (Slackware). Any ideas please? I'd just like to give you some feedback from someone who never did this before and is new to Linux, Eric. These are just things that went thru my head, questions and concerns that you may (or may not) want to touch on in the wiki. Nothing major here as I got thru it in no time, except for my stupidity on the root account owning my isos...omg. As if you did not have enough to do, right, lol. 1. I get this echoed in the console loading up qemu: Could not configure '/dev/rtc' to have a 1024 Hz timer. This is not a fatal error, but for better emulation accuracy either use a 2.6 host Linux kernel or type 'echo 1024 > /proc/sys/dev/rtc/max-user-freq' as root. So I guess I'll write a sudo script to let me do this in your launch script. 2. Maybe telling people (as stupid as this sounds) how to properly close the guest OS. For instance, I wasn't sure if it was OK to just "x" out of Slackware as guest, or tell the guest Slackware to shut down. 3. My first time running qemu it went full screen, for whatever reason I don't know. So I just went into qmenu and hit help and was fine, but it may be worth mentioning. 4. How to verify kqemu is doing something? 
Is the fact that it is modprobed mean it's working for me? Anything like "kqemu --status" kind of like what we have in "wpa_cli --status" , etc? 5. When I first made my slackware guest, I realized I wanted the image to sit on a different partition as it's permanent home. I was not sure if I could move it about on the hard drive, or to a separate physical drive. simple copy/paste worked for me. 6. maybe mention about my question above about changing size of the guest if need be down the road. 7. this may be more confusing, but i was not sure if I started by using cd1 of slackware to start the installer, and then later use an "iso" for cd2 on the install. the fact that you can mix/match. for me I had disc one of slackware, but no iso for it. trial and error prevailed, but was a question. 8. Maybe tell people that they can use a program like alcohol to make an iso of slackware cd's if they are in windows, or can use k3b in slackware and that k3b is in /extra. I didnt know how to make an iso in linux to be honest. I tried my nero in windows that came with my burner...but was then stuck with an 'nrg' file, which that didnt work either. I tried nrg2iso but get segmenation faults. Probably this is beyond the scope but thought I'd mention it. 9 the ctrl+alt work for me only with the ctrl+alt keys on the left side of my keyboard. the ctrl+alt keys on the right side of my keyboard don't do anything. maybe worth mentioning. I'm lefty so I went to the ctrl+alt on my right side of my keyboard to get to qmenu and was confused at first. 10. it appears the qemu is kernel independant, and kqemu is kernel dependent so upgrades to kernel will require rebuild of the kqemu. something like dont forget to remake a new kqemu if you upgrade your kernel. 11. you mention using a group for qemu. personally, I dont know how to do that. so i'm gonna go read my slackware book and find out how to. hope this helps, and again thank you for the build script and wiki. A truckload of notes there! 
I think I can try and address some of those in the Wiki page. I'm glad you solved most of them for yourself though (you're not the newbie you like to pose as, you know :-) ). You can have a look at the rc script I wrote for the case where you want a better networking experience than the default usermode networking (which is fine, too, if you have no intention of connecting TO the guest inside the Virtual Machine and instead are satisfied with just connecting out FROM the guest OS): http://www.slackware.com/~alien/slac.../rc.vdenetwork It is part of the Slackware package for VDE which I also mention in my Wiki page. Maybe you can still pick out a couple of worthwhile morsels there even if you don't want to use VDE to provide the networking glue for QEMU.

Presently I don't get any network on the Slackware guest, but that's no big deal as I plan to do the same thing that you do: make packages, kernels, etc. in the virtual Slackware. Actually, it's probably better that I don't have networking on the Slackware guest, now that I think of it. One last thing regarding the wiki: when I was putting Slackware in, I made a guess to add a swap partition and another partition for Slackware's "/", and when it came to lilo, I told Slackware to use the master boot record option too. I was not sure what to do but it worked out well.

OK, so I've tested VMware; it is quite impressive. I actually managed to get my video card up to 64MB (the maximum supported is 128MB, but I thought not to push it for now; I will try that too). Alien, I would still like to get the hold of QEMU too, mostly to compare these two, but first I want to know: is it possible with QEMU to enlarge the video card memory? Because in QEMU the default memory for the video card is 4MB. With 64/128 in VMware I'm actually able to play games, and performance is quite acceptable.
The Guest OS inside QEMU runs in an emulated hardware environment. That includes emulated disks. The MBR, to the Guest, is just another sector in the virtual disk image file that you prepared for QEMU. Your Guest OS will not be able to address any physical hardware components directly (apart from USB devices, for which such a translation from guest to host is made). You will not be able to harm your host computer (nor its host OS) by doing weird things in the Guest, unless of course you hit a bug in QEMU :-)

Yeah, I know it's not meant for games, and that's not my intention for using it, but still, at 4MB I have such a lousy resolution and it moves like crap because of the video card. I actually keep VMware at 32MB and it's working just fine (btw I tried 128MB, and it worked too)... and in VMware Direct3D is available too when adding video RAM. But still no answer... can I do the same in QEMU? "Add" more RAM to the video card?
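One of the feedback items above asked how to verify that kqemu is actually loaded. A minimal sketch, assuming a Linux host with /proc mounted; the helper names (`module_loaded`, `kqemu_loaded`) are made up for illustration:

```python
def module_loaded(proc_modules_text, name):
    """Return True if `name` appears as a loaded module in /proc/modules content."""
    for line in proc_modules_text.splitlines():
        fields = line.split()
        if fields and fields[0] == name:
            return True
    return False

def kqemu_loaded():
    """Check the live system; assumes a Linux host with /proc available."""
    try:
        with open("/proc/modules") as f:
            return module_loaded(f.read(), "kqemu")
    except OSError:
        return False

if __name__ == "__main__":
    print("kqemu loaded:", kqemu_loaded())
```

The same idea works from a shell with `grep kqemu /proc/modules` or `lsmod`.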
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189903.83/warc/CC-MAIN-20170322212949-00489-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
8,699
45
http://www.martinguth.de/business-intelligence/a-matter-of-data-visualization/
code
Some time ago my boss asked me for advice on a report he had just created. Here's an example with sample data of what he wanted to visualize: for different dates we have a count of objects which had a specific final status. You can reproduce the sample data using the following select statement: query-with-inline-sample-data

The goal of the visualization should be to show the distribution across the different statuses and the overall development of the measured data. Our first approach was to use a line chart like this:

The line chart shows the overall development pretty well; however, reading the distribution information from it is a bit challenging, as we have all these intersections of lines. Why not try a bar chart? Well, the classical bar chart with a bar for each status seems not appropriate: more space is needed, and if one wanted to compare the development of one status, one's eye would have to jump around in order to grasp that information. One way to solve that would be to use an individual bar chart for each status. However, that would result in some waste of space as well. Therefore we tried a stacked bar chart next.

That now looks way better. However, the height of the bars jumps around with the absolute values. Look at the values for status 1 for September 28th and September 29th: their share is nearly identical (36,9% vs. 36,8%), yet September 28th looks much bigger. If I were Nicolas Bissantz I would now take my ruler and show you the "lie factor" of the diagram... but I don't have to go that far. Let's now scale that bar chart to 100% and presto, we have the distribution information we wanted to show. September 28th and September 29th are not that big of a difference regarding status 1 anymore. I think that's quite a neat presentation of the distribution. However, it lacks one piece of information... reducing the visualization to percentages doesn't give us the overall development anymore.
September 30th, which had way lower volumes than the other days, is scaled to 100% as well. To circumvent that we could do two things:

- Add absolute numbers to the chart (probably not that elegant, but possible).
- Add another small line chart on top showing the overall development (that's what the total column in my resultset is for :-)). Even though this needs some additional space, I think it's a better solution than just adding the numbers to the bar chart.

The data visualization was done with Cubeware Cockpit. However, these (basic but indispensable) chart types should be available in every decent reporting tool (of course Excel has them as well ;-)).

What do you think about my solution, dear reader? Please let me know if you would have done it completely differently, and how.
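The 100% scaling step that makes the distribution comparable across days can be sketched in a few lines of Python. The counts below are made-up sample values, not the post's actual data:

```python
# Hypothetical counts per status for three dates (made-up sample data).
data = {
    "2015-09-28": {"status 1": 590, "status 2": 400, "status 3": 610},
    "2015-09-29": {"status 1": 405, "status 2": 300, "status 3": 395},
    "2015-09-30": {"status 1": 80,  "status 2": 60,  "status 3": 60},
}

def to_percent(day_counts):
    """Scale one day's counts to a 100% distribution."""
    total = sum(day_counts.values())
    return {status: 100.0 * n / total for status, n in day_counts.items()}

# The totals drive the small line chart on top; the percentages drive the bars.
totals = {day: sum(counts.values()) for day, counts in data.items()}
shares = {day: to_percent(counts) for day, counts in data.items()}

for day in sorted(data):
    line = ", ".join(f"{s}: {p:.1f}%" for s, p in shares[day].items())
    print(f"{day} (total {totals[day]}): {line}")
```

Feeding `shares` into a stacked bar chart and `totals` into a small line chart reproduces the combined layout described above.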
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101282.74/warc/CC-MAIN-20231210060949-20231210090949-00808.warc.gz
CC-MAIN-2023-50
2,729
14
https://education.uci.edu/ucisoe_news/pubs_xu_2018
code
Xu, D., & Ran, X. (in press). Non-credit education in community colleges: Students, enrollment patterns, and academic outcomes. Community College Review.
Cung, B., Xu, D., Eichhorn, S., & Warschauer, M. (in press). Getting academically underprepared students ready through college developmental education: Does the course delivery format matter? American Journal of Distance Education.
Glick, D., Cohen, A., Li, Q., Xu, D., & Warschauer, M. (in press). Predicting success, preventing failure: Using learning analytics to examine the strongest predictors of persistence and performance in an online English language course. In D. Ifenthaler (Ed.), Using learning analytics to support study success.
Xu, D. (2018). Academic performance in community colleges: The influences of part-time and full-time instructors. American Educational Research Journal, 1-39.
Xu, D., & Dadgar, M. (2018). How effective are community college remedial math courses for students with the lowest mathematics skills? Community College Review, 46, 62-81.
Xu, D., & Li, Q. (2018). Gender achievement gaps among Chinese middle school students and the role of teacher's gender. Economics of Education Review, 67, 82-93.
Xu, D., Solanki, S., McPartlan, P., & Sato, B. (2018). EASEing students into college: The impact of multidimensional support for underprepared students. Educational Researcher, 47(7), 435-450.
Xu, D., Jaggars, S. S., Fletcher, J., & Fink, J. (2018). Are community college transfer students "a good bet" for four-year admissions? Comparing academic and labor market outcomes between transfer and native four-year college students. Journal of Higher Education, 89, 478-502.
Xu, D., Ran, X., Fink, J., Jenkins, D., & Dundar, A. (2018). Collaboratively clearing the path to a baccalaureate degree: Identifying effective 2- to 4-year transfer partnerships. Community College Review, 46(3), 231-256.
Cung, B., Xu, D., & Eichhorn, S. (2018). Increasing interpersonal interactions in an online course: Does increased instructor email activity and voluntary meeting time in a physical classroom facilitate student learning? Online Learning, 22(3), 193-215.
Hodara, M., & Xu, D. (2018). Are two subjects better than one? The causal effects of developmental English courses on native and non-native English speakers in college. Economics of Education Review, 66(C), 1-13.
Ran, F. X., & Xu, D. (2018). Does contractual form matter? The impact of different types of non-tenure track faculty on college students' academic outcomes. Journal of Human Resources.
Solanki, S. M., & Xu, D. (2018). Looking beyond academic performance: The influence of instructor gender on student motivation in STEM fields. American Educational Research Journal, 55(4).
Hodara, M., Xu, D., & Petrokubi, J. (2018). Chapter 5: A case study using developmental education to raise equity and maintain standards. In R. Openshaw & M. Walshaw (Series Eds.) & S. Mahsood & J. McKay (Vol. Eds.), Palgrave studies in excellence and equity in global education: Achieving equity and quality in higher education (pp. 97-117). Basingstoke, UK: Palgrave Macmillan.
Ma, T., Wood, K. E., Xu, D., Guidotti, P., Pantano, A., & Komarova, N. (2018). Admission predictors for success in a mathematics graduate program: Letter to the editor. Notices of the American Mathematical Society, 65, 676.
Jiang, S., Schenke, K., Eccles, J. S., Xu, D., & Warschauer, M. (2018). Cross-national comparison of gender differences in the enrollment in and completion of science, technology, engineering, and mathematics Massive Open Online Courses. PLoS ONE, 13(9).
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00191.warc.gz
CC-MAIN-2023-40
3,841
17
https://pkg.go.dev/github.com/operator-framework/operator-sdk/pkg/helm/engine
code
Published: Dec 10, 2019
Imported by: 0
Package engine provides an implementation of Helm's templating engine required for a Helm operator.
NewOwnerRefEngine creates a new OwnerRef engine with a set of metav1.OwnerReferences to be added to assets.
OwnerRefEngine wraps a tiller Render engine, adding ownerrefs to rendered assets.
Render proxies to the wrapped Render engine and then adds ownerRefs to each rendered file.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.42/warc/CC-MAIN-20231203161435-20231203191435-00373.warc.gz
CC-MAIN-2023-50
680
13
https://www.bluetooth.com/ja-jp/blog/bluetooth-pairing-part-5-legacy-pairing-out-of-band/
code
In previous blogs, we touched on topics such as Passkey Entry and Numeric Comparison, which are two types of pairing methods. Today, I will introduce another one: out of band. The out of band (OOB) association model is designed for scenarios where an out of band mechanism is used both to discover the devices and to exchange or transfer cryptographic information which is then used in the pairing process. Out of band is a flexible option for developers that allows you to define some of your own pairing mechanisms, so the security level depends on the out of band protection capability. Now, let's have an inside look at it.

1. Phase 1 – Pairing Feature Exchange

In my blog Bluetooth Pairing Part 4, there is a table similar to Table 1, which shows the frame structure for the pairing request/response. In this table, there is one field named "OOB Data Flag", which is 1 byte in length. For the definition of "OOB Data Flag", please refer to Table 2. The OOB data flag carries the values used to indicate whether OOB authentication data is available.

2. Bluetooth LE Legacy Pairing

When both Bluetooth® devices use LE legacy pairing, the process is easy to understand. For details about the legacy pairing method selection mapping, please refer to Table 3. I've already highlighted OOB selection in this table, and you can see that:

- Both devices MUST set their OOB data flag if they want to use OOB for pairing;
- If one of the devices sets its OOB data flag but the other does not, both devices will check the MITM flag, which is in the "AuthReq" field (Table 1, marked in green). If either device sets its MITM flag, the pairing method will be selected by the mapping of IO Capabilities to pairing method. Please refer to Bluetooth Core Specification v5.0, Vol 3, Part H, Table 2.8 for the mapping detail.
- Otherwise, use "Just Works" as the pairing method.

3.
Simplicity from OOB

Currently, smartphones and tablets have Bluetooth® low energy capabilities as standard, and as we have seen there are many ways to use Bluetooth to connect devices together. Another popular way to pair Bluetooth devices is to use NFC to 'tap to pair'. Because of NFC's super low range, some developers use the close NFC proximity between devices as an assurance that the two devices are indeed meant to be paired together. So, NFC can be a good communications interface for OOB pairing.

The user's experience differs a bit when they use OOB for pairing. As an example, the user has one smartphone and one wristband; both devices have Bluetooth low energy and an NFC interface. The user initially touches the two devices together and is given the option to pair. If "YES" is selected, the pairing is successful. This is a single-touch experience where the exchanged information is used on both devices… it's cool.

Interested in pairing? Read the other posts in our pairing series:
Part 1: Pairing Feature Exchange
Part 2: Key Generation Methods
Part 3: Low Energy Legacy Pairing Passkey Entry
Part 4: LE Secure Connections – Numeric Comparison
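The selection rules from the legacy-pairing section above can be condensed into a tiny decision function. This is a sketch of the spec logic, not real stack code; the IO-capability mapping is left as a placeholder string:

```python
def select_legacy_pairing(oob_a, oob_b, mitm_a, mitm_b):
    """Pick the LE legacy pairing method from the exchanged flags.

    oob_a / oob_b:   OOB data flag of each device (True = OOB data present).
    mitm_a / mitm_b: MITM bit from each device's AuthReq field.
    """
    if oob_a and oob_b:
        # Both devices set their OOB data flag -> use OOB.
        return "OOB"
    if mitm_a or mitm_b:
        # A real stack consults the IO-capability mapping here
        # (Core Spec Vol 3, Part H, Table 2.8); placeholder only.
        return "IO capability mapping"
    # Neither OOB on both sides nor MITM requested.
    return "Just Works"
```

For example, one device with OOB data but no MITM requirement on either side falls through to "Just Works", exactly as the bullet list describes.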
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817014.15/warc/CC-MAIN-20240415174104-20240415204104-00387.warc.gz
CC-MAIN-2024-18
3,065
18
http://www.techadvisor.co.uk/forum/helproom-1/wireless-issues-need-help-badly-333015/
code
The Operating System on the computer should not matter. With the WRT54GS you can connect 33 computers wirelessly to the router. What's the firmware on the router?
For an updated list of firmwares for Linksys devices, click here
For instructions to upgrade the firmware, click here
For help with Wireless Security, click here
As for the wireless security on the router, I would suggest using WEP 64 Bits.
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828134.97/warc/CC-MAIN-20171024033919-20171024053919-00712.warc.gz
CC-MAIN-2017-43
392
4
http://www.couchbase.com/forums/thread/memcache-refresh-issues
code
Memcache refresh issues
We have 2 web servers (W1 and W2) which are load balanced, and each has memcache (M1 and M2). We have limited config data that we store in Memcache. I can edit database values from a web browser, but when the request goes through the UI it hits the LB and updates only one of the memcaches. Is there any better solution to handle this situation other than forcing a reload of all memcaches (maintaining a list of Memcache servers and reloading all of them)?
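The workaround mentioned at the end, keeping a list of memcache servers and refreshing the key on every node, could look roughly like this. `FakeMemcacheClient` is a hypothetical stand-in for a real client library (e.g. pymemcache), used so the sketch stays self-contained:

```python
class FakeMemcacheClient:
    """Stand-in for a real memcached client; hypothetical, illustration only."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

# One client per cache node (M1 and M2 in the question above).
clients = {"M1": FakeMemcacheClient(), "M2": FakeMemcacheClient()}

def set_everywhere(key, value):
    """Write-through the updated config value to every node,
    not just the one node that happens to sit behind the LB."""
    for client in clients.values():
        client.set(key, value)

set_everywhere("config:max_items", 50)
```

With real clients the loop is identical; only the client construction changes. The usual alternative is to make the load balancer irrelevant by hashing keys to a single shared node per key, so a write from either web server lands in the same place.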
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663637.20/warc/CC-MAIN-20140930004103-00422-ip-10-234-18-248.ec2.internal.warc.gz
CC-MAIN-2014-41
435
2
https://til.brie.dev/gunicorn
code
Neat! You found every TIL snippet that I have written about gunicorn. I like using 💚🦄 gunicorn to serve Flask apps, like httpcat.us. The Web site will be reloaded when app.py is modified with --reload-extra-file. In the end, it looks something like: …
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00826.warc.gz
CC-MAIN-2023-50
259
4
http://community.sitepoint.com/t/asp-net-users-being-kicked-out-randomly/7231
code
We are getting a problem where users are being kicked out of the form and navigated to the Login page even if the session has not timed out yet. Very frustrating to the users... It doesn't happen all the time and not to all users. It looks like the Authentication Ticket is somehow invalid, but only intermittently. Is this a common problem with ASP.NET forms Authentication??? Can anybody help resolve this problem?

Actually, the ticket may in fact be expiring. Have you actually tracked one to see if it is being renewed correctly? Here's a link to an example of how to get at the ticket, test for sliding expirations, and renew it.

You should check both the authentication token and the session; if either is not set properly, log out. Depending on your settings, your session could be alive a lot longer than the authentication ticket, or vice versa. You could consider using sliding expiration for auth tokens along with DB session storage if you want to give the user a long period of active time.

What do you mean by authentication token? Some users complained that they are kicked out even when active for less than 5 mins... Anyway, here's what's in the web.config:

<forms loginUrl="logon.aspx" protection="All" name="authCookie" timeout="60" path="/">

I will try to add slidingExpiration="true" and see if we still get some complaints, although we have implemented keepalive in the basepage:

<iframe id="frmKeepAlive" width="1px" height="1px" frameborder="0" src="//xxxxx.net/xxxxx/keepalive.htm">

where the keepalive.htm reloads every 5 mins, so even before the session expires the server knows that the user is still active. I will also change the timeout to 60 and see if this makes any difference.

<sessionState mode="InProc" stateConnectionString="tcpip=127.0.0.1:42424" sqlConnectionString="data source=127.0.0.1;Trusted_Connection=yes" cookieless="false" timeout="20"/>

Thank you guys for all your replies...

Check the eventlog to see if your app recycles for some reason.
If there's a serious resource leak, IIS may recycle the app pool to release memory. IIRC it is by default set to recycle if IIS uses more than 60% of RAM.

Yes, the eventlog doesn't show any recycling of IIS. Otherwise all of them would be kicked out; only some users are experiencing this... and some of them right after logging in. Is there any known issue of Anti-Virus on the client side corrupting the Auth Ticket???

I am "almost" positive that <authentication><forms timeout="value"> is in seconds, but I could be wrong. I usually use 3600 for one hour.

This makes sense to me... renewing the Authorization Ticket... I will give this a rip!

if (authTicket != null && !authTicket.Expired)
{
    FormsAuthenticationTicket newAuthTicket = FormsAuthentication.RenewTicketIfOld(authTicket);
    string userData = newAuthTicket.UserData;
    string[] roles = userData.Split(',');
    HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(new FormsIdentity(newAuthTicket), roles);
}

Yes, you're wrong, it is in Minutes: FormsAuthenticationConfiguration.Timeout Property (System.Web.Configuration)

Are you sure about that? Because the behavior you're experiencing sounds to me like the application recycles! Do you have a machine key in your web.config? If not, you really should create one: Online tool to create keys for view state validation and encryption

The machine key is used to encrypt/decrypt the authentication tickets. When no machine key is specified, ASP.NET will generate one. But when the application recycles, ASP.NET will generate a new one, resulting in the behavior you're describing. Because the existing tickets are encrypted using the previous key, with the new key they cannot be decrypted anymore, so ASP.NET will force you to log in again. Specifying a machine key will solve this.

There are two things you can do in order to resolve this issue (well, only if your forms authentication and other properties are set correctly):

- Create a Machine Key in your web.config.
- Change the App Pool Process Idle Time-out to a higher limit. By default it's 20 minutes: when the process stays idle for more than 20 minutes, IIS kills the worker process and regenerates the machine key, while the existing cookie on the client machine is still encrypted with the older machine key. As it can't be decrypted using the new machine key, the user will be sent to the login page to re-enter their credentials, which creates a new persistent cookie.
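A toy model of why an auto-generated machine key forces re-logins after a recycle: the cookie encrypted under the old key decrypts to garbage under the regenerated key. This is illustration-only XOR "crypto", not what ASP.NET actually uses:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from the key (illustration only, not real crypto)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is symmetric

ticket = b"user=alice;roles=admin"
old_key = b"machine-key-before-recycle"   # the auto-generated key
new_key = b"machine-key-after-recycle"    # regenerated on app recycle

cookie = encrypt(old_key, ticket)
print(decrypt(old_key, cookie) == ticket)  # same key: ticket still valid
print(decrypt(new_key, cookie) == ticket)  # new key: garbage -> forced re-login
```

Pinning a fixed machine key in web.config is exactly the step that keeps `old_key` and `new_key` identical across recycles.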
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463485.78/warc/CC-MAIN-20150226074103-00101-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
4,372
37
http://www.supportforum.philips.com/en/archive/index.php/t-2982.html
code
View Full Version : Wifi and (no broadcast) SSID on 42PDL7906H/12
10-24-2011, 10:08 PM
The set only connects to my router's WIFI (Thomson) if I enable Broadcast SSID (broadcast is disabled for security reasons). Any idea how to fix that? If I try to enter the SSID manually on the TV, the TV doesn't detect the wifi router.
10-25-2011, 06:43 AM
A little bit off topic, but hiding the SSID is not a security feature anymore. You can easily find any hidden wifi network nowadays...
05-16-2012, 11:13 PM
I have a picture problem, and I want to remove the data that allowed the TV to connect to my network (to see if the picture problem remains), but I can only clean the application memory; the SSID and password remain. How can I clean this data? Or how do I disable the internet connection? The TV keeps trying to access my PC (I do not use any media player).
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00590-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
903
9
http://dailyacquisition.blogspot.com/2012/08/the-acquisition-process-flow.html
code
The theme of this blog relates to my thought process for getting interested in stuff. I thought that a self-explanatory flow chart would make sense. When I preview this post, however, the image of the flow chart is blurry. It might be readable if you click on it; I'm not sure. So this might be a failed experiment. Even though I said the flow chart is self-explanatory, let me explain. The process begins with a seed of curiosity, which may come from a friend, a news article, a real world experience, a TV commercial, another blog, an Internet forum, etc. For instance, my interest in Japanese denim jeans came from a post I read on Head-Fi of all places. The seed germinates into a round of preliminary research that determines whether or not I remain at least somewhat interested in the topic. If not, I'll just spend more time on Head-Fi, no worries. If so, the initial research becomes a whole lotta research, which typically makes me want to acquire one or more items (pens, ridiculously spendy jeans, headphones, whatever). A normal person would just pull the trigger and buy something, especially if it's inexpensive like, say, a PEN that costs a buck fifty. Not me. I tend to get stuck in a loop of analysis paralysis during which additional research is performed before ultimately making a purchase. My theory is that informed consuming is the only way to go to prevent buyer's remorse and to ensure that you are getting the best possible product for whatever your budget might be. This sounds great in theory. In practice, however, it can become difficult for me to follow the "No" branch of the analysis paralysis decision box. Because I'm cheap. And because I know that I spend too much money on things that most people find stupid, compulsive, and/or excessive. So there you have it. This is my process. Update (about two hours after making this post): I am currently in the "Perform In-Depth Research" step regarding shaved ice machines. LOL, there goes my morning.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867841.63/warc/CC-MAIN-20180526170654-20180526190654-00587.warc.gz
CC-MAIN-2018-22
1,981
5
https://rog-forum.asus.com/t5/other-motherboards/maximus-v-extreme-bluetooth-fails-to-start/m-p/295805
code
So this is on a fresh install of Windows 7 64-bit, with an Intel Z77 chipset. Last night the Bluetooth was working fine and I never had any issues... This morning I load up my machine and there is a pop-up saying that one of my USB devices has failed. Checking the Device Manager, I see that the Bluetooth adapter is missing and there is an Unknown Device listed under the USB controllers. No matter what I do with the device, I can't get it to take the drivers for the adapter. Does anyone have some advice on how to get this fixed? Please let me know if there is anything else that you need to help me fix this.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.31/warc/CC-MAIN-20231206063543-20231206093543-00207.warc.gz
CC-MAIN-2023-50
616
4
https://niharzanwar.medium.com/?source=post_internal_links---------3----------------------------
code
The company for which this software project was undertaken provided 24/7 CCTV health and security monitoring services to around 500 locations across India, each having 8-10 cameras. Health monitoring for CCTV systems means checking whether or not cameras are in working condition, which can be done in 2 ways: one is to look at them 24/7 across 500 locations with 10 cameras each; the other is to capture an image of what each camera is currently viewing every N minutes. We obviously chose the second option.

This is my first Medium post ever, so please excuse me if I make any mistakes. I will try to put up steps and code snippets after testing, so that you don't have to invest time in debugging them. This post will explain how you can monitor your MongoDB instance and generate really cool graphs using a drag-and-drop interface.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00497.warc.gz
CC-MAIN-2021-39
802
4
https://cobaltapps.com/community/index.php?threads/create-stand-alone-child-themes.81/
code
The Cobalt Apps Community Forum is for community interaction and assistance, but is not our official Cobalt Apps Product Support solution. If you need Cobalt Apps Product Support please contact our support team through the contact form found at the bottom of your "My Account" page. Look at the source code to "functions.php" and next to a number of lines is a little "arrow" that allows you to fold up comments, functions, and various other blocks of code. This is disabled in Instant IDE. The Ace Editor is the Themer Pro code editor, but in Instant IDE Ace is an optional editor, but not the default. Monaco Editor is the default, which of course has different features. So just switch to the Ace Editor in the Settings pop-up in Instant IDE and see if that doesn't provide the features you're finding in Themer Pro.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249569386.95/warc/CC-MAIN-20190224003630-20190224025630-00551.warc.gz
CC-MAIN-2019-09
819
3
https://man7.org/linux/man-pages/man8/choke.8.html
code
NAME | SYNOPSIS | DESCRIPTION | ALGORITHM | PARAMETERS | SOURCE | SEE ALSO | AUTHOR | COLOPHON

TC(8)                          Linux                          TC(8)

NAME
       choke - choose and keep scheduler

SYNOPSIS
       tc qdisc ... choke limit packets min packets max packets
       avpkt bytes burst packets [ ecn ] [ bandwidth rate ]
       probability chance

DESCRIPTION
       CHOKe (CHOose and Keep for responsive flows, CHOose and Kill
       for unresponsive flows) is a classless qdisc designed to both
       identify and penalize flows that monopolize the queue. CHOKe
       is a variation of RED, and the configuration is similar to RED.

ALGORITHM
       Once the queue hits a certain average length, a random packet
       is drawn from the queue. If both the to-be-queued and the
       drawn packet belong to the same flow, both packets are
       dropped. Otherwise, if the queue length is still below the
       maximum length, the new packet has a configurable chance of
       being marked (which may mean dropped). If the queue length
       exceeds max, the new packet will always be marked (or
       dropped). If the queue length exceeds limit, the new packet is
       always dropped. The marking probability computation is the
       same as used by the RED qdisc.

PARAMETERS
       The parameters are the same as for RED, except that RED uses
       bytes whereas choke counts packets. See tc-red(8) for a
       description.

SOURCE
       o R. Pan, B. Prabhakar, and K. Psounis, "CHOKe, A Stateless
         Active Queue Management Scheme for Approximating Fair
         Bandwidth Allocation", IEEE INFOCOM, 2000.
       o A. Tang, J. Wang, S. Low, "Understanding CHOKe: Throughput
         and Spatial Characteristics", IEEE/ACM Transactions on
         Networking, 2004.

AUTHOR
       sched_choke was contributed by Stephen Hemminger.

COLOPHON
       This page is part of the iproute2 (utilities for controlling
       TCP/IP networking and traffic) project. Information about the
       project can be found at
       ⟨http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2⟩.
       If you have a bug report for this manual page, send it to
       [email protected], [email protected].
This page was obtained from the project's upstream Git repository ⟨https://git.kernel.org/pub/scm/network/iproute2/iproute2.git⟩ on 2022-12-17. (At that time, the date of the most recent commit that was found in the repository was 2022-12-14.) If you discover any rendering problems in this HTML version of the page, or you believe there is a better or more up-to-date source for the page, or you have corrections or improvements to the information in this COLOPHON (which is not part of the original manual page), send a mail to [email protected] iproute2 August 2011 TC(8) Pages that refer to this page: tc(8), tc-red(8)
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644683.18/warc/CC-MAIN-20230529042138-20230529072138-00375.warc.gz
CC-MAIN-2023-23
2,530
11
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-scale-with-machine-learning-functions
code
Scale your Stream Analytics job with Azure Machine Learning functions

It's straightforward to set up a Stream Analytics job and run some sample data through it. But what should we do when we need to run the same job with a higher data volume? That requires us to understand how to configure the Stream Analytics job so that it scales. In this document, we focus on the special aspects of scaling Stream Analytics jobs with Machine Learning functions. For information on how to scale Stream Analytics jobs in general, see the article Scaling jobs.

What is an Azure Machine Learning function in Stream Analytics?

A Machine Learning function in Stream Analytics can be used like a regular function call in the Stream Analytics query language. Behind the scenes, however, the function calls are actually Azure Machine Learning Web Service requests. Machine Learning web services support "batching" multiple rows, called a mini-batch, in the same web service API call to improve overall throughput. For more information, see Azure Machine Learning Web Services.

Configure a Stream Analytics job with Machine Learning functions

When configuring a Machine Learning function for a Stream Analytics job, there are two parameters to consider: the batch size of the Machine Learning function calls, and the streaming units (SUs) provisioned for the Stream Analytics job. To determine the appropriate values for SUs, a decision must first be made between latency and throughput, that is, latency of the Stream Analytics job and throughput of each SU. SUs may always be added to a job to increase the throughput of a well-partitioned Stream Analytics query, although additional SUs increase the cost of running the job. Therefore it is important to determine the tolerance for latency in running a Stream Analytics job. Additional latency from running Azure Machine Learning service requests will naturally increase with batch size, which compounds the latency of the Stream Analytics job.
On the other hand, increasing the batch size allows the Stream Analytics job to process more events with the same number of Machine Learning web service requests. The increase in Machine Learning web service latency is often sublinear in the batch size, so it is important to consider the most cost-efficient batch size for a Machine Learning web service in any given situation. The default batch size for the web service requests is 1,000 and can be modified either through the Stream Analytics REST API or the PowerShell client for Stream Analytics. Once a batch size has been determined, the number of streaming units (SUs) can be chosen based on the number of events the function needs to process per second. For more information about streaming units, see Stream Analytics scale jobs. In general, there are 20 concurrent connections to the Machine Learning web service for every 6 SUs, except that 1-SU and 3-SU jobs also get 20 concurrent connections. For example, if the input data rate is 200,000 events per second and the batch size is left at the default of 1,000, the resulting web service latency with a 1,000-event mini-batch is 200 ms. This means every connection can make five requests to the Machine Learning web service per second. With 20 connections, the Stream Analytics job can process 20,000 events in 200 ms, and therefore 100,000 events per second. So to process 200,000 events per second, the Stream Analytics job needs 40 concurrent connections, which comes out to 12 SUs. The following diagram illustrates the requests from the Stream Analytics job to the Machine Learning web service endpoint: every 6 SUs allows a maximum of 20 concurrent connections to the Machine Learning web service.
In general, with B for the batch size and L for the web service latency at batch size B in milliseconds, the throughput of a Stream Analytics job with N SUs is:

Throughput = B × (1000 / L) × 20 × (N / 6) events per second

(20 concurrent connections per 6 SUs, each connection completing 1000/L requests per second, each request carrying B events.) An additional consideration is the 'max concurrent calls' setting on the Machine Learning web service side; it is recommended to set this to the maximum value (currently 200). For more information on this setting, review the Scaling article for Machine Learning Web Services.

Example – Sentiment Analysis

The following example includes a Stream Analytics job with the sentiment analysis Machine Learning function, as described in the Stream Analytics Machine Learning integration tutorial. The query is a simple fully partitioned query followed by the sentiment function:

WITH subquery AS (
    SELECT text, sentiment(text) AS result
    FROM input
)
SELECT text, result.[Score]
INTO output
FROM subquery

Consider the following scenario: with a throughput of 10,000 tweets per second, a Stream Analytics job must be created to perform sentiment analysis of the tweets (events). Using 1 SU, could this Stream Analytics job handle the traffic? With the default batch size of 1,000, the job should be able to keep up with the input. Further, the added Machine Learning function should generate no more than a second of latency, which is the general default latency of the sentiment analysis Machine Learning web service (with a default batch size of 1,000). The Stream Analytics job's overall, end-to-end latency would typically be a few seconds. Take a more detailed look at this Stream Analytics job, especially the Machine Learning function calls. With a batch size of 1,000, a throughput of 10,000 events takes about 10 requests to the web service. Even with one SU, there are enough concurrent connections to accommodate this input traffic. If the input event rate increases 100x, then the Stream Analytics job needs to process 1,000,000 tweets per second.
There are two options to accomplish the increased scale:
- Increase the batch size, or
- Partition the input stream to process the events in parallel

With the first option, the job latency increases. With the second option, more SUs must be provisioned, generating more concurrent Machine Learning web service requests, which means the job cost increases. Assume the latency of the sentiment analysis Machine Learning web service is 200 ms for batches of 1,000 events or fewer, 250 ms for 5,000-event batches, 300 ms for 10,000-event batches, and 500 ms for 25,000-event batches.
- With the first option (not provisioning more SUs), the batch size could be increased to 25,000. This would allow the job to process 1,000,000 events per second with 20 concurrent connections to the Machine Learning web service (at a latency of 500 ms per call). So the additional latency of the Stream Analytics job due to the sentiment function requests against the Machine Learning web service would increase from 200 ms to 500 ms. Note, however, that batch size cannot be increased indefinitely: the Machine Learning web service requires the payload size of a request to be 4 MB or smaller, and web service requests time out after 100 seconds of operation.
- With the second option, the batch size is left at 1,000. With 200-ms web service latency, every 20 concurrent connections to the web service can process 1,000 × 20 × 5 = 100,000 events per second. So to process 1,000,000 events per second, the job would need 60 SUs. Compared to the first option, the Stream Analytics job would make more web service batch requests, in turn generating an increased cost.

Below is a table of the Stream Analytics job's throughput (in events per second) for different SUs and batch sizes.
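The arithmetic behind the two scaling options can be sketched directly from the worked examples above (20 concurrent connections per 6 SUs, 1000/L requests per connection per second, B events per request). The helper below is an assumed model, not an Azure API:

```python
# Sketch of the scaling arithmetic described above, derived from the
# article's worked examples (an illustrative model, not an Azure API):
#   every 6 SUs -> 20 concurrent connections,
#   each connection completes 1000 / L requests per second (L in ms),
#   each request carries B events.
def throughput_events_per_sec(batch_size, latency_ms, streaming_units):
    connections = 20 * (streaming_units / 6)
    requests_per_conn_per_sec = 1000 / latency_ms
    return batch_size * requests_per_conn_per_sec * connections

# The article's example: B=1000, L=200 ms, 12 SUs -> 200,000 events/sec
rate = throughput_events_per_sec(1000, 200, 12)
```

Both scaling options reproduce the numbers in the text: batch size 25,000 at 500 ms with 6 SUs, and batch size 1,000 at 200 ms with 60 SUs, each yield 1,000,000 events per second.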
|batch size (ML latency)||500 (200 ms)||1,000 (200 ms)||5,000 (250 ms)||10,000 (300 ms)||25,000 (500 ms)|

By now, you should have a good understanding of how Machine Learning functions in Stream Analytics work. You likely also understand that Stream Analytics jobs "pull" data from data sources, and that each "pull" returns a batch of events for the Stream Analytics job to process. How does this pull model impact the Machine Learning web service requests? Normally, the batch size set for Machine Learning functions will not be exactly divisible by the number of events returned by each Stream Analytics "pull". When this occurs, the Machine Learning web service is called with "partial" batches, so as not to incur additional job latency overhead from coalescing events across pulls.

New function-related monitoring metrics

In the Monitor area of a Stream Analytics job, three function-related metrics have been added: FUNCTION REQUESTS, FUNCTION EVENTS and FAILED FUNCTION REQUESTS, as shown in the graphic below. They are defined as follows:
- FUNCTION REQUESTS: The number of function requests.
- FUNCTION EVENTS: The number of events in the function requests.
- FAILED FUNCTION REQUESTS: The number of failed function requests.

To summarize the main points, in order to scale a Stream Analytics job with Machine Learning functions, the following items must be considered:
- The input event rate
- The tolerated latency for the running Stream Analytics job (and thus the batch size of the Machine Learning web service requests)
- The provisioned Stream Analytics SUs and the number of Machine Learning web service requests (the additional function-related costs)

A fully partitioned Stream Analytics query was used as an example. If a more complex query is needed, the Azure Stream Analytics forum is a great resource for getting additional help from the Stream Analytics team. To learn more about Stream Analytics, see:
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00154.warc.gz
CC-MAIN-2019-22
9,407
42
http://www.lulus.com/products/soda-resist-black-cutout-lace-up-platform-wedges/77306.html
code
Only Until Free Shipping View Your Bag Your Bag Is Empty. select a size and enter your email below to be notified when this product comes back in stock! Tag your photos on Instagram or Twitter for a chance to WIN $100! Fairy Tale Ending Silver High Heel Sandals Tall Tales Whiskey Brown Over the Knee Boots Report Areva Black Velvet Slip-On Sneakers Suede Away Taupe Suede Platform High Heel Sandals Check out our Blog > Sign up for our emails and be the first to know about special offers, sales, the latest arrivals and much more!
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929205.63/warc/CC-MAIN-20150521113209-00050-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
532
12
https://community.spiceworks.com/topic/31275-sbs-2-nic-network-issues
code
I am basically posting here to get some input, ideas, and to brainstorm. Here is the scenario: there is an SBS server with two NICs. It's not doing ICS, but one of them was taking care of DNS and DHCP. Every time there is a major file transfer on the network (meaning copying files to/from the server), the network CRAWLS and the server slows to a halt. The guy who set this up (who is NOT an IT guy) screwed a few things up, so I have fixed the DNS and WINS settings that were propagating wrong settings to the clients (it was giving both IPs of the NICs as DNS servers), and I think the network was getting confused. I am waiting to see how the network will react and if it will improve. Anything else that could be causing this?
- bad NIC (doesn't seem to be the case, but I have yet to replace it)
- bad networking component (switch, cables)
- WINS/DNS settings not set up properly (I have fixed these, I think)
So let the ideas flow... thanks again all...
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00464.warc.gz
CC-MAIN-2017-34
951
7
http://proceedings.mlr.press/v123/kanervisto20a.html
code
Playing Minecraft with Behavioural Cloning Proceedings of the NeurIPS 2019 Competition and Demonstration Track, PMLR 123:56-66, 2020. The MineRL 2019 competition challenged participants to train sample-efficient agents to play Minecraft, using a dataset of human gameplay and a limited number of steps in the environment. We approached this task with behavioural cloning, predicting what actions human players would take, and reached fifth place in the final ranking. Despite being a simple algorithm, we observed that the performance of such an approach can vary significantly depending on when the training is stopped. In this paper, we detail our submission to the competition, run further experiments to study how performance varied over training, and study how different engineering decisions affected these results.
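Behavioural cloning, as the abstract describes, reduces to supervised learning on (state, action) pairs from human play. A minimal tabular sketch of the idea is below; the states, actions, and majority-vote policy are invented for illustration (the actual submission used a learned model over Minecraft observations):

```python
from collections import Counter, defaultdict

# Toy tabular behavioural cloning: for each observed state, imitate the
# action the human demonstrator chose most often. Purely illustrative;
# the paper's agent predicts actions with a trained model, not a table.
def clone_policy(demonstrations):
    """demonstrations: iterable of (state, action) pairs from human play."""
    counts = defaultdict(Counter)
    for state, action in demonstrations:
        counts[state][action] += 1
    # The cloned policy replays the modal human action per state.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical demonstration data
demos = [("low_health", "retreat"), ("low_health", "retreat"),
         ("low_health", "attack"), ("tree_ahead", "chop"),
         ("tree_ahead", "chop")]
policy = clone_policy(demos)
# policy["low_health"] -> "retreat", policy["tree_ahead"] -> "chop"
```

Even this toy version shows why the approach is sample-efficient to train but sensitive to the demonstration data: the policy can only be as good as what the humans did in the states they visited.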
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00600.warc.gz
CC-MAIN-2023-06
808
3
http://bluebelleandco.com/ajax/index/options/product_id/1637/
code
Details: Ridiculously cool origami Triceratops Clutch Bag. Made from turquoise PVC with gold printed outline, and bright pink cotton lining, including two card slots and a small zip pocket. Fits a phone, cards and your favourite lipstick! Comes with detachable cross-body and hand straps. Dimensions: 38 x 20 x 1 By House of Disaster
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00095.warc.gz
CC-MAIN-2020-05
332
3
http://orages.gogan.org/?bo_page=archive&bo_show=maps&bo_lang=sv
code
Here you can display the lightning strikes for each day on different maps. It is also possible to view animated maps, but note that they will take some time to load. Lightning data is available from 2024-02-27 to 2024-02-28. There is no guarantee of completeness. Only data from 2024-02-27 to 2024-02-28 is available!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00737.warc.gz
CC-MAIN-2024-10
330
4
http://sourceforge.net/p/lxr/discussion/86145/thread/24575002/
code
its beautiful. its "only" taken me a week and a half. linux documentation is a joke. until the linux community changes their perspective on documentation, linux will stay a developers os. but it seems that we developers want it that way. man, the knowledge to get all those disparate tools and packages working requires more than a superficial knowledge of many different tools. but then the sense of accomplishment afterwards is worth it. wouldn't want to take that away from others, but i made damn sure i documented for myself so that i don't ever spend another week and a half to get lxr up and running.
- the documentation that comes with the lxr download is correct, but missing some crucial configurations
- if you're using mod_perl with apache, make sure mod_perl is working; test it out with a simple mod_perl script
- every word in the INSTALL doc should be read literally
- every word in the INSTALL may mean more than what you're thinking
- its not a puzzle
- you may need to use other installation documentation in addition to the
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507447657.38/warc/CC-MAIN-20141017005727-00033-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
1,034
19
https://forge.typo3.org/issues/81718?tab=notes
code
Workspace - TYPO3 MM relations are defective

Using a workspace, create a new system category.
- In the current workspace, add categories to a content element:
  - one already existing category (from the live workspace)
  - and the newly created category above.
- After saving the change, only the already existing category is added; the new category is not.

The implementation that maps submitted relation uids to the corresponding version uids in workspace context doesn't work (see typo3/sysext/core/Classes/DataHandling/DataHandler.php) and it must be removed.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816045.47/warc/CC-MAIN-20240412163227-20240412193227-00847.warc.gz
CC-MAIN-2024-18
535
7
https://www.hipforums.com/forum/threads/my-dvd-player-crapped-out.57580/
code
I can't figure out how to fix the damn thing. I was hoping that maybe someone here could give me a few tips. All right, here's the story. I was playing a DVD, and for the first forty minutes or so, everything was fine. Then, all of a sudden, the player stopped reading the disc, and I could hear strange clicking noises coming from the player. I couldn't pause, fast-forward, or stop the disc. So, I had to turn off the machine. I turned the machine back on, ejected the disc, checked it, but it was perfect. There are no marks on it whatsoever. It's brand new. So, I cleaned the disc off anyway, put the disc back in, and tried to play it... it wouldn't do it. It just sat there and made those strange clicking noises while it tried to read the disc. I took the disc out, thinking that maybe the disc was just messed up. So, I grabbed another DVD and put it in. I got it to play, and I selected a random scene to view. Well, after maybe five minutes, it happened again. Feeling a bit frustrated, I took one of those cleaning CDs with the little brushes on it and put it in the player. It couldn't read that either. Then, somehow, I got that to work and let it clean the lens. Then, after the cleaning, it started screwing up again. So I finally grabbed the user manual. I checked the help/troubleshooting section, and I couldn't find any information on this kind of problem. So, I have no idea what I should do now. The thing is, I've only had the thing for three years, it's not a cheap model, I've always taken very good care of it, and I rarely use it. This shouldn't happen. There is no way this thing should be broken. So, there has to be some way to get it working again. Does anyone have any suggestions? Have you ever had a similar problem?
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00486.warc.gz
CC-MAIN-2022-49
1,750
1
http://gamedev.stackexchange.com/questions/tagged/levels+tiles
code
Level Creating Help

I am making a little 2d overhead RPG type game just for fun. I have almost all the basic stuff set up, but I just need a little help on level creation. I can already make a level and place each tile ...
Jun 23 '11 at 20:43
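A common answer to the level-creation question above is to describe each level as a text grid and map characters to tiles. The legend, characters, and level layout below are arbitrary examples, not anything from the original question:

```python
# Minimal sketch of text-grid level loading for a 2D overhead tile game
# (the tile characters and legend here are invented examples).
LEGEND = {"#": "wall", ".": "grass", "~": "water"}

LEVEL = [
    "#####",
    "#..~#",
    "#####",
]

def load_level(rows):
    """Turn a list of row strings into a dict of (x, y) -> tile name."""
    tiles = {}
    for y, row in enumerate(rows):
        for x, ch in enumerate(row):
            tiles[(x, y)] = LEGEND[ch]
    return tiles

tiles = load_level(LEVEL)
# tiles[(1, 1)] -> "grass", tiles[(3, 1)] -> "water"
```

Keeping levels as plain text makes them easy to edit by hand and to version-control; the renderer then only needs to draw the sprite for each tile name at its grid position.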
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00189-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,303
53
https://www.softpile.com/xstandard-xhtml-wysiwyg-editor/
code
XStandard is the leading standards-compliant plug-in WYSIWYG editor for desktop applications and browser-based content management systems (IE / Firefox / Safari / Opera). Operating System: Windows

Version 2.0: Improvements in XStandard version 2.0 include support for OS X, a keyboard-accessible interface, find / replace, and support for authoring definition lists.

Version 1.7.1: Improvements in XStandard version 1.7 include support for content locking, and markers used to flag elements of content with informative text messages. XStandard also supports subdocuments, allowing authors to insert chunks of reusable content stored elsewhere in the CMS. Most CSS 2.1 selectors are now supported, plus more keyboard shortcuts and programmatic APIs.

Version 1.6.2: Improvements in XStandard version 1.6 include ASP.NET Web Services for shared hosting environments, search capability for image / attachment libraries, uploading of files to multiple libraries and sub-folders, support for PNG, and new language versions (Swedish, Finnish, Danish and Czech).

Version 1.5: Improvements in XStandard version 1.5 include 5-times faster loading (under half a second on the average PC), caching of customization files (more speed), a unique "heartbeat" (prevents session timeout in your CMS), support for SSL, easier cross-browser integration (supports the type attribute in IE / Firefox), the ability to hide advanced editing features from novice users, copy / paste of images from the clipboard (paste directly from Photoshop, etc.), the ability to customize icons in the Directory service (bridge to 3rd-party content), support for "placeholders" (custom tags that reserve space for dynamic content), and easier image insertion in the Lite version (image dimensions now optional).
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314852.37/warc/CC-MAIN-20190819160107-20190819182107-00087.warc.gz
CC-MAIN-2019-35
1,937
7
https://www.dentalcareinstamford.com/author/admin
code
After all, human motivations for learning to code are consistent across demographics and age groups. Curiosity may motivate people to pursue careers in programming. Perhaps ambitious coders like solving problems and want to pursue a job that allows them to do so all day, every day. Perhaps they simply want a new pastime, and coding looked appealing. Seniors and older individuals don't require any particular motivation to learn to code; they simply require the resolve to do so! However, we understand that learning to code might be scary. We'll go over the fundamentals of how to approach coding if you're older in this post. Every explanation screams insecurity, and who could blame any senior for thinking that way? In popular culture, older people are nearly always characterized as hunt-and-peck typists who need assistance accessing computer systems and applications. We're taught that younger people are the digital natives and that older people can't hope to keep pace with them. Consider how often you've heard the comedy stereotype of older relatives seeking computer help from a technically savvy twenty-something, or grumbling about the problems of digital life. Unfortunately, misconceptions like this are far from harmless. They have slowly disenfranchised and undermined society's most experienced and intelligent residents over the years. Adults can learn to code. Is it Ever Too Late to Learn Programming? Let's get one thing straight: you aren't too old to program. There isn't, and never has been, an age limit for learning to code. Insecurity and uncertainty, however, force many older people to limit their potential for accomplishment.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500719.31/warc/CC-MAIN-20230208060523-20230208090523-00208.warc.gz
CC-MAIN-2023-06
1,759
5
http://support.signature.net/messages/23809.htm
code
Posted by Jeff Cinelli on November 03, 2018 at 12:28:49: I'm still stuck trying to get a colored button on a form. I know you can't change the color, so I'm looking for alternatives. I assigned an image to the button using the COS.LOADPICTURECONTROL command, and it looks great! But when I try to exit my form, Comet crashes. I can use a picture control instead of a button, and again it looks good, but I don't know how to get Comet to act on a click on a picture control. Can anyone give me some direction on a solution for this? I just want a functioning button with an image that can be clicked.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00328.warc.gz
CC-MAIN-2023-14
717
5
http://forums.theregister.co.uk/user/1518/2
code
1188 posts • joined 28 Mar 2007 Re: F*** me! Or, it was two children playing with toys as normal. And why did you "have" to explain this to your four-year-old? One of them knocked over the other's toys... what's to explain exactly other than one of them is a little git interrupting the other's quiet playtime? Get a grip, indeed. Re: "what will a SimBoss make of a SimCV?" And he'll end up hiring Simpletons. Re: They are NOT listening If you get 100,000 MORE "agrees" than "disagrees", you respond to it. Until then, you don't waste your time. The only problem with that is that there would never be another response and the site would get caught up in the next purge of useless services offered by the previous government masquerading as "cost-saving". But then, maybe that's not such a bad thing after all. I remember when I was a kid. Several times a year, someone would approach me in the playground or classroom and ask me to "sign our petition". Sometimes they were quite sensible ("Open both doors at the East entrance at lunchtime so we don't get trampled trying to get in or out") but most of the time they were absolutely crackpot ideas that only made sense when you were a child ("Get Mr Smith sacked because he gave me an unfair detention!" or "More/larger chocolate desserts for the same price at lunchtime", etc.) Even back then, I never bothered. Honestly, it just wasn't worth it. You could have the entire place sign the thing and nothing much would ever happen about it, even if the idea was quite sensible (the doors never were both opened as long as I was at that school, for example - I assume there was a reason for this but never quite found out). Petitions really were the playground democracy and, let's be honest, the government will ignore most petitions just like my school did. Even the sensible ones. It's a gauge for government, that's all.
If the country was ruled by public opinion, people would be hanged before they were tried, some celebrity would be in charge (until they made their first mistake) and road deaths would increase ten-fold after all the changes people wanted (like to be able to drive like nutters on the motorway). All the petitions do is give a sense of "contribution", provide statistics about public opinion, but don't actually change anything. If the Jimmy Savile thing had come out earlier, and every person in the country voted to hang him without trial, it still wouldn't have happened. But they can use the list to look at the most-named ones and garner a lot of votes by giving a pseudo-statement to the effect that they'll look into it, and talk about it in the news (because people obviously want to hear about something being done about it, but obviously don't care about the three bills I slipped through the Parliament back door last week). The largest petition on there, attracting over 200,000 names, was "Convicted London rioters should loose all benefits." Apart from the bad spelling, this suggests that people who were convicted of a crime should have a punishment not assigned by a court, in a rash legal change, for a single incident only (presumably OTHER criminals are okay, but the wording of the law), etc. etc. etc. And what was the response? No, basically. Of course it was. The next ten most popular petitions of all time? No, we have already passed the law you didn't want. No (though we talked about it). We did nothing about this (though we talked about it). No, dropping the petrol taxes will cripple the country. We take your point but we can't stop people coming into the country. And, no, because PSHE classes already teach pupils enough financial acumen to survive in the world (really?). The biggest trending petitions still open at the moment are ALL media-related (West Coast mainline, badger cull, tax at Rangers Football Club, etc.).
That should worry us more than anything - people care more about things that the news outlets place on their front pages than anything practical or sensible. Lots are inherently misguided. And some are just plain crackpot ("Alopecia Areata - Research Needed" has more names to it than "Save Royal Bolton Hospital"). A petition of any volume WILL NOT CHANGE ANYTHING. All the petitions on that site HAVE NOT CHANGED ANYTHING (and if they were successful, I'd argue they could have been without the petition anyway). If you don't want the West Coast Mainline to change, sure, air your view. But the only thing that will actually make any difference is to NOT use the West Coast Mainline if it changes to a company you don't want to support. And that won't even be a government effect, just a purely profit one. The fact is that if it did change, and the government approved it, lots of people would shout for change while still using it every day. You can say "we had no choice", but that just proves how unimportant it is for the government to respond in such cases - they KNOW you have no choice, so there's little point taking your view into account. It's like objecting to planning applications. Sure, you can. It's there. There's a process, and a form, and a guy, and a meeting that has to happen, and all the rest. But unless there's a REALLY good reason that nobody ever thought of and nobody ever checked and nobody's checklist forces them to consider already anyway, the chances are that your objections will be ignored and overruled. Chances are the number of objections upheld is really quite pathetic, and has more to do with things slipping through the net or personal favours rather than anything to do with "listening to the people". A petition is worthless. All the ones people have ever pushed into my face have come to nothing. And an electronic one means even less.
Of all the government petitions I see for the UK, you only have to get to page 4 of 623 at the moment (20 petitions per page) of the closed petitions before everything goes under 10,000 names. Currently open ones? Page 2. That means that just churning through and responding and administering those petitions is actually causing LESS things to get changed overall than if we didn't have that. We've wasted more man-hours petitioning online and responding to petitions than it would have cost just to carry on as we were and do something ourselves. And the government response to almost every petition? No, or doing nothing, at great expense. Seriously, if your MP doesn't do anything when you personally write them a direct, open, well-considered, precise letter, what makes you think that an electronic tick-in-a-box does anything for the way they work? It doesn't. It just gives them an indication as to what the best thing to "cover-up" with is at the moment. What's the solution to actually getting change? I don't know. But a petition is probably the last and worst thing to do. Re: "a quarter that of The Sun" Half the Mirror Group readership or a quarter of The Sun's... that's pretty impressive. I don't dabble in media but that seems very good to me. Maybe there are more people out there with a brain than I initially thought. So the next question is, how long until we see The Reg on the shelves in our local newsagent? :-) "It is crazy that ambulance drivers cannot access a full medical history of someone they are picking up in an emergency" It's crazier to assume that ambulance staff are going to be sitting in the ambulance reading the patient's medical history for anything other than keywords. Keywords that, should they impact on the paramedics' ability, are most likely to be printed on a bracelet about that person in a recognised design to attract the attention of a paramedic.
"Must not be given" There are not thousands of people dying every day because the ambulance has given them something they didn't know the patient couldn't have. And if there are, nothing more than a summary of keywords needs to be stored ANYWHERE, or transmitted to ambulance crew. Thus this is a fabricated problem, which makes me wonder about the true intent. The other part, about GPs etc. having consistent access to medical records - there, I grant you there's a use. But I'm afraid you just dug your own grave by going above and beyond what is quite a simple problem (digitise all medical records) to something that's unnecessary, expensive, needs lots of specialised equipment (a GP I expect to have a PC already, an ambulance doesn't need any more expensive electronic crap put into it), and transmits my personal medical details around the country for no real reason. What you need is a common electronic file format. Not a cloud-based system with poor controls on it. Under the current system, I know that my doctor has my medical records, and can supply them to other vetted people if necessary (at his own risk). If he had a common electronic file format, he could easily supply that information to various places as and when the need arises for my details to transfer (even, say, a one-time transfer to a central location which can pass them out to ambulances should I get run over and be identified as the patient). What ISN'T needed is a way for everyone, everywhere, with an NHS machine to access my records willy-nilly, confuse me with a similarly named / numbered stranger, and to have little to no control over, say, seven thousand people all accessing celebrity X's medical records to see if he really DID have a nose job last week. What you NEED is a common electronic medical file format. When you have that, and you publish it, and software manufacturers can compete to provide the best system to handle those formats, then you may convert my records.
How you distribute those records once converted - that's an ENTIRELY different question, and I'd personally go for a token-checkout style method. Anyone on the NHS can checkout a record (with suitable permission and checks that they are allowed to do so), but only ONE machine/user can checkout my records at a time. Those checkouts are logged and recorded and I can QUERY THEM myself from the Government gateway website at any time (I don't need personal medical details on there, either - I just want to have a list of when my token was checked out and who's currently holding it, and a short history of token changes). If a hospital in Strathclyde reads my details (I haven't been to a doctor in nearly 10+ years except to register, and live nowhere near there), I will want to know WHY, and have people held accountable. And without the token request, you cannot see ANYTHING of my details.

Then an ambulance, or a Casualty department, can have "priority", take the token away from any current holder for my records (suitably logged of course) at any time. And I will KNOW they did that. And they will see what they want. And the common file format thus devised will provide the minimum of access necessary for their job (i.e. a list of important conditions and nothing more, unless they request to probe further but most likely that would be a doctor in the Casualty rather than the paramedic who does that) so they can see if I'm allergic to penicillin but NOT, say, that I recently had a colonoscopy or whatever.

Everything you do above and beyond a simple, secure system like that makes me question why. Usually the answer is simple greed ("I have a friend in the medical software business who needs some work", for instance), but that's indistinguishable from government corruption in the early stages, so you need to do things to reassure me that's NOT your intention. And the best thing you can do?
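(As an aside, the token-checkout scheme above is simple enough to sketch in a few lines. Everything below is invented for illustration - the class name, the action labels, all of it - it's a toy, not any real NHS interface:)

```python
from datetime import datetime, timezone


class RecordTokenRegistry:
    """One checkout token per patient record, with a full audit trail."""

    def __init__(self):
        self._holders = {}  # record_id -> current token holder
        self._audit = []    # (timestamp, record_id, actor, action)

    def _log(self, record_id, actor, action):
        self._audit.append((datetime.now(timezone.utc), record_id, actor, action))

    def checkout(self, record_id, actor):
        """Ordinary checkout: refused if someone else already holds the token."""
        if record_id in self._holders:
            self._log(record_id, actor, "checkout-refused")
            return False
        self._holders[record_id] = actor
        self._log(record_id, actor, "checkout")
        return True

    def priority_checkout(self, record_id, actor):
        """Ambulance / Casualty override: seizes the token from any holder,
        but the seizure itself is logged against the previous holder."""
        previous = self._holders.get(record_id)
        if previous is not None:
            self._log(record_id, previous, "token-seized")
        self._holders[record_id] = actor
        self._log(record_id, actor, "priority-checkout")
        return previous

    def release(self, record_id, actor):
        if self._holders.get(record_id) == actor:
            del self._holders[record_id]
            self._log(record_id, actor, "release")

    def history(self, record_id):
        """What the patient would see on the gateway website: who, when, what."""
        return [(t, a, act) for (t, r, a, act) in self._audit if r == record_id]
```

The point of the sketch is the invariants, not the code: one holder at a time, seizure is always possible but always logged, and the patient can read the whole history.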
Not pilot another humongously expensive NHS IT scheme (which now have a reputation for complete and utter failure worse than anything else in IT), but a small, simple change that will make all such future schemes easier, cheaper, more practical, and still compatible with what you've done. Gimme a common file format. Then we can talk about digitising records. Then we can talk about centralising records. Then you can give me a token system that prevents abuse. Each step is a few years' work at absolute worst, do-able within a reasonable budget, and helps the next steps take place.

Until then, you keep that brown envelope that my local doctor still holds and has about three slips of paper in it describing an injury to my eye at birth and - well, that's about it. I have nothing to hide in my medical records, but the WAY you want to use them doesn't give me any confidence at all in the presence of simple ideas that would work much better and that you actually stand a chance of implementing successfully inside a single term of leadership.

I should be able to call someone a swearword. Everyone from Dickens to Shakespeare has done it, and it's not in any way affecting a normal person's life. We really are wasting people's time here by trying to regulate that. Also, the only logical conclusion would be that films and TV shows would have to ban almost all swearing - if the act is illegal itself, then depicting someone getting away with that act might well end up being regulated by the same rules, whether by word or law, or fear of prosecution, and we'll wind up in the same situation as smoking on TV has experienced. I can probably name 10 famous characters from movies who were never depicted without a cigarette or cigar, but try to do it with modern ones. They've gone. Sure, you can still see cigarettes but the law had an impact on silly things like movies too. (Side-note: I'm a non-smoker and always have been).
I can think of a myriad variations that are "threatening", "abusive", or "grossly offensive", but that's not the sort of thing I mean, so the law is getting closer to a common sense rebound. "Insulting", however - why should that be a crime? If you're an idiot, I can say you're an idiot. It's insulting, sure, but it's hardly devastating to your life unless I do it in an "abusive" manner or I "threaten" you - both of which are covered. As people are wont to point out, personally I find religion offensive and insulting, especially if they tell me I will burn in hell, or that I'm not "one of God's children" or whatever fancy phrase they want to use to separate me from an ordinary person. That's insulting in the same manner. And though I'd quite like to shut them up, I don't think this law (which would have eventually permitted me to do just that) is sensible or reasonable or can be enforced fairly while it contains the word "insulting". Insults happen, thousands, even millions of times a day. There is no clear line of justification in the word "insulting" that you can use that separates incidents that are harmless, and those that are not. The definition is just not clear enough. And I don't see why you can't call someone the same things in person as you do online with the laws as proposed. If something is "grossly offensive", then it overlaps and will be covered in the same definitions as "abusive" or "threatening" in some manner - the only difference is that online publication allows posts that are not just verbal but visual too, and thus "grossly offensive" covers things that include obscenity of a non-verbal nature too, which I think it needs to. If you're insulted by something I've said to you, maybe you should either ignore those people, or fight your corner (verbally speaking). 
I find people who are "insulted" but can't be mature enough to ignore childish ramblings, or provide their own justification for someone not doing that to be the "babysat" adult of the worst kind. If my opinion matters to you, and you're insulted by me, maybe you're doing something very wrong and should look at what you did to cause it. If my opinion doesn't matter to you, then you won't be insulted by anything I say. The same is NOT true if you substitute "insulted" for "threatened" or "abused" (however, it does work for "offended", hence why "grossly" has been added to the definition to push it into the realm of extremes, not the everyday).

This seems a sensible step, and the fact that someone in government has GONE BACK and CHANGED SOMETHING quite publicly means they recognise that. Maybe now we can spend less money on enforcing the ridiculousness that the police and prosecution services should have just said "we're not able to enforce that well enough" in the first place and never tried to (they have done just that for several other laws in the past).

Soot particles in the air cause a lot of health problems - article about Beijing's air quality on this very site today. But what's more important is that ordinary volcano activity basically wipes out all of man's contributions for a year in terms of soot. Short of putting a hat on every volcano in the world, we aren't going to be able to stop the largest sources of it.

As with everything "global warming"-wise, we can't stop it all, natural processes have been beating us in terms of pollution on almost all fronts for millions of years (possible exception of mercury, etc. but the global warming stuff, certainly), and yet still nobody actually proposes solutions. Let's assume the soot in the air is THE MOST IMPORTANT factor. How do we get rid of it? Stop burning wood. Stop using diesel. Fine.
Let's assume (somehow) we make them both illegal and nobody ever burns a piece of wood in open air again, across the entire planet, and we combat all the natural sources of such. Now we save "half a degree" (per year? per decade? for ever? The article isn't clear). Now, what do we do instead? We now have entire fleets of vehicles out of action. The alternatives are petrol and electrics (which are nowhere near viable on that kind of scale - i.e. replacing diesel - and bring their own problems of supply sources and pollution). We can't use wood-burning stoves anywhere so we have to buy more gas, or more electricity, or more paraffin or SOMETHING to make up for it.

So even if they are right, even if we implement a perfect solution, even if we claw back that half-a-degree that "buys us ten years" (Until what? Death? On what? A century? A millennium?), we have no way to replace the things we were doing that we had to stop doing. People are out of work, transport systems near collapse, we're burning more of other things that we're also told not to burn, etc. I've taken it to extremes using perfect (and unachievable) assumptions, but the same happens on any scale you try (e.g. say we find a product that "collects" soot from the air on an industrial scale that can be fitted to anything - even a wood fire - how much is it going to cost, what is it going to be made of, how many will we need, how will we get them to everyone we need to use them, etc. etc. etc.)

It's the usual "global warming" problem: I believe you, in general. Let's assume I believe you 100% and that your science is absolutely perfect (unlikely, but let's just assume). Now what do we DO about it and, MOST IMPORTANTLY, what does that fix cost us? Because if it costs us more than it saves us, we might as well just carry on doing what we're doing. Let's say we will eliminate soot, or CO2 or whatever we think is causing the problem: What's the knock-on effect of our fix, or reduced levels of those things (i.e.
are we likely to trigger some natural process or even affect plantlife and wildlife because of a rapid change in the other direction?), and just what do we have to "break" elsewhere in order to "fix" this part? Robbing Peter to pay Paul comes to mind, and the situation comes up in ALL of these discussions but is never mentioned. Let's assume we all stop burning any oil-based fuel tomorrow and go with the best alternative. Just what does that mean, not just for us, but for the switchover, for the long-term transition, for the costs of transitioning, for people caught up in that change, etc. If it's not PROBABLY less (and you can't say that without looking as deeply into it as you do the problems of global warming) than what we imagine to happen under global warming, then it's actually more sensible to DO NOTHING. We're humans, we have a brain. When we change things it's often got side-effects that we didn't bother to think of and that can be worse than the original problem was (e.g. cane toads in Australia). And nobody is really looking at that.

Re: What's the problem?

Probably anyone involved in network security or data protection or even software licensing.

- He fedexed a two-factor authentication token to an unknown Chinese person to use.
- He provided them with VPN access into the internal company network.
- They were writing software (which should now, by rights, all be audited), which was deployed into the company network and nobody now really knows for sure what it did historically or what it does today.
- At any point, those Chinese programmers might have been culling other companies' proprietary code to use for that job (illegal!), or similarly taking the company's code and selling it on to Chinese companies etc.

The man is a genius. But he's a genius that broke several contracts and (quite likely) a few laws in doing what he did.
The company might choose not to do anything about it, depending on the work they did and the data they processed, but it's not as clear cut as "good luck to him". A lot of people will now have to do a lot of work auditing code and explaining themselves to data protection agencies. Basically all the work he did will now have to be undone at great expense, unless the company is really willing to turn a blind eye to it (which may be illegal too!). It's like finding out that there's been a guy coming into your office, because he always came in with a certain employee, and logging onto the corporate network for years and now people find out that NOBODY has any idea who he is or what he was doing and that he was nothing to do with the company. It's serious stuff.

PlusNet used to be amazing. I had them for decades and they were fabulous and the ultimate test "knowledge of the first guy to answer the phone" was passed flawlessly (changed my ADSL interleaving settings to alleviate latency in interactive connections within, about, 1 minute). Hell, they even took over the company hosting my domain names, and I'd again looked long and hard for a good company there and ended up with a fabulous one that I was happy for PlusNet to take over because they were similarly fabulous.

Then they got taken over by BT. Since then it's been downhill I think. My brother has been fighting for three months with the domain-name host that is now owned by them because all of a sudden tons of things just stopped working properly, after literally 15 years of perfect operation. The ADSL side drops in rating every time I read an ADSLGuide review. And the technical side is now abysmal if people I've recommended to them are telling me the truth (and I have no reason to doubt them). Now they've "run out" of IPv4 addresses (telling me that BT don't have enough to go around? Honestly?), but can't be bothered to run a proper IPv6 trial.
How about "If you let us issue you with only IPv6 addresses, we'll give you 50% off?" - an INCENTIVE to the technically literate on both fronts, and a way to free up IPv4 addresses for the technically-illiterate who have no idea what that service is or what it means to sign up for it. And last time I recommended someone, they were told you couldn't sign up over the phone, and given that the person in question had no Internet, they just used someone else.

No, basically, BT have killed PlusNet. Hell, I had more IPv6 connectivity through PlusNet several years ago than they even offer today. It's ridiculous. I wouldn't sign up for it. I'd actually take it as a sign to move on to another provider. On a pseudo-related note, my external server host (not PlusNet related) is still offering five IPv4 IPs (no reason or signing things required) with every virtual server they sell, from £9.99 a month. Can't be that much of a shortage of them. Hell, if it came to it, I'd rather pay the £9.99 extra and VPN all my stuff through a real external IP. But really, the fix here is to offer IPv6 instead. But no, they don't even publish AAAA records for their main domain so that people can even GET to their website using it, let alone use it as part of one of their products.

Re: Sad, but...

People will do what they did with Amazon - go to the cheapest supplier. If Amazon fails to be the cheapest, you go elsewhere. Not saying there couldn't be some collusion and price-fixing, but the thing about online sales is that you can't get EVERYONE to sign up to you. If some local guy selling WHATEVER out of his house has a good website and a reasonable price, I'll use him quite happily. In fact, sometimes even in preference to Amazon. I've bought car parts from such people rather than pay garage or online-spares prices (even on "spares price comparison" sites) and never had any real trouble. People care about receiving the product for a decent price.
They're not particularly fussed about a 1-2 day delay (as evidenced by high-street deaths), so long as they get the product, don't get conned, and can find it quickly and easily on your site (and that your site pops up on Google, for instance). If every big-name online store shut down every bricks-n-mortar store, then doubled their prices, we wouldn't use them. It's even easier to move onto "guy who charges the original price, plus £1, to cover his website expenses" than it is to even walk to the shop next door. Online shopping wiped out the competition by being more convenient and cheaper. If they aren't more convenient (i.e. their prices are high and force you to check several sites for the best deal), and aren't cheaper, the same Darwinian selection will happen to them. Methinks the tax issues are more likely to raise online prices on Amazon than anything they do themselves.

Went in there before Christmas with the girlfriend. Walked out empty-handed. Went in there on their 25% off sale just last weekend (and it was 25% off the prices they were normally charging not a "sale-to-put-things-back-how-they-were-priced-anyway"). Walked out with £50 worth of stuff but - to be honest - that was more impulse purchase than anything else (we were both checking with each other "if it's okay to buy that" because we knew we were just impulse-buying and could get those films cheaper anywhere else), and they sell a lot of foreign movies that my girlfriend likes (we bought three foreign movies and two dvds-of-a-series). We had put a lot of stuff back on the shelves when we weighed up the value of it. There were no queues that time, either (which is unusual - they had some atrocious queues before Christmas, even weeks before, and not enough staff - enough to make me walk out without even looking because I wasn't going to queue through that for an impulse buy). And what value do the staff add? None.
It's basically a DVD and music library - flick through, get what you want, take it to the counter. What's the advantage over Amazon, etc.? Immediate availability of the most popular titles only (for years, they didn't know what "Just Good Friends" was when I was trying to buy it on DVD, didn't have it for years, and could only try to order "Just Friends", some American comedy movie). Same as Comet - products on a shelf, pick your product, staff are useless, most things not in stock anyway, and pay over-the-odds to get what you can get elsewhere. It's basically a big supermarket for non-perishable items that's more expensive and more hassle than the alternatives.

Notice, also, that WHSmith have several large stores that have no DVD's at all on shelves (one in Watford has only two little turntable shelf things with about 20 unique DVD titles on there, most of them kids' films). They know they can't compete. About the largest WHSmith DVD section I see nowadays is the one near the BBC which sells, surprisingly, mostly BBC documentaries and comedies. They're quite good at knowing what sells on impulse and they've cut right back from the days when you had walls of DVD's in there, the same as they used to have shelves of ZX Spectrum tapes back in my youth but now don't sell videogames at all.

I'm not shocked. All these big, established chains wanted to do what they've done for 90 years and not change. They didn't stand up for the consumer (hell, imagine if they'd said we only sell DRM-free disks? That would be a kick in the teeth for their suppliers and also have consumers feeling they were on the same side). They didn't innovate. They didn't compete. They didn't change when they knew they couldn't compete. They just drive themselves into the ground, blinkered to reality.

Go find a sheet-music shop. It'll be some tiny back-street affair with a few instruments in the window, some adverts for tuition, maybe tutorials and CD's, spare parts, you name it.
Because they know how big the market is, and what they have to do to keep afloat. Now find a CD shop (HMV was pretty much the only one left - Virgin Music is dying off too and has been for years). They are IDENTICAL to how they always were, even down to rifling through bins of CD's put into four genres, with high prices, huge premises, and useless staff who got the job because they "like listening to music" (so, only about 99.9% of the population to choose from then). And nothing much else. No online shop, no burn-to-CD service, hell, they could have stuck some instruments in there and set up a £5-a-go recording studio for teenage groups and try to sell the instruments on the side, but no. They didn't even TRY to change. They didn't even try to engage their core market (seriously - these music-fans wouldn't be interested in an instrument section, or some band trivia, or even indie band gigs in-store?). I have no doubt they made a lot of money for a LONG time, but it's hardly shocking that that came to an end.

I actually chose HMV as the "next to go" when I was shopping in there just after Comet went bust. I don't think Dixon Group will hold the monopoly for long, they just held out for longer but have the same problems as Comet did. WHSmith has held on pretty well in my opinion, but that won't last forever given the changes I've seen lately. I'd probably go for Pets At Home next - can't see how they make money from the occasional sale of a rabbit and some overpriced dog-food (and, hell, you can't even get a kitten or puppy from them!), especially with their usually-huge premises. That or Hobbycraft, but Hobbycraft covers quite a diverse range of people and products. To be honest, wouldn't be surprised to see one of Wickes, B&Q or Homebase go soon, either. Overpriced tat and dumb staff in huge premises.

Re: Problem is people don't like not having access to files

I don't understand why every Windows program isn't "bottled" into its own private area.
Let it write to the Program Files folder. Just not the "real" one, and let the admin determine which overrides what (so you can have the "real" Program Files folder always take precedence over anything installed by a particular app). When you uninstall, you delete the bottle. Thus, you don't cripple Windows by removing vital files that it overwrote. You don't leave traces of the program everywhere. You don't end up with a million old copies of msvcrt.dll because everything bundled one and left it around "in case it broke something". You can rollback to previous versions of a bottle without worrying about X needing DLL Y and vice versa. Do the same for registries (because that's just another abstraction over file access).

If a program wants to work on a user document, a copy is created inside its bottle (so it only sees the files that the user actually gives it - hell, it can list all it wants of what the user lets it see, but actually opening a particular user file requires permission SOMEWHERE) and, if the user wants, the changed file is propagated back into the user's documents when it's closed (again, with suitable rollback - we have Shadow Copies - USE IT!).

Do the same for ANY startup list or service (and having several of these lists is RIDICULOUS) - let the program do what it thinks it's doing, then ignore it, then have the user decide (by domain policy, or user restrictions, or popup, or whatever combination is appropriate) whether it ACTUALLY gets to do that for real, and with rollback.
Then it doesn't matter that program X comes bundled with toolbar Y that always tries to install - it thinks it's installed successfully, even when run as admin, can't tell that it hasn't, and the user isn't affected (and network admins can just have all these options turned off so programs think they are trashing C:\ or installed in the root, or in the startup entries, or have installed their pseudo-printer or whatever, when in reality nothing has changed for any user, even the admin). Programs can do anything stupid at any time. Let them. Then ignore that stupid action.

That's how it works, without having to stop things running (and cause uproar from users and application producers alike), without seven million permissions dialogs, and without breaking backward compatibility. Don't just allow virtualisation of the OS, let every program be "virtualised" and think it's writing to C:\ when in fact it's writing only to its own private bottle. MS even understands how to do this - some registry compatibility layers for old Windows do exactly this kind of thing!

A program demands admin rights for some archaic / stupid reason? Give it to them - as a user that is limited but can "fake" any access it likes. Hell, let it be "admin10437" inside a chroot-like jail that only admin10437 writes to or reads from, which it is unable to escape because it is IMPOSSIBLE to tell that it's in a bottle (i.e. it writes to C:\ as far as it's concerned, it just doesn't happen for real) and which is contained inside a subfolder of the real OS that is able to ignore any and all registry, file or other things inside that bottle at will. There's no excuse for sloppy task management, not even "compatibility", or confusing administrators. It can all be done TODAY.
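The file side of that bottling idea is easy to sketch as a copy-on-write path shim. This is purely illustrative - the class and paths are invented, and real per-program virtualisation has to live in the OS (Windows' own UAC file/registry virtualisation works along these lines), not in user code:

```python
import os
import shutil


class Bottle:
    """Toy copy-on-write view of the filesystem for one application.

    Writes always land in the bottle; reads prefer the bottle copy and
    fall back to the real tree; "uninstall" just deletes the bottle.
    """

    def __init__(self, real_root, bottle_root):
        self.real_root = real_root
        self.bottle_root = bottle_root

    def _redirect(self, path):
        # Every path the app thinks is "real" maps into its bottle.
        rel = os.path.relpath(path, self.real_root)
        return os.path.join(self.bottle_root, rel)

    def open(self, path, mode="r"):
        shadow = self._redirect(path)
        if "w" in mode or "a" in mode:
            # Writes never touch the real tree.
            os.makedirs(os.path.dirname(shadow), exist_ok=True)
            if "a" in mode and not os.path.exists(shadow) and os.path.exists(path):
                shutil.copy(path, shadow)  # copy-on-write for appends
            return open(shadow, mode)
        # Reads see the app's own changes first, the real file otherwise.
        return open(shadow if os.path.exists(shadow) else path, mode)

    def discard(self):
        # "Uninstall" = delete the bottle; the real tree is untouched.
        shutil.rmtree(self.bottle_root, ignore_errors=True)
```

The app believes it modified the real file; the real file never changed; deleting the bottle rolls everything back in one step.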
And then when a virus comes along, it ends up in a bottle, on its own unable to see anything or do anything interesting, and - if detected - can be rolled back safely in a second including any and all hooks it TRIED to put into the OS (and, obviously, would have failed at doing on any non-trivial permission setup). Fact is, as the most limited of users, you can still wreak havoc on a typical Windows PC even if that's just making it so busy that you can't log it off, or deleting all that user's documents. That SHOULDN'T happen, ever. We have the technology, it's there. Just make every execution run inside a bottle rather than have access to the system itself. A program may REQUEST that I put it into startup lists, but it cannot MAKE me, or do it for me unless I want it to. It shouldn't even be able to detect whether I have allowed that or not. Windows still hasn't sorted silly little things like this (hell, startup lists - some of them, not all - have been hidden away inside msconfig for years and aren't user-friendly at all). Solve this sort of thing, and you don't need to break ANYTHING, and the rest solves itself.

AV is ineffective. It does some things but not nearly enough to justify its cost, performance hit, and other problems. You only need to work in IT for a while, especially with the front-end of business and networks, to see this. We deploy it because even some things like PCI certification require "up-to-date anti-virus". In all the years I've been deploying AV, I've seen it stop only a bare handful of the most benign infections. Most of the real ones, that start popping up pornography on students' PCs, or trying to delete entire drives, or even things like "encrypting" every single file on every shared network drive that it has write access to and deleting the original, have gone undetected no matter what the manufacturer, or how often you apply updates.

AV is a bouncer's list of who not to let in, and about as accurate.
Sure, it stops some known troublemakers but 90% of the people who start a fight inside the club aren't being dealt with for years after their release (my bursar just got an AV update that marked an email that was FIVE YEARS OLD in his archive as a virus - it was a true detection, but it took that long for the signatures to appear that it could recognise it). You wouldn't let your bouncer JUST stop the people on his list and ignore the fights breaking out behind him (which is the bit that SHOULD be dealt with by "heuristics" but they are even more performance-killing and ineffective), so why do we tolerate AV?

Basically AV is a miner's canary. When it falls over, because a virus has disabled it usually, that tells you something is wrong. That's not the ONLY indication you are given, and sometimes it doesn't give an indication at all. But it's the only useful purpose of AV (and I've seen more AV drop off the network because a virus turned it off, even without admin access!, than I have successful network detection of viruses). We use it because some stupid people think it's necessary. The actual fix is less-powerful users, easier-to-control permissions, and easier-to-roll-back-from-anything systems (I should not have to put entire machines back to a known-good state just because one program as a limited user ran riot and infected their own files). Until then, AV companies will still reel in the money detecting next-to-nothing and ghosts in the machine rather than actually STOPPING programs being able to delete or write to arbitrary files without permission.

Never driven through Europe? Usually the first you know about crossing a border is when your phone connects to a new network and sends you a text saying "Welcome to Germany". You literally just cross a sign at 70mph a few seconds later (like "Welcome to Middlesex" - style) without stopping and you're in another country.
Not even a line, or a person, or a checkpoint, or a different tarmac on the road or anything. And there are sometimes even houses and streets that straddle the border. I did a 2500 mile round trip around Europe and wasn't hindered once (France, Belgium, Holland, Germany, Czech Republic, Austria, and then looping back to the UK through France and Germany again - the only reason I didn't get further is that my companions had to fly back to Australia and we lingered too long in Germany, but we were planning Italy, Poland, Spain, etc.). Some countries do have physical borders that they don't even enforce (e.g. France or Switzerland -> Italy means going through tunnels or over mountains, and they stop you and charge you money for a badge that allows motorway usage, but don't actually check your details at all). Europe is pretty open. It's incredibly easy to not even know what country you are in if you're not on the main motorways. And it's so easy to cross countries that you can literally do it accidentally, and with nobody knowing.

Which can be a bit of a pain when UK customs stop you on the ferry back and ask you to prove where you've been and start searching the car thinking that a lone male on a "road trip" to Europe with friends that can't be contacted is probably not being honest. Hell, I didn't even have a receipt for any of the hostels we stayed in because I was doing the driving and petrol because I had the car and a UK credit card, and the others paid for the accommodation because they had cash in Euros. I swear that the 5 customs officers who took an hour to search my car at 3am in the freezing cold were certain I had something even after they removed all my door panels and took my boot apart. But through Europe? Nothing until you hit Calais or the former Russian states, basically. Europe is pretty open, until you get to the extremes.
Re: Did someone

Artistic license means you can say that your spaceship goes faster than any spaceship is capable of, or that your main character can really jump that far and swing around a pole and still shoot straight. It doesn't account for a script line which basically says that someone is "3 litres tall", or "wider than a cheetah's top speed". It's an error. And I don't see a lot of time wasted on it, but it's certainly wrong.

Part of the filmmaker's job is to suspend disbelief and make us think we are "there". Someone saying something completely nonsensical, stupid and wrong and NOBODY present in the movie questioning it does the opposite. We all just go "What? Did I hear that right?" and miss a minute of the film while we all laugh at it. And, literally, the fix was to get someone in who knew the tiniest bit about space (I mean, literally, even a student spots the error!) on your space-themed movie and have them look things over. On a multi-million dollar budget, I'm sure you could hire, say, a PhD for a day just to look over your script.

This is basic diligence when writing scripts, also. Star Trek (the other nerd-franchise that I don't watch) used to have the script-writers write "insert techno-babble here" and then they'd pass it off to a real scientist who would insert the bits about Heisenberg Compensators etc. (which is what artistic license REALLY allows). It costs nothing, it aids in the suspension of disbelief, it stops you looking like an idiot, and it stops making X% of your fans CRINGE every time they hear the line.

If you want an example of this in the modern day - try getting something wrong in The Big Bang Theory. It would be stupid, and embarrassing but we still would give you an awful lot of artistic license when it comes to most stuff.
But even Howard using the wrong unit, unless it was a plot element and picked up on by the other characters, would jar in people's heads and make them forget they are watching entertainment - and that's the ONLY job you have if you're making TV or films.

It's a real bugbear of mine: films where people do incredibly stupid things for no reason other than to support a badly structured plot really annoy me. They make me switch off and not watch the film again. This is on a par with the "Oh, the chainsaw murderer is after us, so we'll all split up, not call the police, not prepare a defensive weapon, hide out in a convenient abandoned cabin, get killed off one-by-one through our own stupidity and separation, and then the last one will run through an empty, dark forest they don't know late at night while they know the murderer is outside and inevitably trip over something (and only then will we realise that the weird one in the group was the murderer all along). Then we might 'capture' the murderer, and lock him in a room with a nice large window and convenient replacement weapons."

By comparison, say, Aliens: "I say we take off and nuke the site from orbit." Good man. Let's go. Even "The Thing": Let's gather everyone in a room, aim guns at them, formulate some sort of test and burn the hell out of whatever one turns out to be the alien (or just wait forever guarding them if we can't find out) - about the only "odd" point of that movie is locking a man they think is going insane in an outside hut while it all goes on, which is perfectly feasible in the circumstances, but a little odd that they forget about him so much.

You have to "believe" in the characters. The ones who do stupid things (and, let's face it, that line is there SPECIFICALLY to show off how fast his ship is, and fails to do that and everyone he speaks to takes it utterly seriously), you can't believe in.
Re: My question is

@Lee $40m seems expensive when you could have just blocked your API, or put some restriction on it that would then make this software illegal. The worst that would happen is some web-scraping monstrosity would appear, that didn't use the API, and had to be updated every time you changed the way Twitter worked (which you could do whenever you liked for minimal cost). And, to be honest, you'd be within your rights to do what you liked to the internal code to make it almost impossible for them to keep scraping - eventually to the point where people would just give up on the app because it wouldn't work half the time and would need constant updates. No, there's more to it than just removing an unwanted "feature" from a third-party's access (i.e. there's nothing to stop anyone else doing exactly what that software did and waiting to also be bought up - it's like paying off terrorists, all you end up doing is making all the others raise their prices and encouraging them to try harder because they know there's a pay-off in it). I bet there was some patent or other property in the business that they wanted, probably related to collating tweets, etc. and which they either already stamped on, or intended to.

Re: My question is

Some people talk a very good game. Notice that the original director is a "multimillionaire", and has got away perfectly legitimately, one assumes, with lots and lots of money on the basis of setting up a company that did exactly what you describe and nothing more. You'll probably find he's done that in several places and, to be honest, well done to him. He's worked out how to make lots of money legitimately on the basis of fools paying him for something which isn't worth what they think (which, if it were illegal, would mean that almost everything would collapse overnight).
And he's not the one who's looking to be delisted, or the one who hasn't filed accounts (you'll probably find he files accounts religiously every year because - well, he's doing nothing wrong from what I can see), or the one who future companies might look at and say "Oh, hold on now - I heard something about this - you went under, didn't you?". In fact, he was so good he founded it from nothing, sold it for $40m, and only when he was no longer involved (and almost the moment he left) did everything go down the drain. If anything, that makes him sound better! Similarly, Facebook was never worth what it was floated at. Never. Still isn't now. But some people made a LOT of money by shuffling shares and cash around for a very brief moment (and got rich doing legitimate things), and then dumped the shares on those fools who thought they could only go up in value (which they haven't - in fact, they've done almost nothing but go down in value, and look likely to until the company disappears). It was never worth $40m. But some entity THOUGHT it was and paid it. It's like someone paying me $40m for a painting I find in my loft that I know is just a cheap painting. So long as I *don't* misrepresent it, or otherwise commit fraud while selling it, if they want to give me $40m for it? That's up to them. I'm not going to argue. I might even tell them I have other bidders (if I do) or that I won't let it go for less than $50m. I'm not going to say "But it's only worth tuppence" unless I'm an absolute idiot - but if **I** said it was a genuine Picasso when I know it's not? That's a different matter. A £40m asset that can't even be bothered to comply with statutory legal regulations. That would have me hastening to distance myself from it, more than any "bad report" of profits could ever do. Hell, you could have made a £40m loss last year and STILL it wouldn't be as bad as failing to supply the information necessary about that by law. I do like the punishment, though. 
Prior warning. Reasonable fine and another warning. Reasonable fine and (now) a sterner warning. And - if they continue - now you're not a company any more, making it illegal to trade, forcibly winding up the company and presumably a legal investigation into the actions of the directors etc. I think that's quite fair, given the circumstances. The pitiful profit is neither here nor there. Many companies make a pitiful profit whether they are valued at billions or not. The important thing is that there's a book somewhere with a record of what you did with your money - and not filing returns is HIGHLY suggestive that that book doesn't exist or would reveal something that's illegal. It's telling that the last director filed a return before he left, and the new owners have filed NOTHING. Maybe they didn't buy what they thought, or it's been embarrassing to report they've tanked the company, or they have uncovered discrepancies that stem from the previous director's reign. But not telling which (or even initiating a lawsuit in the case of the latter) is more detrimental, and more telling, than anything they could have done. Hell, if it comes to it, probably just asking for more time would have worked wonders. But silence and not filing? I'd do everything to disassociate myself the second I heard that if I had anything to do with them.

And so breed a generation of coders that think you need to reformat machines every time they exhibit the tiniest of bugs. Lovely.

Exactly my point.

P.S.
How are you going to download and install the OS on the RPi machine in the first place, how are you going to get it to receive updates, how are you going to follow online tutorials for it, how are you going to make sure the kids can't just brute-force passwords or attempt DDoS on the servers using their lovely £20 machines with network connectivity, or spam the Internet because they "downloaded a project" for the Raspberry Pi that turns them into spam-spewing zombies, how are you going to stop them bypassing filters, etc. etc. etc.? By having decent security on the network to detect and/or block such activity no matter what machine tries to do it - which is necessary anyway, so what have we achieved? Nothing. This is the entire problem with BYOD, by the way - sure it can work, but what you're basically doing is APPLYING SECURITY so that it doesn't cause you legal or technical problems when used from an unsecured machine (which is kinda a daft thing to do, but that's not my problem because I don't do it). A school was fined hundreds of thousands of pounds not-so-long-ago for having a laptop stolen that was unencrypted and had children's reports on it - there wasn't even a suggestion that anyone actually had that data or had distributed it elsewhere or caused any damage from that data leak. We're not even talking highly-sensitive data (school reports usually contain a name and a brief summary of their progress from their teachers - not even their address or anything related to medical / psychological problems they may have), and they did all the reporting of the theft as prescribed by law. The encryption (and subsequent password management, and security ensuring the staff member doesn't "unencrypt" even quite harmless data by putting it onto a USB stick and leaving that in their car - which has ALSO been prosecuted) is there for a reason - and to be honest, it causes me problems and I'd love to be able to do without it for client machines.
Fact is, computer systems in schools (and businesses) are like that because they contain data that needs to be protected and which can't be passed off to cloud systems, can't be easily put into the hands of third-parties without explicit contracts, can't be unavailable - e.g. emergency medical information on children and/or required exam coursework for them to work on that happens to come under the DPA. There are legal requirements to store and protect that data for years (and if a kid has a photograph with a name attached to it, or even some information about themselves like, say, a test CV - that's "personal data" under the terms of the DPA, so we're not just talking about things on the "admin" network here, but the "curriculum" network that the kids use too), and doing so in accordance with various laws which mean passing it off to a third-party cloud host in the Bahamas doesn't let you off, and in fact gets you into more trouble. Even letting a single rogue host onto an unsecured network that can get access to something it shouldn't can be defined as a failure to protect that data (even if it's the kids' own work, on a kids-only network, from kids-only hardware!), and can be prosecuted - which is where basic network security comes in: approving applications and plugins, limiting users, blocking off the Internet, keeping on top of antivirus and vulnerabilities, etc. Fact is, there's nothing that CAN'T be done on a properly secured network, otherwise there would be no point trying to secure the network at all (hell, virtualise everything, if it comes to it!). It just has to be done in consultation with your IT people and with due care and process. Thus why I call horse-manure on this particular quote. If security is interfering with your ability to teach ICT, you're teaching ICT badly or your IT people are failing in their job.
But if there's NO security at all, in the name of not interfering in lessons, your IT people will be disappearing so that they aren't named on court proceedings when it comes to a DPA violation, and your external providers will all have clauses that mean they are immune or that it's your problem, not theirs, that little Johnny's personal details just got splatted all over his friend's Facebook pages, traced to a school dataset. Want a school that doesn't have basic security applied? BYE! Want a school that has lessened security because of perceived "problems" on the user end? Slippery-slope into all-users-being-admin areas, and inevitably you'll find holes everywhere that you can't stop without being as strict as the average school network security policy anyway (that's WHY those policies are that strict as a minimum). Think that end-users are hindered in their use of a properly-configured computer on a properly-secured network? Tell your software manufacturers, especially educational ones, to pull their fingers out and not require admin access, local installation, out-of-date Quicktime and Shockwave etc. browser plugins, in order to show three croaking frogs on the screen and let the user click one of them to be directed to an external website that runs Java plugins and hasn't been updated in years. Then see how "necessary" it was to change security policies to stop "hindering users". "Important aspects of Computer Science and Information Technology teaching and learning are being compromised by the need to maintain a secure network – in the same way that health and safety myths are holding back practical science." Like what? I'd be very interested to know where and when IT security of a network involving children's data and access to communication facilities with them trumps them being able to pass an A-Level and exactly where they conflict to the point that education suffers.
Unless, of course, you're counting wanting to "do everything" on a machine, install plugins from every content manufacturer that wants admin access to your machine to administer a test in Flash, or where filtering of pornography stops students getting onto sites they need to use. Because, obviously, those things are VITAL and MUST BE DONE THAT WAY. <end sarcasm> I mean, seriously - if there's any hindrance here, ALL schools, universities, government departments and businesses will also be similarly hindered, and thus that's perfect training for students not to expect to be able to do those things. Or is that just a sponsored message from your local junk educational software producer (who've only just got out of the habit of using Quicktime and still haven't grasped network paths yet). One-site-only TODAY. Never forget that things deployed by a single corporation today are what you'll be using in your corporation tomorrow (or some unspecified point in the future unless some other fad comes along). Otherwise health & safety statements would be two lines long in most places. That said, the technology is there to do this, I just wonder about the practicality. Can my child run off and buy a load of junk without me knowing while I pop to the loo? When do I find out? When I get home and see a credit card bill with one huge number from Disney on it (i.e. not even itemised)? How do I query it, get refunds, etc. and how do you know I *DIDN'T* go on that ride, but actually just brushed past the reader while opening my backpack after I gave up queuing? Not having cash in your pocket is a good thing, but if it's linked to a credit card that's in my pocket anyway, what do I gain? Am I going to go to this place without a credit card or cash because I "know" I can use this device? Or am I going to soak my wallet anyway because I had to have it with me and I went swimming and forgot it? Does this really *solve* any problem that currently exists? I don't think so. 
All it does is make it more difficult to query transactions, requires everyone to have an "accepted" credit card if they want to visit (I assume non-users will have to pay a transaction fee or somehow suffer for not letting their bank lend them a thousand pounds on easy-access terms, and I bet it doesn't work with pre-pay credit cards, for instance, where you don't know how much you're going to spend that day and can't just hold onto £200 just-in-case they spend that much, like with normal credit cards), and not give the customer ANYTHING they don't already have in some form. And, thus, it's just "technology because". This is what primarily annoys me about even things like board games now. Monopoly has versions that use electronic cards to do your adding up for you, and also even an iPad version where each player loads their RFID card into their iPad to play a board game. Just what, precisely, do they add to the game? And what do they do about the bits it TAKES OUT (like kids having to add up to play the game with mum & dad, while trying to peel them away from the damn computer?). And thus simple facts mean you should audit ALL code you write, whenever and wherever, if you've signed this sort of contract (which he had). Hell, technically using a company pencil to sketch the idea might somehow "infect" the code (and companies complain about the GPL!). And, yes, although there is a lot of jurisdiction, contract, fairness, common-sense, and direct judicial decision-making here, it doesn't mean that it's "clear-cut". In fact, the opposite. If a judge has to decide an issue for you, even if it means having to get a lenient judge over a by-the-book judge, that's NOT clear-cut - and it means that the legal issue is still there - the individual circumstances may differ, but in the law the "crime" committed is identical (copyright infringement because of an inadequate license to allow you to distribute said copyrighted material).
In the same way, running a red light by accident leaves you open to a case of EXACTLY the same charge as someone who does it deliberately. The judge might side with you (notice: MIGHT), but it's not clear-cut, not something you should give assurances on (i.e. telling someone you'll be out of court in ten minutes and/or that you will be in Monday morning to do your normal taxi job, etc. - the same as giving others code that you tell them you had a legal right to assign the GPL license to!), and not something that you can guarantee - ESPECIALLY if you have signed a piece of paper in the past that clearly lays out your employer's side of the argument. Common-sense is all well and good, but if it ran the legal systems of the world, there'd be a lot less lawyers. Those who have signed contracts which even MENTION code contributions should carefully audit all their contributions to anything, no matter when, where or how those contributions take place. It's quite easy to know if you are writing code and distributing it or not. And those who publish code under an open-source license better have permission from the entity that OWNS that code (doesn't mean the same entity that wrote it!) or they will be in serious trouble and cause trouble for others. There is no distinction in law between distributing GPL code that your employer claims to own and didn't give you permission to GPL, and someone who takes an internal company project - say, their latest proprietary software - and makes it public on the web for people to download and even encourages them to download it with a "fake" license agreement. Both are the same legal incident and just as likely to end up with fines, sackings, jail or whatever is deemed appropriate in your jurisdiction - so consider writing GPL code on company time, or after having signed a company contract about your code contributions, exactly the same as just giving away Microsoft Office to newsgroups if you worked at Microsoft. 
Though you *might* be able to obtain permission from companies to do that (lots of companies give things away, from Serif giving away their DTP software for years, to other companies giving away their ancient versions, to companies - yes, letting you give away their original source code, like Quake) the two things are viewed as essentially the same act. This isn't anything "new" or exciting here. If you have signed a contract regarding code that you write, then it's up to YOU to enforce that contract to the best of your ability, which includes CHECKING what you are doing at all stages and not just assuming that your (possibly-soon-to-be-ex-)employer will always allow it. In the same way, if you sign a contract that says that the furniture in your office is the company's, you better not have a yard sale or giveaway from your office when you leave without checking with someone in authority on that contract first. The biggest problem with mentioning open-source is that everyone assumes that somehow the law applies differently to it than everything else. Companies and end-users assume that "Free" means they can do what they like with it, and some coders assume that they are somehow exempt from copyright law because of it or don't need to audit their contributions. That's NOT how it works. Open-source code is a property like any other - and needs appropriate permission to do most things on it. The contracts/licenses may give you that permission implicitly or explicitly or not at all, but it is that permission that is still required. Still don't think it comes CLOSE to profit, that's the point. You can badmouth BT all you like but keeping a phone connection open even just for emergency calls costs them money on every bit of the backend from your copper connection up to their national infrastructure. ADSL just add huge data requirements on top. And even if you assume they should plough back every bit of profit into upgrading lines, etc. 
it doesn't add up to supply a line that will make a loss (after nothing but running expenses) for 25-30 years, and which at any point you can tell them to stuff it and go with their competitor who might not pay toward their upkeep of that same line but run their own cables by then. Add on actually having to make a profit (they ARE a business, not a government entity any more), and having to subsidise other, even poorer connections elsewhere (some of them by government order, e.g. the "proper" rural broadbands like the islands and the 50km runs, etc.) and it's of course going to be damn expensive for the homeowner. But the fact is, even if you popped down to your local cabling supplier and picked up some ADSL backend hardware and did it yourself over 2km, it's going to be YEARS before you save enough to make it cost-effective, and even more years if you had to do the same for the whole town (which is why just about every "community broadband" supplier ends up folding, conceding, with ludicrous prices or low speeds, or selling out to a multi-national in order to stay afloat). Hell, if it's THAT profitable, buy yourself a leased line at business prices (they will run it to your door and GUARANTEE uncontended service, no matter what the obstacle), and offer it out over wifi (no cabling costs) to the entire town. You could easily run 100-1000 customers over a single leased line of a decent speed, but I doubt you'd ever pay for the line itself, BT involvement or not. That's what a load of community projects did, and they realised that actually it's damn hard to make any money at all, let alone recoup the outlay. Price up 2.5km of cabling, including digging up pavements or erecting poles to run it to the exchange (or 4km of fibre for the equivalent "independent" option). Price up a leased line of your own maximum speed from that exchange to an Internet hub somewhere that will peer with you. Now divide by the number of people that would serve (one).
That's how much your house will cost to wire for broadband no matter who does it. If it's cheaper than a leased fibre line direct to your house, I'll be amazed. Now consider the only economy of scale. Do the same calculation for a line to EVERYONE in the town (including all the cabling etc. that would cost, extra cabinets, etc.). Multiply up the leased line to the exchange to handle some proportion of them being "online" simultaneously. Now divide by the number of people who would buy it. I will still be similarly amazed if the per-customer cost was recoupable from the profit you could make in under 25 years of everyone being connected -ONLY with you - on your most expensive package. You don't live "out in the sticks" but you do live 2.5km from an exchange, which is probably 20+km away from a point it can connect to the Internet reliably with an SLA. It costs as much to wire you as any company will quote you for to wire just you anyway. Hell, even if you imaginarily did a Heath-Robinson job, you're talking 2.5km of cable or fibre and technology out of your price range on either end before you even start. You do not have a right to broadband access. And providing it to you, like BT has been saying for DECADES, costs more than the 50-year-old copper line that gives you phone calls cost to install (which is probably something that, nowadays, they wouldn't fund either with increasing metal costs). This is why cable is only in pre-cabled areas (because companies went bankrupt running that cable to you, because they could never make their installation costs back, and only the companies that snapped them up "for nothing" actually run a good cable service in this country - because they basically got the copper installed for free - and that's why they won't install new areas unless the end cost of X% of customers paying £Y a month for Z years actually makes their money back AND A BIT MORE). You are stuck. 
Until someone funds a closer exchange, a better leased line to that exchange, or some other alternative that passes closer to your house and doesn't cost about £10,000 to install (which you won't pay back on a basic ADSL service for about 42 years - and that's assuming there are NO ongoing costs in keeping it running). Suck it up, or fund it yourself. Much as I like to point out how crap BT are, they really do have a point about rural broadband installation.

"The offending code, highlighted by Micalizzi, is a simple loop that copies the entire URL into a fixed-sized buffer while scanning for '%' escape codes"

Seriously? A fixed-sized buffer that you didn't bother to check the contents fitted inside? I mean, not even a check, let alone actually sizing the buffer properly in the first place?

I wear a watch. I've worn the same watch since I was a kid. It's a Casio W-59. In fact, I've never worn any other sort of watch, except other Casios that look identical but that have different backlights (and they do a model that does the MSF radio-clock time-setting, I believe). Every single example of that watch I've worn has lasted 3-4 years and then the strap breaks and I buy another. I have a drawer full of the mechanisms with no straps on them that are STILL WORKING 10+ years later with no battery change (and changing the battery probably costs as much as a replacement even if it does ever happen). It shows me hour, minute, second, day-of-week, and date-of-month at a glance and has a little light so I can see it in the dark. When I was younger, I could read books in bed in complete darkness by the tiny light it gave out. I can set an alarm if I've nothing else on me capable of doing so. It's waterproof and pretty damn solid (even the strap, which takes YEARS to give out) so I've never managed to do any damage to or lose one from my arm even when swimming and forgetting it's there. And how much do I use it? Barely ever.
In fact, I put it on every day and probably spend more time over my life putting it on and taking it off than I ever do looking at it, but I miss the weight of it if it's not there. Actually, I probably spend longer adjusting my watch once-every-six-months or so to make sure it's on "my time" than I do looking at it. Why do I carry it? Sheer habit. When I was younger I used it all the time for school. When I go to job interviews, I like to have it there to make sure I'm on time. Every other time, I don't use it and have actually pulled out a smartphone before I've realised that I'm wearing it (and, bear in mind, I've worn one every day for the past 15-20 years). I have a bad memory and so have a morning routine which involves the watch and, also, a pat-and-count of my body to make sure I have taken everything (without which, I would end up driving miles to the shops and not have my wallet on me when I get there, quite easily). Watches are inconvenient. If you wear long-sleeves, you have to pull them up to look at the screen. You have to sacrifice the usefulness of both hands to check the time, in that case. You have to put them on and take them off and be used to them being there (I have caught mine several times on things when working around the house and given how long I've worn them, that's quite telling). I work in front of a machine that displays the time, in an office with a clock, on an office phone that shows the time, with timed bells (I work in a school). At home, I have a machine that displays the time, a clock that displays the time, a TV that displays the time and various ways of discovering the time otherwise (including a drawer full of watch-faces!). In the car I have a radio that displays the time and a clock that displays the time. Walking around I have a watch that displays the time and a phone that displays the time (even when locked). I don't go anywhere without both. 
It doesn't mean I'm never late, or that I always know what the time is, but the time is everywhere. So my watch could easily do more and I would be right alongside that idea because I carry my watch and extraneous gubbins around with me all the time out of habit. But a watch that "does something" has been around since I was a kid - everything from calculators to measuring tapes to hidden pens to radios to TVs to - now - "smartwatches". I don't believe that people use them practically because they aren't in a convenient position for a) looking at anything without sacrificing at least one arm's position while you do it, b) hearing anything it does without it disturbing others, c) it hearing you speak, d) the size of the interface available on the watch, e) pressing buttons (which you have to do with your other hand rather than the "same-thumb" technique for holding a smartphone), f) being unable to comfortably use it once you've removed it (so that limits its ability and value if, like me, you take your watch off when you're indoors). The watch is just not a convenient interface for anything, even hands-free. Nor are bluetooth headsets, I'd like to point out, but a watch even less so (not even close enough for audio in a noisy environment, for instance). Of all the space-age tech we saw in sci-fi and Bond movies over the last 5 decades, the gadget-watch has been around the longest and enjoyed the least success. I'm not surprised watch companies won't touch gadgets with a bargepole. Hell, I even laugh at the star-trek badge that has to be tapped to talk. I find that hilarious, given how much of a pain that must be to keep pressing (and I bet it wears a nice little hole in your nipple after a few years of busy pressing), and that's halfway between a headset and a watch for communication purposes. Honestly, watches are fashion items and items of habit. Nobody's needed one since mobile phones, same as address books, calendars, and calculators.
Making it "smart" won't make it an overnight shock success (though, obviously, you'll always sell SOME of them). In fact, all it will do is make smartwatches things we can all laugh at.

Because with SSDs, not much else has mattered since they were first on the market.

I entered the competition. But if I won it, I'd sell it. I mean, seriously - it's an overpriced tablet with a docking station. Sell it, buy a nice tablet (if that's what you want), spend the rest on a real laptop, get on with life.

Worked fine for me just a minute ago. Though I'll be damned if I'm going to remember the specs / reviews of 10+ model numbers of computer and nominate which one is best on the basis of that.

And, presumably, the better workers at that.

Muesli is fried. It's probably one of the worst things you can eat. Go compare the nutritional information to any other cereal (e.g. honey-nut cornflakes) next time you are in a supermarket. That said, it tastes like bird-seed and I'm with you on the first part, so I avoid it for that reason.

I'd rather have a life experienced for 70 years than death avoided through sacrifice of that experience for 100.

True, but the diet versions are generally worse, believe it or not. About the only other difference between diet and not is the presence of "real" natural sugar or some artificial substitute. And sugar/acid rots your teeth, don't you know? But simple physics tells you that drinking it down (and even through a straw) instead of sipping it / swilling it around your mouth (like kids do) is actually better, because the exposure time on the enamel is lengthened significantly if you swill it, and that's the biggest danger. You can watch a tooth dissolve overnight if you leave it in coke, but that's not what happens in your mouth if you just drink it normally.
Hence, we should make straws, and drinking the drink in one go, compulsory, in preference to any kind of ban whatsoever, because that has a quicker, more obvious, harder-to-tackle, and easier-to-manage effect on teeth than anything to do with calcium-leeching. Which kinda makes it obvious just how relevant the damage to your calcium is compared to anything else you eat/drink damaging you (i.e. not). Personally, I drink SO MUCH coke that you'd probably recoil in horror. Seriously. I do not drink tea, coffee, alcohol of any kind (not religious, just can't stand the taste and don't see the point, and have a father who worked for breweries all his life which has meant "free beer" since I was old enough to try it) or water (except at work where I'm not allowed to bring in fizzy drinks - I work in a school) and my main beverage is actually coke. I buy it en-masse from the local Costco, because I get through that much of it. When they don't have it, I use Pepsi or some equivalent. When I go shopping, I do not buy any other beverage unless it's for guests. My ex-wife was the same, and independently of myself before I met her; her father has also been the same for years and researched the effects as a "proper" scientist too, and made her stop holding the drink in her mouth when she was little - same for sweets, a boiled sweet or chewing gum running around your mouth for ten minutes does more damage than 2 litres of coke passing over only your tongue. He's also a professional fitness instructor, qualified science teacher with several PhDs, and they both ran karate clubs for decades which killed almost all visiting black-belts through sheer stamina and fitness levels that were unrivalled outside of professional sports. With my ex, it was caffeine-free diet coke though (even though that has more of a calcium-leeching effect). I've done this for literally years - since I was a teenager living at home.
My parents have Coke on standby for when I visit because they are so used to it, my girlfriend's parents in Italy stock Coke especially for me, and when I meet up with my ex- or her father still we invariably have two cokes. When I go to a pub or restaurant, no matter the country, or how posh, I drink Coke. When I go to a friend's house, they know it's Coke or nothing. I have a sip of wine at Christmas to be social and toast with others, but otherwise it's Coke. It doesn't even need to be "proper" Coke, or Pepsi, or a named brand. My teeth? I haven't been to a dentist in about 15 years and have no problems with them (in fact, when I was starting university I had to have many milk and wisdom teeth forcibly removed to make way for adult teeth - they were all in pristine condition and refused to budge without an operation). My bones? I've never even broke, fractured, or damaged a bone in my entire body in my entire life (but that's purely anecdotal and doesn't indicate they won't be weaker in years to come). My sleep? I drink caffeine all day long, every day, and never have trouble sleeping (this is because caffeine has a tolerance effect that builds up, of course, but I still love it when people drink coffee all day long and then refuse a coke unless it's caffeine-free because "they won't sleep tonight". When I don't have coke for a day, I get a slight headache the next day and then it passes - tested on periods up to two weeks long). My weight? I'm actually bordering on underweight, have no diet (or toilet) problems, and eat like a pig all day long. My doctor? In the last 15 years, I've seen three doctors, and only to register with them and to have interventions not related to diet or lifestyle (e.g. wisdom teeth pulling, swine flu etc.). I can count on one hand the total visits to doctors over that time. Whenever I register with them, the blood tests and fitness tests pass straight through without comment. My blood pressure, weight, BMI, etc. 
are normal, always are every time I have them measured. About the only downside to my beverage of choice? When I had norovirus a few years ago in Italy (the last illness of any kind I had, and not surprising when you change country and meet 50+ people for the first time), I'd just drunk Coke and the resulting expulsion was black. It merely made people think they needed to phone for an ambulance until it was explained. If you want to enforce bans, then you need to ban the right thing. A fizzy drink isn't dangerous, even if you're drinking literally hundreds of litres a year and nothing else for decades. What's MORE dangerous is washing it over your teeth for an unnecessary length of time (your teeth have no taste sensors, so why do it?). Thus the "ban" should be on boiled sweets, chewing gum (even "sugar free" which has the exact same sweeteners in it as diet drinks), anything that "fractures" in your mouth or sticks to your teeth like popping candy or chewy bars or even cereal bars. Similarly "banning" fatty foods should start with muesli. It's fried. Don't believe me? Go compare the nutritional information of muesli with ANY OTHER CEREAL (last time I did so with sugar-covered honey-nut corn flakes, the corn-flakes won hands down no matter what the brand, sometimes with as little as half the fat/carbs/sugar of the muesli). Also, a school banning those things (like mine, and most others, already do) does nothing - the parents will still pack it in lunchboxes (and be told off by the school, so they'll put it in their kids' bags and tell them not to tell the teachers), the older children will pop to the tuck shop down the road, and the others will go home where it will be freely supplied. All you would do is increase the chances of kids desiring it because it's "illicit" in schools. It would have to be a BAN, outright, and that seriously infringes on my lifestyle choice that is literally hurting nobody and has not added ANYTHING to a health service burden.
Ideally, though, what we need to ban is stupidity and people drawing single-line conclusions from newspaper reports instead of finding out THE TRUTH about what they are eating. Ban tomato sauce (serious health risk as it's almost impossible to tell when it's gone off and is often consumed outside of the best before dates, that's before you even consider the sugar in it). Ban boiled sweets, toffees and most sweets in general. Ban cereal bars. THEN you can ban a fizzy drink for leeching your calcium.

Pay the money, or find an alternative, rather than struggle along with something that's unsatisfactory. I would be equally happy, in the same situation, to stump up for an external box and pay for its installation to the local sparky (who, buy him a beer, and he'll do it a lot cheaper than the usual tourist-idiot-quote). If you're worried about aesthetics, buy a load of stone of the same type and build an outhouse for it that blends in. Or, similarly, to just tell the electrical company "No thanks, then" and cut off the supply entirely. I'd probably then ring round their competitors and see who could hook me back up with a decent amount of power. Failing some corporate back-pedalling, I'd then just buy a couple of solar panels or a genny and go off-grid. Seriously, if you're paying every month to be struggling with only 1.5kW, you might as well do it on your own terms and without reliance on someone else. 1.5kW is not a lot of instantaneous power to generate and you won't be doing it 24/7 (hell, I bet any modern house only pulls that when you have tools or appliances or heaters turned on, and you don't have something on for 24 hours a day except possibly lighting and background electronics like clocks, alarms, TV etc.) - hell, if you're living there permanently you don't want the hassle of the power problem and if you're living there sporadically (e.g. holiday home) you win big time by just doing it yourself.
Honestly, if it was that prohibitive, I'd find an alternative and not suffer it even as a fallback (why, if they provide such pathetic service?). If it's not that prohibitive, then you should just pay it. It's not like the £10,000 that some ADSL ISP's want to charge some people because they are 20km from the nearest town and they have no cables that way - there's a reason there that costs, and if they seriously are charging too much, why would you faff about with a dial-up that cuts out every hour when you could just go with a satellite or wireless provider? Honestly, I think you're being a cheapskate and then whinging because of it. And if you're not a cheapskate, shell out the not-a-fortune on your own power independence and solve the problem once and for all.

Re: Sure those numbers are right?
And is it just me hoping that the typo is actually for 300,000 hours because, otherwise, it all seems a bit of a waste. 300,000 hours sounds like the sort of number where it becomes worth saving over 70,000 cases (i.e. several hours per case), but otherwise it all seems to be a bit pointless and expensive if it doesn't save AT LEAST that much. Hell, it would probably be quicker and cheaper to just let them dial in evidence by phone. It's not like the video-part of it adds anything to proceedings that the court can act on ("This witness is obviously lying because he looks a bit shifty", etc.) or is recorded for posterity, or broadcast to the world. Let them give evidence by phone (with suitable verification), save all the fancy-schmancy tech and get the same (or better) result.

Re: Sure those numbers are right?
Yeah, 300 man-hours. That's two weeks of work for a single officer. On that number of cases, you probably spend ten times that much by having police toilets 5m further away from their offices / entrance.

Treat the telecoms companies like an ISP spamming emails.
Too many nuisances, originating from a certain international telecoms company, and you list them in a public blacklist and UK telcos are required to block all calls from them until they clear up their act (i.e. until that international telco monitor their customers and at a MINIMUM demand identification details from large callers, limit call volumes, act on abuse complaints, etc.). Don't worry about the companies that are doing the calling - that's up to the foreign telco to act on and put out of business. After all, they are paying customers of that telco and subject to the same legal jurisdiction as the telco too. Just make the telcos block the entire source (if you don't know what cable that international call has come in on - well, you shouldn't be a bloody telco). When the international telcos can't call Britain, they will go through and expunge most spammers from their customers and/or enforce things like valid Caller-ID, etc. in order to get that facility back (or, at least, stop the spammers calling the UK so they don't lose access and carry on letting them spam everyone else, but who cares about that?). Additionally, LOG ALL FECKING CALLS. Don't tell me you can't, because you bill me for them, itemise them every month, and if I'm being harassed BT are very happy to intercept my entire telephone line, take all calls, trace the harasser (Caller-ID or not) and report them to police. I know, because years ago someone from a caller-ID-withheld number was spamming my phone line so that it was just going off all the time for hours. Eventually I had BT intercept the line, they traced it, called the BANK that was faxing me private banking details thinking my home phone was one of their branches (and I didn't have a fax machine to hand or I'd have received that data myself) and had their faxes set to mad auto-redial. 
Even the number traced wasn't an incoming phone line, but they had customer details on hand and phoned through to the bank's data protection department to get the problem sorted. If you log all the calls, and then ENFORCE Caller-ID (i.e. don't trust the caller to supply it), and then I get a dodgy phone call, then you can provide everyone with a number (e.g. the numeric equivalent of "SPAM" on the phone) and when I dial that you can have an automated system reel off the last X numbers that called, with times and dates, and let me press 1) to report unsolicited calls, 2) to report silent calls, 3) to report harassment, 4) to block that number forever. Just what is DIFFICULT about that for a telco? And, hell, why can't I just block ALL international calls except from country X (where my relatives live) at no cost? Because there is no business interest in the telcos allowing you to do so at the moment and that's the biggest problem. OfCom is toothless, telcos are uninterested because they get paid to ferry spam back and forth. Fix those problems and the actual, technical and political problem is very easy to solve internationally (for UK customers at least). We can nearly make porn-blocking--at-your-ISP-by-default law, but we can't make it so that telcos are obliged to provide number-blocking services for free? It's also like the Royal Mail spam-con. You can tell them you don't want to receive unaddressed spam but you still end up with some of it via them no matter what, because they are getting paid to deliver it. Personally, at home I don't answer the phone unless the Caller-ID comes up with someone I know (and I have an answering machine, so leave a message if it's that important, or my bank is calling or whatever). And my mobile phone, I google the numbers before answering and spam ones go into a "SPAM" contact that has a silent ringtone. BECAUSE THE DAMN TELCOS want me to pay more to let them do that for me. 
Is it any wonder that people are moving onto things like Skype and abandoning traditional telephony? At least with Skype spam amounts only to "Do you wish to add [email protected] to your contact list?" which is no worse than my MSN account which has about 10 blocked addresses and has been running every day since Hotmail was still plain HTML.

Re: Go right ahead...
That's like saying if you raise income tax everyone will emigrate. Only if you do something INCREDIBLY stupid and price the company out of the market. A £3bn company isn't going to disappear overnight even if it wanted to and certainly isn't going to stop selling licenses until it *costs* them money to supply them. You would literally be looking at something like 70-80% tax on profit before that happens (remember, it's a tax on PROFIT). Additionally, if Oracle has to leave the UK because it can't afford to operate... FABULOUS. There'll be a humungous rush for the database market in the UK that doesn't involve them, and lots of other companies will make SOME money (maybe not on the same scale as Oracle did) and we'll pay less for database licensing. Same for Symantec. Same for Xerox. Same for Dell. Same for just about any company. These companies are doing nothing against the law. Which, in effect, means the law is broken because being able to say you made zero profit in the UK because you paid YOURSELF in another country all the profit, is a blatant tax loophole. Just because it's legal, doesn't mean it's right, or that it should be legal tomorrow. You could enforce corporate tax on those profits, and tax them to 50%. It'll change the market, but there won't be a mass exodus. If anything, it'll only make things better - you and I will have more money or pay less tax (because the government doesn't need to make up that shortfall any more from our income tax), and those industries will have more competition.
Hell, it might even boost open-source take-up and force government procurement to use suppliers that are more suitable. But that's probably just a pipedream.

Quote from the article: "Which is important as (in common with most sticks) there's no Bluetooth support." Again proving that your TV is nothing but a display device. This is what makes me doubtful of any such magical "Apple TV" announcement that was supposed to be forthcoming and legendary and world-ending. Pretty much anything that can be done on a smart TV, can be done with a £30 box and some open-source programming stuck on the TV instead. I'd buy one, but the controls sound a pain and I already have an IR-extender and a Wii-sensor mounted atop my TV as the only things you can actually see apart from the TV and the remote control. Everything else is tucked away in a cupboard, but another HDMI run, plus USB / PSU, plus IR-out, plus some sort of wireless mouse / keyboard combo - it's too much mess. But it would be cool to have my Google Play account on the TV and playing things like Slay on it (just bought the Android version - about the tenth time I've bought that game, one way or another, since Windows 3.1). Make me one that forgoes the IR and supports bluetooth mice/keyboards/Wiimotes, say, and you have a deal. It can't be that hard. A bluetooth dongle is only £1 now and you could literally just hide that internally and run some of the native Bluetooth support software for HID devices and Wiimotes and you're done. Probably a lot easier and cheaper to make than all that IR junk, to be honest.

Re: Android and Linux
I think it's more cheap, ubiquitous computing being the driver here, not so much the OS (hell, I'm as Linux-mad as anyone, just ask around).
The power to hold a decent specced tablet, phone or computer in your hand and run it off batteries in a lightweight, cool, silent device that costs less than a full price game in some cases (you can get Android tablets for £50 if you look around) - that's pretty new in computing but you're already accustomed to it. I don't think it's all Linux, I think it's a combination of factors - good Linux support from a large multinational (Android - Google), good hardware standardisation (OpenGL, ARM, etc.), cheap LCD screens, large batteries, good battery life, ubiquity of wifi and bluetooth etc. It's driving a convergence. It's now possible to make something so small that it's hard to use. It's now possible to make something so powerful that nobody will notice next to something only half as powerful. It's now possible to put so much storage into a tablet that we don't need disks of any kind any more. It's now possible to connect to the Internet wirelessly as a routine operation and expect broadband speeds with low latency. All these things converge, and without any one of them the project stated would be dead in the water or have to make serious compromises. But now, literally anyone can license an ARM chip (or just buy one), slap it on a board, get to a Linux prompt, have OpenGL graphics of some use, and sell it as whatever they like (anything from the Raspberry Pi to the OpenPandora to the SteamBox to a media centre PC to a tablet computer to a "Surface" heap-of-junk to a smartphone). You didn't use to be able to do that. That said, it's certainly an interesting time but I think there's going to be a period of massive confusion about to hit. Soon, anything and everything will be running Android or Windows or something similar and we'll have a mish-mash of hardware that's all pretty similar and your smartphone does all the stuff your console can do and vice versa (except phone calls, but we have Skype now), and eventually it will all settle down.
In the meantime, it's almost pointless to buy any of them - I have a tablet PC for work that I've barely touched in favour of a "proper" laptop and a smartphone. Between the two I have any combination of raw power and portability that I want. And 99.9% of the junk on the Google Play store I find unsuitable for what I want it for and the stuff I do want is worth paying for - but only a few dollars. Sure, there's a bit of a renaissance at the moment as people discover how having a device that can "just connect" to the world can alter their lives, but once we're there, it's all just different flavours of the same thing. To be honest, I don't think most people know or care whether their phone is an iPhone, Windows Phone or Android phone beyond designer-labels and fashions. They all do pretty much the same thing (unless you're a developer, etc.) and have the same apps available for them. And we're literally only a handful of years away from disposable-cost computers now. I bought my 4-year-old daughter a tablet - if she breaks it, she breaks it. She might not get another until her birthday/Christmas/whatever but the fact is that it's almost a throwaway gift between family (my mum and dad have one, and neither of them know the first thing about computers and just play Angry Birds on it). Linux adds to it, but this sort of thing owes more to ARM, OpenGL, Bluetooth, Wifi, cheap LCD's and even half-decent batteries or even nVidia than Linux. That said, a world filled with cheap Linux devices isn't something to be dismissive of. Hopefully the "PC" will die a death and then MS will find out that it doesn't have enough of any other market to make a difference and hold us captive. Hell, even MS Office is slowly sliding out of existence and most people can get by perfectly fine with Google Apps and a copy of Firefox.

Re: Had enough of El Reg's moneygrabbing bull****, I QUIT!
I have, many a time, announced that I wouldn't touch a game again (either in an existing thread or elsewhere) and then, shockingly, never touched that game again. Worms Reloaded comes to mind - no matter what machine I tried, I couldn't get multiplayer to work at all, because they used a multiplayer system that - by their own admission - only two Steam games used: Theirs, and another which had lots of reported multiplayer problems. They had no intention of fixing it, and never did, and to this day I still haven't loaded it back up except when they claim to have "fixed" it and then I just prove to myself that, actually, it's still exactly the same. I haven't loaded it in over a year now, I don't think, and gave up looking for updates. And that also stops me from buying any of their other games too. If I've paid for a product, and I have problems, I will post my problems on your forums (don't expect me to sort things out entirely in private - there's a reason that I publicise the problems even for my favourite products - to get them acknowledged, fixed, and show people the difference between a company that cares and one that doesn't). That's what forums are there for. I will post them in the most relevant area possible and contain them in my own thread unless another is very similar and I can "piggyback" on their comments. I title them with a relevant header and search before I post. That lets people know the various problems and come straight to a relevant post when *they* do the same. I've had posts that were literally titled with the error message that only I and a group of others were getting and until I made the post, all the previous entries were just "It doesn't work", "I get an error", etc. with no details and no follow-up (or follow-up on the 32nd page which is *USELESS* for other people trying to find help). And when I announce that I am quitting a game (Age of Booty - Gamespy never worked properly.
Worms Reloaded - funny UDP multiplayer mode on Steam never worked properly - both probably because I have a software firewall and a hardware one, but that's no excuse when thousands of other games work online just fine. The new X-COM demo - I got into an infinite loop in the menu and could not get out of the game without a full process kill, nobody was interested so I didn't buy it. I could go on.), I quit it. Whereas when I announce that I have a problem and someone aids me in diagnosing it (Zombie Driver - I found a crash-bug related to having a joystick device installed that wasn't really a joystick device, but a keyboard-based emulation of one), I'm happy to post the solution and go on my merry way and sing the praises of the developers. That's how this works. Put out junk, and expect your complaints department to get overwhelmed. And in the online world, the complaints department is a publicly visible forum where everyone has their own grievance.

Bets taken on how long it will be before the "Azure cloud" (strange colour for a cloud, but WTH do I know?) powering the service gets overwhelmed and everyone jumps onto Google's version instead?

Re: Three questions:
"(1) Where are the millions of gallons required for Fracking coming from given that the UK regularly suffers water shortages;"
You mean the untreated, basic water sucked from any local water source (like the sea) and then (possibly) recaptured if necessary because a bit of dirt won't stop it being useful, as compared to the filtered, tested, sanitised, fluoridated, pressurised water you pay to come through your tap over a copper pipe from miles away? Only one ever has a shortage, and only for domestic supplies, and only temporary, no matter what you might believe. There's PLENTY of water around. It's just not all tappable for drinking water. If you don't believe me, fill your garden with water butts this winter with no tap on them.
I guarantee you will run out of water butts and space before you run out of water after just a month or so (one night of rain = enough to fill all those butts no matter how many you put out there). It's just what you do with it that matters, and what we have a shortage of is *TREATED* water that's safe to drink. We don't need to shove Evian down there.
"(2) Where are the millions of gallons of waste water generated by Fracking going to be dumped;"
It's water. It will drain away, or collect in underground voids, or more likely just find its way back to the ocean. It will be "contaminated" with rocks and dirt and a bit of gas, maybe. Nothing that it wouldn't contain anyway. Or you can collect it and reuse it if it's really a problem (very doubtful, though). And it's quite a long way down that you're firing this stuff so the chances of you doing anything to it (including collecting it, or noticing that the hole that was filled with natural gas is now filled with a lot less water) is virtually zero.
"(3) What is going to happen to the windfall revenue generated from Fracking licences?"
It'll go into the UK monetary system like everything else. But only if you reduce the taxes enough to encourage the industry to grow so that when there are 50 fracking plants, you can raise the tax and get money from them all to pay you back. Just because the government make 50p more this year doesn't mean you'll get 50p cheaper tax, or products, or anything else. To suggest so means you SERIOUSLY misunderstand both economics and politics. If that's the answer you're after, you should really just give up now - IT WILL NEVER HAPPEN, no matter how much you bold your text. You can wrap a political message (having to mention Thatcher, really? I was born the same year she got into power and that was a LONG time ago now) in all the hyperbole you want, you still come off as the local nutter here by just not thinking things through properly.

Re: I consider it a public duty
Phone scammers?
There, you are costing them. Same as phone spam - they have to *pay* to do it to you (in time and phone call unless you're stupid enough to live in a country that charges the RECEIVER of a call for its cost) and that's what costs. Online scammers? It's probably not even their computer, or their connection, that you'll be wasting. Same as online spam - they don't pay a penny per million emails, or millions of "visits" to their compromised site, so they don't care what happens. But the courts may take a dim view of you, say, DDoS'ing a hospital network because a single computer was compromised and you retaliated to the scam running from it (not saying they would do anything, but it's not black-and-white that they'd just ignore you either). My email logs have something like 10,000 compromised IP's trying to send me email (most of them home ISP connections and even the occasional business-with-a-proper-domain-and-authenticated-smtp). I don't even notice, the senders won't even notice (until their ISP cuts them off) and certainly the actual spammer doesn't care if I've refused his email or not or whether, like one example I have, the same IP tries 30,000 times and gets rejected before it even gets to SMTP HELO each time. He probably doesn't even know that's what happened.

"suspicion of running a ransomware scam that fooled victims into paying £100 fines."
It didn't "fool" them, they knew exactly what they were doing if they paid up. And they probably paid up because they were doing something wrong in the first place (or had been and thought that must be what it was about). I don't doubt that the odd clueless granny got caught up in it, but they would have got caught up in anything that asked them to pay money. But if someone puts up a sign from the Met Police on your computer saying you need to pay a £100 fine and you pay it, you haven't been "fooled" into doing it.
You might have been "fooled" that they were the police, or that they could levy fines like that, but you voluntarily paid it - without question, appeal, investigation, even paperwork. Hell, you don't even get a speeding ticket without some paperwork dropping through your door, verification of your driving license, a signed statement of guilt from yourself, information concerning your right and method to appeal, and a ton of other stuff too - and that's probably the one thing that *could* (law permitting) be automated down to the point where you just get an updated paper licence in the post with an endorsement written on it. Such scams should, rightly, be stopped and the people convicted. But I can't say I feel a single pang of sympathy for any victim that was of sound mind (and those not of sound mind? Shouldn't have access to a credit card that lets them pay fines like that without someone checking first).

Re: IRC is not secure
I don't think we're dealing with expert hackers here who thoroughly considered the link back to themselves. Tor and Truecrypt use wouldn't be enough to cover your tracks online on their own. Tor, in particular, can be inherently leaky unless you're paranoid about what packets you send out over it (accidentally leave your IM/Skype/Email running? Whoops, there's identification right there). These people were caught by unencrypted browser histories (by the sound of it, which suggests use of non-full-disk encryption, or encrypted dual-systems - TrueCrypt's "plausible deniability" - where activities spilled over into unencrypted parts, or the part covered by the password they *did* share, of the disks). And leaving proof-of-hosting just lying around on encrypted partitions? That's just amateur. Organising over IRC?
In comparison that's quite minor, but that's just asking for trouble too, because you leave full logs wherever you go - even accidentally - because a lot of people record IRC 24/7 so they can go to sleep and "catch up" on what happened later. Coordinating the attacks over IRC with random, unverified people (who were probably NOT using such methods to keep their identities hidden) seems a bit daft - especially if some of those people then moved onto social networks to pull in more people. And even using the same username - though that's hardly hard evidence, it suggests a complete lack of thought about the connections between you and your activities. You couldn't convict on that alone, but if it gets to the point that there's some decent suspicion you were involved and YOUR Internet name has always been X and Internet name X appears on connections associated with the suspicion, the hosting, the IRC admins, etc. then it's just another nail in your coffin. That said, not much would have saved them by that point anyway. I suspect that if they *didn't* hand over their TrueCrypt details, that's enough to convict them anyway (perverting the course of justice by failing to provide evidence - though there's a question of self-incrimination - or one of the newer laws would handle that quite nicely). So they weren't going to get away with it once it had come down to a handful of people of interest, and giving away your username, geographical location, and leaving a trail of history since your teenage years on those same details would give police an address in a matter of minutes (one phone call to XBox Live, I would think). Even if it was only as a suspect, you would be having a word with the boys in blue within moments and then explaining why you won't decrypt all those hard drives you have is going to be tricky to make stand up in court.
The story could well have been very different, but only if they actually knew enough about computers, and bothered to try to hide their identities properly. But even then, just finding evidence of connecting to the IRC channel and (then) a TrueCrypt volume that you refuse to decrypt is enough to throw you in jail. They were sloppy, and got caught, and probably thought they were immune right until the verdict. One of the reasons I would be *useless* in any sort of online activism: I often find programs connecting that I'd forgotten all about (even with software firewalls that warn me), have DNS settings that for years send DNS requests to my old ISP's server, etc. An example? Windows Vista and above talks to a server to establish the "Internet Connection" or not status of your connections. There are registry entries to tweak what server it talks to and what it expects to find in a named file on that server. I tweaked mine to point to my own private server (the theory being, if anyone is stupid enough to steal and then turn on my machine while it's on the Internet, I would capture their IP from the Apache logs), and then forgot about it for ages until I wondered why my icons never showed Internet connectivity. That's just the kind of stupid stuff that would catch me out before I even started.

Re: They can explain why a slinky does its thing...
Maybe not, but I can explain why you think it's obscenely expensive: You're not a businessman running a hotel for profit.

Re: Is this 'signal propoagation' stuff...
Not really, actually. The point you miss is that even if you have a 4-light-year-long device, it's still a physical device, made up of atoms. Those atoms have to impart force on each other and literally MOVE in order to propagate the force to the next atom. It's nothing weird or special, just the sheer length of the thing means it will take a while for the atoms to compress up to the point that they push the next atom, push the next atom, push the next atom, etc.
until you have a wavefront moving towards the other end as the action takes place. Think of the rod as, say, a sponge and you'll get the idea; no special physics here, you just have to push enough for the material to see the effect all the way along (and we don't tend to deal in single materials longer than, say, a couple of hundred meters on Earth, ever, so we never "see" this effect, but it's there even in the most exotic of materials, and when you're talking 3.8 × 10^16 meters, those effects would be a little more visible). You would have to have an entirely incompressible material, and absolute zero, for any weird physical effect to do with the speed of light (and we already have an implausibly long, straight, perfect rod that stays as such when subjected to the forces necessary to move a 4-light-year-long piece of material, so we're way out of the bounds of "practical" physics here). But the effect is inherent and visible with just simple Newtonian physical explanations too.

Here there is no "gravity signal"; it's just that the bottom of the spring is prevented from falling because it is suspended by the bit above it. The very top of the spring may be released, but it takes a fraction of a second to move any appreciable distance, and yet all the parts underneath are still suspended from the atom above them (which has gravity acting down, and an attractive force from the atom above it). There is no "speed-of-light" or sub-atomic effect acting here, though the principle is similar. It's literally just going to take a little while for the top of the spring to compress the spring and impart physical forces on the atoms below it that overcome their attractive forces to each other. Again, this happens in all materials; this one is just particularly pretty to watch because it takes long enough for us to see it, thanks to the springy nature of the thing.
(The speed-of-light effect, for instance, of the material "realising" that nothing was holding it up and thus the entire material was subject to gravity would travel a 30cm slinky in about a nanosecond, not 0.3 seconds - roughly one 300,000,000th of the time). This is a simple, Newtonian effect of having a solid material made of atoms imparting forces on each other, and nothing "fancy" at all. In fact, the whole "signal" talk is very dubious from a physical point of view and I think is being misinterpreted to make it sound more interesting. The "signal" is just physics taking effect as the atoms "catch up" with their neighbours.
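For what it's worth, the timescales above are easy to check with back-of-envelope arithmetic. A small Python sketch, assuming the textbook speed of light and a typical compression-wave (sound) speed in steel of about 5,000 m/s (a real slinky collapses far more slowly still, because the governing speed is the spring's own wave speed, not bulk sound in the metal):

```python
# Back-of-envelope check of the timescales discussed above.
C = 3.0e8               # speed of light, m/s
V_SOUND_STEEL = 5.0e3   # typical compression-wave speed in steel, m/s

def travel_time(length_m: float, speed: float) -> float:
    """Time for a disturbance to cross length_m at the given speed."""
    return length_m / speed

slinky = 0.3           # a 30 cm slinky
rod = 4 * 9.46e15      # 4 light-years in metres

print(travel_time(slinky, C))              # ~1e-9 s: light across the slinky
print(travel_time(slinky, V_SOUND_STEEL))  # ~6e-5 s: sound through solid steel
print(travel_time(rod, V_SOUND_STEEL))     # ~7.6e12 s: hundreds of thousands
                                           # of years for the 4-light-year rod
```

Even at the speed of sound in solid steel, the hypothetical rod takes on the order of 240,000 years to "notice" a push at one end, which is the whole point of the argument above.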
https://ljcreate.com/stem-career-exploration/human-biology-kit/
Human Biology Kit Typical Practical Tasks Include: - Response to Stimuli - The Effects of Exercise - Cells and the Brain LJ Create is committed to making you, our customer, successful. We offer ongoing support by phone, email, and webinar at no additional charge to you for the life of our hardware and software.
https://www.instructables.com/member/kenev/
Thank you for your reply. Since some time had passed without a reply, I searched around and found that out myself. Meanwhile, could you have a look at the question I asked a few days ago?

Finally, I managed to get this nixie thermometer working on a breadboard. Now I need to design a proper PCB to build it. A number of changes have been made: first, I used IN-14 nixie tubes. Second, since using an Arduino Uno is a huge waste of space, I used an ATmega328P as a stand-alone Arduino, and it works perfectly OK. Since the cathode poisoning prevention cycle runs just once at start-up, is there a way to include in the sketch a "reset" function once every 24 hours, so that this cycle runs on a routine basis? I am not an expert in Arduino programming and I would need your help. Thank you!

Hi Cledfo11, this is an excellent presentation for beginners. Personally, I'm a beginner in Arduino, but not in electronics. I have decided to build this project, or rather use this project as a basis for my own design (different nixie tubes, no seconds display). I have searched eBay for the DS1307 RTC, and I have found that the commonly available module is this: https://www.ebay.com/itm/Arduino-I2C-IIC-RTC-DS130... which also has an AT24C32 EEPROM chip on board. Can I use this instead of the one you have used, without any changes in the relevant Arduino sketch? Thank you, Evangelos

Very nice job! I have decided to try one myself. Since I'm a newbie in Arduino, I can't figure out where the wire of the temp sensor is connected on the Arduino board. Your sketch says: "OneWire ds(19); // on pin 19 (a 4.7K resistor is necessary)" but I don't see which one is pin 19. Could you help me with this?

Very nice project! It was asked before here, but I didn't see any answer, so I'm asking again: can this tracer draw curves for JFETs? Thank you
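On the 24-hour cleaning-cycle question above: rather than forcing a reset, the usual Arduino idiom is an elapsed-time check in loop() using millis(). The names below are mine, and this is only a Python sketch of the overflow-safe pattern; in the actual sketch you would use unsigned long arithmetic, which wraps the same way as the modulo here:

```python
DAY_MS = 24 * 60 * 60 * 1000   # 86,400,000 ms in a day
WRAP = 2 ** 32                 # millis() wraps to 0 after 2^32 ms (~49.7 days)

def due(now_ms: int, last_ms: int, interval_ms: int = DAY_MS) -> bool:
    """True once interval_ms has elapsed since last_ms, even if millis()
    has rolled over in between. The modular subtraction mirrors what
    unsigned-long subtraction does automatically on the AVR."""
    return (now_ms - last_ms) % WRAP >= interval_ms

# In the sketch's loop() this becomes, roughly:
#   if (millis() - lastClean >= DAY_MS) { runCleaningCycle(); lastClean = millis(); }
print(due(DAY_MS + 5, 0))   # True: a full day has passed
print(due(100, WRAP - 50))  # False: only 150 ms elapsed across the rollover
```

The key point is to compare elapsed time by subtraction rather than comparing absolute timestamps, so the ~49-day rollover never causes a missed or spurious cleaning cycle.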
http://thehotpepper.com/topic/26468-shigshwas-grow-log-2015-2016-hydro-grow-of-carolina-reapers/page-2
Shigshwa's Grow Log 2015-2016 (Hydro Grow of Carolina Reapers)

Posted 12 December 2011 - 05:00 PM
Oh finally, page 2..... That first page took forever to load. Too many high res pics.... Nothin' But Love

Posted 12 December 2011 - 08:48 PM
Yeah, my tent is in the room farthest away from my woodstove, which is my primary source of heat. Right now, it's 74 directly under the bulb, with no ventilation fans turned on. With my HPS in the cool tube, I can run ventilation at half speed and the temps sit at about 78.

Oh finally, page 2..... That first page took forever to load. Too many high res pics....

Sorry, I think I should do a huge resize on all these pics...
EDIT: Resized it all.
Edited by shigshwa, 12 December 2011 - 10:13 PM.

Posted 13 December 2011 - 02:06 PM
Question about lighting. I've got a 70W Metal Halide lamp that does great as far as light output, but doesn't generate enough heat to keep the temps in my tent in the mid 70's. I also have a 250W HPS that does keep the tent warm enough. I know that the HPS is far better suited to flower and fruit with, but will I be able to effectively veg with it, or am I just wasting electricity by using it before I need to?

I think the difference between MH and HPS is kind of like this: A few days after switching from halogen to CFLs, approx 3000K for halogen, 2500K and 5500K CFLs. Note the height, but lack of vegetation, same thing for the Butch T in the back. Also note the new shoots that have started popping after the switch. Around a week after switching to the CFLs, note the vast increase in shoots and leaves, same with the Butch T in the background.

Posted 15 December 2011 - 06:36 PM

Posted 15 December 2011 - 06:52 PM
Nice start there Shig, nice to see younger people growing legal things. LOL What are you using for fertilizer? Keep up the good work and you will have pods before you know it.

So true, ha ha ha...
For fertilizer, I used a weak solution of my GH hydro nutrients, plus Age Old Grow organic fertilizer, 12-6-6, along with some GH Diamond Nectar. It's a pretty potent mix, with the GH nutes to fill in some micronutrient gaps. I noticed some very slight burning on one of my plants, but I think it should be fine. I fertilize every 2 weeks. The fertilizer turns the leaves dark green for a few days after, so I think it could be just a bit too much nitrogen, but I'll wait and see if anything goes wrong in the long run.
Edited by shigshwa, 15 December 2011 - 06:53 PM.

Posted 15 December 2011 - 09:19 PM
Stop asking and just eat it!

Posted 18 December 2011 - 11:25 AM
Leaf size has increased, not much emphasis on height growth. The leaf curl is in all of my plants except for the Bhut, which has a wavy look to the leaves, but they seem to be growing fine. Calcium deficiency? I hear that leaves stay that way forever when affected. The CFLs have made them very bushy! A few more weeks and they will be loaded with shoots everywhere! I am thinking of applying some worm tea to the plants, to see if they grow any differently.
Edited by shigshwa, 18 December 2011 - 11:27 AM.

Posted 18 December 2011 - 01:34 PM

Posted 19 December 2011 - 01:51 AM
Huge leaves on your plants. Those are going to be some bushy monsters!

They are quite big! As big as my hand, almost. This was from a week back:
Edited by shigshwa, 19 December 2011 - 01:51 AM.

Posted 23 December 2011 - 12:47 AM
EDIT: Slow vertical growth, I mean. I don't know about foliage growth though; they seem to be increasing in size rather than quantity.
Edited by shigshwa, 23 December 2011 - 01:00 AM.

Posted 25 December 2011 - 03:37 PM
The buds. The stalk also has a fork, with more buds showing up on the new branches too.
Edited by shigshwa, 25 December 2011 - 06:20 PM.

Posted 01 January 2012 - 04:37 AM
Edited by shigshwa, 01 January 2012 - 06:05 PM.
Posted 05 January 2012 - 01:12 PM

Posted 05 January 2012 - 05:56 PM
Those 2 big black pots look interesting, will have to get a few of those.

Closeup of the flowers. The left one was pollinated days ago. Is that a pepper I see? The one to the right has just been pollinated.
Edited by AtomicCobraPeppers, 05 January 2012 - 05:57 PM.

Posted 05 January 2012 - 06:12 PM

Posted 05 January 2012 - 08:08 PM
A couple of things that may help with pollination indoors are a small fan to blow the pollen around and also live lady bugs; since you're not using an HPS or MH, the lady bugs won't get burned up by the heat of the light. Both are pretty cheap and may help!

I'm hoping to get pods out of them. I don't think pollination was good though, no signs of growth yet. I might have to wait till the next few flowers.
Edited by AtomicCobraPeppers, 05 January 2012 - 08:09 PM.

Posted 06 January 2012 - 12:17 AM
http://closeparenthesis.blogspot.com/2014/08/bike-riding-on-xian-city-wall.html
I've been back in Australia for five days now, after a week in Korea, but apparently the catalyst for updating is having a long day at uni. With each semester, each painful two-hour lecture, I slip further and further away from wanting a career in law. It makes me so tired. At this point I don't know if it's law or just university. I've only had one day of uni and I'm already exhausted. Exchange next year cannot come fast enough. One of my favourite days on the tour was definitely when we went bike riding on Xi'an City Wall. Since I started gymming, I've come to really enjoy pushing my endurance. Result: first in our group of nineteen to complete the 13km ride. Yeah!
http://cdn-www.airliners.net/forum/viewtopic.php?f=3&t=1375165&sid=2ce3ca1ca1f4311ff78d3ad2d1bf9bad&start=100
TK787 wrote: IIRC, 17/35 have been used at the same time. I tried to search but couldn't find those pictures. But I remember clearly that I saw photos: 17R being used for take-offs while planes landed on 17L at the same time. Maybe someone else can help. This was at least a few years back, when runway 5 wasn't used at the current rate.

THY748i wrote: I remember both 17/35 L and R being used for takeoffs at the same time. Doesn't really increase capacity though, since minimum separation needs to be respected due to wake turbulence.

I was on board multiple times for both parallel operations you are mentioning. The case TK787 mentioned is still used during strong southern wind and bad weather conditions. However, the parallel takeoffs mentioned by THY748i are no longer used (to my knowledge), after an incident in which both planes started rolling at the same time on 35L and 35R. This was also on the news at the time. It was said that the tower had ordered them to turn in separate directions just after take-off to avoid a collision. There were also jokes about who took off first (like a drag race). Someone even claimed that the B737 won against the A319 (if I remember the types correctly).
http://fixunix.com/embedded/5679-irq-priority-tuning-print.html
Is there any way to tune the Linux IRQ priority settings? I can see in /proc/interrupts that certain devices occupy a certain priority ranking, and furthermore that ide1 occupies a very high ranking. I would like to push it down the priority list (because it's just too busy all the time); is there any way I can do this? I am using kernel 2.6.16.
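For context: to my knowledge, mainline Linux exposes no per-IRQ priority knob to userspace; the usual tunable is CPU affinity via /proc/irq/<n>/smp_affinity (as root). What you can always do is measure which IRQ is actually the busiest. A rough sketch, where the parsing is a best-effort guess at the /proc/interrupts layout (which varies by architecture):

```python
# Sketch: rank IRQs by how busy they are, from /proc/interrupts content.
def busiest_irqs(text, top=5):
    """Return (irq_label, total_count) pairs, busiest first.
    The first line of /proc/interrupts is the CPU header; each following
    line is '<irq>:' then one count per CPU, then the handler names."""
    header, *rows = text.strip().splitlines()
    ncpus = len(header.split())
    totals = []
    for row in rows:
        fields = row.split()
        counts = [int(f) for f in fields[1:1 + ncpus] if f.isdigit()]
        totals.append((fields[0].rstrip(":"), sum(counts)))
    return sorted(totals, key=lambda p: p[1], reverse=True)[:top]

# On a live box you would pass open("/proc/interrupts").read() instead.
SAMPLE = (
    "            CPU0       CPU1\n"
    " 14:       1000       2000   IO-APIC-edge  ide1\n"
    " 15:         10          2   IO-APIC-edge  ide0\n"
)
print(busiest_irqs(SAMPLE))  # ide1 (IRQ 14) dominates in this sample
```

Running this periodically and diffing the totals shows interrupt *rates*, which is usually the more useful number when deciding what to move or re-balance.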
http://www.techist.com/forums/f76/building-budget-gaming-pc-aka-work-progress-206776/
Aside from the fact that this is my first post, here we go: Just to sum it up, I'm looking to build another computer for gaming and such. My current rig is:

AMD Athlon 64 3200+ @ 2.0GHz
ATI Radeon X800 XL
1 GB DDR 400 RAM
an old Epox mobo that I can't remember specifics about
a 17 inch plasma NEC monitor
an Antec 430W PSU
I can't even remember what my case name is...
120GB 7200RPM HDD

I'd like to build a work in progress, essentially. I don't have too much money, but I NEED a new system. I was looking at Maximum PC's guide to a 500 buck budget PC, but I'm not liking some of the picks. OS, monitor, and HDD aside, I'm looking to get a computer for about $800 that I can upgrade as I get more money from working. Just want some thoughts/comments about this build:

Intel Core i7 920 Nehalem core 2.66GHz $279.99 Newegg.com - Intel Core i7 920 Nehalem 2.66GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366 130W Quad-Core Processor - Processors - Desktops

MSI X58 Pro Intel X58 ATX $189.99 Newegg.com - MSI X58 Pro LGA 1366 Intel X58 ATX Intel Motherboard - Intel Motherboards

Antec EA 500W ATX12V v2.0 $69.99 Newegg.com - Antec earthwatts EA500 500W Continuous Power ATX12V v2.0 SLI Certified CrossFire Ready 80 PLUS Certified Active PFC Power Supply - Power Supplies

Cooler Master Centurion 590 $59.99 Newegg.com - COOLER MASTER Centurion 590 RC-590-KKN1-GP Black SECC / ABS ATX Mid Tower Computer Case - Computer Cases

Corsair 2GB (2 x 1GB sticks) DDR3 1333 $52.00 Newegg.com - CORSAIR 2GB (2 x 1GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Dual Channel Kit Desktop Memory - Desktop Memory

EVGA GeForce GTX 260 (recertified lol) $169.99 Newegg.com - Recertified: EVGA 896-P3-1265-RX GeForce GTX 260 Core 216 896MB 448-bit GDDR3 PCI Express 2.0 x16 HDCP Ready Video Card

That totals up, before taxes and shipping, to $821.95. Keep in mind I'm simply budgeting now so I can turn it into a beast within a few months. Any different suggestions/general comments are appreciated!
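As a quick sanity check, the listed prices do add up to the subtotal quoted in the post (part names abbreviated by me):

```python
# Verify the build's pre-tax, pre-shipping subtotal from the prices above.
parts = {
    "Core i7 920": 279.99,
    "MSI X58 Pro": 189.99,
    "Antec EA 500W": 69.99,
    "CM Centurion 590": 59.99,
    "Corsair 2GB DDR3": 52.00,
    "EVGA GTX 260": 169.99,
}
total = round(sum(parts.values()), 2)
print(total)  # 821.95, matching the figure in the post
```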
https://community.filemaker.com/thread/124597
Check out the Move/Resize Window script step.

Thanks a lot Vodka, that was what I needed. I made a new script in which I used your "Move/Resize Window" script step and entered the desired window position and size. I saved this script and set it to run in the menu: File -> File Options -> Open/Close dialog. Now every time I open my database it sets the perfect window size automatically. Thanks a lot!

You can also use the Adjust Window script step set to "Resize to Fit" to automatically find the height and width, so you don't have to type in the numbers.
http://www.rebol.org/ml-display-message.r?m=rmlNYMJ
[REBOL] Internationalization appeal From: sunandadh::aol::com at: 16-May-2002 12:32

I'm about to embark on a design and code exercise that will end up with a set of functions that will enable me to properly internationalise (or -ize) an application I'm likely to be writing. First and most important: if anyone has already done this, _please_ let me know... I've reinvented enough wheels in my time.

I'm appealing to the list for three things:

1. Some feedback on the requirements. Please take a look at the sample functions below and tell me what is missing so it can work for your locale.

2. A few people off-list (no point burdening the list with a techie design discussion) to help debug my design, implementation, and documentation. Please reply to me privately if you want to see the design docs (such as they are) or generally want to put some input into this.

3. When it's ready, I'd like some people to contribute the code to provide the functions for their language/country/locale.

What am I doing? I want functions that render dates, times, currency and country names appropriately for international and local audiences. The initial client is a web application for a listings service. Given the recent discussions about collating sequences, I'll aim for a design that can handle those too, though it's not something I currently need.
>> render-date/short "uk-eng" 8-apr-2001
== 08 Apr 2001          ;; uk english short rendering
>> render-date/long "us-eng" 8-apr-2001
== April, 8 2001        ;; US english long (full month name) rendering
>> render-date "iso" 8-apr-2001
== 2001-04-08           ;; ISO standard
>> render-date/long "es-esp" 8-apr-2001
== 8 de Abril 2001      ;; spain/spanish
>> render-date/long "us-spa" 8-apr-2001
== Abril 8, 2001        ;; US/spanish (I'm guessing the acceptable format)
>> render-date "travel-int" 8-apr-2001
== 08APR01              ;; international format used by travel trade
>> render-date/long/day "uk-eng" 8-apr-2001
== Sunday, 8 April 2001 ;; long format with weekday
>> render-date/long/day "uk" 8-apr-2001
whatever... returns a default format for the UK
>> render-date/long/day "spa" 8-apr-2001
whatever... returns a default format for spanish

Summary: render-date returns a display-formatted date in a format widely acceptable to people of that country/language. First parameter is usually xx-yyy where xx is the ISO 3166-1 2-alpha country code, and yyy is the ISO 639 3-alpha language code -- though variations are acceptable.
>> render-time "cz" 13:00:00
== 13:00            ;; 24-hour clock format hh:mm
>> render-time "us" 13:00
== 1.00PM           ;; 12hr clock, leading zero suppressed in hour
>> render-country-name "es"
== Espana           ;; renders local short name format
>> render-country-name/ISO-eng "es"
== Spain            ;; renders ISO-639 english short name format
>> render-country-name/ISO-fre "es"
== Espagne          ;; renders ISO-639 french short name format
>> render-money "us" $12.50
>> render-money "de" -$1002.50
== EUR1.002,50-     ;; German negative format -- trailing sign
>> render-money "uk" -$1002.50
== (GBP1,002.50)    ;; UK negative format -- in brackets
>> render-money/symbol "us" $1
>> render-money/words "uk-eng" $20.50
== "Twenty Pounds and 50 pence"
>> render-money "in" $123456.78
== INR1,23,456.78   ;; Indian rupees – note "nonstandard" location for
>> sf: Get-collate-function "at-deu" ;; returns collate/compare function for Austria/German
>> sf: Get-collate-function "hu-hun" ;; returns collate/compare function for Hungary/Hungarian

Without prejudicing the design, I suspect there is a two-stage look-up.... Example: "us-spa", "us-esp", "us-sp", "840-spa", "840-esp", and "840-sp" (all variants for US/Spanish) will point to a rule set for render-date. That rule-set contains the actual code (maybe as a single 'compose statement) to action the rendering. There are then issues about caching functions for higher performance.

I'm willing to do all the leg-work: design and write the supervising code, write the english documentation, and release it all under a BSD (flexible open-source) licence. But the more help I can get, the higher-quality the end result -- and I hope we can all benefit from that. Thanks for reading!!
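To make the proposed two-stage look-up concrete, here is a rough sketch in Python rather than REBOL; the locale keys, aliases, and formats are illustrative guesses of mine, not part of the actual design:

```python
# Two-stage lookup: normalise the country-language key via an alias table,
# then dispatch to a per-locale formatting rule.
import datetime

DATE_RULES = {
    "uk-eng": lambda d: d.strftime("%d %b %Y"),               # 08 Apr 2001
    "us-eng": lambda d: f"{d.strftime('%B')}, {d.day} {d.year}",  # April, 8 2001
    "iso":    lambda d: d.isoformat(),                        # 2001-04-08
}
# Stage one: many spellings of a locale map to one canonical rule-set key.
ALIASES = {"uk": "uk-eng", "826-eng": "uk-eng", "us": "us-eng", "840-eng": "us-eng"}

def render_date(locale: str, d: datetime.date) -> str:
    rule = DATE_RULES.get(locale) or DATE_RULES.get(ALIASES.get(locale, ""))
    if rule is None:
        raise KeyError(f"no date rule for locale {locale!r}")
    return rule(d)

print(render_date("uk", datetime.date(2001, 4, 8)))   # 08 Apr 2001
print(render_date("iso", datetime.date(2001, 4, 8)))  # 2001-04-08
```

Caching, as the email notes, then amounts to memoising the alias resolution so repeated renders skip stage one.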
https://serverfault.com/questions/586107/how-to-map-terminal-service-printer-port-to-something-like-comx
I'd like to state my environment configuration first, as follows:
- A Zebra label printer with a serial port (RS232) (the print script .bat uses the serial port only)
- A serial cable (RS232) suitable for the printer.
- An RS232-to-USB adapter cable.
- A PC with Ubuntu 12.04.3 x86 installed.
- A Hyper-V VM with Windows 7 SP1 installed.
What I've tested and succeeded at:
- On Ubuntu, type "sudo chmod a+rw /dev/ttyUSB0", then type "echo ~WC > /dev/ttyUSB0", and this will print a test page on the printer successfully.
- In the Ubuntu printer configuration GUI, add the USB serial port as a printer and print a test page from the GUI. Prints successfully.
- On Ubuntu, use freerdp (a well-known open source RDP client) to connect to the VM with the "/printer" argument; it will redirect the local printer to the Windows VM.
- In the Windows VM, I can see the redirected printer in "Devices and Printers", and I can print a test page successfully.
What I've tested but failed at:
- In the printer's "Port" settings, it uses "TS004" or "TS005"; the name is not fixed, it changes randomly...
- The users have many printer scripts (industry .bat) which use only "type xxx > COM1"; I cannot ask them to change their scripts to "TSxxx". What's more, "TSxxx" changes.
My questions:
- Can I make the terminal service port name "TS004" fixed?
- Can I map the terminal service port name to "COM1"? How would I do that? (I tested "net use COM1 TS004" but it does not work.)
Any hints are appreciated. Many thanks!
https://www.quantcast.com/blog/gdpr-one-year-on/
Over the last 18 months, one four-letter acronym has dominated both advertising and publishing headlines, and has pushed brands to consider data protection in a different way. Although hard to believe, the one-year anniversary of the General Data Protection Regulation is here. To mark the occasion we’ve taken a look at the numbers to see how website owners have adapted to compliance using Quantcast Choice, our free consent management platform based on the IAB Europe’s Transparency and Consent Framework. We are happy to report that the industry is listening and consumers’ online privacy is clearly a top priority for publishers around the world. Check out the stats here: A big thank you to all of our partners and customers who have provided feedback and input along this journey and contributed to the success of Quantcast Choice. Last week we launched Quantcast Choice Premium, an enhanced version of Quantcast Choice, for publishers and advertisers who need to manage consent across multiple websites. We’re excited to continue helping consumers signal their consent and manage their data privacy throughout their online experiences. Learn more about Quantcast Choice Premium here.
https://supportforums.cisco.com/discussion/10536006/unity-cx-21-user-msg-recipient-system-call-handler
When I try to delete my voice mailbox, I get the following message: "The user is a message recipient for a system call handler or system call handler template." There are many call handlers to look through. Is there another way to identify where this user is referenced within Unity Cx?

Yes - the download page has the details, a link to the ODBC drivers, and a link to the help. You run this from any Windows XP/2000/Vista/2003/2008 box that can connect to the Connection server. The help file for the tool has details on what you need to do to allow this (it's pretty easy).
http://nismorack.com/comic/backdoor-insecurity/
Seems like this is a week of computer related shenanigans. Last week some yahoo who is apparently the director of the FBI declared he wants to mandate backdoors for the government. This kind of thinking requires a special kind of stupid that is honestly quite baffling. You might be wondering what’s so bad about something like this. Well, ignoring the gigantic breach of privacy it entails and the access to your data the government should not have, let’s go down the list of reasons why this is a bad idea.

- Not Secure

The most obvious reason is that a backdoor is not secure. It is by design created to circumvent existing security protocols. Most notably, software can’t distinguish people. If you log in with the correct credentials you have free rein, regardless of ‘who’ you are.

- People aren’t secure

A backdoor is going to be known about by certain people. Large-scale projects such as those Apple, Google, and Microsoft produce are going to have at least ten (very conservative) people who know about the backdoor. Then government officials need to be cleared and know how to implement it. People by nature are inherently lazy, so they’ll do anything to avoid complicated methods of accessing the data. At some point enough people will know about the backdoor that it will get leaked somehow, or someone even gets extorted.

- Mandating it makes it known

Now obviously backdoors are a bad idea to start with. But mandating it means everyone knows said backdoor exists. There are enough malevolent people around with time to spare to find this backdoor. Heck, even well-intentioned people will try to find it just for the kick of being able to find it.

- It won’t catch criminals, except the really stupid ones

If I were a criminal mastermind, I’d shun the use of the internet for my communications to begin with, regardless of what sort of crazy encryption I can throw around. I’d use a sneakernet for any and all communications that might be sensitive.
And if I ‘had’ to use the internet, I’d go the old-fashioned codephrase route. While I might not be a criminal mastermind, it’s not that hard to think of these things, and if I can do it, so can they. This has been another long-winded post on why governments are terrible at anything computer related.
https://stackoverflow.com/questions/54175793/real-memory-usage-is-larger-than-xmxmaxdirectmemorysizemaxmetaspacesizenxss?noredirect=1
I'm setting up an HTTP gateway service using Spring Cloud Zuul. The code runs normally, but I want to do some memory optimization. Here is my start-up command:

java -XX:NativeMemoryTracking=detail -XX:MaxDirectMemorySize=64m -XX:-UseCompressedClassPointers -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms256m -Xmx256m -Xmn64m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -Xloggc:gateway_gc.log -jar gateway-service.jar --spring.config.location=bootstrap.yml

I expected the total memory allocated by the gateway process to be 64m + 128m + 256m + (256k * num_of_threads), which is approximately 540 MB. But when I checked with top, 983444 KB RES (960 MB) was allocated to the gateway process - 400 MB more than I expected. Then I checked the gateway process using jcmd VM.native_memory, which shows the memory committed by the JVM is 789720 KB (771 MB), still less than 960 MB. I've also checked the /proc/<pid>/smaps file and summed all the Rss fields, and the result is exactly 960 MB. But I cannot make sense of the mappings shown in the file.

1. How can I check the exact memory usage of the JVM?
2. Is there any way I can restrict the JVM memory allocation via JVM arguments?

-- I'm using HotSpot, java version "1.8.0_181".
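The arithmetic in the question can be made explicit. A small Python sketch reproducing the estimate - the thread count is my guess (200 here), and note that real RSS additionally includes the code cache, GC and JIT data, symbol tables, and glibc malloc arenas, which is typically where the "missing" memory lives:

```python
# Expected JVM footprint in MB from the flags quoted above.
def expected_footprint_mb(threads: int) -> float:
    heap = 256                       # -Xms256m / -Xmx256m
    metaspace = 128                  # -XX:MaxMetaspaceSize=128m
    direct = 64                      # -XX:MaxDirectMemorySize=64m
    stacks = threads * 256 / 1024    # -Xss256k per thread, converted to MB
    return heap + metaspace + direct + stacks

print(expected_footprint_mb(200))    # 498.0 MB - close to the ~540 MB estimate
```

Comparing this figure with NMT's committed total and with the smaps Rss sum, as the question does, is the right methodology; the gap between NMT and RSS is the part the flags cannot cap directly.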
http://french.stackexchange.com/questions/tagged/qu%c3%a9bec%20pronoms-personnels
Difference between “nous autres”, “vous autres”, “eux autres” and “nous”, “vous”, “eux”? I live in Quebec, and I hear all the time people saying this: “Nous autres” instead of “nous” I guess “Vous autres” instead of “vous” “Eux autres” instead of “eux” I have some questions: Is it ... Feb 26 '12 at 2:46
https://github.com/golang/go/issues/26280
cmd/go: unhelpful go cache error when running in container as non-root user #26280 Please answer these questions before submitting your issue. Thanks! What version of Go are you using (
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999783.49/warc/CC-MAIN-20190625011649-20190625033649-00008.warc.gz
CC-MAIN-2019-26
343
5
https://old.braintech.pl/pisak/pisak-about/
code
Thousands of people are living hell on Earth, called Locked-in State. Disease or accidents have left them unable to speak or type, turn on a radio or ask for help, unless accompanied by a trained caregiver. Salvation in terms of self-agency comes from assistive technologies. The communication system provided by Intel to Stephen Hawking proves that empathy is not the sole reason why society should help these people by providing appropriate technologies. PISAK (Polish Integrative System for Alternative Communication) was created during a 3-year project subsidized by the Polish National Centre for Research and Development, which included testing on groups of disabled users (project completes in March 2016). Written in Python and customizable via JSON/CSS configs, it works in GNU/Linux and provides email, blogs and multimedia for those who can control switch, sip-and-puff, head-movement or eyetracking interfaces (BCI planned in the near future), in a FOSS and highly customizable system.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00512.warc.gz
CC-MAIN-2019-26
985
2
http://blog.smaga.ch/your-change-to-get-to-know-game-theory/
code
There is a section in this blog that I haven’t been able to fill with new content for a long time: the part dedicated to Game Theory. The main reason is that I’m not using that field in my day-to-day job, so it’s a bit difficult to find good and accessible topics to discuss here. However, I have good news for those of you who are eager to discover more about this topic. A few months ago, I told you about Coursera in this post. For those of you who missed it, Coursera provides users with free high quality online classes from top-tier universities. By now, you probably guessed what this post is all about! As a matter of fact, starting January 7th 2013, there will be a Game Theory from Stanford University available to us. You can check out all the details and register for the class on the official page. Here is the abstract of the class: Popularized by movies such as “A Beautiful Mind”, game theory is the mathematical modeling of strategic interaction among rational (and irrational) agents. Beyond what we call ‘games’ in common language, such as chess, poker, soccer, etc., it includes the modeling of conflict among nations, political campaigns, competition among firms, and trading behavior in markets such as the NYSE. How could you begin to model eBay, Google keyword auctions, and peer to peer file-sharing networks, without accounting for the incentives of the people using them? The course will provide the basics: representing games and strategies, the extensive form (which computer scientists call game trees), Bayesian games (modeling things like auctions), repeated and stochastic games, and more. We’ll include a variety of examples including classic games and a few applications. 
For those of you who haven’t seen the movie “A Beautiful Mind” yet, I strongly encourage you to do so, and if you’re not yet convinced, here is the trailer: About the class itself, the description says it is based on a book I extensively used for my paper on Penalty Shots in Ice Hockey, and which is available on Amazon: Both authors, Kevin Leyton-Brown and Yoav Shoham, are two leading contributors to the game theory field, and they are both professors of the class. Hence, I can’t emphasize enough how good I think this class will be. I do not expect I will be able to do all the exercises because my spare time will be mainly dedicated to the CFA Level II (see the countdown at the top of the menu on the right), but if you wish to do so, don’t forget you can get a certificate of completion. That’s it for now! Please let me know what you think of the class! See you next time!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474440.42/warc/CC-MAIN-20240223153350-20240223183350-00175.warc.gz
CC-MAIN-2024-10
2,624
10
http://pmichaud.com/pipermail/pmwiki-users/2006-March/024529.html
code
haganfox at users.sourceforge.net Thu Mar 9 15:58:01 CST 2006 On 3/9/06, Patrick R. Michaud <pmichaud at pobox.com> wrote: > On Thu, Mar 09, 2006 at 12:34:46PM -0700, H. Fox wrote: > > Earlier in the week I asked Pm to make the first two examples on that > > page look the same, but he declined for now because it might break > > existing content. > Yes, but I'm beginning to dislike the inconsistency. My > suggestion is to start formatting things in the way that will > work best *for authors*, and then we'll adjust the [@...@] > markup to make it look right in skins. As an author, I'd like the second example to not include the newline. The reason is because I want to have an easy way to style preformatted blocks. Note that the indent doesn't matter. Try the various ?skin= links on this page to see the effect: If there's another easy (best for authors) way to style preformatted blocks, such as the proposed %code% wikistyle, then this isn't as > > I think the last one would be a reasonable default wikistyle so we could use > > %code% [@ > > Bla bla > > @] > > Is that any more author-friendly than > > -> [@ > > Yadda yadda > > @] > > though? In one way it is. An author can more easily add additional > > style attributes to the wikistyle version. (Otherwise it > > double-applies the wikistyle.) > We can always do "-> %code% [@ ... @]" -- it double-applies > the wikistyle if %code% has "apply=block" in it. I don't understand... Since the left margin is already there (in the %code% wikistyle definition), why would someone do "-> %code% [@ ... @]"? Why not just do "%code margin-left=70px%[@ ... @]" instead? > means to apply the style to any HTML block element on the line, > and since that includes both <div> and <pre> (as well as lists), > it makes it appear as though it's being applied twice. Define %code% > with apply=div or apply=pre and it should be applied only once.
If I try replacing this working wikistyle definition (the one I'm proposing for the core) %define=code block padding:4px margin-left:30px% with any of these %define=code pre padding:4px margin-left:30px% %define=code apply=pre padding:4px margin-left:30px% %define=code block apply=pre padding:4px margin-left:30px% it stops working properly.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527866.58/warc/CC-MAIN-20190419181127-20190419203127-00497.warc.gz
CC-MAIN-2019-18
2,268
46
https://www.scholars.northwestern.edu/en/publications/online-perfect-matching-and-mobile-computing
code
We present a natural online perfect matching problem motivated by problems in mobile computing. A total of n customers connect and disconnect sequentially, and each customer has an associated set of stations to which it may connect. Each station has a capacity limit. We allow the network to preemptively switch a customer between allowed stations to make room for a new arrival. We wish to minimize the total number of switches required to provide service to every customer. Equivalently, we wish to maintain a perfect matching between customers and stations and minimize the lengths of the augmenting paths. We measure performance by the worst-case ratio of the number of switches made to the minimum number required. When each customer can be connected to at most two stations: Some intuitive algorithms have lower bounds of (Formula Presented) and (Formula Presented). When the station capacities are 1, there is an upper bound of (Formula Presented). When customers do not disconnect and the station capacity is 1, we achieve a competitive ratio of (Formula Presented). There is a lower bound of (Formula Presented) when the station capacities are 2. We present optimal algorithms when the station capacity is arbitrary in special cases.
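The augmenting-path mechanism the abstract refers to can be illustrated with a short offline sketch. This is Kuhn's classic bipartite-matching algorithm, not the paper's online algorithm (whose bounds are elided above as "(Formula Presented)"); station capacities are 1 and, as in the abstract, each customer lists the stations it may connect to.

```python
def match_customers(allowed, num_stations):
    """Offline bipartite matching via augmenting paths (Kuhn's algorithm).

    allowed[i] is the list of stations customer i may connect to; each
    station has capacity 1. Returns a station -> customer assignment,
    or None if no perfect matching exists.
    """
    owner = [None] * num_stations  # station -> customer currently assigned

    def try_assign(cust, seen):
        for st in allowed[cust]:
            if st in seen:
                continue
            seen.add(st)
            # Take a free station, or evict the current owner and re-route it.
            if owner[st] is None or try_assign(owner[st], seen):
                owner[st] = cust
                return True
        return False

    for cust in range(len(allowed)):
        if not try_assign(cust, set()):
            return None  # this arrival cannot be served
    return owner

# Each customer may connect to at most two stations, as in the abstract.
print(match_customers([[0, 1], [0], [1, 2]], 3))  # → [1, 0, 2]
```

Re-routing the current owner of a station is exactly the "switch" the abstract counts; the online variants bound how long these augmenting paths can get.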
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00503.warc.gz
CC-MAIN-2021-10
1,243
1
https://supportforums.cisco.com/discussion/11005021/branch-office-router
code
We currently run CCM7 which I need to extend to a branch office for about 35 - 40 users. I'd like to provide SRST and also have a local breakout to the PSTN via a SIP trunk to a 3rd party. I also want to host voicemail locally on Unity Express. I'm having trouble spec'ing the router for the site and would appreciate some help. Can anyone confirm if the following makes sense: As you will probably guess I'm new to this so please be gentle !!
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427750.52/warc/CC-MAIN-20170727082427-20170727102427-00589.warc.gz
CC-MAIN-2017-30
438
2
https://ccm.net/forum/affich-945811-dvd-player-does-not-eject
code
Hello, I have a Sony VAIO laptop with a Core i5 processor. When I press the eject key on the keyboard, the DVD drive does not open at all! How am I supposed to insert a DVD then? I also tried opening it through a mouse click (i.e., by clicking on the eject option for the DVD drive in My Computer). Please suggest as many options as possible; I will try each one.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103331729.20/warc/CC-MAIN-20220627103810-20220627133810-00276.warc.gz
CC-MAIN-2022-27
362
2
https://www.questarter.com/q/what-causes-quot-a-disk-read-error-occurred-press-ctrl-alt-del-to-restart-quot-2_407087.html
code
I have a virtual machine containing Windows XP SP3. When I resized the VHD file (and the embedded partition), and tried booting, I got: A disk read error occurred Press Ctrl + Alt + Del to restart FixMBR doesn't help. ChkDsk doesn't help. The partition is indeed active. The partition starts at sector 63 (it also did so before the problem) of cylinder 1, head 1, and is marked as type 0x07 (NTFS). My host OS reads the VHD and the partition completely fine. I'm interested in knowing the cause rather than the fix. So "re-format the disk", "reinstall Windows", etc. aren't valid solutions. It's a virtual machine after all... I have nothing to lose, so I don't care about fixing it. I just want to know what's causing this problem, in case I run into it again on a physical machine (which I have done before). I made a sample VHD file illustrating (almost) the same problem which you can download here. To reproduce the problem: Download the file (it's highly compressed, be careful!), and try booting it in VirtualBox (or some other VM). Notice that you'll be told "Error loading operating system". (While the error is different, it's the same issue.) Now try mounting the VHD in Windows's Disk Management, and running BootSect.exe /NT60 X: /MBR, where X: is the drive letter of the mounted volume. (The location of the tool is likely to be C:\boot\bootsect.exe, but if it's not there on your system, then you'll need to find it somewhere else...) Now un-mount it, and try booting. The boot should now proceed correctly. (Although it won't find Hal.dll, at least you know it's working.) Now do the same thing as the last step, but use /NT52 instead of /NT60. You will now be greeted with the first error -- indicating that the Windows XP loader doesn't like the disk. So my question is: Why? The cause is either that the bootloader is calling the BIOS to read other disk sectors and that call is failing, or the bootloader does not consider the partition table valid.
If I am not mistaken, when Windows formats a disk, doesn't it create a small partition at the very end of it, or leave an unpartitioned area there? Did your resize utility recreate that? Sounds silly but this might be the reason why it is complaining. Weird, but I wouldn't put it past Windows (and possibly something in newer versions) to have such a quirk. One possibility I see is that after resizing the VHD, some incompatibility exists between what the BIOS thinks it knows about the hard disk and the disk itself. Another possibility is that the free space on the disk is too small or too fragmented for Windows to boot. Sometimes defragmenting a disk with this error makes it bootable. According to your added info, you have drastically reduced the VHD from 127GB to 1.5GB, so there might not be enough space for the pagefile. The resizer you have used may have been too aggressive, or it may have moved such unmovable Windows system files and therefore rendered the disk unbootable. For proper operation, Windows needs quite a lot of free space on the disk, with some of it contiguous (for at least the pagefile). I think that the correct procedure should have been from the 127GB VHD to turn off in XP the pagefile and system restore, clear the Recycle bin, defragment the disk with a defragmenter that can consolidate used space at the top (or free space at the bottom), then do the resize leaving free space that is several times the defined RAM size. With this procedure you might have ended up with a viable and bootable disk to start with. Sometimes these problems clear up with repeated reboot, maybe by Windows finally managing to rearrange the disk to its liking. File-systems are intricate, complex, finicky things. For example, an old copy of Partition Magic complains about some little numeric inconsistency or something about the partitions on one of my disks while Windows (XP) and Easeus Partition Master and such all chug along without issue. 
Even an old copy of Norton Disk Editor doesn’t complain about that disk. The fact is that there are a lot little things that can go wrong, or worse, that can “go not wrong” yet still be incorrect (i.e., an incorrect value could show no symptoms). What likely happened was that when you resized the VHD file, the tool that you used had a bug and did not (correctly?) update a field somewhere in the file-systems of the disk (partition table? boot-record? boot-sector? NTFS meta-files?) As others have pointed out, the error you are getting is usually a BIOS error as opposed to an OS error. What is likely happening is that the field that was not correctly updated was early in the disk (e.g., in the boot-record or partition table) so when the VM BIOS tries to read the disk, it is finding incorrect/inconsistent values and throwing an exception. You did not mention what kind of resizing you did. Did you shrink or expand it? My hypothesis is that you resized it down, and the BIOS is reading the partition table and trying to read beyond the disk (to non-existent sectors) because the size of the disk was not properly updated. As for the host, I would surmise that the reason that it can correctly read the disk is because the software that mounts the VHD file is somehow masking the error. After all, to the host, the “disk” is not a real disk and is actually just a ( .vhd) file, while to the guest, the disk is supposedly a real, physical disk. As such, the host can error-correct problems that the guest cannot. You can check to see if there is an updated version of the tool or use a boot disk like CloneZilla (or find a copy of PTEdit) to run in the VM and examine the “disk” from within the host. 
╒═════════════════════════════════════════════════════════════════════════════╕ │ Sectors: 3149824 Disk Signature: 0xEE3EEE3E │ ├─────────────────────────────────────────────────────────────────────────────┤ │Pos Idx Type/Name Size Boot Hide Start Sector Total Sectors DL Vol Label │ ├─── ─── ───────── ──── ──── ──── ────────────── ────────────── ── ───────────┤ │ 1 1 07-NTFS 1.5G Yes No 63 3,148,677 F: <None> │ ╘═════════════════════════════════════════════════════════════════════════════╛ 3,148,677 / 3,149,824 = 0.999636 = 1 - 0.000364 1.5G * 0.000364 = 0.000546G There only seem to be about 546 KB free, it might be possible that it can't write a file upon boot. (Adding another and somewhat different answer.) You say that booting with the NT52 boot-sector does not work, but that NT60 does. The difference may be in the boot process. NT52 is the XP boot that uses NTLDR, while NT60 is the Vista method that uses Bootmgr. NTLDR uses the boot.ini file to locate hard drives and partitions. It consults the computer's firmware (BIOS) to find out which hard drive is considered to be drive zero, then looks at the partition table on that drive to find out which partition is number one. Once it knows the location of the partition it can then find the Windows\system32 folder of the OS it has been asked to start. Bootmgr consults the BCD file in the Boot folder for the information it needs to find for the correct drive and partition, but it does not use the firmware to find the hard drive, or the partition table to find the partition. Instead it uses the unique Disk Signature in the MBR of a hard drive and the partition offset (starting sector) of a partition. Apparently while resizing the disk you have destroyed an element that is used for the NTLDR boot process but is not used by Bootmgr. This could be the BIOS, in the sense that the information it holds about the hard disk is no longer correct, such as the number of cylinders or sectors, or something in boot.ini itself. 
In addition, bootsect updates the volume boot code, not the master boot code. The master boot code is part of the master boot record (MBR) and there's only one per physical disk. The volume boot code is part of the volume boot record and there's one per volume. It may have been that with your tries at making the disk bootable, some incompatibility has crept in between the two that requires better knowledge than mine of the boot process to analyze. As Bootmgr does not use the BIOS or boot.ini, it apparently manages to use the MBR and to boot. I had a similar problem: XP with the same error. bootsect /nt52 wouldn't fix the problem. I cloned the drive out and cloned it back in, and presto, it boots. The lesson is that you have to be an expert in partitioning to pinpoint the problem; the rest of us have to resort to hacking around and whatnot. Someone on the internet said that these problems can be caused by a BIOS limit of 137 GB. That might be, but it actually has several causes. To solve this problem, Acronis was used. I made a copy of the boot disk, then restored it, and the system started up.
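The MBR fields discussed throughout this thread (active flag, partition type 0x07, start sector 63, total sectors, disk signature) can be read straight from the first 512 bytes of the image. A minimal sketch, assuming a classic MBR rather than GPT; it is an illustration, not one of the tools mentioned above:

```python
import struct

def parse_mbr(sector0: bytes):
    """Parse the disk signature and partition table from a 512-byte MBR."""
    assert len(sector0) == 512 and sector0[510:512] == b"\x55\xaa", "no MBR signature"
    disk_signature = struct.unpack_from("<I", sector0, 440)[0]
    parts = []
    for i in range(4):  # four 16-byte entries starting at offset 446
        off = 446 + 16 * i
        status = sector0[off]          # 0x80 = active/bootable
        ptype = sector0[off + 4]       # 0x07 = NTFS, as in the question
        lba_start = struct.unpack_from("<I", sector0, off + 8)[0]
        num_sectors = struct.unpack_from("<I", sector0, off + 12)[0]
        if ptype != 0:                 # type 0x00 marks an unused slot
            parts.append({
                "bootable": status == 0x80,
                "type": ptype,
                "start_sector": lba_start,
                "total_sectors": num_sectors,
            })
    return disk_signature, parts
```

Feeding it the first sector of the VHD's flat data should reproduce the PTEdit table above (signature 0xEE3EEE3E, one bootable NTFS partition starting at sector 63).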
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00465.warc.gz
CC-MAIN-2019-47
9,373
51
http://www.dreamteammoney.com/index.php?showtopic=198023
code
I am not admin Link : https://btc-arbs.com Arbitrage is one of the most profitable forms of investment; however, to the dismay of most investors, arbitrage opportunities are rare, especially in the efficient markets that characterize the global economic system. In 2009, that changed, with the introduction of Bitcoin. Unlike regular investment markets, the Bitcoin exchange market is inefficient, for a variety of reasons, thus creating near-daily arbitrage opportunities. Btc-Arbs allows you to take advantage of these arbitrage opportunities, without the need for sophisticated software, large balances on numerous exchanges, or any of the other complicated tactics that are typically required to profit from arbitrage opportunities as they arise. Arbitrage is the act of profiting from inefficiencies on any sort of exchange or economic situation. This is possible when the price of a particular commodity is one value in a particular marketplace and a different value in another marketplace. In the context of Bitcoin arbitrage, the situation occurs when Bitcoins are being sold at one price on one exchange, and a completely different price on another exchange. This allows you to purchase Bitcoins on the exchange with the lower price, re-sell those same coins on another exchange, and profit through the difference. Accepts: PM, STP, EgoPay, Bitcoin.
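The buy-low/sell-high mechanics described above reduce to simple arithmetic once exchange fees are included. A sketch with made-up prices and fee rates (illustrative assumptions, not Btc-Arbs' actual terms):

```python
def arbitrage_profit(amount_btc, buy_price, sell_price,
                     buy_fee=0.002, sell_fee=0.002):
    """Net profit of buying on a cheap exchange and selling on a dear one.

    Fee rates are hypothetical per-leg trading fees; real exchanges also
    charge withdrawal fees and carry transfer/price risk, which is why the
    spread must exceed total costs for the trade to pay.
    """
    cost = amount_btc * buy_price * (1 + buy_fee)        # pay price plus fee
    proceeds = amount_btc * sell_price * (1 - sell_fee)  # receive price minus fee
    return proceeds - cost

# 1 BTC bought at $600, sold at $615, 0.2% fee on each leg:
print(round(arbitrage_profit(1.0, 600.0, 615.0), 2))  # → 12.57
```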
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171664.76/warc/CC-MAIN-20170219104611-00143-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,356
4
https://mailman.open-bio.org/pipermail/bioperl-l/2009-July/081392.html
code
[Bioperl-l] DB2 driver for BioPerl florian.mittag at uni-tuebingen.de Mon Jul 6 16:08:18 UTC 2009 On Saturday 04 July 2009 12:39, Hilmar Lapp wrote: > On Jul 2, 2009, at 11:28 AM, Florian Mittag wrote: > > We were able to adapt the "load_ncbi_taxonomy.pl" script from BioSQL > > to fill > > our DB2 database with taxonomy data > Would you mind posting to the BioSQL list which changes you had to > make to make the script work with DB2? No problem, I will post the diff sometime this week, since there are a few changes not necessary anymore, e.g., the new DB2 Express-C version 9.7 supports the "TRUNCATE TABLE" command, which it previously didn't. > More generally, is there some kind of comprehensive documentation on > what is different in DB2 from standard SQL92? The > load_ncbi_taxonomy.pl script should in principle work with any SQL92- > compliant RDBMS ... Have you found that not to be the case (which > would be a bug), or is DB2 in some ways not SQL92-compliant? I don't know, I haven't looked for this kind of documentation, but the two things that annoyed me most were: 1) DB2 doesn't support UNIQUE on columns that allow for NULL values. Solution: create triggers that ensure UNIQUEness and create an INDEX. 2) Columns of type CLOB cannot be compared through "=", but only through "LIKE", which leads to problems with BioJava's Hibernate queries. Solution: currently none. I want to discuss these problems in more detail on the other mailing lists, since they do not really belong here. > > , but loading the gene ontology with BioPerl's "load_ontology.pl" is > > somewhat harder. > The ontology as well as the sequence loader are really just front-ends > to the Bioperl-db object-relational mappers (ORMs). So I would start > there, rather than looking at errors the script does or does not throw > (you don't want to run all combinations of command line parameters > that would exercise each and every feature of the script).
> In order to create DB2 driver support in Bioperl-db, you need to add > two things. First, you need to create a module Bio/DB/DBI/DB2.pm that > overrides the methods from base.pm according to DB2. The fact that you > didn't report any errors about that module not having been found > suggests that you've done this already. > The second step is as you say to create a package Bio/DB/BioSQL/DB2 > with at least BasePersistenceAdaptorDriver.pm as module in it, and > starting with a copy of the existing ones is indeed the best way to > get started on this. Unless you also created the DB2 database DDL > scripts from the Oracle ones, I wouldn't necessarily copy from Oracle > though, but maybe rather from Pg. And rather than looking for errors > of one of the scripts, I'd just go systematically through the files > and make sure the SQL in there is DB2 compliant. Okay, I'll do that, but that will take some time and I'll probably turn to this mailings for further assistance with more specific questions. > > [...] > > It first ran a few minutes processing the file and then died after the > > following SQL-command was prepared and executed: > > "SELECT term.term_id, term.identifier, term.name, term.definition, > > term.is_obsolete, NULL, term.ontology_id FROM term WHERE identifier > > = ?" > Could you post the full error message? It is rather difficult to > diagnose what's going on w/o the error message and stack trace. Right now, unfortunately not, because this error message won't appear again. I'm not sure is this is because of the database now containing data or because of some other changes I've made, but I will see this in the process of rewriting the DDL scripts. > I'd be surprised BTW if DB2 were indeed offended by the NULL in the > above statement - I'm pretty sure that "SELECT NULL FROM > sometable" (or "SELECT 1 FROM sometable") is standard SQL. Are you > sure that if you execute such a statement at a SQL prompt it results > in an error? 
> Since I can hardly believe that DB2 doesn't support selecting > constants (NULL is as much a constant as 1 is), maybe what it wants > though is aliasing the column. So if > SELECT NULL FROM bioentry; > yields an error, does > SELECT NULL AS colAlias FROM bioentry; > work fine? Well, it is like this with version 9.5 of DB2 Express-C: SELECT NULL FROM bioentry; SQL0206N "NULL" is not valid in the context where it is used. But if I do: SELECT cast(NULL AS VARCHAR(255)) FROM bioentry; it returns the correct result without error. The new version 9.7 claims to have changed this behavior, so that the first query would run fine, but I didn't have time to test the new version, yet. > > I don't know if the "NULL" column is supposed to be there > It is. The code in BaseDriver.pm that you were looking at should not > need to be modified. (Rather, DB2/BasePersistenceAdaptorDriver.pm is > supposed to override any method that needs to be adapted to DB2.) The > way the ORM works is by trying to map all properties of a BioPerl > object that are persistent to a column of a table in the database. If > it can't map a property (for whatever reason) its value is simply > always undef (or NULL in SQL). I.e., NULL columns are the placeholder > for a column that failed to be mapped to a property. You can't simply > remove them or all subsequent columns are shifted. It ran fine without the NULL column, but that isn't necessarily a sign of correctness. My problem was that (as stated above) the old version of DB2 requires you to cast the NULL value to a data type, which I wasn't able to determine from the code. With the new version, it should work, so I'll have to rerun my tests again and see if the problem is still there. I will keep you updated on the Perl issues and hope to have some useful results by the end of the week.
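The first pain point above, UNIQUE on nullable columns, is easy to demonstrate against an engine with the more common behavior. A Python/sqlite3 sketch (SQLite stands in for the SQL-standard-leaning side of the comparison; it is not DB2, and the table/identifier names are just illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE term (identifier TEXT UNIQUE)")

# Two NULL identifiers coexist under a UNIQUE constraint, because
# NULL = NULL evaluates to unknown -- the behavior the post says
# older DB2 lacks (hence the trigger workaround).
con.execute("INSERT INTO term VALUES (NULL)")
con.execute("INSERT INTO term VALUES (NULL)")

# A genuine duplicate is still rejected.
dup_rejected = False
con.execute("INSERT INTO term VALUES ('GO:0008150')")
try:
    con.execute("INSERT INTO term VALUES ('GO:0008150')")
except sqlite3.IntegrityError:
    dup_rejected = True

nulls = con.execute(
    "SELECT COUNT(*) FROM term WHERE identifier IS NULL").fetchone()[0]
print(dup_rejected, nulls)  # → True 2
```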
And I hope you excuse me for posting things here that are hardly related to BioPerl, but some problems are a complex entanglement of issues with BioSQL, BioPerl and BioJava, so it's hard to decide where to post them ;-)
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00287.warc.gz
CC-MAIN-2021-25
6,010
102
https://www.shapeways.com/product/9BV7UZSSP/z-76-lr-rend-middle-tp3-plus-bg-bso-1?optionId=59957386&li=more-from-shop
code
1/76 scale (OO). Middle floor to go above a shop unit; flat/cement-rendered wall, windows with open shutters, drainpipe on both sides. This has the equivalent of eight extra rows of brick compared to the short version which goes above the shop unit. If used above a house section, the first floor will match up with the first floor above the shop.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824180.12/warc/CC-MAIN-20181212225044-20181213010544-00494.warc.gz
CC-MAIN-2018-51
319
5
https://blog.thedaysman.com/category/sabbatical/page/2/
code
I have two book ideas, and I expect to get to them eventually. This would be the time if the pandemic continues to rage and the vaccine fails and the world ends. Any “plan” to work in a developing country has unexpected challenges and complications, more so in a worldwide pandemic. In a pandemic year the answers are shifting beneath my feet, even as we make (uncertain) plans for how to use this time for the greatest benefit. It’s also 2020, about 8 months since I had Covid-19. And the world is closed, mostly. At least the corner where I want to go: Nepal.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519843.24/warc/CC-MAIN-20210119232006-20210120022006-00599.warc.gz
CC-MAIN-2021-04
567
4
https://vivo.library.tamu.edu/vivo/display/n314710SE
code
TOWARD PRECISION BLACK HOLE MASSES WITH ALMA: NGC 1332 AS A CASE STUDY IN MOLECULAR DISK DYNAMICS © 2016. The American Astronomical Society. All rights reserved. We present first results from a program of Atacama Large Millimeter/submillimeter Array (ALMA) CO(2-1) observations of circumnuclear gas disks in early-type galaxies. The program was designed with the goal of detecting gas within the gravitational sphere of influence of the central black holes (BHs). In NGC 1332, the 0.3-resolution ALMA data reveal CO emission from the highly inclined () circumnuclear disk, spatially coincident with the dust disk seen in Hubble Space Telescope images. The disk exhibits a central upturn in maximum line-of-sight velocity, reaching 500 km s^-1 relative to the systemic velocity, consistent with the expected signature of rapid rotation around a supermassive BH. Rotational broadening and beam smearing produce complex and asymmetric line profiles near the disk center. We constructed dynamical models for the rotating disk and fitted the modeled CO line profiles directly to the ALMA data cube. Degeneracy between rotation and turbulent velocity dispersion in the inner disk precludes the derivation of strong constraints on the BH mass, but model fits allowing for a plausible range in the magnitude of the turbulent dispersion imply a central mass in the range of (4–8) × 10^8 solar masses. We argue that gas-kinematic observations resolving the BH's projected radius of influence along the disk's minor axis will have the capability to yield BH mass measurements that are largely insensitive to systematic uncertainties in turbulence or in the stellar mass profile. For highly inclined disks, this is a much more stringent requirement than the usual sphere-of-influence criterion.
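As a rough plausibility check of the quoted mass range, the circular-orbit estimate M ≈ v²r/G can be evaluated. The 500 km/s line-of-sight velocity is from the abstract; the ~10 pc radius is purely an illustrative assumption, not a value measured in the paper:

```python
# Order-of-magnitude dynamical mass implied by rapid rotation: M ~ v^2 r / G.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

v = 500e3          # line-of-sight velocity from the abstract, m/s
r = 10 * PC        # ASSUMED radius of the central velocity upturn

mass_msun = v**2 * r / G / M_SUN
print(f"enclosed mass ~ {mass_msun:.1e} M_sun")  # → ~5.8e+08 M_sun
```

With that assumed radius the estimate lands inside the (4–8) × 10^8 range the dynamical models give, which is only meant to show the scales are mutually consistent.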
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644867.89/warc/CC-MAIN-20230529141542-20230529171542-00317.warc.gz
CC-MAIN-2023-23
1,789
3
http://www.receptional.com/blog/sql-injection-attacks-what-to-do-if-your-site-is-vulnerable
code
Database injection attacks are a form of hacking that is becoming increasingly common, and is being conducted on an extremely large scale. Tens of thousands of sites, including those operated by governments and universities, are currently exploited in this way. What is SQL/database injection? Many sites now use databases to store content, since this provides huge benefits: from easier management to the ability to interact with users, for instance in a shopping cart process. To understand SQL injection, you need to have a basic idea of how database-driven sites work. A simple shopping cart has a database of products, images, descriptions and so on. Each product has its own row within the database, and a product is identified by a numeric identifier (an ID). A basic URL to retrieve product number 1 would be: www.example.com/view_product.asp?id=1. This URL says ‘get the product with an ID of 1, and then display all the information like the product name and image’. Database injection attacks work by modifying the code used to query the database, so instead of just ‘get the product with an ID of 1’ it will also alter the product details, to install viruses or make other modifications to a site’s database – including deleting all of the contents. These attacks are currently automated, and scan hundreds of thousands of sites daily to check for vulnerability, attacking sites that have not taken appropriate security measures. What problems can result from being vulnerable? Once an attacker has control of a site’s database, the repercussions are extremely serious, and common problems include: - The whole website can be rendered unusable, and/or infect all visitors with viruses - Loss of all data from the site - Dropping out of all search engine listings - Listings on various hacker sites and the associated problems How do I check if my site is vulnerable?
The ideal method is to consult with your web development provider, and ensure that they are aware of the severity of this issue and have taken appropriate steps to prevent it from happening. However, there is a simple test you can conduct yourself which in most cases will reveal if a site is likely to be vulnerable to database injection: Find a page containing a database (preferably ‘ID’) parameter (e.g. www.example.com/view_product.asp?id=123). Append an apostrophe to the end of the parameter and view the URL (e.g. www.example.com/view_product.asp?id=123′). The apostrophe prematurely ends the database query. If you receive an SQL error of any kind, your site may be vulnerable. If this is the case with your site, you should contact your developer as soon as possible to ensure your site is secured against attack. How do I fix SQL injection vulnerabilities? The golden rule of secure website programming is: don’t trust user input. Any data that originates from a user (including parameters in URLs) must be considered insecure and appropriately sanitised before being used for functions like database queries. Sanitising ID variables is extremely easy, since they normally only contain numbers. Any data supplied for an ID parameter that contains anything other than numbers should return an error to the user. The same principle applies to all parameters containing user-supplied data: they should only be accepted if they match the expected syntax for that parameter. Unfortunately, because this type of hacking has historically been uncommon, some developers have neglected the basic requirements of web application security. Due to the current scale of SQL injection attacks, it’s literally only a matter of time before vulnerable websites are compromised. Chief Technical Officer
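The sanitise-then-query rule described above can be sketched in a few lines. This uses Python's sqlite3 and made-up product data rather than the article's ASP shopping-cart example; the contrast between string concatenation and a bound parameter is the point:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(1, "Widget"), (2, "Gadget")])

def view_product_unsafe(user_id):
    # VULNERABLE: the URL parameter is concatenated straight into the
    # query, exactly the pattern the article warns about.
    return con.execute(
        "SELECT name FROM products WHERE id = " + user_id).fetchall()

def view_product_safe(user_id):
    # Sanitise first (an ID parameter should be purely numeric),
    # then pass the value as a bound parameter, never as SQL text.
    if not user_id.isdigit():
        raise ValueError("invalid id parameter")
    return con.execute(
        "SELECT name FROM products WHERE id = ?", (int(user_id),)).fetchall()

print(view_product_safe("1"))            # → [('Widget',)]
print(view_product_unsafe("1 OR 1=1"))   # every row leaks: [('Widget',), ('Gadget',)]
```

An injected value like `1 OR 1=1` dumps the whole table through the unsafe version, while the safe version rejects it before any SQL runs.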
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152097.59/warc/CC-MAIN-20160205193912-00268-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
3,698
21
https://ispapp.co/blog/github-issues-only-project-management
code
GitHub is an amazing tool! This great blog post from Tom MacWright discusses the simplicity of using GitHub Issues instead of more complicated project management tools. I really like this idea. GitHub Issues are like a glorified shared checklist. You can categorize them with tags, discuss each item, and then mark them complete by closing them. Also, compared to other project management tools, GitHub is very affordable!
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00162.warc.gz
CC-MAIN-2021-49
422
2
https://www.hypertrack.com/blog/2016/06/20/why-and-how-we-built-a-live-demo-of-hypertrack-dashboard/
code
Back in April, in this blog post, we explained the design philosophy behind the HyperTrack dashboard. Over the last two months we got the dashboard ready for our early users. Today we are excited to make the live demo available to all our users. Visit dashboard.hypertrack.io/demo to see what we have built.

Why is it important to have a live demo?
As we started talking to users interested in trying HyperTrack, the first question that came up was “Can I play around with the dashboard? Can I touch and feel the dashboard before I take the effort to integrate?”. The idea of exposing a live dashboard to our prospective users seemed risky. We were going to showcase a product that was not yet fully baked. But as we reflected back on the kind of company and product we had set out to build, we realized that having a demo dashboard is essential. Users expect that we would always say good things about our own product. Letting users experience it through a live demo helps build trust that the product delivers on its promise. Once we decided to have a live demo of the dashboard, our next challenge was to produce generous amounts of data that reflects real-life drivers, end customers, tasks and trips.

Generating profiles of drivers and customers
Two key entities of our dashboard are drivers and end customers. A driver is the person you want to track, and a customer is the person expecting the driver to show up for a pickup or delivery. Generating hundreds of driver and customer profiles with names, photos, etc. could be demanding, not to mention boring. Our team explored various options but eventually found a free open-source API for generating random user data – randomuser.me. It is a simple-to-use API that outputs JSON user data. It provides a bunch of toggles to configure the output.
For example, since we just needed the profile picture and name in the JSON, this is the call we made: https://randomuser.me/api/?inc=name,picture

Generating tasks and trips
We recently open sourced the HyperTrack python helper library. The library made the job of creating drivers, destinations, tasks, etc. super simple. Within a few minutes we were able to create ~20 drivers, ~30 destinations and ~150 tasks. But the data we generated modeled an ideal world with all trips on time. Since we had to model the real world in the demo, we randomly made a few of our drivers late.

How to play with the demo dashboard
Here are snapshots of what you would see when you log in to the demo dashboard.
- Live Dashboard: The home screen of the dashboard is a live summary of your business right now. Besides the day’s aggregates like number of orders delivered, pending or running late, you would see a list of drivers that are currently en route on a live trip. Due to proactive alerts about orders running late, this becomes the default screen for operation managers to stare at.
- Live Trips: Clicking on live trips would show you details of all the tasks assigned to that trip, customer destinations for those tasks, statuses of individual tasks, and so on. If you see a driver running late, this trip detail page would give the most updated ETAs of the pending tasks.
- Drivers & Fleets: This section of the dashboard helps analyze on-time performance and utilization of drivers and fleets. You can analyze how a driver is doing compared to his fleet. Metrics like these help your operations team answer questions like which of your drivers make deliveries on time most often, or which drivers are underutilized.
- Past Trips: All historical trips can be audited to see what exactly happened on a trip. You can replay a full trip as many times as you like, rewind a driver’s trip, and look at his exact location at specific times.
This would help you find any blind spots in your operations that were missed by automated alert systems.
- Customers: This section of the dashboard helps analyze your pickup and delivery performance with respect to customers and neighborhoods. You can analyze service levels for the most loyal customers or for new customers. You can also compare your on-time performance by neighborhoods or see which ones are underserved.
- Analytics: This section of the dashboard analyzes your operations over days, weeks and months. You can see heatmaps of your busiest neighborhoods and performance of your business across these neighborhoods.

With the demo dashboard we also hope to give our users a sample of how a dashboard would look once you leverage various advanced features of our services, like adding multiple tasks to a trip, assigning drivers to a fleet, creating hubs, and so on. We hope that you will love the demo of the dashboard and will give HyperTrack a shot. If you like what you see, sign up for HyperTrack here.

Subscribe to HyperTrack Blog: Imagine. Build. Repeat. Get the latest posts delivered right to your inbox
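As an illustration of the profile-generation step, here is a hedged sketch of consuming a randomuser.me response of the kind mentioned above (assuming the intended query is `inc=name,picture`; the sample payload below is invented, and the field names follow the response shape from memory — treat them as an assumption, not the API's guaranteed contract):

```python
import json

# A trimmed example of what https://randomuser.me/api/?inc=name,picture
# returns: a "results" array whose entries carry "name" and "picture" objects.
sample = json.loads("""
{
  "results": [
    {
      "name": {"title": "Ms", "first": "Jane", "last": "Doe"},
      "picture": {"large": "https://example.com/jane.jpg",
                  "medium": "https://example.com/jane_med.jpg",
                  "thumbnail": "https://example.com/jane_thumb.jpg"}
    }
  ]
}
""")

def driver_profile(payload):
    """Turn one randomuser.me result into a (full name, photo URL) pair."""
    person = payload["results"][0]
    full_name = "{} {}".format(person["name"]["first"], person["name"]["last"])
    return full_name, person["picture"]["large"]

name, photo = driver_profile(sample)
print(name, photo)   # Jane Doe https://example.com/jane.jpg
```

In a real script the payload would come from an HTTP GET against the API rather than an embedded string.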
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578616424.69/warc/CC-MAIN-20190423234808-20190424020808-00161.warc.gz
CC-MAIN-2019-18
4,827
21
https://stackoverflow.com/questions/3620530/global-asax-error-handling-quirks-of-server-transfer
code
In my global.asax, I am checking for a 404, and transferring to the 404 error page as per the below:

If HTTPExceptionInstance.GetHttpCode = 404 Then
    Server.ClearError()
    Response.TrySkipIisCustomErrors = True
    Response.Status = "404 Not Found"
    Server.Transfer("~/Invalid-Page.aspx")
End If

The problem is, my Invalid-Page.aspx uses some session code (Session("somevariable")), which throws an exception "Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive." because I am using a Server.Transfer (I believe this is a known issue?). If I use a Response.Redirect, there is no problem. However, this would mean that the header of the error page is a 200, not a 404. What would be the best workaround for this?
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999200.89/warc/CC-MAIN-20190620085246-20190620111246-00032.warc.gz
CC-MAIN-2019-26
779
5
https://wiki.openmrs.org/pages/diffpagesbyversion.action?pageId=3346195&selectedPageVersions=7&selectedPageVersions=8
code
This module is no longer maintained. Please use the Metadata Sharing Module instead. The FormImportExport module lets you export an OpenMRS form (as a ZIP archive) from one server and import it into another. The only restriction is that you must have identical concept dictionaries on both servers. (Specifically, every concept referenced by the form you export must exist with the same conceptId on the server you're going to import it into.) See the module repository for the latest downloads.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488870.33/warc/CC-MAIN-20191206145958-20191206173958-00084.warc.gz
CC-MAIN-2019-51
515
4
https://www.nccourts.gov/contact-the-webmaster
code
If you were attempting to pay or find your citation but your citation is not listed, please check your citation number and make sure you entered the citation number correctly. If you don't know or can't find your citation number, you may search citations by name. There are several reasons your citation might not appear even if you entered it correctly. Your citation may:
- Require a court appearance
- Not have been entered into the system yet (can take 24-48 hours from the time your citation was issued)
- Not be payable online
- Have a data entry error

If you choose to pay the citation, the website is provided as an alternative method to a payment in person or via mail. If the website is not available, it remains your responsibility to make timely payments to the courthouse or appear in court as noted in the citation. See the Traffic Violations Help Topic for more information. You may direct questions about your citation, or find other payment options, to the Clerk of Superior Court office in the county in which the citation was issued.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817249.26/warc/CC-MAIN-20240418222029-20240419012029-00142.warc.gz
CC-MAIN-2024-18
1,045
11
https://community.mendix.com/link/space/widgets/questions/130299
code
You can use a dynamic row class and play around with it. It should be something like this. (your-class-for-hide should be a class that includes your custom styling; it doesn't need to be for hiding things — it can also be for adding an additional container below the row to display additional content.) And if you need it on a specific column: Appearance > Dynamic cell class.

I don't think so. This seems to be only for specifying the CSS styles of the row and cell, not for throwing content there.

Hi, what I think Slavko is trying to say is that you could add a dynamic class like this: Here, the visible property determines whether something should be shown or not. The displaynone class simply adds a 'display:none' to the item. Now, this would not be a full row; it would also have columns like the other rows do. As far as I know, what you are asking is not possible, and you could only do it with a similar workaround to this.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474653.81/warc/CC-MAIN-20240226062606-20240226092606-00014.warc.gz
CC-MAIN-2024-10
928
7
https://knowledge.broadcom.com/external/article/228509/wss-agent-is-not-fully-loaded.html
code
WSS Agent is showing a 'not fully loading' error. Web Security Service A third-party application was taking lots of time for initialization. There were many boot time errors logged in "Application" event logs with a third-party application. You can change the Symantec WSS agent service startup type to Automatic (delayed start) under Control Panel > System and Security > Administrative Tools. Ultimately, the third-party application causing the initialization problems will need to be fixed, removed or re-installed. Symdiag with reboot WPP confirms this issue.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817144.49/warc/CC-MAIN-20240417044411-20240417074411-00347.warc.gz
CC-MAIN-2024-18
563
6
http://cla.cachlambanh.co/research-paper-on-turing-machine.html
code
Most of all, we are proud of our dedicated team, who has both the creativity and understanding of our clients' needs. Our writers always follow your instructions and bring fresh ideas to the table, which remains a huge part of success in writing an essay. We guarantee the authenticity of your paper, whether it's an essay or a dissertation. Furthermore, we ensure confidentiality of your personal information, so the chance that someone will find out about our cooperation is slim to none. We do not share any of your information to anyone. Let's call a string of characters that can be typed in an hour or less a "typable" string. In principle, all typable strings could be generated, and a team of intelligent programmers could throw out all the strings which cannot be interpreted as a conversation in which at least one party (say the second contributor) is making sense. The remaining strings (call them the sensible strings) could be stored in an hypothetical computer (say, with marks separating the contributions of the separate parties), which works as follows. The judge types in something. Then the machine locates a string that starts with the judge's remark, spitting back its next element. The judge then types something else. The machine finds a string that begins with the judge's first contribution, followed by the machine's, followed by the judge's next contribution (the string will be there since all sensible strings are there), and then the machine spits back its fourth element, and so on. (We can eliminate the simplifying assumption that the judge speaks first by recording pairs of strings; this would also allow the judge and the machine to talk at the same time.) Of course, such a machine is only logically possible, not physically possible. The number of strings is too vast to exist, and even if they could exist, they could never be accessed by any sort of a machine in anything like real time. 
But since we are considering a proposed definition of intelligence that is supposed to capture the concept of intelligence, conceptual possibility will do the job. If the concept of intelligence is supposed to be exhausted by the ability to pass the Turing Test, then even a universe in which the laws of physics are very different from ours should contain exactly as many unintelligent Turing test passers as married bachelors, namely zero.
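The lookup-table machine described above can be sketched in a few lines. The stored "sensible strings" here are invented toy data — a real table would be astronomically large, which is exactly the essay's point about logical versus physical possibility.

```python
# A toy version of the lookup-table conversation machine.  Each "sensible
# string" is a conversation stored as a tuple of alternating remarks (judge
# first).  Given the conversation so far, the machine finds a stored string
# with that prefix and replies with its next element.
SENSIBLE_STRINGS = [
    ("Hello", "Hi there", "How are you?", "Fine, thanks"),
    ("Hello", "Hi there", "What is 2+2?", "4"),
    ("Is it raining?", "I have no idea, I am indoors"),
]

def reply(conversation):
    """Return the machine's next remark for the conversation so far."""
    for s in SENSIBLE_STRINGS:
        if tuple(conversation) == s[:len(conversation)] and len(s) > len(conversation):
            return s[len(conversation)]
    return None  # no sensible continuation is stored

print(reply(["Hello"]))                               # Hi there
print(reply(["Hello", "Hi there", "What is 2+2?"]))   # 4
```

The machine "converses" purely by prefix matching; nothing resembling understanding is involved, yet with a complete table it would pass any fixed-length Turing Test.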
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513611.22/warc/CC-MAIN-20171211144705-20171211164705-00727.warc.gz
CC-MAIN-2017-51
2,370
2
https://www.br.freelancer.com/projects/html/expert-required-30544721/?ngsw-bypass=&w=f
code
I need a person who is an expert in the IT field

8 freelancers are bidding on average $143 for this job

Hi, As a professional IT manager, I can easily fix these kinds of issues. Kindly consider me as a professional. If you have any kind of questions, do let me know. Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00121.warc.gz
CC-MAIN-2021-39
278
3
http://www.ongsono.com/trends/computers/is-it-safe-to-download-limewire.html
code
Many computer tech experts associate spywares with peer to peer networks that allow computer users to download files directly from other computers. P2P network sharing programs also became popular as there is no need for a centralized server to store files for download. As long as a computer user installs and uses P2P program such as Limewire, files can be easily searched and downloaded. To upload files to the network, users only have to share access of files with other users in the network. Experts warn people not to download Limewire because of the dangers involved in sharing across P2P networks. This has nothing to do with the program itself. So, if you are interested in having access to numerous free entertainment sources, you should learn more about the right way to use P2P networks to reap the benefits. Malware and adware can damage your computer if you download them from P2P programs like Limewire. So, computer experts believe that downloading and installing P2P programs will infect the computer for sure. However, the reality is different. The open source P2P program provides unlimited access to high speed file downloads without the dangers of spyware. There are millions of users in the popular Gnutella peer to peer network and several millions of files are available for download. If you are careful about downloading and using these files, you can stay away from spyware. There is no central body in the P2P network to regulate sharing and downloading of files when you download Limewire. It becomes the responsibility of the users to protect their computers from viruses and spyware. The chances of spyware spreading to your computer are high when you download a file that is already infected with spyware. When there are millions of users willingly accepting files from other computers, hackers and malicious people try to spread virus and spyware by deceiving users. 
Spyware and virus may get downloaded to your computer in disguise and this can happen only if you don't regulate your sharing and downloading practices. To be safe while using Limewire, you have to update your computer with the latest antivirus and antispyware tools. There are many freeware programs available and they are as good as commercial ones. Many of these programs automatically scan downloaded files before opening, and you should activate this feature to ensure that no spyware or virus gets downloaded to your computer without your authorization. This general rule of thumb is applicable for every file you download via the internet, because any file downloaded from another source is exposed to spyware risks. There are unnecessary privacy concerns with downloading Limewire, as it is falsely believed that P2P programs share everything on the hard drive. The Limewire Plus program has a default sharing folder, and any file you store in this folder will be shared on the network. No other files will be open to other users. It is your responsibility to ensure that you share only those files that can be accessed by other users. With sharing and downloading moderation on your part, you will only have more to gain from P2P file sharing programs.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00156-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,142
5
http://dragoscloud.xyz/archives/3952
code
Novel–Versatile Mage–Versatile Mage Chapter 2388 – The Worst Era of Mankind scattered connect A Crowd Of Evil Spirit Lines Up To Confess To Me A semi-made World Vein was just like a half-packed center. The Universe Vein rapidly developed after taking in the vitality leaky right out of the Fact Orb! He was planning to take care of some more some fruits, like the one out of Prison Mountain peak. Her prolonged hair was like a blazing red-colored waterfall. She has also been draped in a very cloak established with burning off petals. Her easy skin area was brimming with scorching heat. He possessed attained a complete Universe Vein just before finishing his task! It turned out taking in the power of your prisoners the Evil Orb experienced harvested from other hatred, along with its possess energy that it had nurtured after residing in the prison’s natural environment over quite a long time. The 2 main head-controlled Mages possessed a desire to betray their become an expert in after discovering her terrifying fire. Flames Belle Empress surely could go walking without restraint inside the air. She hovered between two Awesome Point prisoners. “Are you absolutely sure the us government won’t notice them once they go losing out on?” Lu Kun asked. Translator: Exodus Tales Editor: Exodus Tales A semi-constructed Universe Vein was like a fifty percent-stuffed central. The Universe Vein rapidly evolved after soaking up the vitality dripping right out of the Basis Orb! “There are too lots of communities and communities across the Miraculous Town after it had been promoted to your head office metropolis. Irrespective of how watchful the administrators are, they won’t manage to account for everybody and work out them lower. A number of people probably have ended up losing out on as soon as they ended up transferred on the head office location. It’s not as easy to record them downward now,” the old person replied on his serious speech.
Mo Admirer could not treatment much less in regards to the Satanic Orb’s wants. Lu Kun was nurturing this sort of useful thing in Prison Mountain peak. He did not head enjoying the harvest! “It seems as if there’s some remains kept.” Mo Enthusiast spotted his Basis Orb was already entire. Several wisps of red-colored power were still circling the Substance Orb, like a number of orphans with nowhere to go. They had to adopt their time. This became just a little bothersome. He had gathered a thorough World Vein in advance of finishing his task! He recalled how the people on the streets always moved an abundance of crosses about them once they were actually on the roads during the night time in Countries in europe. Flame Belle Empress could wander without restraint during the fresh air. She hovered between two Very Degree prisoners. Mo Supporter was still experiencing troubled in the event the green power started to supply toward his space Bracelet. The green Aura circling the Bad Orb began to dissipate, and its particular tone was altering progressively. Do not appear any much closer! The 2 main intellect-controlled Mages obtained an urge to betray their excel at after seeing her terrifying fire. The Ayatollah Begs To Differ It was actually soaking up the power of your prisoners the Satanic Orb possessed obtained of their hatred, along with its very own energy it experienced nurtured after remaining in the prison’s atmosphere over a while. “It seems as if there’s some residue kept.” Mo Fanatic seen his Fact Orb was already full. A few wisps of red-colored energy were circling the Essence Orb, like a lot of orphans with nowhere to visit. “Are you sure government entities won’t notice them if they go losing out on?” Lu Kun asked. It was actually soaking up the electricity of the prisoners the Bad Orb experienced harvested using their hatred, along with its individual energy it had nurtured after vacationing in the prison’s atmosphere over quite a long time.
Mo Lover was still feeling distressed once the reddish colored electricity started to stream toward his space Bracelet. Edited by Aelryinth The Heart and soul Orb was getting huge bites each time now. The Crazy Forensic Doctor Consort He recalled exactly how the pedestrians always brought a lot of crosses about them once they were definitely on the roads in the evening in The european union. It eventually reverted into a lifeless and uninteresting gray Orb. “There are way too quite a few communities and towns throughout the Miraculous Town after it turned out promoted into a headquarters area. However careful the representatives are, they won’t have the ability to manage everyone and settle them downward. A lot of people probably have gone skipping after they have been transferred on the headquarters town. It’s not too an easy task to record them down now,” the previous male replied within his deeply sound. Within an abandoned setting up about the outskirts of Magic Town, a guy in a red-colored shirt stood for the edge of its roof covering having a prolonged checklist in his palm. whiskey beach washington Mo Enthusiast understood the enormous bother he produced needs to have picked up Lu Kun’s recognition. Mo Fan failed to be expecting the Evil Reddish Orb to hold a great deal electricity. It obtained somehow supplied him a complete Universe Vein. He would soon possess a fourth Ultra Element! Novel–Versatile Mage–Versatile Mage
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00505.warc.gz
CC-MAIN-2023-06
5,478
42
https://jobs.washingtonpost.com/job/38987339/it-solutions-architect-data-management-technical-lead-relocate-to-richmond-va/
code
IT Solutions Architect/Data Management Technical Lead - Relocate to Richmond, VA

Position Summary
Involves technology-based analysis of business practices, processes and problems; developing solutions which involve process design, data and information architecture, software development and policy or procedural changes; creating specifications for systems to meet requirements; validating requirements against needs; designing details of automated systems; planning and executing unit, integration and end-user acceptance testing; may develop training materials for system implementation. Manage and lead a group of ETL/Data Engineering professionals to solve complex business or systems issues. Under the general supervision of the Sr Manager, the Data Warehouse Technical Lead will lead a contractor team of ETL/Data Engineering professionals and manage the architecture, design, development and support of data warehouse and data engineering applications.

Key Responsibilities
* Act as a hands-on technical lead of the data warehouse team to ensure all objectives are met
* Manage and lead a small team of ETL/Data Engineering professionals
* Act as a team lead to design, develop, document, test and maintain ETL, Data Warehousing and Data Engineering applications and processes
* Develop and maintain data warehouse and data engineering capabilities for reporting and analysis
* Work with the Production Support/Run the Shop IT Partner vendor team and provide direction and assistance.
* Should be able to work hands-on and code if needed.
* Interact closely with various functional and cross-functional business intelligence and data analytics teams to determine reporting needs and translate those needs into the Data Warehouse Data Model and ETL processing
* Develop and document all ETL/data pipelines and processes according to best practices
* Responsible for building and maintaining design standards and best practices and ensuring adherence to these standards
* Promote and drive the Data Management Strategy with the help of the offshore IT vendor.
* Evaluates and recommends unique hardware/software configurations; defines special hardware/software requirements, capacities, capabilities, etc. to meet user needs while adhering to technical standards

Basic Qualifications
* Proficient in data warehousing theory and practice, combined with strong hands-on experience in data warehousing and data architecture
* Proficiency in data warehousing tools and database systems (for example: Oracle, Greenplum, Hadoop, Informatica and Unix)
* Solid understanding of data
* Excellent analytical abilities
* 8-10 or more years of experience in a data warehousing environment
* Solid 4 or more years of building and supporting ETL/Data Integration applications
* 2+ years of strong experience in data architecture and data modeling
* Hands-on experience in managing and delivering high-performance ETL solutions
* Minimum 5-6 years of experience with an emphasis on Informatica, Unix and Oracle.
* Proficient in SQL, database design and SQL tuning.
* Strong Unix shell scripting knowledge along with sufficient experience in any enterprise scheduling tool.
* Must have multiple full-lifecycle project experience in developing ETL solutions.
* Excellent communication and collaboration skills.
* Strategic and tactical thinking.
* Strong focus on providing service to the customer.
* Out-of-the-box thinking; flexible/critical thinking skills a must.
* Able to sell concepts and designs/benefits to multiple audiences.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583807724.75/warc/CC-MAIN-20190121193154-20190121215154-00405.warc.gz
CC-MAIN-2019-04
3,506
2
https://dragotto.net/blog/the-oberwolfach-problem
code
In conferences held at the Oberwolfach Institute, participants usually dine together in a room with tables of different sizes, and each participant has an assigned seat. Gerhard Ringel asked whether there exists a seating arrangement for an odd number \(v\) of people and \((v−1)/2\) meals so that all pairs of participants are seated next to each other exactly once. The full pre-print is available on arXiv here, while the published version is in the Australasian Journal of Combinatorics
- OP Jopt: JOPT 2019 Optimization Days in Montreal
- GitHub repository for the OberSolver
- GitHub repository for the brute-force Constraint Programming solver for the OP
- GitHub repository for the proof of \(OP(23,5)\)

For the specific case of \(OP(23,5)\), it is likely that no solution exists. Despite this fact, apparently no proof (either computational or analytical) has been published. All the references loop around this, and this. Finally, here Piotrowski says the non-existence has been proven with a computer calculation around the 80s, and the result is safely stored inside an unpublished paper in German. Well, thanks to Integer Programming there is now a citable proof :)
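The seating question above can be stated compactly in the standard graph-theoretic notation — the following is a sketch, with \(C_t\) denoting a cycle of length \(t\) (a round table of size \(t\)):

```latex
% Seat v people (v odd) at round tables of sizes t_1,\dots,t_k with
% t_1 + \dots + t_k = v.  One meal is a seating, i.e. a 2-factor of the
% complete graph K_v whose cycles have lengths t_1,\dots,t_k.
% OP(t_1,\dots,t_k) asks for a decomposition of K_v into (v-1)/2 such
% isomorphic 2-factors:
\[
  E(K_v) \;=\; \bigsqcup_{j=1}^{(v-1)/2} E(F_j),
  \qquad
  F_j \,\cong\, C_{t_1} \cup C_{t_2} \cup \dots \cup C_{t_k},
\]
% so that every pair of participants is seated next to each other in
% exactly one meal F_j.
```

Since \(K_v\) has \(v(v-1)/2\) edges and each 2-factor uses exactly \(v\) of them, \((v-1)/2\) meals use every edge exactly once — which is the "every pair sits together exactly once" condition.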
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588244.55/warc/CC-MAIN-20211027212831-20211028002831-00105.warc.gz
CC-MAIN-2021-43
1,176
9
https://www.cybrary.it/catalog/learn-on-demand/implement-manage-and-monitor-azure-storage/
code
This Learn On Demand Pro Series is part of a Career Path: Become a Microsoft Azure Cloud Engineer In this challenge by Learn On Demand Systems, you will implement, manage, and monitor Azure Storage. First, you will create an Azure storage account, and then you will upload a document to a container in the storage account. Next, you will configure storage account security mechanisms. Finally, you will configure monitoring for a storage account. See the full benefits of our immersive learning experience with interactive courses and guided career paths.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518240.40/warc/CC-MAIN-20210119103923-20210119133923-00499.warc.gz
CC-MAIN-2021-04
555
3
https://forum.xda-developers.com/t/rom-gsi-port-beta-miui-12-5-android-10.4391637/
code
- Jan 21, 2022

* Your warranty is now void.
* We are not responsible for anything that may happen to your phone by installing any custom ROMs and/or kernels.
* You do it at your own risk and take the responsibility upon yourself and you are not to blame us or XDA and its respected developers.

MIUI 12.5 for Galaxy S9/S9 Plus
- Warning: this is not for daily use.
- Made with GSI created by ErfanGSI Tools; credit goes to him and other GSI devs.
- Full MIUI 12.5 firmware from GSI Global.
- Added a brightness overlay to try and improve it, but more work needs to be done.
- Modified vendor and disabled KeyStore.
- SD Card.
- Hard press button.
- Most likely more stuff I haven't noticed yet.

Instructions only for Samsung S9 Plus users:
- Reboot to recovery
- Wipe system, cache, dalvik and data
- Flash CrDroid Android Q (CrDroid)
- Flash System img from MIUI
- Mount vendor in TWRP and go to /vendor/lib64/hw/ and rename the keystore file with a .bak at the end

All rights reserved to the creator of the ROM. [ @dylanneve1 ]
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103341778.23/warc/CC-MAIN-20220627195131-20220627225131-00383.warc.gz
CC-MAIN-2022-27
1,021
18
https://www.integrate.io/glossary/what-is-tensor-flow/
code
Developed by Google, TensorFlow is now an open-source solution for tackling ML and AI problems related to Big Data. It's a versatile system, notably for its ability to run on GPUs and mobile devices, as well as CPUs.

What is the Structure of TensorFlow?
The TensorFlow libraries consist of multiple APIs that can be roughly divided into two categories: low-level and high-level.

Low-level TensorFlow APIs
TensorFlow Core is powerful but has a steep learning curve. Anyone working with Core must understand not only the main API but also the data concepts that form the basis of TensorFlow.

High-level TensorFlow APIs
These are a collection of high-level tools and libraries that run on TensorFlow. Some of them help to create models that can form the basis of a graph. Others provide a modular layer that makes it possible to develop without knowing all the ins and outs of TensorFlow. Many of these APIs are smaller and more consistent than the Core API, and come with a much more forgiving learning curve.

TensorFlow Core Key Concepts
Working with TensorFlow Core means engaging with some complex data science. The main concepts to know include:

A tensor is a multi-dimensional collection of data. Each tensor has four key attributes: rank, shape, type, and label. A tensor's rank is the number of dimensions, so a single column of data has rank 1, while a table has rank 2. Tensors can have unlimited dimensions. Shape refers to the tensor's overall size, which is the combined size of each dimension. Tensors have a single data type, which the developer specifies on instantiation. Finally, each tensor has a name, or label. Tensors are objects defined by the tf.Tensor class.

Graphs are the basis of computation in TensorFlow. A graph consists of nodes and connectors called edges. Each node represents an operational function, defined in a tf.Operation object. The edges contain tensors. During execution, tensors pass through nodes.
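The tensor and graph concepts can be illustrated without TensorFlow itself. Here is a sketch using NumPy as a stand-in (the "graph" below is just an ordered list of named operations — a deliberate simplification of tf.Graph for illustration):

```python
import numpy as np

# A tensor is an n-dimensional array; rank, shape and dtype map directly
# onto NumPy attributes:
t = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.ndim, t.shape, t.dtype)   # rank 2, shape (2, 3), dtype float64

# A graph is nodes (operations) joined by edges (tensors).  Executing the
# graph means pushing a tensor through the nodes in order:
graph = [
    ("square", lambda x: x * x),
    ("sum",    lambda x: x.sum()),
]

def run(graph, tensor):
    """Pass a tensor through each node of the graph in sequence."""
    for name, op in graph:
        tensor = op(tensor)
    return tensor

print(run(graph, t))   # 1+4+9+16+25+36 = 91.0
```

In real TensorFlow the graph is built implicitly by the API calls and the runtime decides execution order and device placement; the data-flow idea is the same.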
The nodes process the data accordingly and pass the results along to the next node on the graph. To process data in TensorFlow, developers must first create a session and call its run method (tf.Session.run). The arguments of this method specify the starting point of the graph. The session then prompts the execution of all relevant operations in the correct sequence, eventually producing the desired output. Constants are values that are fixed when the graph is defined. These can be simple data types, such as integers, or a more complex construct such as a NumPy array. The tf.constant function creates constant values. Variables are similar to constants, except that their values can change during runtime. The tf.Variable function declares variables. A placeholder is an empty tensor. The code defines a placeholder's structure, and values are passed in dynamically at runtime. This allows developers to provide a new tensor in each session. TensorBoard is a tool that allows developers to visualize their TensorFlow graphs. This tool works with the logs that a TensorFlow session generates, and users can choose to view the data in multiple formats. What are the High-Level TensorFlow APIs? TensorFlow comes packaged with several high-level APIs that developers can use to create machine learning systems or neural networks. These APIs rely on TensorFlow Core to perform data operations. However, these APIs generally offer a gentler learning curve for people who are building applications. They also provide a higher degree of modularity, as they don't require coders to work directly with the TensorFlow Core API. TensorFlow and Keras have become inextricably linked in recent years, although the two libraries are independent of each other. Keras is a high-level API that uses TensorFlow as a deep learning backend. The main feature of Keras is usability. The Keras API is relatively straightforward, and many developers or data scientists can pick it up with ease. 
It's also highly configurable and modular, which means that it's possible to build extremely sophisticated applications on this framework. When working with the TensorFlow Core API, developers have to build graphs from scratch. TF Estimator can perform much of this hard work by estimating the data model required for the available data. Estimator has some high-level functions such as estimator.evaluate, estimator.predict, and estimator.export. The output of this API is an object that contains the suggested data model. This can then be fine-tuned to fit the data. TF Slim is a lightweight library that can define and train machine learning models. Developers build the model using slim layers, then create a session with the appropriate data sources. The resulting operation estimates any error or data loss, and developers can then optimize to improve accuracy. TFLearn is a highly transparent API that allows for the rapid creation and modeling of prototypes. Users have full control over individual tensors, and there are some detailed visualization tools available. This makes it possible to monitor and control the way models develop. TFLearn also comes with examples and tutorials, which makes it an ideal educational tool for developers who want to learn more about the functionality of TensorFlow Core. Pretty Tensor is a TensorFlow wrapper used in the development of neural networks. With Pretty Tensor, developers can create objects that resemble tensors. These objects have a chainable syntax that is ideal for building complex structures, such as neural networks. Sonnet is an object-oriented Python library that creates an abstraction layer for TensorFlow. Sonnet modules are extremely flexible, self-contained, and decoupled from all other modules.
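The graph-and-session model described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the real tf.* API: the names Node, constant, add, multiply, Session, and shape are invented here to mirror the concepts (constants as values fixed at graph-construction time, edges feeding operations, a session that executes the graph in dependency order, and shape/rank for nested data).

```python
# A minimal, plain-Python sketch of TensorFlow Core's graph/session model.
# These class and function names are illustrative only, not the tf.* API.

class Node:
    """A graph node: an operation plus the edges (input nodes) feeding it."""
    def __init__(self, op, inputs=(), name=""):
        self.op = op
        self.inputs = inputs
        self.name = name

def constant(value, name="const"):
    # A constant: a value fixed when the graph is built.
    return Node(lambda: value, (), name)

def add(a, b):
    return Node(lambda x, y: x + y, (a, b), "add")

def multiply(a, b):
    return Node(lambda x, y: x * y, (a, b), "multiply")

class Session:
    """Walks the graph in dependency order, like tf.Session.run does."""
    def run(self, node):
        args = [self.run(n) for n in node.inputs]  # evaluate inputs first
        return node.op(*args)                      # then apply the operation

def shape(tensor):
    """Shape of a nested-list 'tensor'; its rank is len(shape(tensor))."""
    dims = []
    while isinstance(tensor, list):
        dims.append(len(tensor))
        tensor = tensor[0]
    return tuple(dims)

# Build the graph for (2 + 3) * 4, then execute it in a session.
result_node = multiply(add(constant(2.0), constant(3.0)), constant(4.0))
print(Session().run(result_node))      # 20.0
print(shape([[1, 2, 3], [4, 5, 6]]))   # (2, 3) -> rank 2
```

Running the sketch evaluates the graph the way the text describes tf.Session.run: inputs flow along the edges first, then each node's operation fires.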
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948235171.95/warc/CC-MAIN-20240305124045-20240305154045-00131.warc.gz
CC-MAIN-2024-10
5,686
30
http://sto-forum.perfectworld.com/printthread.php?t=374171
code
TeamViewer was crashing the launcher, I think!! (launcher issues) I don't know if this is a full fix, but I'll know more when I try again tomorrow. I had TeamViewer, which lets you have remote access to your PC, and it was somehow interfering with the STO launcher, so I disabled it and now I'm in the game on my PC!! Freaking strange. Will test again tomorrow; maybe some hope for you guys. Well, so far so good, at least I can get back into the game now. For me it seems to have been TeamViewer stopping me. I also looked it up on another forum, and someone else has had this problem. I got the game going now... darn strange. I hope other people can get their game going. You know, remote access is something that should only be activated if you know you definitely need it. |All times are GMT -7. The time now is 05:14 AM.|
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246634331.38/warc/CC-MAIN-20150417045714-00275-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
823
13
https://www.react-next.com/?ref=softwaretalks.io
code
The biggest React conference in Israel What is React Next? After four successful consecutive years, EventHandler brings you the 2020 edition of ReactNext. The all-technical conference brings top local and international speakers to share their experiences with over 750 developers in Israel. The conference features advanced topics aimed at experienced developers, team leaders, and consultants, and is a great opportunity for R&D and product managers to evaluate the business advantages of React. React Next takes place in the David Intercontinental hotel conference space, a luxurious venue for high-end events, a five-minute walk from the beautiful beaches of Tel Aviv.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00265.warc.gz
CC-MAIN-2020-24
670
5
http://landward.org/wordpress/wp-content/plugins/SCORM_Reader/courses/Theorizing%20Cultural%20Heritage/c5e2331a-6408-4501-baf7-bbc48772d949/unit3/s111.html
code
In the Swedish context, cultural heritage is state-owned. How does the situation look in other European countries? Selinge, K-G (ed.). 1994: ‘Cultural Heritage and Preservation’, National Atlas of Sweden, vol 11. Damell, D et al. 1994. ‘The Cultural Heritage in Society’, National Atlas of Sweden, vol 11, pp. 154-175.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202199.51/warc/CC-MAIN-20190320024206-20190320050206-00485.warc.gz
CC-MAIN-2019-13
323
3
https://www.waterburypubliclibrary.com/2019/06/citizen-science-9-10-tues/
code
Citizen Science – 9/10 (Tues) When: Tuesday, Sept. 10th, 6:30pm Interested in helping protect our world? Become involved in a Citizen Science Project! Anyone can participate in citizen science, regardless of your scientific background. A Citizen Science Project can involve one person or millions of people, all collaborating towards a common goal. Involvement includes data collection, analysis, or reporting of a wide range of topics including ecology, wildlife, astronomy, botany, pathology, and much more. As long as you have access to a smartphone, computer, or camera, you can participate. In this workshop we’ll provide an overview of the current projects, and let you know how you can get involved!
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00456.warc.gz
CC-MAIN-2019-35
710
7
https://rock-fan.ru/rstudio-bookdown/
code
Bookdown: Authoring Books and Technical Documents with R Markdown. This book explains how to use bookdown to write books and technical documents. The bookdown package is built on top of R Markdown. This is the website for Tidy Modeling with R. This book is a guide to using a new collection of software in the R programming language for model building, and it has two main goals: First and foremost, this book provides an introduction to how to use our software to create models. We focus on a dialect of R called the tidyverse that is designed to be a better interface for common tasks using R. If you’ve never heard of or used the tidyverse, Chapter 2 provides an introduction. In this book, we demonstrate how the tidyverse can be used to produce high quality models. The tools used to do this are referred to as the tidymodels packages. Second, we use the tidymodels packages to encourage good methodology and statistical practice. Many models, especially complex predictive or machine learning models, can work very well on the data at hand but may fail when exposed to new data. Often, this issue is due to poor choices made during the development and/or selection of the models. Whenever possible, our software, documentation, and other materials attempt to prevent these and other pitfalls. This book is not intended to be a comprehensive reference on modeling techniques; we suggest other resources to learn such nuances. For general background on the most common type of model, the linear model, we suggest Fox (2008). For predictive models, Kuhn and Johnson (2013) is a good resource. Also, Kuhn and Johnson (2020) is referenced heavily here, mostly because it is freely available online. For machine learning methods, Goodfellow, Bengio, and Courville (2016) is an excellent (but formal) source of information. 
In some cases, we describe models that are used in this text but in a way that is less mathematical, and hopefully more intuitive. Investigating and analyzing data are an important part of the model process, and an excellent resource on this topic is Wickham and Grolemund (2016). We do not assume that readers have extensive experience in model building and statistics. Some statistical knowledge is required, such as random sampling, variance, correlation, basic linear regression, and other topics that are usually found in a basic undergraduate statistics or data analysis course. Tidy Modeling with R is currently a work in progress. As we create it, this website is updated. Be aware that, until it is finalized, the content and/or structure of the book may change. This openness also allows users to contribute if they wish. Most often, this comes in the form of correcting typos, grammar, and other aspects of our work that could use improvement. Instructions for making contributions can be found in the contributing.md file. Also, be aware that this effort has a code of conduct. The tidymodels packages are fairly young in the software lifecycle. We will do our best to maintain backwards compatibility and, at the completion of this work, will archive and tag the specific versions of software that were used to produce it. This book was written in RStudio using bookdown. The tmwr.org website is hosted via Netlify, and automatically built after every push by GitHub Actions. The complete source is available on GitHub. We generated all plots in this book using ggplot2 and its black and white theme (theme_bw()). 
This version of the book was built with R version 4.0.5 (2021-03-31), pandoc version 2.7.3, and the following packages:
|finetune|0.0.1.9000|Github (tidymodels/[email protected])|
|nlme|3.1-152|CRAN (R 4.0.5)|
|nnet|7.3-15|CRAN (R 4.0.5)|
|rpart|4.1-15|CRAN (R 4.0.5)|
|tidymodels|0.1.3.9000|Github (tidymodels/[email protected])|
|tidyposterior|0.1.0.9000|Github (tidymodels/[email protected])|
This site contains supplemental materials for Stat 1201, mainly: 1) clarifications on which sections we cover in the textbook (Devore, Probability and Statistics for Engineering and the Sciences, 9th edition), 2) R code, and 3) links to helpful resources online. It is not in any way a substitute for materials available in CourseWorks. If you find additional online resources that are helpful to this class, please create an issue or send me an email and I’ll add them to this resource. Let me know as well if you find any typos or other mistakes. Note that while you’re encouraged to look ahead, be sure to circle back to those sections when they’re covered in class since content may be added or modified slightly. General study tips The website for the book Make It Stick offers a summary of the experimentally tested study strategies. The tl;dr is:
- working out problems is better than reviewing notes / textbook
- doing mixed reviews is better than focusing on one type of problem at a time
- learning is hard work; if it seems too easy your study strategy might not be the most effective
- making mistakes and learning from them is a useful strategy (don’t wait until you’ve mastered all of the examples to try a problem)
You’ve likely heard a lot of these ideas before, but it’s worth really thinking about them and putting them into practice. As you’re reading the textbook or working on a problem set, keep a list of questions. Challenge yourself by thinking about how the problem would differ if you changed the setup. 
Try creating your own questions and solving them. Try solving problems in multiple ways. Learn from a variety of sources: class, textbook, Cartoon Guide, etc. If you find differences, ask. You will need to install two applications: R and RStudio: - R – the programming language itself – is available here: - RStudio – an integrated development environment (IDE) which makes it much easier to use R. It is optional but highly recommended. This is the app you will open to use R. Choose the free version of RStudio Desktop: Getting Started with R: Working in the Console The first step in getting started is getting comfortable working in the RStudio console. It works like a calculator in the sense that your work is not saved. Do the following: Quick review of material covered in the video, plus additional examples Working in the console pane is similar to using a calculator: each line of code is executed when you press Enter. Note that your work is not saved with this approach. Assigning a variable Drawing a stem and leaf plot Working with vectors: Read and try the examples in Chapter 1 of Introduction to R Creating Graphs, Saving your work Saving code as an .R file (Also covered in video above) Saving with this method saves only the code, not the output. Below are two methods for creating .html documents that contain both code and output: Convert .R file to .html (Also covered in video above) This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00006.warc.gz
CC-MAIN-2021-49
7,045
54
https://www.ssec.wisc.edu/mcidas/doc/mcv_guide/current/display_controls/ImageCaptures.html
code
Image and Movie Capture Controls You can capture the Main Display window as an image, a QuickTime movie or an animated GIF. You can save a display as an image by selecting the View->Capture->Image... menu item in the Main Display window. A file dialog will open where you can enter a file name for the image file. McIDAS-V supports JPEG, PNG, GIF, PDF, PS, and SVG formats. From this File Dialog you can also specify the image quality (only used for JPEGs) and what to capture. You can also select whether or not the background is transparent. Note: When capturing an image, the Main Display window cannot be obscured. McIDAS-V can also write out an image and the corresponding Google Earth KML or KMZ file. For this to be correct, the projection must be a Lat/Lon geographic projection (i.e., rectilinear) and in an overhead view. Some of the default projections that are Lat/Lon include World, Africa, Asia, Australia, and the individual state projections (US->States->...). You can also create your own Lat/Lon projection using the Projection Manager. The simplest way to get a correct projection is to select the Projections->Use Displayed Area menu item in the Main Display window. If you specify a .kml file, McIDAS-V will generate an image with the same file prefix and the kml file that refers to the image. If you specify a .kmz file (which is a zip format) it will contain the image and the kml file. You can send any display to a printer. Select the View->Capture->Print... menu item in the Main Display window. A Print dialog will pop up where you can configure and print an image. You can save any sequence of displays as a movie. Select the View->Capture->Movie... menu item in the Main Display window to open the Movie Capture window: Image 1: Movie Capture Window Note: More information about these controls is found in the Creating a Movie section below. - Capture - - Capture one image - Captures one image to create a movie. 
- Capture animation - Captures a loop of images at the specified dwell rate through the Time Animation Controls. - Reset to start time - Resets the loop to the initial timestep before recording the video. - Rate - Creates a new frame in the movie every given number of seconds. - What to capture - - Current View - Captures only what is in the view screen of the Main Display window. - All Views - Captures all of the panels in the Main Display window without including any legends or toolbar buttons. - Current View & Legend - Captures both the view and the Legend. - Full Window - Captures the entire Main Display window, including the toolbars. - Full Screen - Captures everything visible on the monitor. - Beep - Produces a beeping noise at the beginning of each frame captured in the movie. - Background Transparent - Captures the movie with the background image transparent. - Image Files - - Image Quality - Sets the quality of the images. Lower quality images will utilize less memory, but will not look as good. - Save Files To - Allows you to individually save each image in your movie. To use this option, click the checkbox for Save Files To and specify a directory. - Filename Template - Sets the naming structure for the images. This option is only available if you are saving the individual images in your movie with the Save Files To option. There are several ways you can customize the output, described in the section below. - Frames - - # frames - Displays the number of frames in the movie. - Preview - Opens a Movie Preview window, where you can see individual slides before saving the movie. - - Deletes every frame saved in the movie. - - Saves the movie to the specified location using the chosen format type. To make a movie, there are three steps: capturing the frames, previewing the frames, and creating the movie. McIDAS-V supports QuickTime movies, animated GIF and AVI files, Compressed ZIP, Google Earth KMZ and AniS or FlAniS HTML file formats. 
- To capture frames, in the Capture section of the Movie Capture window, do the following: - Capture one image - Makes a single frame of the McIDAS-V display. Progressively change the display and capture one frame at a time to create a movie. - Capture animation - Captures all frames in a display time sequence that you control with the usual Time Animation Controls. Check Reset to start time to ensure you capture the entire animation sequence. The QuickTime animation capture starts on the first frame visible in McIDAS-V and goes to the end. This tool can be used to capture part of a loop. - Capture automatically - Takes snapshots of the frames in McIDAS-V display while you make changes, such as changing the view point, zooming, rotating, etc. Click the button again to stop the snapshot. You can change the sampling rate of the snapshots with the Rate field. - You can combine these three different methods of capture. The list of frames is additive. - You can Preview the movie by pressing the button. This opens a Movie Preview window, where you can see the individual slides before saving the movie. The Movie Preview window also allows you to remove individual frames from the movie before saving it with the button. - If you want to save the individual intermediate files that are used to create the movie, check the Save Files To box and specify a directory and file name format. Otherwise, the intermediate files will be saved in a temporary directory and will be removed. You can use the following templates to customize the name of the output file: - %count% - Represents the image counter. - %count:decimal format% - Allows you to format the count using the same rules defined in the lat/lon format section of the User Preferences window's Formats & Data tab. You can also specify a Java DecimalFormat, e.g., %count:000% outputs three-digit counts with leading zeros (001, 002, etc.). - %time% - Represents the animation time in the default format. 
- %time:time format% - Begins with "time:" and contains a time format string using the same date formatting rules described in the User Preferences window's Formats & Data tab. - When done capturing the frames, select the button to specify the name and format of the movie you want to save. - When saving a movie, in the Save dialog, there is an option for 'Use 'global' GIF color palette'. This option is applicable for animated GIF movies, and it is enabled by default. Animated GIFs are limited to 256 colors, and this option uses the same 256 colors for every frame in the loop when the setting is enabled. Enabling this option allows for certain display settings such as a color scale to remain constant throughout the loop. If this option is disabled, then each frame uses its own color palette, so one color may represent one value in one frame, and the same color may represent a different value in the next frame. One circumstance where disabling this option may be ideal is if the movie starts with a low-light scene (not many colors) and there is no color scale in the display. McIDAS-V supports displaying certain types of QuickTime movies (including the ones McIDAS-V generates). These movies can be loaded in the General->Files/Directories Chooser in the Data Sources tab of the Data Explorer.
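As a sketch of the %count% and %time% template semantics described above, the snippet below expands those placeholders in a filename template. The function name expand_template is hypothetical, and Python's zfill/isoformat/strftime stand in for the Java DecimalFormat and date-format patterns that McIDAS-V actually uses:

```python
# Illustrative expansion of the Filename Template placeholders.
# Assumption: a DecimalFormat-style count pattern like '000' means
# zero-padded to that width; McIDAS-V itself uses Java formatting.
import re
from datetime import datetime

def expand_template(template, count, when):
    def substitute(match):
        key, fmt = match.group(1), match.group(2)
        if key == "count":
            # '%count:000%' -> three-digit, zero-padded count (001, 002, ...)
            return str(count).zfill(len(fmt)) if fmt else str(count)
        if key == "time":
            # default time format here is ISO 8601 for simplicity
            return when.strftime(fmt) if fmt else when.isoformat()
        return match.group(0)
    return re.sub(r"%(count|time)(?::([^%]+))?%", substitute, template)

frame_time = datetime(2021, 9, 22, 10, 24)
print(expand_template("frame_%count:000%.png", 7, frame_time))
# frame_007.png
print(expand_template("shot_%time%.png", 1, frame_time))
# shot_2021-09-22T10:24:00.png
```

This mirrors the documented behavior that %count:000% yields three-digit counts with leading zeros, while %time% falls back to a default timestamp format.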
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00142.warc.gz
CC-MAIN-2021-39
7,147
65
https://www.octopusoverlords.com/forum/viewtopic.php?f=6&t=97997&sid=ea31c8738b38ddd6bcb44f7a7079d4e4&start=1080
code
El Guapo wrote: ↑Wed May 26, 2021 4:34 pm So do I understand you correctly that while it's theoretically possible that Chinese scientists intentionally created COVID in a lab as part of "gain of function" scientific research (not as a bioweapon), but that it's very unlikely based upon what we know? What is it that we know that makes this unlikely (vs. natural mutation in the wild)? Also, in theory COVID could have arisen in the wild, been isolated in a lab for study, and then been accidentally released, right? Is that also unlikely? Yes. I'll try to rank this based on my understanding. However, it's not a linear progression of likelihood from 1-5 (1) Chinese scientists created SARS-CoV-2 in a lab for nefarious purposes (2) Chinese scientists found and then modified SARS-CoV-2 in a lab and released it for nefarious purposes (3) Chinese scientists found and then modified SARS-CoV-2 and accidentally infected themselves (4) Chinese scientists discovered SARS-CoV-2 in the wild and were researching it, then accidentally infected themselves (5) SARS-CoV-2 randomly appeared in nature #1-#3 are the scenarios being pushed to further some type of agenda against China. These are the least likely scenarios based on the virologists and zoonotic researchers I trust saying the genetic and physiological nature of SARS-CoV-2 does not suggest any type of human manipulation (in part or in whole). #4 is certainly possible, though it also seems unlikely, mainly because you'd expect scientists that accidentally infect themselves to not rush out and spread it. However, I guess if they infected themselves and were all asymptomatic they could have inadvertently introduced it outside the lab; I'm not sure how you'd prove this. #5 is still the most likely culprit just based on what we know about how viruses jump species all the time. 
It's also part of the reason research exists - to figure out if we can understand why they jump and then if they do, how could they be really bad for humans ("gain of function") - what type of mutations or changes are needed? How likely is that to happen? Would a jump result in a really deadly disease that isn't highly communicable? Or would it involve jumping and it's not deadly at all but spreads pretty easily? Could it then evolve into something more problematic? This is where Rand Paul is talking out of his ass (or mouth, same thing) and trying to come up with some nefarious motivation for research, completely oblivious to the idea that emerging infectious diseases is a whole field of study and something scientists from around the globe had been watching.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00308.warc.gz
CC-MAIN-2021-25
2,615
12
http://www.infoworld.com/d/developer-world/moonlight-2-expands-silverlight-capabilities-linux-662?page=0,1
code
"This is a logical and, frankly, necessary decision from Microsoft," O'Grady said. "Competing with the ubiquity of [Adobe] Flash is an uphill battle to begin with, even for a vendor with the distribution reach of Microsoft. If the threat of litigation hangs over potential users of Moonlight on non-Suse platforms, its competition with Adobe for market penetration on non-Windows and Mac platforms would be over before it began." Moonlight, while limited to the relatively small user base of Linux and Unix desktops, will help Silverlight compete with Adobe's more established Flash platform. "I want to live in a world where I can write my Web applications in C++," as opposed to just ActionScript, which is used in Flash development, said de Icaza. Also this week, Mono developers released Mono 2.6 and version 2.2 of MonoDevelop. Mono 2.6 offers capabilities like Windows Communication Foundation client and server support and backing for Low Level Virtual Machine compiler optimization. It also includes Microsoft's open-source ASP.Net MVC (Model View Controller) and is faster and slimmer. MonoDevelop 2.2 code is now licensed under Lesser GPL v2 and MIT X11. GPL code has been removed, allowing add-ins to use Apache and MS-PL code and allowing use of proprietary add-ins. The user interface has been improved in this release. This story, "Moonlight 2 expands Silverlight capabilities for Linux," was originally published at InfoWorld.com. Follow the latest developments in application development at InfoWorld.com.
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830074.72/warc/CC-MAIN-20140820021350-00172-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
1,521
6
https://erikduval.wordpress.com/2010/01/14/future-of-interoperability-food-for-thought/
code
Future of interoperability: food for thought… I spent Monday and Tuesday in Bolton, for a meeting of the CEN/ISSS Workshop on Learning Technologies and a CETIS day on the Future of Interoperability Standards. I arrived late on the first day, but it seemed to proceed like a regular CEN workshop meeting: I have doubts about the relevance of some of the work, but things do progress and have some traction. The second day attracted more people than expected: the good news is that quite a few people seem to care about the future of interoperability standards. The bad news is that the day was organized because of the feeling of dissatisfaction with how standardization of learning technologies is taking place. Seems like quite a few people share that feeling then… Of course, the standardization process is far from optimal: it is slow, doesn’t always lead to results, or at least not always to results that matter to folks outside of these meetings. On the other hand, I am an optimist: in ARIADNE, we now have more than 400,000 learning objects with LOM metadata: that is not even near where we want to be, but it is much better than 5 years ago when nobody had more than 10,000! In the introductory session, I mentioned some of my concerns: - fragmentation: too few people work on too many different things in too many different organisations – there may well be more organisations than people in this area! I think that one of the reasons is that involvement in the development of a standard can be listed on a cv, whereas the work behind the scenes to develop infrastructure is often … rather hidden behind the scenes. A standard should really not be a goal, but more a last resort when we have a problem to make things work together. - consensus: This is inherently a difficult type of process: it is a lot like “herding the cats”. In other areas where consensus is the norm, things move forward in a rather messy way too. 
I kept thinking throughout the day about the United Nations or the Global Warming Summit. Maybe the standards process is a bit like democracy in that respect: more kind of the least bad system… At least, the future of humanity or our planet doesn’t depend on what we do. (Then again, I happen to think that learning is important…) - Don’t stop too early: Many of the “standards people” move on to something else when a standard is finished. (Incidentally, around 80% of the participants said that they were paid to develop standards. That feels odd to me: it seems to position standards as a goal rather than as a means…) I think that we should pay more attention to what happens after a standard is finished: LOM was finished more than 6 years ago, but we are still very much working with all kinds of communities on the deployment of application profiles, back-end infrastructures and front-end tools. And making progress – slower than expected maybe, but real progress: more on that later. There was a lot of interesting discussion over the day – you can actually follow it quite nicely on twitter. I’m not sure that the discussion helped us to make a lot of progress. I certainly had a strong “them is us” kind of feeling: if we want to get better at developing standards, we need to agree on how we will make that happen… And we need to move beyond the “we should all work better together” stage… In any case, there is some Good Stuff in the position papers on the web site and I’d love to hear your comments and feedback: what are your ideas on how to improve the standardization of learning technologies?
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119637.34/warc/CC-MAIN-20170423031159-00251-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,590
11
https://rthomsen6.wordpress.com/2015/09/
code
Ark 15.08.1 was released as part of KDE Applications yesterday. This release contains a handful of bugfixes, including a fix for a long-standing, highly voted bug. The bug was first reported in 2009 and had 738 votes in bugzilla. The bug caused drag’n’drop extraction for multiple selected archive entries to not function properly. When selecting multiple entries in an archive and dragging them to e.g. Dolphin for quick extraction, the selection would previously be undone and only the single entry under the mouse cursor would be extracted. This is now fixed so that all selected entries are extracted. Any dragged files are simply extracted without path, while for dragged folders any subfolders/files beneath them are also extracted. This is comparable to how most file archiving software works. Among other changes in this release, there is a fix for extracting rar archives when using unrar version 5 (bug 349131). If one or more of the destination files already existed, Ark would stall and the extraction process would never complete. This was caused by Ark only supporting the overwrite prompt of unrar versions 3 and 4. Enjoy the release and look out for a great release in December with several new features 🙂 Thanks to Elvis Angelaccio and Raphael Kubo da Costa for reviewing the code changes.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591455.76/warc/CC-MAIN-20180720002543-20180720022543-00092.warc.gz
CC-MAIN-2018-30
1,310
5