url stringlengths 13-4.35k | tag stringclasses 1 value | text stringlengths 109-628k | file_path stringlengths 109-155 | dump stringclasses 96 values | file_size_in_byte int64 112-630k | line_count int64 1-3.76k |
---|---|---|---|---|---|---|
http://stackoverflow.com/questions/5342043/run-time-error-5-on-vb6-error-log-shows-class-tabdlg-sstab-of-control-sstab1-w?answertab=votes
|
code
|
I have a vb6 exe file that has been stable and working for several years now. When I run the installation file on another computer, I get run time error 5. There are several forms that have the SSTab Dialog on them. In the logs for these particular forms I get the following msg. "Class TabDlg.SSTab of control SSTab1 was not a loaded control class" In checking the installed components, the Microsoft Tabbed Dialog is selected. I am totally lost, any help please?!?!?
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989897.84/warc/CC-MAIN-20150728002309-00157-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 468 | 1 |
https://sharmikes.com/collections/atlanta-falcons/products/atlanta-falcons-seat-cover
|
code
|
Fit this OFFICIAL NFL Car Seat Cover by The Northwest Company over your car seats. They easily slip on and feature your team’s logo, bright and bold, on soft padding. Measures 51” x 21”. 100% Polyester, Urethane Foam Backing. Usually ships in 2 to 3 business days.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249578748.86/warc/CC-MAIN-20190224023850-20190224045850-00585.warc.gz
|
CC-MAIN-2019-09
| 266 | 1 |
https://onanigal.info/40996.htm
|
code
|
Iso ( 2) I once wrote an article titled "Building a dual-boot environment with ubuntu-12.04 LTS and Windows 10", but Ubuntu 16.
04 LTS has since been released, so I am writing an updated article for the new version. Ubuntu is distributed on four types of images described below.
The biggest, most important and, as many think, most difficult step in getting started with Ubuntu is installing it. Metalink: 58 48K Ubuntu 16.
5- desktop- amd64. DVD image on a mirror server that is geographically close to you.
Ubuntu 14 04 1 server i386 iso download. 1 comes with nine months,. 06 was released behind schedule, having been intended as 6. Sep 09, · Step Two – Install PXELinux.
Testers who want to help with Ubuntu MATE. The desktop image allows you to try Ubuntu without changing your computer at all, and at your option to install it permanently later.
Torrent: : 49 : 22K : ubuntu- 14. ) to a new folder on your hard disk ( e.
It is usually run on personal computers is also popular on network servers, usually running the Ubuntu Server variant with enterprise- class features. 04- server- i386.
Gpg: 30 933 SHA1SUMS: 30 347 SHA1SUMS. Ubuntu ( / ʊ ˈ b ʊ n t uː / ; stylized as ubuntu) is an open source operating system for computers.
Download and copy the single file ubuntu- 11. Ubuntu is an open source software operating system that runs from the desktop to the cloud to all your internet connected things.
Parent Directory - MD5SUMS: 30 307 MD5SUMS- metalink: 46 568 MD5SUMS- metalink. Your next steps with Ubuntu Server.
Download Ubuntu Server. Parent Directory - MD5SUMS: 58 264 MD5SUMS- metalink: 58 284 MD5SUMS- metalink.
04- desktop- i386. Gpg: 58 916 SHA256SUMS: 58 392 SHA256SUMS.
Do you want to have a persistent OS, but run from an ISO file, use grub4dos, and have only 4 files on your USB drive? Info hash: SHA1 hash of the " info" section of the metainfo ( *.
PXELinux is part of the SysLinux package. Torrent) complete: number of connected clients with the complete file downloading: number.
04 ( Vivid Vervet) Select an image. 1- server- amd64.
News feature lists of Linux BSD distributions. This directory contains the most frequently downloaded Ubuntu images.
Linux Live USB Creator is a freeware for creating portable bootable virtualized USB stick running Linux. 04 release notes. I see that Ubuntu- builder is a very good app to create a customized Ubuntu Desktop, but w. Iso ( 3) ubuntu- 12.
Ubuntu Server 16.
04 LTS in a UEFI environment, Step 1: Introduction. In the past I wrote "[UEFI environment edition] Ubuntu 14.
Iso ( 4) ubuntu- 14. Gpg: 30 933 SHA256SUMS: 30 467 SHA256SUMS. Download a copy of Ubuntu MATE Toggle. Recommended system requirements are the same as for Ubuntu 16. Other images including DVDs , source CDs may be available on the cdimage server. Gpg: 30 933 ubuntu- 14.
C: \ Ubuntu) if you have a FAT32 USB drive already prepared copy it to the USB drive. Download) ubuntu- 16.
Download the latest LTS version of Ubuntu,. 4- server- amd64.
First steps: Preparing for the Ubuntu Install. Adam Conrad has announced the release of Ubuntu 14.
Ubuntu is distributed on two types of images described below. Ubuntu Mirror Download ISO 1 Download ISO 2.
06 ( Dapper Drake) was Canonical' s fourth release, released on 1 June the first long- term support ( LTS) release. Gpg: 58 916 ubuntu- 16.
In this article we present a list of all the official URLs for downloading the ISOs of the main Ubuntu versions (desktop & server, i386 and AMD64). It is a Linux distribution based on the Debian architecture.
2 LTS ( Trusty Tahr) Select an image.
This tutorial describes how to install PXE Server on Ubuntu 16. 04 LTS system, and how to deploy OS on PXE clients in the local area network. 5- desktop- i386.
metalink: 46 44K Ubuntu 14. 5 LTS ( Trusty Tahr) ubuntu- 14.
This download is an ISO file and requires a CD burner and blank CD to burn the disc image. 4- desktop- i386. Ubuntu 32 Bit; Ubuntu 16.PXE/ BINL - AN03: Non- Windows Network Boot/ Install. How to start an automated network boot/ install of a Non- Windows asset taking no more than 15 minutes and a ~ 3 MB download.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00245.warc.gz
|
CC-MAIN-2019-26
| 4,501 | 33 |
https://www.geekzone.co.nz/forums.asp?forumId=97&topicId=72771
|
code
|
I'm currently looking for a (free) app I can use to go out geocaching.
The only thing is, my IDEOS is on 2degree and because we don't live within a mobile broadband zone, mobile data is too expensive to use. So, I need an app that has offline (or no) maps, allows me to import .loc waypoints, and is fairly accurate - the apps I've tried so far that have offline maps don't have close enough zoom levels to pinpoint a cache.
Anyone have any ideas? Free would be preferable, maps aren't required.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.19/warc/CC-MAIN-20240414161724-20240414191724-00765.warc.gz
|
CC-MAIN-2024-18
| 496 | 3 |
https://onlineessay-writer.com/2021/03/11/safe-assign-blackboard_zs/
|
code
|
In addition, all compatible submissions (including those that have already been checked for plagiarism) can now be graded using the inline grading workflow currently available for assignments. SafeAssign™ is a plagiarism prevention service offered in the Blackboard learning system. This service helps educators prevent plagiarism by detecting unoriginal content in student papers. Our plan and hope is that we can take the documents, submit them to our new service, and pre-build a new database for this exact reason. SafeAssign is between 92% and 97% accurate in detecting plagiarism, making it effective enough to detect copying. As a reminder, results returned in a SafeAssign originality report may not be identical to results returned by a consumer internet search service. Use SafeAssign to review assignment submissions for plagiarism potential and to create opportunities to help students identify how to properly attribute sources rather than paraphrase. To use SafeAssign in Blackboard, click Assessments and choose Assignment.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00635.warc.gz
|
CC-MAIN-2021-17
| 1,939 | 1 |
https://scholar.archive.org/work/zjnie3rrqzbtzhq2jfsno4zlmu
|
code
|
Design, implementation and evaluation of security in iSCSI-based network storage systems
Proceedings of the second ACM workshop on Storage security and survivability - StorageSS '06
This paper studies the performance and security aspects of the iSCSI protocol in a network storage based system. Ethernet speeds have been improving rapidly and network throughput is no longer considered a bottleneck when compared to Fibre-channel based storage area networks. However, when security of the data traffic is taken into consideration, existing protocols like IPSec prove to be a major hindrance to the overall throughput. In this paper, we evaluate the performance of iSCSI when deployed over standard security protocols and suggest lazy crypto approaches to alleviate the processing needs at the server. The testbed consists of a cluster of Linux machines directly connected to the server through a Gigabit Ethernet network. Micro and application benchmarks like BTIO and dbench were used to analyze the performance and scalability of the different approaches. Our proposed lazy approaches improved throughput by as much as 46% for microbenchmarks and 30% for application benchmarks in comparison to the IPSec based approaches.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00008.warc.gz
|
CC-MAIN-2021-49
| 1,241 | 4 |
http://thestamp.umd.edu/Event_Guest_Services/Contact_Us
|
code
|
Monday - Friday 9am-5pm
eCalendar Help: [email protected]
Stamp University Department Events: [email protected]
Stamp Student Organization Events: [email protected]
Stamp Department and Non University Events: [email protected]
Monday - Friday 12pm-10pm
All operational hours are subject to change during the summer and winter semesters.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822098.86/warc/CC-MAIN-20191022132135-20191022155635-00284.warc.gz
|
CC-MAIN-2019-43
| 362 | 7 |
http://onlineprogrammingassignme98417.alltdesign.com/what-does-database-project-help-mean-10391362
|
code
|
What Does database project help Mean? Bear in mind these were not “official” benchmarks, and I no longer have access to the machine on which I produced them. I have yet to investigate whether the same problem exists on 5.
This operator is used to perform value assignments in two cases, described in the following two paragraphs.
An online real-time reporting system is available for you to check your earnings and review previous tutoring sessions that you have conducted at any time.
Many web applications have an authentication system: a user provides a user name and password, the web application checks them and stores the corresponding user id in the session hash.
One can usually depend on this kind of system for managing things better. This one system allows people to get their problems solved with great ease. Take this up as your java project and stop worrying about the final grades.
“Java project ideas” is one of the typical questions asked when you have to choose a topic for your final year project or semester projects. At that point you start to wonder what topic you should choose for your project.
If all interfaces are authenticated to the domain controller for the domain of which the computer is a member, the domain profile is applied.
It is important to note that the crafted image or link does not necessarily have to be located in the web application's domain; it can be anywhere - in a forum, blog post or email.
We tested it with a sample of a hundred rows inserted with every query. What are the results? Lower is better:
A real-world example is a router reconfiguration by CSRF. The attackers sent a malicious e-mail, with CSRF in it, to Mexican users. The e-mail claimed there was an e-card waiting for the user, but it also contained an image tag that resulted in an HTTP GET request to reconfigure the user's router (which is a popular model in Mexico).
In order to prevent attacks, reduce their impact and remove points of attack, first of all you have to fully understand the attack methods in order to find the correct countermeasures. That is what this guide aims at.
Final year projects are the most important projects, hence every student tends to prepare the best project and get the best marks. Although everyone is ready to make a dent with their project, only a few of them know many java project ideas.
In this case, MyISAM shows a really dramatic improvement – LOAD DATA speeds the import up by 12x. InnoDB, again still with the default parameters, can improve the speedup to 3x, and even more significantly in the newer versions (5.
InnoDB is a much more interesting engine, as it is ACID by default and more sophisticated. Can we make it as fast as MyISAM for importing?
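As a minimal sketch of the kind of LOAD DATA bulk import being discussed, assuming the mysql-connector-python driver; the connection details, table and file names are hypothetical, and the session settings are common (not guaranteed) ways to cut per-row work during an InnoDB import:

```python
# Sketch of a bulk import with LOAD DATA, assuming mysql-connector-python;
# connection details, table and file names are hypothetical. The server
# must also allow LOCAL INFILE.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="bench", password="secret",
    database="test", allow_local_infile=True,
)
cur = conn.cursor()

# Relax per-row checks for the duration of the import.
cur.execute("SET unique_checks = 0")
cur.execute("SET foreign_key_checks = 0")

cur.execute(
    "LOAD DATA LOCAL INFILE 'rows.csv' INTO TABLE bench_table "
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
)
conn.commit()

# Restore the checks afterwards.
cur.execute("SET unique_checks = 1")
cur.execute("SET foreign_key_checks = 1")
cur.close()
conn.close()
```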
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657097.39/warc/CC-MAIN-20190116073323-20190116095323-00382.warc.gz
|
CC-MAIN-2019-04
| 3,124 | 14 |
https://www.germanshepherds.com/forum/how-do-i-teach-my-dog/235370-need-indoor-game-ideas.html
|
code
|
"find it". i hide things and send him to "find it".
"back up". place the dog next to the wall and
teach him to "back up" on command.
teach him not to counter surf. i used to place food
on the edge of the counter. when he went for it
a simple "no" or "leave it" worked. then i started
placing food on the edge of the table, on the seat
of a chair and then on the floor. you want to work
your way to leaving the food available and leaving
teach him "go to your crate" or "go to your bed".
you want to be able to give this command from
anywhere in your house.
teach him not to door dash. if the door is propped open
he's not supposed to exit.
i taught my dog to retrieve the mail from the mailman.
the mailman pulls up, beeps his horn (sometimes).
i open the door and the dog goes down the driveway
and the mailman hands him the mail.
teach him to jump on the bed or sofa for "cuddle time"
or just to be near you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573124.40/warc/CC-MAIN-20190917223332-20190918005332-00086.warc.gz
|
CC-MAIN-2019-39
| 909 | 20 |
http://transitionmilwaukee.org/profile/DaveSwanson
|
code
|
Hi Dave - Welcome to Transitions. We just talked about you this morning. Erik Lindberg is a good friend of mine (or perhaps more accurately, co-conspirator). We talked about working with you and the RSA next season for our produce. Maybe we could get together sometime and discuss how things work with the RSA.
Looking forward to working on many things with you~g
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189495.77/warc/CC-MAIN-20170322212949-00375-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 363 | 2 |
https://forum.backdropcms.org/forum/not-possible-check-upgrades?page=0%2C0#comment-3561
|
code
|
I get time-out and lines in logfile saying the same.
There are several other posts regarding this problem. Is someone going to look into this soon?
Seems a bad idea now converting several client sites from drupal.
I also get time-out trying to install new modules.
This is backdrop specific. Drupal and ModX sites have no problems. Certificates are all good.
Egmund, thanks for posting. Yes, this has been an issue that has affected other people in the past. Are you working on a local setup or on a remote server? I noticed you posted also here - have you tried some of the solutions there?
I opened a new issue in Backdrop's issue queue to provide http fallback for the update manager (which is what Drupal 8/9 currently does). Hopefully this will be tackled soon.
Thank you argiepiano.
The sites are on a shared server in Germany, Hetzner Online GmbH, via a reseller in Denmark. There are no problems with the connection from my pc to this server, but there is a timeout between Germany and wherever the backdropcms server is.
This has been going on for too long now! Sorry, just as I got excited about backdrop.
Thanks, Egmond. The Backdrop community is very active and interested in helping people like you who are migrating from Drupal. I'm sure someone will pick up the issue I posted at some point.
In the meantime, you may want to work with your shared server provider - they may be able to solve it. The issue often is that Backdrop's update manager tries to obtain update data from updates.backdropcms.org/release-history with an SSL connection. In order to do that the server sends a certificate that's stored in your server. If there is a problem with that certificate the connection fails.
In your site hosted in the shared server, are you seeing question marks like the ones shown in this image? Can you provide a few more details, including the server type, PHP version, Backdrop CMS version?
CloudLinux v7.9.0 [cp03]
php Version: 7.4.27
Newest ver of backdrop
This morning all is fine again.
& I am on 1.21.0
Guess I need to relax about this - give it a few days
The saga continues. Several sites cannot connect at all; other sites get some modules checked but not all.
I tested a Drupal site for comparison: No problems at all.
I checked certificates: All valid.
I asked the hosting company: Awaiting response.
What is the difference in update checking between Backdrop and Drupal?
Seems the backdrop server is difficult to reach at certain hours/days. I do not think it would fluctuate this much if the problem was with my hosting company - then it would be either on or off.
Problem is less in the pm (European time). I have no issues right now, for instance.
I know I'm a little late here, but there's an important question: which exact problems do get logged?
Go to /admin/reports/dblog, expand the filter fieldset and filter for "update".
If there are entries - how many?
Now the interesting part: inspect them all and provide the failure type, something like:
HTTP request to XXX failed with error: [This info here please!]
Without knowing what fails, it's hard to suggest a solution. It could be an ssl problem, timeouts, whatever...
(I'm also using Hetzner, no reseller though, and never experienced problems.)
There are about 5 errors per day (guess core + about four modules/themes) per site
similar to this one:
All saying 'Connection timed out'
You want/need more?
To me this, and the fact that some times of the day are better than others, sounds like a server connection issue rather than a certificate issue. I'd talk to your provider to see if there is a way to improve this, and the people who maintain the Backdrop update server should check as well.
You asked about the difference between Drupal and Backdrop for updates - they use the same approach to get the list of updates, but they are of course different addresses. Also, Drupal 7 still uses a non-secure connection to get update information. Backdrop (like Drupal 8/9) uses a secure connection, which sometimes causes certificate issues (which doesn't seem to be the problem in your case).
Yes - problems connecting. But you are on the same server - so weird.
Drupal 9 checks works just fine - no issues ever.
The problem is also the same for auto-check (cron) and manual check.
Two differences I noticed:
The Drupal update server is actually a CDN (fastly) with multiple servers around the globe. OK, they need a CDN, they have a lot more users.
The Backdrop update server is a single instance and also has an IPv6 address.
As noted before - I never noticed any connection problems from Hetzner servers to Backdrop updates, so your problem might be specific to that reseller.
It could have something to do with an IPv6 routing problem.
@Egmund are all your sites on the same server?
I am still waiting for a response from the reseller, azehosting.net. I will forward the IPv6 question to them.
Yes, all my eggs are in one basket. I have cPanel, it is quite cheap, server is fast. But the response time is sometimes ridiculous. Maybe time to look around again. On the other hand: problems I had with various other hosting firms are gone.
My hosting corp. finally got to check the problem: An IPv6 address was blocked or graylisted.
Is it possible to get a list of the IP addresses on the server?
Yes, that's easy (and public, anyway):
updates.backdropcms.org has address 22.214.171.124
updates.backdropcms.org has IPv6 address 2600:3c03::f03c:91ff:fec2:d257
Super, my problems are over.
& for others w. same/similar problems: Look into whether IPs have been graylisted as mine was.
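Since the root cause here turned out to be a graylisted IPv6 address, a quick way to check every address the update server resolves to - and whether each one actually accepts a connection - is a short script like this sketch (Python standard library only; port 443 is an assumption):

```python
# Quick connectivity check for every address the update server resolves
# to, useful for spotting a blocked or graylisted IPv6 route.
import socket

HOST, PORT = "updates.backdropcms.org", 443

for family, _, _, _, sockaddr in socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP):
    address = sockaddr[0]
    try:
        with socket.socket(family, socket.SOCK_STREAM) as sock:
            sock.settimeout(5)
            sock.connect(sockaddr)
        print(f"{address}: reachable")
    except OSError as exc:       # timeouts and refusals both land here
        print(f"{address}: FAILED ({exc})")
```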
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00736.warc.gz
|
CC-MAIN-2023-23
| 5,544 | 57 |
https://www.ainewzine.com/2020/10/23/pipelines-marketers-gateway-to-understanding-machine-learning-cmswire/
|
code
|
Marketers are increasingly using machine learning technology to help implement campaign strategies. But the introduction of machine learning can raise programming concerns, concerns which many marketing professionals may only have a surface-level understanding of. If you are a marketing professional who finds themselves in this situation, understanding pipelines is a good place to start learning without being overwhelmed.
A Beginner’s Guide to Machine Learning Pipelines
Pipelines are the process steps necessary to build a machine learning algorithm. “Pipeline” is used by developers to describe the series of events which feed one into the other, from source code on into a production environment. If you research software development you will likely see pipelines labeled for many programming services. For example, Azure pipelines connect Azure cloud services to repositories like GitHub and Bitbucket. Solutions like this are meant to establish an integrated environment for development workflow and offer specific features that work with other related cloud services. For machine learning, pipelines address the statistical planning for the data and the parameters of the produced model. A machine learning pipeline generally consists of several steps, but if you are just starting out with machine learning, it might be easier to think of the process in three parts.
It starts with data acquisition. What data do you need for your model? What source will you pull your data from? After determining this, you have to set up a connection to the data lake or database where the data is housed.
Next it’s time to make some decisions about the data and model. The data is processed statistically, with tasks such as removing outliers and potentially substituting a mean for a few missing data points. Transformation choices set the lines of code that govern how the model reads the data: row preparation, column preparation and data value changes. For example, you may have to conduct a one-hot encoding step to change categorical data into numeric values, eliminating text from the observations entering the model.
The final steps examine model performance. The process consists of running the training and test data to establish model specifications. It also involves verifying the accuracy and precision of the results, with the choice to optimize model parameters as needed.
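As a minimal sketch of these three parts, assuming pandas and scikit-learn as the tooling, with a hypothetical CSV file and column names:

```python
# Minimal sketch of the three pipeline parts using pandas and scikit-learn
# (assumed tooling; the CSV file and column names are hypothetical).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# 1. Data acquisition: pull rows from wherever the data is housed.
df = pd.read_csv("campaign_data.csv")                     # hypothetical source
X, y = df.drop(columns=["converted"]), df["converted"]

# 2. Decisions for the data: scale numeric columns, one-hot encode the
#    categorical ones so no raw text enters the model.
prep = ColumnTransformer([
    ("num", StandardScaler(), ["age", "spend"]),          # hypothetical columns
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel", "region"]),
])
pipeline = Pipeline([("prep", prep), ("model", LogisticRegression())])

# 3. Model performance: fit on training data, verify on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline.fit(X_train, y_train)
print("held-out accuracy:", pipeline.score(X_test, y_test))
```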
Related Article: Here’s Why Every Marketer Needs R Programming
How Marketers Can Help
A marketer may not know enough Python or R programming to create a pipeline — many of those tasks should be accomplished in collaboration with a developer or IT professional. But a marketer can assist in key decisions that feed these programs, such as establishing the initial data variables and their data sources. In a previous post on dimension reduction, I explained why selecting too many variables can make a model untrainable. Marketers can therefore start by eliminating uninteresting data. Another point where marketers can help is in determining the model documentation. Documentation establishes the instructions for running functions at a specified time. YAML is a text-based data language that uses key-value pairs to instruct the model with initial parameter conditions. The pairs are written similarly to those in JSON and XML files. Most current IDE (integrated development environment) solutions like Visual Studio Code can be used to create YAML files. They are essential for debugging nested key-values, which are common in complex applications. Training sites like Tutorials Point can teach the basics and show examples. Pipelines can reveal what marketers should know to better manage their machine learning model alongside more technical professionals. The need to collaborate on technical teams is fast establishing a marketing ops discipline, a complementary discipline to MLOps and DevOps. With the rising demand for tech-savvy marketers, professionals will do well to add pipeline tasks to their professional development plans in the days ahead.
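As mentioned above, YAML expresses nested key-value pairs; a minimal sketch of reading initial model parameters from YAML, assuming the PyYAML package and hypothetical keys:

```python
# Sketch of driving initial model parameters from YAML key-value pairs,
# assuming the PyYAML package; the keys below are hypothetical.
import yaml

config_text = """
model:
  type: logistic_regression
  params:
    C: 1.0
    max_iter: 200
data:
  source: campaign_data.csv
  target: converted
"""

config = yaml.safe_load(config_text)    # nested key-value pairs become dicts
print(config["model"]["params"])        # -> {'C': 1.0, 'max_iter': 200}
```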
Pierre DeBois is the founder of Zimana, a small business digital analytics consultancy. He reviews data from web analytics and social media dashboard solutions, then provides recommendations and web development action that improves marketing strategy and business profitability.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00619.warc.gz
|
CC-MAIN-2023-40
| 4,349 | 7 |
https://2017.jumpstarter.hk/en/event/finalist/74
|
code
|
AMBIT aims to be the best independent geospatial consulting company in Hong Kong, having developed a tradition of excellence in providing professional GIS services to government and a broad range of industries internationally. Throughout these years our GIS team has worked together with various engineering experts, using innovative geospatial technologies to resolve project challenges. Our staff have built decades of success in multi-disciplinary GIS services including engineering, environment, planning as well as water applications. We are a total geospatial solution provider, supplying both data acquisition hardware and geospatial analysis software as well as services to our clients. We specialize in 3D geospatial systems (3D GIS), terrestrial and aerial photogrammetric mapping, and the manufacture of professional aerial mapping drones and cameras.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00898.warc.gz
|
CC-MAIN-2024-10
| 854 | 1 |
https://www.tefter.io/bookmarks/87361/readable
|
code
|
Sure. We model these events – I’m a chronic frequent flier, and we often say that we model agendas to be somewhat like airplane flights. There’s a sort of taxiing and take-off phase, we try to spend as much of our time at cruising altitude as we can, and then we try to “Bring it in for a landing.”
What we did last year, first thing out in the morning was really try and explore some foundational topics, and let people sort of move between a bunch of conversations, at their own pace, in their own sequencing, to sort of understand different facets of sustainability, different analyses of sustainability, and just really start to build some shared understanding.
[00:04:10.14] The bulk of the day was spent in participant-driven sessions. What we mean by participant-driven sessions - we ask folks when they register “Hey, what do you actually want to get out of the event?” and we build a soft agenda slate from those topic suggestions, and then at the event we try to get folks in real-time to come up with additional topics that they would like to see addressed.
We don’t use terms like “unconference.” Those terms are sadly overused and have taken on less and less meaning over time, as everything has been called an unconference. What we try to do is say that it is participant-driven, in that we try to source the material from participants, and we prioritize – if you will indulge the notion that these events are knowledge markets, we’re focused on the knowledge consumers, not the knowledge producers.
Many conferences have what I call a “rich get richer” paradigm. Keynoters keep on keynoting, panelists keep on paneling… It’s the usual suspects class hierarchy. What we try to do at these events is identify where the learning needs are, the growth needs, and the folks that have ideas they could use some help building out, and try to resource those conversations.
We try to bring loving supply from the knowledge supply-side toward those that are looking for answers around sustainability, around project growth and maintenance and governance. In doing it that way, we try to set up sessions that are themselves outcome-oriented. Part of why we say “no slides” is slides are a fail before they start, because they assume the so-called presenter knows what those in the room want to hear… And once in a while, they might get it right. But most of the time, they tend to over-share, over-deliver, over-saturate brains.
We try to set up session formats that are more transactional and more question-driven, where we orient facilitators, we give them some basic ground rules, so that they feel empowered and understand the plan, but we try to emphasize to them the need to, first off, find out why people came to your session, find out what they really wanna know, and try to center the session focus around what they came for, not what you think they should get. That fundamentally transforms participant experience.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670448.67/warc/CC-MAIN-20191120033221-20191120061221-00448.warc.gz
|
CC-MAIN-2019-47
| 2,975 | 7 |
https://www.rdgraphicdesign.co.uk/post/_gold
|
code
|
I came across these designs while playing with shapes to create interesting patterns. I then found them in Computer Arts magazine. They talk about the branding style of QORE and how the gold finishes it off. From the example above, you can see the difference the gold makes. Without the gold the patterns are interesting on their own, but adding the gold gives them a bit more texture and makes them pop a lot more.
In the image below, the magazine talks to the designers, Truf Creative, about how they came to the end design. QORE thought symbolism was important to them and that it should be part of their branding.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621450.29/warc/CC-MAIN-20210615145601-20210615175601-00232.warc.gz
|
CC-MAIN-2021-25
| 614 | 2 |
http://mattharris.tech/
|
code
|
I went to an interesting meetup in Denver created by Learn Cybersecurity Denver, and it was an intro to Capture the Flag.
I gave a talk about using python virtual environments. Lmk if the slides help, and comment if you have any questions.
I’ve been doing some work with CodeForFoco, namely in the area of creating linear models for them. In this post, I cover what I did, and why.
To learn to program, you need two things:
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704820894.84/warc/CC-MAIN-20210127024104-20210127054104-00499.warc.gz
|
CC-MAIN-2021-04
| 418 | 4 |
https://www.lingyuzhu.com/taking-a-shower
|
code
|
Taking a SHOWER
What is this:
This is a study about typography. I was asked to select an interaction activity which is outside digital technology, but comes from everyday experiences. With the selected interaction, I first developed a task analysis, and was then limited to describing the process using typography only. This study allowed me to understand the corresponding visual language of interaction and lay a foundation for more complex works.
My design topic was "Taking a Shower". In this whole experience, people usually go through three phases: preparing for the shower, taking the shower, and putting on clothes after the shower. I had the letter "w" repeated thousands of times to simulate the water, and it divides the space into three parts. I illustrated the task duration, the necessary materials and equipment, and the sounds by using different typefaces, adjusting the font sizes, and adjusting the spacing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00593.warc.gz
|
CC-MAIN-2021-04
| 909 | 5 |
https://anyverse.ai/solutions/
|
code
|
ANYVERSE is a breakthrough synthetic data solution for AD/ADAS Perception.
The ANYVERSE software platform offers a scalable, cloud-based, high-fidelity synthetic dataset production environment. This model allows you to manage the data production process and integrate the solution in your Perception training pipeline.
ANYVERSE engineers can team up with your machine learning, perception or simulation experts to configure the ANYVERSE solution based on your specifications (scene features, environment, camera specs) and produce the datasets you need.
Channels / 3D information
Datasets come with automatic generation of metadata channels including depth, object ID, material ID, instance ID, 3D motion vectors, surface normals, roughness, 3D positions, and radiance.
Every image comes with segmentation data, including semantic, instance and rectangle segmentation at pixel level. More than 40 semantic classes are defined.
Annotated data is available in json, xml and Google protocol buffer formats. Associated metadata, such as time of day, weather conditions, camera parameters, etc., is also included.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524290.60/warc/CC-MAIN-20190715235156-20190716021156-00327.warc.gz
|
CC-MAIN-2019-30
| 1,103 | 7 |
https://nav.uninett.no/wiki/backendprocesses?rev=1192024830
|
code
|
NAV has a number of back-end processes. This document gives an overview, listing key information and detailed description for each process. We also give references to documentation found elsewhere on metaNAV.
The following figure complements this document (the NAV 3.3 snmptrapd is not included in the figure):
The shell command
nav list lists all back-end processes.
nav status tells you if they are running as they should.
With reference to the
nav list, jump directly to the relevant section in this document:
nav list gives you this list:
|Alias||gDD / the snmp data collector|
|Brief description||Collects SNMP data from equipment in the netbox table and stores data regarding the equipment in a number of tables. Does not build topology.|
|Depends upon||Seed data must be filled in the netbox table, either by the Edit Database tool or by the autodiscovery contrib|
|Updates tables||netbox, netboxsnmpoid, netboxinfo, device, module, gwport, gwportprefix, prefix, vlan, swport, swportallowedvlan, netbox_vtpvlan|
|Run mode||Daemon process. Thread based.|
|Default scheduling||Initial data collection for new netboxes is done every 5 minutes. Update polls on existing netboxes are done every 6 hrs. Collection of certain OIDs for a netbox may deviate from this interval; e.g. the moduleMon OID is polled every hour.|
|Log files||getDeviceData.log and getDeviceData/getDeviceData-stderr.log|
|Lines of code||Approx 8200|
|Further doc||tigaNAV report chapter 5|
|Alias||IP-to-mac collector / arplogger|
|Brief description||Collects arp data from routers and stores this information in the arp table.|
|Depends upon||The routers (GW / GSW) must be in the netbox table. To assign prefixes to arp entries, gDD must have done router data collection.|
|Default scheduling||every 30 minutes (0,30 * * * *). No threads|
|Lines of code||Approx 130 lines|
|Further doc||NAVMe report ch 4.5.8 (Norwegian)|
|Alias||Mac-to-switch port collector / getBoksMacs / cam logger|
|Brief description||Collects mac addresses behind switch table data for all switches (cat GSW, SW, EDGE). The process also checks for spanning tree blocked ports.|
|Depends upon||gDD must have completed the swport tables for the switches.|
|Updates tables||cam (mac addresses), netboxinfo (CDP neighbors), swp_netbox (the candidate list for the physical topology builder), swportblocked (switch ports that are blocked by spanning tree for a given vlan).|
|Default scheduling||every 15 minutes ( 11,26,41,56 * * * * ). 24 threads as default|
|Lines of code||Approx 1400|
|Further doc||NAVMore report ch 2.1 (Norwegian), tigaNAV report ch 5.4.5 and ch 5.5.3|
The following is cut and paste from the referenced chapters in NAVMore and tigaNAV. Rewrite/translate to one text.
The cam logger runs every 15 minutes. All switches (SW and EDGE) are fetched from the database and placed in a queue. A number of threads (24 by default) are then started, all working against this queue. Each thread checks whether there is a box in the queue; if so, it takes it out and queries it with SNMP. When it is done, it checks the queue again. If the queue is empty, the thread exits. All threads thus keep fetching boxes from the queue until it is empty, at which point every remaining box has a thread working on it. This ensures that all threads always have work to do, even though some boxes take much longer to collect data from than others.
The cam logger, responsible for the collection of MAC addresses and CDP data, has been updated to make use of the OID database. This has greatly simplified its internal structure as all devices are now treated in a uniform manner; the immediate benefit is that data collection is no longer dependent on type information and no updates should be necessary to support new types. Upgrades in the field can happen without the need for additional updates to the NAV software.
The cam logger collects the bridge tables of all switches, saving the MAC entries in the cam table of the NAVdb. Additionally, it collects CDP data from all switches and routers supporting this feature; the result is saved in the swp_netbox table for use by the network topology discover system.
While its basic operation remains the same, it has been rewritten to take advantage of the OID database; the internal data collection framework has been unified and all devices are treated in the same manner. Thus, data collections are no longer based on type information and a standard set of OIDs are used for all devices. When a new type is added to NAV the cam logging should “just work”, which is a major design goal of NAV v3.
One notable improvement is the addition of the interface field in the swport table. It is used for matching the CDP remote interface, and makes this matching much more reliable. Also, both the cam and the swp_netbox tables now use netboxid and ifindex to uniquely identify a swport port instead of the old netboxid, module, port-triple. This has significantly simplified swport port matching, and especially since the old module field of swport was a shortened version of what is today the interface field, reliability has increased as well. -
|Process name||networkDiscovery.sh topology|
|Alias||Physical Topology Builder|
|Brief description||Builds the physical topology of the network; i.e. which netbox is connected to which netbox.|
|Depends upon||mactrace fills data in swp_netbox representing the candidate list of physical neighborship. This is the data that the physical topology builder uses.|
|Updates tables||Sets the to_netboxid and to_swportid fields in the swport and gwport tables.|
|Default scheduling||every hour (35 * * * *)|
|Log file||networkDiscovery/networkDiscovery-topology.html and networkDiscovery/networkDiscovery-stderr.log|
|Lines of code||Approx 1500 (shared with vlan topology builder)|
|Further doc||tigaNAV report ch 5.5.4|
This is cut and paste from the tigaNAV report. Consider a rewrite.
The network topology discovery system automatically discovers the physical topology of the network monitored by NAV based on the data in the swp_netbox table collected by the cam logger. No major updates have been necessary except for adjustment to the new structure of the NAVdb; the basic algorithm remains the same. While the implementation of said algorithm is somewhat complicated as to gracefully handle missing data, the following is a simplified description:
In practice the use of CDP makes this process very reliable for the devices supporting it, and this makes it easier to correctly determine the remaining topology even in the case of missing information.
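Purely as a generic illustration of the idea of pruning a neighbor-candidate list (not NAV's actual algorithm), a port's direct neighbor can be taken to be the candidate behind which all of the port's other candidates can be found; the candidate data below is hypothetical:

```python
# Generic illustration of candidate-list pruning (not NAV's algorithm):
# a port's direct neighbor is the candidate behind which all the port's
# other candidates are seen. The data below is hypothetical.
CANDIDATES = {                   # (box, port) -> boxes seen behind that port
    ("sw1", "1"): {"sw2", "edge1", "edge2"},
    ("sw2", "1"): {"edge1"},
    ("sw2", "2"): {"edge2"},
}

def behind(box: str) -> set:
    """Everything seen behind any port of the given box."""
    return set().union(set(), *(seen for (b, _), seen in CANDIDATES.items() if b == box))

def neighbor(box: str, port: str):
    candidates = CANDIDATES[(box, port)]
    for cand in candidates:
        if (candidates - {cand}) <= behind(cand):
            return cand          # all other candidates sit behind this one
    return None

print(neighbor("sw1", "1"))      # -> sw2 (edge1 and edge2 are behind sw2)
```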
|Process name||networkDiscovery.sh vlan|
|Alias||Vlan Topology Builder|
|Brief description||Builds the per-vlan topology on the switched network with interconnected trunks. The algorithm is a top-down depth-first traversal starting at the primary router port for the vlan.|
|Depends upon||The physical topology needs to be in place; this process therefore runs after the physical topology builder.|
|Default scheduling||every hour (38 * * * *)|
|Log file||networkDiscovery/networkDiscovery-vlan.html and networkDiscovery/networkDiscovery-stderr.log|
|Lines of code||See the physical topology builder above|
|Further doc||tigaNAV report ch 5.5.5|
This is cut and paste from the tigaNAV report. Consider a rewrite.
After the physical topology of the network has been mapped by the network topology discover system it still remains to explore the logical topology, or the VLANs. Since modern switches support trunking, which can transport several independent VLANs over a single physical link, the logical topology can be non-trivial and indeed, in practice it usually is.
The vlan discovery system uses a simple top-down depth-first graph traversal algorithm to discover which VLANs are actually running on the different trunks and in which direction. Direction is here defined relative to the router port, which is the top of the tree, currently owning the lowest gateway IP or the virtual IP in the case of HSRP. In addition, since NAV v3 now fully supports the reuse of VLAN numbers, the vlan discovery system will also make the connection from VLAN number to actual vlan as defined in the vlan table for all non-trunk ports it encounters.
A special case are closed VLANs which do not have a gateway IP; the vlan discovery system will still traverse these VLANs without setting any direction and also creating a new VLAN record in the vlan table. The NAV administrator can fill inn descriptive information afterward if desired.
The implementation of this subsystem is again complicated by factors such as the need for checking at both ends of a trunk if the VLAN is allowed to traverse it, the fact that VLAN numbers on each end of non-trunk links need not match (the number closer to the top of the tree should then be given precedence and the lower VLAN numbers rewritten to match), that both trunks and non-trunks can be blocked (again at either end) by the spanning tree protocol and of course that it needs to be highly efficient and scalable in the case of large networks with thousands of switches and tens of thousands of switch ports.
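As an illustration only (this is not NAV's actual implementation), a top-down depth-first traversal of trunks for a single VLAN could look like the following sketch; the topology data is hypothetical:

```python
# Illustrative top-down depth-first traversal for one VLAN (not NAV's
# actual implementation; the trunk data below is hypothetical). Traversal
# starts at the router port owning the VLAN and only descends through
# trunks that allow the VLAN.
TRUNKS = {                       # box -> [(neighbor, vlans allowed on trunk)]
    "router": [("sw1", {10, 20})],
    "sw1": [("sw2", {10}), ("sw3", {20})],
    "sw2": [],
    "sw3": [],
}

def discover(vlan: int, node: str, seen: set) -> None:
    seen.add(node)
    for neighbor, allowed in TRUNKS[node]:
        # A real implementation checks both ends of the trunk and
        # spanning-tree blocking; this sketch checks one allowed-list only.
        if neighbor not in seen and vlan in allowed:
            print(f"vlan {vlan} runs on trunk {node} -> {neighbor}")
            discover(vlan, neighbor, seen)

discover(10, "router", set())
# vlan 10 runs on trunk router -> sw1
# vlan 10 runs on trunk sw1 -> sw2
```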
|Alias||The status monitor / parallel pinger|
|Brief description||Pings all boxes in the netbox table. Works efficiently in parallel, being able to ping a large number of boxes. Has configurable robustness criteria for defining when a box actually is down.|
|Depends upon||Netboxes to be in the netbox table.|
|Updates tables||Posts events on the eventq table. Sets the netbox.up value in addition.|
|Default scheduling||Pings all hosts every 20 seconds. Waits a maximum of 5 seconds for an answer. After 4 “no-answers” the box is declared down as seen from pping.|
|Lines of code||Approx 4200, shared with servicemon|
|Further doc||NAVMore report ch 3.4 (Norwegian)|
pping is a daemon with its own (configurable) scheduling. pping works in parallel, which makes each ping sweep very efficient. The frequency of ping sweeps is by default 20 seconds. The maximum allowed response time for a host is 5 seconds (by default). A host is declared down on the event queue after four consecutive “no responses” (also configurable). This means that it takes between 80 and 99 seconds from when a host goes down until pping declares it down.
Please note that the event engine has a grace period of one minute (configurable) before a “box down warning” is posted on the alert queue, and another three minutes before the box is declared down (also configurable). In summary, expect 5-6 minutes before the host is declared down.
The configuration file
pping.conf lets you adjust the following:
|user||the user that runs the service||navcron|
|packet size||size of the icmp packet||64 byte|
|check interval||how often you want to run a ping sweep||20 seconds|
|timeout||seconds to wait for reply after last ping request is sent||5 seconds|
|nrping||number of requests without answer before marking the device as unavailable||4|
|delay||ms between each ping request||2 ms|
In addition you can configure debug level, location of log file and location of pid file.
Note: In order to uniquely identify the icmp echo response packets pping needs to tailor make the packets with its own signature. This delays the overall throughput a bit, but pping can still manage 90-100 hosts per second, which should be sufficient for most needs.
pping has three threads:
1. Thread 1 generates and sends out the icmp packets.
2. Thread 2 receives echo replies, checks the signature and stores the result to RRD.
3. The main thread does the main scheduling and reports to the event queue.
Thread 1 works this way. For every host:
1. Generate an icmp echo packet with (destination IP, timestamp, signature).
2. Send the icmp echo.
3. Add the host to the "Waiting for response" queue.
4. Sleep for the configured delay in ms (default 2 ms). This delay spreads out the response times, which in turn reduces the receive thread's queue and in effect makes the measured response times more accurate.
Thread 2 works this way. As long as thread 1 is operating and as long as there are hosts in the "Waiting for response" queue, with a timeout of 5 seconds (configurable):
1. Check whether we have received packets.
2. Get the data (the icmp reply packet).
3. Verify that the packet belongs to our pid.
4. Split the packet into (destination IP, timestamp, signature). If the IP or the signature is wrong, discard the packet.
5. If we recognize the IP address on the "Waiting for response" queue, update the response time for the host and remove the host from the queue.
When thread 2 finishes, the sweep is over. If hosts remain on the "Waiting for response" queue, their response time is set to "None" and the "number of consecutive no-replies" counter is incremented for each of them. When the main thread detects that a host has too many no-replies, a down event is posted on the event queue.
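A much-simplified illustration of this worker model (not NAV's actual code, which builds raw ICMP packets carrying its own signature) is sketched below; the system "ping" binary with Linux-style flags is an assumption, used so the sketch runs without raw-socket privileges:

```python
# Simplified sketch of a parallel pinger: a queue of hosts, a pool of
# worker threads, and a consecutive-failure counter per host.
import queue
import subprocess
import threading

HOSTS = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]   # hypothetical netboxes
NRPING = 4                       # no-answers before declaring the box down
failures = {host: 0 for host in HOSTS}
lock = threading.Lock()

def worker(jobs: "queue.Queue[str]") -> None:
    while True:
        try:
            host = jobs.get_nowait()
        except queue.Empty:
            return               # queue drained, thread exits
        # One echo request, 5 second timeout (cf. the defaults above).
        ok = subprocess.run(["ping", "-c", "1", "-W", "5", host],
                            stdout=subprocess.DEVNULL).returncode == 0
        with lock:
            failures[host] = 0 if ok else failures[host] + 1
            if failures[host] >= NRPING:
                print(f"{host}: would post a down event on the event queue")

jobs: "queue.Queue[str]" = queue.Queue()
for host in HOSTS:
    jobs.put(host)
threads = [threading.Thread(target=worker, args=(jobs,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```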
Note that the response times are recorded to RRD which gives us response time and packet loss data as an extra bonus.
|Alias||The service monitor|
|Brief description||Monitors services on netboxes. Uses implemented handlers to deal with a growing number of services; currently supporting ssh, http, imap, pop3, smtp, samba, rpc, dns and dc.|
|Depends upon||The service and serviceproperty tables must have data. This is filled in by Edit Database when the NAV administrator registers services that he wants to monitor.|
|Updates tables||Posts servicemon events on the eventq table.|
|Default scheduling||Checks each service every 60 seconds. Has varying timeouts for different services, between 5 and 10 seconds. If a service does not respond three times in a row, servicemon declares it down.|
|Lines of code||See pping above, shared code base|
|Further doc||NAVMore report ch 3.5 (Norwegian)|
|Alias||The threshold monitor|
|Brief description||At run-time, it fetches all the thresholds in the RRD database and compares them to the datasource in the corresponding RRD file. If the threshold has been exceeded, it sends an event containing relevant information. The default threshold value is 90% of maximum.|
|Depends upon||The RRD database has to be filled with data. This is done by makeCricketconfig. In addition you must manually run a command line tool, fillthresholds.py, to set the threshold to a configured level. A more advanced solution for setting different thresholds is under development.|
|Updates tables||eventq with thresholdmon events|
|Default scheduling||every 5 minutes ( */5 * * * * )|
|Lines of code||Approx 400|
|Further doc||See ThresholdMonitor|
|Process name||getDeviceData data plugin moduleMon|
|Alias||The module monitor|
|Brief description||A plugin to gDD. A dedicated OID is polled. If this is an HP switch, a specific HP OID is used (oidkey hpStackStatsMemberOperStatus); similarly for 3Com (oidkey 3cIfMauType). For other equipment the generic moduleMon OID is used. For 3Com and HP the OID actually tells us whether a module is down or not. For the generic test we (for lack of something better) check whether an arbitrary ifindex on the module in question responds. If the module has no ports, no check is done.|
|Depends upon||The switch or router to be processed by gDD with appropriate data in module and gwport/swport.|
|Updates tables||posts moduleMon events on the eventq. Sets in addition the boolean module.up value.|
|Run mode||daemon, a part of gDD.|
|Default scheduling||Depends on the defaultfreq of the moduleMon OID (equivalently for the HP and 3Com OIDs). Defaults to 1 hour.|
|Config file||see gDD|
|Log file||see gDD|
|Lines of code||Part of gDD, see gDD.|
|Further doc||Not much.|
Also see the event- and alert system page.
|Alias||The event engine|
|Brief description||The event engine processes events on the event queue and posts alerts on the alert queue. Event engine has a mechanism to correlate events; i.e. if the ppinger posts up events right after down events, this will not be sent as boxDown alerts, only boxDown warnings. Further if a number of boxDown events are seen, event engine looks at topology and reports boxShadow events for boxes in shadow of the box being the root cause.|
|Depends upon||The various monitors need to post events on the event queue (with target event engine) in order for event engine to have work. alertmsg.conf needs to be filled in for all events; messages on alertqmsg (and alerthistmsg) are formatted accordingly.|
|Updates tables||Deletes records from eventq as they are processed. Posts records on alertq with adhering alertqvar and alertqmsg, similarly alerthist with adhering alerthistvar and alerthistmsg.|
|Default scheduling||Event engine checks the eventq every ??? seconds. boxDown-warning-wait-time and boxDown-wait-time are configurable values. Parameters for module events are also configurable. Servicemon events currently are not; a solution is being looked into.|
|Lines of code||Approx 3000 lines|
|Further doc||NAVMore report ch 3.6 (Norwegian). Updates in tigaNAV report ch 4.3.1.|
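As an illustrative sketch of the correlation idea (not NAV's actual code): a down box whose path toward the NAV server passes through another down box is only in shadow, while the first down box on a working path is the root cause. The topology below is hypothetical:

```python
# Illustrative shadow classification (not NAV's actual code).
UPLINKS = {                     # box -> next box toward the NAV server
    "gw1": "nav", "sw1": "gw1", "sw2": "gw1", "edge1": "sw2",
}
DOWN = {"gw1", "sw2", "edge1"}  # boxes that stopped answering ping

def classify(box: str) -> str:
    """Return boxDown for root causes, boxShadow for boxes behind them."""
    parent = UPLINKS.get(box)
    while parent is not None and parent != "nav":
        if parent in DOWN:
            return "boxShadow"  # something between us and the box is down
        parent = UPLINKS.get(parent)
    return "boxDown"

for box in sorted(DOWN):
    print(box, "->", classify(box))
# edge1 -> boxShadow, gw1 -> boxDown, sw2 -> boxShadow
```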
|Alias||The maintenance engine|
|Brief description||Checks the defined maintenance schedules. If the start or end of a maintenance period occurs at the current run time, the relevant maintenanceEvents are posted on the eventq, one for each netbox/module and/or service in question.|
|Depends upon||NAV users must set up maintenance schedules, which in turn are stored in the maintenance tables (emotd, maintenance, emotd_related).|
|Updates tables||Posts maintenance events on the eventq. Also updates the maintenance.state.|
|Default scheduling||Every 5 minutes ( */5 * * * * )|
|Lines of code||Approx 300|
|Further doc||tigaNAV report ch 8.|
|Alias||The alert engine|
|Brief description||Alert engine processes alerts on the alert queue and checks whether any users have subscribed to the alert in their active user profile. If so, alert engine sends the alert to the user, either as email or sms, depending on the profile. Alert engine sends email itself, whereas sms messages are inserted on the sms queue for the sms daemon to manage. If a user has selected queueing of email messages, alert engine uses the alertprofiles.queue table.|
|Depends upon||eventEngine must be running and do the alertq posting. NAV users must have set up their profiles; if there are no matches, the alerts are simply deleted from alertq.|
|Updates tables||Deletes records from alertq with adhering alertqvar and alertqmsg. Inserts records on alertprofiles.smsq. For user profiles that require queued email messages, the alertprofiles.queue table is used.|
|Default scheduling||Checks for new alerts every 60 seconds per default.|
|Log file||alertengine.log and alertengine.err.log|
|Lines of code||Approx 1900|
|Further doc||NAVMore report ch 3.7 and 3.8 (Norwegian).|
|Alias||The SMS daemon|
|Brief description||Checks the sms queue for new messages, formats the messages into one SMS and dispatches it via one or more dispatchers with a general interface. Support for multiple dispatchers is handled by a dispatcher handler layer.|
|Depends upon||alertEngine fills the smsq|
|Updates tables||Updates the sent and timesent values of navprofiles.smsq|
|Run mode||Daemon process|
|Default scheduling||Polls the sms queue every x minutes|
|Programming language||Python in NAV 3.2 (perl in 3.1)|
|Lines of code||In NAV 3.2: approx 1200|
|Alias||The SNMP trap daemon|
|Brief description||Listens to port 162 for incoming traps. When the snmptrapd receives a trap it puts all the information in a trap-object and sends the object to every traphandler stated in the “traphandlers” option in snmptrapd.conf. It is then up to the traphandler to decide if it wants to process the trap or just discard it.|
|Updates tables||Depends on “traphandlers”. Posts on eventq would be typical|
|Run mode||Daemon process|
|Log file||snmptrapd.log and snmptraps.log|
|Lines of code||Approx 200 + traphandlers|
|Alias||The Cricket configuration builder|
|Depends upon||That gDD has filled the gwport, swport tables (and more…)|
|Updates tables||The RRD database (rrd_file and rrd_datasource)|
|Default scheduling||Every night( 12 5 * * * )|
|Lines of code||Approx 1600|
|Further doc||How to configure Cricket addons in NAV v3|
|Alias||cricket collector (not NAV)|
|Brief description||Polls routers and switches for counters as configured in the cricket configuration tree.|
|Depends upon||makecricketconfig to build the configuration tree|
|Updates tables||Updates RRD files|
|Default scheduling||Every 5 minutes (Pre-NAV 3.2 had a one-minute run mode for gigabit ports. As of NAV 3.2, 64-bit counters are used and the 5-minute run mode is used for all counters).|
|Config files||directory tree under cricket-config/|
|Log file||cricket/giga.log and cricket/normal.log|
|Programming language||not relevant|
|Lines of code||not relevant|
|Further doc||not relevant|
|Alias||RRD cleanup script|
|Brief description||This script finds all the rrd-files that we are using. The purpose of the script is to delete all the rrd-files that are no longer active, to save disk-space.|
|Default scheduling||nightly ( 0 5 * * * )|
|Lines of code||Approx 200|
|Alias||The Cisco syslog analyzer|
|Brief description||Analyzes cisco syslog messages from switches and routers and inserts them in a structured manner into the logger database. Makes searching for log messages of a certain severity easy, etc.|
|Depends upon||syslogd to run on the NAV machine. Parses the syslog for cisco syslog messages.|
|Updates tables||The tables in the logger database|
|Default scheduling||every minute ( * * * * * )|
|Lines of code||Approx 350|
|Further doc||NAVMore report ch 2.4 (Norwegian).|
|Config file||arnold/arnold.cfg, arnold/noblock.cfg and arnold/mailtemplates/*|
|Lines of code|
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00361.warc.gz
|
CC-MAIN-2023-06
| 21,720 | 174 |
https://giftbox.gameru.net/gifts/1914
|
code
|
Graphics:NVIDIA GeForce 8600 GT / ATI Radeon HD 2600 XT or greater
Hard Drive:20 GB HD space
Other Requirements:Broadband Internet connection
Additional:Initial installation requires one-time internet connection for Steam authentication; software installations required (included with the game) include Steam Client, Visual C++ 2008 Redistributable, DirectX and Microsoft .NET 4.
OS: 10.8.5 (Mountain Lion)
Processor: 2.0 GHz Intel Core 2 Duo (Dual-Core)
Memory: 4 GB RAM
Hard Disk Space: 20 GB
Video Memory: 256 MB
Video Card: AMD HD4000 / NVIDIA 9000 Series (See NOTICE for details)
Additional: Broadband Internet Connection.
NOTICE:The following graphics cards are not supported: ATI X1xxx series, ATI HD2xxx series, Intel GMA series, NVIDIA 7xxx series, NVIDIA 8xxx series. The following cards require you to have 8GB of system RAM: NVIDIA 320M, NVIDIA 9400 and Intel HD3000.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00415.warc.gz
|
CC-MAIN-2020-10
| 879 | 12 |
https://www.bungie.net/ru/Forums/Post/71265757?sort=0&page=0&path=1
|
code
|
I guess this is happening to more people, but I logged in today and found I have lost progress. Skills are unlearned; loot, bounties, etc. disappeared, legendaries and exotics among them! The character has rolled back a few days.
I have been playing the last few days and found a lot of connection errors involving several animals, including the dreaded centipede.
Please help, I will avoid playing the game in the hope that my lost progress can be restored.
Console: Playstation 4
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00480.warc.gz
|
CC-MAIN-2020-24
| 473 | 4 |
https://magicmonster.com/kb/os/linux/daemons/
|
code
|
Starting and Stopping Linux Daemons
See also upstart if you are using Ubuntu.
You can start and stop installed daemons such as 'ntp' (network time protocol – this will synchronize
your clock) by running the start and stop scripts in /etc/init.d:
# /etc/init.d/ntp start
ntpd: time slew -0.014128s
Starting network time protocol daemon (NTPD)    done
# /etc/init.d/ntp stop
Shutting down network time protocol daemon (NTPD)    done
To have this turn on automatically when the server is rebooted you need to know about runlevels.
A runlevel is a mode the server is currently in. You can see the different levels in /etc/inittab.
My box has the following:
runlevel 0 is System halt (Do not use this for initdefault!)
runlevel 1 is Single user mode
runlevel 2 is Local multiuser without remote network (e.g. NFS)
runlevel 3 is Full multiuser with network
runlevel 4 is Not used
runlevel 5 is Full multiuser with network and xdm
runlevel 6 is System reboot (Do not use this for initdefault!)
To see the current runlevel, use the runlevel command:
# runlevel
N 5
This tells us we are in runlevel 5.
To change the daemons to be stopped or started at different runlevels, use the chkconfig tool.
To turn on ntp when runlevel 5 is reached, run:
# chkconfig ntp 5
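The exact syntax varies by distribution; the form above is the SUSE-style one. A hedged sketch of the Red Hat-style equivalents (check your distribution's man page before relying on these):
# list the runlevels ntp is currently enabled for
chkconfig --list ntp
# enable ntp in runlevel 5 only
chkconfig --level 5 ntp on
# enable ntp in the default runlevels (2, 3, 4 and 5)
chkconfig ntp on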
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00829.warc.gz
|
CC-MAIN-2023-40
| 1,192 | 15 |
https://www.mertsarica.com/you-can-run-but-you-cant-hide/
|
code
|
Once upon a time, back when fleas were barbers and horses were jesters, there was a threat actor. This threat actor had sent an email to top-level employees of the institutions he targeted, with an HTML file attached. When this HTML file was opened, and the link address (https://go0gle-drive[.]blogspot[.]com) followed, the targeted person was directed to an address on the mega.nz file storage and sharing site (https://mega[.]nz/file/axlmBSxR). If this file was downloaded and run, the threat actor could remotely control the targeted system, making all kinds of mischief, including recording audio, video and keystrokes. According to legend, some network-based sandbox systems could not analyze the link address contained in this HTML file sent by the threat actor.
When an institution faces a scenario like the one described above, even if the attack attempt is not successful, it should still handle the matter with great care because this may be an indication of a precursor earthquake, and a sign of a bigger one to come. Therefore, it is important to investigate whether the attack was targeted (Spear Phishing), organized (APT), or just a part of a general campaign targeting a large number of users. It may not always be possible to find answers to these questions, but through analysis, an idea may be gained. In this writing, I will attempt to find answers to these questions.
Initially, through static analysis, I saw that the file was developed and packaged with C#. When I ran the file on a virtual system and analyzed it dynamically, I discovered that the malware accessed an address on the Pastebin site. When I visited this web address, I saw that the page contained an IP address (184.108.40.206) and a port.
Especially in APT attacks, the malware used is often specially developed by the threat actors and compiled just before the attack, so when it is uploaded to VirusTotal, it is usually detected under a generic signature name (Backdoor, Trojan, etc.). In such cases, it may be possible to use services like Intezer to search for which other malware this malware's code was used in, make comparisons, and thus gain information about the threat actor.
When I uploaded the malware to VirusTotal, I saw that it was not specifically matched with any other malware. When I searched on Intezer, unfortunately, I came up empty handed. (Generic Malware)
When I searched the IP address I obtained from the Pastebin.com page on VirusTotal, I found out that it belongs to the Portmap.io service which serves for redirecting ports.
As I continued my research into what the malware developed by this threat actor, who tries to hide himself as much as possible, was actually doing, I reached the stage of dynamic code analysis, and the dnSpy debugger that I used in my article titled OPSEC came to my aid. Before starting debugging with dnSpy, in order to find the main module that the packaged software hides in memory, I ran the ExtremeDumper tool, and the mother of evils, Stub.exe, emerged.
As I analyzed the Stub.exe program step by step with dnSpy, at one point, I noticed that it was encrypted with AES and the decrypted value of 0.5.6B caught my attention. When I searched this value on Google with the keywords “rat 0.5.6B,” guess what came up? The open-source AsyncRAT! :)
After examining this project in detail on GitHub, I was able to confirm that the malware I analyzed is AsyncRAT by inferring it from similar code blocks.
Finally, when I searched for similar Stub.exe files with vhash on VirusTotal, I encountered many examples. As I wondered whether all these examples had the Pastebin address from the malware I analyzed, or were part of a common campaign, either I would have to examine the analysis report of each of more than 50 examples or find a very short and practical way which is suitable for lazy people. :) After starting to think in a cunning way, the idea of preparing a tool in Python that analyzes all these examples statically, first finds the AES encryption key and then extracts the configuration information came to my mind.
Of course, since the variable names are randomly generated in each program, I first had to find the AES key by using a static value. Since we know that programs developed with .NET are compiled into bytecode (CIL/MSIL), I started to search for static values in the bytecode.
For this, I decided to take advantage of the Mono Disassembler (monodis) tool, which is part of the famous Mono project. Using the monodis tool, I disassembled all the Stub.exe samples, and I found that the AES encryption key always appears after the 0x288c value, at the IL_003c instruction. Using this information, I developed the AsyncRAT Configuration Extractor tool in Python. When I ran the tool on all the samples, I found that the configuration information of each one differed from the malware I analyzed, so I learned that the malware I analyzed was not part of a common campaign.
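A hedged sketch of that extraction idea in Python (not the author's published tool; it assumes monodis is on the PATH and that the key literal is the ldstr at offset IL_003c, as described above):
import re
import subprocess
import sys

def extract_key(path):
    # Disassemble the .NET binary to IL text with monodis
    il = subprocess.run(["monodis", path], capture_output=True, text=True).stdout
    # ldstr pushes a string literal; grab the one at offset IL_003c
    match = re.search(r'IL_003c:\s+ldstr\s+"([^"]*)"', il)
    return match.group(1) if match else None

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        print(sample, extract_key(sample))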
In conclusion, after compiling and collecting all this information, it appears that while this cyber attack attempt is not an APT attack, it is part of a targeted attack (Spear Phishing). Especially in light of the increase in such targeted cyber attack attempts after the Covid-19 pandemic, I recommend that organizations and employees be very careful.
Hope to see you in the following articles.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819067.85/warc/CC-MAIN-20240424045636-20240424075636-00283.warc.gz
|
CC-MAIN-2024-18
| 5,354 | 14 |
https://davidecipullo.com/
|
code
|
Welcome to my website!
I am a Ph. D. candidate at the Department of Economics, Uppsala University, and I will be on the academic job market in Fall 2020. I am also affiliated with the Center for Economic Studies and Ifo Institute for Economic Research (CESifo) and with the Uppsala Centre for Fiscal Studies (UCFS). Please read my CV here.
My Job market Paper, “Gender Gaps in Political Careers: Evidence from Competitive Elections” investigates the impact of voter support on the representation of women in the political profession. Please read my JMP here.
I primarily conduct research in political economy and public economics. See more here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529080.43/warc/CC-MAIN-20210122020254-20210122050254-00537.warc.gz
|
CC-MAIN-2021-04
| 649 | 4 |
https://www.carbonkit.net/categories/International_electricity_by_DEFRA
|
code
|
International electricity by DEFRA
Note: Although DEFRA presents data for the UK separately from those of other countries/regions, the data for the UK is included herein for completeness.
Emissions are calculated, according to this methodology, on the basis of emissions factors which relate a quantity of grid electricity to an associated quantity of greenhouse gas emissions. These emissions factors are based on the average annual emissions intensity of grid electricity in each country (or other region), as specified on a per kWh basis. By multiplying a quantity of electricity by the appropriate factor, the total quantity of emissions is calculated.
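The arithmetic is a single multiplication. A minimal sketch in Python (the factor shown is purely illustrative, not a DEFRA value):
# kWh consumed times a per-kWh emissions factor gives kg of CO2.
def electricity_emissions(kwh, factor_kg_per_kwh=0.43):
    # 0.43 kg CO2/kWh is an illustrative placeholder, not a DEFRA figure
    return kwh * factor_kg_per_kwh

print(electricity_emissions(1000))  # -> 430.0 kg CO2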
The comprehensive dataset included within this methodology differentiates several aspects of electricity-associated emissions, including: (1) the activities of generation, transmission and consumption; (2) direct, indirect and life cycle emissions; (3) historical (raw annual or annual rolling average) data.
Activity type: Separate data are available for greenhouse gas emissions attributable to electricity generation, electricity transmission and distribution, and electricity consumption. All greenhouse gas emissions are ultimately associated with the generation phase, being a consequence of fuel/energy consumption (direct emissions) and related activities (indirect: e.g. fuel sourcing, transport) at the power plant. However, since there are usually losses associated with the distribution and transmission of electricity, the quantity of emissions per unit of electricity generated (e.g. kWh) usually differs from the corresponding per unit emissions at the point of consumption. The DEFRA dataset provides values for greenhouse gas quantities which are attributable to the intermediate transmission phase, although these should be understood to be an accounting convenience reflecting transmission (in)efficiency rather than actual emissions caused during this phase. The emissions associated with electricity consumption are simply the sum of those attributable to generation and transmission. In most cases - i.e. those in which the final, end-point of electricity usage is under consideration - the values for consumption should be used.
Emission type: The DEFRA dataset differentiates between direct and indirect greenhouse gas emissions. 'Direct' emissions are limited to those associated with activities at the power plant, while 'indirect' emissions refer to those which derive from other stages in the production chain such as raw material extraction and fuel delivery. The combination of these two types of emission represents full life cycle emissions for electricity. The importance of these emissions types will vary depending on how the user attributes the various portions of the life cycle emissions to the various agents involved (e.g. supplier, producer, consumer). It is most common to use direct emissions only when considering electricity consumption. Indirect and life cycle emissions are expressed in terms of CO2e.
Historical values: The greenhouse gas emissions produced per unit of electricity generated/consumed varies through time as the mix of fuels used by power stations supplying a national or regional grid changes. These changes may reflect variations in electricity demand or the relative prices of different fuel types. DEFRA publishes annual emissions factors based on the average quantity of emissions per unit of electricity across the grid during each calendar year. These values cover the period 1990-2006. In addition to the raw annual values, DEFRA provides 'rolling average' emissions factors which represent the average of the previous 5 years for each given year. These are suggested as being more suitable for inter-annual comparisons by DEFRA.
How to use this category
Choosing an emissions scenario
To use this category, choose the country or region by using the country drill down choice and the type of activity using the type drill down choice. For the latter, the following options are available:
- electricity generation
- electricity distribution and transmission
- electricity consumption
Specifying activity data
The quantity of electricity under consideration is specified by setting the energyConsumed profile item value.
DEFRA publishes historical annual data for direct CO2 emissions for each country/region within this category. If calculating using the direct and annual options, users can specify start- and end-dates in association with their electricity consumption and CarbonKit will use its data item value history functionality to apply the appropriate emissions factor(s). If no profile dates are set, the most recent (i.e. 2006) data are used.
The returned quantity represents the emissions associated with the energy quantity (and dates) specified. The following discrete amounts are returned:
- directCO2AnnualBasis: Direct CO2 emissions calculated on the basis of annual emissions factors
- directCO2RollingBasis: Direct CO2 emissions calculated on the basis of rolling average emissions factors
- indirectCO2e: Indirect CO2 emissions
- lifeCycleCO2e: total life cycle CO2e emissions
Additional data item values
This category also contains data item values representing the mix of electricity and heat production from which the emissions data are derived for each country/region - these are available from the CarbonKit data API under the following paths:
- percentElectricity: Percentage of total energy production represented by electricity
- percentHeat: Percentage of total energy production represented by heat
- percentLossesElectricity: Percentage of losses attributable to electricity production
- percentLossesHeat: Percentage of losses attributable to heat production
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644683.18/warc/CC-MAIN-20230529042138-20230529072138-00320.warc.gz
|
CC-MAIN-2023-23
| 5,728 | 27 |
https://www.tcs.tifr.res.in/web/events/839
|
code
|
We consider election scenarios with incomplete information, a situation that arises often in practice. There are several models of incomplete information and accordingly, different notions of outcomes of such elections. In one well-studied model of incompleteness, the votes are given by partial orders over the candidates. In this context we can frame the problem of finding a possible winner, which involves determining whether a given candidate wins in at least one completion of a given set of partial votes for a specific voting rule.
The possible winner problem is well known to be NP-complete in general, and it is in fact known to be NP-complete for several voting rules even when the number of undetermined pairs in every vote is bounded by a constant. In this paper, we address the question of determining precisely the smallest number of undetermined pairs for which the possible winner problem remains NP-complete. In particular, we find the exact values of t for which the possible winner problem transitions from being in P to being NP-complete, where t is the maximum number of undetermined pairs in every vote. We demonstrate tight results for a broad subclass of scoring rules which includes all the commonly used scoring rules (such as plurality, veto, Borda, and k-approval), Copeland^α for every α ∈ [0,1], maximin, and Bucklin voting rules. A somewhat surprising aspect of our results is that for many of these rules, the possible winner problem turns out to be hard even if every vote has at most one undetermined pair of candidates.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474541.96/warc/CC-MAIN-20240224144416-20240224174416-00579.warc.gz
|
CC-MAIN-2024-10
| 1,557 | 2 |
https://codedump.io/share/TPBRg9dQznjQ/1/an-easy-way-to-show-images-in-django-on-deployment-debugfalse
|
code
|
I am using Django 1.8 and Python 3.4.3, and I have been running my app in Debug mode. I found a way to show images inside a directory configured as MEDIA_ROOT; this was my first question and the solution I found: How to upload and show images in Django. But reading the docs I found that that solution is not suitable for a served app, so if I stop using "Debug=True" the images will not be displayed, and I have to use one of the options exposed in this link: Static files on deployment. But I don't have money to pay for another server, I can only pay for my hosting on pythonanywhere, and for the option of using the same server for the images, I have no idea how to automate the process.
Django is not intended for serving up static files in a production environment.
If you are intending to use django's runserver to server up static files with DEBUG=False then use the --insecure flag.
You should never deploy a site with DEBUG = True due to security implications.
Here is a guide from pythonanywhere: https://help.pythonanywhere.com/pages/DjangoStaticFiles/
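If the goal is just to serve MEDIA_ROOT uploads during development, Django ships a helper for that. A minimal sketch (route contents are illustrative, and note that static() is deliberately a no-op when DEBUG is False):
# urls.py - development-only media serving
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your normal routes ...
]
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)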
Static files and media assets are 2 different things.
collectstatic collects static files from different locations and puts them all in a single folder.
Media files are things the user uploads (e.g. photos, documents etc.). So you have a settings.MEDIA_ROOT for this.
collectstatic won't do anything to media files; they will just be there already once the user uploads them.
Frameworks like Django aren't going to cover automatic production server configuration - that is something else you will have to learn unfortunately.
There are a lot of good guides around e.g. this one
Re server costs, I'm sure you can find a host to give you some free credit, or pay $5/month for a server somewhere...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00599.warc.gz
|
CC-MAIN-2018-22
| 1,750 | 12 |
http://www.railsgirls.com/istanbul.html
|
code
|
Rails Girls comes to Istanbul! During the free two-day workshop we'll dive into the magical world of Ruby on Rails.
You learn designing, prototyping and coding with the help from our coaches.
You need your own laptop, curiosity and a sprinkle of imagination!
Want to help? We are looking for volunteers and Rails coaches. Email us.
|18.00 - 21.00||Installation party: Registration and installation fest. Get to know the mentors and the attendees a little bit beforehand. And let's start coding in Ruby! (so please bring your laptop).
|9:00 - 9:30||Registration, coffee and installation fest: Grab coffee, tea and breakfast, mingle and get ready to code!
|9:30 - 9:45||Welcome: Outline of the day.
|9:45 - 13:00||Workshop: Time for Ruby. Or Rails? Or maybe HTML & CSS? Work on your Rails application.
|13:00 - 13:30||
|13:30 - 14:30||
|14:30 - 16:00||Workshop: Continue working on your Rails application.
|16:30 - 18:00||Workshop: Extend your application.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00009.warc.gz
|
CC-MAIN-2018-43
| 949 | 18 |
https://forum.dhtmlx.com/t/insert-or-delete-row-in-filtered-state-of-the-grid/11621
|
code
|
I am facing a problem while deleting a row from the grid in filtered state.
I am doing following steps in mentioned order.
(1) Filtering with value 'C' in the first column using the header filters shows one row (rowId).
To delete this row I use the following methods.
In some cases I need to update some rows, so I am using the updateFromXML function.
(4) grid.updateFromXML(url (returns xml without the mentioned rowId), true, false, afterUpdatefunc);
The deleted row disappears from the grid at this stage, but when I manually remove the value 'C' from the first-column filter, the deleted row comes back.
I am clearing the filters and then deleting the row, so the grid should preserve the deletion.
Please let me know if I am missing anything.
This is a known issue. Please find the workaround here: dhtmlx.com/dhxdocs/doku.php?id=d … ering_mode
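In outline, the usual workaround is to drop the filter, delete against the full rowset, then re-apply it. A hedged sketch assuming the classic dhtmlxGrid API (filterBy/deleteRow); check your version's documentation:
grid.filterBy(0, "");   // clear the header filter so the full rowset is active
grid.deleteRow(rowId);  // delete against the unfiltered data
grid.filterBy(0, "C");  // re-apply the filter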
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00230.warc.gz
|
CC-MAIN-2023-06
| 828 | 10 |
https://foreman-alexander.medium.com/react-tarot-card-app-update-d9fb427164b8?source=post_internal_links---------3----------------------------
|
code
|
For the past few weeks, I have detailed my planning and early development of a React Tarot Card app.
Using React Router, I created and linked several pages. The All Cards page, shockingly, displays all of the cards in the deck. It is assembled through a collection of SingleCard components. The All Cards page is meant to focus on broader learning, so it contains a dynamic search bar for finding specific cards. When a user clicks on each card, a modal component displays the card’s details, such as its number, suit, and a variety of possible interpretations.
For the Readings page, I implemented the Fisher-Yates algorithm to shuffle and return random collections of cards. Last week, I had the page configured simply so that, on their respective button clicks, either one or three cards were randomly shuffled and rendered. These random assortments are meant to more closely imitate the actual experience of a tarot card reading, from which a message or “fortune” might be gleaned.
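A standard Fisher-Yates shuffle is only a few lines. A hedged sketch (not the author's actual component code; names are illustrative):
// Walk the array backwards, swapping each element with a random earlier one.
function drawCards(deck, count) {
  const cards = [...deck];
  for (let i = cards.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [cards[i], cards[j]] = [cards[j], cards[i]];
  }
  return cards.slice(0, count);
}
// e.g. drawCards(tarotDeck, 3) for a three-card reading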
Users can still investigate each card on click, as the shuffling components are rendered using the same SingleCard and InfoModal components.
I did not add a ton of features this week because I was focused primarily on setting up Redux.
To unify state management, I refactored the shuffling components, dispatching those actions through the Redux store, thereby making the shuffles globally accessible. I also added a save feature, so a user can revisit their selected readings.
The Readings page is now designed so a user can click to shuffle and render three cards, a reading, and they can choose to save that reading with a click.
I decided to remove the single card shuffle for now. For future iterations, I tentatively set up a single card reading page that a user might route to on a button click. I do not feel that keeping two different shuffles on the same page is a good design, though.
My biggest annoyances this week revolved around retrieving and rendering my shuffles and saved data from the Redux store.
Because the cards are large JSON objects saved in initial state as empty arrays, getting to the deeply nested data, particularly the saved data, proved tricky and involved mapping many layers down. I am investigating better ways to flatten the objects for more elegant data retrieval. For now, though, the maps on maps on maps have allowed me to again reuse my SingleCard and InfoModal components for rendering.
My last major task accomplished on the app this week was beginning to configure Google Firebase and Firestore. I previously added these features for authentication and storage to a mock ecommerce site I built, but it’s no small chore to get up and running.
Luckily, there’s a wealth of documentation regarding the setup of all things Google. Next week, I will continue to link Firebase to my app, allowing me to create and save users, who will be able to sign up via email and password or through their Google accounts. Users will be able to actually save their “saved” readings through Firestore’s cloud database storage. On top of that, I intend to add a feature for users to add notes to their saved readings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055645.75/warc/CC-MAIN-20210917120628-20210917150628-00704.warc.gz
|
CC-MAIN-2021-39
| 3,144 | 12 |
https://www.mirakee.com/posts/xmz8emrkra
|
code
|
We are each other's personal spaces
I can be the "real me" when I'm with her
She doesn't judge , she allows me to express myself
I can sing whenever I want, cry whenever I can, drunk call her and sing songs that would make Sinatra kill himself... Stuff like that..
And the funniest thing is it all started with a drunk call, and calling her was the best thing I've ever done while drunk
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655892516.24/warc/CC-MAIN-20200707111607-20200707141607-00330.warc.gz
|
CC-MAIN-2020-29
| 385 | 5 |
https://developer.jboss.org/thread/115176
|
code
|
You can think of jBPM as a 'sophisticated' state machine. In most cases a workflow has the need to keep track of what is going on at a particular moment in time (i.e. manage state). OTOH pure state machines cannot deal with e.g. multiple concurrent paths of execution and automatically processed activities. This is the domain where workflow engines such as jBPM pop up.
Thanks for your reply. The question was meant to be rhetorical!
No, seriously. A workflow is about process state. A state machine in its minimalist form is about object state. The state of the object is changed according to the type of event that occurs. Say the object is in state s1. On event e2 the object switches to s2, on e3 the object switches to s3, and on e4 nothing happens, because this is not allowed.
Let me give an example. Say someone wants to update an order. This process can involve a workflow: after the update of the order maybe financial approval needs to be asked. After a selectForUpdate request the invoice switches to the state selectForUpdate. No other event is allowed except the update event. If someone wants to ship the order: nope! Not now.
One could model the invoice state in the invoice object: but it doesn't really belong there.
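In code, the object-state idea from that example looks something like the sketch below (hypothetical names, nothing jBPM-specific):
// Minimal object-state machine for the order example above.
enum OrderState { OPEN, SELECTED_FOR_UPDATE, SHIPPED }

class Order {
    private OrderState state = OrderState.OPEN;

    void selectForUpdate() {
        if (state != OrderState.OPEN) throw new IllegalStateException("not allowed");
        state = OrderState.SELECTED_FOR_UPDATE;
    }

    void update() {
        if (state != OrderState.SELECTED_FOR_UPDATE) throw new IllegalStateException("not allowed");
        state = OrderState.OPEN; // back to open after the update
    }

    void ship() {
        if (state != OrderState.OPEN) throw new IllegalStateException("nope! Not now.");
        state = OrderState.SHIPPED;
    }
}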
Managing object state and process state are different things. Both are required in order to implement a robust services system. As far as I could see, jBPM is not intended to manage object state.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00394.warc.gz
|
CC-MAIN-2018-39
| 1,416 | 6 |
https://www.ifixit.com/User/About/1159355/Kolod+Aljohani
|
code
|
I am majoring in Technical Communication, and I am taking Technical Communication 205 for Spring 2015 as my first Technical Communication class. I am a freshman at Eastern Washington University. I am planning to graduate in 2017 with my bachelor's degree in Technical Communication and double minors in Chinese and German. My skills are working hard and being able to learn fast. I also have some experience in using tools and repairing stuff, and it will help me when it comes to the project that I am working on with my group this quarter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512836.0/warc/CC-MAIN-20181020142647-20181020164147-00282.warc.gz
|
CC-MAIN-2018-43
| 533 | 1 |
https://askubuntu.com/questions/32484/how-to-boot-from-ubuntu-live-usb-with-try-ubuntu-directly
|
code
|
I created an Ubuntu 10.10 live USB with the persistence feature, and it works well on my PCs. But one thing that annoys me is that each time I boot from the live USB, I have to choose between "Try Ubuntu" and "Install Ubuntu". Is there any way to dismiss that dialog and boot directly into the live Ubuntu system?
I wouldn't do it the way you have, but it kinda depends what you want to use the USB key for. If you want to use it to do installs on computers then the way you set up the key is right. If you want to use it as a standard desktop install, that you can use on any random (or even just a few specific) machines then use this method instead:-
For this you will need:
- 1xUSB key onto which you will install Ubuntu
- 2xCDR or DVD or USB key onto which you will put the installer
- 1xComputer which is capable of booting off the above device
Steps to install:
- Download the ISO and burn it to a CD or DVD, or use USB startup disk creator (or unetbootin) to make an 'install USB key'
- Insert the above media into a computer and boot from it
- Choose 'Install Ubuntu' from the menu
- Once booted to the installer, insert the USB key you want to install Ubuntu "persistent" onto
- During the installer, when you get to partitioning, ensure you select the USB key inserted, to install Ubuntu onto.
- Meaning, don't mistakenly install Ubuntu onto the internal hard disk on the computer
- At the end of the install you need to tell the installer to put GRUB onto your USB key, and not to overwrite the bootloader on the hard disk (or indeed the USB stick you installed from - I have made that mistake!)
- Once the install is done, you can shutdown, pull out all the keys and optical disks, and take that new key to any machine and boot from it.
Advantages to this method
- It's a full desktop install that you can add packages to, remove packages from and generally fully customise as you would any install
- You can enable encryption of the home directory during install so that if you lose the key you don't have to worry about losing your data
Disadvantages to this method
- It's not as straightforward as making a persistent key, but it's no more difficult than a standard Ubuntu install
- A full install on a usb flash key will cause more write cycles, and thus more wear and tear on the flash drive's memory, potentially causing it to fail much sooner
You can remove the 'ubiquity' package; it should remove the switcher, but it also removes the ability to install the system from that USB.
There's one more way: on boot of the live USB there's a purple screen with keyboard and accessibility symbols at the bottom. If you press any key while it's shown, you'll get another boot menu, which allows booting directly to the desktop, bypassing the switcher. It doesn't remove the switching completely, but at least it works much faster.
So far, I've been able to boot straight to my Maverick netbook remix persistent live USB login screen by editing the syslinux.cfg file on my USB drive.
- Plug in your live USB and go to the syslinux folder on it.
- Open txt.cfg (better to use a code editor like Notepad++) and copy the first five lines. The code looks like this on my Maverick netbook remix:
menu label ^Run Netbook Remix from USB
append noprompt cdrom-detect/try-usb=true persistent file=/cdrom/preseed/ubuntu-netbook.seed boot=casper initrd=/casper/initrd.lz splash --
- Open syslinux.cfg and replace its content with the code you've copied before. You can delete "menu label ^Run Netbook Remix from USB", or you can change it to something like "say Run Netbook Remix from this USB" (basically you can put any line as long as you use "say" in front of it).
- Save the changes you've made to syslinux.cfg, and we're done.
Note: back up the original syslinux.cfg in case the method I explained above does not work for you.
In a Kubuntu 14 live USB I bypassed 'Try it' by changing the following in the syslinux folder:
- Renamed syslinux.cfg to syslinuxOLD.cfg
- Copied txt.cfg as syslinux.cfg
- In the new syslinux.cfg 'append' line I removed the option ' maybe-ubiquity '
The first boot option now reads:
default live
label live
  menu label ^Start Kubuntu
  kernel /casper/vmlinuz.efi
  append noprompt cdrom-detect/try-usb=true persistent file=/cdrom/preseed/kubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash --
It now reboots directly to Kub 14
Another Simple Method to Lose Try / Install
I usually use a USB with a full install on it. It is no use for installing Ubuntu, but a full install is more secure, makes more efficient use of disk space and is more stable, among other things. Nowadays use "something else" when partitioning, or it is easy to overwrite the internal hard disk. I also like to make the first partition either FAT32 or NTFS for use as a Linux / Windows data partition.
Editing the Syslinux file, as previously mentioned is also a good way to get rid of Try / Install. For 18.04 use /casper/vmlinuz not /casper/vmlinuz.efi.
An easy way to remove the Try / Install stuff on a persistent drive is to go to System Settings and set yourself up as a new administrative user. The Try / Install will disappear, you will be able to assign a password, and the drive can still be used to install Ubuntu.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00024.warc.gz
|
CC-MAIN-2023-40
| 5,155 | 45 |
https://forum.fastday.com/fastonbury-glamping-grounds-f34/the-4-3-tent-is-up-t11057-45.html
|
code
|
On repair (fast) days I wouldn't worry about it - there's loads of folks doing zero calorie liquid fasts with no harm done. On feed days you *do* need to try to eat to your TDEE, if possible.
Are you low carbing? Ketosis often means one doesn't feel hungry (it's actually a free semi-reliable way of telling whether one is *in* ketosis). This is good - you're fat burning. Yay! But, as you're finding, one sometimes has to make oneself eat! Maybe looking at food porn and cooking extra-special stuff would help you? Failing that, when you do eat make sure that the foods are *really* high density calories, e.g. cheese, nuts and the likes, lots of olive oil... All the best, FatDog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657149205.56/warc/CC-MAIN-20200714051924-20200714081924-00043.warc.gz
|
CC-MAIN-2020-29
| 682 | 2 |
https://notes.aquiles.me/python_for_the_lab_public_key/
|
code
|
Python for the lab public key
First published: 2021-02-08. Last edited: 2021-02-08. Number of edits: 1.
Public Key
Aquiles Carattino
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653183.5/warc/CC-MAIN-20230606214755-20230607004755-00751.warc.gz
|
CC-MAIN-2023-23
| 495 | 1 |
https://loudswarm.com/blog/mind-your-av-settings.html
|
code
|
After you’ve optimized your equipment and are ready to provide a crystal-clear presentation, it’s time to make sure your A/V settings (resolution and encoding) are consistent across all your media. Here are the main things to think about for your next meeting with your video production team:
When someone says "FHD" or "standard HD", they are not cursing at you. They are really referring to the height and width of the video in pixels. Full HD (HD means high definition) refers to 1080p, which has a height of 1080 pixels and a width of 1920 pixels.
Each image (called “frame”) of a video is sent up at a specific frequency, meaning so many frames per second. This number can vary per country. It is 60 Hertz in the USA and 50 Hertz in Europe. All of the video assets will need to use the same framerate. For LoudSwarm, it is generally 30 frames per second.
For video, the bitrate is the number of bits (ones and zeros) sent across the network to the video player per second. It is typically expressed in megabits per second or mbps. A full high definition (FHD) bitrate is between 3.5 and 5 mbps.
A FHD video at 30 frames per second can be very large and especially way too big for your internet pipe. For this reason, video files are typically compressed via encoding. The most common encoding that you will typically see is called Advanced Video Coding (AVC) and is commonly referred to as H.264. This is a video compression standard that can help us reduce the bitrate needed to transfer our content.
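To see why encoding matters, a quick back-of-the-envelope calculation (assuming 24-bit color): 1920 x 1080 pixels x 24 bits x 30 frames/second ≈ 1.49 Gbps of raw video, versus roughly 5 Mbps after H.264 compression, a reduction of about 300:1.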
In order to make sure all video content has a consistent quality across devices (e.g. computers, tablets, etc.) and across internet connection speeds, video files have to be transcoded or attendees might see jarring switches in keyframes, often seen as stuck video frames, or green flashes.
Generally, you will want to support sending the original stream encoding plus a couple alternatives that allow for lower bitrates and resolutions such as 720p at 1mbps and 480p at 600kbps.
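A hedged sketch of producing those renditions with ffmpeg (file names are illustrative; the flags are standard ffmpeg options, tune to taste):
ffmpeg -i talk.mp4 -c:v libx264 -b:v 1M -s 1280x720 -c:a copy talk_720p.mp4
ffmpeg -i talk.mp4 -c:v libx264 -b:v 600k -s 854x480 -c:a copy talk_480p.mp4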
The LoudSwarm video player will take advantage of the various files as it allows attendees to pick the quality option that best matches their device and internet connection.
The video quality an attendee will receive depends on the lowest common denominator of your setup. Typically this is your video conferencing system (e.g. Zoom).
In order to ensure your viewers have a seamless experience, you will have to downscale the quality of the rest of your video assets to match the quality of your Zoom video. This is important when planning to have backup recordings of some of the talks, or promo videos from your top sponsors.
Then you will have to transcode the live and recorded files appropriately. If you choose to have your virtual event powered by LoudSwarm, this service will be included.
Pro Tip: Take care of all the video setup and processing as early as possible. This will give you plenty of time to ask questions, fix any issues and come up with new ideas and suggestions for your presenters.
This sounds very complicated because it is. This is the part that most people new to virtual event organizing get wrong since not everyone has the background in live video production to balance all of these settings. We do. Our producers pride themselves in their ability to walk you through all the steps to ensure your A/V settings are perfect. We want your event to be a great success, and we are happy to provide guidance and assistance to make that happen.
We are excited to help jumpstart your next event: let's make it amazing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506658.2/warc/CC-MAIN-20230924155422-20230924185422-00460.warc.gz
|
CC-MAIN-2023-40
| 3,543 | 14 |
https://www.dk.freelancer.com/job-search/www-freelancer-com-projects-data-entry-data-entry-email-management-html-utm_campaign-latest_project_-contestutm_medium-email_-n/
|
code
|
I have project
Build a trailing stop loss to work together with an RSI indicator, in TradingView
I need a website to redirect to www version and https in one hop. In other words, if user enters non-secured domain, it always needs to redirect to www. version. For example, if they enter "[log ind for at se URL]" it needs to go to "www.domain.com." If they enter non-secured domain, it needs to go to secured domain, www version. For example, If they enter...
Project to start another set of flavor labels. Just like the first set, this will have 24 available to be printed as well. Some of them are the same but just have the wording changed and some should be quick designs since we're using existing formats. I'll send over the file details in the chat. Thanks
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221216475.75/warc/CC-MAIN-20180820140847-20180820160847-00471.warc.gz
|
CC-MAIN-2018-34
| 773 | 4 |
http://biology.stackexchange.com/questions/tagged/addiction?sort=unanswered&pageSize=50
|
code
|
Can Opioids Attenuate some of the symptoms of Psychosis?
Can Opioids Attenuate some of the symptoms of Psychosis? I ask because there's a dead link on the Wikipedia page http://en.wikipedia.org/wiki/Opioid_dependence#Causes that's meant to support the ...
Feb 24 '13 at 4:33
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296603.6/warc/CC-MAIN-20150323172136-00253-ip-10-168-14-71.ec2.internal.warc.gz
|
CC-MAIN-2015-14
| 2,267 | 53 |
https://www.instructables.com/community/Measuring-DC-Motor-Load-Resistance-Selecting-the/
|
code
|
Measuring DC Motor Load Resistance - Selecting the Right Switching Transistor Answered
I am an electronics noob, although I have done several Arduino based circuits, the stuff I have done is all a bit "painting by numbers" and I feel the need to understand more about what I am doing. I have searched the internet for an answer but can't quite get all the info I need. Time to ask for help!
I am building a circuit (without a microprocessor) where a PIR sensor will start a small 3v DC motor (from the junkbox) when the sensor is activated (PIR output pin goes high). The most helpful page I stumbled upon so far (because I think I understand it) is here:
Under the heading: Choosing a Suitable NPN transistor, where it gives a procedure for selecting a transistor.
The first step is to ensure that the transistor's maximum collector current is greater than the load current, where load current is calculated by dividing the supply voltage by the load resistance.
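As a worked example with illustrative numbers only: a 3 V supply across a motor winding measuring 1.5 Ω implies 3 V / 1.5 Ω = 2 A at start-up, so the transistor's maximum collector current should comfortably exceed 2 A.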
The thing that is tripping me up is: how do I measure load resistance of the motor? My hunch is that I just measure the resistance across the positive and negative terminals of the motor whilst it is disconnected. Can it be that simple I ask myself.
I appreciate that I could just try any old NPN transistor and see if it works, but that would leave me no better off for the next time.
Thanks for any help you can give, be they answers or other places to look!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00124.warc.gz
|
CC-MAIN-2021-17
| 1,419 | 8 |
https://grafo.etsii.urjc.es/en/publication/gonzalez-2013-new/
|
code
|
Constraint Satisfaction Problems (CSP) have been widely studied in several research areas, like Artificial Intelligence or Operational Research, due to their complexity and industrial interest. In these research areas, heuristic (informed) search methods have been particularly active in looking for feasible approaches. One of the critical problems in working with CSP is the exponential growth of computational resources needed to solve even the simplest problems. This paper presents a new efficient CSP graph-based representation to solve CSP by using Ant Colony Optimization (ACO) algorithms. This paper also presents a new heuristic (called Oblivion Rate) that has been designed to improve the current state of the art in the application of ACO algorithms to these domains. The presented graph construction provides a strong reduction in both the number of connections and the number of nodes needed to model the CSP. Also, the new heuristic is used to reduce the number of pheromones in the system (allowing problems of increasing complexity to be solved). This new approach has been tested, as a case study, using the classical N-Queens Problem. Experimental results show how the new approach works in both reducing the complexity of the resulting CSP graph and solving problems of increasing complexity through the utilization of the Oblivion Rate.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.3/warc/CC-MAIN-20230923194908-20230923224908-00686.warc.gz
|
CC-MAIN-2023-40
| 1,367 | 1 |
https://phillyjug.com/2008/06/24/sonnygroovy/
|
code
|
By now you’ve probably heard a bit about Groovy and Grails. Come find out what the hype is all about! This presentation will provide an overview of Groovy and Grails and will be complemented by code demonstrations.
Sonny To has been in the IT industry for more than 10 years and started programming computers during childhood. He started out his career as a Unix system administrator (HP-UX, Solaris, Linux, FreeBSD) and is now a freelance software engineer focusing mainly on Java and related technologies. In his free time, Sonny has authored some open source projects housed on Google Code. He has also taught Java, C#, and Python at Learning Tree International, LearnQuest, and The Chubb Institute. He has been using Java since 1995 and Groovy/Grails since 2007. He's a graduate of the Wharton School of the University of Pennsylvania.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171629.92/warc/CC-MAIN-20170219104611-00541-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 843 | 2 |
http://blog.caseybroughton.ca/2013/12/kelowna-atis-merry-christmas.html
|
code
|
Earlier this evening, I was listening to the Kelowna ATIS phone line (yes, I have odd hobbies; it's at 250-491-0310 if you're interested) when I was surprised to hear it say "Merry Christmas". Naturally, I recorded this event, and have embedded it below, along with a transcript.
From the description:
This was taken around 20:00 local time from the CYLW (Kelowna International) ATIS phone line (250-491-0310) on Christmas Day, 2013. I am an avid ATIS listener, as my school is near the airport and I need the weather readings, and I was surprised to hear the ATIS say "Merry Christmas". I therefore recorded it on my iPhone via a cable to my computer for posterity.
Kelowna Airport, Information Quebec
Weather at Zero Three Two One Zulu
Wind One Four Zero at Six
Niner Hundred broken
One Thousand Four Hundred overcast
Dew point Minus One
Altimeter Three Zero Three Eight
IFR approach ILS DME runway One Six
Active, runway One Six
Runway surface condition at Zero Three Zero One Zulu
One Hundred Percent bare and wet
Inform ATC that you have information Quebec
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00615.warc.gz
|
CC-MAIN-2024-18
| 1,057 | 15 |
https://replit.com/talk/ask/How-can-I-use-Google-Test-in-my-repl/77246
|
code
|
How can I use Google Test in my repl
I need to be able to run GoogleTests in my repl for tutorial purposes, is there a simple possibility to do that?
I am sorry, I do not think that is possible. You will have to do a little more research. Replit does not have a package manager for C++, but I have heard you can use C++ extensions or libraries with exes or something. Maybe try this: https://automaticaddison.com/how-to-add-an-external-c-library-to-your-project . You can upload files by clicking on the three buttons
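If you can get the GoogleTest headers and library into the repl (an assumption; availability varies), a minimal test file looks like this:
// minimal_test.cpp - a minimal GoogleTest example, assuming gtest is installed
#include <gtest/gtest.h>

int add(int a, int b) { return a + b; }

TEST(AddTest, HandlesPositiveNumbers) {
    EXPECT_EQ(add(2, 2), 4);
}

// gtest_main provides main(); build with something like:
//   g++ minimal_test.cpp -lgtest -lgtest_main -pthread -o tests && ./tests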
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00735.warc.gz
|
CC-MAIN-2022-40
| 510 | 3 |
https://storiesbus.com/posts/natural-formation-resembles-phallic-shape-stirring-unease-among-observers-amazing-nature-hoai123-2/
|
code
|
The recent finding of a tree in Thailand with a phallic shape is merely the most recent instance showcasing nature's ability to produce forms that can leave observers feeling uneasy.
Pterocarpus indicus, also referred to as the Burmese rosewood, is a tree native to Southeast Asia, capable of reaching heights of up to 30 meters. What captivates people isn't just its impressive size but its unique and phallic shape.
When some individuals came across images of the tree on social media, they found themselves feeling uneasy and uncertain about how to react. Many have resorted to humor and wordplay, highlighting its resemblance to male genitalia.
While some may find the tree's distinctive shape amusing, others express concerns about its potential impact on the local community. The tree's images could be considered offensive by some Thais due to the country's conservative cultural norms. Compounding this issue is the fact that the park where the tree is situated is a common destination for families with young children, making it easy for its phallic form to be misinterpreted.
Requests have emerged to either remove or modify the tree, yet there are those who argue that it should be left untouched since it represents a natural occurrence. This discussion prompts reflection on the delicate equilibrium between natural elements and human interventions in public spaces.
On one hand, the tree's shape is entirely unavoidable and beyond human influence. It has been a part of the landscape for a considerable time, although its recent surge in popularity is primarily attributed to social media sharing.
Nevertheless, public spaces should strive to be inclusive and welcoming to all individuals, and depictions of the tree may indeed be offensive to some. Given that the park where the tree is located is a common destination for families with young children, safeguarding them from inappropriate content is a paramount concern.
Ultimately, the local government will have the final say in determining the tree's fate. However, this episode serves as a catalyst for profound contemplation regarding our interaction with the natural world in public spaces. It underscores the essential need to strike a fair and harmonious balance between the natural environment and the demands and sensitivities of human civilization.
The discovery of the phallic-shaped tree in Thailand left many people feeling uneasy and sparked a conversation about the delicate balance between nature and human intervention in public spaces. While some advocate for leaving the tree untouched, others express concerns about its potential impact on the neighborhood. Ultimately, the decision rests with local authorities, but perhaps this incident will serve as a reminder to be mindful of human sensitivities whenever possible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100745.32/warc/CC-MAIN-20231208112926-20231208142926-00739.warc.gz
|
CC-MAIN-2023-50
| 3,070 | 9 |
http://www.snakehandler.com.au/index.php?p=snake-catching
|
code
|
Snakehandler staff are licensed snake catchers and currently remove reptiles from all over metropolitan Melbourne.
Copperhead rescued from ensuite in Pakenham
This crafty snake eluded The Snakehandler Team for almost a whole day, in what appeared to be an almost fully sealed ensuite with no visible escape routes or holes. After the toilet was removed and tiles were ripped up to search around the pipes for this snake, it took peace and quiet for the snake to reveal its sneaky hiding spot in a small hole next to the vanity unit.
After a whole day trapped in his hidey-hole, the copperhead was released the next day back into its native habitat, where it immediately headed for the water and rehydration.
Patriotic tiger snake removed from under a couch
Australia Day and soaring temperatures saw a busy day for The Snakehandler Team, beginning with rescuing a tiger snake from a bird aviary where it had been happily snacking on nestlings, and ending with a separate tiger under a couch in a Research family's lounge room.
Not quite clear what all the fuss was about, this snake was released once the temperature had come down, near a local river, where it headed straight for the water and apparent safety.
Tiger snake caught on Boxing Day
The snake was found in an unused swimming pool under an old children's slide.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708144156/warc/CC-MAIN-20130516124224-00078-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,318 | 9 |
http://www.soundandvision.com/content/lets-give-em-something-thx-about
|
code
|
Let's Give 'em Something to THX About
Sin #1: Wrong Aspect Ratio: A Blackbird-enabled player will recognize the aspect ratio of content and signal the Blackbird-enabled display device to switch to it. Say goodbye to fat and tall people forever (oh, wait, that's John Edwards's health plan).
Sin #2: Wrong Video Mode: We could argue this one I suppose, but if you think sports should be watched in Sports mode and movies in Movie mode, then Blackbird can get it done.
Sin #3: Wrong Audio Mode: So you left your processor in Pro Logic IIx Game mode and now you've inserted a CD. Hmmmm. You get the idea?
How all this works is pretty fascinating. Imagine a movie studio knows that the disc you were about to watch was originally recorded in two-channel with no matrixed surround signal at all. If they put that information into Blackbird's database, then your Ethernet-equipped, Blackbird-enabled DVD player would look in that database and then send an instruction over HDMI to your AVR telling it to switch to two-channel stereo. So even if you had last set your AVR to "Rock Concert" to watch a Pink Floyd concert DVD, the Blackbird system would set things right before you sat down.
Sounds pretty nifty, but it will require all devices in the chain to have Blackbird technology in them for it work. Just getting a Blackbird DVD player is not enough. So, we'll see if it catches on.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218186891.75/warc/CC-MAIN-20170322212946-00440-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,392 | 6 |
https://apple.stackexchange.com/questions/350481/i-cant-log-into-root-or-my-user-account
|
code
|
I have a mid-2012 MacBook Pro, with root setup and SIP disabled.
I was downloading files today, and suddenly my computer couldn't connect to the internet (it said that it had a self-assigned ip). No problem, I get an ethernet cord.
Later, my apps started freezing, so I restarted my computer. When I logged in, I got stuck at the login window, but on top of the login window, Messages launched. I shut down my computer, restarted in Recovery mode, and re-installed macOS Sierra (with ethernet to download it quicker).
As soon as it's done, I try logging into my user account. It loads infinitely. I then try logging into my user account and root through the other users option. Doesn't load.
Is there any way to login? I've never had this problem before, though I periodically have to re-install macOS Sierra.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250593295.11/warc/CC-MAIN-20200118164132-20200118192132-00389.warc.gz
|
CC-MAIN-2020-05
| 798 | 5 |
http://social.microsoft.com/Forums/en-AU/projserv2010setup/thread/6409dbc2-e11c-4955-8752-8fd3ec1256cd
|
code
|
Monday, 13 February 2012 10:32 PM
Looking for any assistance/shortcuts/step by step instructions/directions to the "clue store" etc regarding how to convert (or rebuild) existing data analysis views that we have defined in the project server 2007 environment to project server 2010.
Thanks in advance.
Monday, 13 February 2012 10:47 PM Moderator: Dan -- If it were me, I would simply write down the definition of each Data Analysis view in Project Server 2007. This information would include:
- Name of the OLAP cube used in the view
- Dimensions included in the Row Fields drop area
- Dimensions included in the Column Fields drop area
- Dimensions included in the Filter Fields drop area
- Total fields included in the Total Fields drop area
Once you have documented the preceding information, you can recreate your Data Analysis views in the Business Intelligence Center in Project Server 2010. In essence, you will find that the 14 OLAP cubes are pretty much the same in both versions of the software, and I believe you will find all of the dimensions and total fields in the same OLAP cubes in both versions. Hope this helps.
Monday, 13 February 2012 11:13 PM
Thanks Dale - we'll give it a try (I think we just may be overthinking it and making it more complicated than it is)
Tuesday, 14 February 2012 2:58 PM (Moderator)
Wednesday, 15 February 2012 8:24 PM (Moderator)
Also remember that you should investigate the Excel Services views that are written directly against the reporting database. You may find that you no longer need all of the DA views. The advantage is that these views do not rely on a cube rebuild, so the data is available in the report as soon as the project is published.
Wednesday, 28 March 2012 3:33 PM
Thanks for the added info. I'll keep that in mind.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698207393/warc/CC-MAIN-20130516095647-00046-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,774 | 14 |
http://www.sevenforums.com/network-sharing/255452-win-7-cannot-detect-wired-network-pc-but-internet-can-used-2.html
|
code
|
The reset was good, but it's still the same. I know it can't be the actual script of the icon that's bad because, as I said, I have a program that won't launch because it also thinks I have no internet. When I right-click on the connection status icon, it says I am not connected to any network. This is just a hunch, but is it possible that Win 7 is somehow confused between wired and wireless LAN? There is no wireless status showing and I have no wireless device on this machine, but perhaps it's something along those lines? I just don't know how Windows can fail to detect the network while I am connected to the internet and able to surf. If it weren't for that program that can't launch because it doesn't detect my network, all of this would be fine with me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703057881/warc/CC-MAIN-20130516111737-00045-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 700 | 1 |
https://techcommunity.microsoft.com/t5/microsoft-teams/reply-to-a-specific-message/m-p/2402457/highlight/true
|
code
|
May I know why it's almost the end of Q2 2021 and replying to a specific comment is still NOT A FEATURE on the desktop app? I see you guys put a lot of effort into offering lots of app integrations; however, I find most of them useless and not adding any value.
PS: Please don't suggest any manual workarounds (i.e. copy/paste message and highlight).
I'd really appreciate it if you guys made "direct reply to message" your top priority, thanks!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00442.warc.gz
|
CC-MAIN-2021-31
| 419 | 3 |
https://southandfinsbury.wordpress.com/2014/08/03/this-gold-beauty-is-on-my-wishlist/
|
code
|
Ah Dior, Dior, Dior, just enough sparkle!
You can do no wrong when it comes to nail polish in my humble opinion.
DIOR VERNIS VIBRATO 618
a beautiful rose gold that I would love to get my hands on.
I’m getting the impression that this beauty is hard to get hold of.
It's exclusive to SELFRIDGES in the UK: £18.50 here
I am trying to find out if it's available here in NZ.
If anyone comes across it in Auckland or Wellington, please let me know 🙂
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948126.97/warc/CC-MAIN-20180426105552-20180426125552-00105.warc.gz
|
CC-MAIN-2018-17
| 449 | 8 |
https://forums.opensuse.org/t/11-3-with-gnome-disappearing-sound/59683
|
code
|
GNOME desktop, 11.3. I followed the restricted media format installation. It passes the MMCHECK script. I have only one sound card, and I only do one thing at a time. No audio playback programs produce sound except xine. The sound card "test" function produces sound, but there is no sound from Firefox, and no system sounds such as when opening and closing Firefox or other windows. I installed the RedDwarf media plugin, which is verified in the preferences window of Firefox.
I assume I have messed up some configuration files. Suggestions on what to look for?
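(If it helps with triage, these are generic first checks rather than a known fix; they narrow down which layer has lost audio:)
speaker-test -c2 -t wav   # test ALSA output directly, below the desktop sound layer
aplay -l                  # list the playback devices the kernel sees
cat /proc/asound/cards    # confirm the single card is detected as expected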
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00642.warc.gz
|
CC-MAIN-2023-50
| 540 | 2 |
https://jamesachambers.com/fixing-storage-adapters-for-raspberry-pi-via-firmware-updates/
|
code
|
I’ve covered how to get the right type of storage adapter for your Raspberry Pi for years on this site and cataloged storage adapters that both work and don’t work with the Raspberry Pi. Over the years we’ve learned that many of these adapters can be “fixed” with a firmware update to work with the Raspberry Pi.
In this article I’ll put together an evolving list of storage adapters that can be fixed with firmware updates, drawn from my own experience as well as comments people have left over the years!
The preferred and safest way to identify your device is by brand name. This will work if you have a “popular” or “name brand” storage adapter.
If you have a generic / unbranded adapter then the next best way is by chipset. We can identify the chipset with the lsusb command, which yields the following result:
pi@pi:~ $ lsusb
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 04d9:0007 Holtek Semiconductor, Inc.
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Yours will look similar to mine. I’ve bolded the important line which is our storage adapter. The above result is for the StarTech 2.5″ SATA to USB 3.1 (USB312SAT3CB) adapter. This example is a name brand adapter that will be on the list but if it wasn’t the generic chipset would be the ASMedia ASM1153E chipset for this adapter. Other common chipsets include JMS-578, etc.
If you’re confused about which is which use this version of the command to get a lot more detail (including device properties that often make it much easier to identify):
sudo lsusb -v
This will yield something like this:
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               3.10
  bDeviceClass            0
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0         9
  idVendor           0x174c ASMedia Technology Inc.
  idProduct          0x55aa Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
  bcdDevice            1.00
  iManufacturer           2 asmedia
  iProduct                3 ASMT1051
  iSerial                 1 123456799FA6
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength       0x0079
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          0
    bmAttributes         0xc0
      Self Powered
    MaxPower              0mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         8 Mass Storage
      bInterfaceSubClass      6 SCSI
      bInterfaceProtocol     80 Bulk-Only
      iInterface              0
The above example didn’t give us the name “StarTech” anywhere but it did give us some clues. This is identified as a “Mass Storage” interface class device which definitely narrows things down. Your other peripherals will show as the category they are from like mouse, keyboard, etc. You should be able to narrow things down by unplugging everything else from your Pi if you are still having trouble identifying which is which.
Many of these updates need to be applied using a Windows machine, as that is often the only platform the updates are offered on from the manufacturer's web site. Some manufacturers have update utilities for multiple platforms available, but from what I've found, if you're lucky enough that they offer them at all, it will usually be for Windows.
Warning / Disclaimer
This is not an entirely risk free procedure. If something goes wrong during a firmware update it is possible to brick it. This doesn’t happen very often but understand it’s possible. If you lose power at the moment you are updating the firmware for example that could definitely do it.
There is less risk for the “branded” adapters as these are the manufacturer’s tools intended for the manufacturer’s devices. It’s as safe as it gets but even in these cases things can go wrong (like the examples I mentioned above). There is also some risk that even chipsets identifying as the same chip may have slight variations in how they are actually implemented or which revision they are.
Make sure you understand this is not a completely risk free procedure (and carries the same risk as firmware updates on any other device, and a little bit extra risk even for the generics since they may not have been intended for that exact device) before proceeding!
StarTech 2.5″ SATA to USB 3.1 Adapter
The USB 3.1 variant of the StarTech 2.5″ SATA adapter works well with the Pi 4. The USB 3.0 variant doesn’t have firmware updates available and is not recommended.
There are a few different variants. Check the tag on your cable to see which exact model you have.
usb3s2sat3cb – USB 3.0 Version – No updates available
Click the “downloads” tab and choose “firmware.zip”. This must be ran on a Windows machine. It will update the firmware almost instantly when you launch the program with the adapter plugged in and say “SUCCESS – UNPLUG AND REPLUG THE DEVICE”.
Sabrent and Orico both have the worst track records for working storage adapters for the Pi. I don’t recommend them at all but they can sometimes be fixed.
The following Sabrent JMicron adapters can be updated with their official tool:
Important Note: After the update the Sabrent adapters often work but usually only with quirks mode enabled (see bottom Quirks section of article).
For Sabrent’s version of the JMicron firmware update tool: Sabrent JMicron Update Tool
For the general Sabrent adapters firmware update list (check if your adapter is listed): Sabrent Firmware Update Download Page
Note with generic adapters there is some risk. These may not necessarily be by the same manufacturer of your device. It usually doesn’t matter as these all have the same storage controller but due to slight variations in the way some manufacturers implement this technology it’s possible it could cause an issue / brick it. Make sure you understand that there is some risk before proceeding!
JMicron JMS578 Firmware
This is a copy of the JMS578 firmware that has fixed this issue for many people (but not everyone) on the Raspberry Pi. You may still need to enable “quirks mode” (see quirks mode section) even with the updated firmware in some cases.
It’s a little bit trickier to use than some of the other ones but not too extraordinarily difficult. You will need the updater utility and the .bin file.
Here is the updater utility: ODroid – jms578fwupdater.tgz
Here is the JMS578 firmware update: ODroid – jms578_fw_update
And finally the how to use the updater utility / instructions here: ODroid Wiki – How to use jms578_fw_update
This thread is worth a read as well to see the different types of adapters/chipsets people tried with this and their different results: Raspberry Pi Forums – Topic 245931
Watch Out For Power Issues
If you are using a drive that has high power demands, a common solution I've been recommending for years is to use a Sabrent powered USB hub to power the drive. This eliminates your Pi from having to use its own power to power the drive at all. This is often required for higher performance NVMe drives.
The Sabrent powered USB hub delivers a whopping 2.5A of dedicated power for your USB attached devices. This is almost as much as the Pi adapter itself is rated for (3.0A). It will easily power the most thirsty of setups such as NVMe enclosures.
Note: Make sure Amazon doesn’t try to take you to the non-powered version and that it’s the one with the AC adapter that plugs in to provide extra power
Verify Drive Performance
You can make sure everything is running correctly (and as fast as it should be) by running my quick storage benchmark. You can run the benchmark with the following one-liner:
sudo curl https://raw.githubusercontent.com/TheRemote/PiBenchmarks/master/Storage.sh | sudo bash
This will give you a score you can compare to the other Raspberry Pi Storage Benchmark results and make sure that you are getting an equivalent speed to your peers with the same device!
Benchmarking / Testing Storage
If you want to verify your drive’s performance you may want to run my storage benchmark with:
sudo curl https://raw.githubusercontent.com/TheRemote/PiBenchmarks/master/Storage.sh | sudo bash
If you search for the model of your drive on Pi Benchmarks you can compare your score with others and make sure the drive is performing correctly!
Fix (some) USB Adapter Problems Using Quirks
Some adapters can be made to work by using USB quirks to disable UAS mode on the drive. This lowers performance, but it’s still much faster than a SD card and your adapter won’t go to waste. Some adapters also require it even with the updated firmware!
To find out the quirks we need to find the device ID string for your adapter and then add an entry to cmdline.txt telling the kernel to apply them on boot.
Find Your Adapter
To apply the quirks we first need to get the adapter id. We will use the sudo lsusb command:
$ sudo lsusb
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
On line 2 we can see my ASM1051E SATA 6Gb/s bridge adapter (it’s the known working StarTech.com 2.5″ SATA to USB 3.0* adapter). You will see something very similar to mine when you run the command and it shouldn’t be too hard to figure out which device it is. If you need more information add a -v switch to make the command sudo lsusb -v. This can sometimes add some additional details to make it easier to figure out which one is your adapter.
If you’re still not sure, we have another command that between the two that can narrow things down. Type / paste the following:
sudo dmesg | grep usb
[0.828535] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 4.19
[0.828568] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[0.828597] usb usb3: Product: DWC OTG Controller
[0.828620] usb usb3: Manufacturer: Linux 4.19.75-v7l+ dwc_otg_hcd
[0.828644] usb usb3: SerialNumber: fe980000.usb
[0.830051] usbcore: registered new interface driver uas
[0.830182] usbcore: registered new interface driver usb-storage
[0.836488] usbcore: registered new interface driver usbhid
[0.836511] usbhid: USB HID core driver
[0.971598] usb 1-1: new high-speed USB device number 2 using xhci_hcd
[1.154217] usb 1-1: New USB device found, idVendor=2109, idProduct=3431, bcdDevice= 4.20
[1.154254] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
[1.154281] usb 1-1: Product: USB2.0 Hub
[1.301989] usb 2-1: new SuperSpeed Gen 1 USB device number 2 using xhci_hcd
[1.332965] usb 2-1: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
[1.332999] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[1.333026] usb 2-1: Product: ASM105x
[1.333048] usb 2-1: Manufacturer: ASMT
[1.333071] usb 2-1: SerialNumber: 123456789B79F
This is the dmesg log showing the hardware detection as hardware is activated on the Pi. If your log is really long you can generate fresh entries by just unplugging a device and plugging it back in and running the command again. Here we can clearly see that the ASM105x is what our StarTech adapter is being detected as.
Now we can go back to our first lsusb command: we want the vendor:product pair from the ID field that comes right after the device number:
Bus 002 Device 002: ID 174c:55aa ASMedia Technology Inc. Name: ASM1051E SATA 6Gb/s bridge
Our adapter’s ID is: 174c:55aa
To apply the quirks to our USB adapter we are going to edit /boot/cmdline.txt. Type:
sudo nano /boot/cmdline.txt
We are going to add the following entry into the very front of cmdline.txt:
usb-storage.quirks=XXXX:XXXX:u
In place of the X's above you will put in your adapter's ID that we got before. With the example commands I gave above mine would look like this: usb-storage.quirks=174c:55aa:u. After this my cmdline.txt looks like this (everything should be one continuous line, no line breaks!):
usb-storage.quirks=174c:55aa:u console=serial0,115200 console=tty1 root=PARTUUID=d34db33f-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
Now reboot the Pi. If the Pi fails to boot you can plug the SD card into the computer and go to /boot/cmdline.txt and undo the change we did so you can boot back in with your SD card.
Once you have rebooted after changing cmdline.txt we can verify the quirks have been applied by doing another dmesg | grep usb command:
sudo dmesg | grep usb
[1.332924] usb 2-1: New USB device found, idVendor=174c, idProduct=55aa, bcdDevice= 1.00
[1.332957] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[1.332983] usb 2-1: Product: ASM105x
[1.333006] usb 2-1: Manufacturer: ASMT
[1.333028] usb 2-1: SerialNumber: 123456789B79F
[1.335967] usb 2-1: UAS is blacklisted for this device, using usb-storage instead
[1.336071] usb 2-1: UAS is blacklisted for this device, using usb-storage instead
[1.336103] usb-storage 2-1:1.0: USB Mass Storage device detected
[1.336479] usb-storage 2-1:1.0: Quirks match for vid 174c pid 55aa: c00000
[1.336611] scsi host0: usb-storage 2-1:1.0
This time we can see in dmesg that UAS was blacklisted for the device and it has loaded with the usb-storage driver instead. This driver tends to be more compatible with the “problematic adapters” but the performance is usually significantly lower. It’s definitely worth a try though as some adapters do better with the quirks performance-wise. The only way to know for sure is to run a benchmark (see “Verify Drive Performance” section).
For the CM4 (Compute Module 4) check out Raspberry Pi Compute Module 4 and using real PCI-Express/NVMe on the Pi
To find out where to get the 64 bit Raspberry Pi OS beta check out my Where to get 64 bit Raspberry Pi OS article here
If you are looking for storage adapters or the best SSDs to use for Raspberry Pi my Best Storage Adapters / SSDs for the Pi 4 / 400 guide should be able to be of some assistance
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00120.warc.gz
|
CC-MAIN-2023-50
| 14,448 | 72 |
https://help.o2.verizonmedia.com/hc/en-us/articles/360012498052-What-is-a-Scheduled-Playlist-
|
code
|
A scheduled playlist is a type of a playlist that has current and future versions. The playlist scheduling option allows you to add changes to your existing playlist and pick date and time when these changes become active.
A scheduled playlist can be created out of an already existing playlist. When you schedule any changes to your playlist, you basically create a separate version of it that will override the current version at a given time. The original playlist is the current version, while all changes scheduled in it are the future versions.
For example, you would like your player to play back different types of content during the weekdays and over the weekend. So instead of going to your playlist on Saturday morning and changing it manually, you can simply create a new version of your playlist in advance and schedule it to start playback at, say, 00:00 on Saturday.
You can create a scheduled playlist according to one of the following options:
- Replace content - You create a completely new playlist version and compile new placements inside of it. The new content that you have added will override the old one according to your scheduled date and time.
- Modify the existing playlist - You duplicate your existing playlist to a future version and modify it - change the playback order, add/remove videos/placements etc. These changes will apply to the playlist according to the scheduled date and time.
To learn more on how to schedule your playlist updates, please refer to How to Create a Scheduled Playlist.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517159.7/warc/CC-MAIN-20210118220236-20210119010236-00660.warc.gz
|
CC-MAIN-2021-04
| 1,530 | 7 |
https://forums.adobe.com/thread/1411740
|
code
|
I’m about to buy a laptop and am concerned about using the cloud as a means to sync files between my desktop and my laptop, as both will be used for the same project. Obviously this is not a complicated or unusual situation, but I still need some clarification on how the cloud works exactly.
How does the cloud file syncing handle a scenario like this:
I work on my video on my desktop. Files sync.
I go on location with my laptop and work on video some more. Files partly sync but do not have time to fully upload to cloud.
I return to desktop, begin work on a different project.
Will the cloud files then revert to the way my files are on my desktop (since my laptop is off and the desktop is on)? Or will the partial file uploads be downloaded to my desktop? Then, when I turn my laptop back on, will the cloud files continue to upload, or will the cloud take priority and remove files on my laptop to match the way they appear in the cloud?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827769.75/warc/CC-MAIN-20181216143418-20181216165418-00484.warc.gz
|
CC-MAIN-2018-51
| 956 | 6 |
https://onlinedictionary24.com/word/behavior
|
code
|
Psychology, Animal Behavior. observable activity in a human or animal. the aggregate of responses to internal and external stimuli. a stereotyped, species-specific activity, as a courtship dance or startle reflex.
Often, behaviors. a behavior pattern.
the action or reaction of any material under given circumstances: the behavior of tin under heat.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00038.warc.gz
|
CC-MAIN-2021-39
| 349 | 3 |
http://www.linuxquestions.org/questions/linux-kernel-70/how-to-use-sparse-tool-818479/
|
code
|
How to use sparse tool
I would like to use the sparse tool developed by Linus, but I am having some trouble using it.
There is a lack of documentation and tutorials about this tool, and the man page may not be up to date, since it only describes two options while in the code we can see a switch over many options.
Do you know how to use it outside of the kernel tree? Everything works if I use make C=1 or C=2 to compile the kernel source, but I haven't succeeded in using it with my driver Makefile.
If I try to use sparse standalone, I get a lot of errors about missing include files, even when using the gcc-base-dir option and pointing it at the kernel headers.
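(In case it helps: the usual approach for an out-of-tree module is to drive sparse through kbuild rather than standalone, so the kernel include paths are set up for you. A sketch, assuming a standard external-module Makefile; C=2 checks all source files and CF passes extra flags through to sparse:)
make -C /lib/modules/$(uname -r)/build M=$PWD C=2 CF="-Wsparse-all"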
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125719.13/warc/CC-MAIN-20170423031205-00242-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 639 | 5 |
https://bugs.ruby-lang.org/issues/16013
|
code
|
In the past, I've seen several times where the bundler default gem has caused issues in external CI. It is currently causing issues with CI's that use nested 'bundle' commands.
External repos should not have to force an install/update of RubyGems/Bundler for ruby-head to work.
Also, the lib/bundler/build_metadata.rb file contains no information about the files currently installed in master, and hence neither the gemspec nor commands like bundle version are correct.
There has also been a gem release of bundler that occurred almost two weeks after the last update here. Technically, that version is 2.0.2, while master is using bundler master/2.1.0.pre.1, but I'm sure some of the commits overlap.
Hence, can bundler be updated more frequently and accurately?
Updated by hsbt (Hiroshi SHIBATA) over 3 years ago
- Status changed from Open to Feedback
- Assignee set to hsbt (Hiroshi SHIBATA)
I didn't understand what you requested.
If your concern is `build_metadata`, I will set it from the upstream commit.
PS. Please set the informational subject always.
Updated by MSP-Greg (Greg L) over 3 years ago
Sorry, bad week, and really hot where I am.
Main thing, I would like ruby-head/master/trunk to not break external CI. In some instances, it is currently doing so due to an issue with Bundler. I am also willing to help with that breakage...
Regarding Bundler, I was looking at ruby-loco and a recent Travis build, and
build_metadata has no date or sha, and the gemspec has no date. So, without looking at the git repo, there's no way to determine what's being used, other than a somewhat meaningless version.
I know that requires modifying files, but it would be really helpful to have that info, especially given that RubyGems and Bundler have a constant flow of commits, unlike other std-lib/default gems.
Many people contribute here, but you have actual things/tasks that you appear to be responsible for. I don't want to add to that, but more timely updates to RG/Bundler would be helpful.
I've gotten the "you have to commit/PR to upstream" response before. Given that RG/Bundler are dynamic and can break trunk, maybe the rule for RG/Bundler should be commit/PR in upstream, or in both, but not just in trunk...
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00639.warc.gz
|
CC-MAIN-2023-06
| 2,206 | 20 |
https://migtown.show/migtown-show/episode-012-drexel-vs-love/
|
code
|
Migtown Podcast Episode 012: Drexel vs Love
Drexel and Producer Tim talk about the death of Rush Limbaugh, the death of love, ain’t nobody got respect, why Ron Toye acts the way he does, answer some emails and more!
Show website and schedule: https://migtown.show/about
Subscribe to our YouTube channel: https://www.youtube.com/migtownpodcast
MGTOW.tv link: https://www.mgtow.tv/@migtownpodcast
Download the audio: https://migtown.libsyn.com
Love the show? It costs money to produce this. Support the show by subscribing to our Patreon or Subscribe Star (or both, I won’t stop you).
Find Migtown on podcasting apps
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00723.warc.gz
|
CC-MAIN-2022-27
| 618 | 8 |
https://docs.yoyogames.com/source/dadiospice/002_reference/windows%20and%20views/display_get_dpi_x.html
|
code
|
Dots per inch (DPI) is a measure of spatial printing or video dot density, in particular the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm). When working on mobile devices (in particular Android devices) this is an important factor to take into consideration, as what may be appropriate for one display resolution may not be appropriate for another. For example, you may have two displays with the same resolution of 400 x 800, but display 1 has a DPI of 60 and display 2 has a DPI of 30. In this case, any text or image displayed on display 2 will appear much larger, even though the actual resolution is the same.
This function will get the DPI of the device display along the x axis (this value is also dependent on the orientation of the device). Please note that Mac and iOS do not return specific DPI settings but appear to return the same values as the OS, which are not correct (but will have to do), as Apple does not give the correct values.
dpx = display_get_dpi_x();
This would set the variable "dpx" to the dpi value of the x axis.
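As a rough illustration (my own sketch, not from the manual), the returned value is typically turned into a scale factor against a baseline DPI; the 160 dpi baseline here is an assumption:
// Scale drawn text relative to an assumed 160 dpi baseline
var dpx = display_get_dpi_x();
var ui_scale = dpx / 160;
draw_text_transformed(32, 32, "Hello", ui_scale, ui_scale, 0);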
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00507.warc.gz
|
CC-MAIN-2021-39
| 1,085 | 14 |
https://gist.github.com/frnhr?direction=desc&sort=created
|
code
|
- We export a Swagger spec file (api_swagger.json) from DRF code and import it into StopLight as a new API, first version.
- Commit this file on the dev branch on GitHub.
- In StopLight we make any necessary changes:
- assign groups to the API endpoints
- we should not edit descriptions that were imported, because of possible Git merge conflicts later on
- Export a Swagger spec file from StopLight.
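(For what it's worth, if the DRF project happens to use drf-yasg, the export-and-commit steps can be scripted; the tool choice and branch name are assumptions on my part:)
python manage.py generate_swagger api_swagger.json
git checkout dev
git add api_swagger.json && git commit -m "Export Swagger spec from DRF"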
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00383.warc.gz
|
CC-MAIN-2023-40
| 397 | 8 |
https://www.digitalocean.com/community/questions/able-to-login-but-not-able-to-access-my-projects
|
code
|
Not able to access this link below after login
Failed to load resource
which in detail says
Because a cookie’s SameSite attribute was not set or is invalid, it defaults to SameSite=Lax, which prevents the cookie from being sent in a cross-site request. This behavior protects user data from accidentally leaking to third parties and cross-site request forgery.
Resolve this issue by updating the attributes of the cookie: specify SameSite=None and Secure if the cookie should be sent in cross-site requests (this enables third-party use), or specify SameSite=Strict or SameSite=Lax if the cookie should not be sent in cross-site requests.
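Concretely, the response header the browser is asking for looks something like this (cookie name and value are placeholders):
Set-Cookie: session=abc123; SameSite=None; Secure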
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663048462.97/warc/CC-MAIN-20220529072915-20220529102915-00753.warc.gz
|
CC-MAIN-2022-21
| 822 | 6 |
https://www.eureka.im/3666.html
|
code
|
When a Definition/Results File containing Monitor Points is opened in CFX Pre 11.0, some of the Monitor Points are not highlighted in the Viewer. Highlighting them causes the following error to be generated:
CCL validation failed with message:
Error: The essential parameter 'Option' is missing from /POINT:Point Point 1
This is due to a bug in CFX Pre, and can occur when the file contains Monitor Points defined using both the Coordinate and Expression options.
The error can be fixed by visiting the Monitor tab of Output Control, and re-applying the Monitor Point data.
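For reference, a coordinate-based monitor point ends up in CCL along these lines (a hedged sketch from memory; the point name and coordinate values are placeholders, and re-applying the data in the Monitor tab restores the missing Option parameter):
OUTPUT CONTROL:
  MONITOR OBJECTS:
    MONITOR POINT: Point 1
      Option = Cartesian Coordinates
      Cartesian Coordinates = 0 [m], 0 [m], 0 [m]
    END
  END
END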
ANSYS CFX Release 11.0 Service Pack 1.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00105.warc.gz
|
CC-MAIN-2020-05
| 616 | 6 |
https://www.geekzone.co.nz/forums.asp?forumid=64&topicid=51810
|
code
|
And of those who have, have you noticed the cool HA ability of the IW online gaming system?
Let me explain when I say HA ability:
What I have noticed is that if you are connected to a game and playing away happily, the game experience may suddenly begin to deteriorate on the host. You think, "great, my connection is bugging out." Then the screen freezes and you think, "awesome, here comes the timeout/lost score/restart session", but no, the host migrates the game to a more suitable host for you to continue playing on!
The first place I saw this was with ESX and Virtual Center and the ability to have your guest OS move between ESX hosts if one ESX host fails.
What a good idea: it means online game play is more reliable and more seamless for people not familiar with it (no need to find another server/host or worry about the connection speed).
Has anyone else experienced this yet?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148375.36/warc/CC-MAIN-20200229022458-20200229052458-00022.warc.gz
|
CC-MAIN-2020-10
| 875 | 6 |
http://goldmourn.livejournal.com/
|
code
|
#VEDA Day 27 (VEGAN TAG + Vlog)
#VEDA Day 26
#VEDA Day 25
#VEDA Day 24
#VEDA Day 23
#VEDA Day 22
The Tragically Hip on CBC (20 August 2016) [Link]
30 minutes of my personal experience watching The Tragically Hip on CBC television: a 3-hour concert with 30 songs, including 3 encores, by The Tragically Hip.
I didn't record all of the greatest or most moving moments - for example, "Grace, Too" - because I was crying a lot.
Official Hip website: The Tragically Hip
Albums (with links to music & lyrics):
Gord Downie Fund for Brain Cancer Research:
All Music Belongs to The Tragically Hip.
Broadcast on CBC Television.
#VEDA Day 21
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983019893.83/warc/CC-MAIN-20160823201019-00043-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 633 | 15 |
http://stackoverflow.com/questions/3954088/frameworks-comparation-lift-play-and-wicket?answertab=oldest
|
code
|
What are the advantages and disadvantages of frameworks Lift, Play and Wicket? What characteristics are best or only supported by each?
Lightweight Java-based framework, with Scala support available as an extra.
very good for rapid prototyping, fast-feedback-loop kind of work. Embeds the compiler, so you just edit source code in place and pages get immediately updated. Learning curve is shallow.
Stateful Java-based framework, with Scala support available as an extra.
Shallower learning curve into Scala, especially if you already have wicket experience. Good separation of concerns, POJO-based model. Arguably one of the best Java web frameworks currently available.
Stateful native-Scala framework. Deep Scala integration, so no need to generate bean setter/getter methods or worry about interop between Java/Scala collections. Fully embraces functional-programming concepts, such as immutability and closures.
Also the steepest learning-curve of the three. One common piece of advice is therefore to learn the Scala language before getting started with Lift, especially if you come from a Java background.
There are also other Scala-based frameworks available (such as Scalatra and Pinky) for web development, though not as well-known as Lift. It wouldn't hurt to check these out as well!
For more information, see this question: http://stackoverflow.com/questions/1488412/what-scala-web-frameworks-are-available
There are many threads that compare these web frameworks for Scala. See
http://click.apache.org/ - stateless Java-based framework for light web applications.
Both have excellent documentation and are easy to learn.
Talking about the advantages of Lift, one should mention Seven Things where Lift really excels. In short:
Just visit the linked page for more details - these features really make Lift unique among competitors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737937342.66/warc/CC-MAIN-20151001221857-00245-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 1,845 | 14 |
http://www.trulia.com/voices/blogs/popular/Grants_Pass---16588
|
code
|
Boy, are we out there on this one... you cannot answer any questions relating to hate crimes. Mansur is correct. With reference to answering a question like "are there a lot of xxx in the neighborhood": well, I'm a professional and do enjoy having my license, never mind the obvious discriminatory issues, so, no, I would never answer a question like that either!!!
Can you clarify your question? I understand why Mansur provided the answer he did...but while questions regarding a community's ethnicity, for instance, should not be answered by agents, I don't think a question like, "Are there a lot of xxxx in this neighborhood?" rises to the level of a "hate crime." It may or may not be motivated by bigotry, but hate crime? No.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276780.5/warc/CC-MAIN-20160524002116-00212-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 732 | 2 |
https://answers.ea.com/t5/Technical-Issues/Game-refuses-to-Download/m-p/6493146
|
code
|
My launcher has been sitting here for two days "downloading" the game and it is still at 0%, AND nothing is flashing by like what usually happens when you have a download bar. I uninstalled the game two days ago because for the two days before that it was "patching" the game with no progress shown. Someone please help!!
Most Common Solution:
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371883359.91/warc/CC-MAIN-20200410012405-20200410042905-00524.warc.gz
|
CC-MAIN-2020-16
| 338 | 2 |
https://www.freelists.org/post/oracle-l/Force-specific-plan-to-be-used,12
|
code
|
But wouldn't that show up in the plan? I must be missing something.
How can you tell by examining the predicate section where the filtering is happening?
On Thu, Oct 31, 2019, 8:35 PM Tanel Poder <tanel@xxxxxxxxxxxxxx> wrote:
Even if the plan hash values are the same, go still ahead and compare the
predicate sections of the good vs bad child cursors.
Predicate existence (or placement) is not part of the plan hash value - so
how early you're filtering the rows may differ.
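(For the archives: you can dump each child cursor's plan and predicate section with DBMS_XPLAN and compare them side by side; the sql_id below is a placeholder:)
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR('your_sql_id', NULL, 'TYPICAL'));
-- passing NULL as the child number lists every child cursor for that sql_id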
On Thu, Oct 31, 2019 at 2:03 PM Jeffrey Beckstrom <jbeckstrom@xxxxxxxxx>
I have a couple of SQL statements that have multiple child cursors. Each
child cursor has the same plan hash value. The plans all show "this is an
adaptive plan (rows marked '-' are inactive)". The difference is that on
the "good" child cursor, the plan also shows "statistics feedback used for
this statement". Since all of the plans have the same plan hash value, I
cannot use baselines (or can I?).
Any suggestions on how to force Oracle to always use the "good" child cursor?
Lead Database Administrator
Information Technology Department
Greater Cleveland Regional Transit Authority
1240 W. 6th Street
Cleveland, Ohio 44113
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595282.35/warc/CC-MAIN-20200119205448-20200119233448-00275.warc.gz
|
CC-MAIN-2020-05
| 1,157 | 20 |
https://community.filemaker.com/thread/94120
|
code
|
Copy data,table to table
I want to copy field information from several fields in one table to another table, into alike fields.
The script I am using is this:
Go to Layout ["CLIENT DATA" (Client)]
Go to Layout ["CHILD DATA" (Child)]
Go to Field [Child::Home_Address]
I do this for each field of information I want.
Basically, if the address information is empty in the child table and it will be the same as the parents' address info, I want to copy the info over.
I do not want to have to repeat the steps for each field of information.
Is there a way around this?
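(One way to avoid the per-field layout-hopping is a single block of Set Field steps, assuming a relationship exists between the Client and Child tables; the second field name here is hypothetical:)
If [ IsEmpty ( Child::Home_Address ) ]
    Set Field [ Child::Home_Address ; Client::Home_Address ]
    Set Field [ Child::Home_City ; Client::Home_City ]    # hypothetical extra field
End If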
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591296.46/warc/CC-MAIN-20180719203515-20180719223515-00024.warc.gz
|
CC-MAIN-2018-30
| 548 | 10 |
https://www.r-bloggers.com/2010/06/comparing-standard-r-with-revoutions-for-performance/
|
code
|
Following on from my previous post about improving performance of R by linking with optimized linear algebra libraries, I thought it would be useful to try out the five benchmarks Revolutions Analytics have on their Revolutionary Performance pages.
For convenience I collected their tests into a single script revolution_benchmark.R that I can simply run with
Rscript --vanilla revolution_benchmark.R.
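For a sense of what these benchmarks look like, here is a hedged sketch of timing one of them (SVD) by hand; the matrix size is one I picked for illustration, not necessarily the one the benchmark uses:
set.seed(42)
m <- matrix(rnorm(1000 * 1000), ncol = 1000)   # illustrative size
system.time(svd(m))                            # compare elapsed time across BLAS builds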
The results, compared with the speed-up factors Revolution claims for their version:
|R||R + ATLAS||Speed-up||Revolution’s
|Singular Value Decomposition||98.73||23.57||3.2||12.6|
|Principal Components Analysis||454.55||40.92||10.1||15.2|
|Linear Discriminant Analysis||271.44||79.61||2.4||4.4|
In all instances Revolution’s claimed speed-up is greater, though probably not significantly so for the Matrix Multiply test and hardly so for the Principal Components Analysis. (Of course, I do not have a copy of Revolution Analytics’ product, so I can’t verify their claims or make a comparable test.)
Whether saving 48 seconds on a linear discriminant analysis is enough to justify buying the product is a decision I leave to you: you know what analysis you do. For me, there are (many) orders of magnitudes to be gained by better algorithms and better variable selections so I am not too worried about factors of 2 or even 10. For extra raw power, I run R on a cloud service like AWS which scales well for many problems and is easy to do with stock R while I guess there are some sort of license implications if you wanted to do the same with Revolution’s product. (But I like Revolution and am still trying to find an excuse to use their product.)
Your mileage may vary.
Jump to comments.
You may also like these posts:
Can we make our analysis using the R statistical computing and analysis platform run faster? Usually the answer is yes, and the best way is to improve your algorithm and variable selection. But recently David Smith was suggesting that a big benefit of their (commercial) version of R was that it was linked to a to a better linear algebra library. So I decided to investigate. The quick summary is that it only really makes a difference for fairly artificial benchmark tests. For “normal” work you are unlikely to see a difference most of the time.
When using R , the statistical analysis and computing platform, I find it really annoying that it always prompts to save the workspace when I exit. This is how I turn it off.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00131.warc.gz
|
CC-MAIN-2022-40
| 2,458 | 15 |
https://www.meetup.com/css-56/messages/7664960/
|
code
|
Seeking committed Web Designer - Natuba.com
Thursday, September 24, 2009 4:17 PM
Local (Houston) inquiries only, please - this is an in-house position.
Natuba.com is looking for a highly ambitious, very committed Web Designer who takes his or her job seriously.
We are an up-and-coming website with a high-energy, low maintenance staff and expect nothing less from our future employee.
The ideal candidate should be proficient in:
Natuba is written mostly in Django Python, so someone willing to work with/learn Django templates will help.
Please reply with the following:
* Your name
* References (5 minimum preferably)
* Why you wish to join our team
* Anything else we should know about you
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00293-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 693 | 12 |
https://www.helsinki.fi/en/researchgroups/ancient-near-eastern-empires/news/from-sherds-of-pottery-to-machine-readable-hieroglyphic-texts
|
code
|
My research project, From Sherds of Pottery to Open Egyptological Data, aims to promote the digital research of ancient Egyptian hieroglyphic texts. I am an Egyptologist from my background and a member of ANEE. I previously worked with Assyriologists from ANEE to study Akkadian texts using digital methods. I was responsible for the pre- and post-processing of text data and the visualization of the analysis results. My current project started in 2021 with funding from the Finnish Cultural Foundation. Since the beginning of 2022, I have been able to focus on the project thanks to a three-year grant from the Kone foundation.
The use of digital methods in the study of texts requires that the texts are in a machine-readable format. Assyriologists have several corpora of machine-readable cuneiform texts at their disposal. Several ANEE researchers use texts that can be freely downloaded to one’s computer from the Open Richly Annotated Cuneiform Corpus online service. There is no similar service in Egyptology, although certain online portals can be used to search for phrases in which different words are used, and these services are based on corpora of machine-readable texts.
Hieroglyphic texts are more complex in structure than texts written in many other writing systems. Hieroglyphic signs usually form groups; for example, smaller signs are placed above or below an oblong one (figure 1), and sometimes a character can even be on top of another. In fact, Egyptologists have long been producing machine-readable hieroglyphic texts using special hieroglyphic text editors. With these programs, the hieroglyphs can be arranged as they are in the original text and an image of the hieroglyphic text can be produced, which can then be used, for example, in a book. The hieroglyphs are produced using codes based on a standard classification of hieroglyphs, the so-called Gardiner's sign list. The signs are classified into lettered categories according to what they represent; each sign has a number in the category it belongs to (figure 2). The encoding produced with these codes is machine-readable, but since it is stored in a binary file, it cannot be read without a program built for that purpose. It hasn't even occurred to Egyptologists to publish the encoded texts, because they don't use the codes otherwise but interpret the texts directly into transliterated words.
Since there is still no working method for text recognition of hieroglyphic texts, I produce encoded hieroglyphic texts by hand with a text editor called JSesh. In addition, I build tools for processing and publishing machine-readable texts. One of the tools helps convert a binary file containing encoded text into a text file. The project's main goal is to build a workflow for the semi-automatic transliteration of encoded hieroglyphic texts. For that, I have created language models from two available text corpora: the Ramses Transliteration Corpus and the Thesaurus Linguae Aegyptiae. The language models consist of all word forms in the texts, their frequencies, and their transliterations. The first task is to divide the text into words, because hieroglyphic texts do not indicate word or sentence boundaries. Then the language models are used to transliterate each word. It is probable that not all word forms in the sentence to be transliterated can be found in the language models. One can then examine parts of the word and the transliterations they have and, for example, see which of those transliterations is most likely given the previous word.
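As a toy illustration of the frequency-lookup idea (the data structures and Gardiner-code keys here are my own assumptions, not the project's actual format), a first-pass transliteration step could look like this in Python:

from collections import defaultdict

# word_codes -> {transliteration: count}, built from a corpus (format assumed)
model = defaultdict(lambda: defaultdict(int))
model["N35"]["n"] += 3      # hypothetical Gardiner-code key and reading
model["N35"]["imn"] += 1

def transliterate(word_codes):
    """Return the most frequent attested reading, or None for unseen forms."""
    readings = model.get(word_codes)
    if not readings:
        return None  # real workflow: back off to sub-word lookup / previous-word context
    return max(readings, key=readings.get)

print(transliterate("N35"))  # -> "n"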
There is no tradition in Egyptology to make research data available to other researchers, let alone promote its reuse by publishing it under an open license. That's why the hieroglyphic texts produced in my project will be openly published in a machine-readable format. The tools will also be published for other researchers to use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00547.warc.gz
|
CC-MAIN-2024-18
| 3,882 | 5 |
https://bdtechtalks.com/2021/02/22/reinforcement-learning-ad-optimization/
|
code
|
This article is part of “Deconstructing artificial intelligence,” a series of posts that explore the details of how AI applications work.
Every day, digital advertisement agencies serve billions of ads on news websites, search engines, social media networks, video streaming websites, and other platforms. And they all want to answer the same question: Which of the many ads they have in their catalog is more likely to appeal to a certain viewer? Finding the right answer to this question can have a huge impact on revenue when you are dealing with hundreds of websites, thousands of ads, and millions of visitors.
Fortunately (for the ad agencies, at least), reinforcement learning, the branch of artificial intelligence that has become renowned for mastering board and video games, provides a solution. Reinforcement learning models seek to maximize rewards. In the case of online ads, the RL model will try to find the ad that users are more likely to click on.
The digital ad industry generates hundreds of billions of dollars every year and provides an interesting case study of the powers of reinforcement learning.
Naïve A/B/n testing
To better understand how reinforcement learning optimizes ads, consider a very simple scenario: You’re the owner of a news website. To pay for the costs of hosting and staff, you have entered a contract with a company to run their ads on your website. The company has provided you with five different ads and will pay you one dollar every time a visitor clicks on one of the ads.
Your first goal is to find the ad that generates the most clicks. In advertising lingo, you will want to maximize your click-through rate (CTR). The CTR is ratio of clicks over number of ads displayed, also called impressions. For instance, if 1,000 ad impressions earn you three clicks, your CTR will be 3 / 1000 = 0.003 or 0.3 percent.
Before we solve the problem with reinforcement learning, let’s discuss A/B testing, the standard technique for comparing the performance of two competing solutions (A and B) such as different webpage layouts, product recommendations, or ads. When you’re dealing with more than two alternatives, it is called A/B/n testing.
In A/B/n testing, the experiment’s subjects are randomly divided into separate groups and each is provided with one of the available solutions. In our case, this means that we will randomly show one of the five ads to each new visitor of our website and evaluate the results.
Say we run our A/B/n test for 100,000 iterations, roughly 20,000 impressions per ad. Here are the clicks-over-impression ratio of our ads:
Ad 1: 80/20,000 = 0.40% CTR
Ad 2: 70/20,000 = 0.35% CTR
Ad 3: 90/20,000 = 0.45% CTR
Ad 4: 62/20,000 = 0.31% CTR
Ad 5: 50/20,000 = 0.25% CTR
Our 100,000 ad impressions generated $352 in revenue with an average CTR of 0.35%. More importantly, we found out that ad number 3 performs better than the others, and we will continue to use that one for the rest of our viewers. With the worst performing ad (ad number 2), our revenue would have been $250. With the best performing ad (ad number 3), our revenue would have been $450. So, our A/B/n test provided us with the average of the minimum and maximum revenue and yielded the very valuable knowledge of the CTR rates we sought.
Digital ads have very low conversion rates. In our example, there’s a subtle 0.2-percent difference between our best- and worst-performing ads. But this difference can have a significant impact at scale. At 1,000 impressions, ad number 3 will generate an extra $2 in comparison to ad number 5. At a million impressions, this difference will become $2,000. When you’re running billions of ads, a subtle 0.2 percent can have a huge impact on revenue.
Therefore, finding these subtle differences is very important in ad optimization. The problem with A/B/n testing is that it is not very efficient at finding these differences. It treats all ads equally and you need to run each ads tens of thousands of times until you discover their differences at a reliable confidence level. This can result in lost revenue, especially when you have a larger catalog of ads.
Another problem with classic A/B/n testing is that it is static. Once you find the optimal ad, you will have stick to it. If the environment changes due to a new factor (seasonality, news trends, etc.) and causes one of the other ads to have a potentially higher CTR, you won’t find out unless you run the A/B/n test all over again.
What if we could change A/B/n testing to make it more efficient and dynamic?
This is where reinforcement learning comes into play. A reinforcement learning agent starts by knowing nothing about its environment actions, rewards, and penalties. The agent must find a way to maximize its rewards.
In our case, the RL agent’s actions are one of five ads to display. The RL agent will receive a reward point every time a user clicks on an ad. It must find a way to maximize ad clicks.
The multi-armed bandit
In some reinforcement learning environments, actions are evaluated in sequences. For instance, in video games, you must perform a series of actions to reach the reward, which is finishing a level or winning a match. But when serving ads, the outcome of every ad impression is evaluated independently; it is a single-step environment.
To solve the ad optimization problem, we’ll use a “multi-armed bandit” (MAB), a reinforcement learning algorithm that is suited for single-step reinforcement learning. The name of the multi-armed bandit comes from an imaginary scenario in which a gambler is standing at a row of slot machines. The gambler knows that the machines have different win rates, but he doesn’t know which one provides the highest reward.
If he sticks to one machine, he might lose the chance of selecting the machine with the highest win rate. Therefore, the gambler must find an efficient way to discover the machine with the highest reward without using up too much of his tokens.
Ad optimization is a typical example of a multi-armed bandit problem. In this case, the reinforcement learning agent must find a way to discover the ad with the highest CTR without wasting too much valuable ad impressions on inefficient ads.
Exploration vs exploitation
One of the problems every reinforcement learning model faces is the “exploration vs exploitation” challenge. Exploitation means sticking to the best solution the RL agent has so far found. Exploration means trying other solutions in hopes of landing on one that is better than the current optimal solution.
In the context of ad selection, the reinforcement learning agent must decide between choosing the best-performing ad and exploring other options.
One solution to the exploitation-exploration problem is the “epsilon-greedy” (ε-greedy) algorithm. In this case, the reinforcement learning model will choose the best solution most of the time, and in a specified percent of cases (the epsilon factor) it will choose one of the ads at random.
Here’s how it works in practice. Say we have an epsilon-greedy MAB agent with the ε factor set to 0.2. This means that the agent chooses the best-performing ad 80 percent of the time and explores other options 20 percent of the time.
The reinforcement learning model starts without knowing which of the ads performs better, therefore it assigns each of them an equal value. When all ads are equal, it will choose one of them at random each time it wants to serve an ad.
After serving 200 ads (40 impressions per ad), a user clicks on ad number 4. The agent adjusts the CTR of the ads as follows:
Ad 1: 0/40 = 0.0%
Ad 2: 0/40 = 0.0%
Ad 3: 0/40 = 0.0%
Ad 4: 1/40 = 2.5%
Ad 5: 0/40 = 0.0%
Now, the agent thinks that ad number 4 is the top performing ad. For every new ad impression, it will pick a random number between 0 and 1. If the number is above 0.2 (the ε factor), it will choose ad number 4. If it’s below 0.2, it will choose one of the other ads at random.
Now, our agent runs 200 other ad impressions before another user clicks on an ad, this time on ad number 3. Note that of these 200 impressions, 160 belong to ad number 4, because it was the optimal ad. The rest are equally divided between the other ads. Our new CTR values are as follows:
Ad 1: 0/50 = 0.0%
Ad 2: 0/50 = 0.0%
Ad 3: 1/50 = 2.0%
Ad 4: 1/200 = 0.5%
Ad 5: 0/50 = 0.0%
Now the optimal ad becomes ad number 3. It will get 80 percent of the ad impressions. Let's say after another 96 impressions (80 for ad number three, four for each of the other ads), someone clicks on ad number 2. Here's what the new CTR distribution looks like:
Ad 1: 0/54 = 0.0%
Ad 2: 1/54 = 1.8%
Ad 3: 1/130 = 0.7%
Ad 4: 1/204 = 0.49%
Ad 5: 0/54 = 0.0%
Now, ad number 2 is the optimal solution. As we serve more ads, the CTRs will reflect the real value of each ad. The best ad will get the lion’s share of the impressions, but the agent will continue to explore other options. Therefore, if the environment changes and users start to show more positive reactions to a certain ad, the RL agent can discover it.
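To make the mechanics concrete, here is a minimal Python simulation of the ε-greedy agent walked through above. It is a sketch under stated assumptions: the "true" click probabilities are stand-ins so the script is runnable, and ties break toward the lowest ad index.

import random

TRUE_CTR = [0.0040, 0.0035, 0.0045, 0.0031, 0.0025]  # simulated, unknown to the agent
EPSILON = 0.2

clicks = [0] * len(TRUE_CTR)
impressions = [0] * len(TRUE_CTR)

def pick_ad():
    # Explore with probability epsilon, otherwise exploit the best CTR so far.
    if random.random() < EPSILON:
        return random.randrange(len(TRUE_CTR))
    rates = [c / i if i else 0.0 for c, i in zip(clicks, impressions)]
    return max(range(len(rates)), key=lambda a: rates[a])

for _ in range(100_000):
    ad = pick_ad()
    impressions[ad] += 1
    if random.random() < TRUE_CTR[ad]:  # simulated viewer click
        clicks[ad] += 1

print(clicks, impressions)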
After running 100,000 ads, our distribution can look something like the following:
Ad 1: 123/30,600 = 0.40% CTR
Ad 2: 67/18,900 = 0.35% CTR
Ad 3: 187/41,400 = 0.45% CTR
Ad 4: 35/11,300 = 0.31% CTR
Ad 5: 15/5,800 = 0.26% CTR
With the ε-greedy algorithm, we were able to increase our revenue from $352 to $426 on 100,000 ad impressions with an average CTR of 0.42 percent. This is a great improvement over the classic A/B/n testing model.
Improving the ε-greedy algorithm
The key to the ε-greedy reinforcement learning algorithm is adjusting the epsilon factor. If you set it too low, it will exploit the ad which it thinks is optimal at the expense of not finding a possibly better solution. For instance, in the example we explored above, ad number four happens to generate the first click, but in the long run, it doesn’t have the highest CTR. Small sample sizes do not necessarily represent true distributions.
On the other hand, if you set the epsilon factor too high, your RL agent will waste too many resources exploring non-optimal solutions.
One way you can improve the epsilon-greedy algorithm is defining a dynamic policy. When the MAB model is fresh, you can start with a high epsilon value to do more exploration and less exploitation. As your model serves more ads and gets a better estimate of the value of each solution, it can gradually reduce the epsilon value until it reaches a threshold value.
In the context of our ad-optimization problem, we can start with an epsilon value of 0.5 and reduce it by 0.01 after every 1,000 ad impressions until it reaches 0.1.
Another way to improve our multi-armed bandit is to put more weight on new observations and gradually reduces the value of older observations. This is especially useful in dynamic environments such as digital ads and product recommendations, where the value of solutions can change over time.
Here’s a very simple way you can do this. The classic way to update the CTR after serving an ad is as follows:
(result + past_results) / impressions
Here, result is the outcome of the ad displayed (1 if clicked, 0 if not clicked), past_results is the cumulative number of clicks the ad has garnered so far, and impressions is the total number of times the ad has been served.
To gradually fade old results, we add a new alpha factor (between 0 and 1), and make the following change:
(result + past_results * alpha) / impressions
This small change gives more weight to new observations. Therefore, if you have two competing ads with an equal number of clicks and impressions, the one whose clicks are more recent will be favored by your reinforcement learning model. Also, if an ad had a very high CTR in the past but has become unresponsive recently, its value will decline faster in this model, forcing the RL model to move to other alternatives earlier and waste fewer resources on the inefficient ad.
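Both update rules fit in one small helper; alpha=1.0 reproduces the classic average, while values below 1 fade older clicks (packaging it as a function is my own choice, not the article's):

def updated_ctr(result, past_results, impressions, alpha=1.0):
    # result: 1 if this impression was clicked, 0 otherwise
    # past_results: (possibly decayed) cumulative clicks before this impression
    # impressions: total number of times this ad has been served
    return (result + past_results * alpha) / impressions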
Adding context to the reinforcement learning model
In the internet age, websites, social media platforms, and mobile apps have plenty of information on every single user, such as their geographic location, device type, and the exact time of day they're viewing the ad. Social media companies have even more information about their users, including age and gender, friends and family, the type of content they have shared in the past, the type of posts they liked or clicked on in the past, and more.
This rich information gives these companies the opportunity to personalize ads for each viewer. But the multi-armed bandit model we created in the previous section shows the same ad to everyone and doesn't take the specific characteristics of each viewer into account. What if we wanted to add context to our multi-armed bandit?
One solution is to create several multi-armed bandits, each for a specific segment of users. For instance, we can create separate RL models for users in North America, Europe, the Middle East, Asia, Africa, and so on. What if we also wanted to factor in gender? Then we would have one reinforcement learning model for female users in North America, one for male users in North America, one for female users in Europe, one for male users in Europe, etc. Now add age ranges and device types, and you can see that this quickly develops into an explosion of multi-armed bandits that become hard to train and maintain.
An alternative solution is to use a “contextual bandit,” an upgraded version of the multi-armed bandit that takes contextual information into account. Instead of creating a separate MAB for each combination of characteristics, the contextual bandit uses “function approximation,” which tries to model the performance of each solution based on a set of input factors.
Without going too much into the details (that could be the subject of another post), our contextual bandit uses supervised machine learning to predict the performance of each ad based on location, device type, gender, age, etc. The benefit of the contextual bandit is that it uses one machine learning model per ad instead of creating an MAB per combination of characteristics.
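A minimal sketch of this idea, assuming one online logistic-regression model per ad and scikit-learn as the supervised learner (both are illustrative choices; the article doesn't prescribe a library):

import random
import numpy as np
from sklearn.linear_model import SGDClassifier

EPSILON, N_ADS = 0.1, 5
# one supervised model per ad; loss="log_loss" ("log" in older scikit-learn) gives click probabilities
models = [SGDClassifier(loss="log_loss") for _ in range(N_ADS)]
seen = [False] * N_ADS   # predict_proba only works after the first partial_fit

def predicted_ctr(i, x):
    return models[i].predict_proba(x.reshape(1, -1))[0, 1] if seen[i] else 0.0

def choose_ad(x):
    # x: context vector (location, device type, age band, ... encoded as numbers)
    scores = [predicted_ctr(i, x) for i in range(N_ADS)]
    best = int(np.argmax(scores))
    if random.random() > EPSILON:
        return best
    return random.choice([i for i in range(N_ADS) if i != best])

def record_outcome(i, x, clicked):
    models[i].partial_fit(x.reshape(1, -1), [int(clicked)], classes=[0, 1])
    seen[i] = True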
This wraps up our discussion of ad optimization with reinforcement learning. The same reinforcement learning techniques can be used to solve many other problems, such as content and product recommendation or dynamic pricing, and are used in other domains such as health care, investment, and network management.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00012.warc.gz
|
CC-MAIN-2023-40
| 14,510 | 79 |
http://forum.platform11.org/showpost.php?s=0e5f7907512aaad0e6f5f3cdf2943ca4&p=69422&postcount=35
|
code
|
Originally Posted by Jamie2k9
Why are people still making such an issue over this? Dublin Bus and the Luas are always crammed during rush hour, so why do people expect to have a seat on a DART, or not to have to stand for a short period of time?
Because the capacity and ability are there for people not to have to stand; it's just not being used. I understand it's normal to stand on a rush-hour commute anywhere in the world, but usually that's because the system is running at capacity: all trains are in use, the sidings and depots are empty except for units under maintenance, and the sheer number of people demanding service still produces crush conditions. Here we are artificially imposing crush conditions on ourselves when there is no need; there is spare capacity in the form of extra carriages that is simply not being used. That's not acceptable.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668644.10/warc/CC-MAIN-20191115120854-20191115144854-00304.warc.gz
|
CC-MAIN-2019-47
| 846 | 5 |
https://blockchainhealthcarereview.com/
|
code
|
Blockchain for Consumers
We seek to personalize consumer-centric adoption by providing editorial content that helps digital health adopters make well-educated decisions about how to engage with blockchain-based solutions, based on their own clinical, financial, and personal lifestyle needs.
Use our blockchain intelligence data to analyze:
- Who is developing similar use cases?
- Which projects have failed? Who is winning?
- Technical and product due diligence
- White papers/light papers/pitch decks
- Brand channels and communities
Use our data to search the world over for:
- Who is developing similar use cases you are considering?
- Who has failed?
- Who is winning?
- Technical and product due diligence
- Access white papers/light papers/pitch decks
- Discovery of Brand channels beyond typical social media such as GitHub
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00499.warc.gz
|
CC-MAIN-2021-21
| 822 | 14 |
https://www.jehovahs-witness.com/topic/156471/anyone-else-ever-do-this?page=3
|
code
|
I never realized that Witnesses did such things until I came here. It's good to know this stuff. I've heard about a lot of this kind of thing on this site. It reminds me of Jerry Lewis' movie where he's trying to look busy, and wears himself out pretending to do stuff at work.
Anyone else ever do this
I didn't. I wasn't interested in getting privileges, so if I hadn't prepared, my books were unmarked.
What I did do (when I did prepare) was to see how straight I could make my underlining without the use of a ruler. It became a little game that I would play with myself, and I suppose it eased the boredom. I experimented with ways of holding the pen and drawing speeds, and if I were to end up with a wavy line I'd be disappointed and try again so that I ended up with a thick line which meant that I had to make the other lines in that paragraph the same thickness...
Christ. I'm glad to be out.
Yup, although towards the end I went hardcore-zealous in studying and looking up every single scripture, filling the margins with notes, etc.
Problem is it backfired and caused me to lose my faith when it made me realize that it was all a bunch of bloody (literally) fairy tales.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703581888.64/warc/CC-MAIN-20210125123120-20210125153120-00496.warc.gz
|
CC-MAIN-2021-04
| 1,179 | 7 |
https://kelleybardphotography.wordpress.com/2011/04/18/frosted-mountain/
|
code
|
I loved this mountain top, though. It did look exactly like it was frosted, as well as the trees. Gorgeous drive and a great spot. Seen near the summit of Wolf Creek Pass 1 week ago.
Also part of the images shown at http://www.kelleybard.com/places, and on various other online spots (facebook, flickr, etc) with handy links to the right.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647901.79/warc/CC-MAIN-20180322170754-20180322190754-00717.warc.gz
|
CC-MAIN-2018-13
| 338 | 2 |
http://boards.lineage2.com/archive/index.php/t-152139.html
|
code
|
12-16-2006, 12:26 AM
Hello! I'm looking for new clan members to join me in a quest for holy knighthood! The clan is called First Knights, based on a book written as a sequel to the Holy Grail story, pertaining to King Arthur and the Knights of the Round Table. We are a level 3 clan and, with enough members, will be pushing level 4. I have enough SP to go all the way to level 5 at a moment's notice by fulfilling the clan quests, and enough to buy many clan skills. So if you are interested, please reply to this message on the board. I will contact you within a few hours of your post!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698063918/warc/CC-MAIN-20130516095423-00046-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 571 | 2 |
https://go.mep.trimble.com/remote-working/ec-cad
|
code
|
Working in remote locations with EC-CAD can be easy to achieve. Please talk to your company's EC-CAD administrator or IT administrator to find out the best method to work remotely.
Remote Working Instructions for EC-CAD
The Trimble EC-CAD for AutoCAD MEP program can be installed on as many systems as you wish. Whether it is an additional work laptop or a home PC does not matter. The application will install, but you will need the license and database for it to function. The EC-CAD license keys are "plug & play," meaning that they can be pulled from any PC and plugged into another. The EC-CAD database takes a little more setup depending on your current work configuration. Below are a few scenarios.
EC-CAD with a local SQL Database and local license key.
It is typical in this situation that a user will bring the system either home or onto a job site. Once the user is off the work domain, they may need to modify the SQL database owner. This is done by using Microsoft SQL Server Management Studio to open the properties of the database and change the database owner to "sa".
This allows the user to connect to SQL database as system admin. Below is more information on modifying your SQL database owner for connection off of your work domain.
EC-CAD with a local SQL Database and Network license keys
Some users may be using a network license key plugged into a work server. These network license keys allow one or more instances of the product to be used. To access a network license key plugged into your work server, you will need to have your IT people set up a VPN connection to the office network. Once the VPN is set up and configured properly, you should be able to pull a license. There are times when the server or network requires additional configuration to allow this to work. If you are the only one using the network key, it would be more efficient to simply unplug the key from the server and use it locally.
EC-CAD with a Networked SQL Database
You may have your SQL Server instance and database hosted on a server. We often install a new instance of SQL Server on the local system going off site, then perform a database backup and restore.
You can create a backup of your networked database using Microsoft SQL Server Management Studio. You can then "restore" the database to the local instance of SQL Server on your home PC/laptop.
Below are instructions on how we typically perform a database backup/Restore.
Modifying SQL Database Owner
There are times when you may want to change the database owner to "sa".
SQL Server Database – Creating .bak file
SQL Server Database – Restoring Backup
Please store your .bak files in the default SQL backup directory. When you restore a database, the dialog should default to this directory, making it easy for you to locate the backup file. It is important to have full access and admin rights to this default directory in order to restore database files.
Here is the default SQL Backup directory:
C:/Program Files/Microsoft SQL Server/MSSQL10_##.SQLExpress/MSSQL/Backup
Follow the instructions below on restoring your .bak files.
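If you prefer scripting over the Management Studio dialogs, the same backup and restore can be done from a command prompt with sqlcmd. This is a rough sketch: the instance name and the ECCAD database name are placeholders for your own setup, and a relative file name lands in the default backup directory mentioned above.

sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [ECCAD] TO DISK = N'ECCAD.bak' WITH INIT"
sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE DATABASE [ECCAD] FROM DISK = N'ECCAD.bak' WITH REPLACE"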
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00461.warc.gz
|
CC-MAIN-2023-14
| 3,085 | 20 |
https://jobbqhvm.web.app/65616/96422.html
|
code
|
Python timestamp to datetime and vice-versa. In this article, you will learn to convert timestamp to datetime object and datetime object to timestamp (with the help of examples).
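For reference, the standard-library conversion in both directions looks like this (a minimal sketch with an arbitrary example timestamp):

from datetime import datetime, timezone

ts = 1559779200                                   # seconds since the Unix epoch
dt = datetime.fromtimestamp(ts, tz=timezone.utc)  # timestamp -> datetime
back = dt.timestamp()                             # datetime -> timestamp
print(dt.isoformat(), back)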
Jun 06, 2019 · TL;DR: To retrieve Bitcoin prices and data (1m klines): Sign up on Binance and/or BitMEX to get API access. Import the functions I've created for ease of use and add your API details. a. Call the function: get_all_binance("BTCUSDT", "1m", save = True) b. Call the function: get_all_bitmex("XBTUSD", "1m", save = True)
Create a new folder on your computer and extract the files in it. Open PyCharm, go to File > Open and select the directory containing the bitmex_crypto_ml_stops_limits.py and model.py files.
BitMEX is the world's most advanced P2P crypto-products trading platform and API. Trade with up to 100x leverage with only Bitcoin as collateral.
The BitMEX Connectors Python Sample Code demonstrates how to interact with BitMEX's public API. The SDK can be used to fetch market data, make trades, and create third-party clients. It provides the following: a BitMEX object wrapping the REST and WebSocket APIs.
Contract Adjustment & New UI. Q2 2019. Bot Dashboard. Q2 2020. Additional Settings. Q3 2020. Bybit & Binance Futures.
To execute an order we first have to fill some parameters.
Follow their code on GitHub. Here you can select the instrument you wish to trade, select leverage, place and cancel orders, view important information in the contract details, and see your position information. On BitMEX, the TRADING_LIMIT option defines the number of contracts to open a position with. The CCXT library is used to connect and trade with cryptocurrency/altcoin exchanges and payment processing services worldwide. It provides quick access to market data for storage, analysis, visualization, indicator development, algorithmic trading, strategy backtesting, bot programming, webshop integration, and related software engineering. I'm trying to extract a list of CSVs from BitMEX.
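For example, a minimal CCXT sketch for pulling 1m klines (the exchange, symbol, and limit are illustrative; fetch_ohlcv is CCXT's unified candle call):

import ccxt

exchange = ccxt.binance()
# each row: [timestamp_ms, open, high, low, close, volume]
candles = exchange.fetch_ohlcv("BTC/USDT", timeframe="1m", limit=500)
print(candles[-1])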
HINT: the "Search" and "Jump" functions are NOT case insensitive. No results forbitmex python example|Bityard.com "bitmex python bot|Bityard.com Copy Trade". Your search phrase was "bitmex python bot|Bityard.com Copy Trade" - Showing results for "travel". Products (3 Jun 27, 2019 Retrieving Full Historical Data for Every Cryptocurrency on Binance & Bitmex Using the Python APIs. Originally published by Peter Nistrup on haskell-bitmex-rest-0.1.0.0: Auto-generated bitmex API Client API Keys can also be created via this Python script See the API Key Documentation for more Search Results for: bitmex python example|Bityard.com Copy Trade. Search. Network.
bitmex-backtest is a Python library for backtesting with the BitMEX FX trade REST API on Python 3.6 and above.
You will receive your own affiliate link, which you can use to generate a passive income. The BitMEX referral scheme pays out over $100 million annually and is the biggest secret in crypto. NEWS 31 March 2019: AntiLiquidation.com is a BitMEX anti-liquidation tool and position calculator. This free tool will… I'm trying to send messages to a server to get answers. I've tried to use the official websocket APIs from the site, but I don't understand them or can't make them work as I wish, so I'm trying to build… C# (CSharp) BitMex BitMexAPI - 3 examples found. These are the top rated real-world C# (CSharp) examples of BitMex.BitMexAPI extracted from open source projects. You can rate examples to help us improve the quality of examples.
BitMEX has a Python REST client and a websocket client. Releases: 1.5.1 (Apr 28, 2020), 1.5 (Sep 26, 2018), 1.4 (May 8, 2018), 1.3 (Jan 23, 2018), 1.2 (Dec 14, 2017).
I need to build a Bitmex trading bot based on a RSI trading strategy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00728.warc.gz
|
CC-MAIN-2023-06
| 4,897 | 26 |
https://opencarp.org/download
|
code
|
Here you will find the software, installation guides, and our license, as well as the proper works to cite when you use openCARP results in a publication.
Information about installation of openCARP and its dependencies
Software downloads of releases of openCARP
Access to the Gitlab of the openCARP project
Wondering how to cite openCARP?
We use an academic public license (APL). Please read it carefully, especially if you plan any for-profit use.
openCARP logos to be used in your presentation
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00107.warc.gz
|
CC-MAIN-2020-24
| 483 | 7 |
http://www.ghostvillage.com/ghostcommunity/index.php?s=097f2e0819b19c660ad83bede13bf214&showtopic=30974
|
code
|
Can anyone clue me in?
Posted 11 August 2010 - 01:57 AM
Posted 11 August 2010 - 03:18 PM
One thing I can tell you -- if there is any fear associated with information (actually, really, any emotion at all on your part beyond reacting to the information)... for example: "I have a sick, nervous feeling" or "I see [in my mind's eye] this terrifying image and have an overwhelming sense of fear with it" ... as opposed to having information and then reacting emotionally: "I saw this figure, walked away and then freaked out when I realized it couldn't have been a person standing there"... it is NOT psychic input; it is your own worry or paranoia or the scary story you just read.
Psychic input has no emotion. It is what it is. You can react emotionally to the information but the information itself has no emotion attached to it. Emotional feelings are not psychic input.
Physical input is harder to pinpoint -- like you can't breathe somewhere -- you have to stop, quiet your mind and ask yourself "is this me or is this being given/shown to/put on me?" If you can manage to not panic (sometimes it's a strong, kinda scary one... especially if someone's trying to choke you or hit you or is mimicking the symptoms of a heart attack) you can always tell and you then are able to push the physical symptom away so you are relatively comfortable.
Posted 18 September 2010 - 09:14 PM
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670558.91/warc/CC-MAIN-20191120111249-20191120135249-00548.warc.gz
|
CC-MAIN-2019-47
| 1,453 | 9 |
https://ru.ifixit.com/User/3465228/Emmanuel+Wolf
|
code
|
Passenger side window won't move. Switches don't work on either side. Hi! First question on here :). The passenger side window does not roll up or down, does not make any noise, and neither...
Ответ на "Passenger side window wont move. Switches don't work on either side"Turned out the motor was seized. After connecting it directly to the battery the motor was electrically forced and started working again. Now my window works fine again! I guess the previous owner never used the window so it jammed up. For those looking at this post trying to fix their window: Test the motor, regulator, and switch. And check for a burnt fuse, broken wiring, or if your window is computer-controlled that might make more complications. Good luck!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039563095.86/warc/CC-MAIN-20210422221531-20210423011531-00578.warc.gz
|
CC-MAIN-2021-17
| 739 | 2 |
https://github.com/wesgibbs
|
code
|
Infinite Red's ir_black vim theme packaged to work with Tim Pope's pathogen plugin.
Forked from jamesgolick/trample
A Better Load Simulator
Forked from dchelimsky/rspec-tmbundle
Textmate bundle for RSpec.
Finishes a Trello card.
Forked from voxdolo/tomatoist
a pomodoro timer app
Forked from miletbaker/add_nested_fields
Rails ActionView / RJS Helper to work with dynamically adding removing partials for working with accepts_nested_attributes_for
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00373.warc.gz
|
CC-MAIN-2018-30
| 623 | 12 |
https://developer.chrome.com/docs/chromedriver?authuser=4
|
code
|
ChromeDriver is a standalone server that implements the W3C WebDriver standard. WebDriver is an open source tool built for automated testing of webapps across many browsers. Its interface allows for control and introspection of user agents locally or remotely using capabilities.
Capabilities are a language-neutral set of key-value pairs used to define the desired features and behavior of a WebDriver session. Capabilities are typically passed as an argument when creating a WebDriver instance, and can be used to specify browser settings, such as the browser name, version, and page loading strategy.
ChromeDriver extends WebDriver by adding Chromium-specific capabilities. It uses the ChromeOptions object to pass capabilities to ChromeDriver from the WebDriver API. Some Chromium-specific capabilities include the ability to install extensions, change window types, and pass command-line arguments on startup.
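For instance, a minimal Python sketch of passing such capabilities through ChromeOptions (the extension path is a placeholder, not a real file):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--window-size=1280,800")    # command-line argument on startup
options.add_extension("/path/to/extension.crx")   # placeholder path; installs an extension
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
driver.quit()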
ChromeDriver is available for Chrome on Android and Chrome on Desktop (Mac, Linux, Windows and ChromeOS).
You can view the current implementation status of the WebDriver standard here.
Latest ChromeDriver binaries
- Starting with M115 the latest Chrome + ChromeDriver releases per release channel (Stable, Beta, Dev, Canary) are available at the Chrome for Testing availability dashboard. For automated version downloading one can use the convenient JSON endpoints.
- The older releases can be found at the Downloads page.
- Getting started with ChromeDriver on Desktop (Windows, Mac, Linux)
- ChromeOptions, the capabilities of ChromeDriver
- Mobile emulation
- Security Considerations, with recommendations on keeping ChromeDriver safe
- Chrome Extension installation
- Verbose logging and performance data logging
- Chrome crashes immediately or doesn't start
- ChromeDriver crashes
- Clicking issues
- Operation Not Supported when using remote debugging
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817187.10/warc/CC-MAIN-20240418030928-20240418060928-00869.warc.gz
|
CC-MAIN-2024-18
| 1,872 | 19 |
https://dynamicsofdynamicscrm.com/2014/10/06/tips-and-tricks-a-good-practice-for-configuring-views-with-lookup-in-filter-criteria-in-dynamics-crm/
|
code
|
We often create views in Dynamics CRM and move them between environments (development to UAT, and UAT to production). Lookup GUID values are not necessarily the same between environments, so views that filter on lookups then need to be changed manually. A way around this is to configure views with text matching on name fields instead of their lookups. Such views do not depend on GUIDs and can be ported directly between environments. In the example below, the view returns all Contacts whose Account name equals XYZ and can easily be moved across environments:
Hope it helps!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646457.49/warc/CC-MAIN-20230531090221-20230531120221-00480.warc.gz
|
CC-MAIN-2023-23
| 591 | 2 |
http://distro.ibiblio.org/smeserver/contribs/rvandenaker/testing/smeserver-cups/documentation/howtos/printing-to-filtering-queues.html
|
code
|
Author: Robert van den Aker (robert2 AT dds DOT nl)
Contributors: Nigel Gent (Mac OS X section)
All filtering printer queues in CUPS accept PostScript as input. In fact, if the input is not PostScript, it will be converted to PostScript before being RIPped (put through a Raster Image Processor) to a printer-ready raster format if you have a non-PS printer, or sent straight to the printer if you have a PS printer. For practical purposes all filtering printer queues can be considered PostScript printers. This means you print to them through PostScript drivers. To control printer options you use PPD's (PostScript Printer Descriptions), just like you do with real PostScript printers. What follows are instructions to install and/or configure PS drivers and PPD's on a number of popular operating systems.
Printing to filtering queues from Linux (and other Unix) hosts works slightly differently. They can send the printable files straight to the remote filtering queue without pre-filtering them to PostScript locally.
You can either install the Adobe PostScript driver and basically follow the instructions for the other Windows versions, or you can install the cups-samba package, which contains a free PostScript driver for Windows NT4/2000/XP from the makers of CUPS, and use the cupsaddsmb command as explained below. The cupsaddsmb command exports the PostScript driver from the cups-samba package and the PPD('s) associated with your filtering CUPS queues to the Samba printer drivers directory, making them available for semi-automatic download to Windows NT4/2000/XP hosts.
[root@hostname ~]# yum install cups-samba
[root@hostname ~]# db configuration setprop smb UseClientDriver no
[root@hostname ~]# expand-template /etc/samba/smb.conf
[root@hostname ~]# service smb restart
[root@hostname ~]# cupsaddsmb -U admin -v printer1 [printer2...printerX]
or you can export all printers at once with the command line
[root@hostname ~]# cupsaddsmb -U admin -v -a
These command lines connect to the samba server as admin, so you need to supply the admin password when prompted.
Get the PostScript driver and PPD onto your Windows system by first installing the printer as a Samba printer following the instructions above, then change the printer port to http://servername:631/printers/printername.
Note that Microsoft does not seem to provide IPP software for Windows NT4. Some other companies do.
The PostScript driver in Classic Mac OS is called "LaserWriter 8". It loads most CUPS PPD's correctly, but chokes on Gimp-Print PPD's. Please see this thread on the Gimp-Print support forum for a workaround.
Note: this applies to the latest version of Mac OS X and may not apply to earlier versions.
With this method your Linux host is an IPP client only and doesn't run any print spooler/server itself.
All the Linux IPP clients that I'm aware of use libcups, the IPP library from the CUPS software distribution. Unfortunately, most of these frontends do not expose all the functionality that is provided by libcups. In particular, the ability to specify a user name and server name is not provided by most frontends. Even the frontends that ship with the CUPS 1.1 distribution (the commands lp, lpr, cancel, lprm, lpstat, etc.) do not uniformly allow the user to specify a remote server and/or user name. This is a problem because the default configuration of the CUPS server on SME Server is to require authentication from all remote clients. You can work around this problem by adding the client system to the "TrustedHosts" in the SME Server's CUPS configuration. Possibly the problem can also be solved by using NIS to duplicate users between SME Server and client systems. Alternatively, you can use GtkLP or XPP, which AFAIK are the only libcups frontends that do allow users to specify remote server and user names. Of these two, GtkLP is the most mature application. Unfortunately, configuring your Linux desktop to use GtkLP consistently across all applications as the default printing client can be a bit of a challenge.
Here's an example of a client-only configuration with a standard Ubuntu desktop installation. Assume that SME Server is 192.168.0.1 and the Ubuntu system is 192.168.0.123.
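A sketch of what that can look like, assuming stock Ubuntu defaults (libcups reads client.conf and sends everything to the named server):

# /etc/cups/client.conf on the Ubuntu host (192.168.0.123)
ServerName 192.168.0.1

After that, ordinary clients such as lp -d printername file.pdf print through the SME Server.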
In this configuration the local CUPS server hands print jobs to the remote CUPS server (the one running on your SME Server) through the IPP backend. This poses a problem for authentication. When the local server accepts a print job from the local client, it will try to submit the job to the remote backend "for" the submitting user, not "as" the submitting user. The local server can't authenticate to the remote backend for the local user. Of course, the local user could authenticate and print to the remote server directly by pointing his/her local CUPS client to the remote CUPS server, but if you want to print through the local CUPS server and use the remote CUPS server as an IPP backend I see no other solution than to disable password authentication for the selected host(s) that you want to print from in this manner. This is where the 'hidden' database property "TrustedHosts" comes in. Trusted hosts are allowed to connect to the CUPS server on SME Server without authentication. You trust these hosts to require logins and to have well-behaved users that use the lp command to submit print jobs, so that you'll at least know who the foreign printer users are.
With your local Linux host added to the TrustedHosts database property on the SME Server all that remains to be done is to ensure that the cupsd.conf for your local CUPS server contains the line "Browsing On" (or at least does not contain the line "Browsing Off" since "Browsing On" is the default). The CUPS server on your SME Server sends browse packets to all TrustedHosts. CUPS servers that receive browse packets will automatically add the remote printer queues to their list of available printers. No local configuration is required.
CUPS provides the cups-lpd daemon for serving lpd clients. There is usually no need to enable this service, but in the rare case that your site does require it, here's how.
This document is Copyright 2003-2006 by Robert van den Aker. It may be freely redistributed in its entirety provided that this copyright notice is not removed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705926946/warc/CC-MAIN-20130516120526-00087-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 6,261 | 24 |
https://replit.com/talk/share/Rain/112936
|
code
|
I tried this in Python once by doing it by hand, but it was WAY too hard, so I just did this instead.
Anyway, here you go, some satisfying rain.
Edit: kinda snow I guess?
YEEE Trending Boys (and girls) (and non binary people)
In the Northern Hemisphere, it is so cold that the rain you made became snow. My backyard is now Mt. Everest. Love the work!
If you want it to be more accurate, you could add std::cout << "\033[34m"; before printing the rain. This would just make the rain blue.
Also if you want snowflakes (Probably a better way to do it than this but here you go):
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00580.warc.gz
|
CC-MAIN-2021-43
| 576 | 9 |
http://jandjnewleaf.blogspot.com/2014/11/the-gaming-corner-ticket-to-ride.html
|
code
|
Set Up: First, you put out the beautiful board. Around the outside is a score tracker to record your current score during the game. The rest of the board is a map of the United States (with a little bit of Southern Canada and Northern Mexico) showing a selection of cities and the train routes connecting them.
Each player then takes 45 train tokens and their scoring piece of the color of their choice. Be careful to only take 45 trains as your box probably came with 2 or 3 extra trains of each color to cover for the eventual loss that is bound to happen if the game is played a lot.
Next we move onto the train cards. The gold cards are wild cards and can be used as any color. Gather the train cards, shuffle them, and deal 4 to each player. Then, 5 train cards are placed face up along side the board with the remaining train cards placed face down next to them. This makes the drawing area.
We then move onto the destination cards. Each destination card has two cities depicted on it as well as a point total in the bottom right corner. The scores range from 4 to 22. Shuffle the destination cards and deal 3 to each player. Each player may choose to discard one of these cards if they so choose. Discarded destination cards are placed on the bottom of the destination card deck which is also placed near the board. But how do you know which cards to keep? During the game, you will be claiming the train routes on the board and if, at the end of the game, you have connected the two cities on the destination cards with a continuous path of trains of your color, then you win the points on the destination card. If you kept the cards but didn't claim the routes, you lose the points.
Draw Destination Cards: Take the top three cards from the destination deck. You may then return one or two of drawn cards, but do not have to return any if you don't want to.
Draw Train Cards: Take one of the face up train cards or make a blind draw from the deck. If you take a face up card, immediately replace it with another face up card from the deck. Then if you didn't take a face up gold train card (the wild one) from one of the face up cards, you may take one more non-gold train card, either from the face up or the top card of the deck. If you draw a gold train card from the deck, it doesn't prevent you from drawing a second card.
Claim a Route: To claim a route, you need to discard a number of train cards equal to the number of spaces depicted on that route on the board. If the spaces are of a color, the turned-in train cards must all be of that color (or wild cards), but if the route is made up of grey spaces, the color of the train cards doesn't matter as long as they all match. When you claim your route, place a train token of your color on all the spaces on that route. Some routes are double-routes, with two routes connecting two cities. When claiming these, make sure to put your trains on the correct side of the double-route, as another player can use the other one. You are never allowed to use both sides of a double-route, and in two- and three-player games, no one is allowed to use the second half of the double routes.
Scoring: The only way to score points during the game is claiming routes. Depending on the length of the route, you gain a number of points. There is a handy dandy chart on the board, in the instructions and on a player guide card so you won't have to memorize this. The points gained are as follows.
A route 1 space long gains 1 point
A route 2 spaces long gains 2 points
A route 3 spaces long gains 4 points
A route 4 spaces long gains 7 points
A route 5 spaces long gains 10 points
A route 6 spaces long gains 15 points
Right after claiming your route, move your score piece along the edge of the board equal to the number of points you gain. If at the end of the game you think your score is off, you can always count up each of your routes again to make sure the points are accurate.
Ending the game: When any player ends his or her turn with 2 or less train pieces not on the board, every player (including the one who started the end game) takes one more turn. The game is then over.
If you would like to see some celebrities (Amy Dallen, Colin Ferguson, Anne Wheaton, and Wil Wheaton) play Ticket to Ride, you can watch it on the YouTubes here. After watching about 5 minutes of the actual game play, you should be able to play it yourself; the game is just that easy.
There are about a dozen other Ticket to Ride games, such as Ticket to Ride: Europe or Ticket to Ride: Nordic Countries, and each is basically the same game; the boards are just different.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526923.39/warc/CC-MAIN-20190419001419-20190419023419-00445.warc.gz
|
CC-MAIN-2019-18
| 4,601 | 18 |
https://stackoverflow.com/questions/24169333/how-can-i-emphasize-or-verbatim-quote-a-comma-in-org-mode
|
code
|
I tried to make the comma bold with *,*, but had no success. I also tried verbatim with =,=, but no success either.
You can achieve what you want by adding the following to your Emacs init file:
(setcar (nthcdr 2 org-emphasis-regexp-components) " \t\r\n\"'")
(org-set-emph-re 'org-emphasis-regexp-components org-emphasis-regexp-components)
The manual says that org-emphasis-regexp-components can be used to "fine tune what characters are allowed before and after the markup characters [...]". It is a list containing five entries. The third entry lists characters that are not allowed to immediately follow or precede markup characters. By default, the comma is one of them, so in order to successfully apply formatting to this character we have to remove it from the list of characters disallowed before or after the markup characters. This is what the call to setcar does. The purpose of the second line is to rebuild the regular expression for emphasis based on the modified version of org-emphasis-regexp-components.
There's a similar problem and I've figured out a solution.
@itsjeyd's solution is right but not 100% correct. We need an extra call to org-element--set-regexps.
The full code snippet:
(setcar (nthcdr 2 org-emphasis-regexp-components) " \t\n\r")
(custom-set-variables `(org-emphasis-alist ',org-emphasis-alist))
(org-element--set-regexps)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00479.warc.gz
|
CC-MAIN-2020-45
| 1,252 | 15 |
https://svrobo.org/job/halodi-robotics-as-remote-job-technical-lead-machine-learning/
|
code
|
Halodi Robotics is bringing robots out of factories and into the human world. We have cracked the code and engineered a safe, capable, and affordable solution (named EVE) that will be deployed in the immediate future to security, retail, and the food-packaging markets.
Inspired by innovation and serving the greater good, Halodi Robotics intends to produce thousands of humanoid robots by 2023—and that’s just the beginning. With the mission to bring humanoid robots to everyone, Halodi Robotics offers one of the most futuristic, world-changing career opportunities of our time. And with a team that enjoys being social outside of work, collaboration and support of Halodians extends well beyond the 9 to 5.
This is the place where people come to live and work to their full potential. Halodians are passionate about seeing the human-robot world come to fruition and embrace our holistic approach to building robots: all components of our robots are developed in-house, including motors, transmissions, sensors, electronics, controls, and AI. There are not many places where you can touch every aspect of such an advanced technology, but with Halodi Robotics' modern approach, nothing is off-limits.
What You’ll Get By Joining Our Team
You can shape the future of humanoid robots.
You will have the autonomy to improve all aspects of your technical skills.
You’ll join an agile and diverse, global team.
What You’ll Do
We are looking for a machine learning technical lead to assist us in establishing our machine learning infrastructures to ensure our humanoid robots are able to continuously learn and improve over time. You will be responsible for developing our continuous learning cycle through ETL (extract, transform, and load) for intelligent robotics. You will be primarily working with our software teams (including primary controls, SLAM navigation, and computer vision) and will be a key player in the future growth of our machine learning team. An ideal candidate for this role is a technical leader who has a blend of academic and professional experience within machine learning, as well as someone who has been a part of the entire product lifecycle.
Leading the R&D, testing, deployment, and continuous improvement of our machine learning infrastructure
Establishing and leading the machine learning team
Balancing research and product to deliver the highest quality, state-of-the-art experiences, while innovating through the full stack
Collaborating with cross-functional teams to help inform and influence architecture during the design phase according to requirements
A Bachelors Degree or Diploma (or equivalent industry experience) in computer vision, software, computer science, systems engineering, mechanical, electrical, or related fields
3+ years of relevant machine learning experience
Hands-on experience designing, developing, testing, and troubleshooting machine learning infrastructure
Experience with programming (Python, C/C++)
Experience with ETL, data cleaning, and data processing
Experience with machine learning frameworks (Keras, Tensorflow, Pytorch)
Technical leadership experience and aptitude
Graduate degree in related field
Experience with Java, ROS2, C#, and VR/XR
Experience with robotics, navigation, manipulation, computer vision, and/or embedded systems
Track record of deploying models to hardware systems
“With these collaborative robots–these more holistic robotics if you will–you’re trying to create a new world. You’re trying to create a human-robot world; one where interactions become the primary driver of the application.”- Nicholas, Project Director
Stay in The Loop:
Halodi Robotics is the place where abstract ideas become reality—and even faster than you’d expect.
Stay informed of all things Halodi Robotics before anyone else by joining our Talent Network. Click here to join.
Halodi Robotics is an equal opportunity employer. All qualified applicants are given consideration regardless of race, religion, colour, gender, sex, age, sexual orientation, gender identity, national origin, marital status, citizenship status, disability, veteran status, or any other protected class as provided in applicable employment laws. If you have a disability or special need that requires accommodation, please contact us at recruiting (at) halodi (dot) com.
To apply for this job please visit halodi.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00086.warc.gz
|
CC-MAIN-2021-39
| 4,388 | 30 |
https://meta.stackoverflow.com/users/2808883/hristo-georgiev
|
code
|
Top network posts
- 17 Getting ActionController::RoutingError (No route matches [OPTIONS] "/users" when trying to POST data to RAils server with AngularJS
- 8 Devise sign in/sign up in popup
- 7 Bootstrap table with Tooltip text not wrapping
- 6 How do I detect Gear VR inputs in React VR scene?
- 5 Rails 4.2 / Bootstrap 3 : trying to use font-awesome ... wo success
- View more network posts →
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00694.warc.gz
|
CC-MAIN-2020-50
| 397 | 7 |
http://math.stanford.edu/~rdhough/
|
code
|
Bob Hough's home page
Address: Department of Mathematics,
Stanford University, Stanford, CA 94305.
e-mail: rdhough at math.stanford.edu
I am a fifth-year PhD student in mathematics at Stanford, studying with Professor Soundararajan.
Areas of research interest:
- Analytic and probabilistic number theory, discrete probability
Publications and preprints:
- The distribution of the logarithm of orthogonal and symplectic L-functions.
- Zero-density estimate for modular form L-functions in weight aspect.
- The resonance method for large character sums.
- Random walks on Z/pZ with small symmetric generating sets. Preprint available on request.
- Average equidistribution of Heegner points associated to the 3-part of the class group of imaginary quadratic fields.
- Summation of a random multiplicative function on numbers having few prime factors. Math. Proc. Camb. Phil. Soc., 150 (2011), pp. 193-214.
- Tesselation of a triangle by repeated barycentric subdivision. Elec. Comm. Prob., 14 (2009).
Students in my Math 53H section can find handouts and other materials here.
Previous section handouts from my Math 51H and Math 52H sections.
- Some practice problems for Stanford's analysis qual.
Information about Stanford's Polya problem solving seminar is here and here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886946.21/warc/CC-MAIN-20180117142113-20180117162113-00487.warc.gz
|
CC-MAIN-2018-05
| 1,266 | 20 |
http://esolangs.org/wiki/SIC-1_Assembly_Language
|
code
|
SIC-1 Assembly Language
SIC-1 Assembly Language is the primary (and, currently, only) language used for programming SIC Systems's Single-Instruction Computer, Mark 1 (SIC-1). The SIC-1 is a fictional 8-bit computer used in a web-based programming game (of the same name--see #External Resources for a link) that, as its name implies, only supports a single instruction: subleq (subtract and branch if less than or equal to zero).
|subleq A B C||mem[A] = mem[A] - mem[B]; branch to C if result <= 0|
|@label:||Associates a label with the address of the following command|
|.data X||Sets the next byte of memory to a value at compile time|
Note that if the third address (C) for subleq is omitted, the address of the next instruction is used (in other words, the branch would have no noticeable effect).
Binary representation and offsets
Each subleq A B C instruction is stored as 3 consecutive addresses (each one byte): ABC.
When referring to a label, you can include an offset (specified in bytes), e.g. @label+1. See the self-modifying code example for a practical application of this.
The following predefined labels are always available:
- @MAX (252): Maximum user-modifiable address
- @IN (253): Reads a value from input (writes are ignored)
- @OUT (254): Writes a result to output (reads as zero)
- @HALT (255): Terminates the program when executed
This program negates one input value and outputs the negated value.
subleq @OUT, @IN
This program reads values, negates them, and outputs them in an infinite loop.
@loop: subleq @OUT, @IN
subleq @zero, @zero, @loop
@zero: .data 0
The sample program below reads its own compiled code and outputs it by incrementing the second address of the instruction at @loop (i.e. modifying address @loop+1).
@loop: subleq @tmp, 0 ; Second address (initially zero) will be incremented
subleq @OUT, @tmp ; Output the value
subleq @loop+1, @n_one ; Here is where the increment is performed
subleq @tmp, @tmp, @loop
@tmp: .data 0
@n_one: .data -1
- SIC-1 programming game: https://jaredkrinke.itch.io/sic-1
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00168.warc.gz
|
CC-MAIN-2023-14
| 2,023 | 22 |
https://forum.quasar-framework.org/topic/456/quasar-project-not-run-after-build/1
|
code
|
quasar project not run after build
Recently I am facing a problem with my build copy after updating the Quasar CLI to v0.6.1. My previous build copy ran smoothly with the same code. My Quasar version is v0.13.6. Can you please tell me what I should do?
s.molinari last edited by
It’s hard to help you with just this information. Can you provide any errors you are getting or what you are seeing? To fix your problem, you probably need to refactor your code to fit version 13, which btw, is at v0.13.9.
But, before you waste time on that, v0.14 is coming out soon, and you’ll probably want to refactor to that version, as it is the future of Quasar and quite different than 0.13.9.
Until then, my best bet is, you’ll need to stick to v0.6.1.
@s.molinari actually i didn’t get any error. the problem is after build my project when i serve the files from dist folder to my apache server its not run. the project is run in dev mode. the only thing i done is update the quasar cli to 0.6.1 from v0.5 something. and my quasar version is 0.13.6 .
is there any version conflict happened? koz before update the cli the previous build copy run well. or i have to updated the quasar v0.13.6 to v0.13.9 .
i am in trouble because i have show my project by today to my client. plz help.
rstoenescu Admin last edited by
@Sujan-Dev Upgrading the CLI shouldn't break anything. The build process is defined by the starter kit that you are using (see the /build folder in your project's folder). You say that you build and then serve the /dist folder from an Apache server and it doesn't work. If it worked before upgrading the CLI, it should work just the same after upgrading it. Nothing changed regarding the build. The problem is elsewhere. I am a little confused, as you then say "the project is run in dev mode". Are you serving the distributable (/dist) or are you running the development server? In either situation, upgrading the CLI won't break anything. It may be your Apache at fault. Have you tried the quasar serve command after making a build? It spins up an ad-hoc web server pointed at your distributable, so basically the same thing as Apache.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00007.warc.gz
|
CC-MAIN-2023-14
| 2,136 | 15 |
https://www.waze.com/forum/viewtopic.php?p=33498
|
code
|
damaniac wrote: So it doesn't make sense when Waze stops working when you don't use it.
damaniac wrote: I would say that your main priorities should be stability and better handling of network errors.
damaniac wrote: Android users are able to press the "exit" button; they are smart.
StefanSarzio wrote: From the glorious iOS announcement: "While some simpler implementations require the user to manually disconnect the app, waze has a different approach to background processing and shut down. When waze goes into the background, algorithms are applied to detect whether or not the device is in motion."
There's a simple and easy solution for exactly this problem on Android (without any fussing with 'smart' algorithms making educated guesses), allowing you to shut down Waze in a clean way at precisely the right time; yet you still didn't get that right, although that solution was available from the beginning and was requested several times.
In case you see Waze on Android as some kind of second-class citizen: why don't you just say that out loud and send everybody over to Google Navigation?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123549.87/warc/CC-MAIN-20170423031203-00559-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,143 | 7 |
https://vkur1.se/en/besplatno-skachat-vkurse-versiya-6-1-1-295-05-05-20/
|
code
|
Download VkurSe Version 6.1.1-295
- Redesigned application interface
- New light style
- Unnecessary functions that do not need to be configured are removed from the main window (now they are in the "For Experts" section)
- Setup Wizard - On first launch, it helps to configure the application and enable the necessary functions on the device
- Fixed silent mode
- Fixed saving call recording parameters when sending a setup command
- Minor corrections regarding the operation of the application on Android 9-10
- Interception of media files from Telegram and Whatsapp is enabled by default.
Attention! The following options are enabled by default: call recording, photo archiving, location transfer every 10 minutes, a photo on each unlock, a photo on each failed unlock attempt, added contacts, installed applications, keystroke archiving, and notification archiving.
All data arrives in your account dashboard in the appropriate sections.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305052.56/warc/CC-MAIN-20220127012750-20220127042750-00527.warc.gz
|
CC-MAIN-2022-05
| 931 | 11 |
http://sexyactionplanet.tumblr.com/tagged/human
|
code
|
A tiny coppercheek frog I found on the grass at our holiday house on Stradbroke Island
It may appear unintuitive that special toilets could benefit hippos and other wetland species, but the Center for Rural Empowerment and the Environment (CREE) has proven the unique benefits of new toilets in the Dunga Wetlands on Lake Victoria’s Kenyan side. By building ecologically-sanitary (eco-san) toilets, CREE has managed to alleviate some of the conflict that has cropped up between hippos and humans for space.
Seal meets girl. Seal falls in love with girl. The end.
Thai conservationist Sangduen Chailert shares a moment with one of the many animals she has rescued. She operates two elephant healing and rehabilitation centers in Thailand.
Read more: http://www.time.com/time/photogallery/0,29307,1722955,00.html#ixzz1hmNBhyKp
"The interaction between humans and companion animals has a profound impact on both species. For humans, the presence of a companion animal can result in improved physical and psychological well-being. Contact with animals has been associated with greater happiness, less stress, reduced blood pressure, lower coronary risk factors, lower rates of psychiatric disorders, particularly depression, and the enhancement of social activities."
- The University of Queensland
SETBO VILLAGE, Cambodia - Being responsible parents, rice farmer Khuorn Sam Ol and his wife might not be expected to be keen on having their child play with a 16-foot-long, 220-pound snake. Yet they are unflustered that their 7-year-old son, Uorn Sambath, regularly sleeps in the massive coil of a female python, rides the reptile, kisses it and even pats it down with baby powder. “There is a special bond between them,” Khuorn Sam Ol explained. “My son played with the snake when he was still learning to crawl. They used to sleep together in a cradle.” Wildlife and police officials used to come by to try to take the snake away and put it in a zoo. But they relented after seeing Uorn Sambath lovingly cuddling the reptile.
“I will not let anyone take her away from me, either. I love her very much,” declared his son, Uorn Sambath, kissing his pet on the head.
A green sea turtle recently washed up dead on a New South Wales beach in Australia - it was found to have over 300 pieces of plastic debris lodged in its guts. This is a new and depressing record.
"Unfortunately we counted 317 pieces of plastic from the lower intestine of the turtle and there is no question what caused the death of this animal," said Rochelle Ferris, General Manager of Australian Seabird Rescue.
Plastics floating in the ocean can resemble small fish, squid and jellyfish and other marine creatures which are also sea turtle food. According to a recent study, around 36 percent of sea turtles are affected by marine debris, which is scary considering the various other human pressures they face on top of this. Trawling, hunting, long-line fishing, egg poaching… the list goes on. All species of marine turtle are in serious, serious trouble - except for one which has insufficient data.
Image: The famous photograph of the contents of a dead sea turtle's stomach. It included plastic, glass and many other forms of human rubbish.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444209.14/warc/CC-MAIN-20141017005724-00199-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 3,226 | 13 |