Dataset columns:
- url: string (length 13 – 4.35k)
- tag: string (1 class)
- text: string (length 109 – 628k)
- file_path: string (length 109 – 155)
- dump: string (96 classes)
- file_size_in_byte: int64 (112 – 630k)
- line_count: int64 (1 – 3.76k)
http://www.CMStatistics.org/RegistrationsV2/COMPSTAT2022/viewSubmission.php?in=357&token=4nrs8o84sq8r865roonorpo2pn0nq16n
code
Title: Asymptotic properties of pseudo-ML estimators based on covariance approximations Authors: Reinhard Furrer - University of Zurich (Switzerland) [presenting] Michael Hediger - University of Zurich (Switzerland) Abstract: Maximum likelihood (ML) estimators for covariance parameters are highly popular in inference for random fields. Over the years, dataset sizes have steadily increased, such that ML approaches can become quite expensive in terms of computational resources. Several covariance approximation approaches have been proposed (e.g., tapering, direct covariance misspecification, low-rank approximation), each with various advantages and disadvantages. We present an approach based on covariance function approximations that are not necessarily positive definite functions. More specifically, for a zero-mean Gaussian random field with a parametric covariance function, we introduce a new notion of likelihood approximations (termed pseudo-likelihood functions), which complements the covariance tapering approach. Pseudo-likelihood functions are based on direct functional approximations of the presumed covariance function. We show that, under accessible conditions on the presumed covariance function and the covariance approximations, estimators based on pseudo-likelihood functions preserve consistency and asymptotic normality within an increasing-domain asymptotic framework.
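The idea of replacing the presumed covariance with a direct functional approximation can be sketched numerically. This is a minimal illustration, not the authors' method: the exponential covariance, the hard cutoff, and all parameter values are assumptions chosen only to show a plug-in "pseudo" log-likelihood next to the exact one.

```python
import numpy as np

def gaussian_loglik(y, cov):
    """Zero-mean Gaussian log-likelihood of observations y under covariance cov."""
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, y)
    n = len(y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ alpha)

# Exponential covariance on a 1-D grid (illustrative choice, not from the talk)
x = np.linspace(0, 10, 200)
d = np.abs(x[:, None] - x[None, :])  # pairwise distances
theta = 1.5                          # range parameter
cov_true = np.exp(-d / theta)

# Crude approximation: zero out covariances beyond a cutoff radius.
# The resulting function need not be positive definite, which is exactly
# the situation the pseudo-likelihood framework is meant to cover.
cov_approx = np.where(d < 3.0, cov_true, 0.0)

rng = np.random.default_rng(0)
y = rng.multivariate_normal(np.zeros(len(x)), cov_true)

print(gaussian_loglik(y, cov_true))    # exact log-likelihood
print(gaussian_loglik(y, cov_approx))  # pseudo-log-likelihood
```

Maximizing the second quantity over the covariance parameters gives a pseudo-ML estimator in the spirit of the abstract; the paper's contribution is showing when such estimators remain consistent and asymptotically normal.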
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00465.warc.gz
CC-MAIN-2022-33
1,393
4
https://techmeabroad.com/offers/full-stack-developer-at-itnig
code
Posted on Dec 01 2017 (about 6 years ago) At Camaloon we're looking for a new developer to help us bring our customization platform to the next level. Even though Camaloon is not a small startup anymore, we're still trying to stay nimble in terms of our processes, with a minimum of bureaucracy and legacy habits. Current tech challenges: A lot of e-commerce websites sell finished goods. Camaloon is different: we allow our customers to customize things. That is where all the challenges start: from defining data models and domain concepts to designing and implementing user-facing parts like "ateliers", and even sometimes optimizing things for our factory. Because, you know, we have our own factory. What you will be doing day-to-day if you join: What we are looking for in you: We don't want to add any formal requirements (like 5-7 years of experience with whatever), just come talk to us and see if Camaloon is a good fit for you! But let's be more specific about our tech. Our tech stack: We use Ruby on Rails on the backend, with PostgreSQL as the main data storage. The frontend is currently split into two parts: Obviously, every day the share of the first one shrinks and the second one grows. But we want to be honest in describing the position, so it is true that sometimes you will need to touch the older stack. What you will find in us: Itnig is a venture builder with rapidly growing startups such as Factorial, Camaloon, Quipu, Playfulbet, Parkimeter, and Gymforless. It is a great place to learn from ridiculously talented professionals, where we have interesting discussions and debate different solutions to similar problems across all the startups. Located in the heart of 22@, Itnig organizes activities to bridge the different teams, exchange know-how, and bring in external input. Camaloon is the first of Itnig's growing startups: an e-commerce site for custom products (textile, paper, and many more) that serves all the promotional needs of our B2B customers.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.45/warc/CC-MAIN-20240222193722-20240222223722-00502.warc.gz
CC-MAIN-2024-10
2,002
14
https://chat.pantsbuild.org/t/9730322/i-m-just-a-little-confused-to-how-to-start-debugging-this-or
code
average-australia-85137 — 03/05/2021, 3:00 PM
on the new release candidate it still takes about 50% longer to run the tests - is that expected?

(remit) [nate@ragin-cajun remit-srv]$ time pytest tests/test_aml_actions.py
=================== test session starts ===================
platform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/nate/wave/remit-srv
plugins: celery-4.4.7, requests-mock-1.8.0, Faker-4.1.1, cov-2.10.1
collected 37 items
tests/test_aml_actions.py ..................................... [100%]
=================== 37 passed in 1.46s ===================
real 0m6.757s
user 0m5.114s
sys 0m1.107s

(remit) [nate@ragin-cajun remit-srv]$ time ./pants test --force tests
11:30:27.95 [INFO] Completed: test - tests/test_aml_actions.py succeeded.
✓ tests/test_aml_actions.py succeeded.
real 0m9.105s
user 0m0.437s
sys 0m0.027s

pytest -s tests/test_aml_actions.py
37 passed in 1.46s

./pants test --output=all --force tests/test_aml_actions.py

I'm not sure how to determine how pants is running pytest differently?

37 passed, 86 warnings in 2.13s

hundreds-father-404 — 03/05/2021, 4:39 PM
Is this consistent? Are you using pantsd (on by default)? I don't think I'd expect that much overhead, but some known remaining places of Pants overhead:
1. ~0.6s to start up pantsd at first
2. iiuc, the first time you create a new Pex, like when it says "creating", there is some time involved to unzip it, but then it's cached
3. .pyc files are not cached, so they must be recompiled every time
4. general Pants overhead of things like determining dependencies
Running this command a second time with pantsd enabled should nullify 1 and 4. Regardless of pantsd, running a second time should nullify 2. Leaving the .pyc files as a likely culprit for why this is still slower

witty-crayon-22786 — 03/05/2021, 5:56 PM

hundreds-father-404 — 03/05/2021, 5:57 PM

witty-crayon-22786 — 03/05/2021, 5:58 PM
and then seeing how long the script in the captured directory (it will be logged) takes to run would be helpful. Will render more of the log, which will help point to the process runtime vs overhead

hundreds-father-404 — 03/05/2021, 6:01 PM
can be helpful to print "starting" messages so you can see the time between "starting" and "completed"

witty-crayon-22786 — 03/05/2021, 6:13 PM

average-australia-85137 — 03/05/2021, 6:48 PM
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506646.94/warc/CC-MAIN-20230924123403-20230924153403-00574.warc.gz
CC-MAIN-2023-40
2,683
21
https://gitlab.torproject.org/tpo/community/l10n/-/wikis/Localization-for-reviewers
code
Let's start with some best practices for reviewing other people's work:
- Remember that it is hard to receive negative feedback. Be kind.
- Be patient. We are all learning here.
- Try to explain why some options are better than others.
- Share the resources you use for translation (dictionaries, glossaries, etc.)

A good review is done while looking at the actual website or application you are reviewing. For example, most of the Tor Project websites have a preview version that publishes languages straight from Transifex, even when they are incomplete. See the 'preview' links of each resource in the resource list.

The most common errors in our website translations involve links. We have a page that lists the currently broken website links. This page is refreshed once a day. Please check your language's strings for problems. If you are not sure how to review links, please read this explanation about links in markdown:
- localize
- localize_link
- no_localize_link: https://www.transifex.com/otf/tor-project-support-community-portal/translate/#ar/$/163988578?q=tags%3Alocalize_link - some of the links can be changed to the proper locale; for example https://community.torproject.org/localization/becoming-tor-translator/ can be translated to Spanish as https://community.torproject.org/es/localization/becoming-tor-translator/ . Some other links don't have a translation and should stay as in the source string.
- Notranslate: https://www.transifex.com/otf/tor-project-support-community-portal/translate/#ar/$/158007399?q=tags%3Anotranslate - we try to tag all the commands and other non-translatable strings that make it into the translation files. Many users translate, for example, extracts of the tor logs, but the logs are always in English, so we should leave them as is. You can copy the content from the source file when you see this tag.
- translate_alt: https://www.transifex.com/otf/tor-project-support-community-portal/translate/#ar/$/158007399?q=tags%3Atranslate_alt - images' alt attribute usually goes untranslated because of the Transifex interface, but that is bad for accessibility, because users will not understand what the image is about.

There are also some tags to group specific topics or parts of the documentation. These tags cover sections of our websites. They are:
- glossary - the Tor Glossary: http://support.torproject.org/glossary
- localization - the l10n section of the community portal: https://community.torproject.org/localization/
- onion-services - the onion services section of the community portal: https://community.torproject.org/onion-services/
- outreach - the outreach section of the community portal: https://community.torproject.org/outreach/
- relay-slides - a presentation about tor relays: https://community.torproject.org/training/resources/tor-relay-workshop/
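The locale rewriting described for localizable links can be sketched as follows. This is a minimal illustration only: the function name and the set of translated locales are assumptions, not part of the Tor documentation.

```python
from urllib.parse import urlparse, urlunparse

# Locales for which a translated page tree exists (illustrative list)
TRANSLATED_LOCALES = {"es", "fr", "de"}

def localize_link(url: str, locale: str) -> str:
    """Insert the locale segment after the host, mirroring the
    /localization/... -> /es/localization/... pattern described above."""
    if locale not in TRANSLATED_LOCALES:
        return url  # no translation available: keep the source-string link as-is
    parts = urlparse(url)
    return urlunparse(parts._replace(path=f"/{locale}{parts.path}"))

print(localize_link("https://community.torproject.org/localization/becoming-tor-translator/", "es"))
# https://community.torproject.org/es/localization/becoming-tor-translator/
```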
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00362.warc.gz
CC-MAIN-2021-39
2,775
17
http://napraia.blogs.ua.sapo.pt/2011/05/
code
CETAC.MEDIA (www.cetacmedia.org) is a research centre in Communication Technologies and Sciences jointly held by the Universities of Aveiro and Porto, Portugal. We are seeking researchers holding a Doctorate degree in relevant fields who are interested in future Post-doc grant positions, to work on research activities in the areas covered by the research unit: Information Representation and Organization; Informational Behaviour; Communication Processes in New Media; Design and Applications of Participatory Media. Candidates with relevant research experience in Information Science, Communication Technologies, New Media (mobile, interactive TV), Cyberculture, Usability and Evaluation Studies are very welcome. Interest should be communicated by email to Prof. Fernando Ramos ([email protected]) by June 13, 2011, including a detailed academic and professional curriculum, specific research interests and an indication of availability. Currently the Portuguese Science and Technology Foundation (FCT) has opened an application for post-doc grants (http://alfa.fct.mctes.pt/apoios/bolsas/c
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189734.17/warc/CC-MAIN-20170322212949-00021-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,093
5
https://hpi.de/en/plattner/projects/project-archive/spatio-temporal-data-analysis.html
code
In recent years, rapid advances in location-acquisition technologies have led to large amounts of time-stamped location data. Positioning technologies such as Global Positioning System (GPS)-based, communication network-based (e.g. 4G or Wi-Fi), and proximity-based (e.g. Radio Frequency Identification) systems enable the tracking of various moving objects, such as vehicles, people, and natural phenomena. A trajectory is represented by a series of chronologically ordered sampling points. Each sampling point contains spatial information, represented by a multidimensional coordinate in a geographical space, and temporal information, represented by a timestamp. Additionally, an object identifier assigns each sampling point to a specific moving object and the corresponding trajectory. The duration and sampling rate depend on the application. The trajectory data is collected from various moving objects equipped with sensors using the location-acquisition technologies already mentioned. To gather insights for different applications, the trajectory data has to be processed. This process can be classified into four layers: preprocessing, data management, query processing, and data mining. Which steps of the different layers have to be performed during the trajectory mining process depends strongly on the requirements of the application and on the collected data. The preprocessing step attempts to improve the data quality. Data management tackles the topic of storing large-scale trajectory data in an efficient and scalable manner. Query processing focuses on retrieving appropriate data from the underlying storage system and on providing trajectory-based metrics for the final layer of the framework, data mining, which comprises several important mining techniques on spatio-temporal data.
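The sampling-point structure described above (coordinate, timestamp, object identifier) can be sketched as a small data type. The field names and values are illustrative, not taken from the project:

```python
from dataclasses import dataclass

@dataclass
class SamplingPoint:
    object_id: str    # assigns the point to a specific moving object
    timestamp: float  # temporal information (e.g. Unix time)
    lat: float        # spatial information: coordinate in geographical space
    lon: float

# A trajectory is a chronologically ordered series of sampling points
# for one object, so sorting by timestamp reconstructs it.
trajectory = sorted(
    [
        SamplingPoint("vehicle-1", 1_700_000_060.0, 52.394, 13.133),
        SamplingPoint("vehicle-1", 1_700_000_000.0, 52.393, 13.131),
    ],
    key=lambda p: p.timestamp,
)
print([p.timestamp for p in trajectory])
```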
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818067.32/warc/CC-MAIN-20240421225303-20240422015303-00848.warc.gz
CC-MAIN-2024-18
1,814
2
https://forums.askmrrobot.com/t/junk-finder-duplicate-disregards-corruption/8429
code
When looking for junk, it appears that Mr. Robot considers two items with the same name, one with corruption and one without, as duplicates, which I'd wager is unwanted. I tried to get a snapshot ID from the junk finder but got:

Error converting value "wib" to type 'TeamRobot.Wow.OptimizerActions'. Path 'Action', line 1, position 38. Ticket Number: 1720959f509f43bb8a4e941d5be0abbd If you need further assistance, please [contact technical support](https://www.askmrrobot.com/contact?ticket=1720959f509f43bb8a4e941d5be0abbd).

Here is a snapshot ID for BiB instead
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401601278.97/warc/CC-MAIN-20200928135709-20200928165709-00179.warc.gz
CC-MAIN-2020-40
575
4
https://electronics.stackexchange.com/questions/104849/should-ac-voltage-supplied-to-an-external-device-through-a-relay-be-electrically
code
Used the way you describe, a relay does not provide isolation between the mains and the device being powered. It does provide isolation between whatever is providing the control signal (to open and close the relay) and the power circuit (assuming there are no other connections between them). You can isolate the powered device from the mains using a transformer, if that is part of your system requirements. What considerations determine whether the 120 VAC connected to the relay output needs to be isolated from the primary input power? If the device being powered does not have any specific safety features, it would be a good idea to isolate its power from ground. The reason for isolation is that the neutral wire of the mains is tied at some point to earth ground. If the user contacts the hot wire (or some part that is connected to hot by a fault), a hazardous return path could be made through the user's body. Safety isolation breaks this path. However, the load device might already be designed with its own safety features (such as isolation or double-walled shielding) to allow it to be powered from mains, and in that case you wouldn't necessarily have to provide isolation in your switching device. You would want to make sure the cable between the switch and the load is routed with due regard to safety (for example, through earthed conduit). Also remember that, given that the supply voltage is 120 V, isolation alone is not enough to ensure safety.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668529.43/warc/CC-MAIN-20191114154802-20191114182802-00218.warc.gz
CC-MAIN-2019-47
1,471
7
https://community.wolfram.com/groups/-/m/t/2443748?sortMsg=Likes
code
I think the 109 values you call "tests" are usually called the validation set, but this is a matter of nomenclature. In any case, I am surprised this can work at all. I've always thought market prices are more or less random walks. In fact, I'm skeptical about the very possibility of predicting them from a theoretical point of view, based on the following reasoning. What would happen if such models turned out to be really efficient at predicting the price, and if such a model were made public? Wouldn't everyone eventually use it, making the prediction self-fulfilling? In fact, there could be a positive feedback loop: the model would predict price fluctuations that would be amplified by speculators. In the end the model itself would direct the market, and the price would become a loose degree of freedom.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00616.warc.gz
CC-MAIN-2023-14
812
4
http://www.1-script.com/forums/php/pear-command-line-not-working-142812-.htm
code
pear command line not working
kurt krueckeberg — April 6, 2012, 2:10 pm

I have a Virtual Private Server. I have installed php5 and php-pear. When I go to install a pear package from the command line, or if I attempt to upgrade packages, for example

# sudo pear upgrade-all

pear always gets a "could not download" message. The command above reports:

Could not download from "http://pear.php.net/get/Structures_Graph-1.0.4.tgz", cannot download

I tried purging php-pear and reinstalling. I tried Debian instead of Ubuntu, but got the same message.

Re: pear command line not working

Works fine here on Debian squeeze. Does it still fail (temporary problem at the server)? Maybe a firewall or routing trouble? Can you get to http://pear.php.net from the failing VPS, i.e. with a web browser? I'd suggest checking the Ubuntu mailing lists to diagnose it as a possible network problem.

JDS Computer Training Corp.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721027.15/warc/CC-MAIN-20161020183841-00524-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
1,138
26
http://stackoverflow.com/questions/10413674/hadoop-flume-log4j-configuration/10421740
code
If you run a hadoop flume node, by default it generates logs under /var/log/flume using log4j. The files will look like According to the flume user guide here, the only way to change the flume log configuration is via flume-daemon.sh, which runs the flume node using the Flume environment variables, like:

export FLUME_LOGFILE=flume-$FLUME_IDENT_STRING-$command-$HOSTNAME.log
export FLUME_ROOT_LOGGER="INFO,DRFA"
export ZOOKEEPER_ROOT_LOGGER="INFO,zookeeper"
export WATCHDOG_ROOT_LOGGER="INFO,watchdog"

The questions are:
- If I want to change the log level from INFO to DEBUG, is this the only place to do it?
- Is there a configuration file somewhere I can do this in?
- What if I want to set some packages' log level to DEBUG while others stay at INFO?
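For the last question, the standard log4j 1.x mechanism (generic log4j, an assumption on my part rather than anything stated in the Flume guide) is per-logger levels in log4j.properties: the root logger keeps one level while named package loggers override it.

```properties
# Root level applies to everything not overridden below
log4j.rootLogger=INFO,DRFA

# Turn one package up to DEBUG while everything else stays at INFO
# (package name here is illustrative)
log4j.logger.com.cloudera.flume.agent=DEBUG
```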
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645171365.48/warc/CC-MAIN-20150827031251-00049-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
741
7
https://tech.forums.softwareag.com/t/error-no-ocijdbc10-in-java-library-path-while-setting-up-a-jdbc-connection/178712
code
I know this question has been asked several times, but I really haven't found any helpful answers. I am trying to set up a JDBC connection with an OCI driver to connect to an Oracle 11g database. The error that comes up is: no ocijdbc10 in java.library.path. I am trying to use the Oracle Database 11g Release 2 (18.104.22.168) JDBC OCI drivers. I have set up the environment path variable. The curious thing is that if I replace the jar files in the path with the version 10 JDBC driver, it works without any error. Please let me know what I am doing wrong here.
- webMethods Integration Server - 9.0
- JDBC Adapter - 6.5
- Oracle 11g R2 Enterprise
- Java Version - 1.7.0_25
- JDBC driver version - 22.214.171.124
Thanks in advance,
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00390.warc.gz
CC-MAIN-2020-40
723
7
https://services.addons.thunderbird.net/EN-uS/thunderbird/addon/lookout-fix-version/reviews/1164800/
code
Rated 5 out of 5 stars

Please note that this version shows a menu item in the tools menu of TB78 that reads "Your localized menuitem". When clicked, it does nothing... This review is for a previous version of the add-on (3.0.5).

Thanks for pointing that out, I've fixed the issue for the next release. If you notice any other issues, please report them on Github via the support site link
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00066.warc.gz
CC-MAIN-2021-25
386
3
http://windowssecrets.com/forums/showthread.php/28536-SMTP-time-out-running-OE-XP-IIS-%28Win-XP-Pro%29
code
2002-08-28, 01:26

SMTP time out - running OE XP & IIS (Win XP Pro)

A lot of times I am unable to send messages because Outlook encounters a time out error: Task 'e-mail address - Sending' reported error (0x8004210B): 'The operation timed out waiting for a response from the sending (SMTP) server. If you continue to receive this message, contact your server administrator or Internet service provider (ISP).' I am running Outlook XP under Win XP Pro and IIS. However, if my memory serves me right, I had the same problem under Win2000 Pro (and Outlook XP). When I stop and restart the SMTP server, the message(s) are usually sent (i.e. they leave my Outbox and are relayed via my SMTP server). However, sometimes only a few messages are sent and I have to stop and restart again to get the rest to go out. It doesn't seem to be size-related. At times a message that previously got stuck will suddenly be sent out without any action on my part. Other times I can get it to go out by opening it from the Outbox and hitting the send button again. Does anyone have any ideas?
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189734.17/warc/CC-MAIN-20170322212949-00607-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,212
12
https://codereview.stackexchange.com/questions/254587/generating-binary-sequences-without-repetition
code
I am trying to generate sequences containing only 0's and 1's. I have written the following code, and it works.

import numpy as np

batch = 1000
dim = 32

while 1:
    is_same = False
    seq = np.random.randint(0, 2, [batch, dim])
    for i in range(batch):
        for j in range(i + 1, batch):
            if np.array_equal(seq[i], seq[j]):
                is_same = True
    if is_same:
        continue
    else:
        break

The batch variable is in the thousands. The loop above takes about 30 seconds to complete. This is the data generation part of another for loop that runs for about 500 iterations and is therefore extremely slow. Is there a faster way to generate this list of sequences without repetition? Thanks. The desired result is a collection of batch_size number of sequences, each of length dim, containing only 0s and 1s, such that no two sequences in the collection are the same.
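One common vectorized alternative (a sketch I'm adding, not something from the original thread) replaces the quadratic Python loops with np.unique over rows, which deduplicates the whole batch in a single pass:

```python
import numpy as np

def unique_binary_sequences(batch, dim, rng=None):
    """Draw a batch x dim matrix of 0/1 sequences, redrawing until all rows differ."""
    rng = np.random.default_rng(rng)
    while True:
        seq = rng.integers(0, 2, size=(batch, dim))
        # np.unique with axis=0 treats each row as one item; if no rows were
        # merged, every sequence in the batch is distinct.
        if len(np.unique(seq, axis=0)) == batch:
            return seq

seq = unique_binary_sequences(1000, 32, rng=0)
print(seq.shape)  # (1000, 32)
```

With dim=32 there are 2^32 possible sequences, so collisions among 1000 rows are rare and the loop almost always succeeds on the first draw.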
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587770.37/warc/CC-MAIN-20211025220214-20211026010214-00530.warc.gz
CC-MAIN-2021-43
807
9
http://www.chibios.com/forum/viewtopic.php?f=3&t=5505&view=print
code
Giovanni wrote: I need to buy one of those NAS too, it would be nice to have something running a Linux instance so I can use rsync and also have a subversion server at home. I have a Synology DS1511+; it's gnu/linux based and you can add Synology-maintained packages; svn is part of the free offering. You can even add third-party packages from the entware ecosystem. I don't know other brands like QNAP, but they are probably Linux based too. Before that, I had a dedicated PC to do the job, but NAS are more robust: they are built to last, the hard disks and electronics are under temperature control, and they are very easy to maintain. There is nothing to do; the NAS automatically downloads updates and applies them when no client is connected. When a disk fails, there is nothing to dismantle: the disks are in bays that can be extracted without tools and without stopping the device. And yes, they should be powered from a UPS. I have linked my UPS and NAS with a USB cable, and the NAS immediately recognised the UPS and is programmed to shut down properly when the battery goes below 30%. You can do the same with a Debian-based dedicated PC, but you will have to fight a little bit with configuration files.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400226381.66/warc/CC-MAIN-20200925115553-20200925145553-00629.warc.gz
CC-MAIN-2020-40
1,169
5
http://stackoverflow.com/questions/4432998/how-do-i-have-a-picture-act-as-a-game-character-on-top-of-a-canvas?answertab=votes
code
I have a maze game in Java I'm doing for a class. The maze is drawn on a canvas in an applet. I want another picture provided to me (of a statue-like thing) to act as a game character that can be moved around the maze with the arrow keys. So how do I place an object on a canvas that can move around and be controlled, etc.? At the moment, the only way I could get my statue onto the maze was by copying the pixels of the picture onto the maze, so it's just part of the background now... Please help me! I've been posting everywhere looking for help, to no avail.

Above the ContentPane there is a GlassPane. That is the right place to add your statue picture without messing with the background.
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121985.80/warc/CC-MAIN-20160428161521-00097-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
721
2
https://worldbuilding.stackexchange.com/questions/170920/what-kind-of-climates-and-biomes-would-exist-on-the-continents-a-tidally-locked
code
The more I think, the more I have a hunch that a tidally locked planet with an ice-free ocean on the sunny side is impossible. I know that is not what you asked, but the question's comment section is not enough to detail my points.

It's unlikely that tidal locking would leave your low-density ocean water on the sunny side - it doesn't make sense for the lowest-altitude, lighter side of the planet to be attracted more strongly by the star. At best, you can handwave a heavy asteroid or a former moon that had a "low speed collision" with the planet so that it sorta-sunk-in-the-mantle-but-didn't-reach-the-core, leaving the planet's mass-center askew, but this is likely to require a good bulge of rock straight in the middle of your ocean.

Then, a tidally locked planet is unlikely to have an active core - if it did, the viscous friction between a slowly rotating, tidally locked crust and a fast-rotating core would not last long enough for life to evolve. With a dead core, the magnetosphere will be weak (or non-existent). Long term, the atmosphere will be blown away by the solar wind; except that I now realize that's not the worst thing to happen to the atmosphere...

... the "dark side" will gobble your atmosphere fast - at the temperatures of cosmic space, it will act as a cold trap for the gases in the atmosphere. So, what gas is likely to freeze solid on the dark side first? I bet it will be the water vapor. Now, either your ocean is small - then it will end up no-sunny-ocean-all-dark-side-ice fast - or the ocean is massive - but then it will move enough mass to the dark side, which will become heavier than the sunny side. A dead core makes things even harder - there's no chance to redistribute the weight by isostasy. Having the heavy side farther from the attractor than the light side is not a stable configuration; a tidally locked body will minimize its potential energy by turning its heavy side towards the attractor.

So... before getting to "how's life on that planet", my question is "how do you explain the planet in the first place"?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00018.warc.gz
CC-MAIN-2024-18
2,065
8
https://forum.duplicati.com/t/fedora-31-and-duplicati/8420
code
Tried upgrading and it errors out… and when I set up a VM with Fedora 31 installed and try to install duplicati, I get:

Problem: conflicting requests - nothing provides mono(appindicator-sharp) needed by duplicati-184.108.40.206-220.127.116.11_canary_20191105.noarch

So is it known that duplicati doesn't work with Fedora 31? Or at least with the version of mono provided there?

Apparently it's a bug in libappindicator in F31: 1768157 – libappindicator-sharp-12.10.0-25.fc31 no longer provides mono(appindicator-sharp). I got it working by installing all the other dependencies (also making sure that libappindicator-sharp is installed) and then forcing the install with "rpm -i --nodeps …"
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667442.36/warc/CC-MAIN-20191113215021-20191114003021-00240.warc.gz
CC-MAIN-2019-47
710
5
https://news.ycombinator.com/item?id=3291640
code
Newspaper journalism, I believe, can be forced. IMO it's like high school essay writing: you find your source and you just learn to churn. With some newspapers this can be so bad that you notice the 'filler' attempts, where about 2/3 of the way through they go into "summary" mode and simply pad the ending of the article with the exact same info they had in the first 1/3.

By 'from experience' I mean I've worked as a reviewer, I've got my own personal blog (one of my pieces actually hit the front page of HN back in 1Q of 2011 IIRC, and a few have popped up other places) and I'm now pushing through for a novel - I've had one short story published and a lot of editor comments (which is great; I've never received a form rejection letter, even from places that are notorious for them; my problem is that with a short story I see little point in struggling to edit it on the chance someone might say yes, when I might as well learn from my mistake and write something else, because there's always the chance a story will grab an editor and they'll say 'hell, I can fix the mistakes' - and having worked as a reviewer I trust editors to fix problems I don't know are problems).

Like the guy who posted the automated sports writer, it's not difficult to take the stats and say "Campbell scored a last minute goal winning the game" when Campbell was the last person to score and it happened in the last minute of play. It's merely filtering data and rewriting a standard comment. It's not far from news of a house fire: did the house burn down? yes/no; if no, make 'devastating' comment. Were people caught inside? yes/no; if yes, did they survive? yes/no; if no, make 'tragedy' comment; if yes, did they escape? yes/no; if yes, make 'valiant escape' comment / if no, make 'heroic rescue' comment. It's quite different when you have to write 200 words from a basic formula with 20 keywords, compared to writing 80,000 words from a basic formula with 20 keywords.

Yes, Star Wars and Harry Potter might share the same basic principles (orphan, living with aunt and uncle, special powers, special connection to the main antagonist). However, I never fail to be amused when someone says it's unoriginal or a rip-off, while those same people will read article after article on their sports teams and not think they're ripped off, when the articles are probably written by an intern in a coat closet switching words on a template. But simply Vader being or not being Luke's father would have made a major story diversion (i.e. Luke wouldn't have gone to Endor to confront his father, Vader wouldn't have turned good and killed the emperor, etc.)
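The "filtering data and rewriting a standard comment" idea above can be made concrete with a toy sketch. This is purely illustrative; the stats fields are invented for the example:

```python
def goal_sentence(stats):
    """Turn match stats into a formulaic sports sentence, as described above."""
    scorer, minute, won = stats["last_scorer"], stats["minute"], stats["won"]
    clause = f"{scorer} scored"
    if minute >= 89:                    # scored in the last minute of play
        clause += " a last minute goal"
    if won:                             # the goal decided the match
        clause += " winning the game"
    return clause + "."

print(goal_sentence({"last_scorer": "Campbell", "minute": 90, "won": True}))
# Campbell scored a last minute goal winning the game.
```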
https://lidarmag.com/2018/04/27/i-am-therefore-i-think-part-1/
We are hearing a lot about "deep learning" these days, with even Amazon Web Services offering hosted deep learning networks for rent. Recently Pix4D announced a prerelease of a deep learning classifier for point cloud data. I thought it might be fun to review deep learning and see what all this hype is about.

Editor's note: A 530Kb PDF of this article as it appeared in the magazine is available HERE

Way back in 1957, a new binary classifier algorithm called the Perceptron was invented by Dr. Frank Rosenblatt at Cornell's Aeronautical Laboratory. A binary classifier divides input into one of two categories. For example, you might have a bunch of geometric shapes feeding a binary classifier that decides if each shape most closely resembles a rectangle or a circle. The novel thing about the perceptron was that it was not a preprogrammed analytic algorithm with intrinsic knowledge of the characteristics of a rectangle or a circle. Rather, it was a generic "filter" that "learned" the classes by being fed known examples and having a set of weights adjusted to move the network toward the correct response. This is shown in Figure 1.

In this figure, you can imagine the inputs as being cells on a grid that feed in a 1 if the shape intersects the cell and a minus one otherwise. Each of these individual inputs (for example, if our sampling grid were 16 x 16 cells, we would have 256 individual inputs) is conditioned at the initial input layer. For our example, the conditioning is to make the input 1 for a covered pixel and -1 for an uncovered one. Each input is then multiplied by an adjustable weight and fed to a summer. Finally, the output of the summer is fed to a discriminator that outputs one of two values (say 1 or 0, which represent circle or rectangle respectively). The discriminator might be a simple threshold that says: if the output of the summer exceeds 18.7, output a 1, otherwise output a 0.
A training set is presented to the network and the error is fed back into the system to adjust the weights. For example, if we feed the system a circle and it outputs a zero, we have an error, since we said a circle is represented by an output of 1. We feed numerous examples to the system and tweak the weights based on whether the output is correct or not. We test the efficacy of our training by feeding the system shapes which have not been used in the training process and seeing how it does. We typically express the success rate as a fraction or percentage. Thus a score of 97.5% means that our system is correctly classifying all but 2.5% of the test samples.

So what is the big deal about this? Well, if I had made a coded set of algorithms to detect a circle or a rectangle, I might do something such as code up an edge detector, look at curvature of edges, number of sides and so forth. It would be very "hard coded" to detect circles and rectangles. If I wanted to switch to detecting circles and triangles, I would have to go in and rewrite my core logic. Not so with the perceptron: I just reset the weights and train with the new set of training data. Thus, as long as I have sufficient training samples, my classifier is trainable. This was a very novel concept for its time, becoming one of the pillars of the birth of computer-implemented artificial intelligence (AI).

Unfortunately, the perceptron is binary and thus can only work with inputs that cleanly segregate into two classes. In 1969 Marvin Minsky (and Seymour Papert) pointed out that the perceptron could not solve a simple XOR classification. Minsky went on to say (without proof) that while this could potentially be solved by adding more layers to the network, it would not be possible to train. Since Minsky was such a force to be reckoned with in AI at the time, the perceptron and its variants were dead in mainstream AI research.
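The training loop just described can be sketched in a few lines. This uses the classic perceptron update rule on a toy two-input problem rather than a real 16 x 16 shape grid; the function and variable names are mine:

```python
import numpy as np

# Perceptron sketch: adjustable weights, a summer, and a threshold
# discriminator. Training nudges the weights whenever the output is wrong.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred      # -1, 0 or +1
            w += lr * err * xi   # adjust weights only on error
            b += lr * err
    return w, b

# Toy linearly separable data: the class depends on the first input.
X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([1, 1, 0, 0])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [1, 1, 0, 0]
```

For linearly separable data like this, the rule converges after a couple of passes; for XOR-style data it never does, which is exactly Minsky's objection.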
During this same period of time, adjusting parameters in a systematic fashion was being explored by electrical engineers in control systems design. The general technique is called back-propagation, where the error of the output is fed in reverse order through the system using an adjustment algorithm called gradient descent. In 1974, Dr. Paul Werbos applied back-propagation to multilayer perceptrons and the artificial neural network (ANN) was born (see Figure 2).

ANNs were very popular in research circles in the late 1980s, but the compute power needed to solve the weights of a large network made them impractical for real-world problems. Perhaps 5 years ago, the ANN was once again taken out of the closet, dusted off and programmed on new, low-cost parallel processors such as Nvidia GPUs. Suddenly, programming the weights of large ANNs via backpropagation looked doable at a reasonable cost. Rapid advances were made in specific problem spaces, particularly natural language parsing. The expression "deep learning" that we now so often hear refers to ANNs with one or more hidden layers of neurons (hence, the deep part).

The great thing about ANNs today is that you really do not have to program anything; you can just use ready-made application programmer interfaces from a number of providers (including the aforementioned hosted system in AWS). In next month's edition of Random Points, we'll explore the value of ANNs and examine the sorts of problems to which these algorithms might be applicable. In the meantime, if you want to do experimentation on your own, I highly recommend the book "Make Your Own Neural Network" by Tariq Rashid, available as a Kindle book for $3.98. And keep those neurons firing!

Recommended reading: "Convoluted Thinking" – Neural Networks, Part II

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America's largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
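As an aside, Minsky's XOR objection really does vanish once a hidden layer is added. A sketch with hand-picked weights (chosen by hand for clarity; backpropagation would learn equivalent ones):

```python
import numpy as np

# A 2-2-1 network computes XOR - the function Minsky showed a single
# perceptron cannot represent. The hidden units compute OR and AND;
# the output fires for "OR but not AND". Weights are hand-picked, not trained.
step = lambda z: (z > 0).astype(int)

def xor_net(a, b):
    x = np.array([a, b])
    h = step(x @ np.array([[1, 1], [1, 1]]) - np.array([0.5, 1.5]))  # [OR, AND]
    return int(step(h @ np.array([1, -1]) - 0.5))

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The design point is that the hidden layer re-maps the inputs into a space where the classes become linearly separable; the output unit is then just an ordinary perceptron again.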
https://gigazine.net/gsc_news/en/20150622-windows-10-for-free-insiders
What is the truth behind "you can upgrade from Windows Vista or XP to Windows 10 for free"? Windows 10, scheduled for release in July 2015, is a free upgrade from Windows 7, Windows 8.1 and so on, as Microsoft has said. But Microsoft then published a blog post that could be read as saying "a free upgrade is available if you use the technical preview version on a PC that does not have Windows 7 / Windows 8.1". Whether the upgrade really is free on PCs that do not have Windows 7 or 8.1 has now been made clear.

Upcoming changes to Windows 10 Insider Preview builds

Windows 10 can be upgraded to for free from Windows 7 / Windows 8.1, but upgrading from Windows Vista or Windows XP costs up to $199 (about 24,000 yen). However, Microsoft's blog could be read as saying "even without Windows 7 / Windows 8.1 installed, those using Insider Preview, the technical preview version of Windows 10, can upgrade for free", and overseas news sites reported that "it is a thank-you present from Microsoft to Insider Program participants".

Here's how to get Windows 10 for free even if you do not have Windows 7 or 8 | Ars Technica

The IT news site Ars Technica wrote that "people using the latest build 10125 of Insider Preview will be able to use the final build released on July 29 and the release versions that follow". The claim that people using Insider Preview on a PC registered to a Microsoft Account (MSA) can update for free, even without Windows 7 / Windows 8.1, provided the PC satisfies the minimum system requirements, became a big topic. Ars Technica called it "a completely different approach from Microsoft Windows so far", and WinBeta praised it as a new attempt by Microsoft and an expression of gratitude toward test participants. However, Microsoft subsequently changed the blog's text. According to WinBeta, the blog at first contained the following words.
At first, the blog said that once you had successfully installed and activated this build, you would also be able to install Windows on that PC from final media if you wanted to start over fresh. Currently, the wording has been changed: once you have registered, you will receive the Windows 10 final release build, but only a PC with genuine Windows 7 or Windows 8.1 can upgrade to Windows 10 as part of the free upgrade offer. The point that changed is that "Insider Preview participants can receive the final release build of Windows 10 and keep it activated" became merely "can receive the final release build". Finally, a sentence to the effect of "the important point is that only people using Windows 7 and Windows 8.1 can update for free" was attached. Judging also from exchanges with Gabriel Aul, who wrote the blog, Microsoft's intention seems to be that a PC registered with an MSA can keep receiving Windows 10 builds even after a clean install, not that it receives a free upgrade license; and it remains impossible for Insider Program participants coming from Windows Vista or Windows XP to upgrade for free.

in Software, Posted by logq_fa
https://www.behance.net/gallery/8065067/Survey-Bot-Auto-completes-surveys/
Here is Survey Bot attempting to complete a survey with no given information. I can tell you that this did work, and running it on 6 surveys a day for two weeks (fully automated, of course) got me the total sum of £14.95, with no user interaction whatsoever! Here is an example of the program working. It completes the survey in little time (however I think it does break at one point). The next version (after this video) supports page errors linking back to the original page, so it can be left running and will fix itself if anything goes wrong!
https://bioimagebook.github.io/chapters/1-concepts/3-bit_depths/imagej.html
ImageJ: Types & bit-depths#

```python
%load_ext autoreload
%autoreload 2

# Default imports
import sys
sys.path.append('../../../')
from helpers import *
from matplotlib import pyplot as plt
from myst_nb import glue
import numpy as np
from scipy import ndimage
```

The bit-depth and type of an image is determined before it is opened in ImageJ. If the data is clipped, it's already wrong before we begin – and no amount of ImageJ wizardry will get the information back. Here, we will explore how to:

- Check the bit-depth and type
- Diagnose when clipping may have occurred
- Convert the bit-depth and type – carefully – if needed

Checking the bit-depth & type#

Bit-depth and type are related to one another: both are needed to convert binary data into pixel values. ImageJ does not always make a careful distinction between the two. The full list of image types supported by ImageJ is found in the Image ▸ Type submenu. The top three entries are the most important; they are

- 8-bit – unsigned integer
- 16-bit – unsigned integer
- 32-bit – floating point

Although these look like bit-depths, they are listed as 'types'. But since 8-bit and 16-bit images in ImageJ are always unsigned integer, and 32-bit images are always floating point, there is no ambiguity. You can see the type of the current image by checking which item under Image ▸ Type has a tick next to it. But you don't usually have to; you can also see the information at the top of the image window.

```python
fig = create_figure(figsize=(8, 4))
show_image('images/type-window-series.png', pos=121)
show_image('images/type-window-neuron.png', pos=122)
glue_fig('fig_types_windows', fig)
```

There are various other types listed in the submenu, which all have an association with color. These are less different than they first appear: an RGB image is really an 8-bit image with three channels (corresponding to red, green and blue).
We will explore this in Channels & colors.

The biggest problem associated with an image's bit-depth and type is clipping. Histogram is the essential command needed to diagnose if something is wrong – just press H to run it.

```python
fig = create_figure(figsize=(8, 4))
show_image('images/imagej-histogram-unclipped.png', title="Good image (no clipping)", pos=121)
show_image('images/imagej-histogram-clipped.png', title="Clipped image", pos=122)
glue_fig('fig_types_imagej_clipping', fig)
```

The main sign that an image was clipped is a big peak at either end of the histogram. This can take some careful inspection to distinguish from the black border that surrounds the histogram in ImageJ. If you know the bit-depth and type of the image, you can figure out the range (e.g. 0–255 for an 8-bit unsigned integer image, 0–65,535 for 16-bit) and usually that gives a good indication of where the peaks would be – but it isn't a perfect guide. Conceivably, we could have an image that was clipped at some other value because it has been rescaled after clipping.

Yes! There is a small peak at the high end of the histogram, corresponding to pixel values of 4095. This is itself a suspicious number because it would be the maximum possible value in a 12-bit unsigned integer image (i.e. 2^12 - 1) – so my guess is that this was the bit-depth of the acquisition device. Admittedly, the image is not very badly clipped. We could check the proportion of pixels with that value, and use this to estimate whether it is likely that the clipping will have a significant impact upon later analysis. But it's better to avoid clipping altogether when possible.
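The "check the proportion of pixels at that value" idea is easy to script outside ImageJ. A sketch with NumPy; the simulated image and the helper are mine, not from the book:

```python
import numpy as np

# Sketch: estimate possible clipping from the fraction of pixels sitting
# exactly at the extreme values present in the image.
def clipping_fraction(img):
    img = np.asarray(img)
    return np.mean(img == img.min()), np.mean(img == img.max())

# Simulated 12-bit acquisition in which bright pixels saturate at 4095.
rng = np.random.default_rng(1)
img = np.clip(rng.normal(3500, 400, (256, 256)), 0, 4095).astype(np.uint16)
lo, hi = clipping_fraction(img)
print(f"fraction at min: {lo:.4f}, fraction at max: {hi:.4f}")
```

A few percent of pixels piled up exactly at the maximum is the programmatic equivalent of the suspicious histogram peak described above.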
There are three main scenarios when you might need to convert the type or bit-depth of an image:

- Reducing the file size
- Converting to 8-bit to display the image in other software
  - Because 8-bit images are more common outside of science
- Converting to floating-point before doing image processing operations
  - Because (as we will see later in the book) these often require fractions and negative numbers

Note that reversing the effects of clipping isn't in the list: if an image is clipped during acquisition, any later conversion won't help. The clipped data is gone for good. However, you can still introduce clipping after acquisition by making ill-advised conversions – with all the unfortunate consequences of that. Therefore it's important to know how ImageJ's type conversion works.

Increasing the bit-depth#

Let's start with the easy case: increasing the bit-depth of an image. In principle, we can convert an image just by choosing the type we want from the Image ▸ Type submenu. In ImageJ, there are only really three bit-depths and associated types. This means that the only conversions that can increase the bit-depth are:

- 8-bit (unsigned integer) → 16-bit (unsigned integer)
- 8-bit (unsigned integer) → 32-bit (floating point)
- 16-bit (unsigned integer) → 32-bit (floating point)

Any 8-bit unsigned integer value can be represented in a 16-bit unsigned integer image, and any 16-bit unsigned integer value can be represented in a 32-bit floating point image. Consequently, increasing the bit-depth should always be safe. That being said… always prepare for software to surprise us! We shouldn't be complacent about image conversions, even if we think they should be ok. It's so easy to measure images (press M), we should always check before and after conversion to make sure the summary measurements are unchanged.

Reducing the bit-depth#

Reducing the bit-depth is where the biggest dangers lurk, because not all values from a higher bit-depth image fit into an image with a lower bit-depth.
The process is the same: choose the type you want from the Image ▸ Type submenu. But what happens next depends upon whether the option Scale When Converting (under Edit ▸ Options ▸ Conversions…) is checked or not.

- Scale When Converting is not checked: pixels are simply given the closest valid value within the new bit-depth, i.e. there is clipping and rounding as needed. Example: if you convert an image to 8-bit, then no data will be lost only if every pixel value before conversion is an integer in the range 0–255. Every other value will be rounded or clipped.
- Scale When Converting is checked: a constant is added or subtracted, then pixels are further divided by another constant before being assigned to the nearest valid value within the new bit-depth. Only then is clipping or rounding applied if it is still needed.

Scale When Converting is on by default and, as suggested by Fig. 22, is usually the best option. The question then is where the constants come from to perform the rescaling. Perhaps surprisingly, they are determined from the Minimum and Maximum in the current Brightness/Contrast… settings: the Minimum is subtracted, and the result is divided by Maximum - Minimum. Any pixel value that was lower than Minimum or higher than Maximum ends up being clipped. Consequently, converting to a lower bit-depth with scaling can lead to different results depending upon what the brightness and contrast settings were. This means that, ideally, we would use a minimum value that is equal to the minimum pixel value in the image, and a maximum value equal to the maximum pixel value. Fortunately, there is an easy way to achieve this:

Reset the Brightness/Contrast range before reducing the bit-depth

If you really need to reduce the bit-depth of an image in ImageJ, you should usually open Brightness/Contrast… (Shift+C) and press the Reset button first, to minimize the data lost to clipping or rounding.

Why is scaling usually a good thing when reducing the bit-depth, and why is a constant usually subtracted before applying this scaling?
Hint: As an example, consider how a 16-bit image containing values in the range 4000–5000 might be converted to 8-bit, first without scaling, and then alternatively by scaling with or without the initial constant subtraction. What constants for subtraction and division would usually minimize the amount of information lost when converting to an 8-bit image, limiting the errors to rounding only and not clipping?

In the example given, converting to 8-bit without any scaling would result in all pixels simply becoming 255: all useful information in the image would be lost. With scaling but without subtraction, it would make sense to divide all pixel values by the maximum in the image divided by the maximum in the new bit-depth, i.e. by 5000/255. This would then lead to an image in which pixels fall into the range 204–255. Much information has clearly been lost: 1000 potentially different values have now been squeezed into 52. However, if we first subtract the smallest of our 16-bit values (i.e. 4000), our initial range becomes 0–1000. Divide then by 1000/255 and the new values become scaled across the full range of an 8-bit image, i.e. 0–255. We have still lost information – but considerably less than if we had not subtracted the constant first.

Make sure that the Scale When Converting option is turned on (it should be by default). Then, using a suitable 8-bit sample image, explore the effects of brightness/contrast settings when increasing or decreasing bit-depths. Can you destroy the image by simply 1) increasing the bit-depth, and then 2) decreasing the bit-depth to its original value?

It's generally a good idea to choose Reset in the Brightness/Contrast… window before reducing any bit-depths for 2D images (see Multidimensional processing to read about special considerations related to z-stacks or time series). You can destroy an image by increasing its bit-depth, adjusting the brightness/contrast and then decreasing the bit-depth to the original one again.
This may seem weird, because clearly the final bit-depth is capable of storing all the original pixel values. But ImageJ does not know this and does not check, so it will simply do its normal bit-depth-reducing conversion based on contrast settings.
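The 4000–5000 worked example above is easy to reproduce numerically. A sketch of the two conversion modes with NumPy — just the arithmetic described in the text, not ImageJ's actual code:

```python
import numpy as np

# Sketch of bit-depth reduction for the 4000-5000 example above.
img16 = np.linspace(4000, 5000, 5).astype(np.uint16)  # [4000, 4250, 4500, 4750, 5000]

# Without scaling: clip to the 8-bit range -> everything saturates at 255.
no_scale = np.clip(img16, 0, 255).astype(np.uint8)

# With scaling: subtract Minimum, divide by (Maximum - Minimum), map to 0-255.
mn, mx = img16.min(), img16.max()
scaled = np.round((img16 - mn) / (mx - mn) * 255).astype(np.uint8)

print(no_scale)  # [255 255 255 255 255] -> all information lost
print(scaled)    # [  0  64 128 191 255] -> full 8-bit range used
```

The only information lost in the scaled version is the rounding to the nearest of 256 levels, which is exactly the behavior the Reset tip above is trying to guarantee.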
https://forum.forgefriends.org/t/advancing-federation-in-gitea/240/8
May I suggest you also post to SocialHub to keep the AP community informed? @pilou would you be so kind as to post this reply?

@techknowlogick thanks for your reply and guidance. It's good to know the timing is right and I'm hopeful someone will be interested.

"If you were to use these funds I would recommend giving them directly to the developer working on this PR, and the maintainers reviewing it, rather than to the project itself"

I'll follow your advice. Since fedeproxy is horizontal (no organization) the funds originate from individuals (Pierre-Louis and myself, 50% each) and it will be possible to pay the person(s) doing the work directly.

"as this is likely a large PR…"

Maybe it can be broken down into smaller tasks / PRs? It would be easier to review and implement. And it will also be easier to prioritize which task should be worked on first and which ones can fit in the modest budget there is.

@aschrijver that was my intention, as soon as cj and zeripath comment or a week passes, whichever comes first. I take this opportunity to ask your advice on how to word the toot / message to announce it. You have a talent for making it so people pay attention.

People on Lemmy ask where they can donate. I think I will provide 3 places:
- FedeProxy ??? can people donate?
- Gitea OpenCollective - does not go directly to this initiative
- GoFed ActivityPub Labs OpenCollective - does not go directly to this initiative

The ForgeFed website is still down, for over a week now, so I just posted to Feneas about it. Maybe they want to step in too with this new development going on. Update: Instead I pointed here.

FedeProxy is a horizontal community of individuals, therefore not incorporated: it is not possible to donate to the organization because it does not exist. As a consequence each member of the community is responsible for raising their own money and for receiving funds that they pledge to dedicate to the advancement of the fedeproxy project.
As you can see on the liberapay page, the funds currently go directly to individuals (Pierre-Louis and myself). The income and expense spreadsheet will show, weekly, if the money that we received via liberapay is actually accounted for. That's the quantitative benefit of full transparency. The more difficult question is to evaluate if the way we spent the money makes sense at all. A good example is the current discussion on what to do with the diversity grant.

Regarding the Gitea federation features, here is how it will go:
- Someone adds to the https://liberapay.com/fedeproxy/ account and earmarks it for the implementation of federation in Gitea
- I will add this to the income part of the spreadsheet
- When someone claims credit for getting a pull request merged in Gitea that helps with federation, a new topic will be opened on the forum for the community to come to a consensus (because that's how the decision process starts)
- When the decision is final, Pierre-Louis or myself will send them the required amount
- I will add that to the expense part of the spreadsheet
- Someone can verify in the spreadsheet later on, with a URL to the decision to release the funds, that their donation was used as intended

Weave the threads, Luke! That thread you mentioned on the Gitea forum also really needs a real update, knitting all of the ongoing conversations together. I just added a comment to the Gitea GH issue with all the places where funding is mentioned in it.

PS. Your other posts on the Gitea forum are still flagged. I'd DM an admin of the forum.

Posted by southerntofu following our exchanges on libera#peers: While working on the grant application today, I ran into a question that is worth posting on the issue. Would you be so kind as to copy/paste it on my behalf?

While working on the grant application today, I ran into a question that is probably worth discussing before moving forward. Go-fed is AGPLv3+, and Gitea is MIT.
Adding Go-fed as a dependency of Gitea means that Gitea, as a whole (meaning Gitea + Go-fed + other dependencies), can only be released under the AGPLv3+. Or not released at all. To be more precise, here is the minimal set of actions required to distribute a gitea executable that contains Go-fed:
- The https://github.com/go-gitea/gitea repository needs no change
- A license notice stating the executable is under the AGPLv3+ license must be prominently added to the user interface (e.g. added at the beginning of js/licenses.txt)

This is not the only possible course of action, only the simplest. What do people think about this?

2 posts were split to a new topic: Grant application for federation in Gitea

This is no problem. Only apcore is AGPL, but this project is an opinionated server framework, meant to make life easier in implementing a full-blown server. The go-fed libraries are (probably very deliberately) not under AGPL, but the BSD-3 license. Gitea would not need apcore and already has a bunch of server aspects implemented their own way. They'd need to reimplement some things again, but it would not involve too much work (e.g. webfinger).

@pilou would you be so kind as to post the following on my behalf?

Oh… my bad, thanks for the clarification! Since it seems likely that nobody will be interested in the 5K€ grant, it would be a good thing to set an expiration date. Would November 1st 2021 be sensible?

@zeripath in the 1.16 plan you write "(It's worth noting that although this may appear orthogonal to things like federation - this is key to making federation possible.)". You mentioned this a few times and I concur: authentication / authorization is a building block of federated activities. Maybe I'm stating the obvious but here it is: if issues were created around that topic that clearly articulate how they are relevant to federation, it would be possible to fund their implementation with the money earmarked for federation.
My 2cts

3 posts were merged into an existing topic: Grant application for federation in Gitea
https://forums.yoyoexpert.com/t/yoyoexpert-popularity/258
Hey everyone, I made this graph to show how popular this website is. It's my thanks to André Boulay and whoever helped him make this site. By the way, is there a list of the people who help make the site? Or was it just André alone? Also, if you would like to put down a thank you for him/them, that would be great. I guess it's kinda like a really cheap Christmas present, isn't it, ha ha. Thanks André and the YYE team (if there is one) for one of the best websites I've ever encountered!

No offense to André, but I think that this site is like 3rd most popular. It is the BEST for learning tricks, but I think the best forum is YoYoNation, cause it's more active. Anyway, thanks Andre!

Well, YoYoNation has been around A LOT longer than YoYoExpert. I'm positive by looking at the graph that it's only gonna get better and better. Also Samad, I recall you winning a Legacy signed by André himself. Can you get something like that from YYN? Later.

I was never denying the awesomeness of YYE, just saying that YYN is more popular, and you are right that YYE will get more popular over time, cause that's what YYN did. Also, YYN has the most yoyo stuff of any online store. YYN is popular and holds a lot of yoyoers, and YYE is an extra yoyo website for some yoyoers who would join in. Happy Throwing! =]

I love this site way more than ANY other yoyo site I have encountered. Thanks SO much to Andre (and anyone else who helped build the site) for building this site, it's absolutely AMAZING, good job!!! ;D ;D ;D
https://confluence.lsstcorp.org/display/DM/S17B+HSC+PDR1+reprocessing
This page collects notes about the HSC reprocessing in cycle S17B, including processing the RC dataset and processing the full PDR1 dataset. Descriptions of the PDR1 dataset and the RC dataset are summarized, along with the software stack and pipelines used in the processing. A stack based on w_2017_17 was used. The output repositories are available to DM team members at:

A description of the aims and organization of this project is available here.

The input dataset: HSC Strategic Survey Program (SSP) Public Data Release 1 (PDR1)

The survey has three layers and includes 8 fields.
- UDEEP: SSP_UDEEP_SXDS, SSP_UDEEP_COSMOS
- DEEP: SSP_DEEP_ELAIS_N1, SSP_DEEP_DEEP2_3, SSP_DEEP_XMM(S)_LSS, SSP_DEEP_COSMOS
- WIDE: SSP_WIDE, SSP_AEGIS

| Layer | Field Name ("OBJECT") | Number of visits (per filter) | Tract IDs (from https://hsc-release.mtk.nao.ac.jp/doc/index.php/database/) |
|---|---|---|---|
| DEEP | SSP_DEEP_ELAIS_N1 | | 16984, 16985, 17129, 17130, 17131, 17270, 17271, 17272, 17406, 17407 |
| DEEP | SSP_DEEP_DEEP2_3 | 32, 31, 32, 44, 32, 23, 17 | 9220, 9221, 9462, 9463, 9464, 9465, 9706, 9707, 9708 |
| DEEP | SSP_DEEP_XMM(S)_LSS | 25, 27, 18, 21, 25, 0, 0 | 8282, 8283, 8284, 8523, 8524, 8525, 8765, 8766, 8767 |
| DEEP | SSP_DEEP_COSMOS | 20, 20, 40, 48, 16, 18, 0 | 9569, 9570, 9571, |
| UDEEP | SSP_UDEEP_SXDS | 18, 18, 31, 43, 46, 21, 19 | 8523, 8524, 8765, 8766 |
| UDEEP | SSP_UDEEP_COSMOS | 19, 19, 35, 33, 55, 29, 0 | 9570, 9571, 9812, 9813, 9814, 10054, 10055 |
| WIDE | SSP_AEGIS | 8, 5, 7, 7, 7, 0, 0 | 16821, 16822, 16972, 16973 |
| WIDE | SSP_WIDE | 913, 818, 916, 991, 928, 0, 0 | XMM: 8279-8285, 8520-8526, 8762-8768; GAMA09H: 9314-9318, 9557-9562, 9800-9805; WIDE12H: 9346-9349, 9589-9592; GAMA15H: 9370-9375, 9613-9618; HECTOMAP: 15830-15833, 16008-16011; VVDS: 9450-9456, 9693-9699, 9935-9941 |

Plots of tracts/patches: https://hsc-release.mtk.nao.ac.jp/doc/index.php/data/

Note: tract 9572 is listed on the HSC PDR1 website for DEEP_COSMOS but no data actually overlap it; PDR1 does not have it either.
Note: In S17B, more tracts than listed were processed. See below.
Release Candidate ("RC") dataset

The RC dataset was originally defined in https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1361 for hscPipe 3.9.0. The RC dataset is public and available at /datasets/. 62 of its visits were not included in PDR1 (DM-10128): two of SSP_WIDE and 60 of SSP_UDEEP_COSMOS; their visit IDs are 274 276 278 280 282 284 286 288 290 292 294 296 298 300 302 306 308 310 312 314 316 320 334 342 364 366 368 370 1236 1858 1860 1862 1878 9864 9890 11742 28354 28356 28358 28360 28362 28364 28366 28368 28370 28372 28374 28376 28378 28380 28382 28384 28386 28388 28390 28392 28394 28396 28398 28400 28402 29352 (also see here).

The RC dataset includes (a) 237 visits of SSP_UDEEP_COSMOS and (b) 83 visits of SSP_WIDE, in 6 bands:

(a) Cosmos to full depth (part of SSP_UDEEP_COSMOS) (tract=9813)
- HSC-G 11690..11712:2^29324^29326^29336^29340^29350^29352
- HSC-R 1202..1220:2^23692^23694^23704^23706^23716^23718
- HSC-I 1228..1232:2^1236..1248:2^19658^19660^19662^19680^19682^19684^19694^19696^19698^19708^19710^19712^30482..30504:2
- HSC-Y 274..302:2^306..334:2^342..370:2^1858..1862:2^1868..1882:2^11718..11742:2^22602..22608:2^22626..22632:2^22642..22648:2^22658..22664:2
- HSC-Z 1166..1194:2^17900..17908:2^17926..17934:2^17944..17952:2^17962^28354..28402:2
- NB0921 23038..23056:2^23594..23606:2^24298..24310:2^25810..25816:2

(b) Two tracts of wide (part of SSP_WIDE) (tract=8766^8767)
- HSC-G 9852^9856^9860^9864^9868^9870^9888^9890^9898^9900^9904^9906^9912^11568^11572^11576^11582^11588^11590^11596^11598
- HSC-R 11442^11446^11450^11470^11476^11478^11506^11508^11532^11534
- HSC-I 7300^7304^7308^7318^7322^7338^7340^7344^7348^7358^7360^7374^7384^7386^19468^19470^19482^19484^19486
- HSC-Y 6478^6482^6486^6496^6498^6522^6524^6528^6532^6544^6546^6568^13152^13154
- HSC-Z 9708^9712^9716^9724^9726^9730^9732^9736^9740^9750^9752^9764^9772^9774^17738^17740^17750^17752^17754

Software Stack Version / Pipeline Steps / Config

The LSST
software stack is used; its Getting Started documentation is at https://pipelines.lsst.io

Stack version: w_2017_17 (published on 26-Apr-2017) + master of meas_mosaic/obs_subaru/ctrl_pool as of 7-May-2017, built with w_2017_17 (i.e. w_2017_17 + DM-10315 + DM-10449 + DM-10430). This implies the PS1 reference catalog "ps1_pv3_3pi_20170110" in the LSST format (HTM indexed) is used (/datasets/refcats/htm/ps1_pv3_3pi_20170110/). The externally provided bright object masks (butler type "brightObjectMask") of version "Arcturus" (DM-10436) were added to the repo and applied in coaddDriver.assembleCoadd.

Pipeline steps and configs:
- singleFrameDriver.py: ignore ccd=9, which has bad amps; its results are not trustworthy even if processCcd passes.
- coaddDriver.py: make config.assembleCoadd.subregionSize small enough that a full stack of images can fit into memory at once. This is a trade-off between memory and I/O, but it does not matter scientifically, as the pixels are independent.
- forcedPhotCcd.py: note that it was added late and hence was not run in the RC processing.

Operational configurations, such as the logging configurations in ctrl_pool, may differ from the tagged stack (e.g. DM-10430). In the full PDR1 reprocessing, everything was run with the same stack version and config. Reproducible failures are noted below, but no reprocessing was done with a newer software version. This stack version had a known science problem of bad ellipticity residuals, as reported in DM-10482; the bug fix DM-10688 was merged to the stack on May 30 and hence was not applied in this reprocessing campaign.

General hints and tips for processing
- To lower the memory footprint when there are a LOT of inputs, decrease the subregion size in making coadds, e.g. config.assembleCoadd.subregionSize = [10000, 50]
- To avoid thrashing a cluster filesystem, do not use more than 20 or so cores and do not use multiple nodes.
- multiband: typically want to use one core per patch; so the upper limit of usefulness is the number of patches multiplied by the number of filters.

Units of independent execution

These pipelines will be run in units no smaller than these:
- makeSkyMap.py: one SkyMap for everything
- singleFrameDriver.py: ccd (typically run per visit)
- mosaic.py: tract x filter, including all visits overlapping that tract in that filter
- coaddDriver.py: patch x filter, including all visits overlapping that patch in that filter (typically run per tract)
- multiBandDriver.py: patch, including all filters (typically run per tract)

Data of different layers (DEEP/UDEEP/WIDE) are processed separately.

Example commands for processing

- makeSkyMap.py
makeSkyMap.py /datasets/hsc/repo --rerun private/username/path
- singleFrameDriver.py
singleFrameDriver.py /datasets/hsc/repo --rerun private/username/path/sfm --batch-type slurm --mpiexec='-bind-to socket' --cores 24 --time 600 --job jobName2 --id ccd=0..8^10..103 visit=444
- mosaic.py
mosaic.py /datasets/hsc/repo --rerun private/username/path/sfm:private/username/path/mosaic --numCoresForRead=12 --id ccd=0..8^10..103 visit=444^446^454^456 tract=9856 --diagnostics --diagDir=/path/to/mosaic/diag/dir/
- coaddDriver.py
coaddDriver.py /datasets/hsc/repo --rerun private/username/path/mosaic:private/username/path/coadd --batch-type=slurm --mpiexec='-bind-to socket' --job jobName4 --time 600 --nodes 1 --procs 12 --id tract=9856 filter=HSC-Y --selectId ccd=0..8^10..103 visit=444^446^454^456
- multiBandDriver.py
multiBandDriver.py /datasets/hsc/repo --rerun private/username/path/coadd:private/username/path/multiband --batch-type=slurm --mpiexec='-bind-to socket' --job jobName5 --time 5000 --nodes 1 --procs 12 --id tract=9856 filter=HSC-Y^HSC-I
- forcedPhotCcd.py
forcedPhotCcd.py /datasets/hsc/repo --rerun private/username/path/multiband:private/username/path/forced -j 12 --id ccd=0..8^10..103 visit=444

The output data products of each step, their butler
dataset types and butler policy templates are summarized at "S17B Output dataset types of pipe_drivers tasks for HSC" for the w_2017_17 stack.

Processing the RC dataset

singleFrameDriver: Reproducible failures occurred in 46 CCDs from 23 visits. The failed visit/ccds are the same as those seen with the w_2017_14 stack (DM-10084). Their data IDs are:
--id visit=278 ccd=95 --id visit=280 ccd=22^69 --id visit=284 ccd=61 --id visit=1206 ccd=77 --id visit=6478 ccd=99 --id visit=6528 ccd=24^67 --id visit=7344 ccd=67 --id visit=9736 ccd=67 --id visit=9868 ccd=76 --id visit=17738 ccd=69 --id visit=17750 ccd=58 --id visit=19468 ccd=69 --id visit=24308 ccd=29 --id visit=28376 ccd=69 --id visit=28380 ccd=0 --id visit=28382 ccd=101 --id visit=28392 ccd=102 --id visit=28394 ccd=93 --id visit=28396 ccd=102 --id visit=28398 ccd=95^101 --id visit=28400 ccd=5^10^15^23^26^40^53^55^61^68^77^84^89^92^93^94^95^99^100^101^102 --id visit=29324 ccd=99 --id visit=29326 ccd=47

WIDE: The coadd products have all 81 patches in both tracts (8766, 8767) in 5 filters, except that there is no coadd in tract 8767 patch 1,8 in HSC-R (nothing passed the PSF quality selection there); the multiband products of all 162 patches were generated.

COSMOS: The coadd products have 77 patches in tract 9813 in HSC-G, 74 in HSC-R, 79 in HSC-I, 79 in HSC-Y, 79 in HSC-Z, and 76 in NB0921; the multiband products of 79 patches were generated.

The "brightObjectMask" was not applied, but this should not affect the results. forcedPhotCcd.py was not run in the RC processing.

Processing the SSP PDR1

All processing was done with the same stack setup (i.e. without DM-10451). Data of the three layers (UDEEP, DEEP, WIDE) were processed separately. The output repositories are at:

All logs are at /datasets/hsc/repo/rerun/DM-10404/logs/

While unnecessary, some edge tracts outside of the PDR1 coverage were attempted in this processing. Those data outputs are kept in the repos as well.
In other words, there are more tracts in the above output repositories than listed in the tract IDs in the table on top of this page; the additional data can be ignored. In singleFrameDriver/processCcd, there were reproducible failures in 78 CCDs from 74 visits. Their data IDs are: --id visit=1206 ccd=77 --id visit=6342 ccd=11 --id visit=6478 ccd=99 --id visit=6528 ccd=24^67 --id visit=6542 ccd=96 --id visit=7344 ccd=67 --id visit=7356 ccd=96 --id visit=7372 ccd=29 --id visit=9736 ccd=67 --id visit=9748 ccd=96 --id visit=9838 ccd=101 --id visit=9868 ccd=76 --id visit=11414 ccd=66 --id visit=13166 ccd=20 --id visit=13178 ccd=91 --id visit=13198 ccd=84 --id visit=13288 ccd=84 --id visit=15096 ccd=47^54 --id visit=15206 ccd=100 --id visit=16064 ccd=101 --id visit=17670 ccd=24 --id visit=17672 ccd=24 --id visit=17692 ccd=8 --id visit=17736 ccd=63 --id visit=17738 ccd=69 --id visit=17750 ccd=58 --id visit=19468 ccd=69 --id visit=23680 ccd=77 --id visit=23798 ccd=76 --id visit=24308 ccd=29 --id visit=25894 ccd=68 --id visit=29324 ccd=99 --id visit=29326 ccd=47 --id visit=29936 ccd=66 --id visit=29942 ccd=96 --id visit=29966 ccd=103 --id visit=30004 ccd=95 --id visit=30704 ccd=101 --id visit=32506 ccd=8 --id visit=33862 ccd=8 --id visit=33890 ccd=61 --id visit=33934 ccd=95 --id visit=33964 ccd=101 --id visit=34332 ccd=61 --id visit=34334 ccd=61 --id visit=34412 ccd=78 --id visit=34634 ccd=61 --id visit=34636 ccd=61 --id visit=34928 ccd=61 --id visit=34930 ccd=61 --id visit=34934 ccd=101 --id visit=34936 ccd=50 --id visit=34938 ccd=95 --id visit=35852 ccd=8 --id visit=35862 ccd=61 --id visit=35916 ccd=50 --id visit=35932 ccd=95 --id visit=36640 ccd=68 --id visit=37342 ccd=78 --id visit=37538 ccd=100 --id visit=37590 ccd=85 --id visit=37988 ccd=33 --id visit=38316 ccd=11 --id visit=38328 ccd=91 --id visit=38494 ccd=6^54 --id visit=42454 ccd=24 --id visit=42510 ccd=77 --id visit=42546 ccd=93 --id visit=44060 ccd=31 --id visit=44090 ccd=27^103 --id visit=44094 ccd=101 --id 
visit=44162 ccd=61 --id visit=46892 ccd=64 --id visit=47004 ccd=101

Out of the 78 failures:
- 36 failed with: "Unable to match sources"
- 13 failed with: "No objects passed our cuts for consideration as psf stars"
- 7 failed with: "No sources remaining in match list after magnitude limit cuts"
- 3 failed with: "No input matches"
- 3 failed with: "Unable to measure aperture correction for required algorithm 'modelfit_CModel_exp': only 1 sources, but require at least 2."
- 1 failed with: "All matches rejected in iteration 2"
- 15 failed with: "PSF star selector found candidates"

A rerun log of these failures is attached as singleFrameFailures.log.

In multiBandDriver, two patches of WIDE (tract=9934 patch=0,0 and tract=9938 patch=0,0) failed with an AssertionError, as reported in DM-10574. I excluded the failed patches from the multiBandDriver commands, and the jobs were then able to complete and process all other patches. The multiBandDriver job of WIDE tract=9457 could not finish unless patch=1,8 was excluded; however, tract 9457 is actually outside of the PDR1 coverage.

In forcedPhotCcd, fatal errors were seen when the reference catalog of a patch did not exist; therefore some forced_src outputs were not generated. A JIRA ticket has been filed: DM-10755

Low-level processing details

This section includes low-level details that may only be of interest to the operations team. The first singleFrame job started on May 8, the last multiband job ran on May 22, and the last forcedPhotCcd job ran on Jun 1. The processing was done using the Verification Cluster and the GPFS space mounted on it. The NCSA team was responsible for shepherding the run and resolving non-pipeline issues, in close communication with and with support from the DRP team regarding the science pipelines. The "ctrl_pool" style drivers were run on the slurm cluster.
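Each driver invocation was submitted as one slurm job per unit of work (e.g., one coaddDriver job per tract x filter). A sketch of assembling such a command line, following the example commands earlier on this page; the helper name and rerun paths are hypothetical:

```python
def coadd_driver_cmd(repo, input_rerun, output_rerun, tract, filt, visits,
                     procs=12, time_min=600):
    """Build one coaddDriver.py command line for a tract x filter job."""
    # Visit selections use the ^-separated --id syntax.
    visit_spec = "^".join(str(v) for v in sorted(visits))
    return ["coaddDriver.py", repo,
            "--rerun", "{}:{}".format(input_rerun, output_rerun),
            "--batch-type=slurm", "--mpiexec=-bind-to socket",
            "--job", "coadd-{}-{}".format(tract, filt),
            "--time", str(time_min), "--nodes", "1", "--procs", str(procs),
            "--id", "tract={}".format(tract), "filter={}".format(filt),
            # Skip ccd=9, which has bad amps (see the pipeline configs above).
            "--selectId", "ccd=0..8^10..103", "visit={}".format(visit_spec)]

cmd = coadd_driver_cmd("/datasets/hsc/repo", "private/user/mosaic",
                       "private/user/coadd", 8766, "HSC-Y", [6482, 6478])
assert cmd[-1] == "visit=6478^6482"
```

Looping such a builder over the tract x filter combinations of a layer yields one submission per slurm job, which is how the job counts below accumulate.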
The processing tasks/drivers were run as a total of 8792 slurm jobs:
- 514 singleFrame slurm jobs (slurm job IDs: jobids_sfm.txt)
- 1555 mosaic slurm jobs (slurm job IDs: jobids_mosaic.txt)
- 1555 coadd slurm jobs (slurm job IDs: jobids_coadd.txt)
- 362 multiband slurm jobs (slurm job IDs: jobids_multiband_deepUdeep.txt, jobids_multiband_wide.txt)
- 4806 forcedPhotCcd slurm jobs (slurm job IDs: jobids_forc.txt)

Disk Usage throughout the S17B reprocessing

The figures above show the disk usage in the production scratch space, which was reserved purely for this S17B campaign. Tests and failed runs wrote to this space as well. At hour ~275, some older data in this scratch space were removed, so the drop there should be ignored.

The resultant data products are archived in 4 folders at /datasets/hsc/repo/rerun/DM-10404/. In total there are 11594219 files. The large files are typically hundreds of MBs; the average size is ~14 MB. The file size distribution is shown in the plot below:

In terms of butler dataset types, the plots below show the distributions for SFM products and others. All plots are in log scale. More details can be found at https://jira.lsstcorp.org/browse/DM-10904

Computing Usage of the S17B reprocessing

Total CPU = 79246 core-hours ~471.7 core-weeks
Total User CPU = 76246 core-hours ~453.8 core-weeks

The core-hours spent at each pipeline step are:

The figure below shows the "efficiency" for each pipeline, calculated by dividing the total CPU time by (wall-clock elapsed time * number of cores).
- A general feature of the plots is that the efficiency is bounded by the fact that, with ctrl_pool/MPI, the MPI root process is mostly idle yet occupies one core. This corresponds to an upper bound of 23/24 ~0.958 for SFM, 11/12 ~0.917 for coadd processing, etc.
- sfm: Every 11 visits were grouped into one job, and each visit has 103 CCDs. Thus, 1133 CCDs were processed in a job, divided amongst 24 cores.
Each CCD took around 2 minutes on average; in other words, roughly 90 min of wall-clock elapsed time and 36 hr of accumulated CPU time per job. Efficiency is uniformly good. SingleFrameDriverTask is a ctrl_pool BatchParallelTask. The histogram below shows the CPU time of the SFM slurm jobs. The job IDs of the longest-running jobs are: 51245, 51320, 51371, 51483, 51496, 51497, 51525, 51533, 51534, 51536, 51546, 51547, 51548, 51549, 51550, 51582, 51587, 51602, 51603
- mosaic: The unit of processing is each tract x filter on a node for each layer. Mosaic jobs used 12 cores for reading source catalogs, via Python multiprocessing, but 1 core for the other parts of the task; we therefore did not calculate the efficiency, as it would be misleading. MosaicTask does not use ctrl_pool.
- coadd: Coadd jobs were chosen to process a tract on a node. One tract has 9*9=81 patches. CoaddDriverTask is a ctrl_pool BatchPoolTask. In most cases the patches are processed "12 wide" using ctrl_pool, distributing the work to 12 cores on a node. Using MPI-based ctrl_pool in this context leads to one mostly idle MPI root process and 11 workers. As Verification nodes have 128 GB RAM, this gives on average ~11 GB of memory per patch, with the aggregate able to use the full 128 GB.
- MultiBandDriver is a ctrl_pool BatchPoolTask.
- Six multiband jobs (9476-mbWIDE9219, 59482-mbWIDE9737, 59484-mbWIDE10050, 59485-mbWIDE10188, 59486-mbWIDE16003, 59316-mbUDEEP8522) were excluded from this figure; their elapsed times were very short and their efficiencies very bad, but they are from tracts outside of the survey coverage.
- Some forcedPhotCcd jobs that ran as only one task on one node had very high efficiency, but this gave bad throughput.
- Below are the histograms of the maximum resident set size and the virtual memory size for mosaic and forcedPhotCcd.
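The efficiency metric and the ctrl_pool bounds quoted above can be written down directly; the helper names below are mine, not part of the stack:

```python
def efficiency(cpu_core_hours, elapsed_hours, cores):
    """CPU time divided by the wall time charged across all cores."""
    return cpu_core_hours / (elapsed_hours * cores)

def ctrl_pool_bound(cores):
    """Upper bound when the MPI root process is mostly idle on one core."""
    return (cores - 1) / cores

# The bounds quoted above: 23/24 for 24-core SFM jobs, 11/12 for coadd jobs.
assert round(ctrl_pool_bound(24), 3) == 0.958
assert round(ctrl_pool_bound(12), 3) == 0.917

# A fully busy job: 36 core-hours accumulated in 1.5 h of wall time on 24 cores.
assert efficiency(36.0, 1.5, 24) == 1.0
```

A single-task forcedPhotCcd job on a whole node illustrates the efficiency/throughput trade-off noted above: with cores=1 the efficiency can approach 1.0 even though the node is mostly idle.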
Memory

Memory monitoring of the ctrl_pool driver jobs (singleFrameDriver, coaddDriver, multiBandDriver) was problematic, and we do not trust the numbers collected, so they are not plotted.

Node Utilization of the S17B reprocessing on the Verification Cluster

The Verification Cluster in its optimal state has 48 compute nodes, with 24 physical cores and 256 GB RAM on each node. For the duration of the S17B reprocessing, a peak of 45 compute nodes was available. The total number of node-hours used was 9383.43. The node-hours spent at each step were as follows:
- singleFrameDriver: 856.96
- mosaic: 291.46
- coaddDriver: 541.00
- multibandDriver: 3005.92
- forcedPhotCcd: 4688.09

The plot below does not include failed jobs or test attempts, whose generated data do not contribute directly to the final results.

Other tickets of possible interest:

Questions? For the LSST-DM HSC reprocessing effort there is a Slack channel: #dm-hsc-reprocessing
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510942.97/warc/CC-MAIN-20231002001302-20231002031302-00061.warc.gz
CC-MAIN-2023-40
18,498
141
http://metrovinz.deviantart.com/art/Modern-Skype-v4-Remix-282938127
code
That's here! a remixed version of Modern Skype v4 submitted here [link] CHANGELOG OF THIS "NEW" VERSION - Changed almost all icons on PRINCIPAL zone - Myerson replaced with Belfiore, person much more known than Myerson. - Changed square of status (online, offline etc) and added "more contacts" button on CONTACTS zone - Added some buttons on CHAT zone - Added a status bubble on the usertitle If you like this, leave a comment please.
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463475.57/warc/CC-MAIN-20150226074103-00056-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
435
8
http://blog.greenpirate.org/tag/preliminary/
code
…and I think this is total bullshit. I seem to recall reading that this means Samsung can not sell smartphones (Galaxy S, S II and Ace) in the entire EU. If that is true, all I can say is RABBLE RABBLE! Hey Dutch courts, you are stupid. Didn’t you know that the claims were shooped? Aside from this, the Netherlands still seem awesome. There is some much more specific info on the formally Europe wide ban at fosspatents.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382705/warc/CC-MAIN-20130516092622-00066-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
425
4
https://dev-alert.com/how-to-use-nano-editor-beginners-guide/
code
How to use nano editor – Beginners’ guide In this article we will see how to use the very popular nano editor. This is just a very basic introduction for beginners. First of all you have to login to SSH. If you don’t have nano installed you can do it by typing for CentOS: yum install nano And for Ubuntu: sudo apt-get install nano After this you are ready to use it. In order to edit a file, the command (for file.txt for example) is: and then the GUI of the nano editor will appear with the contents of the file you want to edit. You can edit the text like any other text editor and some basic commands are: - To save a file in nano: CTRL+O - To exit nano: CTRL+X (if you haven’t saved the file it will ask you, with Y you save and with N you don’t) - To search for a text CTRL+W There are many more commands you can use, like copy/paste etc but this is just an introductory guide. For more information you can check the nano official website.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735885.72/warc/CC-MAIN-20200804220455-20200805010455-00017.warc.gz
CC-MAIN-2020-34
955
12
https://blender.stackexchange.com/questions/57098/my-scene-bpy-types-panel-appears-on-all-tabs
code
I created a simple bpy.types.Panel to allow modification of custom scene properties, wanting it to appear only on the Scene property tab, and thus created it like this: class MK8PanelSceneCourse(bpy.types.Panel): bl_label = "Mario Kart 8 Course" bl_idname = "SCENE_PT_mk8course" bl_space_type = "PROPERTIES" bl_region_type = "WINDOW" This works fine, and the panel appears on the scene property tab as expected: However, I noticed it also appears on every other tab. Here's an example with the World tab, where it's even the World name textbox: How can I make it appear only on the scene tab?
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055645.75/warc/CC-MAIN-20210917120628-20210917150628-00493.warc.gz
CC-MAIN-2021-39
592
6
https://cbmm.mit.edu/publications/phase-physically-grounded-abstract-social-eventsfor-machine-social-perception
code
|Title||PHASE: PHysically-grounded Abstract Social Eventsfor Machine Social Perception| |Publication Type||Conference Paper| |Year of Publication||2020| |Authors||Netanyahu, A, Shu, T, Katz, B, Barbu, A, Tenenbaum, JB| |Conference Name||Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020| The ability to perceive and reason about social interactions in the context ofphysical environments is core to human social intelligence and human-machinecooperation. However, no prior dataset or benchmark has systematically evaluatedphysically grounded perception of complex social interactions that go beyondshort actions, such as high-fiving, or simple group activities, such as gathering.In this work, we create a dataset of physically-grounded abstract social events,PHASE, that resemble a wide range of real-life social interactions by includingsocial concepts such as helping another agent. PHASE consists of 2D animationsof pairs of agents moving in a continuous space generated procedurally using aphysics engine and a hierarchical planner. Agents have a limited field of view, andcan interact with multiple objects, in an environment that has multiple landmarksand obstacles. Using PHASE, we design a social recognition task and a social prediction task. PHASE is validated with human experiments demonstrating thathumans perceive rich interactions in the social events, and that the simulated agents behave similarly to humans. As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE (SIMulation, Planning and Local Estimation), which outperforms state-of-the-art feed-forward neural networks. We hope that PHASEcan serve as a difficult new challenge for developing new models that can recognize complex social interactions. - CBMM Funded
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00315.warc.gz
CC-MAIN-2023-50
1,807
7
http://www.pdftron.com/pdfpagemaster/benefits.html
code
Encryption. PDF PageMaster supports standard PDF security (RC4 and AES). Efficiency. PDF PageMaster is based on PDFNet, making it extremely fast and efficient. Support for very large documents. PDF PageMaster can detect shared resources between pages (e.g. fonts, images, color-spaces) and ensures that only necessary objects are imported. All generated documents contain the minimum amount of information necessary, guaranteeing decreased file sizes. Built-in support for multi-threading makes PDF PageMaster a good match for multi-threaded, server-based applications. Support for all PDF revisions, including PDF 1.7 and Acrobat 8 documents. Common use case scenarios Server-based, on-demand delivery of dynamically assembled PDF documents. Build new PDF catalogues by extracting page ranges from existing collections of PDF files. This may be particularly useful in assembling product catalogues and report documents. Inserting a cover page or appending a legal notice to every PDF document in a given directory. Many PDF workflows in pre-press industry work with color separated documents. PageMaster allows for automated splitting and assembly of separated PDF documents. Integrating PDF merging and splitting functionality in a client application.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382764/warc/CC-MAIN-20130516092622-00084-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,253
12
https://forums.adobe.com/thread/931287
code
I'm working with a student who reports that when he downloaded Reader for Mac, he couldn't read the pdf files I created (with animations and sound produced via Presenter). He is using Reader X, but when I asked what he had listed in the "About Adobe Plug-Ins" list, he said there was nothing there. I suspect that's why he's not able to read the newer PDf files. But does anyone have a suggestion on why the download didn't have any plugins, or questions at least some questions I can ask him to try to pin down the problem. He reports that his system is running OSX Snow Leopard. The first thing I would ask is if he's absolutely positively sure that he is actually using Reader to open the files. By default, Mac Preview opens PDF files on a Mac. If you haven't already, have him open Adobe Reader then use File>Open to open the PDF and see what happens. If that doesn't work, let us know if there are any error messages...etc.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00614.warc.gz
CC-MAIN-2018-09
929
4
https://wallinside.com/post-107349-medical-uses-of-marijuana.html
code
Medical Uses Of Marijuana It can't be denied that most weed users are with the belief that smoking weed is not associated with any health risks. People utilize it for smoking and when they do, they get immediate intoxication feeling of euphoria. People put it to use for smoking and when they do, they get immediate intoxication a feeling of euphoria. Other Side Effects. Humans also react differently about bat roosting narcotics. The drug https://www.facebook.com/best.cbd.e.liquid is administered by smoking as opposed to taking orally. In the wild, marijuana may be present in abandoned fields, which were previously cultivated for fibers. The flower parts usually are not easily visible towards the human eye, and therefore are http://www.youtube.com/watch?v=u6-FuWWLDPI generally about 0. . As a Narcotic:. Other Uses:. . . . It helps in restoring eyesight, balance, speech, and bladder movement. Detailed study just isn't required to identify this plant. Three forms of narcotics can be extracted from three different elements of the plant. It can result in permanent loss of vision (blindness). Cultivation and Growth:. There are a few ways of smoking marijuana. The resulting smoke is a mix of nicotine and THC. One of the common methods is emptying a cigarette (blunt), and refilling it with marijuana. Abusing the drug affects the learning ability, causes memory loss, and disrupts cognitive and social behavior.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937015.7/warc/CC-MAIN-20180419165443-20180419185443-00434.warc.gz
CC-MAIN-2018-17
1,423
7
https://educationalliancefinland.com/news/edtech-schools-60-billion-hoax
code
How does technology impact our learning? Time published a distracting article about digital devices in schools being a 60 billion hoax. In the article, Dr. Nicholas Kardaras states that using ipads and laptops in schools is no good because many research findings show that the use of technology does not improve learning results by itself. On top of that we've have bad experiences with corrupted sales of devices and some researchers consider the rise of ADHD diagnoses to be related with increasing screen time. As a representor of a digital education agency I’m probably unable to write the most objective counter argument, but I want to point out a crucial misconception that the article had towards education technology. The first thing to point out is that the benefits of using technology in learning are mostly dependent on the content that is being used and the ways how to content is being used. It's not the devices, it's the content and its use that matters. To me this seems obvious, because it’s the same thing with books or any other tools we are using in learning. Even books don’t improve learning results by themselves. The book needs to be of high educational quality and teachers and students need to use the book in a pedagogically justified way. Should schools avoid use of technology because of screen-time issues? Cutting down screen time sounds good to me, although the ADHD link with tech usage is still lacking stronger evidence. However, I would not recommend to do it in schools, but rather suggest parents to take care of that at homes. We have to remember that schools are using devices for professional purposes, not for entertainment. If we need to cut down screen time it’s better to reduce the entertainment use of devices. School work needs the same tools as any information work: Microsoft Office, learning management systems, digital books, learning games, Google Suit for Education, etc. 
How would you feel if your boss tells you to use overhead projector instead of Powerpoint when giving a presentation at work, just because you need to cut down your screen time? How to help schools to benefit more from the use of EdTech? We should pay attention on teachers’ possibilities to do their work efficiently and in a meaningful way. According to EU's 2nd Survey of Schools: ICT in Education the tech infrastructure in schools is still not ideal to support effective use of technology in learning. In European schools the biggest challenges are slow internet connections and lack of devices. Besides good-quality infrastructure, continuous professional development is key for teachers to integrate educational technology into their teaching practices. I believe that the best way to face and overcome the challenges the Time Magazine's article names, is to improve our understanding of what are the benefits of EdTech. Through improved understanding on the benefits schools can create better strategies on how and why to use technology in learning. More strategic approach on using EdTech is likely to make the use more systematic and effective. At the end of the day, EdTech's efficacy in learning is what matters the most. How Finnish schools are using educational technology? In Finland the schools' use of EdTech is increasing. According to EU's Study on ICT in Education 94% of Finnish schools are categorised as highly digitally equipped and connected schools but only 44% of Finnish secondary school students use a computer at school on a weekly basis, which can be considered low as the EU average is 56% (2017-2018). In Finland there's a lack of national strategy of using EdTech in schools, but generally the schools' aim is to use only those solutions that are found effective and avoid using tech for the sake of it. The digitalization of education is happening inevitably. 
The society is changing and schools need to keep up with the pace in order to be able to ensure today's students have the right competencies after graduating from school. Head of Education at Education Alliance Finland (previously Kokoa Standard) Originally published September 15, 2016
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100146.5/warc/CC-MAIN-20231129204528-20231129234528-00034.warc.gz
CC-MAIN-2023-50
4,118
13
http://www.acsu.buffalo.edu/~chenhanx/news/
code
Chenhan will be joining Snap Research NYC in the summer of 2022 as a Research Intern on HCI research. Our paper “CardiacWave: A mmWave-based Scheme of Non-Contact and High-Definition Heart Activity Computing” is accepted by UbiComp ‘21 Chenhan passed Ph.D. qualifying exam Our paper “VocalPrint: exploring a resilient and secure voice authentication via mmWave biometric interrogation” is accepted by SenSys ‘20 Our paper “Sonicprint: a generally adoptable and secure fingerprint biometrics in smart devices” received the Best Paper Award from the 2020 ACM MobiSys. Chenhan received the First Year Achiever Award from CSE@UB Our research work on mmWave-scannable paper tagging received the Best Paper Award from the 2019 ACM SenSys Conference. Chenhan received the MobiSys ‘19 travel grant Our paper “WaveEar: Exploring a mmWave-based Noise-resistant Speech Sensing for Voice-User Interface” is accepted by MobiSys ‘19 Chenhan received the Chair’s Fellowship from CSE@UB
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00452.warc.gz
CC-MAIN-2022-27
996
10
http://www.designthinkingnetwork.com/profiles/blogs/continuous-innovation-what-why-and-how
code
As part of the DesignThinkers Group US launch event I gave a talk about Continuous Innovation. The talk was (and is) meant to trigger a conversation on innovation. There are many different opinions on this topic, and it has many dimensions I do not cover in this talk. But hopefully there is enough in it to agree or disagree with and I want to invite you to share your thoughts and opinions. I want to thank all the fantastic people already sharing their opinion and thoughts with me via twitter and Facebook. Thanks for enabling me to keep discovering and learning! Keep it coming :-) Add a Comment
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704799711.94/warc/CC-MAIN-20210126073722-20210126103722-00757.warc.gz
CC-MAIN-2021-04
600
4
https://xeggex.com/asset/NVC-MAIN
code
This is a child asset of the main asset below Novacoin (NVC) is both a crypto pioneer and the coin of the feature! Its unique way of utilizing both Proof-of-Work (PoW) and Proof-of-Stake (PoS) for block generation with separated target limits make it truly stand out. In fact, its success can be readily seen across the cryptosphere, where the original NVC code serves as a foundation for numerous other PoS and hybrid PoS/PoW projects. Reserves Verification (Beta - we are still building automated signatures)
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233505362.29/warc/CC-MAIN-20230921073711-20230921103711-00226.warc.gz
CC-MAIN-2023-40
510
3
http://2000clicks.com/MathHelp/TutorWhiteboard1.aspx
code
Math Help > Math Tutoring > Whiteboard Have you ever wondered whether geometry and trigonometry help can be done over the Internet? Well, it can. I can set up a time with your math student to chat and draw diagrams using this whiteboard. The password is fibonacci. Try it now to see its capabilities and become familiar with it. I don't monitor the whiteboard, so you'll need to send me an email suggesting a time that we might both use the whiteboard. Don't forget to tell me what time zone you're in. To get started, if there is a diagram you want to show me, you can write it up on the whiteboard and save it, with a title and your name. Then I can add to it, and save it with the same title so you can find it. The webmaster and author of this Math Help site is
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526904.20/warc/CC-MAIN-20190418221425-20190419003425-00374.warc.gz
CC-MAIN-2019-18
765
12
https://forum.openwrt.org/t/contiguous-ssidover-multiple-openwrt-meraki-mr16/39579
code
Many MANY apologies if this is..... b) been asked and answered... for some reason - I get a page cannot be found error. I have 6 Meraki MR16's, all running the same (except MAC addresses) build of RipTideWave99's. In the Interfaces, I have configured... wan (SSID for 2.4 GHz) MYSSID-M-2, wan6 (SSID for 5 GHz) MYSSID-M-5. Can I spread the SSIDs across all APs without using cucumber... can I do this independently... or IS cucumber best for contiguous SSIDs? Basically - I want a 2.4 GHz SSID and a 5 GHz SSID without having to keep retyping the same passphrase:-

Mode: Master
SSID: MY2-4GHzSSID-M2
BSSID: 00:18:0A:38:8F:F5
Encryption: mixed WPA/WPA2 PSK (CCMP)
Channel: 11 (2.462 GHz)
Tx-Power: 17 dBm
Signal: 0 dBm
Noise: -95 dBm
Bitrate: 0.0 Mbit/s
Country: GB

Hopefully that question makes sense.... Thanks in advance!

Chaos Calmer has been out of support for years and is known insecure. Working with a current build is pretty much required. ar71xx has been deprecated and replaced by ath79 in v19. No idea what cucumber is in this context, but, in general, "stock" OpenWrt can configure what it sounds like you're asking for simply and directly, without some seemingly obsolete set of packages or patches.

Problem is... I couldn't get my build environment up and running... and decided to use RipTideWave99's set of files to get me out of a hole. I would love to update the packages... but... being honest... I don't have a clue how to do it - I can't even get the Linux Debian build to work for me without failing at make, after about an hour from the "make kernel_configmenu" command. So... using LEDE I was able to get out of a sticky situation - yes, I am only using these for home "work" use... so.... unless someone could really help with the compilation or updates, or do a step-by-step "idiot's guide" to updating, I'm kind of stuck as I am. I appreciate the help you gave me last week, but I still couldn't get the environment to work so...

in typical "personal" fashion - I gave up and used what was already there! OK - I will give it another go... one final question..... when the files get built, where do they get put in the file system? I'm guessing somewhere within home/openwrt/ (or is that /home/openwrt - either way...... please - and thank you

Update - just did an rm openwrt/* -r, ran the git clone, ran the update and install, now for the make menuconfig. 14:13 GMT

FLUP..... how do I get FLUP on the system - apparently make has a dependency on FLUP

Hmm... ath79 doesn't show the Meraki MR16 - but the older ar71xx DOES
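As the reply notes, stock OpenWrt handles same-SSID-everywhere with plain UCI configuration, no extra packages needed. A minimal sketch of the relevant /etc/config/wireless stanzas, repeated on each AP (the radio names, network, and passphrase here are placeholders - adjust to your build):

```
# /etc/config/wireless - sketch, one stanza per band
config wifi-iface 'ap2g'
        option device 'radio0'            # 2.4 GHz radio
        option mode 'ap'
        option ssid 'MYSSID-M-2'
        option encryption 'psk2'
        option key 'SHARED-PASSPHRASE'
        option network 'lan'

config wifi-iface 'ap5g'
        option device 'radio1'            # 5 GHz radio
        option mode 'ap'
        option ssid 'MYSSID-M-5'
        option encryption 'psk2'
        option key 'SHARED-PASSPHRASE'
        option network 'lan'
```

With the same two stanzas (same SSIDs and the same key) on all six APs, clients can associate with any of them without retyping the passphrase.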
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510138.6/warc/CC-MAIN-20220516140911-20220516170911-00212.warc.gz
CC-MAIN-2022-21
2,556
29
https://www.libraryintercept.com/news/beta18/
code
Beta 18 Release Notes

- Customers and staff can now see events color-coded by their primary audience on the printable event calendar (requires using a subtheme of intercept_base, overriding the intercept_base/fullCalendar theme library, and therein defining colors per audience).
- Clarified the "Usage" filter under the calendar view of Room Reservations.
- Began research & development on reworking customer feedback options. This redesigned feature is planned to become available in Q4.
- Room Reservation Entry: Moved the Status field to a more logical position.
- Fixed an issue with the group name being mistakenly required when a customer re-edits an existing room reservation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00546.warc.gz
CC-MAIN-2023-50
677
6
https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-7
code
What's new in .NET 7

.NET 7 is the successor to .NET 6 and focuses on being unified, modern, simple, and fast. .NET 7 will be supported for 18 months as a standard-term support (STS) release (previously known as a current release). This article lists the new features of .NET 7 and provides links to more detailed information on each. To find all the .NET articles that have been updated for .NET 7, see .NET docs: What's new for the .NET 7 release.

Performance is a key focus of .NET 7, and all of its features are designed with performance in mind. In addition, .NET 7 includes the following enhancements aimed purely at performance:

- On-stack replacement (OSR) is a complement to tiered compilation. It allows the runtime to change the code executed by a currently running method in the middle of its execution (that is, while it's "on stack"). Long-running methods can switch to more optimized versions mid-execution.
- Profile-guided optimization (PGO) now works with OSR and is easier to enable (by adding <TieredPGO>true</TieredPGO> to your project file). PGO can also instrument and optimize additional things, such as delegates.
- Improved code generation for Arm64.
- Native AOT produces a standalone executable in the target platform's file format with no external dependencies. It's entirely native, with no IL or JIT, and provides fast startup time and a small, self-contained deployment. In .NET 7, Native AOT focuses on console apps and requires apps to be trimmed.
- Performance improvements to the Mono runtime, which powers Blazor WebAssembly, Android, and iOS apps.

For a detailed look at many of the performance-focused features that make .NET 7 so fast, see the Performance improvements in .NET 7 blog post.

.NET 7 includes improvements to System.Text.Json serialization in the following areas:

- Contract customization gives you more control over how types are serialized and deserialized. For more information, see Customize a JSON contract.
- Polymorphic serialization for user-defined type hierarchies. For more information, see Serialize properties of derived classes.
- Support for required members, which are properties that must be present in the JSON payload for deserialization to succeed. For more information, see Required properties.

For information about these and other updates, see the What's new in System.Text.Json in .NET 7 blog post.

.NET 7 and C# 11 include innovations that allow you to perform mathematical operations generically—that is, without having to know the exact type you're working with. For example, if you wanted to write a method that adds two numbers, previously you had to add an overload of the method for each type. Now you can write a single, generic method, where the type parameter is constrained to be a number-like type. For more information, see the Generic math article and the Generic math blog post.

.NET's regular expression library has seen significant functional and performance improvements in .NET 7:

The new option RegexOptions.NonBacktracking enables matching using an approach that avoids backtracking and guarantees linear-time processing in the length of the input. The nonbacktracking engine can't be used in a right-to-left search and has a few other restrictions, but is fast for all regular expressions and inputs. For more information, see Nonbacktracking mode.

Regular expression source generators are new. Source generators build an engine that's optimized for your pattern at compile time, providing throughput performance benefits. The source that's emitted is part of your project, so you can view and debug it. In addition, a new source-generator diagnostic SYSLIB1045 alerts you to places you use Regex that could be converted to the source generator. For more information, see .NET regular expression source generators.

For case-insensitive searches, .NET 7 includes large performance gains.
The gains come because specifying RegexOptions.IgnoreCase no longer calls ToLower on each character in the pattern and on each character in the input. Instead, all casing-related work is done when the Regex is constructed.

Regex now supports spans for some APIs. The following new methods have been added as part of this support:

For more information about these and other improvements, see the Regular expression improvements in .NET 7 blog post.

Many improvements have been made to .NET library APIs. Some are mentioned in other, dedicated sections of this article. Some others are summarized in the following table.

| Description | APIs | Comments |
|---|---|---|
| Support for microseconds and nanoseconds in TimeSpan, TimeOnly, DateTime, and DateTimeOffset types | New DateTime constructor overloads; new DateTimeOffset constructor overloads; and others | These APIs mean you no longer have to perform computations on the "tick" value to determine microsecond and nanosecond values. For more information, see the .NET 7 Preview 4 blog post. |
| APIs for reading, writing, archiving, and extracting Tar archives | | For more information, see the .NET 7 Preview 4 and .NET 7 Preview 6 blog posts. |
| Rate limiting APIs to protect a resource by keeping traffic at a safe level | RateLimiter and others in the System.Threading.RateLimiting NuGet package | For more information, see Rate limit an HTTP handler in .NET and Announcing rate limiting for .NET. |
| APIs to read all the data from a Stream | New ReadExactly and ReadAtLeast methods | Stream.Read may return less data than what's available in the stream. The new ReadExactly methods read exactly the number of bytes requested, and the new ReadAtLeast methods read at least the number of bytes requested. For more information, see the .NET 7 Preview 5 blog post. |
| New type converters | In the System.ComponentModel namespace | Type converters are often used to convert value types to and from a string. These new APIs add type converters for types that were added more recently. |
| Metrics support for IMemoryCache | GetCurrentStatistics() | GetCurrentStatistics() lets you use event counters or metrics APIs to track statistics for one or more memory caches. For more information, see the .NET 7 Preview 4 blog post. |
| APIs to get and set Unix file permissions | System.IO.UnixFileMode enum; Directory.CreateDirectory(String, UnixFileMode) | For more information, see the .NET 7 Preview 7 blog post. |
| Attribute to indicate what kind of syntax is expected in a string | | For example, you can specify that a string parameter expects a regular expression by attributing the parameter with |

.NET 7 makes improvements to observability. Observability helps you understand the state of your app as it scales and as the technical complexity increases. .NET's observability implementation is primarily built around OpenTelemetry. Improvements include:

- The new Activity.CurrentChanged event, which you can use to detect when the span context of a managed thread changes.
- New, performant enumerator methods for Activity properties: EnumerateTagObjects(), EnumerateLinks(), and EnumerateEvents().

For more information, see the .NET 7 Preview 4 blog post.

The .NET 7 SDK improves the CLI template experience. It also enables publishing to containers, and central package management with NuGet.

Some welcome improvements have been made to the dotnet new command and to template authoring:

- Available template names
- Template options
- Allowable option values

In addition, for better conformity, the update subcommands no longer have the

Template constraints, a new concept for .NET 7, let you define the context in which your templates are allowed. Constraints help the template engine determine which templates it should show in commands like dotnet new list. You can constrain your template to an operating system, a template engine host (for example, the .NET CLI or New Project dialog in Visual Studio), and an installed workload. You define constraints in your template's configuration file.
Also in the template configuration file, you can now annotate a template parameter as allowing multiple values. For example, the web template allows multiple forms of authentication. For more information, see the .NET 7 Preview 6 blog post.

Publish to a container

Containers are one of the easiest ways to distribute and run a wide variety of applications and services in the cloud. Container images are now a supported output type of the .NET SDK, and you can create containerized versions of your applications using dotnet publish. For more information about the feature, see Announcing built-in container support for the .NET SDK. For a tutorial, see Containerize a .NET app with dotnet publish.

Central package management

You can now manage common dependencies in your projects from one location using NuGet's central package management (CPM) feature. To enable it, you add a Directory.Packages.props file to the root of your repository. In this file, set the MSBuild property to true and add versions for common package dependencies using PackageVersion items. Then, in the individual project files, you can omit Version attributes from any PackageReference items that refer to centrally managed packages. For more information, see Central package management.

P/Invoke source generation

.NET 7 introduces a source generator for platform invokes (P/Invokes) in C#. The source generator looks for LibraryImportAttribute on partial methods to trigger compile-time source generation of marshalling code. By generating the marshalling code at compile time, no IL stub needs to be generated at run time, as it does when using DllImportAttribute. The source generator improves application performance and also allows the app to be ahead-of-time (AOT) compiled. For more information, see Source generation for platform invokes and Use custom marshallers in source-generated P/Invokes.

This section contains information about related products that have releases that coincide with the .NET 7 release.
Visual Studio 2022 version 17.4

For more information, see What's new in Visual Studio 2022.

F# 7 continues the journey to make the language simpler and improve performance and interop with new C# features. For more information, see Announcing F# 7.

.NET Multi-platform App UI (.NET MAUI) is a cross-platform framework for creating native mobile and desktop apps with C# and XAML. It unifies Android, iOS, macOS, and Windows APIs into a single API. For information about the latest updates, see What's new in .NET MAUI for .NET 7.

ASP.NET Core 7.0 includes rate-limiting middleware, improvements to minimal APIs, and gRPC JSON transcoding. For information about all the updates, see What's new in ASP.NET Core 7.

Entity Framework Core 7.0 includes provider-agnostic support for JSON columns, improved performance for saving changes, and custom reverse engineering templates. For information about all the updates, see What's new in EF Core 7.0.

Much work has gone into Windows Forms for .NET 7. Improvements have been made in the following areas:

- High DPI and scaling

For more information, see What's new in Windows Forms in .NET 7.

WPF in .NET 7 includes numerous bug fixes as well as performance and accessibility improvements. For more information, see the What's new for WPF in .NET 7 blog post.

Orleans is a cross-platform framework for building robust, scalable distributed applications. For information about the latest updates for Orleans, see Migrate from Orleans 3.x to 7.0.

.NET Upgrade Assistant and CoreWCF

The .NET Upgrade Assistant now supports upgrading server-side WCF apps to CoreWCF, which is a community-created port of WCF to .NET (Core). For more information, see Upgrade a WCF server-side project to use CoreWCF.

ML.NET now includes a text classification API that makes it easy to train custom text classification models using the latest state-of-the-art deep learning techniques.
For more information, see the What's new with AutoML and tooling and Introducing the ML.NET Text Classification API blog posts.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817146.37/warc/CC-MAIN-20240417075330-20240417105330-00661.warc.gz
CC-MAIN-2024-18
11,830
96
http://www.brokencontrollers.com/faq/11612743.shtml
code
How to import your package/modules from a script in bin folder in python

When organising a python project, this structure seems to be a standard way of doing it:

    myproject\
        bin\
            myscript
        mypackage\
            __init__.py
            core.py
        tests\
            __init__.py
            mypackage_tests.py
        setup.py

My question is, how do I import core.py so I can use it in myscript? Both __init__.py files are empty.

Content of myscript:

    #!/usr/bin/env python
    from mypackage import core

    if __name__ == '__main__':
        core.main()

Content of core.py:

    def main():
        print 'hello'

When I run myscript from inside the myproject directory, I get the following error:

    Traceback (most recent call last):
      File "bin/myscript", line 2, in <module>
        from mypackage import core
    ImportError: No module named mypackage

What am I missing?

Usually, setup.py should install the package in a place where the Python interpreter can find it, so after installation import mypackage will work. To facilitate running the scripts in bin right from the development tree, I'd usually simply add a symlink to ../mypackage/ in the bin directory. Of course, this requires a filesystem supporting symlinks…

I'm not sure if there is a "best choice", but the following is my normal practice:

- Put whatever script I wanna run in /bin
- Do "python -m bin.script" in the dir myproject
- When importing in script.py, consider the dir in which script.py is sitting as root, so: from ..mypackage import core

If the system supports symlinks, it's a better choice.
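Beyond the symlink and `python -m` suggestions above, a third common workaround is for bin/myscript to put the project root on sys.path before importing. The snippet below is a self-contained demonstration: it recreates the question's layout in a temporary directory and shows the import succeeding (the question's core.py prints; here main() returns the string so the result is checkable, and the temp dir stands in for the project root):

```python
import os
import sys
import tempfile

# Recreate the layout from the question in a temp dir:
#   <root>/mypackage/__init__.py
#   <root>/mypackage/core.py
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mypackage"))
open(os.path.join(root, "mypackage", "__init__.py"), "w").close()
with open(os.path.join(root, "mypackage", "core.py"), "w") as f:
    f.write("def main():\n    return 'hello'\n")

# This is the line bin/myscript would run before its own imports; in the
# real script the root would be computed as
#   os.path.join(os.path.dirname(__file__), "..")
sys.path.insert(0, root)

from mypackage import core
print(core.main())
```

The trade-off versus the symlink approach is that the path manipulation lives in the script itself, so it works on filesystems without symlink support.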
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00522.warc.gz
CC-MAIN-2019-51
1,458
19
https://rdrr.io/github/jgrevel/BAST1-R-Library/man/multi_strsplit.html
code
View source: R/multi_strsplit.R

Description: An adaptation of the strsplit function, which is only able to take a single character string for its split argument (when fixed = TRUE).

Arguments:

- x: character vector, each element of which is to be split. Other inputs, including a factor, will give an error.
- splits: character vector containing regular expressions to use for splitting.

Value: A character vector containing all characters in x that were split using splits.

Note: Applies fixed = TRUE to the underlying strsplit function. See the strsplit documentation for an explanation of this argument.

Examples:

    x = c("a+b-c", "d*e/f")
    multi_strsplit(x, splits=c('+', '-', '*', '/'))
    # "a" "b" "c" "d" "e" "f"
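For readers outside R, here is a rough Python analogue of the same behaviour (the function name is reused for illustration only; this is not part of the package):

```python
import re

def multi_strsplit(strings, splits):
    """Split each string on any of several fixed delimiters, dropping
    empty pieces - a loose analogue of the R helper documented above."""
    # re.escape mirrors fixed = TRUE: each delimiter is treated literally.
    pattern = "|".join(re.escape(s) for s in splits)
    result = []
    for s in strings:
        result.extend(piece for piece in re.split(pattern, s) if piece)
    return result

print(multi_strsplit(["a+b-c", "d*e/f"], ["+", "-", "*", "/"]))
# -> ['a', 'b', 'c', 'd', 'e', 'f']
```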
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657735.85/warc/CC-MAIN-20230610164417-20230610194417-00793.warc.gz
CC-MAIN-2023-23
823
11
https://forums.unrealengine.com/t/unreal-editor-freeze-when-debugging/328711
code
I can build my project without difficulty. When debugging, however, I occasionally hit an issue where Visual Studio freezes and the computer will not respond to any input - no programs will respond, and I cannot even summon Task Manager. This seems to happen when loading symbols, after the UE process has already started. There are only two solutions: either I have Task Manager open before the event and quickly terminate the UE process, at which point Visual Studio stops debugging and everything is fine - or I simply have to hard reset my computer by turning off the power. Given that this happens a good 50% of the time when launching the game, I essentially cannot make meaningful progress. This only seems to have started happening recently. I am running Windows 10 x64, using VS2013 Ultimate and UE4.8. Any help would be greatly appreciated
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00056.warc.gz
CC-MAIN-2022-21
864
5
https://webmasters.stackexchange.com/questions/42896/server-responding-to-request-differnetly
code
I believe, as shown above when hosted locally, the long green delay is actually the home page of the website. May I know what causes those delays? Magento is a very CPU-intensive monster - is the delay due to waiting for the server to respond because of a lack of CPU power? I have load-tested different websites before to study how servers work, as I'm new to server hosting and optimizing. I personally use Amazon, and I understand Amazon has its limits. When I stress-tested my server, it would respond in 0.5 seconds with 30 concurrent users (Micro instance, Apache, Varnish, compressed JS and images, images served from S3 as a CDN), but the load time shoots up from 0.5 seconds to 1 minute at the 31st concurrent user! Why does that jump happen? It feels like it simply cannot take more users when it's stressed. Whereas the other site that I tested (the image above is one example), no matter how stressed it is, from 20 to 50+ users, its load time stays around 1 minute (from an overseas connection); tested locally it's 30 seconds. I checked other websites too; some are hosted on shared hosting, and they are able to take stress and keep functioning when overloaded - they just respond in about 1 minute. In my case, on Amazon, it can even hit 3-4 minutes when stressed. May I know how I get this "maximum load time no matter how stressed it is" behaviour - well, not literally no matter what, but to a larger extent? What should I be looking at?

PS: if I upgrade my Micro instance to a Small instance, it can take only an additional 10 concurrent users, which isn't a big difference, but when it comes to cost, it's a hell of a lot of difference. Any other knowledge would be totally appreciated; I'm here to learn and to help in future. Thank you for this platform.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00270.warc.gz
CC-MAIN-2023-50
1,664
6
http://www.dzone.com/links/the_five_axioms_of_the_api_economy_axiom_3_apis_a.html
code
- Today we announced that Jelastic version 2.5 has been released! We now support multi...
- A look at the new features in the latest DevCraft release that enable easier cross-platform mobile development and more.
- Offline availability of your web application will give a little extra comfort to your users...
- Semantic HTML can help web designers and developers convey meaning not simply in the presented...
- "The law is massive, and massively complicated, so that was the beginning of big data for me –...
- Multi-tenancy is a frequent requirement in large business applications. Besides the...
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898644.9/warc/CC-MAIN-20141030025818-00058-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
631
6
http://www.mobygames.com/game/psp/beats/screenshots
code
Choosing musical track
Beginning of musical challenge
There are several themes to choose from
The oldest and most comprehensive game database. Information, credits, reviews, screenshots and more covering 145 video game platforms from 1971 to date! MobyGames™ Copyright © 1999-2014 Blue Flame Labs. All rights reserved.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997860453.15/warc/CC-MAIN-20140722025740-00060-ip-10-33-131-23.ec2.internal.warc.gz
CC-MAIN-2014-23
414
7
https://forum.telus.com/t5/Mobility-Devices/Mobile-Data-Turned-Off-Data-still-used/m-p/61522/highlight/true
code
I have a new LG K4 phone. I have turned off Mobile data. I am a pay-as-you-go customer. When I check my Usage Summary - I see the following (different times each day - but always an hour apart): 11:33:22 PM 0 Kilobyte (Data Volume) $0.00 PDA 10:33:22 PM 2 Kilobyte (Data Volume) $0.01 PDA 6:30:53 AM 0 Kilobyte (Data Volume) $0.00 PDA 5:30:53 AM 0 Kilobyte (Data Volume) $0.00 PDA 4:30:53 AM 0 Kilobyte (Data Volume) $0.00 PDA Does anyone have any ideas as to what is using this tiny little bit of data? I leave my phone turned on at night, but I don't use it. I have all apps set to update manually. I have the OS set to update manually. 1) It seems many phones, or apps on the phones leak some amount of data. I see situations on my phone where the same pattern as you describe on our phone for hours after use to gather some piece of information - it is like an app keeps checking in for some information or other. I have one instance where 9 KB were used early in the day, and every hour thereafter for about 6 hours is a $0.00 data point. 2) On many phones, the wi-fi radio is turned off when the phone is unused for some period. If an App does not respect the no data toggle you have set, it may 'ping' a server, but go no further, and some data is used. All I can suggest is to keep an eye on this, and see if greater amounts are billed in the future, then call to ask for an adjustment, as I have seen no real solution to these minuscule data consumption points. Hopefully others will jump in with their experiences! Thank you!! I have had a couple of chat sessions with Telus support, which have been very helpful. They've offered to put a data lock on the account which I might end up doing. I didn't realize that I could check the apps using data!! Even with data turned off, my phone is running Messaging, Google Play Services, Setup Wizard, and Android OS. Tiny little amounts of data. 
After a little more internet research, I changed my Preferred Network Mode from GSM/WCDMA/LTE auto to GSM/WCDMA auto. So far, no more data charges. I send text messages on a daily basis, and once in a while I make a phone call. I'm a very light user so it doesn't matter to me what network I use. In your data settings, is there a list of apps that are using your data? And do you have a background data on/off button? If so, turn it off. My data settings (GS7) list all apps that have accessed data, and when my background data is off, nothing is used.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00508.warc.gz
CC-MAIN-2023-50
2,453
17
http://renderwiki.haggi.biz/wiki-seiten/renderers/commercial/krakatoa.php
code
- Homepage: krakatoa
- Current version: Krakatoa MX 2.5.2 from June 2016, MY 2.4.3 from December 2015, C4D 2.4.1 from October 2015
- Plugin volumetric renderer for 3ds Max, Maya and Cinema4D
- Standalone renderer with Python and C++ API

Krakatoa is not a full-range renderer but specializes in rendering particles as volumes or soft particles. Krakatoa is very famous for its volumetric particle rendering. Often the renderings are lightning fast, including shadows and beautiful diffuse volumetric scattering. On Vimeo, you can see some examples of rendering fluids with Krakatoa. With the possibility of using multiple particle exports (partitions), you can render a huge number of particles with this renderer. One problem is that occluding geometry has to be translated to the renderer and that these occluders are quite weakly antialiased. If you have a software which creates deep shadow maps, like the free Renderman, you can use these deep shadow maps to create occluding and shadow-casting objects. They can read a broad range of particle files, not only PRT, but Lidar scan data, RealFlow's bin files or .E57 files, whatever this is.

Release MY/SR 2.0.2: With Krakatoa for Maya and the standalone version the possibilities are now almost unlimited. With the standalone version, which includes the Python and C++ API, users can implement their own translator and maybe include a partio loader or Naiad particle loader.

Release MX 2.0: They now support hair rendering and are able to render particles generated by Chaos Group's Phoenix FD. Very interesting improvements.

Last update: 07.2016
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823565.27/warc/CC-MAIN-20181211040413-20181211061913-00105.warc.gz
CC-MAIN-2018-51
1,596
12
http://computer-science-project28841.free-blogz.com/6791786/not-known-factual-statements-about-c-assignment-help
code
The marketing department plays an extensive role in the hotel, as they are responsible for creating awareness among the local people, corporate houses and the travel and trade organisations about the hotel and its services. The marketing team is responsible for advertising, print media, Internet and public relations activities.

A comprehensive plan describing the project, and documentation of the communication requirements in the form of a project communication matrix.

The distinction between the copy constructor and the assignment operator causes a lot of confusion for new programmers, but it's really not all that hard.

Summarizing: Understand the concepts of C programming: There are a few concepts that are specific to the C language. You won't find structures and pointers in the modern programming languages. Considering C programming assignments to be based on a procedural language, it differs from C++ or Java programming, which are based on the concepts of objects. Secondly, concepts of input and output streams are less tedious to understand at the first go. However, you can easily learn these concepts by practising. The main use of C programming is in digital design and automation companies.

It is important for the hotel to identify the threats that can affect it. The identification of threats will give the hotel directions to get prepared for the competition.

The precedence table determines the order of binding in chained expressions, when it is not expressly specified by parentheses.

Apart from the concepts mentioned above, C++ has incredible library support. There are more than 3000 libraries available online.

Secondly, it is built on the basic operators of C programming; hence it is compatible with almost every programming code in the C programming framework. If you want to learn more about C++ and C programming, you can check the programming sample questions available on our website. These programming samples include programs on every concept that is used in C++ programming.

What you want to do is not initialization, but assignment. But such assignment to an array is impossible in C++.

The variable definition tells the compiler how much storage is needed to create the variable and where it will be stored.

Disclaimer: AllAssignmentHelp.com provides reference papers to the student and we strongly recommend you not to submit the papers as they are. Please use our services as model solutions to improve your skills.

Now, everybody knows about the power of Reddit but not all are aware of the help it can provide. It is the place where people not only share cute and funny pictures but is also the resource where all the intelligent people hang out.

Problem definition – This is the first step in the programming process and involves defining the problem, i.e. what we need to accomplish and in what order.

Earlier I used to mess up with numerous academic tasks and was finding it hard to perform well in the other many assignments. But when I found MyAssignmentHelpAu, I just breathed deep as I got a great platform from where I could get the best and most effective assignment help. Experts at this platform provided the best writing help to me. Thank you MyAssignmentHelpAu.

We are one of the most recognised C homework help experts and we are extremely particular about the kind of coaches we have in our team. Our coaches understand the significance of C programming features because they learned and used them in every possible situation.

We mainly focus on delivering your assignments on time so as to improve your low grades. We will serve you the best quality content according to your requirements. Look through the given points and you might get to know our role in reducing your homework load.
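The copy constructor vs. assignment operator point mentioned above is worth making concrete. A minimal C++ sketch (the Buffer type and its counters are invented for illustration):

```cpp
#include <string>

// A copy CONSTRUCTOR runs when a brand-new object is initialized from an
// existing one; the copy ASSIGNMENT operator runs when an object that
// already exists is overwritten. The counters record which one ran.
struct Buffer {
    std::string data;
    int copy_constructed = 0;  // set to 1 by the copy constructor
    int assigned = 0;          // incremented by operator=

    explicit Buffer(std::string d) : data(std::move(d)) {}

    Buffer(const Buffer& other) : data(other.data), copy_constructed(1) {}

    Buffer& operator=(const Buffer& other) {
        data = other.data;  // copy the payload into the existing object
        ++assigned;         // the object itself is NOT re-created
        return *this;
    }
};
```

Note that `Buffer b = a;` calls the copy constructor even though it is written with `=`, because `b` is being initialized; `c = a;` on an already-constructed `c` calls `operator=`.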
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741340.11/warc/CC-MAIN-20181113173927-20181113195927-00517.warc.gz
CC-MAIN-2018-47
4,143
14
https://docs.citrix.com/es-es/dna/7-12/reporting/forward-path/dna-forward-path-tasks.html
code
Run Forward Path tasks

Forward Path tasks are typically used to automate the creation of production-ready App-V and XenApp packages, based on logic within the Forward Path report. However, Forward Path tasks can be configured to do many other tasks, such as copying files and sending emails. Forward Path tasks are controlled by Forward Path task scripts that are configured to run based on a value in the Outcome column in a Forward Path report. Forward Path reports are controlled by scenarios. After you create or import Forward Path scenarios and task scripts, you can run tasks and monitor their status. You can change the default active scenario in the Forward Path Logic Editor. The lower part of the screen shows the progress and the error log. Some task scripts depend on the successful configuration of Install Capture and a virtual machine. See Install Capture for more information.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215075.58/warc/CC-MAIN-20180819090604-20180819110604-00300.warc.gz
CC-MAIN-2018-34
928
6
https://simpleprogrammer.com/programmer-impostor-syndrome/
code
I’ll never forget the time I spent a summer at Facebook. I had an opportunity most people can only dream of. I stepped into a robust Software Engineering culture and a community of mission-driven people. The perks were everything you’ve heard of and more. I ate at gourmet cafeterias, rode free commuter buses with Wi-Fi, and had all the free tech gear I wanted. That said, I remember my first week being the toughest week of the summer. The people who ran the intern program decided to gather several of the summer interns and set up a happy hour for us. During the happy hour, all the interns were asked to mention which school we attended, which team we were working on, and what we wanted to accomplish by the end of the summer. The first intern: “Stanford University. I’ll be working on the Android team for Facebook marketplace. I want to completely revamp how sellers display their items and help them sell faster.” Then the second: “Brown University. I’ll be working on the Marketing Data Science team. I want to create new algorithms that will help us reach new demographics.” Then there was me: “Uh, I went to, I mean, I go to Baylor University. I’ll be working on the Data Center Infrastructure team, and I’m not sure what I want to do yet.” There was a deafening silence. “Harvard University. I’ll be working on …” The rest of the interns continued their introductions. I felt so many emotions. I didn’t feel I belonged among such accomplished individuals. I didn’t deserve to be put in the same group. I felt intimidated by their robust skill sets and extraordinary visions to change the world. When I looked at what I had to offer, I paled in comparison. You may not have had a Software Engineering internship or job yet, but you’ve felt the same exact emotions I just described. Maybe you’ve written “Hello World” and thought “Who am I kidding? I’m not a programmer.” Maybe you’ve talked to a seasoned software engineer and felt intimidated by their robust knowledge of programming. 
Maybe someone asked you a programming question, and you felt like a phony because you couldn’t answer it. These feelings of uncertainty, intimidation, and illegitimacy aren’t isolated occurrences — they are a part of a larger experience many have come to know as “impostor syndrome.” Impostor syndrome is any feeling of professional inadequacy. You feel you don’t belong because everyone else has “it” figured out. Impostor syndrome can plague programmers of all levels. The number of proficient programmers who still feel like they’re not good enough will surprise you. You’re not hurting yourself if you experience the feelings that come with impostor syndrome. But you are in trouble if you give into those feelings. The rest of this article covers two things: how to recognize if you’re giving in to impostor syndrome and how to overcome it. I’ll share personal stories of how I experienced feelings of uncertainty, intimidation, and illegitimacy throughout my programming journey. I’ll also share the ways I overcame those feelings. By the end, you’ll walk away with proven strategies and approaches to help you push through impostor syndrome and make unbelievable progress in your own programming journey. Sound good? Let’s get started. Are You Giving In to Impostor Syndrome? Almost anyone can recognize when they’re scared or uncertain, but recognizing when you’ve given in to that fear and uncertainty can be difficult. The crazy thing? Two programmers can both experience the feelings that come along with impostor syndrome, but produce vastly different results. Several years ago I attended a two-hour coding workshop. The instructor was going to take us through nuanced programming principles like recursion and classes. As I made my way to the entrance, I could feel my stomach tighten up. I thought of all the coding concepts I didn’t know and how I struggled to even write a simple function. I walked in and sat down at a table by myself. 
I was scared to talk to anybody because I was convinced every person knew more than I did. My worst fear would soon be confirmed. The person facilitating the meetup asked everyone who was comfortable with a programming language to raise their hands. Everyone’s hand went up — except mine. Talk about intimidating. The instructor encouraged people to ask questions as he went along. He wanted the workshop to be interactive. You know what I did? I stayed silent the entire time. Within the first five minutes of the workshop I was lost, but I was too scared to ask questions because I wanted to look confident. Other people spoke up and worked through their problems while I wallowed in my uncertainty. I left the workshop and felt a deep regret. I learned nothing. I wasted two hours of my time. Looking back, I can recognize that I gave in to impostor syndrome. I know I gave in because I allowed my uncertainty and fear to hold me back from growing. That’s what giving in to impostor syndrome for programmers looks like. You not only experience feelings of being a fraudulent programmer, but you allow those feelings to hold you back from developing your programming skills. When you look back at your programming journey, in what ways have you given in to impostor syndrome? How have you allowed feelings of uncertainty or being a fraud to prevent you from learning? Maybe you lost confidence in your learning direction because someone asked you a random coding question you couldn’t answer. Maybe you avoided a coding meetup because you felt judged by the established engineers at your last one. Whenever you code or do anything related to programming, I encourage you to ask yourself, “Did I get better today?” I ask this question anytime I get done working on a project, attending a coding meetup, or trying to learn a new coding concept. If I can answer “yes,” I know I conquered impostor syndrome for that day.
If the answer was “no,” I know I can do better to manage my impostor syndrome next time. As you find ways to get better, you’ll overcome impostor syndrome, which takes us to our next question. How Can You Overcome Impostor Syndrome? When you look at established programmers, you may feel like a phony compared to them. When you’ve realized your coding skills aren’t where they should be, you may think of other programmers who are ahead of you. Impostor syndrome tries to make programming all about the other person. If you want to overcome those feelings of intimidation, uncertainty, and illegitimacy, you need to focus on you. That’s where we’ll start. Strategy #1: Focus On What You Produce Before going into any endeavor where you’ll code or learn about coding, do your best to focus on your results. Impostor syndrome will tempt you to look at other people’s progress and what they’re accomplishing. Then you’ll feel bad because you’ll lose in comparison. If you want to beat impostor syndrome, you should concentrate on your past results and the results you expect to produce. That way, you never lose because you’re comparing your results against what you’ve already done. If your results are better, you’ve improved. If not, you can further refine your workflow and find a way to improve for next time. Strategy #2: Put On Your Learner’s Hat When I went to my first coding meetup, I felt nervous. I was scared I’d be the worst programmer in the room. Instead of letting other people’s competency throw me into analysis paralysis, I decided to put on a learner’s hat. I did this by being upfront about where I was at as a programmer. I told people I didn’t know what to work on, but that I was there to learn. To my surprise, people responded with kindness and openness. They showed me what projects they were working on. I’d ask a question about their problem-solving approach, and they took time to explain what their code was doing. 
When you position yourself as a learner, people tend to open up and make themselves available to you. In most cases, people are even more inclined to help you along your programming journey. The best part about positioning yourself as a learner? They won’t (and shouldn’t) judge you. Strategy #3: Find People of Peace Throughout the night the meetup’s facilitator went around to each person and asked what they were working on. I dreaded the moment he would come to me and I’d have to tell him I was working on nothing. With 30 minutes left to go, he approached me. He asked what I was working on, and I said, “I have no idea, but I’m learning from watching.” His face went blank, and then he smiled. He sat down next to me and encouraged me. He said to never stop learning and that I was doing exactly what I should be doing at my stage of programming. He was my person of peace. People of peace are anybody who’s willing to show you the programming ropes. The facilitator was an unexpected person of peace but was much needed nonetheless. He provided pivotal encouragement and pointed me in the right direction. Be on the lookout for people of peace. They’ll make your programming journey a lot easier. Strategy #4: Fixate on Growth Like I mentioned earlier, growth is the best indicator that you didn’t give in to impostor syndrome. It’s also a great way to overcome impostor syndrome. If you want to overcome those feelings of uncertainty and intimidation, you should focus on how you can get better — even if that progress seems minute or incremental. Keep a book on hand so you can always have something to keep your nose in. Run through mock interviews so that you can refine your interview skills, but also learn how to anticipate potential interview questions. If you’re looking for a great programming book to help you grow, I recommend checking out Grokking Algorithms, which is an illustrated guide for teaching algorithms! 
If you’re looking to get into interview questions, try reading Cracking the Coding Interview. Strategy #5: Ask Thoughtful Questions This strategy should be #1 because it has never failed me. I love asking thoughtful questions. Questions are disarming, even for the snobbiest of programmers, if you ask them the right way. Asking thoughtful questions ties in with positioning yourself as a learner, but it takes the learner’s hat strategy a step further. When you ask a question, you’re also asking for advice. People love to give advice. I suggest you take advantage of that. The one caveat is don’t be gimmicky or demand answers. Put yourself in the other person’s shoes. Maybe you’re talking to a more experienced programmer. Maybe you’re talking to a peer who you think has made more progress than you. If you want to make sure your questions are thoughtful, I encourage you to allow your curiosity to form your questions. If you’re talking to more experienced engineers, ask them questions about their most recent project. Try to understand what makes them great engineers and how you can follow suit. I’ve used this tactic before and, to my surprise, came out with a coding mentor. If you’re talking to peers who’ve made great progress, ask them what encouraged their most recent progress. Celebrate them. Their response will surprise you. They’ll point you to the same resources that have helped them. They’ll want to help you experience the same success they achieved. Strategy #6: Master Google Search If you find yourself coding by yourself and doubting your skills, start Googling questions. When I did this I searched for phrases like “how to become a better coder,” “coding problems for beginners,” and “simple coding exercises.” Those Google searches not only led me to discover problems I could solve with ease but also to problems where I was in over my head. In the end, my programming skills improved because I had a better understanding of where I stood as a programmer. 
The best part about Google search is that it’s always within reach. You don’t have to leave your workspace, and you have a wealth of resources at your disposal. Whether you’re with people or by yourself, you always have the ability to grow. Strategy #7: Listen, Listen, Listen Finally, when in doubt, listen. When I was at the Python meetup, I took time to listen. When I wasn’t asking questions, I paid attention to what the other coders were saying. I also watched how they approached problems. I was able to understand how great programmers communicated and that writing problems on a whiteboard was a helpful visual for solving problems. Everybody Doubts, But You Can Learn From It All in all, it’s OK to experience intimidation or a lack of belonging. It’s OK to experience impostor syndrome. Your programming journey is your own. It’s not about other people and how much better — or worse — their skill sets are but about finding ways to grow your own skills. If you find yourself becoming weary, check out a great article on emotional self-care for programmers. We discussed the surefire methods for how to recognize when you’ve given in to impostor syndrome. We also talked about seven effective strategies for overcoming feelings of intimidation and uncertainty. The journey is long, but worth it. I guarantee you will encounter impostor syndrome again at some point in the future. The biggest encouragement I have for you is that even expert coders still experience doubt, uncertainty, and illegitimacy. The goal is to learn how to manage it—that’s how you overcome it. Keep learning and keep coding. I’m rooting for you!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00227.warc.gz
CC-MAIN-2024-10
13,615
59
https://ai-jobs.org/job/internship-opportunity-researcher-deep-reinforcement-learning-for-games/
code
November 17, 2022 — Cambridge, United Kingdom For our Microsoft Research Cambridge, UK, location, we are seeking highly motivated researcher intern candidates in the area of Gaming and AI. We encourage applications from all candidates with a background in Deep Learning (DL), Reinforcement Learning (RL), or a related field, who are excited to tackle challenges that arise in applications of modern machine learning approaches to video games. Working closely with researchers from Microsoft for the duration of 12 weeks, you will advance the state of the art in this space by developing novel models and algorithms. This is an exceptional opportunity to drive ambitious research while collaborating with a diverse team. Key research challenges we are currently tackling include, but are not limited to, generalization in deep RL, multi-agent RL, imitation learning, and scaling training to large-scale data and compute. The focus and scope of internship projects consider the team’s direction as well as successful candidates’ experience and research interests. There is no closing deadline for this post. The post will be filled once suitable candidates are found, so if you are interested, please apply as soon as possible. When submitting your application, include your CV with a list of publications as an attachment. For more information about the post, please feel free to email Tabish Rashid at [email protected]. - In collaboration with your mentor and a diverse team (including designers, engineers, and researchers), solve an ambitious research challenge and translate your results into actionable insights that are relevant to applications in modern video games. - Write code to test the new approach or hypotheses. - Distil the developed insights into effective communications, such as a research paper or a presentation, to reach internal and external technical and general audiences. - Enrolled in a PhD program with a focus on reinforcement learning, game AI, or a related area.
- Ability to carry out research in reinforcement learning or a related area, demonstrated by journal or conference publications or similar. - Strong understanding of state-of-the-art (deep) reinforcement learning approaches. - Hands-on experience in implementing and empirically evaluating reinforcement learning and/or deep learning approaches. - Demonstrated ability, or strong motivation to learn, to use cloud infrastructure for experimentation is a plus. - Effective communication skills and ability to work in a collaborative environment. Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00616.warc.gz
CC-MAIN-2024-18
3,185
16
https://community.quicken.com/discussion/7860443/import-microsoft-money-file
code
About MS Money conversion: a supported version of Quicken for Windows (Deluxe or higher feature level required) will directly import an MS Money data file into the currently open Quicken data file (File menu / File Import / MS Money File), but:
- the MS Money file must have been opened at least once with MS Money Plus Sunset Edition (V17) so it is converted to the latest file format.
- you MUST have MS Money Plus Sunset Edition installed on the same computer that Quicken runs on.
- unless fixed, MS Money will not run on Windows 10 (see below). If you have a computer running Vista, 7, 8, or 8.1 (but NOT XP), use it for the conversion.
- close MS Money before trying to convert the file with Quicken.
https://www.quicken.com/support/archived-how-do-i-importconvert-my-microsoft-money-work-quicken-windows
If you need to download MS Money Plus Sunset Deluxe or Home & Business:
Caveat: MS Money does not run under Windows 10 unless you apply this fix: http://www.thewindowsclub.com/use-microsoft-money-on-windows-10
I have no idea what side effects this fix might have on future Windows Updates.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00781.warc.gz
CC-MAIN-2022-21
1,079
16
https://sxpmaths.wordpress.com/2015/08/24/logic-puzzles-part-ii/
code
This post is a sequel to Logic Puzzles, Part I. This time I will focus on the ‘Sum Code’ puzzle and consider how to scaffold it for use with pupils. Note: this post will contain some spoilers about solving the Sum Code puzzle so you may want to try solving it yourself before going much further! Sum Code – The Puzzle To give credit where it’s due, this puzzle is from a book called Jumbo Book of Number Puzzles published by Igloo, 2007. I will try and share a better-quality image when I can get to a scanner. Cracking The Nut The nice thing about this problem is the constraints within which it works: each letter A to Z represents a unique integer from 1 to 26. The first three clues are: - A × B = C - D × E = C - F × G = C and these give us quite a lot of information. There is only one number between 1 and 26 that can be expressed as a product in three such different ways. Therefore we know the value of C. Moreover, we know a set of values that must be assigned in some order to A, B, D, E, F and G. (I marked these numbers with a dot underneath them in the lower table to remind myself they were effectively allocated.) The fourth clue is: - H × H = I and, with the remaining available numbers, there is only one possible assignment for H and I. And so the game proceeds – indeed, the clues are in quite a neat order to be tackled (approximately) sequentially. Scaffolding for Pupils Chatting with @MissWillisMaths yesterday evening, we debated how best to prepare students for a puzzle like this. It would seem a shame to do the first few steps for them and so we thought about creating a simpler ‘starter’ puzzle to introduce the key logical steps involved. Certainly it is worth simplifying the puzzle in terms of the number of letters used and, thus, the number of clues. However, we want to retain the principles of different factorisations of a number and perhaps, say, the use of square numbers.
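The opening deduction — that the first three clues pin down C — can be checked mechanically. This short brute-force sketch (my own, not from the original post) counts factor pairs, excluding pairs containing C itself, since A = 1 would force B = C and letters must be distinct:

```python
from itertools import combinations

def product_pairs(n, limit=26):
    """Factor pairs (a, b), a < b, with a * b == n and neither factor equal to n."""
    return [(a, b) for a, b in combinations(range(1, limit + 1), 2)
            if a * b == n and n not in (a, b)]

# C must admit at least three such pairs; the three pairs turn out to be
# pairwise disjoint, supplying six distinct values for A, B, D, E, F, G.
candidates = [n for n in range(1, 27) if len(product_pairs(n)) >= 3]
print(candidates)                 # a single value survives
print(product_pairs(candidates[0]))
```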
Creating a Simpler Puzzle Here is the thought-process I followed to create a 1-10 puzzle: - Think about the numbers 1 to 10 and their properties. - None of them can be created from two different products (using 1 in a product may or may not be interesting. The clue A × B = B tells us something interesting about A but nothing about B). - There are 3 square numbers. - There is a cube number. - Sums to 6, 7, 8, 9, 10 can be written in multiple ways. (Notice in the original puzzle that addition didn’t appear until about a third of the way into the clues.) Some initial clue ideas: - A × A = B - C × C = C - D × D = E This gives us that C = 1, A and D are 2 and 3 (in some order) and B and E are 4 and 9 (in the same respective order). - E + J = B This clue now tells us implicitly that E is smaller than B. (Therefore E=4, B=9, A=3, D=2.) And thus we now also deduce that J = 5. - G + I = F + J This narrows down some options but doesn’t fix any further numbers yet. G, I are either 6,7 or 7,8 and F is either 8 or 10, respectively. - A × F = E × G If I’ve done my calculations correctly, that pins down all the remaining numbers to give F = 8, G = 6, I = 7, J = 5. It’s a little unsatisfying that H has not been clued, but at least we know its value must be 10. I’ve summarised the above steps into a single-sheet activity that you could use with a class: SumCode-10. It concludes with a sentence challenging pupils to create their own puzzles which is another great way to get them exploring the deductions involved. Here are some other sets of clues that might be useful for discussion. This one has a unique solution for puzzles up to 10 (or indeed up to 26): - A × A × A = B This collection of 3 clues would determine B uniquely in a 1 to 10 puzzle: - A ÷ B = C - D ÷ B = E - F ÷ B = G
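The claim that these six clues pin down every letter can be verified by brute force. This sketch (mine, not from the original post) tries every assignment of 1-10 to the nine clued letters; H is whatever value is left over:

```python
from itertools import permutations

# Test all six clues of the 1-10 starter puzzle over every assignment
# of distinct values to A, B, C, D, E, F, G, I, J.
solutions = []
for A, B, C, D, E, F, G, I, J in permutations(range(1, 11), 9):
    if (A * A == B and C * C == C and D * D == E
            and E + J == B and G + I == F + J and A * F == E * G):
        solutions.append((A, B, C, D, E, F, G, I, J))

print(solutions)  # exactly one assignment satisfies all six clues
```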
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00142.warc.gz
CC-MAIN-2018-30
3,746
40
https://www.raspberrypi.hackster.io/projects/tags/home+security?page=2
code
- The user will first scan the right tag and then enter the password for that tag to open the door lock. A master tag will add/remove other tags.
- An alarm control panel made with an Arduino MEGA and a 3.2" touch screen.
- Home Assistant is an automation platform that can track and control all kinds of devices and automate actions, which fits well with AWS IoT.
- A Wi-Fi-connected desk clock with an RGB panel; it controls IR- and Wi-Fi-based switches to control AC appliances via IR or the Internet to save power.
- Roomberry is a surveillance robot based on Roomba using a Raspberry Pi Zero W and a camera module.
- Shoot first, ask questions later. Supervision optional. (Some assembly required, batteries not included.)
- A beginner-level but cool IoT project which will send an SMS to your phone whenever your precious locker is opened by someone.
- Gives you time to react to power or pump failures before it is too late.
- Here we are with the classic RFID door lock. It's a classic, on the whole. We live in the future and take it for granted at this point.
- You may have seen lots of face-security features on mobiles, but this one is Heuristic-eye, with face recognition on an IP webcam for home security.
- The system will give access on scanning the right tag and will send us a confirmation message; otherwise it will send an alert message.
- How to make a small, low-cost surveillance camera – including app and device source, with ESP32-CAM or ESP32-EYE + Omnivision camera.
- This project can perform a number of tasks simultaneously while being monitored through the serial port on your computer or on your mobile.
- Hey techies! Techiesms is back with another amazing project article. This time we break down a home-security-based product.
- Ellipso is an autonomous intelligent home robot, which is always online, listening to you and ready to execute your requests.
- This DIY alarm system can easily be made using components and devices you can find in your house.
- A simple ultrasonic sensor detects any motion, takes a photo, and posts it to Twitter.
- The system will only give access on scanning the right tag. There will be a master tag that will be used to add/remove other tags.
- Solenoid door lock control with RFID | Security Access Arduino.
- An extremely straightforward project involving a digital PIR sensor, which can be utilised for many security and home automation uses.
- The perfect project for any beginner, featuring a simple-to-program but powerful laser module, an Arduino board, and basic coding functions.
- In this project, we will be using the power of RFID to tap into your garage door opening system.
- The vibration sensor is a low-cost solution for simple applications where impact (e.g. patting a toy) translates to a switch event.
- Use a SparkFun ESP8266 Thing Dev and HC-SR04 to create a monitoring device that can make sure your door stays closed while you're away.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347394756.31/warc/CC-MAIN-20200527141855-20200527171855-00300.warc.gz
CC-MAIN-2020-24
2,826
24
https://www.biostars.org/p/199589/
code
A postdoctoral position is available in Dr. Kunal Rai's laboratory in the Department of Genomic Medicine at The University of Texas MD Anderson Cancer Center. Our lab focuses on understanding the function of the epigenome in cancer progression. To define chromatin states connected with different stages of cancer progression, we employ cutting-edge epigenomic technologies such as high-throughput ChIP-Sequencing and Hi-C. Additionally, we take functional genomic approaches to identify epigenetic regulators/elements that play important functions in cancer progression. The primary research focus of the fellow will be on applying known and novel computational methods for the analysis of epigenomic datasets and integration with complex high-dimensional omics datasets from tumor samples. We seek a highly motivated individual with a Ph.D. in computer science/engineering/statistics/biostatistics/genomics/bioinformatics or a related quantitative field. Candidates must have strong training in statistics and strong programming expertise, in particular R/Python, and an interest in the application of state-of-the-art computational/statistical methods to complex data. Practical experience in the analysis of ChIP-Seq or Hi-C datasets is highly desired. A first-author publication in a high-quality journal is required. A high degree of written and oral communication skill in English is essential. The laboratory is situated in a highly dynamic and stimulating environment for learning. MD Anderson Cancer Center is the top-rated hospital for cancer care in the United States. The institute offers active graduate and postdoctoral training programs and the unmatched scientific environment of the Texas Medical Center, the world's largest biomedical center. Interested candidates should submit a cover letter, CV and contact information of three references to Dr. Kunal Rai at [email protected] Contact: Kunal Rai, Genomic Medicine, The University of Texas M. D. Anderson Cancer Center, 1515 Holcombe Blvd.
Houston, TX 77030, United States Official Job Posting: https://www.postdocjobs.com/jobs/printer_friendly.php?jobid=4018284
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00018.warc.gz
CC-MAIN-2023-40
2,095
5
https://sweetsweetsimplicity.blogspot.com/2012/04/update-on-dressers-for-melissa.html
code
DRESSER AND NIGHTSTAND (Just received a reply from Melissa to the email I sent with these pics): "I love them! I am so excited! Thank you!" That, my friends, is what makes this job sooooo very worthwhile!!! It puts a huge smile on this girl's face!!!! Perfect for any little princess's room :)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591575.49/warc/CC-MAIN-20180720080634-20180720100634-00143.warc.gz
CC-MAIN-2018-30
289
7
https://www.sitepoint.com/community/t/a-question-for-designers-that-deal-with-developers-remotely/27998
code
I always use Dropbox to communicate with the developers. Any updates made to a design are uploaded to Dropbox inside a new folder, then just an email to inform them that there is a new file on Dropbox. A short Skype conversation every 2-3 days and everything will be good. My PHP guy often changes the code in the programs he makes. Dropbox works best for these kinds of updates.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690029.51/warc/CC-MAIN-20170924134120-20170924154120-00600.warc.gz
CC-MAIN-2017-39
390
6
https://www.idownloadblog.com/2011/12/07/official-xbox-live-app/
code
Members of Microsoft’s popular online gaming community, Xbox LIVE, will be happy to hear that an official iOS app has just surfaced in the App Store. There had been some doubt about whether Microsoft would port the app to Apple’s mobile OS. Xbox LIVE integration has been one of the Windows Phone features that Microsoft touts as an advantage over other mobile platforms. But with millions of users across the globe, Microsoft couldn’t continue to ignore the massive iOS crowd… Take your Xbox LIVE experience wherever you go with the My Xbox LIVE app. Track and compare your achievements, connect with your Xbox LIVE friends, and change up your 3D Avatar. Review the recent games you and your friends love to play, and compare achievements with them. Jump into your games hub to learn about the latest LIVE games and apps. Access Xbox Spotlight feeds, get breaking news from Xbox LIVE, game tips and tricks, gamer spotlight and much more. Once you sign into the app with your Xbox LIVE account you’re taken to the home screen. Both the iPhone and the iPad versions sport a beautiful user interface — extremely similar to the popular Windows Phone Metro UI that we showed you last week. If you want to check out the app for yourself, you can download My Xbox LIVE for free from the App Store. What do you think of Microsoft’s new app?
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510810.46/warc/CC-MAIN-20231001073649-20231001103649-00236.warc.gz
CC-MAIN-2023-40
1,354
6
https://legacyblog.citizen428.net/blog/2012/05/14/tap-dance/
code
[code listing lost in extraction] Inserting this in a chain of calls can be very enlightening as to where a value changes unexpectedly. However, recently I’ve seen more and more use of tap where I’d traditionally have used inject/reduce. People who know me can attest to the fact that I’m a big fan of the latter, but for some reason there seem to be quite a few developers who find these methods hard to grok. For this reason recent Ruby versions added Enumerable#each_with_object, which seems to be easier to use for some people, but which isn’t very popular because of its lengthy name. See for example the following blog post that was written as a result of a discussion I had with the author on StackOverflow: tap vs. each_with_object: tap is faster and less typing. As I said in the comment there, my main problem is that you have to call tap on what is to become the result, not the data you want to transform. While this is not a problem per se, I somehow don’t like the semantics of it. However, once I decided to hide it behind a Pascal-like with statement, I immediately started liking it: [code listing lost in extraction] Here are the examples from the blog post linked above, including a new version for with. Decide which one you like best: [code listing lost in extraction] Sure, putting it in Kernel and therefore calling it without a receiver seems a little strange at first, but I kinda like how it reads. I know this is probably a case of me being anal about semantics, but I really think that with transports the intent a lot better than tap in cases like the one shown here.
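Since the original code listings did not survive extraction, here is a hedged reconstruction of the idea (my sketch, not the author's exact code): `#tap` is called on the object that becomes the result, while a Pascal-style `with` helper added to `Kernel` puts that object up front:

```ruby
# A Kernel-level `with` in the spirit of the post: yield the object,
# then return it (reconstruction; the author's listing was not preserved).
module Kernel
  def with(obj)
    yield obj
    obj
  end
end

# tap: the receiver is the value that becomes the result...
squares_tap = {}.tap { |h| (1..3).each { |i| h[i] = i * i } }

# ...with: reads as "do this with that object":
squares_with = with({}) { |h| (1..3).each { |i| h[i] = i * i } }

# each_with_object, for comparison:
squares_ewo = (1..3).each_with_object({}) { |i, h| h[i] = i * i }
```

All three build the same hash; the difference is purely where the result object appears in the expression.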
https://www.westohioumc.org/node?page=2
FREE ADVENT RESOURCES FOR YOU! Our series is called “Backstory: Rediscovering the Advent Season” and includes 4 weeks of resources. (A related Christmas Eve set will be released in the coming days). See the video here: https://vimeo.com/240285177 There are two options now to download the resources. Option 1 – The Complete set: To download the complete set in one file, use this link (Note: This is a 1.75GB download):
http://justuselinux.blogspot.com/2009/09/wine-compatibility-vs-stability.html
Yes, this is both a rant and a general overview of how and why things work the way they do in the Linux and Wine communities, and where things are headed.

First of all I’d like to start off by saying that my friends and I think that Ubuntu 8.04 32-bit is the most stable and compatible version of Ubuntu ever released. Why do we think this? Well, it’s because 8.10 and 9.04 just don’t seem to play nice in the long run. We don’t know why. Some of the packages and features are missing, and in 9.04 the button to turn off the computer in GNOME is by the clock instead of on the familiar left side of the screen. General faults and programs crashing are much more evident than in 8.04. Yet if we are forced to upgrade 8.04, its programs, and their dependencies, the OS becomes more unstable. It’s very similar to Microsoft’s Windows in that Windows XP SP2 is the best version of Windows Microsoft ever put together, but when you start upgrading to SP3 and adding newer components it turns into Vista.

For example, in Ubuntu 8.04 32-bit it was not only possible but common to install Command & Conquer 3: Tiberium Wars in Wine, and the network play was flawless. Now if you install it you can’t use network play after upgrading. PlayOnLinux still supports the patch for online play, even though it no longer does anything, as proof that it once worked. Call of Duty 4 is another example. Once you could run servers that were visible on the internet, but now it’s almost impossible.

So what do we get in trade for all of this? More hardware support, newer games supported in Wine, and it runs faster. Is it worth it? Ask yourself this question: if you are a Guitar Hero fan and you play FoF (Frets on Fire) on Linux, and you notice it only runs properly on 32-bit installs of Linux when you know your computer runs faster with a 64-bit OS, what do you choose?

Here is another question: what if you were me? What if you spent hours figuring out how to get, let’s say, Call of Duty 4: Modern Warfare servers working properly in Linux and being recognized on the internet. Now imagine that Wine is altered and upgraded and COD4 no longer works properly with network play. Now you see where I’m going with this. How about Audacity? It used to have a fully working pitch changer (not a speed/tempo changer) built into it, but in the upgraded version that comes with Ubuntu, and is automatically upgraded on older versions, you can’t have a pitch changer.

Who decides what to keep and what to leave behind? Why leave anything behind? If something does not work when we upgrade and patch, it means that we have broken something. If breaking something makes something else work, then we need to find another way.
https://community.openhab.org/t/control-over-knx-to-zwave/89116/3
So you have two systems, each with its own automation rules, but those rules only apply to the devices of that system, so there is no interoperability?

I’d suggest you set up OH to:
- have “mirror” OH items for your KNX devices which address these through your KNX gateway (assuming your “Savant” can act as a GW; I don’t know that system but would assume it can do that)
- implement all your automation in OH rules
- add a Z-Wave controller stick to the box you run OH on (Aeotec, zwave.me/UZB, or RaZberry)
- migrate your Z-Wave devices to that controller (exclude them, then include them again)

Finally, you can dump the Fibaro Home Center. I know that’s a long journey to take, but it’s worth the effort since it’ll allow for integrated, comprehensive control, such as a scene to set lights AND blinds in one go. That won’t ever work well unless you migrate to have a single controller only (OH, that is).
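A sketch of what the “mirror items plus rules” approach could look like in openHAB’s textual configuration — every item name, channel UID, and node number below is invented for illustration, so treat it as a shape, not a working config:

```
// demo.items -- a KNX-backed light and a Z-Wave blind, side by side
Dimmer        Living_Light "Living room light" { channel="knx:device:bridge:generic:livingLight" }
Rollershutter Living_Blind "Living room blind" { channel="zwave:device:controller:node5:blinds_control" }
Switch        Scene_Movie  "Movie scene"

// demo.rules -- one scene driving devices from both technologies
rule "Movie scene"
when
    Item Scene_Movie received command ON
then
    Living_Light.sendCommand(10)   // dim to 10 %
    Living_Blind.sendCommand(DOWN)
end
```

Once both technologies are bound to items like this, a single rule (or sitemap entry) can address lights AND blinds in one go, which is exactly what two separate controllers can’t do.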
https://blog.phalcon.io/post/community-hangout-and-update-2020-08-14
We are going to host a community hangout on Friday the 14th of August at 17:00 EST (21:00 UTC). For the most part, holidays are over. Join us this Friday for a hangout, where we will provide a status update, discuss what we have to do, and outline the future of Phalcon. Looking forward to seeing everyone there! Chat - Q&A
<3 Phalcon Team
https://lists.debian.org/debian-user/2001/01/msg03654.html
Re: possible move to unstable..
- To: Marcial Zamora III <[email protected]>
- Cc: [email protected]
- Subject: Re: possible move to unstable..
- From: David B. Harris <[email protected]>
- Date: Sun, 21 Jan 2001 16:59:48 -0500
- Message-id: <[email protected]>
- In-reply-to: <20010121152317.A17586@b0x>
- References: <20010121152317.A17586@b0x>

To quote Marcial Zamora III <[email protected]>,
# hey all.. I know this mite stir up a great deal of debate, but its not my intention.. Im currently running potato, and thinking bout running unstable.. there are quite a few packages I would like to have in unstable, and I know ahead of time, to successfully install those packages, there are others in the same directory tree that I would need.. from wut I have seen in the entries in the mailing list so far, unstable is not really that *unstable*. The only real concern I think I would have is the move from Xfree86 3.3.6 to 4.0.2.. any of you guys have any input on this ? or any recommendations as to wut to do ahead of time, before I decided to go with a dist-upgrade ? to all who respond, I thank you in advance =)

Well, there are a few things you can do;

a) Add a deb-src entry in sources.list pointing to unstable, then 'apt-get source <package that you want>', then go into the newly created directory and (as root) 'dpkg-buildpackage -uc -b'. That'll give you a nice binary .deb built for your platform. This isn't guaranteed to work (since you're compiling a Sid package on a Potato machine), but it's always worked for me.

b) Upgrade to Sid (unstable). It runs fine on my machine, but there are two things you should worry about: the upgrading process itself seems to be touchy, so you might run into trouble there. If you jump that hurdle though, you're probably set. The second thing is that you should be familiar with system recovery. For instance, a new LILO package was uploaded to Sid recently, and it made more than one machine unbootable. So, you should be able to restore things on your own. Also keep backups. :) Also, if something breaks, people are much less likely to sympathize with you, since you're running Sid (unstable), and you should know better.

c) Upgrade to Woody (testing). Woody is the new "in-between" distribution, which is supposed to be more stable than Sid. For instance, the broken LILO package never made it into Woody. This is what I suggest to most people who ask about the different versions. Woody/testing is a nice compromise: you get relatively up-to-date packages, and your system isn't nearly as likely to die because of it. Currently, Woody is using XFree86 3.3.6, so if you upgrade to Woody, you won't need to worry about 4.0.2 yet. Hopefully, by the time 4.0.2 gets into Woody, a nicer setup program will exist (since the 3.3.6 and 4.0.2 config files are vastly different). Right now, there's 'xf86config', which is an admirable stop-gap measure, but it's not right for at least 60% of the users out there.

David Barclay Harris, Clan Barclay
    Aut agere, aut mori. (Either action, or death.)
https://community.mp3tag.de/t/mapping-discogs-year-to-another-field/61173
I've been finding the Discogs import feature quite useful for adding missing tags. However, one thing I don't like about it is that, for reissued/remastered albums, the 'Year' tag from Discogs contains the re-release year, not the original release year, and this is mapped to the YEAR field in Mp3tag. In my library, I prefer to set YEAR to the original release year (even for reissues), and then I add a REISSUEDATE tag for the reissue year. At the moment, this means I have to deselect the Discogs 'Year' field when importing, then manually re-tag everything with the reissue date afterwards. Is there a way to have Mp3tag automatically map the Discogs 'Year' to my REISSUEDATE field instead?
https://www.funtoo.org/User:Watersb
I’ve been an active user of Gentoo since Daniel’s IBM DeveloperWorks articles were published back in 2000. Technically, that was in the 20th Century. Wow. In 2006, I switched to Macintosh for my workstations and haven’t gone back to Linux as a primary machine since. However, I was a heavy user of Gentoo Prefix on Mac OS X for a couple of years. Linux is a great way to learn about systems design, and my experiments focus on early userspace, Trusted Boot, remote attestation, and filesystem security. I've been a heavy user of the ZFS Filesystem since 2006. I've also dug into FreeBSD’s GEOM architecture… but it’s been a while. And while my current ZFS file server is using Solaris Express 11, ZFS on Linux now supports zpool version 28, and I will be moving to an all-Linux backend. Yay. My Gravatar Profile has some other ways to contact me.
http://www.geneseo.edu/career_development/alumni_career_partners
Geneseo Career Partners The Geneseo Career Partners database is an online directory of Geneseo alumni who have voluntarily offered to provide career development support to current Geneseo students and fellow alumni. If you are a Geneseo alumnus/a and would like to contact other alumni for career exploration purposes OR would like to offer your assistance to current students as an alumni mentor, please register via Knightjobs. To connect with a Career Partner, log in to Knightjobs and click on the Career Partners tab. Please follow these guidelines when contacting a Geneseo Career Partner: - For sample contact scripts and FAQs, click on Geneseo Career Partner Tips - Be well prepared for each discussion/informational interview. If you are meeting a career partner in person, dress professionally. - Follow up each contact with a formal, written thank-you - Keep us informed of your experience with the resource
https://vexxhost.com/blog/author/hnaser/
We're closing out 2018 with this recap of everything VEXXHOST has been up to these past 12 months. From upgrades, to expansions, to events - we've got it covered! Catch up on all the excitement from last week's KubeCon + CloudNativeCon North America 2018 event! With around 8,000 people in attendance, it was quite a success! VEXXHOST Unveils Certified Kubernetes-as-a-Service & Becomes Member Of The Linux Foundation And The CNCF Continuously improving and investing in open source communities are just a few of the things VEXXHOST and its employees stand behind. As such, we have a handful of exciting announcements that we have been hard at work on and fully believe reinforce our values as a company. Machine Learning (ML) is a growing subset of Artificial Intelligence (AI) that uses statistical techniques in order to make computer learning possible through data and without any specific programming. What this means is that ML makes use of large amounts of labeled data and processes it to locate patterns before applying what it learns about and from the patterns to its program. Data mining has become increasingly significant with the growth of big data industries. As such, these industries are now heavily reliant on the evolution of data mining tactics and techniques keeping up with the modernization of their fields as well as their growing demand.
http://forums.devshed.com/php-development/3393-authenicate-php-phplib-last-post.html
September 11th, 1999, 06:08 PM
Hi, all! I have a real quick question about Apache authentication with PHP. OK... I would like to write a script that would delete a file in "MYDIR" owned by user "me" and group "myself." I set up Apache so that it takes my ID and password and I'm in. The problem is that it won't let me delete any file in "mydir" which is owned by me. Any help please, thanks in advance!

September 17th, 1999, 10:47 AM
What username is Apache running under? If it is running as your username I'm not sure what is wrong; however, if it is running as user 'nobody' or 'webserver' or something like that, then the files it is supposed to delete need to be owned by the same user.

September 17th, 1999, 03:52 PM
Thanks for the tip; the username Apache is running under is "nobody". The reason I do NOT want to make the files owned by "nobody" is that if I have lots of users logged in, they could delete anything under someone else's account... well, I hope I expressed my idea clearly to you.

September 18th, 1999, 10:23 AM
There is nothing magic about the 'nobody' ID under a UNIX system. Files owned by that user are subject to the same protection as any other user's files and directories. That is, files cannot be deleted unless the permissions of the directory in which they reside allow it. It sounds as if you wish to run Apache as a specific user, to avoid making directories world-writable (which allows any user to delete files not owned by them). Fair enough. Apache does have the facility to do this; see your server's documentation for details.
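The last answer's point — that deletion is governed by the permissions of the containing directory, not of the file itself — can be demonstrated with a short script. It is shown here in Python rather than PHP so it runs without a web server; the file and directory names are made up:

```python
import os
import tempfile

# Deleting a file is an operation on the *directory*, not on the file:
# unlink() needs write permission on the containing directory, while the
# file's own mode and owner are irrelevant to whether it can be removed.
workdir = tempfile.mkdtemp()           # we own this directory, so it is writable by us
path = os.path.join(workdir, "victim.txt")
with open(path, "w") as f:
    f.write("data")

os.chmod(path, 0o444)                  # make the file itself read-only...
os.unlink(path)                        # ...removal still succeeds
print(os.path.exists(path))            # -> False

os.rmdir(workdir)
```

The converse is the poster's actual problem: even a file you own cannot be deleted by the `nobody` user unless `nobody` has write permission on the directory holding it.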
https://github.com/dbonates/Cities-for-iPad
A fragment of an app I've been working on about places to visit in Brazil. This branch implements the city selector shown if the user has not selected a city before diving into the application. The main goal is to break up the screen to give a sense of the importance of the first step: selecting a city to proceed, if not done yet. More screens and info at my Dribbble profile: PS: The Podfile is set up but contains no pods, since I declined to use them. Hope it helps :)
https://docs.microsoft.com/en-us/windows/uwp/publish/about-community-ads
About community ads As of June 1, 2020, the Microsoft Ad Monetization platform for Windows UWP apps will be shut down. Learn more If your app displays banner or banner interstitial ads, you can cross-promote your app with other developers with apps in the Microsoft Store for free. We call this feature community ads. Here's how this program works: - After you opt-in to community ads as described below, you can create a free community ad campaign. Your app will then share promotional ad space with other developers who also opt in to community ads. Your app will show ads for apps published by other developers who participate in community ads, and their apps will show ads for your app. - You earn credits for promotional ad space in other apps by showing community ads in your app. Credits are calculated according to the following process: - For each country or region where an app that is serving community ads is available, the current market-rate eCPM (effective cost per thousand impressions) value for the country or region is multiplied by the number of requests for community ads made by your app in that country or region. This value is the credits you have earned for your app in that country or region. - Your total credits earned for a given time period is equal to the sum of all credits earned in each country or region for each of your apps that is serving community ads. - Your credits are divided equally across all active community ad campaigns, and are converted to ad impressions for your app based on the current market-rate eCPM values of the countries your community ad campaigns target. - To track the performance of the community ads in your app, refer to the advertising performance report. Opt in to community ads Before you can create a community ad campaign for one of your apps, you must opt in on the Monetize > In-app ads page in Partner Center. 
To opt in to community ads for a UWP app: Select an ad unit that you are using in the app and scroll down to Mediation settings. If Let Microsoft optimize my settings is selected, community ads are enabled for your ad unit automatically. Otherwise, select the baseline configuration or a market-specific configuration in the Target drop-down and then check the Microsoft Community ads box in the Other ad networks list. You can use the Weight fields to specify the ratio of ads you want to show from paid networks and other ad networks including community ads. You do not need to republish your app after making your selections. Once you've opted in, you'll be able to select Community ad (free) as the campaign type when you create an ad campaign.
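The credit arithmetic spelled out in the program overview can be made concrete with a small sketch. Every figure below — the eCPM values, the request counts, and the number of active campaigns — is invented purely for illustration:

```python
# Hypothetical market-rate eCPM (dollars per 1000 impressions) and the number
# of community-ad requests your app made, per country/region:
ecpm = {"US": 1.50, "GB": 1.25}
requests = {"US": 200_000, "GB": 50_000}

# Credits earned: for each region, eCPM multiplied by requests/1000, summed.
credits = sum(ecpm[r] * requests[r] / 1000 for r in requests)   # 300 + 62.5 = 362.5

# Credits are divided equally across all active community ad campaigns...
active_campaigns = 2
per_campaign = credits / active_campaigns                        # 181.25 each

# ...and converted to impressions at the eCPM of the campaign's target market.
impressions_us_campaign = round(per_campaign / ecpm["US"] * 1000)  # about 120833
print(credits, per_campaign, impressions_us_campaign)
```

The takeaway of the formula: serving more community ads in higher-eCPM markets earns more credits, and those credits buy fewer impressions in expensive markets than in cheap ones.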
http://homesteadnotes.blogspot.com/2011/10/birding-we-will-go.html
A couple of weeks ago, hubby was invited to give a talk at Northern Illinois University, so we tagged along to do some birding around the Great Lakes area. Our hotel in Aurora was next to a pond so the kids and I did some birding outside while hubby was giving his talk. There weren't too many birds there, at least not that we could see, although we heard plenty. We spotted Great Blue Herons and Great Egrets. We suspect many more were hiding in the tall reeds. The next day, we went to the FermiLab and walked around, hoping to see birds there. Unfortunately, we didn't see a whole lot, but we had some nice walks. All in all, not a bad birding outing. And I think this is the part where I put in a plug for the Great Backyard Bird Count, Audubon, and WildBird Magazine to get you hooked on birding! "I may not have gone where I intended to go, but I think I have ended up where I needed to be." ~ Douglas Adams
http://automarket-mongolia.tk/article195-oracle-plsql-insert-statement.html
The INSERT statement is part of the Data Manipulation Language (DML) and adds one or more rows of data to a database table. The general SQL form is:

    INSERT INTO table (column1 [, column2, column3]) VALUES (value1 [, value2, value3]);

The number of columns and the number of values must match. An INSERT can also take its rows from a subquery:

    INSERT INTO table [ (column, ...) ] SELECT ...;

Inside a PL/SQL block you can embed standard SELECT, INSERT, UPDATE, DELETE, and MERGE statements directly, and each value in the VALUES list can be a literal, a PL/SQL variable, or a SQL query that returns a single value. Oracle implicitly opens a cursor to process each SQL statement not associated with an explicit cursor; in PL/SQL you can refer to the most recent implicit cursor as the SQL cursor. Before executing a SQL statement, Oracle also marks an implicit savepoint.

A few further points worth noting:

- PL/SQL allows BOOLEAN variables, even though Oracle does not support BOOLEAN as a type for database columns.
- Since Oracle8i, the RETURNING clause on INSERT, UPDATE, and DELETE statements lets you obtain data modified by the associated DML statement (for example, a key generated from a sequence).
- Within a single SQL statement containing a reference to NEXTVAL, Oracle increments the sequence only once.
- When inserting rows in PL/SQL, developers often place the INSERT statement inside a FOR loop. Inserting 1,000 rows this way causes 1,000 context switches between the PL/SQL engine and the SQL engine; the FORALL bulk-bind statement does the same work with far fewer switches.
- If an INSERT violates a constraint (for example, a foreign key on the target table), the failure raises an Oracle exception and control shifts to the block's exception section.
- The most important principle for INSERT statements, as for anything else in Oracle, is to do the least work.
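The FOR-loop-versus-FORALL distinction comes down to how many times execution crosses between the procedural engine and the SQL engine. A rough analogue, sketched with Python's built-in sqlite3 module since no Oracle instance is assumed here, is issuing one statement per row versus handing the whole batch to the driver at once:

```python
import sqlite3

# In-memory database; table and column names are invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, f"employee-{i}") for i in range(1, 1001)]

# Row-at-a-time: one execute() call per row (the FOR-loop style).
for row in rows[:500]:
    conn.execute("INSERT INTO payroll VALUES (?, ?)", row)

# Batched: a single executemany() call for the rest (the FORALL style).
conn.executemany("INSERT INTO payroll VALUES (?, ?)", rows[500:])

count = conn.execute("SELECT COUNT(*) FROM payroll").fetchone()[0]
print(count)  # -> 1000
```

Both halves insert the same data; the batched form simply crosses the engine boundary once instead of once per row, which is where FORALL gets its speedup in PL/SQL.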
https://experienceleaguecommunities.adobe.com/t5/adobe-experience-manager/aem-forms-designer-on-my-work-computer/qaq-p/394342
I am trying to install AEM Forms Designer on my work computer. My team is using AEM Workbench 6.4 Designer and no one seems to be able to load it. Is 6.5 a web-only app?

AEM 6.5 Designer is used to create PDF or HTML forms. If the installer is not corrupted, then it should load properly. If you are talking about MySQL Workbench, then it requires some Microsoft prerequisites, like the latest version of Visual Studio, in order to work.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046156141.29/warc/CC-MAIN-20210805161906-20210805191906-00707.warc.gz
CC-MAIN-2021-31
465
5
https://notes.alexkehayias.com/component-driven-development/
code
In frontend development, a component-driven workflow is a way of building websites and applications by breaking the UI down into smaller components, iterating on them independently, and composing them together. This has been popularized by Storybook.js, where you write stories around components to create a faster feedback loop compared to loading the whole application (and its data) every time.
- Faster feedback is a way of improving the ‘hand feel’ of software engineering
- Creators need an immediate connection to what they are creating (a la Bret Victor)
- If building parts of the frontend is slow, it makes more of the development slow, i.e. slowness begets more slowness
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361723.15/warc/CC-MAIN-20210228175250-20210228205250-00033.warc.gz
CC-MAIN-2021-10
680
4
https://stefvanlooveren.me/blog?page=1
code
Sometimes you need the URL of an image in Twig in order to use it as a background image or something similar. While there are ways to load images in Twig, it is more appropriate to do this with a preprocess function, which makes your code more maintainable. A toolbox like Ionic makes our lives as developers a little bit easier. This post guides you through getting the window size dynamically in your components and setting a variable width depending on it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00470.warc.gz
CC-MAIN-2022-49
448
2
https://stackoverflow.com/questions/7216057/setting-culture-for-asp-net-mvc-application-on-vs-dev-server-and-iis
code
This is a more specific and cleaner version of this question: Different DateTimeFormat for dev and test environment.

In the Application_BeginRequest() method of global.asax.cs in my ASP.NET MVC project there is this code:

Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-GB");

When I set a breakpoint on a controller action I see the following value of Thread.CurrentThread.CurrentCulture:
- In the VS dev server - "en-GB"
- In IIS - "en-US"

The question is: what settings in IIS are responsible for this, and how can I override it?
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104678225.97/warc/CC-MAIN-20220706212428-20220707002428-00797.warc.gz
CC-MAIN-2022-27
542
7
https://hub.alfresco.com/t5/alfresco-content-services-forum/how-to-use-newly-created-core-to-store-and-get-the-indexes/td-p/161479/page/2
code
Re: How to use newly created core to store and get the indexes? You can shard using Alfresco Community Edition. The only thing not supported is the dynamic shard registry, so you need to manually adapt the Repository-tier list of available shards / URLs. Though to be fair, it should be straightforward to implement the dynamic shard registry feature for Community Edition as well - the API is defined in the public core and SOLR itself is "edition-unaware". You'd only need to register the bean and manage the data storage (via AttributeService). It is one of those features that are so simple, it is unclear why Alfresco chose them to be Enterprise-only (like the "live configuration change on subsystems" one).
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00282.warc.gz
CC-MAIN-2021-43
713
3
https://discuss.dizzycoding.com/linux-tutorial-automating-ssh-login-without-password/
code
Are you looking for a way to automate your SSH login without a password? If so, then you’ve come to the right place. This Linux tutorial will show you how to automate SSH login without a password, so you can focus on more important tasks.

Have you ever been frustrated by having to type your password every time you need to connect to a remote server? If so, you’re definitely not alone. Fortunately, with a few simple steps, you can automate your SSH login without a password, making it a breeze to connect to a server. In this tutorial, we’ll show you how to use the SSH key authentication mechanism to log in to a remote server without having to enter a password. We’ll also discuss the advantages of this approach, and explain how to set it up on your system. If you’re looking for an easy and secure way to access your remote servers, then this tutorial is for you. So read on to find out how to automate your SSH login without a password and make your remote server access easier and more secure.

Introduction to Automating SSH Login Without a Password

Secure Shell (SSH) is a network protocol used to secure communication between two systems over an unsecured network. SSH is commonly used to securely access remote systems such as servers, routers, and other network devices. It can also be used to securely transfer files between systems. SSH uses strong encryption to ensure that all data transferred between the two systems is secure and not vulnerable to interception. In this tutorial, we will discuss how to automate SSH login without a password using public key authentication.

Generating SSH Keys

The first step in automating SSH login without a password is to generate SSH keys. SSH keys are two strings of data that are used to authenticate a user on a remote system. The public key is stored on the remote system and the private key is stored on the local system. To generate a pair of SSH keys, use the following command:

ssh-keygen -t rsa -b 2048

This command will generate a pair of 2048-bit RSA keys. You will be prompted to enter a passphrase for the keys. This passphrase is used to secure the private key and should be kept secret. Once the keys have been generated, you can view them by running the following command:

The output of this command will be the public key. This key needs to be copied and stored on the remote system. The private key should remain on the local system and should not be shared with anyone.

Setting Up the Remote System

Once the SSH keys have been generated, we need to set up the remote system to allow authentication using these keys. To do this, we need to create a new user on the remote system and add the public key to the user’s authorized_keys file. The authorized_keys file contains a list of public keys that are allowed to access the system. To create a new user, use the following command:

useradd username -m

This command will create a new user with the specified username and create a home directory for the user. Next, we need to add the public key to the user’s authorized_keys file. To do this, copy the public key generated earlier and paste it into the user’s .ssh/authorized_keys file. This file should be created if it does not already exist. Once the public key has been added to the file, the remote system is set up to allow authentication using the SSH keys.

Automating SSH Login Without a Password

Now that we have set up the remote system to allow authentication using SSH keys, we can automate SSH login without a password. To do this, we need to create a script that will execute the ssh command with the necessary parameters. The script should contain the following command:

ssh -i ~/.ssh/id_rsa username@hostname

This command will use the private key stored in the ~/.ssh/id_rsa file to authenticate against the remote system. The username and hostname should be replaced with the appropriate values for the remote system. Once the script has been created, it can be executed to automate SSH login without a password.

In this tutorial, we discussed how to automate SSH login without a password using public key authentication. We discussed how to generate SSH keys and how to set up the remote system to allow authentication using these keys. Finally, we discussed how to create a script to automate SSH login without a password. By following the steps in this tutorial, you should be able to automate SSH login without a password and securely access remote systems.

Suggestion to Improve Coding Skill

To improve coding skills related to this tutorial, it is important to become familiar with the command line. Learning the basics of the command line can help you understand how SSH works and how to use it effectively. It is also important to practice writing scripts, as this will help you understand how to automate tasks and improve your ability to write secure and efficient code.

Source: CHANNET YOUTUBE Tony Teaches Tech
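The tutorial's three commands can be collected into one small helper; a minimal Python sketch (the "deploy" and "example.com" values are placeholders of mine, not from the tutorial):

```python
def ssh_setup_commands(user, host, key_path="~/.ssh/id_rsa"):
    """Return the tutorial's shell commands in the order they are run.

    Placeholder values only: run ssh-keygen locally, useradd on the server.
    """
    return [
        "ssh-keygen -t rsa -b 2048",         # generate the 2048-bit RSA key pair
        f"useradd {user} -m",                # create the remote user + home dir
        f"ssh -i {key_path} {user}@{host}",  # key-based login, no password prompt
    ]

for cmd in ssh_setup_commands("deploy", "example.com"):
    print(cmd)
```

Between steps two and three you would still copy the public key into the new user's .ssh/authorized_keys file, as described above.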
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00740.warc.gz
CC-MAIN-2023-23
4,958
23
https://github.com/BerkeleyTrue/warning
code
A mirror of Facebook's Warning

npm install warning

// some script
var warning = require('warning');
var ShouldBeTrue = false;
warning(
  ShouldBeTrue,
  'This thing should be true but you set to false. No soup for you!'
);
// 'This thing should be true but you set to false. No soup for you!'

Similar to Facebook's (FB) invariant, but it only logs a warning if the condition is not met. This can be used to log issues in development environments in critical paths. Removing the logging code for production environments will keep the same logic and follow the same code paths.

FAQ (READ before opening an issue)

Why do you use warn? This is a mirror of Facebook's (FB) warning module used within React's source code (and other FB software). As such this module will mirror their code as much as possible. The decision to use warn was made a long time ago by the FB team and isn't going to change anytime soon. The source can be found here: https://github.com/facebook/fbjs/blob/master/packages/fbjs/src/__forks__/warning.js The reasoning can be found here and elsewhere: https://github.com/facebook/fbjs/pull/94#issuecomment-168332326

Can I add X feature? This is a mirror of Facebook's (FB) warning, and as such the source and signature will mirror that module. If you believe a feature is missing, then please open a feature request there. If it is approved and merged in, then this module will be updated to reflect that change; otherwise this module will not change.

Use in Production

It is recommended to add babel-plugin-dev-expression with this module to remove warning messages in production.

Don't Forget To Be Awesome
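The pattern is language-agnostic: log only when a condition fails, and keep the call sites even when logging is stripped. A rough Python analogue for illustration (not part of the package; the function name simply mirrors the JS API):

```python
import sys

def warning(condition, message):
    """Log `message` to stderr when `condition` is falsy; a no-op otherwise.

    Illustrative Python analogue of the JS warning(condition, message) call.
    """
    if not condition:
        print(f"Warning: {message}", file=sys.stderr)

should_be_true = False
warning(should_be_true, "This thing should be true but you set to false.")
warning(True, "never printed")  # condition met, so nothing is logged
```

As with the JS module, removing the `warning` body in production keeps the surrounding logic and code paths unchanged.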
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00075.warc.gz
CC-MAIN-2021-43
1,610
16
http://proactionlab.fpce.uc.pt/en/news-entry/science-cafe-about-the-brain-science-and-technology-week
code
Proaction Lab hosted, together with CNC - Centre for Neuroscience and Cell Biology, its first science cafe under the Science and Technology Week theme. The event "Brain: past, present and future" was held in the Aqui Base Tango pub, with the goal of having an informal conversation about the challenges in neuroscience research, from old discoveries to new exciting possibilities. The event had two guests: André Peres, a post-doctoral researcher at the Proaction Lab working on information processing in the brain, and Ana Luísa Cardoso, an assistant researcher at the CNC focused on cellular mechanisms. The room was full, with a great audience eager to ask questions. The two researchers and the audience explored the theme in different ways: from trying to define what consciousness is, through understanding how the brain is wired and how we perceive things, to imagining the knowledge yet to be learned in the field. The Science and Technology Week finishes officially on the 30th of November, with Proaction Lab having participated successfully with two activities: this science cafe and an Open Day.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143815.23/warc/CC-MAIN-20200218210853-20200219000853-00343.warc.gz
CC-MAIN-2020-10
1,140
2
http://naqwerjami.xyz/archives/3353
code
Novel–Young Master Damien’s Pet–Young Master Damien’s Pet 639 Improvised Witch- Part 1 taste fog “In which is she?” Dime humored the lady. She planned to know in which exactly her mom was so she could complete what her mom had started out. Getting a very sharp turn, she started to run lower back from where she possessed began to finally pickup the firearm she obtained lost in the past. Dime didn’t are concerned about the dark colored witch simply being lively. Penelope didn’t answer to your black color witch and instead was thinking the best way to catch your hands on her. The council participants wished the dark witches still living rather than lifeless. As long as they were definitely departed, they will be of no use. Now anything they had to do was kill the woman to see how many a lot more black color witches they had a corporation. She photo for the tentacle using the gun which had been made of silver and for just a moment she thought that it acquired been working through to the tentacle came to curl all over her physique. “Should we get started it now, Young lady Penelope?” Sibling Jera asked who pulled out the pills. “The place is she?” Cent humored the woman. She planned to know exactly where exactly her new mother was that she could end what her mum had begun. Penny wanted she could move the cause through the weapon which has been together but if she does that, the female would never be lively any more, “Yeah, I think it is time.” the headswoman kenneth grahame It needed Penny some time to detect and understand the way the vampiress was behaving for the reason that start off. At first ahead of absolutely everyone she got set up a front side just like vampire almost like she didn’t proper care when they became aquainted with again experiencing the body systems being sliced up, there was some sort of urgency by applying both equally Dime and Jera against the remainder of the ten participants. 
Choosing a distinct convert, she did start to operate back from where she acquired started to finally pick-up the weapon she obtained shed previously. Penny didn’t worry about the black colored witch becoming full of life. “Will we begin it now, Young lady Penelope?” Sibling Jera expected who drawn out the supplements. At this time what we simply had to do was get rid of the female and then determine the number of much more black color witches that they had a company. She taken in the tentacle utilizing the rifle which has been crafted from silver and for just a moment she believed that it acquired been working up until the tentacle came to curl all around her body. short stories by robert a. heinlein vol 1 summary “Can we start out it now, Girl Penelope?” Sister Jera inquired who drawn out of the supplements. The dark-colored witch didn’t heed into the vampire’s phrases but instead viewed Cent, “I thought it was only rumored but it seems that you happen to be witch as well.” This got the vampire look at Penelope and next to check out Sibling Jera, “You happen to be woman who had been in Valeria. You visited Valeria.” She dragged out the pins that had been improved for the reason that time she had identified she might be joining the local authority or council exam. Either Dime and Jera experienced improvised their resources so that they could be applied. “So intrigued,” the dark colored witch slurred the language, running around them, “The past I knew she was somewhere on the boundary in here Bonelake. Didn’t she come to view you?” she gave a style of pity to Cent. It sounded like individuals were mindful of the dynamics between her beloved mum and her. 
Dollar wanted she could take the induce from your gun that has been with her but when she managed that, the female would stop being living anymore, “Yeah, I think it is time.” “It truly is amusing the way the witches came up all prepared to only turn out on the trees and shrubs in this process,” the black colored witch clicked her mouth, “Bad them and then like I stated, they need to already have acknowledged anything they have subscribed for. It isn’t like many individuals ever come alive following the secondly examination. I am certain folks will mourn you once all you happen to be lifeless. Once you have into the authorities, I will make sure to offer the respect of fatality in order that you don’t experience awful.” “Exactly where is she?” Penny humored the lady. She wanted to know where by exactly her mommy was so she could complete what her mother had started. “It truly is interesting the way the witches came all ready to only end up on the trees and shrubs in this particular process,” the dark colored witch clicked on her tongue, “Bad them but then like I explained, they will likely have already recognized the things they have subscribed for. It isn’t like some people ever come to life as soon as the following check-up. I am sure individuals will mourn you once all of you may be deceased. After getting inside of the council, I am going to be sure to provide you with the value of death in order that you don’t really feel terrible.” Penny in contrast who had been intending to bring the weapon out and also in her hand was knocked far away from where she withstood by the tentacle. When she attempted to take it, the tentacle started in between her as well as the rifle, not making her get near to the handgun. She had taken one step back, switching all around, she did start to run and the tentacle arrived proper at her. Regardless of how lots of zig-zag steps she had taken, it contributed to the bushes getting sabotaged as being the tentacle chased her. 
“So inquisitive,” the dark-colored witch slurred the words, walking around them, “Another I knew she was somewhere in the border in here Bonelake. Didn’t she come to view you?” she provided a style of pity to Cent. It sounded like individuals were alert to the dynamics between her precious mom and her. Using a razor-sharp transform, she begun to work rear from where she experienced did start to finally pickup the gun she had shed earlier. Dime didn’t care about the dark witch becoming still living. “Should we start out it now, Young lady Penelope?” Sibling Jera questioned who drawn out the pills. “Your new mother needed,” said the black witch, and Penny’s hands and wrists suddenly switched ice cold hearing this, “I became only joking but she’s definitely have you within the grind, hasn’t she?” the lady threw her brain backside, giggling. “Is there a good reason I should be scared?” questioned the dark-colored witch, “You think I am some mere person?” and just as she said that, tentacles appeared behind her rear reminding her of the being that prowled deep down during the ocean. It checked like the lengthy appendage she experienced found in the witch back in the research laboratory. That certain possessed in her own hands and that just one, she acquired up to six tentacles which are switching like snakes on the fresh air. boss level ending She dragged out the pins that had been customized considering that the time she had discovered she can be joining the local authority check-up. Either Cent and Jera had improvised their resources to ensure that they could be used. “What is the factor I will be frightened?” required the black color witch, “Do you consider I am just some sheer guy?” and only as she stated that, tentacles appeared behind her back reminding her of an creature that prowled deep-down during the seas. It looked like the extensive appendage she experienced found in the witch back in the research laboratory. 
That a person had in her fingers and also this an individual, she had around six tentacles which had been transferring like snakes from the surroundings. “Where is she?” Dime humored the female. She needed to know just where exactly her mother was she could end what her mother had started out. Sister Jera ignored her gla.s.ses simply because it dropped off her deal with and she begun to look for it but experiencing the tentacle shift towards her, she left behind her gla.s.ses and proceeded to go behind the plant to consider take care of that only brought about the shrub getting a store and busting into two halves. She jumped away quickly prior to the limbs would tumble from her. The quantity she was functioning at the moment, Sister Jera was confident she would drop the remaining bodyweight from her system.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00466.warc.gz
CC-MAIN-2023-06
8,581
35
https://celinesurai.medium.com/how-i-prepared-for-and-secured-three-software-engineering-summer-internship-positions-in-big-tech-aede8d76348c?source=post_page-----aede8d76348c--------------------------------
code
How I Prepared for and Secured Three Software Engineering Summer Internship Positions in Big Tech Companies.

Summer 2019, I had just finished my sophomore year of college and declared CS as my major. Being a CS student, I had seen upperclass students secure internships with big tech companies in Silicon Valley, and honestly the thought of having to prepare for a technical interview, let alone secure a summer internship, really scared me. At the time I had only taken basic CS and math classes. I barely had any CS projects in my portfolio and was not at all confident in my coding skills. Having no solid experiences that I could add to my resume, I chose to dedicate that summer to preparing and honing my skills in readiness for the next recruiting season. Every day, I made it a goal to do at least two leetcode problems. I had also started working on CS projects for my portfolio. Trust me, it was not easy, and there were times I felt so overwhelmed, especially when I could not work my way through a leetcode problem. However, being part of an online community (Rewriting the Code) gave me the dedication and discipline to continue. A dedication that saw me become better at my problem-solving skills as well as complete my portfolio projects! In July, most companies had started opening their fall and summer 2020 internships. I updated my LinkedIn and resume and began applying for the postings that I saw on LinkedIn at the time. It was also during that time that I applied for a Grace Hopper scholarship and applied to be part of the Code2040 Fellows program. Eagerly, I waited for feedback on my applications, but of the 13 companies I had applied to from LinkedIn, only 5 got back to me asking me to take their technical challenges. My first technical challenge was a big failure! I failed 7 out of my 8 tests! I remember saying that I was giving up the whole internship search and that I was not fit for a CS career.

At the time I felt like all the nights I had put into practicing on leetcode were not worth it. I am so grateful for my partner, who had just graduated with his CS degree; he really came through, especially in reviving my confidence, which kept me doing the other technical challenges. (We all need a support system for such times.) I later learnt that the more technical challenges and interviews I took, the better I became at them. Early August came with good news from my Grace Hopper and Code2040 applications. I had secured a Grace Hopper scholarship to attend the conference, as well as a spot as a Code2040 finalist! This was such a huge feat for me and the highlight of my summer 2019. In preparation for Grace Hopper I uploaded my resume to their resume database, and this gave me access to a whole lot of companies. Companies even began reaching out to me on the database asking to schedule interviews! I was most excited when Facebook reached out, because for a long time I had always wanted to work at Facebook. I spent most of August doing interviews with the companies that reached out, and while some interviews did not go so well, I got invited to onsite final rounds for several companies, e.g. Facebook, Lyft, Twitter, Stripe, Splunk, Discover, TaskRabbit, Slack, Dropbox, and Adobe. Honestly, looking back, I am so proud of myself for getting this far with these companies, especially since this was my first time applying for tech internships. In October I attended the Grace Hopper conference in Florida for three days, and I had chosen to take most of my onsite final interviews there. My first interview was the Facebook interview and I was a nervous wreck! While I answered my technical whiteboard challenge well enough, my anxiety was definitely a setback. On my last day at GHC, I got my results from Discover and guess what! I had secured an offer! I cannot describe the feeling I had when I received that call from the recruiter.

Most of my other results were sent in the next few weeks, and while I didn’t secure some offers, I also got competitive offers from Adobe and Slack! I was so happy about the Slack offer, because I had really grown to love the company through my interactions with them. You can guess which company I ended up choosing for my summer internship! Slack. In conclusion, my four months of hard work over the summer were definitely fruitful, as I had ended up securing an internship for the next summer early; most importantly, I had grown and learnt a lot as a software engineer and CS student. Here is a short summary of what I believe worked for me.
- Leetcode and Cracking the Coding Interview practice. I did a lot of interview problems and even have a YouTube playlist of questions and how to solve them on my channel, so feel free to check them out.
- Being part of tech communities, e.g. Rewriting the Code, Girls Who Code, Grace Hopper Scholarship, Code2040. (These communities played a big role in me getting access to recruiters as well as internship opportunities.)
- Working on projects for my portfolio early! (Making sure the projects were interesting and portrayed my skills.)
- Having a personal portfolio website that showcased my skills as well as projects! (This is highly recommended.)
- Updating my LinkedIn and making sure my resume included all my accomplishments.
- Being aggressive and reaching out to recruiters at companies I was interested in! (I got 10 interviews from reaching out!)
- Not giving up. (The whole internship search process can be such a stressful experience, but not giving up and having a fighting spirit definitely takes you a long way!)
I hope that this encourages and inspires someone out there during this upcoming recruiting season. We got this! HAPPY RECRUITING SEASON!
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00019.warc.gz
CC-MAIN-2021-25
5,723
19
https://hiremebecauseimsmart.wordpress.com/2010/10/23/untitled-10/
code
This is how I first really understood the Pythagorean Theorem. The outer circle looks just a little bit larger than the inner circle. But actually, its area is twice as large. Just think about that.

Other ideas involved here:
- scaling properties of squared quantities (gravitational force, skin, paint, loudness, brightness)
- circumcircle & incircle

This is also how I first really understood √2, now my favourite number.
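The factor of two follows from the incircle and circumcircle of a square; a short derivation, introducing the square's side length $s$ (which the note leaves implicit):

```latex
r_{\text{in}} = \frac{s}{2}, \qquad
r_{\text{out}} = \frac{\sqrt{2}\,s}{2} = \sqrt{2}\, r_{\text{in}}
\quad\Longrightarrow\quad
\frac{\pi r_{\text{out}}^2}{\pi r_{\text{in}}^2} = \left(\sqrt{2}\right)^2 = 2
```

So the outer radius is only about 1.414 times the inner one, which is why the outer circle "looks just a little bit larger" while enclosing twice the area.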
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945111.79/warc/CC-MAIN-20180421090739-20180421110739-00418.warc.gz
CC-MAIN-2018-17
425
8
https://docs.pachyderm.com/latest/reference/config_spec/
code
This document outlines the fields in pachyderm configs. This should act as a reference. If you wish to change a config value, you should do so via the appropriate pachctl command. If a field is not set, it will be omitted from JSON entirely. Following is an example of a simple config. Following is a walk-through of all the fields.

A UUID giving a unique ID for this user for metrics.

Whether metrics is enabled.

v2.active_context specifies the name of the currently active pachyderm context, as specified in v2.contexts.

Active Enterprise Context

v2.active_enterprise_context specifies the name of the currently active pachyderm enterprise context, as specified in v2.contexts. If left blank, the v2.active_context value will be interpreted as the active enterprise context.

A map of context names to their configurations. Pachyderm contexts are akin to kubernetes contexts (and in fact reference the kubernetes context that they're associated with).

An integer that specifies where the config came from. This parameter is for internal use only and should not be modified.

host:port specification for connecting to pachd. If this is set, pachyderm will directly connect to the cluster, rather than resorting to kubernetes' port forwarding. If you can set this (because there's no firewall between you and the cluster), you should, as kubernetes' port forwarder is not designed to handle large amounts of data.

Trusted root certificates for the cluster, formatted as a base64-encoded PEM. This is only set when TLS is enabled.

A secret token identifying the current pachctl user within their pachyderm cluster. This is included in all RPCs sent by pachctl, and used to determine if pachctl actions are authorized. This is only set when auth is enabled.

The currently active transaction for batching together pachctl commands. This can be set or cleared via many of the pachctl * transaction commands.

The name of the underlying Kubernetes cluster, extracted from the Kubernetes context.

The name of the underlying Kubernetes cluster's auth credentials, extracted from the Kubernetes context.

The underlying Kubernetes cluster's namespace, extracted from the Kubernetes context.

Cluster Deployment ID

The pachyderm cluster deployment ID that is used to ensure the operations run on the expected cluster.

Whether the context represents an enterprise server.

A mapping of service name -> local port. This field is populated when you run explicit port forwarding (pachctl port-forward), so that subsequent pachctl operations know to use the explicit port forwarder. This field is removed when the pachctl port-forward operation completes. You might need to manually delete the field from your config if the process failed to remove the field automatically.

Last update: November 1, 2021
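A hypothetical sketch of a config assembled from the fields walked through above (the nesting and every value here are assumptions for illustration, not taken from Pachyderm's own example):

```json
{
  "user_id": "00000000-0000-0000-0000-000000000000",
  "v2": {
    "active_context": "default",
    "contexts": {
      "default": {
        "pachd_address": "grpcs://pachd.example.com:30650",
        "session_token": "REDACTED",
        "namespace": "default"
      }
    },
    "metrics": true
  }
}
```

Note how unset fields (e.g. active_enterprise_context, port_forwarders) are simply omitted from the JSON, as described above.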
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300244.42/warc/CC-MAIN-20220116210734-20220117000734-00227.warc.gz
CC-MAIN-2022-05
2,737
30
http://www.haygroup.com/leadershipandtalentondemand/ourproducts/item_details.aspx?itemid=25&type=1&t=2
code
Match the skill to the job. Understanding learning skills can help people ensure that their personal skills suit the demands of their job. Use the Boyatzis-Kolb learning skills profile (LSP) to help your employees: - assess the match between their personal learning skills and their job demands - identify which skills are critical to satisfactory performance, which need development and which are under-utilized - gather feedback from their peers or managers to augment skill-gap information - create a learning agenda to develop the skills that they enjoy, or to work on the skills that they need to use more often. The LSP helps participants identify their personal skills and the skill demands of their jobs. It assesses four skill groups. Learn more about online group accounts.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00141-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
783
8
https://www.redox-os.org/nl/news/rsoc-ptrace-6/
code
By jD91mZM2 on Hallå världen! We’ve got yet another week on our hands, and that must surely mean another status report on ptrace? It does. If you recall last week, I said my design just felt wrong, but I couldn’t explain it? This has been fixed, it now feels more right than ever, let’s see what’s changed! After a tough session of brainstorming I rewrote the RFC, you can see design changes the dull PTRACE_SINGLESTEP to the new awesome PTRACE_STOP_SINGLESTEP, with the new advantage of being a non-exclusive operation! That’s right, you can now set multiple breakpoints with ptrace, and it will stop on the first one while reporting which one it reached. How will it report that? Everything is an event! Especially breakpoints, they now use events to report which one was reached as well as any arguments that might be useful. This lets you catch which signal caused to be returned. The design was largely inspired by Linux' which is a non-exclusive way to stop at more places than just the one you requested. And yes, this does mean I thoroughly read through almost the entire man ptrace ;) After a breakpoint is reached you can read from the trace file descriptor to recieve one or more events. This lets you catch both breakpoint and non-breakpoint events, where the non-breakpoints don’t stop the tracee. If you try to set a new breakpoint without all events, it will refuse to wait for the breakpoint to be set. In non-blocking mode it won’t wait anyway, although you can use PTRACE_FLAG_WAIT to override this behavior. Sysemu breakpoints are now more flexible. You don’t have to decide whether to emulate a syscall up-front, but rather once you recieve a PTRACE_STOP_PRE_SYSCALL (the reason for separating pre- and post-syscalls are described in the RFC) you can choose to add PTRACE_FLAG_SYSEMU to the next ptrace operation. Now that ptrace operations use a lot more bits, using a u8 would be too small. 
I increased this to a whopping u64, which to me sounds smaller than it is (I currently only use 16 bits anyway!). To convert a 64-bit integer one can use… just kidding, you don’t have to do that either! I am experimenting with using the bitflags crate to create type wrappers around various flags in the redox_syscall crate, ptrace’s being one of them. This will not only have the benefit of using the type system to ensure you don’t mix and match different flags in an invalid way, as well as giving a darn useful Debug implementation; it will also let me implement Deref on this struct, which will let you coerce the wrapper to the raw integer. You can see if you also think the kernel gets cleaner with this change in this commit. If not, write to me and complain! Strace has been separated into two possible compilation modes: simple and advanced. The simple code is what we had before: code that’s easy to read and understand, where everything is synchronous. Advanced mode is a new and exciting mode that uses the asynchronous interface (not async, yet) to support more functions, such as also tracing child processes and threads. Now that we’re back to where we were (but with a much more scalable system on our hands), I can get back to work implementing a way to override signals and therefore handle int3. After that, a lot of the final concerns and nitpicks I had are completed. Then there’s also the huge problem left of actually allowing the user to inject code… See you next week! Until then, make sure to stay hydrated in this warmth 🍻
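The typed-flags-with-Deref idea can be sketched with a hand-rolled wrapper. This is an illustration only: the type and flag names below are hypothetical stand-ins, not the real redox_syscall definitions, and the real code uses the bitflags crate rather than manual impls.

```rust
use std::ops::{BitOr, Deref};

// Hypothetical flag constants, mirroring the style described in the post.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct PtraceFlags(u64);

const STOP_SINGLESTEP: PtraceFlags = PtraceFlags(1 << 0);
const STOP_PRE_SYSCALL: PtraceFlags = PtraceFlags(1 << 1);
const FLAG_WAIT: PtraceFlags = PtraceFlags(1 << 2);

// The type system prevents mixing these with unrelated integer flags...
impl BitOr for PtraceFlags {
    type Output = PtraceFlags;
    fn bitor(self, rhs: Self) -> Self { PtraceFlags(self.0 | rhs.0) }
}

// ...while Deref lets the wrapper coerce to the raw u64 the syscall interface wants.
impl Deref for PtraceFlags {
    type Target = u64;
    fn deref(&self) -> &u64 { &self.0 }
}

fn main() {
    let op = STOP_SINGLESTEP | FLAG_WAIT;
    assert_eq!(*op, 0b101); // raw bits, via Deref coercion
    println!("raw flags: {:#x}", *op);
}
```

The pay-off is exactly the one described above: you keep a useful Debug implementation and compile-time flag safety, yet still hand a plain integer to the kernel.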
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817398.21/warc/CC-MAIN-20240419110125-20240419140125-00595.warc.gz
CC-MAIN-2024-18
3,488
61
http://linuxquota.com/Getting-the-best-out-of-atq-when-using-Ubuntu-1605485342.html
code
With Linux and open source, IT is now far more transparent to the broader enterprise, helping to better align with general business goals and actually innovate as opposed to just maintaining a status quo. The ease with which you can control the inner workings of Linux is another attractive feature, particularly for companies that need their operating system to perform specialized functions. The barriers to entry for working on a kernel module are, generally speaking, much lower than they are for working on the Linux kernel. Certain security checks allow processes to perform certain operations only if they meet specific criteria. My emacs and makemap workflow Still, this survey does compare Windows 2000, GNU/Linux (up to 497 days usually), FreeBSD, and several other OSes, and FLOSS does quite well. That write routine uses information held in the VFS inode representing the pipe to manage the write request. In Linux, no further organization or formatting is specified for a file. The best way to learn the Linux command line is as a series of small, easy to manage steps. Docker and Freesco The term was coined in 1998 with the intention of replacing the term free software in order to avoid the negative connotations that are sometimes associated with the word free and thereby make it more attractive to corporations. The UEFI boot manager boots the mini-bootloader, then, in turn, it boots the standard Linux bootloader image. Source code is converted into executable (i.e., compiled or runnable) programs through the use of specialized programs called compilers. There are various other shell interpreters available, such as Korn shell, C shell and more. Succeed with symlink on Linux Frequently it is best to select a small local ISP rather than a large, nationwide one. Parameter passing is handled in a similar manner. Therefore, when moving to the PDP-11 as the main hardware platform, the developers adopted C as the core language for Unix.
Gaz Hall, a UK-based SEO expert, commented: "Each device driver tells the operating system how to use that specific device." Getting the best out of atq when using Ubuntu This chapter describes how the Linux kernel manages the physical devices in the system. Commoditized hardware, namely the x86 chip (the baseline for computing today), likely would've struggled to emerge as strongly as it has without Linux functioning as the baseline operating system. It links the sock data structure to the BSD socket data structure using the data pointer in the BSD socket. The new module is marked as UNINITIALIZED.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517159.7/warc/CC-MAIN-20210118220236-20210119010236-00156.warc.gz
CC-MAIN-2021-04
2,569
13
https://www.alvestrand.no/pipermail/ietf-languages/2002-April/000198.html
code
Request: Language Code "de-DE-1996" Tue, 23 Apr 2002 17:01:56 -0500 On 04/23/2002 08:44:57 PM "J.Wilkes" wrote: >The German spoken (and written) in Germany, Austria and Switzerland >differs not primarily in orthography, but in the words assigned to the >same concepts: Austrian "Obers" is German "Sahne" (for "cream"), e.g. >Of course not all words are different, but enough to require different >versions. And yes, there are some orthographical differences as well... My assumption: I gather from what you're saying, then, that people probably wouldn't want to tag orthography distinctions that are based solely on country (i.e. orthography but not vocabulary), but they would want to distinguish spellings according to the 1901 and 1996 conventions, and they would want to distinguish data sets that use country-specific vocabulary (and which will also follow either the 1901 or 1996 conventions). If that is the case, then it would seem to me that what we need are tags where de-1901 and de-1996 tell us what spellings are used, but don't distinguish with regard to vocabulary, and where de-1901-xx and de-1996-xx distinguish both vocabulary and spelling. I suggest that we don't need to distinguish vocabulary without reference to spelling: any given set of data that has country-specific vocabulary is going to follow one orthographic convention or the other. That means that what we don't actually need is de-DE, de-CH, etc. Of course, there is surely existing data that is tagged this way. I would think it appropriate to (a) discourage new use of these sequences, and (b) treat existing uses as equivalent to de-1901-DE, de-1901-CH, etc. Either that, or if we want to allow on-going use of de-DE, etc., explicitly state that these are considered synonymous with de-1901-DE, etc. and distinct from de-1996-DE, etc. This is all based on the assumption made above. I realise that I may still not fully grasp the details.
>> If there *are* orthographic differences between the various countries, >> it's fairly clear what kind of object and what specific instance of that >> kind of object something like de-DE-1901 is intended to denote: German >> spelled in Germany following conventions defined in 1901 (but not as >> spelled in Germany using other conventions, and not as spelled in some >> other country). But it is *not* clear what kind of object de-1901 is, let >> alone the identity of the specific instance. I question the usefulness of >> such ambiguous tags. >If I encountered such a tag without having participated in this >discussion, I would assume that de-1901 denotes a pretty generalized >variant of German, following the conventions defined in 1901. I.e. pretty generic vocabulary, but 1901 spelling. Yes? >When checking whether a given text should receive this tag, I would a) check for >1901 orthography, and b) look for spelling or words that are specific to one of the >three countries, and not common. If I encountered such words, I would use the >specific subtag instead, but if not, I'd leave it at de-1901. de-1901 would, for >example, denote that this text can be understood in Germany, Austria and >Switzerland all the same, without further adaptation. That fits precisely with what I describe above based on what I was assuming (see above -- which seems to me to confirm that my assumption was based on a correct understanding of the sociolinguistic situation you were describing). In the process you describe, precisely what you would *not* end up ever specifying is vocabulary that is specific to one country yet without specifying orthography. Andrea: based on this input, I'd say to turn these details in my earlier response to you around: Is it really likely that one will have data from which it can be determined that the spelling follows 1901 conventions but it can't be determined which particular country's spelling conventions were used?
Make it this: "Is it really likely that one will have data from which it can be determined that the vocabulary was for country X but it can't be determined whether 1901 or 1996 spelling conventions were being applied?" Non-Roman Script Initiative, SIL International 7500 W. Camp Wisdom Rd., Dallas, TX 75236, USA Tel: +1 972 708 7485
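The proposed equivalence (treating bare country tags as synonyms for the 1901-orthography forms) can be sketched in a few lines of Python. The table and function names are mine, purely illustrative; this is not a registry rule.

```python
# Legacy country-only tags treated as synonyms of the 1901-orthography forms,
# per the proposal in the thread above (hypothetical mapping, for illustration).
LEGACY_EQUIVALENTS = {
    "de-DE": "de-1901-DE",
    "de-CH": "de-1901-CH",
    "de-AT": "de-1901-AT",
}

def normalize(tag: str) -> str:
    """Map a legacy country-only German tag to its orthography-explicit form."""
    return LEGACY_EQUIVALENTS.get(tag, tag)

print(normalize("de-CH"))    # de-1901-CH
print(normalize("de-1996"))  # already orthography-explicit: unchanged
```

A matcher built this way would answer the thread's question mechanically: de-CH data is assumed to follow 1901 spelling unless tagged otherwise.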
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00091.warc.gz
CC-MAIN-2022-27
4,079
64
https://www.roblox.com/games/330997830/Broken-Bones-3?refPageId=841329cc-a8d8-4a69-9194-c764fc757ac8
code
Join the development group Wavey Games for a unique in-game icon next to your name! https://www.roblox.com/Groups/Group.aspx?gid=2802279
Follow me on Twitter for development updates! https://twitter.com/ZaquilleRBX
Fully compatible with desktop, tablet, mobile and console.
Graphics designed by DanielKGaming

Beta 1.6
- Utility support for Mobile and Console has been added
- Saving bugs have been officially patched
- Xbox release and filtering enabled
- Tesla coils map pack released

Beta 1.5
- Super Cannons released
- Stowaway Planes released

Beta 1.4
- Anti Exploit has been added
- Added Infinite Money Game Pass
- All 10 Utilities have now been added

This experience does not support Private Servers.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00273.warc.gz
CC-MAIN-2021-43
698
2
https://support.sas.com/edu/schedules.html?id=2845&ctry=RO
code
Using SAS to Put Open Source Models into Production A newer version of this course is available. See Using SAS to Put Open Source Models into Production. This course introduces the basics for integrating R programming and Python scripts into SAS and SAS Enterprise Miner. Topics are presented in the context of data mining, which includes data exploration, model prototyping, and supervised and unsupervised learning techniques. Upon completing this course, you will learn to:
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878639.9/warc/CC-MAIN-20200702080623-20200702110623-00162.warc.gz
CC-MAIN-2020-29
483
3
https://proceedings.mlr.press/v144/zhao21b.html
code
Primal-dual Learning for the Model-free Risk-constrained Linear Quadratic Regulator Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:702-714, 2021. Risk-aware control, though promising for tackling unexpected events, requires an exactly known dynamical model. In this work, we propose a model-free framework to learn a risk-aware controller of a linear system. We formulate it as a discrete-time infinite-horizon LQR problem with a state predictive variance constraint. Since its optimal policy is known to be an affine feedback, i.e., $u^* = -Kx+l$, we instead optimize the gain pair $(K,l)$ by designing a primal-dual learning algorithm. First, we observe that the Lagrangian function enjoys an important local gradient dominance property. Based on this, we then show that there is no duality gap despite the non-convex optimization landscape. Furthermore, we propose a primal-dual algorithm with global convergence to learn the optimal policy-multiplier pair. Finally, we validate our results via simulations.
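In symbols (my notation, not the paper's: $J$ for the LQR cost, $V$ for the state-prediction variance, $\bar v$ for its bound), the constrained problem and its Lagrangian read roughly as:

```latex
\min_{K,\,l}\; J(K,l)
\quad\text{subject to}\quad V(K,l) \le \bar v,
\qquad
\mathcal{L}(K,l,\lambda) \;=\; J(K,l) + \lambda\bigl(V(K,l) - \bar v\bigr),
\quad \lambda \ge 0,
```

with the search restricted to affine policies $u = -Kx + l$; a primal-dual scheme of the kind described alternates gradient updates on the pair $(K,l)$ with ascent steps on the multiplier $\lambda$.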
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00288.warc.gz
CC-MAIN-2021-49
1,042
3
https://man.dragonflybsd.org/?command=environ&section=7
code
DragonFly On-Line Manual Pages ENVIRON(7) DragonFly Miscellaneous Information Manual ENVIRON(7)

environ - user environment

extern char **environ;

An array of strings, called the environment, is made available to each process by execve(2) when a process begins. By convention these strings have the form name=value, and are referred to as "environment variables". A process can query, update, and delete these strings using the getenv(3), setenv(3), and unsetenv(3) functions, respectively. The shells also provide commands to manipulate the environment; they are described in the respective shell manual pages.

What follows is a list of environment variables typically seen on a UNIX system. It includes only those variables that a user can expect to see during their day-to-day use of the system, and is far from complete. Environment variables specific to a particular program or library function are documented in the ENVIRONMENT section of the appropriate manual page.

BLOCKSIZE  The size of the block units used by several disk-related commands, most notably df(1), du(1) and ls(1). BLOCKSIZE may be specified in units of a byte by specifying a number, in units of a kilobyte by specifying a number followed by `K' or `k', in units of a megabyte by specifying a number followed by `M' or `m', and in units of a gigabyte by specifying a number followed by `G' or `g'. Sizes less than 512 bytes or greater than a gigabyte are ignored. This variable is processed by the getbsize(3) function.

COLUMNS  The user's preferred width in column positions for the terminal. Utilities such as ls(1) and who(1) use this to format output into columns. If unset or empty, utilities will use an ioctl(2) call to ask the terminal driver for the width.

EDITOR  Default editor name.

EXINIT  A startup list of commands read by ex(1) and vi(1).

HOME  A user's login directory, set by login(1) from the password file passwd(5).

LANG  This variable configures all programs which use setlocale(3) to use the specified locale unless the LC_* variables are set.

LC_ALL  Overrides the values of LC_COLLATE, LC_CTYPE, LC_MESSAGES, LC_MONETARY, LC_NUMERIC, LC_TIME and LANG.

LC_COLLATE  Locale to be used for ordering of strings.

LC_CTYPE  Locale to be used for character classification (letter, space, digit, etc.) and for interpreting byte sequences as multibyte characters.

LC_MESSAGES  Locale to be used for diagnostic messages.

LC_MONETARY  Locale to be used for interpreting monetary input and formatting monetary output.

LC_NUMERIC  Locale to be used for interpreting numeric input and formatting numeric output.

LC_TIME  Locale to be used for interpreting dates input and for formatting dates output.

MAIL  The location of the user's mailbox instead of the default in /var/mail, used by mail(1), sh(1), and many other mail clients.

MANPATH  The sequence of directories, separated by colons, searched by man(1) when looking for manual pages.

NLSPATH  List of directories to be searched for the message catalog referred to by LC_MESSAGES. See catopen(3).

PAGER  Default paginator program. The program specified by this variable is used by mail(1), man(1), ftp(1), etc, to display information which is longer than the current screen.

PATH  The sequence of directories, separated by colons, searched by csh(1), sh(1), system(3), execvp(3), etc, when looking for an executable file. PATH is set to ``/usr/bin:/bin'' initially by login(1).

POSIXLY_CORRECT  When set to any value, this environment variable modifies the behaviour of certain commands to (mostly) execute in a strictly POSIX-compliant manner.

PRINTER  The name of the default printer to be used by lpr(1), lpq(1), and lprm(1).

PWD  The current directory pathname.

SHELL  The full pathname of the user's login shell.

TERM  The kind of terminal for which output is to be prepared. This information is used by commands, such as nroff(1), which may exploit special terminal capabilities. See /usr/share/misc/termcap (termcap(5)) for a list of terminal types.

TERMCAP  The string describing the terminal in TERM, or, if it begins with a '/', the name of the termcap file. See TERMPATH below, and termcap(5).

TERMPATH  A sequence of pathnames of termcap files, separated by colons or spaces, which are searched for terminal descriptions in the order listed. Having no TERMPATH is equivalent to a TERMPATH of $HOME/.termcap:/etc/termcap. TERMPATH is ignored if TERMCAP contains a full pathname.

TMPDIR  The directory in which to store temporary files. Most applications use either /tmp or /var/tmp. Setting this variable will make them use another directory.

TZ  The timezone to use when displaying dates. The normal format is a pathname relative to /usr/share/zoneinfo. For example, the command env TZ=America/Los_Angeles date displays the current time in California. See tzset(3) for more information.

USER  The login name of the user. It is recommended that portable applications use LOGNAME instead.

Further names may be placed in the environment by the export(1) command and name=value arguments in sh(1), or by the setenv(1) command if you use csh(1). It is unwise to change certain sh(1) variables that are frequently exported by .profile files, such as MAIL, PS1, PS2, and IFS, unless you know what you are doing. The current environment variables can be printed with env(1), set(1) or printenv(1) in sh(1) and env(1), printenv(1) or the printenv built-in command in csh(1).

cd(1), csh(1), env(1), ex(1), login(1), printenv(1), sh(1), execve(2), execle(3), getbsize(3), getenv(3), setenv(3), setlocale(3), system(3), termcap(3), termcap(5), nls(7)

The environ manual page appeared in Version 7 AT&T UNIX.

DragonFly 5.9-DEVELOPMENT November 1, 2020 DragonFly 5.9-DEVELOPMENT
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102612.80/warc/CC-MAIN-20231210155147-20231210185147-00739.warc.gz
CC-MAIN-2023-50
5,570
105
https://lists.jboss.org/archives/list/[email protected]/message/3XQGSHB3TRSHS5LF6QI43PLP5VDC3RQD/
code
On Wed, 2009-08-26 at 13:39 +0200, Emmanuel Bernard wrote: I've been thinking about a DSL to build Lucene queries. What do you think of this proposal? What do you really gain compared to native Lucene queries? If your API achieves exactly the same as what's possible with Lucene, it is just a 'useless' wrapper. A wrapper around native Lucene queries would make sense if it could somehow use some of the Hibernate Search-specific metadata. As an extreme example, one could generate some meta classes a la JPA2. This way one could ensure that you can get help with which field names are valid.
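To make the objection concrete, here is a deliberately tiny, hypothetical fluent builder in Java. None of these names come from Hibernate Search or Lucene; it only shows what a DSL amounts to when it merely mirrors what native queries can already express, with no metadata behind it.

```java
// Hypothetical sketch of a fluent query DSL; the produced string stands in
// for a real Lucene query object.
public class QueryDsl {
    private final StringBuilder sb = new StringBuilder();

    // Add a keyword clause; field names are plain strings, so nothing stops a typo.
    public QueryDsl keyword(String field, String value) {
        if (sb.length() > 0) sb.append(" AND ");
        sb.append(field).append(':').append(value);
        return this;
    }

    public String build() { return sb.toString(); }

    public static void main(String[] args) {
        String q = new QueryDsl()
                .keyword("title", "hibernate")
                .keyword("author", "gavin")
                .build();
        System.out.println(q);  // title:hibernate AND author:gavin
    }
}
```

The JPA2-style metamodel idea in the mail would replace those raw strings with generated typed properties, so invalid field names fail at compile time instead of silently matching nothing.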
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00447.warc.gz
CC-MAIN-2020-50
590
10
https://www.altenpolska.pl/2021/01/15/al-senior-devops-eng/
code
We are a member of the French ALTEN Group, present in 25 countries around the world and employing over 34,000 engineers and IT specialists. Since 1988 we have delivered advanced IT systems for well-known brands and helped develop medicine and the renewable energy industry. ALTEN innovates in aeronautics and space, trains, electric and autonomous vehicles, and even space rockets. We are currently looking for a Senior DevOps Engineer to join our team.
- The DEEP program objective is to build a DELIVERY ENGINEERING ENTERPRISE PLATFORM – this is a one-stop shop for all development teams to quickly and autonomously BUILD & RUN consistently designed, secure, compliant and reliably operated software applications
- 3+ years as a proficient AWS engineer
- Strong background as a software, infrastructure or systems engineer
- Experience with software development and delivery processes and frameworks
- Expert in containers – Docker/Kubernetes
- Expert in infrastructure provisioning technologies – Terraform
- Expert in at least one modern scripting language (Python, Go, Node.js)
- Solid foundation in Linux and network administration and troubleshooting
- Expert with automation and continuous delivery practices
- Strong background with configuration management tools like Ansible, Puppet or Chef
- Expert in Git/GitHub/Bitbucket/GitLab
- Stable and long-term cooperation
- Certification and training opportunities
- Possibility to choose the type of contract (B2B also)
- Benefit package (MultiSport card, Medicover, MyBenefit)
- Unlimited growth and development opportunities
- Relocation support & relocation package
Do not hesitate to join our team!
The recruitment process and work during the pandemic are 100% remote. Please include the following in your CV: „I agree to the processing of personal data provided in this document for executing this and future recruitment processes by ALTEN Polska Spółka z o.o., ul Pańska 73, 00-834 Warszawa, pursuant to the Personal Data Protection Act of 10 May 2018 (Journal of Laws 2018, item 1000) and in agreement with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)” In this project you can choose a Work Contract or a B2B Contract. Are you wondering about setting up your own business? We can help you by providing the necessary advice.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00131.warc.gz
CC-MAIN-2021-10
2,487
25
https://blog.idorobots.org/entries/blog-reboot_.html
code
Finally got around to rebooting my old dev-blog! This time, instead of fighting my way against some new unforgiving piece of blogging software, I've decided to write my own piece of even less forgiving blogging software. Let's see how it goes this time... λ-blog is not merely a standalone tool you use to generate a static site with. λ-blog is a static site generator generator. This means that you can use it to create your very own blogging platform that will work precisely how you please. λ-blog is also a library of functions useful for static site generation and a Leiningen plugin that makes it easy to create your first static site. And finally, λ-blog features:
- A Markdown parser.
- Syntax highlighting.
- A set of default, ready-to-use HTML templates.
- A hacker-friendly way to override anything and everything without much hassle.
I got tired of jumping through many, many hoops to perform even the simplest tasks while using other blogging platforms with predefined filesystem layouts and other arbitrary restrictions. I am a lazy guy, so I wanted a blogging platform with an emphasis on hackability & customizability, with few restrictions but sane defaults - one that you can start using immediately and hack new features into as you go. I didn't find any, so I wrote λ-blog. Here you can find a more in-depth description of how λ-blog works internally and how to use it to your advantage.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00403.warc.gz
CC-MAIN-2022-21
1,415
10