url: stringlengths (13 to 4.35k)
tag: stringclasses (1 value)
text: stringlengths (109 to 628k)
file_path: stringlengths (109 to 155)
dump: stringclasses (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
http://www.linuxquestions.org/questions/linux-newbie-8/shell-script-cannot-startup-my-instance-4175502368/
code
Originally Posted by thiyagusham: I tried to write shell scripts. It's for local use. Whenever I execute my 10g database env file, my db should automatically come up.
$ . ./ora10.env
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Apr 20 21:34:55 2014
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
But I cannot start up my instance. Why?
You've said previously that you're an Oracle DBA with years of experience, and have been posting Oracle-related questions for at least two years now. In that time you've been told:
- Oracle 10g is OLD
- You need to do research on your own first, like reading the Oracle documentation
- Contact Oracle support, since Oracle is a commercial, pay-for product.
Given these things, there are MANY hints that can help you. This problem can have one of several solutions, depending on the error code. As an 'experienced DBA', this should be obvious. Also, a DBA with several years of Oracle experience, and someone who is also TRAINING OTHERS, should be able to see the 'error'. Hint: you are CONNECTED to Oracle...but you have not yet bothered to open a database. Please read the SQL*Plus basics documents:
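The step the reply is hinting at is worth spelling out: "Connected to an idle instance" means the instance has not been started yet, so the database must be brought up explicitly. A sketch of the standard SQL*Plus sequence (generic, not the poster's exact session):

```sql
SQL> STARTUP
-- or, staged, to see exactly where it fails:
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;
```

If STARTUP reports an error (missing pfile/spfile and so on), the ORA- code it returns narrows the problem down.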
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170914.10/warc/CC-MAIN-20170219104610-00203-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,277
16
https://cocoontech.com/threads/upb-or-insteon.8525/page-5
code
I am a little late to this thread... I have been a big Insteon user and developer since the product's initial release. After 4 years of having Insteon in my home, I am ready to move on. The primary reason is reliability. The iterations of the devices over the last 4 years have still not resulted in reliable and robust products. In my home, I have 45 Insteon devices (ranging over most of the various device types). As of last week, I have now replaced 42 of the 45 original devices. Only 2 SwitchLincs (the very first ones released) and 1 KeypadLinc are still working. I have had to replace many devices due to the infamous failure of the paddle design. Many more have been replaced due to various other hardware failures. Many existing switches (that I should replace) require you to press the paddle in exactly the right spot to get light control. Like many things in life, the cost of entry is only one part of the equation. SmartHome may be cheaper to get into, but you will pay in the long run with the headache of constantly replacing devices and relinking/reprogramming your HA controller. I would not recommend their solution to anyone. I am not sure where SH went wrong with their Insteon implementation of the Linc line. At one time, I had many of the old X10 SwitchLincs and KeypadLincs, which lasted for years and years with very few failures. I moved to Insteon with the promise of greater reliability, 'speed', and more sophisticated lighting control. It has not been worth it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649439.65/warc/CC-MAIN-20230604025306-20230604055306-00727.warc.gz
CC-MAIN-2023-23
1,484
4
https://marx-brothers.org/biography/index.htm
code
The Marx Family - Family tree
Minnie Schönberg and Sam Marx, the parents of the Brothers:
- Manfred Marx, the first child, was born in 1885 and died in infancy, before the age of three.
- Chico Marx (Leonard)
- Harpo Marx (Adolph/Arthur)
- Groucho Marx (Julius Henry)
- Gummo Marx (Milton)
- Zeppo Marx (Herbert)
Locations - Places where the Marx Brothers lived, worked, etc.
Genealogy - Check out the research done by Patrick McCaughey
Other people important in the life of the Marx Brothers:
- Margaret Dumont (link to IMDb)
- Margaret Dumont, information prepared by the Alex Film Society (linked through web.archive.org)
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00102.warc.gz
CC-MAIN-2023-23
623
15
https://www.ionos.ca/digitalguide/server/configuration/docker-on-raspberry-pi/
code
Docker on Raspberry Pi: a how-to guide
The mini-computer Raspberry Pi is good for more than just playing around or teaching children about hardware and programming. Users have set up web servers on Raspberry Pi, as well as cloud servers using ownCloud. It's even possible to combine Raspberry Pi and Nextcloud, and some users have built Raspberry Pi mail servers. Developers have also made the single-board computer their own. Web and software developers have been using the mini-computer for a while, for example to work with the Internet of Things. It seems high time, then, to explore the advantages of Docker on Raspberry Pi.
How to install Docker on Raspberry Pi
In the simplest case, Docker can be installed directly on Raspberry Pi's operating system. The Docker team provides a convenience installation script for this. The first step is to download and execute the script, which you can do with a cURL command.
curl -fsSL https://get.docker.com | sh
To make sure that the installation was successful, you can try out the "hello world" image.
docker run armhf/hello-world
If everything is in order, Docker should pull the image from the Internet and execute it, and you should get a greeting message. The image here isn't the normal "Hello world" image that would run on other systems, but one made specifically for ARM processors. Docker containers are made available by official developers as well as by members of the community. To minimize security risks, you should only use containers that are actively maintained and already in use by a good number of users. In Docker Hub you can also find containers put together just for Raspberry Pi. The repository also offers the option of viewing only "official images" or containers from "verified publishers".
Hypriot OS: the all-in-one solution
A small team of developers produced a special operating system for people who want a smoother experience with Docker and Raspberry Pi: Hypriot OS is pre-configured for running containers. The operating system is based on Debian but is kept minimal, making it well suited to both Raspberry Pi and Docker. The kernel is also specifically optimized for this purpose. Thanks to the lightweight structure of the system, it's possible to run several containers in parallel even on relatively low-powered hardware. Hypriot OS is installed like other operating systems for Raspberry Pi: first prepare an SD card on another computer with the Hypriot image. (The operating system can be downloaded for free from the official website or from GitHub.) Then insert the memory card into the Raspberry Pi. When it starts, the computer will boot from the card and run Hypriot. Using an SSH connection, you can then access the Raspberry Pi from the other computer and use Docker on it. Regardless of how you bring together Docker and Raspberry Pi, you'll also have to know how to work with the container software. Learn the first steps and pick up important information in our Docker tutorial.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00260.warc.gz
CC-MAIN-2024-18
3,139
14
https://forum.xwiki.org/t/interested-in-contributing-an-asciidoctor-syntax/8586
code
You are both partly right. Let's try an analogy with Groovy: Groovy.py is a Python implementation that supports Groovy 1.x (the language), and Groovydoctor is a Ruby implementation that supports the latest version of Groovy (still the language). It's the same with AsciiDoc (the language): AsciiDoc.py is the legacy Python implementation, and Asciidoctor is the current/reference implementation that supports the latest definition of the language to date. The AsciiDoc.py project will (soon) move out from https://asciidoc.org/. That website will become the homepage of AsciiDoc the language. @melix If you are interested in working on a Java implementation of the AsciiDoc language, you should reach out to the AsciiDoc WG. A first step would be to send an email to the asciidoc-wg mailing list (121 subscribers, hosted by the Eclipse Foundation) saying that you are interested in contributing to the Java implementation. You can join as an individual contributor or as a Gradle employee (if Gradle wants to join the AsciiDoc Working Group: Explore Our Members - Eclipse AsciiDoc | The Eclipse Foundation). I know that some people working at VMware/Pivotal have also expressed interest in working on the Java implementation. For reference, we will be working on a Java implementation as part of the Eclipse specification process (i.e., as an Eclipse project). I hope this clarifies things!
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00160.warc.gz
CC-MAIN-2021-17
1,428
11
https://gist.github.com/samsch
code
When should I use relational vs NoSQL databases? What is the task you need to accomplish? Frequently this question is asked as a "what should I use for a web app" question with no real details for what kind of data is being stored. As it turns out, that's ok! We actually have a type of database which directly fits "general purpose", by being fully robust and flexible. These are the relational databases, which are designed around the SQL standard. Because "general purpose" covers most tasks really well when it comes to databases (something that's not nearly as true in other areas of technology), you can just grab PostgreSQL or MySQL and use that for all data storage purposes, and likely have no issues for the full lifetime of the project.
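The "just grab PostgreSQL or MySQL" point can be sketched in a few lines. Here Python's built-in sqlite3 module stands in for a server-based relational database; the table and rows are illustrative, not from the gist:

```python
import sqlite3

# A "general purpose" relational store: schema, constraints, and ad-hoc
# queries all come for free with plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT UNIQUE          -- the database enforces uniqueness
    )
""")
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))

# Parameterized ad-hoc query, no ORM or migration tooling needed.
row = conn.execute("SELECT name FROM users WHERE email = ?",
                   ("ada@example.com",)).fetchone()
print(row[0])
```

Swapping the connect call (and the DB-API driver) for PostgreSQL or MySQL keeps the overall shape identical, which is exactly the "general purpose" property the post describes.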
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00167.warc.gz
CC-MAIN-2020-50
747
4
http://www.java-mobiles.com/tag/racing/xrace-extra-download-free-37911.html
code
XRace Mobile is a small 2D space race game where the track is 4000 pixels long. The track and the spaceships can be edited by users: rename XRace.jar into a zip archive, edit the PNG images, and rename the zip archive back into XRace.jar. You record your races and play against yourself by choosing an old race. When you start XRace Mobile, you select the record name and the name of the old race. XRace Extra is a modified version of XRace Mobile: try to reach the right side of the game field. There are defense bases and ten different types of enemies.
Requirements: MIDP 2.0, CLDC 1.0
Other Java freeware from developer «Astro Solutions»:
- XholeWorld: You are followed by other spaceships. Escape, and try to crash those spacecraft into the black holes.
- Kmoon3DMo: a version of Kmoon3D with textures.
- XRace Mobile: the game described above.
- W3D2: a more accurate and efficient version of W3D, the library for easy 3D programming, rebuilt from the ground up. Contains floating-point simulation and simulation of trigonometric functions. The sample program Sample.java, which displays a rotatable textured cube, is 2741 bytes long.
- RoboFight2D: RoboFight is a 3D game for mobile phones. You control the pyramid in the center and shoot at the other pyramids. After victory or defeat the game stops and must be restarted. The treasure balls are black. Distortions in the X-axis do not disturb anything.
- MFTP (Java): With MFTP you can view XML files that describe the files and folders of FTP servers. Descriptions of each file can appear if they were inserted by the provider. The XML files are compatible with HyperFTP.
- AlgoMo (Algolight Mobile): Algolight is a programming language that can be used to implement simple mathematical algorithms that operate on variables, vectors and matrices, and includes control structures for loops, while loops and if/else. The program can also be used as a simple calculator; it supports some trigonometric functions, roots, logical expressions and relation symbols.
- W3DGL: This library (distributed as a .java file) combines the essential features of mobile 3D graphics in a very small engine, so searching for standard features in the huge API is unnecessary. You create Triangles and Quads, which only require three (four) vectors and an image object (for texturing) as arguments, and add them to an instance of Engine.
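The editing trick described above works because a .jar file is an ordinary zip archive: rename XRace.jar to .zip, swap the PNGs inside, and rename it back. A sketch of that round trip with an in-memory archive (the member names are illustrative, not the real contents of XRace.jar):

```python
import io
import zipfile

# Build a tiny "jar" in memory: a jar is just a zip archive.
jar_bytes = io.BytesIO()
with zipfile.ZipFile(jar_bytes, "w") as jar:
    jar.writestr("track.png", b"old track pixels")
    jar.writestr("ship.png", b"ship pixels")

# "Edit" one image by rewriting that member into a new archive,
# as a user would after renaming XRace.jar to XRace.zip.
edited = io.BytesIO()
with zipfile.ZipFile(jar_bytes) as src, zipfile.ZipFile(edited, "w") as dst:
    for name in src.namelist():
        dst.writestr(name, b"new track pixels" if name == "track.png"
                     else src.read(name))

with zipfile.ZipFile(edited) as jar:
    print(sorted(jar.namelist()))  # both members survive the rewrite
```

On disk the only extra step is the rename in each direction; the archive contents are untouched by it.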
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121665.69/warc/CC-MAIN-20170423031201-00356-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,722
13
http://ggftw.com/forum/1755834-post2421.html
code
It seems that my example was poorly chosen and misled you; however, the initial point still stands: choose someone that you both trust. I was typing it in a rush and didn't think anyone would spend the time picking at small pointers. Yes, a level 400 can deceive you, and so can anyone else. By the logic you applied, anyone can be deceived. On these forums we have even witnessed someone who often dealt with MyShop trades, and who happened to be a well-known player, scam people.
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661768.23/warc/CC-MAIN-20160924173741-00116-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
466
2
https://community.ivanti.com/thread/28736
code
I'm running 9.6 SP2. We did all our HII testing on 9.5 SP2, but since upgrading to 9.6 SP2 there is the option to add a second HII action in the System Configuration part of the template, after installing an agent. My problem is that we are using an HII action to run a driver install package on 2 different models, an E6510 and an OptiPlex 780; they were tested and provisioning went well. However, we did not test on an E6400, and we don't have any driver packages assigned for this model. During provisioning, when it gets to the HII action in the System Configuration section, it hangs forever. At the machine itself, if you try to do anything it tells you that Explorer has hung and wants to close; that's all you can do, other than Ctrl+Alt+Del, which brings up Task Manager and then hangs. After rebooting and logging in I went through all the log files in Windows\temp and in the ldclient log folder. Nothing says what went wrong, other than that there was an error. One note here: take out the HII action and provisioning works on this model. My guess is that if there isn't an action to be done in HII, it hangs. I don't want to make a template for every model we don't have an HII driver package for. Any suggestions?
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00135.warc.gz
CC-MAIN-2019-04
1,195
7
http://reference.wolfram.com/legacy/v5/Built-inFunctions/GraphicsAndSound/GraphicsPrimitives/Graphics.html
code
Graphics[primitives, options] represents a two-dimensional graphical image. Graphics is displayed using Show. The following graphics primitives can be used: The sound primitives SampledSoundList and SampledSoundFunction can also be included. The following graphics directives can be used: The following options can be given: Nested lists of graphics primitives can be given. Specifications such as GrayLevel remain in effect only until the end of the list which contains them. Graphics[Graphics3D[ ... ]] generates an ordinary 2D graphics object corresponding to 3D graphics. The same works for SurfaceGraphics, ContourGraphics and DensityGraphics. The standard print form for Graphics[ ... ] is -Graphics-. InputForm prints the explicit list of primitives. See Section 2.10.1. See also: Plot, ListPlot, ParametricPlot. Related package: Graphics`Graphics`. New in Version 1.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00556.warc.gz
CC-MAIN-2017-47
954
14
https://santpaumemoryunit.com/about-us/lidia-vaque/
code
Dr. Lídia Vaqué Alcázar obtained her degree in Biomedical Sciences from the University of Barcelona. In 2020, she received her PhD in Medicine and Translational Research from the University of Barcelona, having developed her thesis under the supervision of Dr. David Bartrés Faz and Dr. Roser Sala Llonch. Her main scientific interest is the study of the brain mechanisms that support preserved cognition in aging, using multimodal neuroimaging techniques and non-invasive brain stimulation tools. In 2022, she joined the Sant Pau Memory Unit as a Margarita Salas postdoctoral researcher.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474569.64/warc/CC-MAIN-20240224212113-20240225002113-00885.warc.gz
CC-MAIN-2024-10
597
2
https://docs.oracle.com/cd/E19528-01/819-0696/behhdjhc/index.html
code
Use the information in this section to plan the installation and configuration of Sun Cluster HA for SAP. The information in this section encourages you to think about the impact your decisions have on the installation and configuration of Sun Cluster HA for SAP.
- Retrieve the latest patch for the sapstart executable – This patch enables Sun Cluster HA for SAP users to configure a lock file. For details on the benefits of this patch in your cluster environment, see Setting Up a Lock File.
- Read all of the related SAP online service-system notes for the SAP software release and database that you are installing on your Sun Cluster configuration – Identify any known installation problems and fixes.
- Consult SAP software documentation for memory and swap recommendations – SAP software uses a large amount of memory and swap space.
- Generously estimate the total possible load on nodes that might host the central instance, the database instance, and the application server, if you have an internal application server – This consideration is especially important if you configure the cluster to ensure that the central instance, database instance, and application server will all exist on one node if failover occurs.
- Ensure that the SAPSIDadm home directory resides on a cluster file system – This consideration enables you to maintain only one set of scripts for all application server instances that run on all nodes. However, if you have some application servers that need to be configured differently (for example, application servers with different profiles), install those application servers with different instance numbers, and then configure them in a separate resource group.
- Install the application server's directory locally on each node instead of on a cluster file system – This consideration ensures that another application server does not overwrite the log/data/work/sec directory for the application server.
- Use the same instance number when you create all application server instances on multiple nodes – This consideration ensures ease of maintenance and administration, because you will only need to use one set of commands to maintain all application servers on multiple nodes.
- Place the application servers into multiple resource groups if you want to use the RGOffload resource type to shut down one or more application servers when a higher-priority resource is failing over – This consideration provides flexibility and availability if you want to use the RGOffload resource type to offload one or more application servers for the database. The value you gain from this consideration supersedes the ease of use you gain from placing the application servers into one large group. See Freeing Node Resources by Offloading Noncritical Resource Groups in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information on using the RGOffload resource type.
- Create separate scalable application server instances for each SAP logon group.
- Create an SAP lock file on the local instance directory – This consideration prevents a system administrator from manually starting an application instance that is already running.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651465.90/warc/CC-MAIN-20180324225928-20180325005928-00085.warc.gz
CC-MAIN-2018-13
3,189
11
https://www.behance.net/gallery/52084297/Job-Application-Chatbot
code
JOB APPLICATION CHATBOT
Sometimes it can be hard to stand out and get noticed in the sea of competition. It can be even more challenging when you're searching for a new position in an industry where talent is becoming a commodity. However, I believe passion trumps talent. And when I started looking for a new work opportunity, I wanted to do things a little differently.
Why I Hate Résumés
When beginning your job search, the first thing you need to do is update your résumé. However, everyone knows that résumés are terrible at predicting how well someone will do on the job. What's more, when reviewing hundreds of applications, how is one supposed to compare these documents when they're not even standardized? Things get even more ridiculous when employers are looking for every reason to put a CV in the "no" pile. Even where you live could be one of them. To overcome this screening-and-elimination process, I decided to throw out the résumé altogether and simply "show" what I can do, rather than just "tell".
Finding An Alternative
A passion project makes for a much better story than casually telling your network you're looking for work. Therefore, as an online marketer with a strong interest in new technologies, I decided to replace my résumé with a chatbot. This enabled me to showcase the versatility of my skills while reaching a larger audience of potential employers.
AI vs. Scripted Chatbot
From the start, I knew a scripted chatbot was the way to go. Today, AI (Artificial Intelligence) leaves many disappointed with the experience, because the software isn't quite there yet. I also didn't want to overload the chatbot with content. If people asked a question that wasn't programmed, they were encouraged to contact me for a one-on-one conversation. Achieving a high number of chatbot users wasn't the goal of this project. The main objective was to get contacted by potential employers or recruiters to talk about work opportunities.
Therefore, every touchpoint (my website, portfolio, LinkedIn profile) was going to be just as important for generating quality leads.
Crafting The Perfect Landing Page
I also updated my personal website. A new single-page design with two call-to-action buttons encouraged visitors to "chat with my bot" or to "grab a coffee" (i.e. contact me to meet in person). The navigation menu contained links to my social profiles and résumé, while the "Send to a colleague" button nudged visitors to share my website.
Transforming the Facebook Page
Because a Messenger bot needs to be connected to a Facebook Page, I decided to get the most out of it. Using Facebook's Milestones feature, I showcased my past experience and achievements, and a cover video was used to show the chatbot in action. Additionally, periodic posts containing images and videos kept the Facebook Page updated with new content to explore.
Reviving the Paper Résumé
We've all been saying it for years: "CVs will soon be history". However, as long as employers and recruiters ask for printer-friendly résumés, we're going to have to keep making them. Thus, I dusted off my Adobe InDesign skills and created an interactive PDF. The layout incorporated the Facebook Messenger Code and encouraged readers to check out the Facebook Page.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00420.warc.gz
CC-MAIN-2020-40
3,374
16
https://aviation.stackexchange.com/questions/90702/can-a-localizer-backcourse-signal-be-reversed-at-the-site-antenna?noredirect=1
code
Can we reverse the guidance from the transmitter? Can a Localizer Back-course signal be reversed (at the site antenna)? From a technical standpoint it's entirely possible, but both beams, front and back, are reversed at the same time. Conceptually this is obtained by reversing the order of the individual antennas of the localizer array: the leftmost antenna becomes the rightmost antenna, etc.
Localizer array at Melbourne airport (seen from the back), source
This boils down to swapping the antenna inputs. As the inputs are symmetrical relative to the array center, except for the SBO phase, in practice we would just have to invert the SBO phase. See this array diagram showing signal amplitude and phase for each course and clearance antenna.
Case of the Aspen missed-approach localizer
In this section I'm going to separate two concepts:
- One related to the antenna: the back lobe, which is the area where the antenna radiates rearward, as many antennas do. The opposite is the front or main lobe.
- One related to the approach procedure: the back-course, which refers to the direction opposite to the approach the localizer is primarily used for.
While in the usual case the back-course is sent using the back lobe (making back-course and back lobe near synonymous), in the unusual I-PKN, the front lobe is used to create back-course guidance. If so, are there any airports where this has been done? The Aspen back-course localizer I-PKN which motivates your question is possibly one of these localizers with an inverted signal. Aspen has a LOC/DME approach based on two localizers: I-ASE, classic, directed at 331°, is used to land and has no back-course. I-PKN, directed at 301°, is used on the missed-approach trajectory to provide guidance back to the holding via the LINDZ waypoint. The I-PKN course is referred to as a "back-course".
It's not a back-course in the sense that it would give additional access to the opposite runway for free, like other localizers do using the back lobes of their antennas. Its only use is during the missed approach. In the case of I-PKN, the signal is not sent by the back of the antennas, but by the front, like a regular LOC/ILS approach. This is visible in this picture of the array taken from its NW:
I-PKN array, facing 303°, from Google Street View
I believe each of the log-periodic antennas has its shortest element at its front (up in the picture), and therefore the front lobes are used to transmit the guidance signal. Why use the front lobe of the radiation pattern? Simply because this is the most efficient option: the gain in the front lobe is larger than in the back lobe. From a signal standpoint, comparing the front beam, the back beam, and the back beam with a reversed signal to create "back-course" guidance, we see that the signal radiated by the front lobe with an inverted signal is equivalent to the signal radiated by the back lobe of a conventional localizer. I believe this is what is used for Aspen I-PKN. The mention "normal sensing" on the approach plate means this guidance drives the indicator needle like any conventional back-course flown outbound, that is, the needle deviates to the right to indicate the beam center is on the right side (top of the picture below).
More detailed presentation of the localizer array
When both directions are used, each individual antenna transmits the same signal in both directions (the back-course beam is a by-product of the front-course signal).
Actual radiation of a directional antenna, source
The guidance signal is the result of the lateral variation of the 90/150 Hz modulation depths, making 150 Hz predominant on the left side (seen from the array, looking forward) and 90 Hz predominant on the right side.
This variation itself is obtained by applying a different mix of signals (CSB and SBO) to each individual antenna composing the localizer array. See "How is varying modulation depth achieved by localizer ground transmitters?" for how it works exactly. To reverse the guidance it is sufficient to swap the left/right antenna inputs. If the array is used in a single direction, there is no difficulty; otherwise it's not possible to affect only one side, and two separate arrays are required. But in that case, to get better performance, the second array would be located at the other runway end and would use a different frequency; said otherwise, it would be another complete localizer.
What does ICAO say about the back lobe and reversed guidance?
There are no recommendations regarding the back-lobe signal characteristics, and no mention of sensing inversion, in ICAO Annex 10. Back-course approaches are local choices, outside the standards; ICAO discourages them in working documents, and they have been decommissioned almost everywhere except in the US, replaced by separate navaids or GNSS-based procedures.
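The SBO-phase argument above can be sketched numerically. This is a deliberately simplified toy model of my own (the constants are illustrative, not from ICAO Annex 10): DDM is the difference between the 150 Hz and 90 Hz modulation depths, and the spatial difference term changes sign with the SBO phase, so inverting the SBO reverses the sensing on both sides of the course at once:

```python
def ddm(side, sbo_phase=+1):
    """Toy localizer model (not a real antenna simulation).

    side: -1 left of course, +1 right of course, seen from the array
    looking forward. sbo_phase = -1 models swapping the antenna
    inputs, i.e. inverting the SBO phase.
    """
    csb = 0.20                     # equal 90/150 Hz depth on course
    sbo = 0.05 * side * sbo_phase  # spatial difference term
    depth_90 = csb + sbo
    depth_150 = csb - sbo
    return depth_150 - depth_90    # DDM drives the needle

# Normal sensing: left of course (array view), 150 Hz predominates.
print(ddm(-1))                # positive DDM
# Inverting the SBO phase reverses the guidance on both sides at once.
print(ddm(-1, sbo_phase=-1))  # negative DDM
```

The sign flip of the single `sbo` term is the numerical counterpart of "swapping the antenna inputs": every point in space sees the opposite predominant tone, front lobe and back lobe alike.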
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474746.1/warc/CC-MAIN-20240228211701-20240229001701-00742.warc.gz
CC-MAIN-2024-10
4,831
27
https://forum.kiwisdr.com/index.php?p=/discussion/comment/15231
code
Trying to update from v1.548 [fixed]
edited September 2022 in Problems Now Fixed
When I select UPDATE "check now", I get the error "Error determining the latest version -- check log". Copy of the end of the LOG:
ADMIN connection closed
Wed Sep 7 22:30:20 00:36:06.142 ........ PWD isLocal_if_ip: flg=0x18 fam=2 socktype=1 proto=6 addrlen=16 192.168.2.102
Wed Sep 7 22:30:20 00:36:06.142 ........ L PWD isLocal_if_ip: TRUE IPv4/4_6 remote_ip 192.168.2.102 ip_client 192.168.2.102/0xc0a80266 ip_server[IPv4] 192.168.2.161/0xc0a802a1 nm /24 0xffffff00
Wed Sep 7 22:30:20 00:36:06.145 ........ TLIMIT exempt local connection from 192.168.2.102
Wed Sep 7 22:30:20 00:36:06.145 ........ L PWD admin config pwd set FALSE, auto-login TRUE
Wed Sep 7 22:30:20 00:36:06.146 ........ L PWD admin ALLOWED: no config pwd set, but is_local
Wed Sep 7 22:30:20 00:36:06.146 ........ L PWD admin admin ALLOWED: from 192.168.2.102
Wed Sep 7 22:30:31 00:36:17.328 ........ L UPDATE: force update check by admin
Wed Sep 7 22:30:31 00:36:17.332 ........ L UPDATE: checking for updates
Wed Sep 7 22:30:33 00:36:18.886 ........ UPDATE: fetch origin status=0x00008000
Wed Sep 7 22:30:33 00:36:19.600 ........ L UPDATE: Makefile fetch error, no Internet access? status=0x00008000 WIFEXITED=1 WEXITSTATUS=128
Wed Sep 7 22:30:33 00:36:19.604 ........ task update_task:P3:T002((1000.000 msec) TaskSleep) exited by returning
Looking at the update history on the forum I read that we go from version 545 directly to version 552; nothing about version 548 that I am using now. What can I do to clear my problem?
Versions without anything of interest to users don't get a forum post. Use the console tab (or ssh/PuTTY to the Kiwi directly) and look at the file /root/build.log. Post it here or email it to [email protected]. A less painful alternative, if your Kiwi is accessible on the Internet (public or not), is to set a temporary admin password and email it to [email protected], and I can take a look.
Recent versions have more comprehensive error log messages when a build fails (they identify some specific failure cases, e.g. disk full, git clone corrupted, etc). And there are some pre-programmed buttons on the console tab to help diagnose and repair build issues. Of course you have to get updated to the more recent versions first -- Catch-22! Okay, so your git clone of the Kiwi sources is trashed (for whatever reason, power failure at an inopportune time, etc). Please try this from the admin page, console tab:

mv Beagle_SDR_GPS B.bad

The build will take 20 - 30 minutes. Then click the restart button on the control tab. You should be at the latest version now, and able to receive future automatic updates if you have them enabled on the update tab.

Yes. All is good now. Small correction to your instructions: "...admin page, control tab." should read the Console tab. Thank you for the help.

I wanted to repair my receiver but I get this:

Debian GNU/Linux 8
BeagleBoard.org Debian Image 2016-05-13
default username:password is [debian:temppwd]
Last login: Thu Nov 3 18:40:27 2022 from 192.168.1.222
root@kiwisdr:~# mv Beagle_SDR_GPS B.bad
mv: cannot move 'Beagle_SDR_GPS' to 'B.bad': Read-only file system

Hi @ArturPL! Usually a Linux filesystem gets mounted read-only if it has errors. You can try to check and fix it with the fsck utility, or fully re-install this BBG from a microSD card.

I already tried to restore from SD, but it loops and again cannot be reached at the local address via http (ssh works); probably the backup was already overwritten with an error in 1.566. I can try a card from a working 1.567 Kiwi or the original one that has never been used -- which is better to choose?

You can try the image from http://kiwisdr.com/quickstart/#id-net-reflash

OK, I'll write what came out of this later.

Managed to! I created an SD card from the flash image and the Kiwi came alive. I also found the culprit: the power supply. I am surprised, however, that the other two of my Kiwis started from it without any problems. Currently I am using an emergency ZHAOXIN RXN-305D, set to 5 A at a voltage of 5.3 V. It works very steadily and, surprisingly, does not interfere with LF/MW like typical switch-mode power supplies do. Thank you for your help!

Glad to hear you fixed it!
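A side note on diagnosing that "Read-only file system" error: Linux remounts a filesystem read-only when it detects errors, and you can spot this by looking at /proc/mounts. A small Python sketch (the field positions follow the standard /proc/mounts layout; the sample lines below are invented for illustration):

```python
# Sketch: spot read-only mounts (the cause of the "mv: ... Read-only
# file system" error above) by parsing /proc/mounts-style data.
# Field layout per line: device mountpoint fstype options dump pass

def readonly_mounts(mounts_text: str) -> list[str]:
    """Return mount points whose options include 'ro'."""
    ro = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and "ro" in fields[3].split(","):
            ro.append(fields[1])
    return ro

sample = (
    "/dev/mmcblk0p1 / ext4 ro,relatime,errors=remount-ro 0 0\n"
    "proc /proc proc rw,nosuid,nodev,noexec 0 0\n"
)
print(readonly_mounts(sample))  # -> ['/']
# On the Beagle itself: readonly_mounts(open('/proc/mounts').read())
```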
https://msdn.microsoft.com/en-us/library/ms187709(v=sql.100).aspx
Event Handlers Tab

Use the Event Handlers tab of SSIS Designer to build a control flow in an Integration Services package. An event handler runs in response to an event raised by the package, or by a task or container in the package.

Create the control flow by dragging graphical objects that represent SSIS tasks and containers from the Toolbox to the design surface of the Event Handlers tab, and then connecting the objects by using precedence constraints to define the sequence in which they run. Additionally, to add annotations, right-click the design surface, and then on the menu, click Add Annotation.
http://forum.vintagesynth.com/viewtopic.php?p=721829
Ah, you must've been playing modern organs with integrated reeds. As they would say over on Vintage Organ Explorer, what you gain in volume control, you lose in the discrete quality of pipe oscillators.

Automatic Gainsay wrote: First and foremost, I miss you. Paraphonic does not require divide-down. Many paraphonic synths ARE divide-down... but some are not. Each pipe is an oscillator that doesn't go through a filter... but does go through a single "amp" which is controlled by the swell pedal. Certainly the volume of the overall sound can be controlled by adding "oscillators," but timbre is also changed by that.

synthroom wrote: A swell pedal is just a foot-actuated volume knob, not an "amp" or EG. You're totally off on this line of reasoning...

To be honest, I've not noticed swell pedals when playing. They're mainly for reeds, I think?
https://lids.mit.edu/news-and-events/news/crowdsourcing-big-data-analysis
In the analysis of big data sets, the first step is usually the identification of “features” — data points with particular predictive power or analytic utility. Choosing features usually requires some human intuition. For instance, a sales database might contain revenues and date ranges, but it might take a human to recognize that average revenues — revenues divided by the sizes of the ranges — is the really useful metric. MIT researchers have developed a new collaboration tool, dubbed FeatureHub, intended to make feature identification more efficient and effective. With FeatureHub, data scientists and experts on particular topics could log on to a central site and spend an hour or two reviewing a problem and proposing features. Software then tests myriad combinations of features against target data, to determine which are most useful for a given predictive task. In tests, the researchers recruited 32 analysts with data science experience, who spent five hours each with the system, familiarizing themselves with it and using it to propose candidate features for each of two data-science problems. The predictive models produced by the system were tested against those submitted to a data-science competition called Kaggle. The Kaggle entries had been scored on a 100-point scale, and the FeatureHub models were within three and five points of the winning entries for the two problems. But where the top-scoring entries were the result of weeks or even months of work, the FeatureHub entries were produced in a matter of days. And while 32 collaborators on a single data science project is a lot by today’s standards, Micah Smith, an MIT graduate student in electrical engineering and computer science who helped lead the project, has much larger ambitions. FeatureHub — like its name — was inspired by GitHub, an online repository of open-source programming projects, some of which have drawn thousands of contributors. 
Smith hopes that FeatureHub might someday attain a similar scale. “I do hope that we can facilitate having thousands of people working on a single solution for predicting where traffic accidents are most likely to strike in New York City or predicting which patients in a hospital are most likely to require some medical intervention,” he says. “I think that the concept of massive and open data science can be really leveraged for areas where there’s a strong social impact but not necessarily a single profit-making or government organization that is coordinating responses.” Smith and his colleagues presented a paper describing FeatureHub at the IEEE International Conference on Data Science and Advanced Analytics. His coauthors on the paper are his thesis advisor, Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems, and Roy Wedge, who began working with Veeramachaneni’s group as an MIT undergraduate and is now a software engineer at Feature Labs, a data science company based on the group’s work. FeatureHub’s user interface is built on top of a common data-analysis software suite called the Jupyter Notebook, and the evaluation of feature sets is performed by standard machine-learning software packages. Features must be written in the Python programming language, but their design has to follow a template that intentionally keeps the syntax simple. A typical feature might require between five and 10 lines of code. The MIT researchers wrote code that mediates between the other software packages and manages data, pooling features submitted by many different users and tracking those collections of features that perform best on particular data analysis tasks. In the past, Veeramachaneni’s group has developed software that automatically generates features by inferring relationships between data from the manner in which they’re organized. 
When that organizational information is missing, however, the approach is less effective. Still, Smith imagines, automatic feature synthesis could be used in conjunction with FeatureHub, getting projects started before volunteers have begun to contribute to them, saving the grunt work of enumerating the obvious features, and augmenting the best-performing sets of features contributed by humans.
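To make the "feature" idea concrete: in this setting, a candidate feature is just a few lines of Python that derive a new predictive column from raw fields. The following is a rough sketch in that spirit, not FeatureHub's actual template (the record layout and function shape are invented for illustration, mirroring the article's average-revenue example):

```python
# Toy illustration of a candidate "feature" function in the spirit of
# FeatureHub: a few lines of Python deriving a column from raw fields.
# The sales-record layout here is made up for the example.

def average_revenue(record):
    """Revenue divided by the size of its date range, in days."""
    days = (record["end_day"] - record["start_day"]) or 1  # avoid div-by-zero
    return record["revenue"] / days

sales = [
    {"revenue": 900.0, "start_day": 0, "end_day": 30},
    {"revenue": 500.0, "start_day": 0, "end_day": 10},
]

feature_column = [average_revenue(r) for r in sales]
print(feature_column)  # -> [30.0, 50.0]
# The second range earns more per day despite the lower total revenue --
# exactly the kind of signal a raw "revenue" column would hide.
```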
https://1sheeld.com/arduino-robotics-series-rc-car-robot-arm/
Arduino keeps standing out from the crowd of all-around development boards due to its ease of use and budget price. As a result, almost anyone can bring his own idea to life, no matter how crazy or even useless it seems to the whole world, except to its maker! It's also noticeable that maker communities are reaching for the stars these days, with an enormous bunch of DIY projects out there, complete with tutorials. Almost anything that comes to mind and can be achieved with an Arduino board has already been made by one or more makers! .. Okay, so am I innovative enough to come up with a whole new DIY Arduino project that no one has ever made before? Well, I don't claim that I am that smart, but I'm also not too shy to think I can build something new with the help of the common Arduino DIYs around me: Arduino robot projects! And who doesn't like the excitement of seeing something he programmed move like a baby! Yes, I am falling in love with Arduino robotics projects, and the new thing I have built here is a multi-function Arduino robot that makes use of some already-made Arduino robot projects to come up with my own Arduino Monster Robot.

What exactly does this monster robot do? Basically, it's a 4-wheel Arduino car made of cardboard/foam that can be controlled remotely from your smartphone like a traditional Arduino RC car .. BUT with some freaking awesome features, all in the same car:

Can be controlled in 3 different ways using the phone: gamepad, accelerometer gestures and voice commands.
Autonomous drive, avoiding obstacles with ultrasonic sensors and human faces with the phone's camera.
Line-follower robot with no IR sensors at all .. it detects the line with the phone's camera!
A robotic arm placed on top of the car, controlled with the phone's orientation gestures, with a hand gripper.

And guess what, all these things you can make with only one shield!
It's the 1Sheeld, since it has all you need to achieve the 4 projects, plus endless more features you can discover here. Next, I will give a good description of all the shields I used for each project, with the project tutorial page & video, in the Arduino Robotics series section below.

The first project in this Arduino Robotics series is a basic 4-wheel robot that uses 4 DC motors with a driver and an Arduino Mega, and can be controlled with a joystick/gamepad. But this wasn't enough for me, as I found that 1Sheeld can use the phone's accelerometer and mic to enable control with the phone's gestures and voice commands, respectively.

How to use: Once you open the app and get connected to the 1Sheeld, navigate to the Gamepad and play with your car. Whenever you want to use voice commands, just navigate to Voice Recognition and press START to let the phone listen for your command, whether it is "forward", "backward", "right" or "left". Finally, you can activate accelerometer control by navigating to the GLCD and checking Accelerometer. And .. yes .. I was about to forget that you can change the car's crazy RGB bottom light between red, green and blue from the Gamepad buttons, or even turn it all off :)

The next project is all about obstacle avoiding, using the previous robot I built plus 3 ultrasonic sensors and a servo motor. The robot avoids things and people at the same time! Thanks to the 1Sheeld's Face Detection Shield, it can detect any human face ahead and move the robot away from it.

How to use: Just navigate to the GLCD, select the "Auto" option, and watch your robot move autonomously! The error percentage is at its lowest value thanks to the well-designed & tested placement of the sensors, which avoids obstacles ahead as much as it can. Also, the robot avoids humans once it detects a face, using the Face Detection feature with the phone's camera. And guess what?
You can, surely, go back to manual control with either the Gamepad or the Accelerometer.

The good thing about this project is that if you have already made the previous 2 projects of this Arduino Robotics series, the Arduino Bluetooth RC Car and the Arduino Obstacle Avoiding Robot, then you will need nothing new to build: no components or tools, no materials .. just upload the line-following project code and you are ready to go! Briefly, the robot can track any color, not only white and black. Thanks to the Color Detection Shield of the 1Sheeld, your phone's camera will detect the line color and 1Sheeld will make your robot follow the line:

When it detects black on the left while the middle and right are still white, the line is turning right.
When it detects black on the right while the middle and left are still white, the line is turning left.
When it detects white on the left, middle and right, the line is going straight forward.

How to use: All you need to do is prepare the track, select the "Line following" option from the GLCD, and release your robot over the track. Again, you can go back to the other modes, "Manual" and "Auto", by selecting either one from the GLCD screen.

Here comes the last but most enjoyable project of this Arduino Robotics series: a robotic arm with a gripper. It's placed on top of the car/robot surface and can be controlled with your hand gestures.

How to use: After all this, I guess you are familiar with the robot before you even start building it! Yes, select "enable robotic arm" and control the arm with your hand gestures, and close/open the gripper. It's noticeable here that "enable robotic arm" is just a checkbox, not a radio option! This lets you choose it together with manual mode, so that you can drive the robot and grip things on its way .. all at the same time!

Then comes all the headache of drawing the robot faces on the cardboard/foam sheet .. then cutting them all out .. and gluing them together ..
Furthermore, make the holes and customizations required for outer components like the ultrasonic sensors .. Seems like you think I have chosen the hardest way to make it, but then you'd miss the enjoyable part of making it manually instead of CAD design and laser-cut parts! Come on! .. release the maker child inside you, get your hands a bit dirty here, and feel the excitement of traditional design with a pencil and ruler.

Struggles & tips along the way of this Arduino Robotics journey. Here are some points you need to pay attention to, as I experienced them myself while going through the making of all the projects:

Be careful while using the glue gun with the foam material. Use just a little bit of glue if you went for foam board instead of cardboard, as the former gets dirty easily with extra glue.
Use a thin cutter tool instead of the traditional big one for more accurate cutting.
Hold the robot on something of a proper thickness so that you don't push too hard on the glue of the wheels.
Fix the Arduino Mega properly so that you can plug in the programming USB cable easily. Don't do it sloppily, like me!
Place the side ultrasonic sensors at angles so that they cover more extreme angles to the right and left.
While testing the line-follower code, decrease the robot speed as much as you can, so that the robot has time to process the camera captures.
While working on the last project, the robot arm, it's better to fix all the servo axes with SCREWS, not GLUE. This was a big mistake of mine, as the glue couldn't handle the weight of the arms. Sure, foam/cardboard has hardly any weight to mention, but with the wires, glue everywhere and obviously the servo motors, it adds up to a noticeable weight.
You know the 2 wires that pull/push the gripper to close/open it? The less bendable they are, the more accurate and stronger the closing/opening will be, and hence the more gripping strength your gripper will have.
Also, for the electronics and connections:

I realized that one 5V regulator wasn't enough to supply the required current for both the servo motors and the Arduino, 1Sheeld and ultrasonic sensors. So I had to use a second 5V regulator to supply the servo motors, and that's what you should do from the beginning.
The robot uses 2 x 3.7V batteries in series, the popular 18650 power-bank cells, to get about 7.4V. But with all these DC motors and servo motors, the 2 batteries run out so quickly; that's why I used another 2 x 3.7V series pair and connected it in parallel with the first, to get the same 7.4V but with double the capacity .. you got it now, yes .. for longer operation time.
It's better to connect a push button to the Mega's reset pin and glue it to the robot body, to easily reset without opening the robot.
Never use a blue LED for power indication like I did! Its light is freakishly strong and it draws more current than a red or green one does.

Finally, I hope I have made this Arduino Robotics project series as easy to make as possible, and that you find these 4 projects inspiring for more awesome and funny Arduino projects. In the meantime, I will be happy if you share any question in mind in the comments below 🙂 Stay tuned for more awesome Arduino project series with 1Sheeld ....
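As a footnote, the three-zone line-detection rules from the line-follower project map directly to a few lines of code. Here is a rough sketch of the decision logic only, written in Python for clarity (the real project runs on the Arduino and gets its color readings from 1Sheeld's Color Detection Shield; the function name and the boolean inputs are illustrative assumptions):

```python
# Toy sketch of the three-zone line-following rules described in the
# post. Each input is True where that camera zone sees the line color
# (black); the pattern-to-steering mapping follows the post's rules.

def steer(left: bool, middle: bool, right: bool) -> str:
    """Return the drive command for one set of zone readings."""
    if left and not middle and not right:
        return "turn right"   # black on the left: line turning right
    if right and not middle and not left:
        return "turn left"    # black on the right: line turning left
    return "forward"          # all white (or line under middle): go straight

print(steer(True, False, False))   # -> turn right
print(steer(False, False, True))   # -> turn left
print(steer(False, False, False))  # -> forward
```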
http://geekdrop.com/content/plex-search-broken-solution
I've recently run into a long-lasting problem with my Plex Media Server (PMS), where doing any search, be it in the PMS web interface itself or in any of Plex's apps, would always end up with "No matches found." Even if I was looking directly AT matches right on the screen, with my very own eyes (which are still pretty good, I might add, even at my age). In fact, I whipped up a couple of short videos of the "bug" happening right here:

Plex Web Interface | Plex Android

I put "bug" in quotes because technically this isn't a bug; it's more of a "break". Something broke. Read on for more. If you want to skip all of the nerdy details and jump straight to the actual solution, just click HERE.

So to track the issue down, what I did was this (NOTE: You don't need to do any of this in order to fix your own. I'm just detailing the steps for those with geeky curiosity):

Important first step: shut down your Plex server. Go to its tray icon, right-click it, then click "Exit".

If you haven't already, for any other reason, head to your PMS web interface and enable logging. I like to set mine to "verbose", but you don't have to. If you haven't changed the port your PMS server is on, the page is typically here. Be sure to click the "Advanced" button, then the "Debug" tab. (More detail on this procedure can be found in our tutorial here: How To Enable or Disable Logging In Plex Media Server (PMS))

Next, I headed to the folder on my computer where Plex stores all of its logs. In my case, since I store my Plex data folder on my 4TB Z: drive, the folder path looks like this: Z:\Data\Plex Media Server\Logs. Then I deleted all log files in that folder, so that I could start from scratch and it'd be a little easier to sift through. Not a requirement, but a bit more helpful.

Tip: There's the "You can view the debug logs here."
link below the combo-box where you enable debug logging, which takes you to a useful page of log info; however, in my testing it didn't seem to update properly for me, so I'd recommend using both: that log page, and viewing the actual log files in a text editor, for full info on what's going on in PMS behind the scenes.

Now that the server is off and all log files are deleted, restart the server. Go to your Plex server's Dashboard and do a search or two for things you KNOW exist in your library. It'll, of course, take you to the Search Results page, saying the usual "No matches found." Make a mental note of the search phrase(s) you used.

Open up Plex Media Server.log (pretty sure it was that one; I'm going off memory, and there are several log files generated, so if it wasn't that exact one, it's in one of those) with something like Notepad, or your favorite text editor. Do a search with the editor for whichever search term(s) you used, and you'll eventually come across something that looks similar to this:

Sep 24, 2015 20:42:54:143 ERROR - SQLITE3:9CA7840C, 11, database corruption at line 67420 of [8a8ffc862e]
Sep 24, 2015 20:42:54:143 ERROR - SQLITE3:9CA7840C, 11, statement aborts at 12: [select distinct metadata_items.id from metadata_items join media_items on media_items.metadata_item_id=metadata_items.id join media_parts on media_parts.media_item_id=media_items.id
Sep 24, 2015 20:42:54:143 ERROR - Soci Exception handled: sqlite3_statement_backend::loadRS: database disk image is malformed
Sep 24, 2015 20:42:54:146 DEBUG - Completed: [::ffff:192.168.0.195:55866] GET /search?local=1&query=hulk (20 live) TLS GZIP 12ms 486 bytes 500
Sep 24, 2015 20:42:54:158 ERROR - SQLITE3:9CA7840C, 11, database corruption at line 67420 of [8a8ffc862e]
Sep 24, 2015 20:42:54:158 ERROR - SQLITE3:9CA7840C, 11, statement aborts at 14: [select distinct metadata_items.id from metadata_items join metadata_items as parents on parents.id=metadata_items.parent_id join metadata_items as
grandparents on grandparents.id=pa
Sep 24, 2015 20:42:54:158 ERROR - Soci Exception handled: sqlite3_statement_backend::loadRS: database disk image is malformed

Aha! Well, there's your problem! Your database is corrupted. That's never a good thing. Now, according to the Plex docs this is supposed to be a very rare thing; however, from what I've deduced it's not so rare, and to be honest, I didn't do anything particularly 'hard' on the server, ever. It just happened through very simple, basic usage. This is why I said above that something 'broke', as opposed to it being a 'bug', though in reality there may very well be a bug hidden somewhere in the code that's causing database corruption. Anyway, onto the ...

Fortunately, when it comes to databases, corruptions are (usually) not something to panic about, especially if you're really, really lucky. It's usually just a matter of running a database repair on the database, and things will work again. Hopefully without any data loss at all. Here's how to repair the Plex database:

I normally use a slick database program called Navicat to do a lot of my database work, so I'll show how I did it with Navicat. However, Navicat is commercial software, so it costs money. Going off memory (again), I think it does have a free trial period though, which may be good enough if you're only wanting to do a one-time fix. The Plex database is a SQLite database, so as long as you have software that can repair SQLite3, it'll work, and there are several free options out there, easily found with a quick Google search. I'll even point you to one a bit later.

Important first step: shut down your Plex server.

Next step (after opening Navicat, of course) is to open the Plex database in Navicat. This is done simply by drag & drop of the file com.plexapp.plugins.library.db onto the Navicat window. The file is found in Z:\Data\Plex Media Server\Plug-in Support\Databases (remember, you probably have it somewhere other than Z:\).
NOTE: Create a backup of this file first, in case of catastrophe! Simply highlight it, hold down the CTRL button, and drop the copy to an open space in the same folder. Rename it to: com.plexapp.plugins.library.db.OLD

Right-click the database in Navicat, and hover over "Maintain". What I did first was "Vacuum Database", then "Reindex Database", as shown in the screenshot below:

Now, go ahead and restart your PMS server. Once it's running, open the Dashboard in its web interface; at this point, none of my original rows (such as my libraries) were showing anymore, only my "On Deck" row showed, and those weren't even correct. Don't panic ...

To the left, where there's a Gear icon, first click "Update Libraries", give it a bit to do its thing, then click "Optimize". Again, give it a bit to finish. This is all just so that PMS will get things sorted out again its own way, and then optimized, since we re-indexed everything. Re-indexing is somewhat akin to dumping everything out into a box, then re-adding it all back in, in a more organized fashion, loosely speaking. (More details on this process here, if interested: Plex Media Server: How To Update Your Libraries & Optimize Your Database)

Finally, shut PMS down again, and restart it. Voila! Once it's restarted, if everything went as expected, all of your usual rows and libraries will be there again, with no noticeable differences. You can (and should, before spending time customizing your stuff again) verify by going into a few of the movies whose posters, backgrounds, tags (etc.) you know you had previously customized, to make sure they're the same as before. In my case, everything was perfect. Go ahead and try some new searches now; they'll show up as they always should have.
If, for some reason, things didn't work out so well, you can just shut PMS down, delete com.plexapp.plugins.library.db, rename the one I told you to back up first from com.plexapp.plugins.library.db.OLD to com.plexapp.plugins.library.db, and restart PMS. Of course you'll be back at square one, where you were unable to get search results, but at least no harm, no foul in what we have just done. Then you can try to hunt down another method of getting things to work properly.

Here are a couple of useful links to the official Plex support website, relevant to the topic as well, including a free method of repairing the database if you don't already have either Navicat or some other SQLite utility.

Restore a Database Backed Up via 'Scheduled Tasks': This is another method, if you prefer to restore a previously backed-up database that worked / wasn't corrupt. I usually prefer to repair mine, because I customize my library so much that I hate the thought of having to redo any of it if the restored database was a bit too old. But that's just me; you may not mind.

Repair a Corrupt Database: Here are the instructions from the official Plex website on basically how to do what I just described above, albeit using a free SQLite download. Possibly a bit more confusing to understand for the beginner-to-average Plex user than our tutorial.

We're always loving new ways to do the same thing, especially if they make life easier! Post 'em if ya got 'em!
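One more option if you'd rather not use a GUI tool at all: the same vacuum/reindex maintenance can be scripted with Python's built-in sqlite3 module. A hedged sketch (the database path is the one from this article and will differ on your machine; stop Plex Media Server and back the file up first, as above):

```python
# Sketch: run an integrity check, then VACUUM and REINDEX, on a SQLite
# database such as Plex's com.plexapp.plugins.library.db.
# Stop Plex Media Server and back up the file before running this.
import sqlite3

def maintain(db_path):
    con = sqlite3.connect(db_path)
    try:
        # PRAGMA integrity_check returns the single row "ok" when no
        # corruption is detected, otherwise a list of problems.
        status = con.execute("PRAGMA integrity_check;").fetchone()[0]
        print("integrity_check:", status)
        con.execute("VACUUM;")   # rebuild the file, reclaiming free pages
        con.execute("REINDEX;")  # rebuild all indexes from the table data
        return status
    finally:
        con.close()

# Example (path from the article; yours will differ):
# maintain(r"Z:\Data\Plex Media Server\Plug-in Support\Databases\com.plexapp.plugins.library.db")
```

Note this only covers the light repairs (the Navicat "Vacuum"/"Reindex" equivalents); a badly malformed file may still need the dump-and-reload procedure from the official Plex article linked above.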
https://sourceforge.net/p/archinstaller/wiki/Home/
Welcome to the Archlinux Installhelper Script Wiki!

This script is currently in an alpha phase, may be unstable, and may kill your cat!

What are the requirements?
- You need to have Archlinux (base + base-devel) already installed and have rebooted
- Logged in as 'root'
- A working internet connection

TODO:
- Make the script bulletproof
- Better descriptions for all functions
- Add more desktop environments
- Add more software
- Add more 'Tweaks'
- Ask at the shell step whether to install zsh or grml-zsh (thx defcon)

What does the script do? All steps are optional and have dependencies. (E.g., if sudo is NOT installed, then the user will not be asked to give sudo rights to the %wheel group!)
- check if the current user is 'root'
- check if the internet connection is working
- do a system upgrade
- create a new user
- check if the system is 64-bit and install multilib
- add the archlinuxfr repo and install yaourt
- install sudo
- configure sudo
- install alsa-utils
- install the X server
- install the GPU driver (nvidia|intel|bumblebee|ati|virtualbox drivers)
- change the X keyboard layout (ATM only German - make suggestions for new keyboard layouts!)
- install dbus
- install a desktop environment (kde|gnome)
- install acpi
- install tlp
- install cpufrequtils
- use a RAM disk for /tmp
- disable tty3-6
- install additional software (unrar, codecs, flashplugin, jre, jdk, eclipse, eclipse-subclipse, android-sdk, unison, kate, jdownloader, libreoffice, dropbox)
- install another shell (grml-zsh|zsh)
- give tips about additional tweaks
- print a script summary
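The first two steps in the list, checking for root and for a working internet connection, can be sketched as follows (written in Python for clarity, though the actual installhelper is a shell script; the host and port used for the connectivity probe are illustrative choices):

```python
# Illustration of the script's first two checks: running as root and
# having a working internet connection. Python is used for clarity;
# the real installhelper is a shell script.
import os
import socket

def is_root() -> bool:
    # root always has effective UID 0 on Linux
    return os.geteuid() == 0

def has_internet(host: str = "archlinux.org", port: int = 443,
                 timeout: float = 3.0) -> bool:
    # try to open a TCP connection to a well-known host; any failure
    # (DNS error, refusal, timeout) counts as "no internet"
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("running as root:", is_root())
print("internet works: ", has_internet())
```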
https://www.ibcscorp.com/blog/aname-alias-dns/
Recently, we had an issue with one of our client's hosting providers. They are hosted on Network Solutions, which has been a major DNS provider since the early days of the internet. The problem is that we use cloud servers for scalability, and many cloud servers are configured such that an ANAME or ALIAS record is required for the DNS to work properly. To understand the problem, we need to define a few things.

Everything on the internet has an IP address. When we access a website, or even when the API for an app accesses the back-end services which make it work, they ultimately route to an IP address. IPv4 addresses are numbers separated by dots, like 127.23.24.222. An IPv6 address looks like this: 2345:0425:2CA1:0000:0000:0567:5673:23b5. Imagine putting that on your business card: check out my website at 2345:0425:2CA1:0000:0000:0567:5673:23b5, or email me at james@2345:0425:2CA1:0000:0000:0567:5673:23b5. The domain name system solves this problem by resolving a more easily understood and remembered word like ibcscorp.com to an IP address.

It used to be that straightforward. I may have had a machine under my desk with an IP address 220.127.116.11 and a DNS A record which pointed to it like this:

ibcscorp.com : 220.127.116.11

Then, whenever someone put in that domain name, they would be routed to the machine under my desk which, if it had a web server on it, would deliver a website. This is great, but it isn't so great, because it implies that the machine at 220.127.116.11 is always on and working. Today that would be an unacceptable solution, because we expect applications and things on the internet to be up 24x7, always, even when they are being updated. So the simple DNS A record doesn't work well for our modern-day needs. It gets more complicated with distributed cloud-based applications and applications which are distributed over a content delivery network.
As explained above, the Domain Name System (DNS) provides mapping from common names, called domain names, which are easy for us to remember, to IP addresses, which are not. Domain name servers provide this mapping. While there are only 13 root domain servers, there are many domain servers which provide domain name lookup services. If there were only one, then we could update the IP address in real time when swapping out or upgrading a machine, or when a machine failed; but since a change may take up to 72 hours to propagate, this isn't a good solution.

A CNAME record is a canonical name record. It is used to alias one domain name to another, so we might map a subdomain to another domain using a CNAME record. It can be used in lieu of an A record. This allows us to point our domain to a cloud provider or content delivery network which has its own host name, which we may not control. This network may represent any number of servers in any kind of routing network, keeping our site up and running 24x7 with extremely high availability and access speed. However, CNAME records are not allowed at the root domain (the DNS zone apex), so ibcscorp.com could not be a CNAME.

Unlike the CNAME record, an ALIAS or ANAME record does support the root domain. This makes it so that ibcscorp.com, as well as www.ibcscorp.com, can be pointed to a CDN (Content Delivery Network), which provides quick access and robustness for the website. Most applications we build, including applications built on PrimeAgile, require an ANAME or ALIAS record to ensure high availability and performance.

A canonical URL is the most representative form of a page or URL. It is necessary to prevent duplicates in search results. Typically, we use www. as the canonical URL for a website, for example, www.ibcscorp.com. Other subdomains may also be used for other applications, APIs, or to provide other functionality. Typically, we redirect the root domain to www.
for web traffic, so we would route ibcscorp.com to www.ibcscorp.com, which is our canonical URL.

So you might ask: why does it matter anyway, since we always use canonical URLs when we publish a website? It matters because we want to ensure that the root domain always routes, should it be entered into a web browser. WWW is the standard prefix for a website, just as ftp. might be the standard prefix of an FTP server, but it is not always typed in by the user, and redirects are the standard practice to make it easier for the user to get to www.yourwebsite.net.

One possible solution is to set up a 301 redirect on a web server which routes the root domain to the www. domain. In this case, Network Solutions will do this for $2 to $3 per month. However, we find that to be a smelly solution, as it requires routing to the wrong server in order for the re-route to take place. Thus, we suggest using a different DNS service provider. Our feeling is that, in the end, if the DNS provider does not support ANAME/ALIAS, we should migrate the zone file to another provider which does support it. Some providers that provide this include: If you are using a Content Delivery Network (CDN) like CloudFront and want your root domain to route, then this is required. It is important to note that we typically route the root domain to www., so a 301 permanent redirect is going to be in place routing the root domain to www. anyway.
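The root-to-www 301 redirect is small enough to sketch. This is a minimal illustration in Python, assuming a placeholder domain; it is not taken from any particular server's configuration:

```python
def canonical_redirect(host, path):
    """Return a (status, location) pair sending root-domain traffic to
    the canonical www. host, or None if no redirect is needed.
    The domain below is a placeholder used for illustration."""
    root = "ibcscorp.com"
    if host == root:
        return (301, "https://www." + root + path)
    return None  # already on the canonical host, serve normally
```

A real deployment would wire this into the web server (or let a DNS provider's redirect service do it), but the logic is exactly this small.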
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662560022.71/warc/CC-MAIN-20220523163515-20220523193515-00463.warc.gz
CC-MAIN-2022-21
5,290
33
https://phabricator.wikimedia.org/feed/?userPHIDs=PHID-USER-tjyfydvd4ncrfvvr6mkr
code
@JoeWalsh Ok, thanks!

@JoeWalsh Thank you for updating the description. One part is not clear to me yet. Maybe I have missed it. Where are the readMore items coming from? Is the app providing this list, or is the JS layer expected to query that?

Yes, the saved state could be handled as part of the interaction handling, in a follow-up to T219998.

Just talked with @JoeWalsh about this. For now we're planning on injecting the translated strings from the client. That should be easier to do than setting up i18n server-side and mapping Accept-Language header values to the right message catalog.

Probably not needed for now. The apps can request the summary and usually have it first anyway. I don't think we would support different strings for Android vs. iOS inside the WebView content. Long-term it's probably best to ask the PMs. If this became a requirement, then it would be best to have the clients pass those strings in, since it would be bad for caching and storage to serve two different platform variants for each page, as you mentioned. It wouldn't be ideal to have this done by clients, though, because that would add more DOM transformation burden on clients.

Mon, Jun 17

The mobile-sections endpoint and most of the other MCS/PCS endpoints were built to support the Wikipedia apps. So, most endpoints have only been tested with *.wikipedia.org domains. (The exception is the definitions endpoint, but you probably don't care about that one.) May I ask what features of mobile-sections you are most interested in and are the reason for not using Parsoid directly?

It hasn't been deployed yet. Should be today, though.

Fri, Jun 14
Thu, Jun 13
Wed, Jun 12

Not sure at what level this would best be resolved. My guess is probably Parsoid or upstream from that. Parsoid also 404s: https://en.wikipedia.org/api/rest_v1/page/html/User:JoeWalsh_(WMF)

The reason why I mentioned mobile-html and the other PCS endpoints is that those will replace mobile-sections in the future.
Tue, Jun 11
Mon, Jun 10

@Sharvaniharan @NHarateh_WMF Is this used by the apps by itself (just collapseTables)? If so, how or where is it used? This functionality is exposed by the setMulti() function, which should be called when a page loads. Having a separate call for this, when we cannot reasonably assume what the state of the table collapsing is, would require some refactoring work. That's why I wanted to check first if this is even needed as a stand-alone function.

Yes. The reason is that ChangeProp doesn't know to update /page/media when these kinds of changes happen. I agree that we should consider separating out this information into separate endpoints if this is info that is only needed by the editing aspect.

Fri, Jun 7

I think we'd like this for all PCS endpoints (mobile-sections, mobile-html, and related PCS JSON endpoints).

Services team, could we add MCS for WikiSpecies?

Thu, Jun 6
Wed, Jun 5

+1 what @Mholloway said about beta cluster already being covered by our deploy procedure. I do find the appservice useful and would like to keep it. Most of the feed endpoints are not useful/testable in beta cluster, unfortunately. We need access to production pages, which appservice allows.

Thu, May 30
Wed, May 29

Possibly related to T174986.

Fri, May 24

Are the values always going to be text or can they contain HTML as well? Maybe we should consider having fields like text or html, like we have in other places?

Thu, May 23

FWIW, display: block is used in several places of the CSS inheritance tree in this case, even in the inline style. The other places are mostly originating from MinervaNeue.

Now it's up to services to fix service-runner. I think other service-runner based projects will run into this as well if not fixed soon.

Wed, May 22
Tue, May 21

Should crashes on appservice.wmflabs.org even be reported in Logstash?
May 17 2019

I think it would help if there was at least some Varnish caching for a certain amount of time, but I guess that editing might have more issues with that than PCS.

May 16 2019

No objection, as long as we can have the same port in all environments (local dev, beta, production, mwvagrant, ...) if there is such a thing as an exposed port in Kubernetes. (I think it adds more burden if the ports of the same service are different per environment. Example: in MCS we use 6927 for local development, but 8888 on the production machines.)

May 14 2019

Thank you @NHarateh_WMF. That sounds useful enough to expose it. The issue is that we already have a callback for the setup functionality. I guess I'll make a parameter object which can hold multiple callback functions.

In CollapseTable.setupEventHandling(), is the footerDivClickCallback used by iOS or Android? Just wondering if that should be exposed.

May 9 2019

@phuedx Yes, that is correct. I've mainly brought it up since you are doing various refactoring, and it might fit into the theme. :)

May 8 2019

Keeping mobile-sections with the rest of PCS makes sense to me. Thank you for preserving the git history. The rest is nice to have and not necessarily blocking.

The lexeme work sounds promising. (That would be a big enough change to warrant at least a major version bump, if not a new endpoint.) Even if we removed the functionality in the app, we would have to keep the endpoint running for a bit for older versions of the Android app. This task might also be a reminder to look into definition alternatives, i.e. whether the Android OS provides one.

May 7 2019

Moving out the feed stuff sounds like a reasonable approach to me. I'm just not sure if we really should be starting with a brand new repo. Have you considered using a clone so we can keep the Git history? Some of the libraries are shared between mobile-sections and the PCS endpoints. We may consider creating an npm library for sharing those if it makes sense to do so.

Before we move code to other repos, I think it might be beneficial to make sure the OpenAPI spec is merged and some other code cleanup is done. That could be updating of dependencies, esp. eslint configs, and a convention for unit test file names. We could start out with moving easily identifiable files and subfolders into different buckets, at least for lib and test. Probably the routes could be done as well.

May 6 2019
May 4 2019

Reproduced with https://en.wikipedia.org/api/rest_v1/page/mobile-html/Windows_10_version_history on a desktop browser and running pagelib.ThemeTransform.setTheme(document, pagelib.ThemeTransform.THEME.BLACK) in the console.

May 3 2019

I see now. I forgot to close the table earlier.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00151.warc.gz
CC-MAIN-2019-26
6,540
64
http://ehlixr.tumblr.com/
code
*adds something to snapchat story to make it look like i have a life*
(Source: teencry, via assume)

I'll marry a man who knows how I take my tea, coffee, and alcohol
And knows when to make which.
-szerintemegyfiúvanakitudja (via rebcsok)
(Source: grettypop, via 90-s-ki-d)

don't worry i wouldn't care about me either
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500959239.73/warc/CC-MAIN-20140820021559-00335-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
322
7
http://www.spinsterlibrarian.net/what-is-python-technology-in-data-science/
code
Python is a dynamic, object-oriented programming language, widely used for web application development. Many developers prefer Python over other technologies because of its simplicity, reliability, and easy interfacing. It offers both good scripting and a fast application development process across a huge range of fields. If you are searching for a data analytics firm you can see here: actionx.com.au

As the basis of several open-source platforms, Python comes with tools that help build applications with strong security and performance. Python follows procedural and object-oriented coding paradigms, and consequently the varied applications written in Python have clean and readable code, making them easy to maintain.

Uses of Python for Application Development

Python is an open-source programming language which is widely used in many application domains. It runs on every major operating system, including Windows, Linux, UNIX, OS/2, Mac, and Amiga. Dedicated Python development teams have written many applications on top of the language. Python is a fun and dynamic language; it has been used by numerous organizations such as Google, Yahoo, and IBM. It is also widely used to write custom tools and scripts for special applications. Python is extensively used in web application development (for example, Django and Pylons), games such as Eve Online, image applications, science and education applications, software development, network programming, mobile applications, audio/video applications, and so forth.
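The procedural and object-oriented paradigms mentioned above can both be shown in a few lines. A minimal illustration (all names here are invented for the example):

```python
# Procedural style: a plain function.
def greet(name):
    return "Hello, " + name + "!"

# Object-oriented style: the same behaviour wrapped in a class.
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return greet(self.name)
```

The two styles interoperate freely, which is part of why Python suits such a wide range of application domains.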
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832259.90/warc/CC-MAIN-20181219110427-20181219132427-00144.warc.gz
CC-MAIN-2018-51
1,684
5
https://sellsbrothers.com/1662
code
The Reason For Code Access Security

I had a question in my inbox the other day that went something like this: "Since programming within the partial trust sandbox I get by default when using ClickOnce is so hard, why wouldn't I just kick it up to FullTrust and let the user press the OK button?"

You can do that. Since ClickOnce supports user management of permission awarding for code deployed via ClickOnce (aka there's a dialog that the user has to approve if the app wants more permissions than the default), you could ask for FullTrust.

If I were you, I wouldn't ask for FullTrust in my ClickOnce apps, and not just because I don't want users to be freaked out by the dialog box I expect to see that says "Danger, Will Robinson, Danger, Danger!" Personally, I don't want the liability. If I write code that requires FullTrust, I have to write my code to take full responsibility for its actions, including if the code is hijacked by other code to do bad things. On the other hand, if I request the minimal set of permissions that I need, I'm walking with a net. If I miss an exploit, I'm limited to doing bad things inside of the limited set of permissions that the user has awarded to me and not the whole darn thing.

Full trust isn't easier; it's much, much harder. I like partial trust because I'm lazy: I don't want to do the work to warrant the user's full trust.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817112.71/warc/CC-MAIN-20240416222403-20240417012403-00201.warc.gz
CC-MAIN-2024-18
1,406
7
https://cgbucket.com/forest-pack-and-railclone-livestream-recording/
code
iToo Software has published a recording of a recent livestream for Forest Pack and RailClone. The format is a Q&A with topics that include how to blend scattered objects with a surface more convincingly, how to create a parametric warehouse with RailClone, how to use Forest Pack and Forest Color to create a patterned metal chain curtain, using markers on clipping splines to change a RailClone object's parameters, and how to increase the density of a scatter in Forest Pack without changing the distribution pattern. Watch it on YouTube.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00517.warc.gz
CC-MAIN-2023-14
718
3
https://devpost.com/software/codestock-speaker-feedback
code
After speaking with a few of the directors of CodeStock, and a few people who have spoken at the event, we realized that it's critical for both parties to be able to obtain useful feedback regarding the talks given during the event. One of the biggest things we wanted to focus on was streamlining the process for the end users. So we do not require a user account!

What it does

Our application allows a CSV to be uploaded, which creates all of the talks that will be given by speakers. Each talk gets a custom URL, auto-generated thanks to SQL, and everyone is able to access that page via a QR code, which allows you to provide useful feedback for the talk. To hopefully incentivize people to participate, we ask for a username or nickname, which will need to be the same each time you answer a survey.

How we built it

We started the hackathon by having conversations. These conversations would later lead to how we started our build process. We wanted to follow the agile development model, since we had such a limited amount of time to complete the task. After having our conversations, we were able to narrow our scope down to a handful of goals that we figured would be useful: no user accounts, keep the process simple, keep the feedback simple to help streamline, and hopefully filter content.

Challenges we ran into

We ran into a handful of issues during the project. One of the first, and least obvious, was our connection to the SQL server. We were using ones that were hosted by UTK, but for an unknown reason we were unable to access them with the Python scripts. After a while of troubleshooting, we decided to host the application on my personal server at home and created a MySQL server there as well. On the Flask/web side, we had to take a while to learn how the POST and GET requests would interact with Flask and then later translate into the SQL queries we had written.
Accomplishments that we're proud of

We are very proud that we got most of the core functionality of the project working. It is a great base for us to continue to develop, and with a few more hours we would have been able to add more features!

What we learned

There was a lot of stuff that we learned during this process. One of the biggest things for myself was setting up a SQL database that used foreign keys and had many-to-many relationships, which allowed us to learn about bridge tables as we were using them. We also learned how to host an application on Flask while having it be somewhat dynamic with user content and info.

What's next for CodeStock Speaker Feedback

It would be awesome if we were able to actually run our application through its real-world paces! There is still a bit of fine tuning to do, and making sure that we can control the functionality completely through the UI and not rely on having to make database-level edits. So hopefully, if we are to continue this project, we will polish the core functionality, which is a solid foundation, and we will be able to add some of our stretch-goal functions!
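The CSV-upload step described above can be sketched with Python's standard library alone. The column names here are illustrative guesses, not the project's actual schema:

```python
import csv
import io

def load_talks(csv_text):
    """Parse an uploaded CSV of talks into a list of dicts,
    one per talk, keyed by the header row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return list(reader)

# Each row would then be inserted into the talks table.
talks = load_talks(
    "speaker,title,room\n"
    "Ada,Intro to SQL,101\n"
    "Alan,Flask 101,102\n"
)
```

In the real application the parsed rows feed the SQL inserts that create the talk pages and their QR-code URLs.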
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00248.warc.gz
CC-MAIN-2020-16
3,000
16
https://hackaday.io/project/12607-iss-overpass-indicator
code
To know the current satellite position, the ESP8266 first gets a recent two-line element set from the web. Then it uses the SGP4 model to calculate the current satellite position. With the model and some extra algorithms, it's able to tell if the satellite is in the earth's shadow, and it can predict future overpasses. This information is displayed on the LEDs with a color scheme, and on a webpage via a WebSocket. The color changes depending on the elevation of the satellite and the type of overpass: daylight, eclipsed, or visible. (Note: Stellarium is used as a reference. There is no connection between Stellarium and the ESP8266.)

Summary of all its features:
- Calculating satellite position and visibility.
- Predicting previous or future overpasses.
- Webserver for configuration and real-time information.
- Neopixels for warning the user when the satellite is overhead.
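The color scheme can be pictured as one small decision function. This sketch invents its own colors; the project's actual palette and logic may differ:

```python
def pass_color(elevation_deg, in_shadow, is_daylight):
    """Map the satellite's state to an LED color, in the spirit of
    the color scheme described above (colors are made up here)."""
    if elevation_deg <= 0:
        return "off"      # below the horizon: nothing to show
    if is_daylight:
        return "yellow"   # daylight overpass, hard to spot
    if in_shadow:
        return "red"      # eclipsed by the earth's shadow
    return "green"        # visible overpass
```

On the device, the same decision would be made in the firmware each time the SGP4 position is recomputed.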
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301720.45/warc/CC-MAIN-20220120035934-20220120065934-00409.warc.gz
CC-MAIN-2022-05
881
8
http://snakers41.spark-in.me/1683.html
code
During the last competition my teammate found a nice paper in Jeremy Howard's tweet. DS/ML/CV specialists in the USA like Twitter for some reason. In Russia / CIS Twitter is not used (at first vk.com was better, and now Telegram is better), and I have always considered it to be a service like Snapchat (i.e. a useless hype generator) but with roots in the SMS era (their stock and dwindling user base agree). But this post - goo.gl/y3DXWH - changed my mind (Twitter accounts of the brightest minds from NIPS). So I decided to monitor their tweets ... and I guess Twitter does not send you emails on every new tweet, so that you would use their app. Notifications about new tweets are limited either to the API, push notifications, or SMS - which is hell (+1 garbage app on the phone - no thank you).

So today we decided to write and share a small Python class that uses the Twitter API to send you emails - here is how it looks in Gmail - prntscr.com/hvldt7

Please feel free to use it, share it, star it and comment. Many thanks.
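The shared class does the real work; purely as a stand-in illustration of the idea (stdlib only, all names invented, not the author's code), building such a notification e-mail might look like:

```python
from email.message import EmailMessage

def tweets_to_email(tweets, sender, recipient):
    """Bundle a batch of new tweets into a single e-mail message.
    Sending it is then one smtplib.SMTP call against smtp.gmail.com;
    fetching the tweets themselves requires the Twitter API."""
    msg = EmailMessage()
    msg["Subject"] = "%d new tweet(s)" % len(tweets)
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("\n\n".join(t["text"] for t in tweets))
    return msg

msg = tweets_to_email([{"text": "hello"}], "bot@example.com", "me@example.com")
```

A cron job polling the API and handing new tweets to a function like this is enough to replace the missing e-mail notifications.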
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00170.warc.gz
CC-MAIN-2019-22
1,011
7
https://p2p.wrox.com/classic-asp-basics/4929-splitting-list-asp-pages.html
code
Splitting a list of asp pages

Hope I've come to the right place.... I want to be able to display the results of a search, but I want to limit the number of returned results to 10 per page. I'm sure this must be quite easy code to write, but I'm fairly new to VBScript and ASP, and would appreciate any pointers in the right direction.

Ta very muchly
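The thread is about classic ASP/VBScript, but the paging arithmetic itself is language-neutral. A sketch in Python of the usual offset approach (in ASP you would skip the same number of records in the recordset before displaying):

```python
def page(results, page_num, per_page=10):
    """Return one page of results: skip (page_num - 1) * per_page
    records, then take the next per_page."""
    start = (page_num - 1) * per_page
    return results[start:start + per_page]
```

Page links then just re-run the search with a different page number in the query string.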
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00421.warc.gz
CC-MAIN-2021-17
346
5
https://wowcryptocurrency.com/6-top-crypto-altcoins-to-buy-hold-forever/
code
6 TOP CRYPTO ALTCOINS TO BUY & HOLD FOREVER

In this video, we are going over the 6 top crypto altcoins to buy and hold forever. These are the best cryptocurrencies to hold long term.

Join our channel membership to get access to perks:
Get $10 of BTC When You Sign Up With Nexo
Partnered With Evai.io – Unbiased Crypto Ratings
Crypto . com – FREE $25 Bonus
Cold Storage from Ledger
Cardano Stake Pool
Harmony ONE Stake Pool
👉 Instagram: https://bit.ly/3yjg6cr

Long term investing historically has been safer than day trading and swing trading. Plus, the tax advantages of holding for over 1 year are important. That is why a majority of my investments are in long term holds. Long term investing in the crypto and altcoin scene is of course less established than the stock market, but I do think there are some coins that should be bought, held, and not sold until much further into the future. I'll go through what these 6 crypto coins are that I am bullish on.

The crypto market is bouncing back right now and I'm hopeful the uptrend continues! Crypto and blockchain are seriously amazing technologies that deserve to have the spotlight, and it has been very exciting riding the waves over the last few years.

With any type of investing, we recommend you do your own due diligence before investing. Just because a coin was on this list does not mean you should blindly invest in it – that is very silly to do. These are my opinions, and you should create your own, using this video as a reference guide only. This video should not be taken as financial advice.

#crypto #cryptocurrency #cryptonews #cryptocurrencynews #altcoin #altcoins #bullrun #moon #hodl #blockchain #nft #digital #decentralized

DISCLAIMER: We are NOT financial advisers. None of what we have communicated verbally or in writing here should be considered as financial advice; it is NOT. Do your own research before investing in any digital asset, and understand that investing in any cryptocurrency is risky. If you do, you need to be prepared to lose your entire investment.

⚠ This video is for information / entertainment purposes only ⚠

All our videos are strictly personal opinions. Please make sure to do your own research and never take our opinions for financial guidance. There are multiple strategies, and not all strategies fit all people. Our videos ARE NOT financial advice.

#ADA #CARDANO #BITCOIN #BTC #ETH #EGLD #VeChain #VET #LINK #Chainlink #ALT #ALTCOIN #ALTCOINS

00:53 What is Bitcoin
02:31 What is Ethereum
03:55 What is Elrond
05:16 What is CARDANO
06:58 What is VeChain
08:47 What is Chainlink
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00015.warc.gz
CC-MAIN-2021-49
3,606
28
https://news.ycombinator.com/item?id=21216693
code
Still a cool project but I think this limits a lot of its utility. Providing an API to access the data as a service would be a lot more profitable. Can I scrape password protected stuff with Spider? Yes! It’s a browser extension, so as long as you log in first, you can scrape whatever you like.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476432.11/warc/CC-MAIN-20240304065639-20240304095639-00096.warc.gz
CC-MAIN-2024-10
297
3
http://blogs.msdn.com/b/jimw/archive/2012/01/22/26-things-every-programmer-should-know-in-2012-start-here.aspx
code
By way of an introduction to this blog series, '26 things every programmer should know in 2012', let me tell you a little about myself. I'm a Senior Software Development Engineer for Microsoft Corporation, working in Management and Security Division (MSD). My team works closely with the Product Groups to create Solution Accelerators such as MDT (Microsoft Deployment Toolkit). Check us out online: http://technet.microsoft.com/en-us/solutionaccelerators/bb545941. I've authored (or co-authored) two published technical books - Pro SQL Server 2005 Integration Services for Apress and TK 70-561 ADO.NET Technical Specialist Training Kit for MSPress. 2012 is my 30th programming 'anniversary', so I thought it would be good to look back at that experience and all the stuff I've learned. I'll pick out any bits I feel are particularly pertinent to the programmer in 2012, and publish them to this blog. Programming is quite different today than in 1982, but there are still many lessons to be learnt that are just as relevant in 2012 as they were back then. And plenty of new ones too! For example - I've heard in 2012 some employers let their programmers out of their darkened basements occasionally...I don't believe it myself, but that's what I hear. I'll be concentrating on a few different areas of programming, but when it comes to the technical content, it will center around .NET 4, C#, SQL Server, WPF, Silverlight, XAML, Metro, and System Center platforms such as Service Manager 2012, Configuration Manager 2012 and Orchestrator 2012. You'll probably read some (or all!) of the articles in this series and think 'duh, that's obvious!' or 'who doesn't know that!!'. If that's the case...great :-). You have, in my opinion at least, a great foundation to build your programming career upon. Let me know what you think every programmer should know in 2012. You may read the articles and disagree with the content. Again, that's great :-). 
Stimulating conversation about modern programming is one of the aims of this series. I'd love to hear your arguments - leave a comment below and start the discussion! My end-goal is to provide an honest, informative but entertaining and lively discussion. There may or may not be a sprinkling of lines from Star Wars and Tron and other 'geek' staples - for which I apologise in advance (I don't think it's cool or funny, but I literally just can't help it). I'm going to try to be brutally honest and not pull punches, but I don't have the talent to be an 'Eric Brechner'! (http://blogs.msdn.com/b/eric_brechner). I would definitely say however that I take inspiration from him and his work. I've read his 'Hard Code' blog and read the books over the years and find myself agreeing completely with most of what he writes. He's a smart guy who talks a lot of sense - so if you're looking for the 'real deal', check him out. Let's do this. Starting this week...three weeks later than planned…26 things every programmer should know in 2012. Coming Soon: Week 1: Fun is a Four Letter Word
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699776315/warc/CC-MAIN-20130516102256-00098-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
3,032
12
http://datakeyword.blogspot.com/2014/08/group-cursor-in-esproc.html
code
In big data computing, besides grouping and aggregate operations, sometimes you also need to retrieve a group of data at a time to analyze. For example: analyze the sales by date, collect statistics on the sales curve for each product, or study the purchase habits of each client. In esProc, you can use the function cs.fetch(;x) or cs.skip(;x) to get or skip records until the value of the expression x changes. By doing so, a group of consecutive data can be obtained. For example, retrieve a product each time and prepare to examine the sales data of each product: From B7, the records of the 20th product can be retrieved like this:

The data retrieval in an esProc cursor is a one-way street. Thus the data in the cursor must be in order when retrieving a group of records at a time, as here. As we know, the @z option can be used to retrieve a file by block, or data from a cursor. However, when retrieving by block, esProc will determine how the data is divided, and sometimes you may encounter troubles. First, let's prepare a data text:

For the above-used data, which are already sorted by the sequence number, store them into a new binary file Order_Products:

In the later computation, if retrieving data by segment, we will get the situation given below: After all data are divided into 100 segments, retrieve the data from the 1st segment in A3, and retrieve the data from the 2nd segment in A5, as shown below:

At this point, you may encounter a problem: for the product number B1445, its sales records appear in both groups. If aggregating after each data retrieval, then duplicate product numbers may appear in the returned result, and a re-aggregation will be necessary to get the final result. Such piecewise computation is quite common for parallel computation over big data, and the above conditions make the computation ever more complicated. In this case, we should perform the segmenting by group when storing the data.
For the data sorted by the sequence number of products, save them as a binary file Order_Products_G, segmented by group according to the PID. This is slightly different from the method we adopted previously to write the data to the file Order_Products. Please note that piecewise storage is only valid for a binary file. In this step, the data retrieved in A3 and A5 are as follows:

At this point, for the data of segment 1, all product records whose number is B1445 will be read out. As for the data of segment 2, the records will be retrieved from the next product onward. As can be seen, if segmenting by group is set while writing a binary file, the data of a whole group will be put in one segment for retrieval from the cursor. With segmenting by group, the integrity of the data in each group can be guaranteed, and piecewise computation over big data becomes simpler and easier.
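esProc's cs.fetch(;x) idiom (fetch records until the value of x changes) has a close analogue in many languages. As a stand-in illustration, not esProc itself, Python's itertools.groupby expresses the same "one group of consecutive records at a time" idea:

```python
from itertools import groupby

def fetch_groups(cursor, key):
    """Yield (key_value, records) pairs, closing each group when the
    key expression's value changes -- this assumes the cursor is
    already sorted by the key, exactly as the article requires."""
    for value, rows in groupby(cursor, key=key):
        yield value, list(rows)

# Toy data in the spirit of the article's product orders.
orders = [
    {"PID": "A1", "qty": 2},
    {"PID": "A1", "qty": 5},
    {"PID": "B2", "qty": 1},
]
groups = {pid: rows for pid, rows in fetch_groups(iter(orders), key=lambda r: r["PID"])}
```

Like the esProc cursor, groupby is a one-way street: each group must be consumed before advancing, which is why the records are listed inside the loop.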
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735812.88/warc/CC-MAIN-20200803140840-20200803170840-00181.warc.gz
CC-MAIN-2020-34
2,837
10
https://www.rune-server.ee/members/sabsabi/
code
LOL yo whats up man have fun with region fix
Are you the owner of SabsabiOnline?
Fucking talk on MSN
Nice, I have a vB license you can possibly use
What happened to your sites forums? Didn't you have a real vBulletin license ect
sorry for spamming your server thread, but that idiot acts like a 10 yr old and i couldn't hold my flame any longer l0l
Send an Instant Message to Sabsabi Using...
Split Era #1 Development Services √
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00524.warc.gz
CC-MAIN-2022-33
429
10
https://www.turbosquid.com/3d-models/head-morph-targets-3d-model/623359
code
|| SPECS ||
- 1,812 polygons isolated head without subdivision (14,000 polygons with TurboSmooth ON)
- 25,000 polygons including morph targets.
- Preview images rendered with Max Scanline + Light Tracer.
- Standard materials only.
- All textures are hand painted (PSD files are included with the product)

Texture dimensions:
- 4096x4096 (1) Head
- 2048x2048 (1) Hair
- 512x512 (2) Eye, Eye_r

|| General Characteristics ||
- Clean mesh edge loops, only quads and a minimum of triangles.
- Model has real-world scale and is centered at 0,0,0 (head height is about 40cm).
- Objects, materials, and textures use meaningful names (stripped texture path names).
- Scene objects organized by layers.
- No third-party renderer or plug-ins needed.

|| Additional Notes ||
- PSD files also use named layers for easy customization.
- The MAX file is the original version; its use is recommended. It includes the morph modifier.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00279.warc.gz
CC-MAIN-2020-24
899
3
https://lists.debian.org/debian-user/2013/04/msg00429.html
code
A home directory on removable storage ...

... is possible. I understand the concept of mounting. Additionally, there can be a backup home directory which stays with the machine, on a hdd for example. I imagine that when the machine powers up without the removable storage, the backup home directory is instated. When the removable storage is connected, the backup is remounted as another directory in /home, and the removable is mounted as the home.

So for example, without the removable present, /home/peter is on the hdd. With the removable present, /home/peter is on that, and the hdd copy is /home/peter.bak, with the same ownership and privileges as /home/peter.

With udev, it might be accomplished with one or two scripts. This can't be an original idea. Is it available?

Thanks, ... Peter E.

123456789 123456789 123456789 123456789 123456789 123456789 123456789 12
Tel +13606390202
Bcc: peasthope at shaw.ca
http://carnot.yi.org/
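One way the udev part might be wired up, sketched only (the UUID, paths, and script names are invented, and this has not been tested): a rule matches the removable drive by filesystem UUID and runs helper scripts that swap the mounts.

```
# /etc/udev/rules.d/99-home-usb.rules  -- illustrative only
ACTION=="add",    ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/sbin/home-attach"
ACTION=="remove", ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/sbin/home-detach"
```

home-attach would then make the hdd copy visible as /home/peter.bak and mount the removable device at /home/peter; home-detach would reverse that. Keeping the two copies in sync (e.g. with rsync) is a separate problem worth solving before relying on this.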
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00732.warc.gz
CC-MAIN-2022-27
931
18
https://zeeshanusmani.com/2019/10/10/what-is-kaggle-why-i-participate-what-is-the-impact/
code
Kaggle is an AirBnB for Data Scientists – this is where they spend their nights and weekends. It’s a crowd-sourced platform to attract, nurture, train and challenge data scientists from all around the world to solve data science, machine learning and predictive analytics problems. It has over 536,000 active members from 194 countries and it receives close to 150,000 submissions per month. Started in Melbourne, Australia, Kaggle moved to Silicon Valley in 2011, raised some 11 million dollars from the likes of Hal Varian (Chief Economist at Google), Max Levchin (Paypal), Index and Khosla Ventures, and was ultimately acquired by Google in March 2017. Kaggle is the number one stop for data science enthusiasts all around the world who compete for prizes and boost their Kaggle rankings. There are only 94 Kaggle Grandmasters in the world to this date. Did you know that most data scientists are only theorists and rarely get a chance to practice before being employed in the real world? Kaggle solves this problem by giving data science enthusiasts a platform to interact and compete in solving real-life problems. The experience you get on Kaggle is invaluable in preparing you to understand what goes into finding feasible solutions for big data. Kaggle enables data scientists and other developers to engage in running machine learning contests, write and share code, and host datasets. The types of data science problems posted on Kaggle can be anything from attempting to predict cancer occurrence by examining patient records to analyzing the sentiment evoked by movie reviews and how it affects audience reaction. Different sources post projects on this trailblazing platform. While some are just for educational purposes and fun brain exercises, others are genuine issues that companies are trying to solve. Kaggle makes the environment competitive by awarding prizes and rankings for winners and participants.
The prizes are not only monetary but can also include attractive rewards such as jobs or free products from the company hosting the competition. Monetary prizes are exciting to most Kagglers. For instance, Home Depot was offering a winning prize of a whopping $40,000 in search of an algorithm to improve search results on homedepot.com. For most data science enthusiasts, this innovative website is not only a monetary resource; it is also an indispensable learning tool that helps them gain experience and knowledge, sharpen their skills, and learn from mistakes by resubmitting code. It is the perfect platform to practice consistently. The Kaggle community is growing fast. There are currently over one million Kaggle members (Kagglers). This data community has submitted over four million learning models to different competitions. Kaggle users have shared over one thousand datasets, more than 170,000 forum posts and over 250 kernels. According to the founder, this incredibly fast growth can be attributed to the high-quality content, data, and code shared by Kagglers. Most Kaggle users are committed and active, hence the 4,000 forum posts per month and more than 3,500 competition submissions on a daily basis. This platform is the place to be for data scientists and machine learning engineers worldwide. Why is Kaggle Worth Your Time? - Interesting and challenging projects where contributors can learn and practice. Kaggle competitions involve solving challenging and interesting problems. Companies post projects to numerous contributors. It is especially a great place for beginners who are just trying to break into the data science field. Aside from the competitions that are open to the general public, Kaggle also has private competitions which are only open to top-rated participants (Kaggle Masters).
- Insightful discussions with industry leaders and learned experts. Apart from the projects, Kaggle also hosts live discussions between numerous people on the platform. Such forums are very interesting, stimulating and informative. Through these discussions, you can either seek advice from others or offer advice to people who are dealing with issues you understand. - Kaggle offers its audience a chance to get into the biggest data science community in the world. This platform is trusted by some of the largest data science companies of the world such as Walmart, Facebook and Winton Capital. On Kaggle, data scientists get exposure and a chance to work on problems faced by big companies in real-time. While it is not a guarantee, there is always the chance that the company will be impressed enough to recruit. This data science platform is the brainchild of Anthony Goldbloom, a brilliant 28-year-old econometrics expert. His objective was to bring large and open data to the masses through crowdsourcing. According to Goldbloom, Kaggle has united data scientists and businesses in a meaningful way. His concept did not receive sufficient backing in Australia initially, so he decided to relocate to Silicon Valley in the United States. In a recent tech conference, Goldbloom expressed his surprise at how much talent is available that was inaccessible to companies before the inception of Kaggle. How Kaggle Works The host of the competition is in charge of preparing the data and writing a detailed description of the problem at hand. To make it more convenient for hosts, Kaggle offers an additional consulting service that can help prepare data and describe the problem in the best possible format. The participants who compete for projects submit their models using a variety of techniques. All the work is shared on the platform through detailed Kaggle scripts with the intention of inspiring new ideas to achieve better benchmarks.
In most Kaggle competitions, submissions are scored immediately and clearly summarised publicly on the live leader-board. Competitors are not limited to a single attempt at solving a problem. Before the deadline expires, competitors are allowed to revise their submissions as they see fit. This fuels competitors’ motivation to consistently innovate, be creative and polish their skills to produce better, more elegant and more effective solutions. Allowing for revisions elevates the level of accuracy and precision as well. When the deadline for a competition expires, the host pays the prize money to the winner. Hosts have sole ownership and a royalty-free license to use the winning entry any way they want, with all intellectual property. How the Winner is Selected The host screens participants based on where they are placed on the leader-board and on the content of their final submitted scripts. Most hosts take the prerogative to reach out to strong contenders and arrange interviews. Do Kaggle Projects Have Any Real Impact? One of its biggest and most recognized projects is one by Heritage Health, which offered a remarkable cash prize of $3 million. Competitions hosted on Kaggle have had far-reaching impacts such as enhancing and enabling state-of-the-art HIV/AIDS research and improving traffic forecasting. Several informative academic papers have been written and published on the basis of the findings generated through Kaggle contributions. Essentially, Kaggle has given companies the opportunity to seek solutions from the best data scientists in the world and to have external pairs of eyes look at the problems they are trying to solve. I’ve recently published a book, Kaggle for Beginners; I hope you will enjoy it.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00585.warc.gz
CC-MAIN-2024-10
7,504
26
https://tutorials.anastasiy.com/?kbe_knowledgebase=tip89-auto-update-backgroundforeground-color-relationship-when-using-eyedropper
code
Tip#89: Auto-update background/foreground color relationship when using eyedropper Keep the relationship between foreground and background colors when using the eyedropper. In Photoshop, click the Link button next to the color swatches on the MagicPicker color wheel panel, then use the eyedropper. The background color will update automatically. So, for example, you can always have the complementary color in the background swatch. Note that you can assign a keyboard shortcut to the Link functionality.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00355.warc.gz
CC-MAIN-2022-27
473
4
https://www.mysciencework.com/publication/show/early-phase-insulin-release-man-new-method-quantitative-analysis-49f1d959
code
A method is presented for the quantitative analysis of early insulin release in man. Arterial insulin levels were measured after glibornuride was administered intravenously. The mathematical procedure has been modified: modification I is based on the assumption that early insulin release represents a wave-like insulin delivery; modification II is based on the assumption that this insulin bolus is immediately followed by a slower insulin release which must be distinguished from the second phase of insulin release. For the calculations a "primary insulin space" derived from experiments with exogenous insulin was used. The calculated results varied up to 1.5 units of early insulin release in healthy volunteers receiving glibornuride at dosages of up to 50 mg. The value of the presented method for examinations of insulin release for theoretical and clinical purposes is discussed.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826145.69/warc/CC-MAIN-20181214162826-20181214184826-00022.warc.gz
CC-MAIN-2018-51
914
1
http://superuser.com/questions/400583/how-to-change-from-bash-3-2-to-userlocalhost-in-unix
code
I am a newbie to UNIX. When I was practicing some commands in UNIX, the prompt was earlier shown like this: "[user@localhost ~]$". After some time it showed "bash-3.2$", but some commands still worked. I tried to change the shell type from bash to ksh and csh, but that didn't work. How can I change this bash-3.2$ back to [user@localhost ~]$ in the bash shell? migrated from stackoverflow.com Mar 14 '12 at 11:11 This question came from our site for professional and enthusiast programmers. take a look at http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html on how to set up the I noticed I got this same issue when I ran you can just type if you run -s [command]: The -s (shell) option runs the shell specified by the SHELL environment variable if it is set or the shell as specified in the password database. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed. This prompts for your password, and you can just type The main difference here is the
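The prompt string itself is just the PS1 shell variable; "bash-3.2$" is the fallback you see when PS1 was never set, for example because ~/.bashrc or /etc/bashrc did not get sourced. A minimal sketch:

```shell
# Restore the familiar prompt by setting PS1.
# \u = user name, \h = host name, \W = basename of the current directory.
PS1='[\u@\h \W]\$ '
printf '%s\n' "$PS1"

# To make the change permanent, add the same assignment to ~/.bashrc
# (the usual per-user file on Red Hat-style systems), then re-login
# or source the file:
#   . ~/.bashrc
```

If ~/.bashrc itself went missing, copying the system default from /etc/skel/.bashrc is a common way to get the distribution's prompt back.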
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824499.16/warc/CC-MAIN-20160723071024-00065-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
1,070
10
https://rondebruin.nl/about.htm
code
I have been a Microsoft MVP (Most Valuable Professional) for Excel since 2002. For more information about the MVP award click on the MVP logo above. Next to my daily job I run a small company named "Ron de Bruin Excel Automation". If you need an Excel developer for a commercial project you can contact me in English or in Dutch. Click here to go to my Contact page. You will be surprised what you can do with Microsoft Excel!
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00085.warc.gz
CC-MAIN-2022-33
426
2
https://www.sqlservercentral.com/forums/topic/parse-xml-to-table
code
Comments posted to this topic are about the item Parse XML to table I tried running it on a bit of xml that stores an order on our database. It ran for over 6 minutes before I stopped it. This only works for the most trivial of samples. Academic, and won't work for real-world cases. Try using it on over a gig of XML data and watch it blow up. I just tried it on a bit of xml that had a DataLength of 13859 and it took 3:28 (3 minutes 28 seconds) to complete. I then tried it on another bit of xml with a datalength of 20030 and it took 22:47 (22 minutes) to complete and generated 993 rows. It would be nice if it were a bit quicker so it could be used in a reasonable amount of time. I'll add to the slowness comments... it processed 11K of XML in a little over two minutes. I tried it on a 141K block and it ran for 12 minutes before I gave up. Thanks all for the feedback. Every environment is different so it's hard to anticipate what kind of loads such a process might need to handle. In my use it was for relatively small XML files used by a web service. I think the main problem is that it uses several levels of recursion within the CTEs. I'll see what I can do to streamline the process. Any suggestions are welcome. I'm always open to improving the code. This is a nice bit of code on smaller chunks of XML. It might also be worth adding exec "sp_xml_removedocument @idoc" to the end of the procedure to free up memory. Not working for me. Microsoft SQL Server 2012 - 11.0.5058.0 (X64) May 14 2014 18:34:29 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) When trying an example from the post it gives: Msg 217, Level 16, State 1, Procedure ParseXMLtoTable, Line 63 Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32). PS. Never mind. I just ran the whole code at once and the examples became part of the procedure itself. The code has several issues.
- Missing sp_xml_removedocument means there is a SQL memory leak. sp_xml_preparedocument is no longer recommended. - Placing a string into an XML type variable parses it automatically, so there was no need to use sp_xml_preparedocument. This was done in several places, with the overhead of parsing incurred each time. - Having an input parameter of XML type and passing in an invalid XML string (due to the automatic parsing) can be hard to track down. Better to have the input parameter be nvarchar(max), then convert to an XML variable inside the SP where invalid XML can be handled. I am missing or not seeing the place where to put my path. Thanks for the script.
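As a sketch of the cleanup suggested above (the table shape and element names here are made up, not taken from the original procedure), the typed XML nodes()/value() methods avoid the OPENXML document handle entirely, so there is nothing to leak:

```sql
-- Hypothetical example: shredding attribute values with nodes()/value()
DECLARE @x xml = N'<orders><order id="1" total="9.99"/>
                           <order id="2" total="4.50"/></orders>';

SELECT  o.n.value('@id',    'int')          AS OrderId,
        o.n.value('@total', 'decimal(9,2)') AS Total
FROM    @x.nodes('/orders/order') AS o(n);

-- If OPENXML must be kept, always free the handle afterwards:
-- EXEC sp_xml_preparedocument @idoc OUTPUT, @xmlText;
-- ... SELECT ... FROM OPENXML(@idoc, '/orders/order') ...
-- EXEC sp_xml_removedocument @idoc;
```

The nodes() approach also set-processes the whole document in one statement, which sidesteps the recursion and nesting-level limits reported above.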
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00408.warc.gz
CC-MAIN-2023-14
2,648
28
https://www.myonlinetraininghub.com/excel-forum/vba-macros/executing-dos-batch-file-from-within-excel-vba
code
March 29, 2019 I am trying to execute a bat file from within Excel VBA, using a variable to select a file with a full path to process with the batch file. Q. I need to run a specific .bat file that would be located in the same folder/path as the selected file. How would I run the bat file with the selected file? Have viewed: https://www.myonlinetraininghu...../vba-shell With no success. Any suggestions greatly appreciated.
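One way to sketch it (the batch file name "process.bat" is an assumption here; substitute your real file name):

```vba
Sub RunBatchForSelectedFile()
    ' Hypothetical sketch: pick a file, then run a process.bat that
    ' sits in the same folder, passing the selected file as an argument.
    Dim selectedFile As Variant
    Dim folderPath As String
    Dim batFile As String

    selectedFile = Application.GetOpenFilename("All files (*.*), *.*")
    If VarType(selectedFile) = vbBoolean Then Exit Sub  ' user cancelled

    ' Folder of the selected file, including the trailing backslash
    folderPath = Left$(selectedFile, InStrRev(selectedFile, "\"))
    batFile = folderPath & "process.bat"                ' assumed name

    ' Quote both paths in case they contain spaces; Shell can launch a
    ' .bat directly, and vbNormalFocus shows the console while it runs.
    Shell """" & batFile & """ """ & selectedFile & """", vbNormalFocus
End Sub
```

Inside the batch file, the selected file's full path is then available as %1.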
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00003.warc.gz
CC-MAIN-2023-40
422
6
https://www.edaboard.com/threads/simulation-of-srr-using-hfss.208970/
code
Newbie level 4 While simulating the structure using HFSS, we use PEC-PMC boundary conditions along with wave ports. Could you suggest some links where I can read up on PEC-PMC boundary conditions and their implications? Better to use Floquet ports and master/slave boundaries. HFSS does these well and fast this way. See the getting-started tutorial for the unit-cell setup.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038887646.69/warc/CC-MAIN-20210419142428-20210419172428-00433.warc.gz
CC-MAIN-2021-17
373
3
https://www.freelancer.com.jm/job-search/cpanel-full-backup-script-cron/
code
...listing site. You need to make a module in the admin end, so that i can upload the listing in bulk. the listing should be,uploaded by cron job in the background. I will just upload the files, into the website. cron will be scheduled in such a way that all the queued files wil be uploaded. import will be imported to: contact_detail. Each file will be looking for an expert to set up a cpanel email system for my client. Need to ensure SSL set up and security settings to properly work. need to be able to set up email on gmail as secure also and be able to forward emails. also need to import contacts from old cpanel server to new addresses for about 6 emails. All emails are currently set up on new backupstrony, wstawienie nowej takiej samej tylko z inną grafiką. Presta I cant install WordPress from CPanel because its occuring an Redirect error. I tried everything but still not working. I have a website that run on interspire script. I would like to be able to back up data base and restore data base. Including the image files. I need email accounts, emails and a wordpress instal...emails and a wordpress install (to show in installatron on new host as well) migrated from one cpanel to another. The server it will end up on already has websites etc on there so migration will need to be done for individual file rather than entire cpanel. I am needing this done right away please I need some changes to an existing website. migrate website on vps cloud vultr company now they offer cpanel with centos so need to deploy new server with cpanel application and transfer my website to cpanel and delete previous server and also email configuration with yandex Also can you migrate my whole website data to new woocommerce theme? 
Not move 6 WordPress sites from 1 cpanel host to cPanel on Siteground setup DNS A records for the 6 staging servers move the files / database and reconfigure so wordpress works properly on all 6 sites to a staging server - test - and make sure they are the same and no errors once approved by the clients - push the files to the main site on siteground Hi, we are looking for someone with knowledge of Magento and Cron jobs. We have a Magento 1.9 site with various extensions, we have had intermittent cron problems, but it has all stopped working at the moment. So the job is 2 parts... 1) find out what is wrong with our current cron jobs and get them working again - URGENT 2) create a better We need to know how we can create new user (admin/manager and user) from backend/cpanel. As well as how we can change the passwords from those user. We expect a professional documentation with screenshots and steps mentioned we need to do. Additionally, we need someone to implement our google analytics code. Shall be done today itself Will pay $25 upon completion of work, Not hiring no-one over my pay price. Need help to integrate Wchat with my existing users table and pass user session as stated by creator, i tried but some of the tables are taken or have the primary key taken already. Need step by step installation help. If your familiar with Wchat lets do it.... I already have the folder in the root. ***** I&... Write a PHP Script to fetch ads from [url removed, login to view], and [url removed, login to view] post ads on my website using Cron Job Setup new VPS, with Mail preferences for mass email (Dkim,SPF etc) & secure Server. IP...preferences for mass email (Dkim,SPF etc) & secure Server. IP Rotation & Setup of Mailwizz software with cron jobs etc. Please only contact if you are familiar with Mailwizz Email Marketer & are able to Install and maintain this. Setup of Backup service/storage Hi, our client is requiring their customers backing up ASAP. 
The live website of theirs currently has no customers but their Staging website (which is a backup and can be accessed via a URL) has all their customers. Both websites (live and staging) are Magento 220.127.116.11, but it's just the customers they want to transfer (as there are new orders on Our website is down more than 24 hours. We have restarted server and also disabled firewall but still website not opening. Restarted server and all services but no solution. But we are able to ping website without any issue. We need a script which would take backup of a mysql database, secure it with a password and move it from the current server to another server. It can be a shell script, PHP script etc. We are open for suggestion. Thank you for placing bid!
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948513866.9/warc/CC-MAIN-20171211183649-20171211203649-00188.warc.gz
CC-MAIN-2017-51
4,461
16
http://forum.xda-developers.com/showthread.php?p=29666387
code
"Updating" to an older version is not possible, no. It might be possible to spoof the current version to an old one, if the phone is interop-unlocked, and then "upgrade" to a firmware release that's actually older than your current one... but even that might not actually work, if the updater checks the checksums of the "old" files. It wouldn't work with OS updates for sure; those contain differential (rather than canonical) patches. Installing an old, stock ROM with the old firmware does work. However, while this is possible with Samsung gen1 phones, I don't believe anybody has found a way to do it with gen2 phones yet. Finally, restoring a phone backup (as made by Zune when updating, or as made using the Easy Backup tool which just spoofs the update process), definitely does restore both your unlocks and your ability to unlock - that is, it restores the registry (where the unlocks are controlled from) and it restores the firmware (where the ability to unlock is implemented). If this didn't work for you, it's due to one of two things: A) Your backup (more properly called a "restore point") was created at a time when your phone already had relocked and lost the ability to unlock. A.1) You can use an older restore point, if you kept them (which I strongly recommend); just move it back to %LOCALAPPDATA%\Microsoft\Windows Phone Update. A.2) Before you ask, no, you can't use somebody else's restore point; they have device-specific encryption that we haven't broken yet. B) Your restore worked, but trying to interop-unlock again failed because you updated your Diagnostics app from the Marketplace. B.1) You can undo this by first uninstalling Diagnostics from the app list, then re-installing it using the phone dialer. Win8/Windows RT Projects: List of desktop apps for hacked RT devices XapHandler, Root Webserver, OEM Marketplace XAPs, Bookmarklets collection (Find On Page), Interop-unlock hacks.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700014987/warc/CC-MAIN-20130516102654-00097-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,920
11
https://cgsignal.com/u18/p1030
code
Real world scale 1 = 1 inch. Clean geometry in all models; only quads and triangles were used to create the geometry. All textures, materials and lighting included. Three different animal materials (horse A, horse B, and camel) have color and normal textures sized 1024x1024. Animals and bench are subdivision ready; product images were generated with subdivision level 2. Unwrapped, non-overlapping UVs done for all the animals and other models using textures. 2048x2048 textures used on the carousel structure. Background photo seen on product images not included. A rig has been set up for automatic animation with the following controls. 1 - Swing Speed: controls the speed at which animals move up and down (slider). 2 - Spin Speed: controls how fast the carousel rotates (slider). Flag with dynamics using a C4D cloth object. Please look at the included images and preview animations (Swing.mp4 and Spin.mp4). Automatic animation is only available in the Cinema4D version of the file. The FBX and OBJ versions of the file include material definitions but may require additional setup to achieve similar results. If you need a customized version of any of the Solancla assets please contact CGSignal.com.
- Published on: Sat Feb 18 2012
- Cinema4D R10: 80.4 M
- Collada 1.4: 90.3 M
- scale units: inches
- 3D printable: no
CGSignal Standard License You are free to: Adapt — remix, transform, and build upon the material for any purpose, including commercially. The licensor cannot revoke these freedoms as long as you follow the license terms. Under the following terms: You may NOT redistribute or resell the licensed content, unless it is used as licensed. You may NOT distribute purchased contents using (VRML, WebGL) or any other technology that allows open access to the licensed content's data. https://www.cgsignal.com/content-license Questions? please contact us
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00665.warc.gz
CC-MAIN-2022-21
1,852
29
https://www.toughbyte.com/positions/finland/helsinki/alphasense/full-stack-javascript-developer-998
code
Must have skills: Nice to have skills: Considering candidates from: CIS and Schengen Work arrangement: Onsite only Company size: 201-500 employees Trial period: 3 months AlphaSense is a revolutionary AI-powered search engine for market intelligence used by financial firms and corporations across industries and geographies. With more than 1,000 enterprise clients, their mission is to enable knowledge professionals to acquire critical business insights and data with speed and conviction. The company's search technology leverages AI and NLP to parse topics, concepts and ideas semantically and uncover the most relevant insights from previously fragmented data sets. Right now, the company is looking for a Full-Stack Developer who will work on the development of numerous new features, the migration to GraphQL, and the public API. - Experience with React and Node - Solid experience with GraphQL (they might consider a candidate without this experience if the candidate is extremely senior) - Fluent in communication, with excellent organizational, problem-solving, debugging, and analytical skills - A higher education degree in a relevant technical discipline such as Computer Science, Engineering or Information Technology is highly desired - Ability to create high-performance systems using cloud-native patterns Check out the answers to frequent questions about this position below. Can't find the answer you're looking for? Ask us via email or try the company page. What does the interview process look like? The standard process includes the following stages (might vary): - Toughbyte screening call - The Hiring Manager runs a 1-hour screening call, talks about the company and asks a few tech questions - Test assignment (3-4 hours) - Team tech interview - Site Director interview - Reference check
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00675.warc.gz
CC-MAIN-2021-43
1,838
24
https://community.cisco.com/t5/networking-documents/fabricpath-cisco-s-new-technology-of-extending-layer-2-network/tac-p/3116060/highlight/true
code
Traditional Ethernet network designs require termination of the Layer 2 network at the aggregation or core layer to limit the fault domain and broadcast domain caused by spanning tree. Due to the limitations of spanning tree, each network design is composed of both Layer 2 and Layer 3 in order to take advantage of routing features (e.g. multipathing, fast convergence, and loop-mitigation mechanisms like TTL and RPF) to extend the network, as shown in Figure 1 below. Cisco's new technology, FabricPath, brings Layer 3 routing benefits to flexible Layer 2-bridged Ethernet networks. Due to the widespread use of virtualization and clustering technologies these days, many organizations are looking to extend their Layer 2 domains across multiple data centers. FabricPath can provide this solution because it has reliability and HA (High Availability) features just like the IS-IS protocol. Figure 2 shows the key points of FabricPath from both a Layer 2 and a Layer 3 perspective. Currently FabricPath is available only on the F1-Series module of the Nexus 7000 series. FabricPath is derived from the IETF standard TRILL technology with a number of extra enhancements. Its switching allows multipath forwarding at Layer 2 without the use of spanning tree. FabricPath uses a Layer 2 IS-IS-based protocol for its control plane. The FabricPath IS-IS process is separate from the Layer 3 IS-IS process. FP Forwarding Mechanism: FabricPath creates trees just like spanning tree, but uses link-state control based on the IS-IS protocol rather than the distance-vector behavior of spanning tree. This is why it is loop free. It allows FabricPath to keep all paths (maximum 16) in a forwarding state without any blocking. It also allows faster convergence in case of failure, similar to a routing protocol. As shown in Figure 3, the fabric topology is composed of ingress and egress (edge) switches that are connected to the hosts, and core switches that provide the fabric connecting all the edge switches.
The egress switch can have ports connected to conventional Ethernet (CE), so the egress switch is the one with interfaces that are part of both FabricPath and CE. To forward traffic to multiple destinations, FabricPath creates trees. After electing common roots for the Layer 2 fabric, "trees" from these roots are calculated from the shared Layer 2 IS-IS routing database. In the FabricPath topology, each switch gets a unique switch ID, as depicted in Figure 3, to build the Layer 2 routing table. The ingress switch determines the tree to be used for a flow and adds the unique tree identifier to the FabricPath header. Figure 3 depicts the FabricPath routing table as seen from each switch. As mentioned before, once the root of the tree is determined, the root assigns dynamic IDs to the members. One of the major improvements in FabricPath is that not all switches in the Layer 2 FabricPath domain have to learn all the MAC addresses, which helps scale the MAC address tables. As shown in Figure 4, the MAC address table of the edge switch for host A shows that host B is connected locally on the CE side, whereas hosts C and D are reached via FabricPath through the remote switches 101 and 200. When host A needs to send traffic to host C and host C's MAC address is not known, the switch floods the traffic to its root tree, and the root then forwards the packets to all its member switches. If the destination is not known on a particular edge switch, that switch drops the frame and does not learn the source MAC address. However, if the destination is present on that switch, it will keep the source MAC address. To help reduce MAC address entries, the core FabricPath switches never learn MAC addresses. FabricPath is fairly simple to configure. To configure a basic FabricPath network, follow these steps on each device:
Enable the FabricPath feature set on each device.
switch# config t
switch(config)# feature-set fabricpath
Configure the FabricPath interfaces.
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode fabricpath
Set the VLANs into FabricPath mode. The default is the CE VLAN mode.
switch(config)# vlan 10
switch(config-vlan)# mode fabricpath
Please visit Cisco.com for more information about FabricPath.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514475.44/warc/CC-MAIN-20191208174645-20191208202645-00285.warc.gz
CC-MAIN-2019-51
4,686
20
https://www.r-bloggers.com/2012/12/how-i-learned-to-stop-worrying-and-really-love-lists/
code
One of the first weird things to get used to in R is unlearning some of the things that you think you know. As often happens, this reminds me of a quote I once read about Zen, which went about like this (I’m paraphrasing), “When I knew nothing of Zen, mountains were mountains, rivers were rivers and the sky was the sky. When I knew a little of Zen, the mountains were not mountains, the rivers were not rivers and the sky was not the sky. When I fully understood Zen, mountains were mountains, rivers were rivers and the sky was the sky.” When I knew a little bit of R, a list was not a list. Actually, I wasn’t sure what to make of it. Is it a structure? Is it a linked list? Is it an object array? I’m slowly reaching the point where I begin to understand that a list is a list. I’m not fully Zen on lists yet, but I do know this. I think they might be awesome. For me, the first circle of enlightenment for R comes when I realize how much more powerful and flexible it is than any of the other tools I’ve used (yes, even Matlab). The second circle of enlightenment comes with an appreciation of the apply functions, and that means understanding lists. Here’s a very simple construct that I’ve started applying (ha!) often:

df = GetTriangleData()
lCompanyDFs = split(df, df$GRCODE)
lProjections = lapply(lCompanyDFs, SomeFunction)
dfResults = do.call("rbind", lProjections)

Here’s how the process works in a nutshell:
1) Get a pile of data, which contains at least one categorical variable. In the NFL data set, that’s a team; in the NAIC insurance data set (to be discussed in a forthcoming post), that’s an insurance company.
2) Split the data. This will return a list whose elements are all dataframes. (Or at least in this case it will.)
3) Apply some function across the entire list.
4) Stitch the results back together with a call to rbind.
Lather. Rinse. Repeat. Once you’re in the second circle of enlightenment, you’ll never again write a “for” loop.
This has been a lifesaver to me when I’m trying to crunch through a giant set of data. I can pull data from our warehouse and carry out routine actions for each of our 500 accounts, for each of our lines of business, for each accident/policy year, etc. I split the data along a different axis and the rest of the analysis pretty much takes care of itself.
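GetTriangleData() and SomeFunction above are placeholders from the post; a fully self-contained version of the same split/apply/rbind pattern, using R's built-in mtcars data, looks like this:

```r
# Split the built-in mtcars data frame by cylinder count.
lGroups <- split(mtcars, mtcars$cyl)

# The function applied to each piece: mean mpg per group.
SummarizeGroup <- function(df) {
  data.frame(cyl = df$cyl[1], mean.mpg = mean(df$mpg), n = nrow(df))
}
lSummaries <- lapply(lGroups, SummarizeGroup)

# Stitch the per-group results back into one data frame.
dfResults <- do.call("rbind", lSummaries)
print(dfResults)
```

Three rows come back, one per cylinder group, with no "for" loop in sight.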
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00218.warc.gz
CC-MAIN-2020-45
2,353
5
https://www.kom.tu-darmstadt.de/en/research-results/research-areas/multimedia-technologies-serious-games/gamedays/
code
The Serious Games group at the Multimedia Communications Lab is involved in a set of activities to promote the topic of Serious Games. This includes active participation in research activities of the GI (German association for Computer Science) as well as hosting events and conferences, for instance Game Jams, which are very popular not only with students, as well as the GameDays. The GameDays were established in 2005 as a “Science meets Business” event in the field of Serious Games, taking place on an annual basis in Darmstadt, Germany. The principal aim is to bring together academia and industry and to discuss the current trends, grand challenges, and potentials of Serious Games for different application domains. Since 2010, the academic part has been emphasized, resulting in an international conference on Serious Games. Further information about the GameDays, both the scientific part and the public part, is available at http://www.gamedays-darmstadt.de/ including links and resumes of the GameDays 2020, 2019, .. Since 2015, the GameDays international conference on Serious Games has been merged with SGDA (Int'l Conference on Serious Games Development and Applications) into the Joint Conference on Serious Games (JCSG). Direct links to JCSG 2015 and JCSG 2016 as well as all previous GameDays and SGDA conferences are provided at http://jointconference-on-seriousgames.org Furthermore, since 2013 we have organized local gatherings for Game Jams like Ludum Dare. Game Jams are game development challenges where video games with a predefined theme are created within three days. Further information about the Game Jams is available at www.kom.tu-darmstadt.de/en/0/game-jams/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00000.warc.gz
CC-MAIN-2021-43
1,685
5
https://forcoder.su/c-sharp-5-0-unleashed/
code
English | 2013 | ISBN: 978-0672336904 | 1700 Pages | EPUB | 41 MB C# 5.0 Unleashed is for anyone who wants to learn the C# programming language in depth, understanding how language features truly work. While giving you those insights, you learn where and how to use the features to design various kinds of software. This book not only teaches the language’s capabilities, it also looks behind the scenes to build a solid foundation to aid you in understanding the .NET platform as a whole. Bart De Smet offers exceptional insight into the features of both the language and Microsoft’s broader framework. He doesn’t just cover the “what” and “how” of effective C# programming: He explains the “why,” so you can consistently choose the right language and platform features, maximizing your efficiency and effectiveness. The early chapters introduce the .NET platform, the tooling ecosystem, and the C# programming language, followed by in-depth coverage of the C# programming language itself, with immediate application of language features. The last chapters give an overview of the .NET Framework libraries about which every good developer on the platform should know. 
- Understand the .NET platform: its language support, libraries, tools, and more - Learn where C# fits, how it has evolved, and where it’s headed - Master essential language features including expressions, operators, types, objects, and methods - Efficiently manage exceptions and resources - Write more effective C# object-oriented code - Make the most of generics, collections, delegates, reflection, and other advanced language features - Use LINQ to express queries for any form of data - Master dynamic programming techniques built on .NET’s Dynamic Language Runtime (DLR) - Work with namespaces, assemblies, and application domains - Write more efficient code using threading, synchronization, and advanced parallel programming techniques - Leverage the Base Class Library (BCL) to quickly perform many common tasks - Instrument, diagnose, test, and troubleshoot your C# code - Understand how to use the new C# 5.0 asynchronous programming features - Leverage interoperability with Windows Runtime to build Windows 8 applications Here, the card object is checked against various types, and when a match is found data is extracted from it and bound to local variables. Pattern matching doesn’t operate only on regular objects; it can also be used to match over different types such as lists (extracting the head element and tail list) or tuples (extracting the elements at the different positions in the tuple). It’s not clear whether C# will ever get a full-fledged pattern matching operator. But it would definitely seem to be the better feature compared to simple type switching and may provide more value. In the meantime, if you’re designing an API that needs to be friendly toward type switching and the use of virtual methods doesn’t give you what you want, you have some workarounds. 
For example, the expression tree APIs in System.Linq.Expressions—which we explore during our discussion of reflection in Chapter 21, “Reflection”—provide a tree-based object model for expressions and statements. It’s a convenient way to represent code as data so that code can be inspected at runtime (for example, for interpretation, optimization, and translation). It’s common for consumers of such an API to have to switch on types corresponding to nodes in the tree. For example, a + operation will be represented as a BinaryExpression, whereas a ! operation is done using a UnaryExpression. Not to mention other types of expressions like method calls and such. Good layers of abstraction supported by language features such as code patterns (for example, foreach loops, using blocks, query syntax), object orientation, namespaces, and so on provide a win-win situation in this jungle of libraries. From the library writer’s point of view, an API can be given, by obeying good design principles, a natural feeling for users in various languages. At the same time, users’ skill sets can be reused to target all those libraries. Language design principles, such as static typing, together with runtime facilities, such as rich metadata support, tend to aid users in exploring APIs. A good example of tooling features leveraging this can be found in IntelliSense and the Object Browser (which should really be called Type Browser). Doing justice to all the framework libraries just by reviewing them in a shallow manner is nearly impossible, so refer to specialized titles for deep coverage of things such as user interface (UI) programming (for example, using Windows Presentation Foundation [WPF] or Extensible Application Markup Language [XAML] for Windows Store apps), targeting the Web (for example, using ASP.NET and Silverlight), communication and services (for example, using WCF), and so forth. Often, the number of auxiliary concepts introduced by those technology pillars
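The card sample the excerpt refers to is not reproduced in this extract. A minimal sketch of the kind of type switching it describes (the class names are invented for illustration, not taken from the book) might look like this in C# 5.0, which had `is` checks and casts but no pattern matching operator:

```csharp
using System;

abstract class Card { }

class CreditCard : Card
{
    public string Number { get; set; }
}

class GiftCard : Card
{
    public decimal Balance { get; set; }
}

static class Demo
{
    // Check the card object against various types and extract
    // data from it once a match is found.
    static string Describe(Card card)
    {
        if (card is CreditCard)
        {
            var cc = (CreditCard)card;   // C# 5.0 idiom: test, then cast
            return "Credit card ending in " + cc.Number.Substring(cc.Number.Length - 4);
        }
        if (card is GiftCard)
        {
            var gc = (GiftCard)card;
            return "Gift card with balance " + gc.Balance;
        }
        return "Unknown card type";
    }

    static void Main()
    {
        Console.WriteLine(Describe(new CreditCard { Number = "4111111111111111" }));
        Console.WriteLine(Describe(new GiftCard { Balance = 25m }));
    }
}
```

As the excerpt notes, virtual methods are usually preferable; this style is the workaround for APIs, like the expression tree node hierarchy, where adding virtual members is not an option.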
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00249.warc.gz
CC-MAIN-2021-10
4,995
23
https://forum.stacks.org/t/does-this-actually-store-data-in-the-blockchain/5934
code
Does Blockstack Gaia actually use the blockchain for storage? Or is Blockstack only really using blockchain technology for the authentication part? Is there any data stored in the blockchain? As far as I understand, your identity-related data is stored on the blockchain. The other data that you push through dApps is encrypted and can be stored in any cloud storage (the user's choice). You can read through https://github.com/orgs/blockstack/projects/27 Blockstack helps in achieving this.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474775.80/warc/CC-MAIN-20240229003536-20240229033536-00784.warc.gz
CC-MAIN-2024-10
485
3
https://financetrain.com/introduction-to-stress-testing
code
Introduction to Stress Testing
In modern risk management practice, statistical tools and models play a significant role in measuring risk. These statistical models are used to estimate the distribution of possible future outcomes, such as those of interest rates, stock prices, etc. One of the most popular measures of risk is Value-at-Risk (VaR). VaR is defined as the predicted worst-case loss at a specific confidence level (for example, 95%) over a period of time. One shortcoming of VaR is that it does not capture all possible outcomes. For example, it does not capture sudden, dramatic changes in the financial markets, such as some of the recent financial crises we have seen. To overcome this shortcoming, risk managers use a tool called “stress testing”. A very basic definition: stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. In terms of financial risk management, it involves stressing the portfolio with extreme conditions to see how it would perform. A stress test is a scenario that measures risk under unlikely but plausible events in abnormal markets. For example, what would happen if interest rates became extremely high, or if there were an unexpected change in foreign exchange rates? Stress testing can be done using many such plausible events or changes in financial variables. While such extreme events are not reflected in the VaR estimate, stress testing with them tells us more about the expected losses over the given time horizon. Most banks and financial institutions use stress tests as a complement to Value-at-Risk. Stress tests are more common for portfolios that require managing market risk. The portfolios most suitable for stress testing are the ones that include interest rates, equity, foreign exchange, and commodity-related instruments.
There are two types of stress tests: sensitivity tests and scenario tests. Sensitivity analysis identifies how portfolios respond to shifts in relevant economic variables or risk parameters. Scenario analysis assesses the resilience of financial institutions and the financial system to severe but plausible scenarios. JP Morgan Chase's 2010 annual report, for example, describes how the firm uses stress tests.
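The historical-simulation flavor of the VaR measure defined above can be sketched in a few lines of Python (a generic illustration, not tied to any particular bank's risk system):

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation Value-at-Risk.

    Given a history of portfolio returns, VaR at the given confidence
    level is the loss that is only exceeded (1 - confidence) of the
    time.  Returned as a positive number (a loss).
    """
    ordered = sorted(returns)                  # worst return first
    idx = int((1 - confidence) * len(returns))
    idx = min(idx, len(returns) - 1)           # guard against tiny samples
    return -ordered[idx]

# Ten days of illustrative daily returns.
rets = [0.01, -0.02, 0.005, -0.015, 0.02, -0.03, 0.007, -0.01, 0.012, -0.005]
var_95 = historical_var(rets, confidence=0.95)
print(f"95% one-day VaR: {var_95:.1%} of portfolio value")  # prints: 95% one-day VaR: 3.0% of portfolio value
```

Stress testing complements this number by asking what happens in extreme scenarios that the historical window never contained.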
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679515260.97/warc/CC-MAIN-20231211143258-20231211173258-00456.warc.gz
CC-MAIN-2023-50
2,567
12
http://www.neoformix.com/Projects/DiggViz2/index.html
code
Digg Story Graph is an interactive visualization that shows the relationships between recent popular stories on Digg through the use of node and link diagrams. Stories can be visually connected through shared vocabulary, common topics, domain, submitter, or date submitted. To drag a node, use the right mouse button. If you don't have a right mouse button, you can use the left button while holding down the CTRL key. Pressing the space bar will unfreeze any fixed nodes. Hovering over any node will stop things from moving. For story nodes, if you hover near the center (the darker section), the details for that story will appear. Keeping the mouse near the edge of any node allows you to see what is connected up to 2 levels away. You can click on all but the Date or Word nodes to visit an associated web page. There is also a large version of the Digg Story Graph available. It requires 900x800 pixels for proper display and a decent CPU for good responsiveness. This smaller version above shows the 100 latest popular Digg stories. The larger version will show 200 and support more word nodes. This application requires Java to run and was constructed using Processing. The folks at Digg have provided an excellent API that made it quite simple to extract the data. I expect to explore some other aspects of Digg in the future. Thanks also to Jeffrey Traer Bernstein, who created the excellent traer physics library that I used to construct the layouts. As always, feedback is welcome! If you found this interesting you might like to take a look at these as well:
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701409268/warc/CC-MAIN-20130516105009-00006-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,571
5
https://dr.library.brocku.ca/handle/10464/2881/browse?type=author&value=Hossain%2C+Md.+Nour
code
Browsing M.Sc. Computer Science by Author "Hossain, Md. Nour"
Equational Reasoning about Object-Oriented Programs (Hossain, Md. Nour; Department of Computer Science, Brock University, 2013-04-08)
Formal verification of software can be an enormous task. This fact has brought some software engineers to claim that formal verification is not feasible in practice. One possible method of supporting the verification process is a programming language that provides powerful abstraction mechanisms combined with intensive reuse of code. In this thesis we present a strongly typed functional object-oriented programming language. This language features type operators of arbitrary kind corresponding to so-called type protocols. Subclassing and inheritance are based on higher-order matching, i.e., they utilize type protocols as the basic tool for reuse of code. We define the operational and axiomatic semantics of this language formally. The latter is the basis of the interactive proof assistant VOOP (Verified Object-Oriented Programs), which allows the user to prove equational properties of programs interactively.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00526.warc.gz
CC-MAIN-2022-33
1,098
2
https://www.sidefx.com/docs/houdini/vex/functions/usd_setcollectionexpansionrule.html
code
int usd_setcollectionexpansionrule(int stagehandle, string collectionpath, string rule) This function sets the expansion rule on the collection. A handle to the stage to write to. Currently the only valid value is 0, which means the current stage in a node. (This argument may be used in the future to allow writing to other stages.) The path to the collection. The expansion rule to set on the collection. USD supports a few standard expansion rules:
explicitOnly - only paths in the include list and not in the exclude list belong to the collection
expandPrims - all the primitives at or below the includes (but not excludes) belong to the collection
expandPrimsAndProperties - like expandPrims, but also includes the properties of matched primitives
The value of stagehandle on success or -1 on failure.
// Set the expansion rule on the cube's collection.
string collection_path = usd_makecollectionpath(0, "/geo/cube", "some_collection");
usd_setcollectionexpansionrule(0, collection_path, "explicitOnly");
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00038.warc.gz
CC-MAIN-2023-40
968
14
http://www.wetechgadgets.com/uber-launches-ludwig-a-toolbox-for-open-supply-ai-constructed-on-tensorflow/
code
You want to delve seriously into the development of artificial intelligence (AI), but you find the programming involved daunting? Don't worry, Uber is here to help. Today the ride-hailing giant debuted Ludwig, an open source "toolbox" built on Google's TensorFlow framework, which lets users train and test AI models without having to write code. Uber says Ludwig is the culmination of two years of work to streamline the deployment of AI systems in applied projects. It has used the tool suite in-house for tasks such as extracting information from driver's licenses, identifying points of interest in conversations between driver partners and riders, predicting food delivery times, and more. "Ludwig is unique in its ability to make deep learning accessible to non-experts and to enable faster model-building iteration cycles for developers and experienced machine learning researchers," Uber wrote in a blog post. "By using Ludwig, experts and researchers can simplify the prototyping process and streamline data processing, so that they can focus on developing deep learning architectures rather than data wrangling." Above: visualizations produced by Ludwig. Image credit: Uber. As Uber explains, Ludwig provides a set of AI architectures that can be combined to create an end-to-end model for a given use case. Training starts with a tabular data file (such as a CSV) and a YAML configuration file that specifies which columns of the data file are input features (that is, the observed properties of the phenomenon) and which are output target variables. If more than one output target variable is specified, Ludwig learns to predict all outputs simultaneously. 
Model definitions can contain additional information, including preprocessing settings for each feature of the dataset and model training parameters. Models trained with Ludwig are saved and can be loaded later to get predictions on new data. Currently, for every data type Ludwig supports, the toolset provides type-specific encoders that map raw data to tensors (the data structures used in linear algebra), as well as decoders that map tensors back to raw data. Built-in combiners automatically assemble the tensors from all input encoders, process them, and return them for use by the output decoders. "By composing these data-type-specific components, users can make Ludwig train models for a wide variety of tasks," Uber writes. "For example, by combining a text encoder and a category decoder, the user can obtain a text classifier. Combining an image encoder and a text decoder lets the user obtain an image captioning model ... This flexible encoder-decoder architecture allows less experienced practitioners to train deep learning models for a variety of machine learning tasks, such as text classification, object classification, image captioning, sequence tagging, regression, language modeling, machine translation, time-series forecasting, and question answering." In addition, Ludwig provides a set of command-line utilities for training, testing models, and making predictions; tools to evaluate models and compare their predictions through visualizations; and a Python programming API that lets users train or load a model and use it to get predictions on new data. Ludwig can also train models in a distributed fashion using Uber's Horovod, a framework that supports multiple GPUs and machines. 
At the moment, Ludwig contains encoders and decoders for binary values, floating-point numbers, categories, discrete sequences, sets, bags, images, text, and time series, and it supports some pre-trained models. In the future, Uber plans to add new encoders for data types such as text, images, audio, point clouds, and graphs, and to integrate "more scalable solutions" for managing large datasets. "We decided to open source Ludwig because we think it can be a useful tool for inexperienced machine learning practitioners as well as experienced deep learning developers and researchers. Non-experts can quickly train and test deep learning models without having to write code. Experts can obtain solid baselines to compare their models against, and an experimental framework to help test new ideas and analyze models through standard data preprocessing and visualization." Ludwig's debut follows Uber's release of Pyro in 2017, a deep probabilistic programming language built on Facebook's PyTorch machine learning framework. It also comes as no-code AI development tools, like Baidu's EZDL and Microsoft's AI model builder, continue to gain ground.
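The CSV-plus-YAML workflow described above looks roughly like the following. The column names and file name are invented for illustration, and the exact keys should be checked against Ludwig's own model-definition documentation, so treat this as a sketch rather than a verified configuration:

```yaml
# model.yaml - declare which CSV columns are inputs and which are outputs.
input_features:
  - name: review_text      # a text column in the CSV (illustrative name)
    type: text
output_features:
  - name: sentiment        # the target column to predict
    type: category
```

Training would then be started from the command line by pointing Ludwig's train command at the CSV file and this YAML file; combining the text encoder with the category decoder, as the quote above explains, yields a text classifier.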
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525046.5/warc/CC-MAIN-20190717041500-20190717063500-00456.warc.gz
CC-MAIN-2019-30
5,141
13
http://www.programmableweb.com/sample-source-code/google-cloud-print-python-sample-code
code
The Google Cloud Print Python code samples act only as a reference for building applications with the Google Cloud Print API. Some of the topics display code to extract cookies, authenticate, register a printer, and access print jobs. On the site, developers should note that the code samples were last updated in 2011, which is why they should not drop the code straight into existing applications.
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661795.48/warc/CC-MAIN-20160924173741-00207-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
748
2
https://www.saltyjedi.com/forum-1/object-279e/unionen-15
code
Tank used: Object 416 My thoughts on using this tank went like this. It's a rarely seen tank, which should mean that it wouldn't be that hard to get at least master badge 1, and I was right I think, around 1k base xp. The other reason was that it has very good premium ammo and a good gun, so you won't have any problems when you get a T10 game, which you are hoping for. Just don't get hit because that will wreck your day. You can of course do this in any tank but I do recommend a T8 or T9, because if you face tanks tiers above you and you manage to damage them you get more xp. Best of luck to you all
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00131.warc.gz
CC-MAIN-2020-50
609
4
https://athleta.gapcanada.ca/customerService/info.do?cid=44959
code
- By chat: Start a chat using the icon in the bottom, right corner of the page. Regular chat hours are daily, 9 am – 9 pm ET - By phone: Please dial 711 for relay service - By mail for Correspondence: Athleta Canada Customer Service, 9500 McLaughlin Road North, Brampton, ON, L6X 0B8, Canada - For Merchandise Returns: Please view our entire Return policy here. Purchasing and Ordering
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00670.warc.gz
CC-MAIN-2023-06
387
8
https://community.spiceworks.com/topic/5362-just-deleting-tickets
code
I recently got hit with a slew of unwanted help requests from a spammer. I have over 400 tickets that I wish deleted. The problem I run into is I cannot just delete a ticket, I have to close it first. By closing it, it triggers an e-mail to be sent to the spammer, starting the whole process over again. What can I do to "just" delete the unwanted tickets? This topic was created during version 1.7. The latest version is 7.5.00101. 400 tickets, that sounds nasty..... Although this might not help you with your immediate problem, my helpdesk email can only send and receive within the domain. Which is perfect because anyone I want contacting the helpdesk is part of our domain. If you don't have a need to go beyond your domain for helpdesk email, you won't need to worry about incoming spam.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549429417.40/warc/CC-MAIN-20170727182552-20170727202552-00195.warc.gz
CC-MAIN-2017-30
799
6
https://neos-server.org/neos/solvers/milp:Cbc/GAMS.html
code
The COIN-OR Branch and Cut (Cbc) solver is an open-source mixed-integer linear programming solver written in C++. Problems for Cbc can be submitted on the NEOS server in AMPL, GAMS, or MPS format. Cbc is intended to be used primarily as a callable library to create customized branch-and-cut solvers; however, a basic stand-alone executable is used to solve problems submitted to the NEOS Server. Cbc utilizes other COIN-OR projects: Cgl (Cut Generation Library) to generate cutting planes and Clp (COIN-OR Linear Programming) to solve the linear programs at each node of the tree. Cbc was developed by John Forrest, now retired from IBM Research. The project is currently managed by John Forrest and Ted Ralphs. For more information on Cbc and the COIN-OR initiative, please visit the Cbc COIN-OR website. The user must submit a model in GAMS format to solve an optimization problem. For security purposes, the model submitted must adhere to the following conventions: If you are unfamiliar with GAMS, the GAMS Documentation includes a GAMS Tutorial, and examples of models in GAMS format can be found in the GAMS model library. By default, the NEOS Server limits the amount of output generated in the listing file by turning off the symbol and unique element list, symbol cross references, and restricting the rows and columns listed to zero. This behavior can be changed by specifying the appropriate options in the model file. See the documentation on GAMS output for further information. You may optionally submit an options file if you wish to override the default parameter settings for the solver. Currently, the NEOS Server can only use optfile=1 with GAMS input. Therefore, any model that specifies a different options file will not work as intended. To activate the options file, set it in the model or on the command line:
<modelname>.optfile = 1 ;
optfile = 1
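For reference, here is a toy MILP in GAMS format of the kind that could be submitted to Cbc, with the optfile setting from above included. The model itself is invented for this sketch:

```gams
* Toy MILP for illustration: maximize 3x + 2y subject to a capacity limit.
Integer Variables x, y;
Variable z;

Equations obj, cap;
obj .. z =e= 3*x + 2*y;
cap .. 2*x + y =l= 10;

x.up = 10;  y.up = 10;

Model toy /all/;
toy.optfile = 1;
Solve toy using mip maximizing z;
Display x.l, y.l, z.l;
```

The `toy.optfile = 1;` line tells GAMS to pass the solver the options file named cbc.opt, matching the NEOS restriction that only optfile=1 is honored.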
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00659.warc.gz
CC-MAIN-2022-49
1,767
32
https://4mobiles.net/downgrade-samsung-galaxy-a3-to-official-lollipop-5-1-1/
code
As you might already know, the Marshmallow update is rolling out to Galaxy A3 devices. However, some models became laggy and their batteries started to drain fast. The only solution to fix it is to get the Lollipop OS back onto your device. The tutorial given below will downgrade the Galaxy A3 to the stable Lollipop 5.1.1 OS and hopefully bring back the smile on your face. Warning: This tutorial is only for the Galaxy A3 A310F model. The update process will erase the device's internal storage, so it's strongly recommended to back up your data to the device's SD memory card or a PC before starting this tutorial. • The device should have at least 60% charge left on the battery. • If you have the Samsung Kies program on your PC, Kies should be completely turned off so it does not disturb the process. • USB drivers must be installed on the PC (if Kies is on the PC then the drivers are already installed). • USB Debugging must be enabled on the device. To enable it, go to Settings/About phone and keep tapping on Build number until you see that Developer mode has been turned on. After that go to Settings/Developer Options and check USB Debugging. - Download Odin3 v3.10.7 to the PC and extract it. - Download Lollipop 5.1.1 to the PC and extract it so you get a .tar.md5 file. - Turn off the Galaxy A3 and boot into Download mode by pressing and holding the Volume Down, Home and Power buttons together. When a warning screen is displayed, release all buttons and press the Volume Up button to enter Download mode. - Run Odin3-v3.10.7.exe as an Administrator and then connect the Galaxy A3 to the PC via USB. A message showing Added!! will appear in Odin's message box; if not, try another USB port. If the issue persists, try reinstalling the USB driver. - Click on the AP button and select the .tar.md5 file which was downloaded in step 2. In Odin, make sure that the Auto Reboot and F. Reset Time options are checked while Re-Partition must stay unchecked. For example: - Click the Start button in Odin to begin the process.
- After the process is complete, the Galaxy A3 will restart and a PASS message with a green background will appear in the left box at the top of Odin. - Turn off Odin and unplug the USB cable from the device. - The Galaxy A3 will restart. The first startup can take up to 15 minutes, so be patient. Congratulations on successfully downgrading the Galaxy A3 to the Lollipop 5.1.1 official ROM! When I start the installation Odin shows fail. I have an Odin alert shown on my phone screen: "Binary size is too large : boot". What should I do? I have cleared everything from the phone. Does your PC have enough free space to perform this operation? Dude, same problem. I have 20 gb left on my pc xD wtf Looks like some devices don't support this OS file. The only possible solution here is to look for an alternative OS version. I have installed android version 5.1.1. My phone is just showing the Samsung welcome icon and restarts again and again, what's the problem, what should I do? Connect your Samsung to the PC and launch Kies to recover the device. Hello, I have the same problem as Mozzam and Kies doesn't recognize my phone please help:(( Please give us more info. On which step did you get stuck? What do you see on the device screen? Hello, I used ODIN to downgrade my A3 2016 to Android 5.1.1 from 6.0 and when I started up my phone after ODIN said the downgrade was successful, it just keeps booting but it doesn't really boot, I mean it shows the "Samsung Galaxy A3 6" logo and then the Samsung bootup logo, and it restarts and keeps doing this every minute. Then you should connect it to a PC and launch Kies for device recovery. Didn't work - Odin failed to recognize the USB connection at all, even after reinstalling the USB drivers. And you don't say what to do then - I'm not supposed to turn the phone off, but what else can I do? This means that the problem is with your PC USB connection. The device must be recognized, otherwise you won't be able to proceed. Try using another USB port and installing the right drivers for your device.
Also make sure USB Debugging is On.
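Several of the failures reported above can trace back to a corrupted download. A .tar.md5 firmware file is, by convention, an ordinary tar archive with its own md5sum-style line appended at the end, so it can be sanity-checked before flashing. Below is a minimal sketch of that check; the demo file is a synthesized stand-in (with a real download you would read the extracted .tar.md5 instead), and the trailer format is an assumption about how these files are laid out:

```python
import hashlib

# Build a tiny stand-in for a firmware file: payload + appended md5 trailer.
# (Assumption: real .tar.md5 files end with a "<md5>  <name>" text line.)
payload = b"demo tar payload\n"
trailer = hashlib.md5(payload).hexdigest().encode() + b"  demo.tar\n"
with open("demo.tar.md5", "wb") as f:
    f.write(payload + trailer)

# Verification: split off the last line and compare hashes. (Caveat: a real
# tar body is binary and may itself contain newline bytes, so a robust tool
# would only scan the final few dozen bytes for the trailer line.)
data = open("demo.tar.md5", "rb").read()
cut = data.rfind(b"\n", 0, len(data) - 1) + 1
body, last_line = data[:cut], data[cut:]
expected = last_line.split()[0].decode()
computed = hashlib.md5(body).hexdigest()
print("checksum OK" if computed == expected else "checksum MISMATCH")
```

If the computed hash does not match the trailer, re-download the firmware before retrying the flash.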
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00456.warc.gz
CC-MAIN-2023-14
4,008
30
http://www.newamericanfoods.com/pages/conversions.html
code
Types of measuring utensils Weighing rice with a scale Real chefs measure out their solid ingredients according to weight, NOT volume. For example, take cut-up strawberries - 1 cup of cut strawberries and 8 oz of cut strawberries may not equal the same amount (it depends on how they're cut, how big they were to start with, etc). Another great example is the misconception that 8 oz of flour would equal 1 cup; however, 1 cup of flour actually only weighs 4.5 ounces! When a recipe ingredient is expressed in weight, weigh it out; likewise, you should always measure out any ingredient expressed in volume. There are, of course, a few exceptions to this rule: the weight and volume of water, butter, eggs and milk are the same, so you can measure these ingredients out however is most convenient.
Volume: Refers to the space occupied by a substance. Height x Width x Length.
1 Gallon = 0.5 Pecks = 3.80 Liters = 4 Quarts = 8 Pints = 16 Cups = 128 Fluid Ounces = 256 Tablespoons = 768 Teaspoons
1 Liter = 1.06 Quarts = 2.11 Pints = 4.23 Cups = 33.81 Fluid Ounces = 67.73 Tablespoons
1 Cup = 0.24 Liters = 0.5 Pints = 8 Fluid Ounces = 16 Tablespoons = 48 Teaspoons = 250 ml
1 Fluid Ounce = 0.125 (or 1/8) Cups = 2 Tablespoons = 6 Teaspoons = 1 Shot = 29.57 ml
1 Tablespoon = 3 Teaspoons = 0.5 Fluid Ounces = 15 Milliliters
1 bushel = 4 pecks
1 lemon = 1-1.25 fluid ounces juice
1 orange = 3-3.5 fluid ounces juice
Some less commonly known measurements of volume are: DASH, PINCH, SMIDGEN, NIP.
As you can see in the conversions for Volume listed above, 1 fluid ounce = 29.57 milliliters. So, to convert fluid ounces to milliliters, multiply the number of ounces by 29 (rounded for convenience): 8 fl. oz. x 29 = 232 ml Therefore: 232 ml / 29 = 8 fl. oz.
Weight: Refers to the mass or heaviness of a substance. Most chefs use portion or balance scales to measure out these ingredients. 
1 Pound = 16 Ounces = 454 Grams
1 Ounce = 28.35 Grams = 0.0625 Pounds
1 Gram = 0.035 Ounces (1/30 oz.)
1 Kilogram = 1,000 Grams = 35.27 Ounces = 2.2 Pounds
1 large egg white = 1 ounce (average)
As you can see from the weight conversions above, 1 oz = 28.35 grams. To convert ounces to grams, multiply the number of ounces by 28 (rounded for convenience): 8 oz. x 28 = 224 g Therefore: 224 g / 28 = 8 oz.
Count: Refers to the number of individual items - used in recipes, portion control and in purchasing. When used in purchasing, it indicates the size of the individual items. For example: a "96 count" of lemons would mean that a 40-pound case contains 96 individual lemons, and if you were to increase the count to, say, a "115-count case", it would mean the size of the lemon decreased so that more would fit in the same 40-pound case.
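The two worked conversions above can be wrapped into small helper functions. A sketch using the exact ratios rather than the rounded ×29/×28 shortcuts (the function names are my own):

```python
# Exact ratios from the tables above; the article's "x 29" and "x 28"
# shortcuts are these constants rounded for convenience.
ML_PER_FLOZ = 29.57   # 1 fluid ounce = 29.57 milliliters
G_PER_OZ = 28.35      # 1 ounce = 28.35 grams

def floz_to_ml(fl_oz):
    return fl_oz * ML_PER_FLOZ

def ml_to_floz(ml):
    return ml / ML_PER_FLOZ

def oz_to_g(oz):
    return oz * G_PER_OZ

def g_to_oz(g):
    return g / G_PER_OZ

print(round(floz_to_ml(8), 2))  # 8 fl oz is about 236.56 ml (~232 with the x29 shortcut)
print(round(oz_to_g(8), 2))     # 8 oz is about 226.8 g (224 with the x28 shortcut)
```

Note how the rounded shortcuts undershoot slightly; for kitchen work the difference rarely matters.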
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00006.warc.gz
CC-MAIN-2022-27
2,717
27
https://khangdinh.wordpress.com/2013/03/16/red5-1-0-1-note-on-the-admin-panel/
code
For those of you who are interested in the RED5 media streaming server, here’s a minor note that might come in handy: If you try to use “admin.jsp” and it gives you the error message “Error in db setup Table/View ‘APPUSER’ does not exist” – fear not! I… haven’t found out why this happens (admin.jsp used to work properly and normally on a prior version to 1.0.1, if memory serves me well), but anyway that’s not the only way for you to create an admin account. You can instead try this link: If that doesn’t go anywhere, try installing the “admin demo” app via the RED5 installer at The admin app will give you an alternative GUI to register an admin account. Hope this helps,
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057775.50/warc/CC-MAIN-20210925202717-20210925232717-00222.warc.gz
CC-MAIN-2021-39
705
6
https://informatica.vu.lt/journal/INFORMATICA/search?author=Algirdas%20Laukaitis
code
Pub. online: 1 Jan 2018 | Type: Research Article | Open Access. Volume 29, Issue 4 (2018), pp. 693–710. In this paper, we propose a framework for extracting translation memory from a corpus of fiction and non-fiction books. In recent years, there have been several proposals to align bilingual corpora and extract translation memory from legal and technical documents. Yet, when it comes to the alignment of a corpus of translated fiction and non-fiction books, the existing alignment algorithms give low-precision results. In order to solve this low-precision problem, we propose a new method that combines existing alignment algorithms with a proactive learning approach. We define several feature functions that are used to build two classifiers for text filtering and alignment. We report results on the English–Lithuanian language pair and on a bilingual corpus of 200 books. We demonstrate a significant improvement in alignment accuracy over currently available alignment systems. Pub. online: 1 Jan 2011 | Type: Research Article | Open Access. Volume 22, Issue 2 (2011), pp. 203–224. In this paper, we describe a model for aligning books and documents from a bilingual corpus with the goal of creating a “perfectly” aligned bilingual corpus at the word-to-word level. The presented algorithms differ from existing algorithms in that they consider the presence of a human translator, whose involvement we are trying to minimize. We treat the human translator as an oracle who knows the exact alignments, and the goal of the system is to optimize (minimize) the use of this oracle. The effectiveness of the oracle is measured by the speed at which he can create a “perfectly” aligned bilingual corpus. By a “perfectly” aligned corpus we mean a zero-entropy corpus, because the oracle can make alignments without any probabilistic interpretation, i.e., with 100% confidence. Sentence-level alignments and word-to-word alignments, although treated separately in this paper, are integrated in a single framework. 
For sentence-level alignments we provide a dynamic programming algorithm which achieves low precision and recall error rates. For word-to-word alignments, an Expectation Maximization algorithm that integrates linguistic dictionaries is suggested as the main tool for the oracle to build a “perfectly” aligned bilingual corpus. We show empirically that the suggested pre-aligned corpus requires little interaction from the oracle and that the creation of a perfectly aligned corpus can be achieved almost at the speed of human reading. The presented algorithms are language-independent, but in this paper we verify them on the English–Lithuanian language pair on two types of text: law documents and fiction literature. Pub. online: 1 Jan 2008 | Type: Research Article | Open Access. Volume 19, Issue 4 (2008), pp. 535–554. This paper examines approaches for translation between English and morphology-rich languages. Experiments with English–Russian and English–Lithuanian reveal that “pure” statistical approaches on a 10-million-word corpus give unsatisfactory translations. Then, several Web-available linguistic resources are suggested for translation. Syntax parsers, bilingual and semantic dictionaries, a bilingual parallel corpus and a monolingual Web-based corpus are integrated in one comprehensive statistical model. A multi-abstraction language representation is used for statistical induction of syntactic and semantic transformation rules called multi-alignment templates. The decoding model is described using feature functions, a log-linear modeling approach and an A* search algorithm. An evaluation of this approach is performed on the English–Lithuanian language pair. The presented experimental results demonstrate that the multi-abstraction approach and hybridization of learning methods can improve the quality of translation.
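The sentence-level dynamic programming mentioned above can be illustrated with a generic length-based aligner in the spirit of Gale and Church. This is a simplified sketch, not the authors' actual algorithm: it only allows 1-1, 1-0 and 0-1 moves, and the length cost and skip penalty are illustrative choices.

```python
import math

# Generic length-based sentence alignment by dynamic programming.
# Inputs are sentence lengths (e.g. in characters) for source and target.
def align(src_lens, tgt_lens, skip_cost=3.0):
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1-1 match: cheap when lengths are similar
                c = cost[i][j] + abs(math.log((src_lens[i] + 1) / (tgt_lens[j] + 1)))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n and cost[i][j] + skip_cost < cost[i + 1][j]:  # skip source
                cost[i + 1][j], back[i + 1][j] = cost[i][j] + skip_cost, (i, j)
            if j < m and cost[i][j] + skip_cost < cost[i][j + 1]:  # skip target
                cost[i][j + 1], back[i][j + 1] = cost[i][j] + skip_cost, (i, j)
    # Backtrack, collecting the 1-1 sentence pairs.
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        if i - pi == 1 and j - pj == 1:
            pairs.append((pi, pj))
        i, j = pi, pj
    return list(reversed(pairs))

print(align([10, 40, 12], [11, 38, 13]))  # → [(0, 0), (1, 1), (2, 2)]
```

A real system would add 2-1/1-2 merges and, as in the papers above, dictionary-based word features on top of the length signal.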
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00155.warc.gz
CC-MAIN-2021-10
3,785
9
https://communities.vmware.com/t5/VMware-Workstation-Player/What-GPU-would-you-recommend-for-VMWare-Worksation-12-on-Linux/td-p/952477
code
Maybe that has already been asked... but a quick search did end up with no real results and/or too many of them, most not relevant. So here's my question: I am currently running an "old" Mac Pro (4.1 upgraded to 5.1; Xeon CPUs upgraded to 2 x hexa-cores @ 2.93GHz; 40GB of RAM; several SSDs and HDDs inside the machine and outside of it). The main system is Linux (don't ask why, but basically because I got fed up with Apple stepping backwards with each new product they release!). I have to run Windows as a guest OS. VMware allows me to do so, but I would like to improve things to have Win 10 running even more smoothly than it does. I think one of the major bottlenecks here is my GPU: it's an old ATI Radeon HD 5870 with 1GB of VRAM. I am considering swapping it for a "Mac flashable card" (just in case I'll decide to resell that Mac Pro...). So chances are I will get either an ATI Radeon HD 7970 (3GB RAM) or a GTX 680. I think that performance-wise they're more or less on par, but maybe VMware uses resources (OpenCL, CUDA, DirectX...) that would make one of the cards superior to its competitor... Does anyone have some knowledge they'd like to share to advise on which card to choose? Thanks a lot.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00319.warc.gz
CC-MAIN-2023-23
1,219
11
https://claudiograssi.medium.com/unity2d-devlog-16-ramming-speed-b83753b29e1a?source=user_profile---------8----------------------------
code
As the player, you spend a lot of time avoiding collisions with your enemies. Dodging is made easy by the fact that enemies move predictably. But I want to throw a wrench into that scheme. Sometimes, trouble comes looking for you, and all you can do is face it head-on. Create an enemy that will attempt to ram the player. How I went about it: Fundamentally, this is a question of movement. I had to decide how to move an enemy normally and switch its behavior based on the player's proximity. But before any of that, I drew some new sprites. With the artwork completed, I was ready to implement my Ramming Script. I start by defining the speed and range at which the alien will attempt to ram the player. Next, I create a reference to the Player's Transform Component. Finally, I use the Player's Transform Component to check the distance against the value of Range. In Update(), I check for the condition below, and if it returns true, I activate the ramming behavior. Just like that, if the Player is within range and the alien has not moved past them, the Ramming() method will be called. I could have gone a different way with this. I could have had the alien chase the Player around. However, I did not want to make a game that was impossible for the player to beat. Part of the fun comes from the feeling of empowerment. If a player feels powerless, the game becomes a frustrating mess. So making sure that the game mechanics are balanced is just as important as making them fun.
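The proximity check described above can be sketched in an engine-agnostic way (the devlog itself uses Unity C#; the names and numbers here are illustrative, not the actual script). The enemy cruises straight down until the player comes within range and has not yet been passed, then it steers directly at the player:

```python
import math

# One movement step for the "rammer" enemy in a simple 2D coordinate space.
# Positions are (x, y) tuples; the enemy normally drifts down the screen.
def step(enemy, player, speed=1.0, ram_speed=2.5, ram_range=4.0):
    ex, ey = enemy
    px, py = player
    dist = math.hypot(px - ex, py - ey)
    if dist <= ram_range and ey > py:      # player in range and not yet passed
        dx, dy = (px - ex) / dist, (py - ey) / dist
        return (ex + dx * ram_speed, ey + dy * ram_speed), "ramming"
    return (ex, ey - speed), "cruising"    # default: move straight down

pos, mode = step(enemy=(0.0, 10.0), player=(0.0, 0.0))
print(mode)   # cruising — the player is out of range
pos, mode = step(enemy=(0.0, 3.0), player=(0.0, 0.0))
print(mode)   # ramming — within range and still ahead of the player
```

The `ey > py` guard mirrors the devlog's "has not moved past them" condition, which is what keeps the enemy from turning into a relentless chaser.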
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00028.warc.gz
CC-MAIN-2023-50
1,492
9
https://ckan.org/2011/09/22/ux-designer-front-end-developer-position/
code
UX Designer / Front end developer position We’re looking to hire a Designer / Front-end developer with UX skills to join the CKAN team at the Open Knowledge Foundation. We’re a small group working on a fantastic product that enables communities and open governments to share and manage data. CKAN is the leading open source data hub software that powers several government sites including the UK government’s open data portal http://data.gov.uk/ and community hubs such as http://thedatahub.org/. With lots of powerful functionality for cataloging and managing data, our next focus is on creating a better user experience for the CKAN functionality. About the role You’ll be our first dedicated front end designer / developer. Ideally, you’ll be excited to work in open data on an open source project which is being widely used. You’re keen for the chance to direct the overall user experience of a whole product, overseeing everything from navigation and information architecture to the UI on individual features and the look of the site. You’ll help us to collect, understand and implement user feedback to help data wranglers and publishers become more productive using CKAN. - Own the look and feel of the CKAN product and the http://thedatahub.org/ instance - Design illustrations and icons to communicate the product better - Create the user interface of new features as well as upgrading old ones (producing mockups and designs then implementing using HTML/CSS) - Improve the information architecture and user flows through the site - Assist in theming of new CKAN instances - Consult on creating personalised themes for clients - Graphic design skills (e.g. 
Photoshop etc) - Development of information architecture and user experience (UX) for webapps - Super exciting product - Flexible working days - London-based, but option to work from anywhere - Competitive remuneration How to apply: Send us your CV, cover letter, examples of previous work to [email protected] If you like, some initial ideas on how you would improve thedatahub.org. About the Organization The Open Knowledge Foundation (OKF) is a multi-award winning community-based, not-for-profit. The Foundation now has projects and partnerships throughout the world and is especially active in Europe. We build tools and communities to create, use and share open knowledge – content and data that everyone can use, share and build on. We believe that by creating an open knowledge commons and developing tools and communities around this we can make a significant contribution to improving governance, research and the economy.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250603761.28/warc/CC-MAIN-20200121103642-20200121132642-00223.warc.gz
CC-MAIN-2020-05
2,616
23
https://www.visualistan.com/2024/02/otterisrollingoutanaichatbotinchannelstoassistusersinimprovedteamwork.html
code
Automated transcription service Otter is adding an AI feature called ‘AI Chat in Channels’ to its group chat functionality, ‘Channels.’ Channels work pretty much like Slack chats, where people can connect with frequent collaborators and share transcripts with each other. With the help of the new AI integration, groups will be able to ask a chatbot questions related to their older meetings. The chatbot will then gather information from all the meetings that group members have participated in and generate answers to the prompts asked. This AI functionality is unique because, unlike other AI chatbots that are commonly found in single-user chats, it allows multiple people to ask it questions, enabling faster teamwork. Moreover, Otter is increasing the range of meetings that the AI chatbot can draw data from. Previously, within a specific transcript, the chatbot could only respond to questions about that single meeting or conversation. Now, the bot can collect data from all of the user's prior meetings and transcripts. Lastly, Otter is also introducing an AI conversation summary feature that is capable of identifying action items during an ongoing meeting. Other generative AI features that Otter has added to its platform in the past include a chatbot that attends meetings for users and a meeting summary generator. All of Otter’s AI features will be available to users as part of its free Basic plan.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00652.warc.gz
CC-MAIN-2024-18
1,455
21
https://ladieslovepaulrudd.com/2017/02/15/episode-8-ant-man-with-christelle/
code
+ We really need Agent Peggy Carter back ASAP — only she can save us now. + New Life Motto: Never Give Up the Biscuits. + Nature is awesome, but it’s even more awesome when you don’t have to go out in it. Come share the love with us! https://www.facebook.com/ladieslovepaulrudd/ Follow Christelle: @brewy_chris Fall in love with her beautiful cakes: https://www.instagram.com/bibiliciousbakes/ Follow Amy: @amypop Rate and Review us on iTunes: https://itunes.apple.com/us/podcast/ladies-love-paul-rudd/id1188475576
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510730.6/warc/CC-MAIN-20230930213821-20231001003821-00701.warc.gz
CC-MAIN-2023-40
520
8
https://www.gransnet.com/forums/news_and_politics/1245428-News-from-the-opposition
code
Anything intelligent about the official opposition or indeed any other political party seems to be underrepresented on here. So I thought I would start a thread giving news of parties' policies etc other than the government’s. I will kick off with a Reuters report on Corbyn's speech. “In a speech to the manufacturers' section, Corbyn will pledge to rebalance the economy if Labour gets into power. Corbyn argues that instead of finance serving industry, politicians have served finance, and we have seen where this ends: the productive economy, our public services and people’s lives being held hostage by too-big-to-fail banks and casino financial institutions.”
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00584.warc.gz
CC-MAIN-2022-21
1,477
29
https://inter-self.com/research/
code
Our group investigates the interrelation of prereflective bodily processes, social interaction and the sense of self. We combine methods from different disciplines to shed light on the enactive approach to self and intersubjectivity. According to the enactive approach, the self is a distributed and processual phenomenon. It can be described as a self-organized autonomous system, which is constituted through interactional and relational processes. These processes show two general tendencies: towards degrees of distinction and emancipation from other agents, and towards degrees of connectivity and participation with them. Our group explores the two dimensions of distinction and participation, as well as the transitions between them, from a philosophical and empirical perspective. InterSelf Empirical Study In 2018 we conducted an empirical mixed-methods study investigating the impact of embodied social interaction on the self. The first results of this study have been submitted for publication to the journal Scientific Reports. Stay tuned for further information and results!
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00165.warc.gz
CC-MAIN-2021-25
1,085
4
https://pagecdn.com/theme/wp-chilly/performance
code
Chilly WordPress Theme CDN Performance Chilly is a responsive, multi-purpose WordPress theme. It’s flexible and suitable for agencies, blogs, businesses, finance, accounting, consulting, corporations, or portfolios. Customization is easy and straightforward, with options provided that allow you to set up your site to perfectly fit your desired online presence. For more details, visit this link https://wordpress.org/themes/spicepress/. We hope you will find the Chilly theme useful. PageCDN allows you to extremely optimize content, delivery and caching based on your needs. Below is the list of all optimizations applied to or available for this repo. You can pull your own content from a website or a GitHub repo for best optimizations and fast delivery. PageCDN compresses resources using brotli (quality-11) at the 'Extreme' compression level. For a general understanding of the size difference, see this comparison of file sizes produced by different CDNs. Resources delivered over different protocols are considered different. Browsers do not reuse a cached copy of resources delivered over HTTP for HTTPS requests, and vice versa. Making everything HTTPS not only increases security, but also improves the cache hit ratio. This is how long CDN files in this repo are cached in the browser. This is how long CDN files in this repo are cached at the edge before checking for freshness. Image optimization, resizing and WebP conversion are enabled through URL-based flags. For more information, please visit the Integration page. URL-based cache busting is turned ON for this repo through version tags.
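Version-tagged cache busting works because a changed file ships under a new URL instead of invalidating the old one, so the CDN can mark every URL as effectively immutable. A small illustrative sketch — the URL layout and helper names are my own, not PageCDN's actual scheme or API:

```python
# Build a version-tagged URL: any content change bumps the version, which
# changes the URL, so long-lived caches never serve a stale file.
def versioned_url(base, path, version):
    return f"{base}/{version}/{path.lstrip('/')}"

# Headers a CDN edge might attach to such a URL: one year + immutable means
# browsers never revalidate, which is safe only because URLs never mutate.
def cache_headers(max_age=31536000):
    return {"Cache-Control": f"public, max-age={max_age}, immutable"}

url = versioned_url("https://pagecdn.io/theme/wp-chilly", "style.css", "1.0.2")
print(url)  # → https://pagecdn.io/theme/wp-chilly/1.0.2/style.css
print(cache_headers()["Cache-Control"])
```

The same idea underlies the browser vs. edge cache lifetimes listed above: the edge can afford long lifetimes because freshness is encoded in the URL itself.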
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00246.warc.gz
CC-MAIN-2022-21
1,634
9
https://downturk.net/3246969-introduction-to-data-warehousing.html
code
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch Genre: eLearning | Language: English + srt | Duration: 27 lectures (1h 48m) | Size: 529 MB An introduction to data warehouses, using SQL Server 2019, SSMS, Visual Studio & SSIS. Covers topics in the 70-463 exam Define and describe common data warehousing concepts Install SQL Server, SSMS and Visual Studio with the SSIS package Create an ETL package in SSIS Integrate data from text files and databases A Windows computer or virtual machine with at least 2GB of RAM and about 10GB of hard drive space. This course is an overview of basic data warehousing concepts, a guide for installing software and a step-by-step tutorial on using the software. This course is divided into three main sections: Data Warehousing Basics We begin with a conversation about data warehousing concepts, including the following: What is a data warehouse Why do we need a data warehouse What is Extract, Transform and Load (ETL) The difference between OLAP and OLTP databases The Star and Snowflake schemas Fact and Dimension tables These concepts can be difficult to understand, so I explain them using real conversation and examples. We have this conversation first so that you are familiar with the ideas as they are discussed in the later sections of the course. Data Warehousing Software Installation If you want to become good at data warehousing, you need to use the software. In this section I start by talking with you about the software and explain how the different pieces work together. Next is a step-by-step walkthrough of installing SQL Server Developer, SQL Server Management Studio (SSMS) and Visual Studio Community with the SQL Server Integration Services (SSIS) package. The versions of the software we use are all free for you to use. SSIS Tutorial: Create a Project and Basic Package with SSIS The last section of the course is a step-by-step tutorial on using the ETL tool SSIS. 
The tutorial is broken down into nine steps: Step 1: Create a new integration services project Step 2: Add and configure a flat file connection manager Step 3: Add and configure an OLE DB connection manager Step 4: Add a data flow task to the package Step 5: Add and configure the flat file source Step 6: Add and configure the lookup transformations Step 7: Add and configure the OLE DB destination Step 8: Make the Lesson 1 package easier to understand Step 9: Test the Lesson 1 package Before we begin the tutorial, I discuss the steps in detail. I explain the source data, how the data will be transformed and how we will get the data to its destination database. People taking a college course in data warehousing or data integration People who need to understand data warehousing, SQL Server or SSIS for their job
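The Extract-Transform-Load flow the course builds in SSIS can be illustrated in miniature with plain Python and SQLite standing in for SSIS and SQL Server. The data, column names and table name below are made up for the sketch:

```python
import csv
import io
import sqlite3

# Extract: read a flat-file source (an in-memory stand-in for the
# tutorial's flat file connection manager).
flat_file = io.StringIO("currency,rate\nEUR,1.08\nGBP,1.27\n")
rows = list(csv.DictReader(flat_file))

# Transform: cast the rate column to a numeric type (a stand-in for the
# data conversion / lookup transformations in the SSIS data flow).
rows = [(r["currency"], float(r["rate"])) for r in rows]

# Load: write into the destination table (SQLite in place of the
# tutorial's OLE DB destination).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE fact_rates (currency TEXT, rate REAL)")
db.executemany("INSERT INTO fact_rates VALUES (?, ?)", rows)
print(db.execute("SELECT COUNT(*) FROM fact_rates").fetchone()[0])  # → 2
```

SSIS packages do the same three stages with connection managers, data flow transformations and destinations instead of code, but the shape of the pipeline is identical.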
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00023.warc.gz
CC-MAIN-2021-49
2,746
35
https://elwexicano.medium.com/2015-year-of-the-user-64537523e9b6
code
Today, DoneDeal’s Android and iOS apps are some of the best-rated apps available in Ireland. Both apps are number 1 in the lifestyle chart, have an app rating of over 4 stars and both are closing in on nearly a million user downloads. However, that wasn’t always the case. If we look back at December 2014 it tells a different story altogether. Throughout 2014, our apps were slated by user reviews in the App Store and Google Play Store. Yes, in 2014 we made a big mistake and spent a long time trying to right our wrongs. The mistake we made was never user testing a major change to our apps, and as well as that, we decided not to take the advice of our developers on this change. The change we made was to move large parts of our apps from native screens to web-based screens; those dreaded WebViews still haunt me to this day. From the minute we released the apps, we received very negative feedback from our users, and I couldn’t even count how many times users said that they should sack whoever's decision it was to update the apps. As a result our Android app rating fell to 4.01 stars and our iOS rating was at 3 stars. Our yearly ratings for 2014 were at an all-time low and things were looking very bleak for our apps. So what did we do to fix the problem?!! Well, we started to take our users very seriously and put them first when we were thinking of new features to develop. First off we spent a lot of resources fixing any bugs that users reported, we responded to every single piece of feedback we received and most importantly we started to perform user testing (a lot of user testing). I can’t say that it was easy but it was definitely worth it. In one year our apps have completely changed and we have turned things around. We removed all WebViews and replaced them with native screens, which provided a far greater user experience. We constantly listened to user feedback and built new features (or adjusted existing ones) based on their feedback. 
We did a lot of user testing and we’ve made user experience the most important aspect of our apps. And the results speak for themselves: our Android app had a yearly rating of 4.4 stars in 2015 compared to 3.3 stars in 2014, and our iOS app had a yearly rating of 4.8 stars in 2015 compared to 2.3 stars in 2014. Although what happened in 2014 was a negative experience, it’s one that will stay with me and taught me some very important lessons: - Always listen to your users so that you can understand their needs (Make sure you identify their needs rather than their wants). - Always user test features (New & old). - Always listen to the advice of your developers and people around you (Some of them will be smarter than you). If you like apps then make sure to download the DoneDeal app which is available in the App Store and Google Play Store.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375439.77/warc/CC-MAIN-20210308112849-20210308142849-00614.warc.gz
CC-MAIN-2021-10
2,831
14
https://paulbuitelaar.net/talk-ai-gpt/
code
Speak.AI, a technology of revolutionary importance in the field of artificial intelligence, has seen a rise in popularity over the past few years. This article examines the evolution of Speak.AI and provides insight into how this technology is changing the way people interact with language and data. In a world increasingly driven by data, tools for language analysis have grown in importance. These tools play an important role as we navigate the vast sea of information: they can extract valuable patterns and insights from text and voice data. Speak.AI offers a promising solution for the challenges and possibilities presented by the vast amount of linguistic data. We will examine the core features of Speak.AI in the sections below and explore its potential impact on different aspects of our lives. For a brief overview of Speak.AI, you can watch this YouTube video from the AI Profit Tools channel. For a more in-depth reading, continue below. What is Speak.AI? Speak.AI provides software for transcription, research, data analysis, and natural language processing. With Speak.AI, you can convert your language data into insights without writing any code. Over 100,000 marketers, researchers, and companies trust Speak.AI to help them reduce manual labor, unlock competitive advantages, strengthen customer relationships, and make better decisions. Speak.AI offers many features, such as high transcription accuracy with high-quality audio, time-saving transcription, and support for multiple languages. The platform’s speech recognition and natural language processing engine automatically analyzes audio and video to find important topics, key phrases, sentiment, and more. 
Speak.AI is a meeting assistant which automatically joins meetings, records them, transcribes and analyses them. It works on popular platforms such as Zoom, Microsoft Teams, Google Meet, and Webex by Cisco. It is possible to customize the name and image of your meeting assistant for a more personal and professional brand when taking calls. Speak.AI for Research Speak.AI lets you upload audio, text, and video data seamlessly for qualitative research. The platform allows for easy bulk and individual uploads of data. You can convert audio to text, use CSV for bulk analysis, or capture recordings. Speech Recognition and Natural Language Processing The platform’s natural language processing and speech recognition engine automatically transcribes, analyses, and identifies important topics, keywords, key phrases, sentiments, and more. With Speak.AI’s insights derived from your language data, you can compare trends over time, analyze data sets against each other, and discover new opportunities. - Transcription Accuracy: Speak.AI provides high transcription accuracy with high-quality audio. - Time Savings: Save time with transcription and analysis. - Multiple Languages: Speak.AI supports multiple languages. - Automated Transcription: You can convert audio files and videos to text using automated transcription. - Bulk Analysis: Import CSV files for bulk analysis. - Embeddable Recorder: Capture recordings using an embedded recorder. - Meeting Assistant: Speak.AI offers a meeting assistant that automatically joins your meetings, records, transcribes, and analyzes them. It works on popular platforms such as Zoom, Microsoft Teams, Google Meet, and Webex by Cisco. - Customization: You can customize your meeting assistant’s name and image for a new level of personal and professional branding when capturing calls. Business and Marketing Speak.AI is a powerful tool for business and marketing professionals. 
Speak.AI's software for transcription, data analysis, research, and natural language processing (NLP) can help companies reduce manual work, gain competitive advantage, improve customer relationships, and make better decisions. The platform offers high transcription accuracy and high-quality audio to save time in transcription and analysis. Speak.AI is available in multiple languages, which makes it accessible to businesses from all over the world. Businesses can use Speak.AI’s automated transcription feature for audio and video data conversion. This allows for easy qualitative research, academic and marketing research, competitor analysis, digital marketing, and other important functions. The platform allows uploading audio, video and text data in multiple formats. Users can use CSV files for bulk data analysis or capture recordings. Speak.AI’s meeting assistant automatically joins meetings on popular platforms such as Zoom, Microsoft Teams, Google Meet, and Webex by Cisco. The meeting assistant records, transcribes and analyses meetings to give valuable insights. Users can customize both the name and the image of the meeting assistant for their own personal branding and professional purposes. Education and Training Speak.AI has many applications in the education and training field. The platform’s transcription features allow educators to convert audio and video lectures into text for reference or accessibility. This can be particularly beneficial for students with hearing impairments or those who prefer reading over listening. Speak.AI’s natural language processing (NLP) engine can analyze educational content in order to identify keywords, topics and key phrases. This allows educators to track trends and gain insight into student engagement. Automated transcription is also available to create closed captions on educational videos and online courses. 
Closed captions make content more accessible to students who have hearing impairments or prefer reading while watching.

Telemedicine and Healthcare

Speak.AI is also useful in the healthcare sector. The platform's speech recognition and natural language processing (NLP) capabilities enable healthcare professionals and transcriptionists to transcribe medical dictation accurately and efficiently. Healthcare providers can save valuable time by eliminating the need for manual transcription.

Healthcare organizations can also leverage Speak.AI's data analysis features to gain insights from patient feedback surveys or analyze medical research data. The platform's NLP can identify keywords or topics in large datasets, helping healthcare professionals make data-driven decisions.

Speak.AI's meeting assistant can be very useful in the field of telemedicine. It automatically joins telemedicine appointments on popular platforms like Zoom or Microsoft Teams and records them for future reference. This feature provides accurate documentation of patient consultations while allowing healthcare professionals to focus on quality care.

Speak AI Pricing

There are three Speak.AI pricing plans, each tailored to a specific customer base. Speak.AI is a powerful tool, but it's important to choose the right plan. The pricing plans for Speak.AI include:

Pay-As-You-Go: The easiest way to get started with Speak.AI. You can use it for free without any commitments. With unlimited storage, you get basic functionality and pay-as-you-go transcription.

Starter: An all-in-one package to meet your language analysis requirements. This plan gives you 15 hours per week, 500,000 Magic Prompt characters, unlimited storage, and one premium add-on.

Custom: This plan lets you mix and match different features to create the best fit for your needs. Pricing is billed as you need it.
You get unlimited hours, unlimited users, unlimited storage, and the ability to pick only the features you need.

| Pay-As-You-Go | Starter | Custom |
| --- | --- | --- |
| $0 per month | $57 per month (billed annually) | Custom pricing (billed as needed) |

What Is Included in the Plans

Take Video, Audio, and Text from Anywhere

Speak.AI's plans allow you to upload and use files in all popular formats. The platform even lets you record media, whether video or audio, and add text notes directly to them. You can create a landing page that allows you to record audio or video, or embed a recorder in a section of your website.

Advanced Analysis and Data Visualization

Speak.AI is known for its research capabilities, so it's only fair that all payment tiers have access to those tools. Speak.AI's research features include automatic named-entity recognition, the ability to understand topics in video and audio files, and data visualization and filters.

Professional Transcription and Editor

Plans include tools for editing speakers and transcripts directly in the platform. Speak.AI also lets you manage and save all transcription files.

Tools to Customize and Share Media

Almost any platform needs to let you customize the way you manage your media, and Speak.AI offers just that, whether it's an easy way to organize or export your files, optimizing your SEO, or personalizing your presentation.

Speak.AI offers an API and webhooks to those who want to integrate its features into their own platform, and a dedicated dev team is available to answer questions about integrating with the Speak.AI API. Of course, paying for Speak.AI also allows you to integrate it with some of the internet's most used tools. The Zapier integration provides templates for automating common tasks. Aside from Zapier, Speak.AI also integrates with Zoom and Vimeo to naturally sync recordings as well as media libraries with these platforms.
What Customers Say

Most customers who have used Speak.AI have been fairly content with it, and some users loved it. One reviewer said that Speak's platform reduced their workload to a fraction of what it was under their old system. They even swear by the platform's transcription quality, which has been a huge step in improving their business.

Speak has also proven extremely helpful for users when brainstorming new ideas. Instead of taking notes or just plain recording, Speak's ability to detect and recognize sentiments in recordings makes sorting out a jumble of ideas even faster than before. It also means saying goodbye to a train of unreadable thoughts.

In the realm of language analysis tools, Speak.AI shines as a versatile and transformative platform. It empowers users across diverse sectors, from businesses seeking efficiency gains to educators enhancing accessibility, and it offers healthcare professionals data-driven insights as well as streamlined transcription. Speak.AI's advanced features, professional transcription tools, and customization options enhance the language analysis experience. Positive customer feedback highlights its effectiveness at reducing workloads while improving transcription quality. As the platform evolves, it promises to reshape how we approach language and data analysis, offering valuable insights in an increasingly data-rich world.

Speak.AI FAQ

Who Can Use Speak.AI?

Anyone can use Speak.AI, but it is an especially great tool for all types of researchers, academic or qualitative, whose work Speak.AI's products fit perfectly. Educational institutions, digital marketers, and go-to-market teams can also get the most value out of Speak.AI.

What Languages Does Speak.AI Support?

Speak.AI supports more than 70 languages for transcription. These languages include Irish and Italian, as well as Malay, Catalan, and Dutch.

Which Payment Plan Is Recommended?
The recommended payment plan really depends on you and your needs. Speak.AI provides flexible payment plans so customers can pick the one that is exactly what they are looking for. However, for those who want to start using Speak.AI right away, the Starter plan is always a good place to begin.

Can Plans Be Downgraded or Upgraded?

Customers can change their payment plan at any time. They can upgrade or downgrade anytime, and the new subscription will be applied in the next billing cycle.

Are Plans Cancellable?

Users can cancel their subscription at any time, but there will be no retroactive refunds. Speak allows users who cancel to keep access to Speak, including all their files, until the end of their subscription period.
Here are some of the new features in version 7:
- Hierarchical Basis - New hierarchical FEM basis functions improve matrix conditioning.
- Optimization - Built-in parameter optimizer.
- CAD Mesh Import - Import a bounding mesh created in a CAD program (OBJ format).
- General boundary shapes - Create boundary paths using implicit algebraic equations.
- Interactive Plot Zoom - Zoom in on plots without the need to request a special plot.
- Material Sets - User-defined groups of material properties simplify script writing.
- Boundary Condition Sets - User-defined groups of boundary conditions simplify script writing.
- Multidirectional Periodicity - Support for periodic boundaries in more than one direction at corners.
- Extended Preferences Panel - All major settings located in a convenient preferences panel.
- Automatic Mesh Output - Easier post-processing with automatic mesh transfer output.
- Automatic Mesh Input - Faster restarts by importing the previous mesh when possible.
- Simplified Stop & Restart - Simplified commands for restarting from mesh transfer files.
- New Dongle Vendor - Wibu-Systems dongles provide more flexibility and cost effectiveness.
Sample Version 7 Screen Shot of CAD Import:
RNAfamily is a simple software tool that displays all secondary structures of a family of RNA molecules. It uses the linear backbone representation. RNAfamily provides the usual graphical features: zooming, scrolling, etc. Colors of the stems correspond to matching stems. It is also possible to display the nucleotides composing a stem, or the whole sequence.

Carnac is a software tool for analysing the hypothetical secondary structure of a family of homologous RNAs. It aims at predicting whether the sequences actually share a common secondary structure. When this structure exists, Carnac is able to correctly recover a large proportion of the folded stems. The input is a set of single-stranded RNA sequences that need not be aligned. The folding strategy relies on a thermodynamic model with energy minimization. It combines information coming from locally conserved elements of the primary structure with mutual information between sequences, including covariations.

assp (Assess Secondary Structure Prediction) takes a multiple protein sequence alignment and estimates the range in accuracy that one can expect for a "perfect" secondary structure prediction made using the alignment.

JPred is a protein secondary structure prediction server. JPred incorporates the Jnet algorithm in order to make more accurate predictions. In addition to protein secondary structure, JPred also makes predictions of solvent accessibility and coiled-coil regions (Lupas method).

NetSurfP predicts the surface accessibility and secondary structure of amino acids in an amino acid sequence. The method also simultaneously predicts the reliability of each prediction, in the form of a Z-score. The Z-score is related to the surface prediction, not the secondary structure.
Anyone know how to boost balance as well as transfer budget? A club might have a 100 million transfer budget but only 15 mil balance. Any help would be much appreciated.

You can edit such things in the editor.

I know, but you can't edit balance, only transfer budget.

You can edit the wage and the transfer budget, look carefully mate. I can see it.

Ok, maybe I didn't explain it clearly: when I edit the balance it doesn't change. I can give a team a 100 million transfer budget and 300 million balance, but the balance isn't adjusted. I get the 100 mil, but the balance is like 10 million.

I haven't tried it myself yet, but I've heard of a mate having the same problem. Here's what he did: he entered Man City's finances (in the editor), then copied the balance/current budget and remaining budget, and pasted them into the respective fields at the club he wanted to edit, which worked splendidly. Maybe you could try something similar?

I put the balance as something like 948397593038 and then the transfer budget to 1277493049 (1.2 billion), which is more than enough lol. Although the most I have managed to get in terms of wage budget is 1 million pounds, and I don't know if you can do any more.

Hi, sorry, I'm new on this and was wondering if anyone can help me? I have the FMRTE editor. I've loaded it and everything, but when I go to change the transfer budget it doesn't seem to work. Can anyone help me with this?
I really need some help with my leak points. I do not know what to do. My ship is nearly ready, but I have leak points on the top and where I have connected my deck to the hull. This is a school project which I have to hand in on Monday; I really hope someone can help me. I have attached an image where you can see my ship and where the leak points are displayed. Thank you in advance :)

Hello Mr. Snupp, the resolution in your jpeg image won't help… Anyway, first: check all points supposed to have a Y=0 value, a boring job but the first thing to do. Then, I noticed green edges, coincident edges of different layers: be sure that no open edges of a layer with hydrostatic properties persist under the waterline. Last, if you have a faith, well… pray. Anyway, check and let us know; we want to be credited in your degree!

P.S.: I've just read your post carefully… If those leaking points are on deck and not related to hydrostatics, check carefully all the coincident edges on the superstructure, and leave an offset, maybe, but on this matter Mr. Marven and his fellas are more credited than me. Best wishes, Jurgen.
Solving the Service Granularity Challenge

For us analysts that have been covering the markets around XML, Web Services, and Service Orientation, it certainly is heartening to see that our audience of end users, vendors, and consulting firms are now asking some of the more complex and deeper questions around how to do architecture right. It seems that we've finally crossed the chasm, and architects in particular now have a pretty good idea what SOA is and why they need it. Rather than trying to redefine what SOA is and what it means to the business, people are focusing on the more important issue of how to do SOA right. In that vein, some of our most recent conversations have centered on how to go about building the "right" Services. A key part of answering this question is making sure that we build Services at the right level of granularity.

Granularity is a relative measure of how broad a required piece of functionality must be in order to address the need at hand. Fine-grained Services address small units of functionality or exchange small amounts of data. Consequently, to build complex business processes, companies would have to orchestrate large numbers of such Services to effectively automate the process — a difficult, often Herculean task. Coarse-grained Services, however, encapsulate larger chunks of capability within a single abstracted interface, reducing the number of Service requests necessary to accomplish a task, but on the downside, they might return excessive quantities of data, or it might be difficult to change them to meet new requirements. As a result, an architect must craft the right balance of fine-grained and coarse-grained Services to meet the ongoing needs of the business. This balance, of course, is part of the art of Service-Oriented Architecture.

What Constitutes a Well-Defined Service?
The first topic to consider when trying to understand how to craft Services of the right granularity is to understand how well-defined a Service interface must be. Some might think that all you need are the right standards and, voila, you'll have well-defined Service interfaces. In essence, one can say that a well-defined interface is something that a computer can understand. However, it doesn't matter which specific standards you use to define a Service interface, as long as it provides enough information for a loosely-coupled exchange. In an SOA, we do prefer standards-based exchanges over non-standard ones, and it seems that a good part of the WS-* stack (in particular, WSDL, XML Schema, WS-Security, WS-Policy, and possibly BPEL or WS-CDL) is well on its way to becoming widely accepted. While it is good to have as much contract metadata as possible in machine-processable standards, simply having the specifications available does not give the user any clues on how to define the Service interfaces and address the granularity challenge at hand. Sure, it is important for a computer to be able to understand a given Service, but that doesn't make that Service valuable. Indeed, other forms of architecture have also depended on standards-based interfaces and have not produced the sort of loosely-coupled Services we so desire.

At this point, we must take off our developer hats and put our architect ones on. In our recent ZapFlash entitled What Belongs in a Service Contract, we discussed the fact that a Service is not a Service unless it is defined using a metadata-encoded contract. The contract must contain two key elements: functional and non-functional information that specify the expectations of both the Service consumer and provider. In that case, at the very least, a Service contract must provide unambiguous information about what the Service does. In other words, a Service should clearly say what it means and mean what it says.
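Such a contract, with its functional and non-functional halves, can be pictured as a simple metadata structure. The sketch below is purely illustrative: a real Service contract would be expressed in standards such as WSDL, XML Schema, and WS-Policy rather than in a Python dictionary, and every field name here is an invented assumption for this example.

```python
# Hedged sketch of the two kinds of contract metadata described above.
# The layout and field names are invented; real contracts live in
# WSDL / XML Schema / WS-Policy documents.
service_contract = {
    "functional": {
        # an unambiguous statement of WHAT the Service does...
        "operation": "CheckCustomerCredit",
        "input": {"customer_id": "string"},
        "output": {"approved": "boolean"},
        # ...without exposing HOW it is implemented (loose coupling)
        "description": "Reports whether a customer's open-order exposure "
                       "is within its credit limit.",
    },
    "non_functional": {
        # expectations both consumer and provider agree to
        "availability": "99.9%",
        "max_response_ms": 500,
        "security": "message-level encryption",
    },
}
```

The point of the split is that a consumer can understand what will happen when it supplies the required input, while the provider remains free to change the implementation behind the interface.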
Users shouldn’t be left scratching their heads as to what will happen when they provide the required input. So, a better argument for the well-defined Service question is that a Service is well-defined not only if a computer can understand it, but also if it is unambiguous as to what the Service will provide. Basically, a human can also understand the Service contract without having to consult additional resources or information. Now, this requirement for unambiguousness is somewhat opposed to the desire for loose-coupling. By somewhat, we mean that an architect can take this desire for clarity to the extreme and expose the inner workings of the Service, or perhaps provide the exact details of how a business process is composed. However, for loose coupling to work, we’d like to know as little as possible about how the Service works or about the process composition in order to achieve the task. Even then, we may have what we might call operational loose coupling in conjunction with semantic tight coupling. In other words, we need knowledge both of what the Service will do as well as how it will communicate to make it useful. Thus, one of the first real challenges is to build Service contract metadata that specifies enough, but not too much. Focusing on Reuse We still have not answered the question as to how to determine the level of Service granularity, since after all, we can build fine-grained, well-defined Services and coarse-grained, well-defined Services. It’s vital to understand whether or not a particular Service is single-use or multiple-use. You could say that the best Services are the most reusable ones, which is true, up to a point. After all, having several redundant, fine-grained Services leads to tremendous overhead and inefficiency. Clearly having a small collection of coarser-grained Services that are usable in multiple scenarios is a better option. 
However, developers could theoretically take this principle to an extreme and try to build a single Service called, say, “DoSomething” that can meet every single need. DoSomething would have a simple interface that would support some arbitrary Service function requirement, and it would produce a corresponding Service result. The problem with this DoSomething Service is quite obvious to anybody who has ever tried to implement such a thing. In essence, DoSomething is no longer a usable Service at all, since we’ve basically just passed the buck. Instead of the Service itself determining its own semantics, we’ve just shuffled that determination to some lower-level piece of code. In essence, we’ve treated SOA as just some sort of routing protocol or messaging system with no inherent functional capabilities. Such Services, however, are clearly unable to satisfy the requirements of SOA. So, if general-purpose Services are a red herring, what about single-purpose Services? The answer to this question is a bit of a draw. Some single-purpose Services, even though they might be very fine-grained and accomplish only one particular task, might be exceptionally reusable. That is to say, architects might be able to compose such Services into many different process scenarios. In contrast, domain-specific Services might only be applicable in certain scenarios, but the fact that they are specific to a particular problem or set of problems is what makes them useful to the business. And that is where we get our first clue for trying to solve the Service granularity issue: focus not on an individual Service, but rather on overall business processes and how Services might meet the needs of multiple processes in the business. The more a company can leverage a Service for multiple processes, the more useful it is. Correspondingly, if it’s impossible to leverage a Service within several different processes, then you should wonder whether or not it is at the proper level of granularity. 
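Returning to the "DoSomething" example above, a hypothetical sketch makes the problem concrete: the interface itself says nothing about what the Service does, pushing all semantics into an ad hoc command vocabulary that every consumer must somehow know. The function and action names are invented for this illustration.

```python
# Hypothetical illustration of the "DoSomething" anti-pattern: a
# semantics-free interface that reduces SOA to a routing layer.
def do_something(request):
    action = request.get("action")
    if action == "create_order":
        return {"status": "created", "order_id": 42}
    if action == "cancel_order":
        return {"status": "cancelled"}
    # ...and so on for every capability the "Service" secretly supports
    return {"status": "error", "reason": "unknown action"}
```

Nothing in the contract of `do_something` tells a consumer which actions exist or what they mean; the determination of semantics has merely been shuffled to a lower-level piece of code, which is why such general-purpose Services fail the requirements of SOA.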
The Role of Process Decomposition

A Service is not really a technology concept. It is merely an abstract representation of value the business wants to extract from its technology. As such, we should focus on defining Services from the business point of view — that is to say, the business process point of view, since business processes fundamentally define the business. One approach to specifying the right Services is to start with some business process and decompose it into increasingly smaller subprocesses until you can go no further. The resulting subprocesses then become candidate Services for implementation. The more processes that a company decomposes in this way, the more they can see commonality across their subprocesses and thus have a chance at building an appropriate set of reusable Services.

However, this top-down process decomposition approach has a critical flaw in that we might end up defining Services that are impossible or impractical to implement. Therefore, it's important to simultaneously go through an exercise of taking existing business logic, albeit ingrained in code rather than metadata, and exposing it as Services, which themselves become candidate Services that specify not the overall business process, but rather the mechanism for implementing the process. This exercise should yield two categories of Services: business functionality Services that are reusable across multiple processes, and fine-grained utility Services that can provide value to various Services across the organization.

The ZapThink Take

Companies going through this Service definition exercise should make sure not to fall into the common, yet fatal trap of thinking that once they've defined their Services, they are done. SOA, by its nature, demands constant evolution. Even if the Services a company develops are perfect for the business at one time, the business will continue to undergo constant change, requiring new Services as well as new compositions of Services.
What might have been the right level of granularity for a Service on day one might be inappropriate just a few weeks later. As a result, it makes no sense to try to cast the level of granularity for a Service in concrete. Companies must approach Service design iteratively, building well-defined Service interfaces at a range of granularities and then establishing and using those Services that are appropriate at the time. Service-oriented architects will spend a good amount of their time tweaking Service interfaces such that they realize the optimal combination of fine vs. coarse-grained and single-purpose vs. multiple-use given the amount of knowledge they have about the business at that point in time.

As a result, developers and architects should resist the urge to "get the right Services." Indeed, getting it right doesn't even matter, since what's right today will surely be wrong tomorrow. After all, building Services is not the goal of SOA — it's building an architecture that allows businesses to continuously evolve their set of useful Services that the business wants and can leverage despite ongoing change. Building such useful Services is all about striking the balance between general-purpose and domain-specific Services as well as between overly-defined and overly-ambiguous Service interfaces. There's no cut and dried answer to what those Services should be for any particular company, but there certainly are good approaches to making those Services a reality. Spurring and encouraging this ongoing debate in the industry will only serve to make SOA more useful to the business, and thus worth continuing to write and speak about.

Download the Full Solving the Service Granularity Challenge Report Here
The simplest possible workflow in ServiceNow is a straight line from Begin to End:

However, this workflow doesn't do very much for us. Instead, let's create a new Workflow for our virtual war rooms, to drive the generation of some war room tasks. To get started, open the Workflow Editor from Workflow | Workflow Editor in the Application Navigator. The Workflow Editor should open in a new tab or window. When it does, click on the Plus icon at the top-right, under the Workflows tab, so we can create a new workflow for our Virtual War Room tickets:

On the New Workflow form, set the name to War Room, the Table to Virtual WarRoom [u_virtual_war_room] ...
Posted by greentea on March 08, 2003 at 11:44:44:

In Reply to: Humidity question? posted by MarthaStewart on March 08, 2003 at 01:17:32:

You should get a spray bottle and mist them a couple times a day (if you don't already), and there are humidity thermometer things you can get to tell if it's humid enough. My gecko doesn't like his hide box either :/ I put some dead leaves and moss on the bottom of his cage and he sleeps under those.
One very important aspect of managing one's applications is monitoring and alerting. The Azure product group is acutely aware of this need, of course, and has built an advanced monitoring and alerting system right inside the portal, under the "Alerts" area. As part of this, you can configure various rules to keep track of your resources. These rules are keyed to various elements ("conditions"), which you would choose from based on your understanding of the app and its key function parameters. There are about 60 conditions available, like certain HTTP errors, or CPU time.

For example, one of the fundamental ways you could keep an eye on your app would be to set an alert on HTTP server errors, and run it for a while without "major" alerting (as in, don't email the entire world about every error just yet) to establish your baseline, as any app will have a certain number of errors occasionally. Let's say you run this for 2 weeks and see an average of 3 errors per day... You would then set the alert threshold to something higher, thus avoiding waking up everyone at 2am just because one user clicked the wrong button.

After configuring the conditions and thresholds that are appropriate for your application, you would decide what to do with the alerts. Azure can send an alert to an email address or to SMS, send a push notification to the Azure app on your phone, or make a voice phone call. You can add as many targets as you wish, though most people create some kind of corporate alias or group that people can join or be added to in order to get the notifications. You can see more info and a helpful video about configuring ServiceNow to interact with our alerting on the Azure blog.

However, really keeping track of your application is much more complicated, because the very notion of "up" vs "down" is different for every app.
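Before moving on, the baseline-then-threshold idea above can be sketched in a few lines. This is an illustrative calculation only, not an Azure feature; the sample data, the two-week window, and the choice of "mean plus three standard deviations" are all assumptions made for the example.

```python
# Illustrative only: establish a baseline from observed error counts,
# then alert on values well above the norm. Data and the 3-sigma rule
# are assumptions for this sketch, not an Azure policy.
from statistics import mean, stdev

daily_http_errors = [3, 2, 4, 3, 1, 5, 3, 2, 4, 3, 3, 2, 4, 3]  # 2 weeks

baseline = mean(daily_http_errors)                    # about 3 errors/day
threshold = baseline + 3 * stdev(daily_http_errors)   # well above normal

def should_alert(errors_today):
    # a normal day's handful of errors stays quiet; a spike pages someone
    return errors_today > threshold
```

With this data, the usual 2-4 errors per day stay below the threshold, while a jump to 10 would trigger the alert, which is the behavior the baseline period was meant to calibrate.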
For example, if the application displays a form for the user to fill out, then just testing whether the form loads correctly doesn't really tell you much, and a truer test would be to see what happens when the form is submitted. If the application uses some kind of authentication, then testing the authentication process is an important goal, but not always possible, because it would typically require creating some kind of test account, and that could create a security risk. One way to clear some of these obstacles is to create specific test pages which perform "backend" operations, such as running a database query and displaying the result. Creating such a page and checking whether it loads successfully and/or delivers the expected content is a good way to test the app.

Another aspect of testing is performance. An application can be "up", but the time it takes to process a transaction can suddenly go from 8 seconds to 50 seconds. That kind of change is way below normal time-outs, but certainly above the patience threshold of many human beings, so tracking it is an important way to know things might be going awry.

But things can get a lot more complicated, because as I noted, "up" and "down" can mean many things. For example, what if your application normally has about 100 transactions per minute, but suddenly that number jumps to 1600? That's not "down", but such growth could mean that the code is going into some kind of loop due to a bug or design issue, and that could be both a bad user experience and a cause of undue strain on your resources, and could even cause a spike in costs. It could also mean that some malicious party is doing some kind of footprinting on your app to find vulnerabilities, or performing a denial-of-service attack against the app. All of these are things you probably want to be aware of even if the app feels perfectly normal to all your users.

Another thing to consider is that for users, there could be nuanced notions of what's "down".
For example, your form could be loading, but it could be missing some image or CSS files, causing the appearance to suffer. That doesn't mean the app is down, but it can look very ugly, and if your users are customers, it could make the company look bad.

Yet another thing to consider is alert levels. If your app is dead, you certainly want all hands on deck, but if its performance is down by 20%, you might want a more limited circulation of just system admins or a developer or two. You might want that specific alert level to be off during the night, and set various thresholds (for example, on a 20% drop, just send an email to be read during the next business day, but a 40% drop warrants a phone call). The more complex the app and development process, the more elaborate your alerting decision tree and flowchart will be.

Another aspect of this is the alert interval. Most monitoring options run at very short intervals, like once every 5 minutes or even less, but people don't typically respond that fast, and code fixes can take time to develop and deploy. You certainly don't want your CEO to receive a phone call every 60 seconds for 5 hours while your people are trying to fix the issue, right? Similarly, if the alerting system generates a high volume of alerts, many people tend to set email filters so they don't wake up in the morning to 540 new emails. Those kinds of filters could lead to the issue not being seen, making the alerting too loud to be useful. A better design would be to have alerting trigger a certain number of alerts, but then quiet them down before they become unmanageable.

In closing, alerting is an engineering effort that in many cases can be almost as complex as designing the application itself, so a good idea for any organization is to start planning it from day one, alongside the application's design and coding.
Integrating this into the app early is more likely to lead to a reliable and stable monitoring, and thus a more reliable and stable application.
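The latency-threshold and alert-throttling ideas above can be sketched in a few lines. This is a minimal illustration, not a production monitor; the thresholds, the rate limit, and printing instead of emailing are all assumptions made for the example:

```python
import time
from urllib.request import urlopen

# Illustrative thresholds (assumptions, not taken from the article)
WARN_LATENCY_S = 8.0       # slow enough to warrant an email
CRIT_LATENCY_S = 40.0      # slow enough to warrant a phone call
MAX_ALERTS_PER_HOUR = 5    # quiet down before inboxes flood

_alert_times = []

def send_alert(level, message):
    """Rate-limited alert dispatch: after a burst, go quiet."""
    now = time.time()
    # Keep only alerts sent within the last hour
    _alert_times[:] = [t for t in _alert_times if now - t < 3600]
    if len(_alert_times) >= MAX_ALERTS_PER_HOUR:
        return False  # suppressed: alerting too loud is alerting ignored
    _alert_times.append(now)
    print(f"[{level}] {message}")  # stand-in for real email/paging hooks
    return True

def check(url, expected_text):
    """Load a dedicated test page and classify the result."""
    start = time.time()
    try:
        body = urlopen(url, timeout=60).read().decode()
    except Exception as exc:
        return send_alert("CRITICAL", f"{url} is down: {exc}")
    elapsed = time.time() - start
    if expected_text not in body:
        return send_alert("CRITICAL", f"{url} loaded but content is wrong")
    if elapsed > CRIT_LATENCY_S:
        return send_alert("CRITICAL", f"{url} took {elapsed:.1f}s")
    if elapsed > WARN_LATENCY_S:
        return send_alert("WARNING", f"{url} took {elapsed:.1f}s")
    return True
```

A real deployment would persist the alert history and integrate with actual email or paging services, but the shape of the logic (classify the failure, then throttle the noise) stays the same.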
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710473.38/warc/CC-MAIN-20221128034307-20221128064307-00320.warc.gz
CC-MAIN-2022-49
5,935
8
https://lists.debian.org/debian-kde/2002/11/msg00216.html
code
Recent Reformat -- Was It Necessary? I'm posting this to the list because a problematic state of my machine occurred after an update to KDE 3.1b2. The file libstdc++1.2.something.so disappeared, which caused attempts to use aptitude or apt-get to fail. As I'm inexperienced with solely using dpkg for system maintenance, I performed a complete reinstall of Debian. I'm still in the process of restoring all of my applications. I know that a complete reinstall is reminiscent of M$ Window$ and I wonder if I could have fixed my problems. (Yes, I did neglect to back up my /etc/fstab, /etc/lilo.conf, /etc/apt/sources.list and execute a "dpkg -l > old-application-list", but that's another matter... Live and learn.) Can anyone chime in with a way I might have avoided the catastrophic solution? Comments are most appreciated,
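For reference, the backup the poster wishes they had made can be scripted. A sketch, assuming a Python interpreter is at hand; the file paths are the ones named above, and `dpkg` is only invoked if it is actually present:

```python
import shutil
import subprocess
from pathlib import Path

def backup_before_reinstall(dest, files=("/etc/fstab", "/etc/lilo.conf",
                                         "/etc/apt/sources.list")):
    """Copy key config files and dump the package list into dest."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        src = Path(f)
        if src.is_file():
            shutil.copy2(src, dest / src.name)
    # The "dpkg -l > old-application-list" step, guarded so the
    # script also runs harmlessly on non-Debian systems
    if shutil.which("dpkg"):
        listing = subprocess.run(["dpkg", "-l"], capture_output=True,
                                 text=True).stdout
        (dest / "old-application-list").write_text(listing)
    return dest
```

With that list saved, a reinstall can be followed by reinstalling the same package set instead of reconstructing it from memory.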
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00147.warc.gz
CC-MAIN-2020-16
818
14
http://www.dbforums.com/showthread.php?1606048-Change-logins-for-multiple-database-user-groups
code
Unanswered: Change logins for multiple database user groups I have a SQL 2000 box with a number of NT4 domain group logins. These groups are aliased to users in a particular database. I am now trying to map these users to new groups in an Active Directory domain. Is there an easy way to map the users to the new groups without deleting and recreating them? I know this is possible with users but I'm not sure about groups.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120878.96/warc/CC-MAIN-20170423031200-00527-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
411
2
http://8iapps.com/app/1031771916/spy-mission
code
Now, you can become a spy! Go on real-world secret 'missions' in your neighborhood. Find secret messages, identify other 'secret agents,' send coded messages, and more. Spy Mission contains thousands of real-world make-believe spy missions to encourage neighborhood exploration and interaction. Age-appropriate missions are broken down into step-by-step tasks. The missions include activities such as leaving a mark on a signpost; ordering a meal at a restaurant and relaying a coded message; and observing passersby. All missions are designed with safety, privacy, and adventure in mind. Spy Mission does not collect or share any personal information. Spy Mission encourages respect, responsibility, and resourcefulness. Start your first spy mission today!
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247487595.4/warc/CC-MAIN-20190218155520-20190218181520-00108.warc.gz
CC-MAIN-2019-09
757
1
https://kitchenthai.com/bangkok-nightwalk-patpong/
code
Bangkok nightwalk – Patpong [Feb 2020]
NB: This video was filmed in Feb 2020. Bangkok 112 videos are for entertainment purposes and are not current affairs.
Important: If you are thinking of booking any hotel through agoda, please book direct through the links on www.bangkok112.com at no extra cost to yourself. A small commission will be generated that will go straight back into improving the future videos on this channel, so it's win/win. Thank you for your support.
* Gimbal used for this video: https://amzn.to/302T6Cb
* Camera used for this video: http://amzn.to/2oT5fmJ
* Best Thai dating site: https://tinyurl.com/y9bwbt8b
* Best hotels near Soi Cowboy: https://tinyurl.com/yapr9mhp
* Best hotels near Nana Plaza: https://tinyurl.com/y7j7uqb3
* Best hotels near Sukhumvit Soi 11: https://tinyurl.com/ydbhe6xa
* Latest post on the Bkk112 website: https://tinyurl.com/y82cch8k
* NEW for 2019 Bitcoin address: 3P52qQ3LpvtWV2rftJCdFGiyidSD7fELAx to support this channel.
In Feb 2020 I did a 'super smooth' gimbal walk along Patpong Soi 2 around dusk. Patpong is the oldest and most famous of Bangkok's 3 main bar areas, but also the least busy. This is due to many factors, including a big market strewn through Patpong street 1, and scamming bars that promise free ping pong shows but extort customers (read the tripadvisor reports for the latest disgruntled customers). Parts of this video include:
0:10 Heading along Silom road from Patpong Soi 1 to Soi 2
1:36 Patpong Soi 2
5:40 Foodland supermarket, a decent expat supermarket with an excellent 'Took Lae Dee' inexpensive Thai restaurant on site
6:17 Popular French restaurant
6:36 Oh dear!
8:55 Police booth
9:21 Hidden alley walk
10:30 Another popular French restaurant
11:25 Looking for a mototaxi
12:31 Mototaxi ride to Ratchadamri BTS station
Thank you for watching and subscribe for more videos.
Music: Scott Buckley – Neon; Termite Infested White Picket Fence – Tomove
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363515.28/warc/CC-MAIN-20211208144647-20211208174647-00554.warc.gz
CC-MAIN-2021-49
1,934
27
https://github.com/ratibus
code
30 contributions in the last year. Created a pull request in geocoder-php/Geocoder that received 1 comment: key and app-id were inverted in the code example vs the constructor signature in the implementation. A quote was missing in another line.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987822458.91/warc/CC-MAIN-20191022155241-20191022182741-00019.warc.gz
CC-MAIN-2019-43
415
5
https://www.destructoid.com/--105895.phtml
code
Here we are, as some might see it, in the middle of this console generation. What does that mean? Well, a few things, and some might not be that great. With every new generation we get prettier games, and also some games start getting really stale. For instance, last gen the GCN, PS2, and Xbox were dominated by platformers and console FPSs. Now platformers are dying and FPSs are the dominant genre this gen. That's how I see it.

The rise of the Wii and DS has brought forth the dawning of the idiot games. Fun they may be, but if they start overwhelming my library I am going to shoot someone. One can only take so many party games. This generation also gave rise to music games. What FPSs were to the PS2 and Xbox days, music games are to this gen. This is what scares me; don't get me wrong, I own all the songs currently available for Rock Band 2. If future consoles become centered on making casual and music games, like the current Xbox controller has a trigger for FPSs, then they might dominate the next gen. You might not think anything of it, but if this happens, our prettier games and other lovable genres might be pushed to the backseat over waggle and peripherals.

It makes a person uneasy inside knowing that a pattern has emerged and might continue. I just hope Microsoft, Nintendo, and Sony do not use this data to make their next consoles. There are already rumors of the PlayStation 4 using the Cell processor and adding "more features". Seriously, how would you feel about that? I just hope Nintendo did not screw the next gen by showing Microsoft and Sony that you can make good money off of the waggle. What I do hope to see is the return of the platformer, maybe some more 3rd-person action, and, ugh, nay I say, music games (they have gotten to me). What I do not want is more waggle, and easily fun games that last 5 minutes and use less power to look pretty than the damn Gameboy. That's my fear.

If the next gen systems come bundled with a standard guitar, or super waggle control, then it has come true and I have lost faith in the gaming industry, because that would show they care about the money and only the money. I think console FPSs are going to die this gen; they are getting really similar and stale, imo. What I do want is good games, new games, and more Pikminesque titles. Fear the future, fear the waggletar.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00314.warc.gz
CC-MAIN-2020-40
2,346
6
https://physicsworld.com/a/number-theory/
code
A physicist in the US has proposed a new way of quantifying the scientific output of individual scientists. Jorge Hirsch of the University of California at San Diego says that the "h-index" - which is derived from the number of times that papers by the scientist are cited - gives an estimate of the "importance, significance and broad impact of a scientist's cumulative contributions." According to Hirsch the h-index "should provide a useful yardstick to compare different individuals" when recruiting new staff, deciding promotions and awarding grants (physics/0508025). While the number of papers published by a scientist provides a measure of their productivity, it says nothing about the quality of their work. The number of citations received by a scientist is a better indicator of quality, but co-authoring a handful of articles that are cited widely could "inflate" the reputation of a scientist. Hirsch says that his new approach overcomes these problems. A scientist with an h-index of 10, say, will have published 10 papers that have received at least 10 citations each. The best researchers should therefore have the highest h-indexes. "A high h is a very accurate indicator of scientific achievement," says Hirsch. "I have looked at the h-index of many physicists in the subfields I am familiar with and have found that there is a very strong correlation between scientists for whom I have a high regard and their high h." Hirsch says that it only takes a few seconds to find the h-index for a scientist - providing they don't have a common name - on the ISI Web of Knowledge database. For example, the physicist with the highest h-index is the string theorist Edward Witten of the Institute for Advanced Study in Princeton, who has an h-index of 110. This means that Witten has published 110 papers with at least 110 citations each.
Other highly ranked physicists include: Marvin Cohen (94), a condensed matter theorist at the University of California at Berkeley; Philip Anderson (91), a condensed matter theorist at Princeton University; Steven Weinberg (88), a particle theorist at the University of Texas at Austin; and Michael Fisher (88), a mathematical physicist at the University of Maryland. Hirsch, who has an h-index of 49, says that a "successful scientist" will have an index of 20 after 20 years; an "outstanding scientist" will have an index of 40 after 20 years; and a "truly unique individual" will have an index of 60 after 20 years. Moreover, he goes on to propose that a researcher should be promoted to associate professor when they achieve an h-index of around 12, and to full professor when they reach an h-index of about 18. However, Hirsch recognizes that the average h-index might be different for different subfields of physics: "One should make sure one knows what the typical values in each subfield are if one is comparing individuals from different subfields," he says.
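The definition lends itself to a few lines of code: sort the per-paper citation counts in descending order and take the largest rank r whose paper has at least r citations. A minimal sketch (the function name is my own):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h
```

For example, a scientist whose five papers have 10, 8, 5, 4 and 3 citations has an h-index of 4: four papers with at least four citations each, but not five with at least five.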
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00437.warc.gz
CC-MAIN-2018-13
2,944
7
http://www.nvnews.net/vbulletin/showpost.php?p=226957&postcount=22
code
Originally posted by Hellbinder Nvidia obviously encrypted the drivers so as to prevent exactly this type of comparison. There is really only one reason to do something like that: usually it's because you are trying to hide something. Unwinder devised a way to remove the encryption, but still, antidetect has no effect on any of the drivers past 44.65 as far as I know.
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430448949970.43/warc/CC-MAIN-20150501025549-00029-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
361
4
https://github.com/thaneuk/brackets-special-html-chars
code
Brackets Extension for insertion of Special HTML Characters The inline editing context menu gets an additional option that brings up a menu of common special HTML characters to insert at the cursor position (e.g. Copyright, Trademark, Non-breaking space). A "more" option at the bottom of this menu brings up a dialog with an extensive list of these characters for you to choose from. In addition, 'Alt-C' brings up the full dialog to select a character to be inserted at the cursor position.
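For context, the kinds of characters the extension inserts correspond to standard HTML named entities. An illustrative mapping (the selection of characters is my own, and Python is used here purely for the listing):

```python
# A few common special characters and their standard HTML entity names
SPECIAL_CHARS = {
    "\u00a9": "&copy;",   # copyright
    "\u00ae": "&reg;",    # registered trademark
    "\u2122": "&trade;",  # trademark
    "\u00a0": "&nbsp;",   # non-breaking space
    "\u201c": "&ldquo;",  # left double quote
    "\u201d": "&rdquo;",  # right double quote
}

def to_entities(text):
    """Replace known special characters with their HTML entities."""
    for char, entity in SPECIAL_CHARS.items():
        text = text.replace(char, entity)
    return text
```

Inserting the entity rather than the raw character keeps the markup safe regardless of the page's declared encoding.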
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00755.warc.gz
CC-MAIN-2022-40
500
4
https://www.physicsforums.com/threads/strong-laser-pointers-facts.328467/
code
Recently, I obtained a 100 mW green laser pointer, and the thing gets me confused. On one side, the beam is bright enough to see from some distance at a 90° angle to it, and the point is visible on a hillside some miles away. On the other hand, it doesn't burn anything the way advertisements often say it does. Even more, pointing it at the thermistor of a digital thermometer gives no temperature increase whatsoever. The beam looks like it does in the commercials, but the match-lighting part is wholly missing. So, are there lies somewhere, or am I missing some parameter? What really is and is not possible with these things? EDIT: Also, is it normal for the pointer to work only for about a minute at a time? At start, it gains brightness in discrete steps over about a second, then works at full power for about a minute, then becomes dimmer, in discrete steps again. Trying to light it up right afterwards gives only fractional power - it does not reach the highest steps. If you wait a few minutes, it works fine again. Is that normal behavior?
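On the missing match-lighting part, a back-of-envelope estimate shows how little heating 100 mW produces. Every number below (exposure time, absorbed fraction, target mass, heat capacity) is an illustrative assumption, and all heat losses are ignored, so the real temperature rise would be even smaller:

```python
def temperature_rise(power_w, seconds, absorbed_fraction, mass_kg, specific_heat):
    """Ideal-case temperature rise in kelvin, ignoring all heat losses."""
    energy_j = power_w * seconds * absorbed_fraction
    return energy_j / (mass_kg * specific_heat)

# 100 mW for 10 s on an assumed 50 mg match head (c ~ 1700 J/(kg K),
# roughly wood-like), with half the light absorbed: only a few kelvin,
# far below a match head's ignition temperature of well over 100 C.
dt = temperature_rise(0.1, 10, 0.5, 50e-6, 1700)
```

Even this loss-free ideal case yields only a few kelvin of warming for a match-head-sized target, which is consistent with the thermistor showing no change; burning demonstrations typically rely on much higher powers or on focusing onto dark, tiny, thermally isolated spots.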
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865181.83/warc/CC-MAIN-20180623190945-20180623210945-00093.warc.gz
CC-MAIN-2018-26
1,019
1
https://unlocked-wordhoard.blogspot.com/2006/01/you-aint-no-tolkien-fan.html
code
I've commented before about how dangerous it is in the blog-o-sphere to say anything about Tolkien that suggests he was a mere mortal. Whenever I say something that could be interpreted as lacking appropriate zeal, I get hatemail from all over. OK, punks ... you think you're Tolkien fans? You ain't no Tolkien fan. THIS is a Tolkien fan. h/t La Professora Abstraida
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00123.warc.gz
CC-MAIN-2021-43
366
4
https://iainhouston.com/posts/
code
Well, it really was that easy - and fast - to get started with Hugo, and the theme (this theme) recommended in the quick start tutorial is very nice indeed and will suffice now that I've made a few minor tweaks.
We use Jeff Geerling's Drupal-VM to keep our development and live sites' environments exactly in sync:
- The operating system is at exactly the same level
- The required software components are exactly the same
- The configuration of both operating system and software components is the same
The idea was to explore CoffeeScript version 2 at the same time as exploring React, to test its appealing claim to be "… painless to create interactive UIs" by adapting the "Thinking in React" example on the React website.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512421.5/warc/CC-MAIN-20181019170918-20181019192418-00327.warc.gz
CC-MAIN-2018-43
736
6