https://www.theannoyance.com/improv-classes-registration
code
Find You First: Registration is Non-Refundable and Non-Transferable. Why? Because your enrollment HOLDS A SPACE in a class that we are no longer able to offer someone else. This applies even if you get cast in a show, get a new job, or your schedule otherwise changes. So, please be sure about your schedule before you register for class.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665575.34/warc/CC-MAIN-20191112151954-20191112175954-00228.warc.gz
CC-MAIN-2019-47
411
3
https://forums.adobe.com/thread/782835
code
I can't figure out how to word this question to make it make sense, but here goes... I have a page with two parts, Graphics and Photos. I want the Photos part to have "next page" and "previous page" buttons that won't interfere with the Graphics part, in case I want to add next/back buttons to that too... but they're all on the same page. So do I have to make a new state for each combination of pages (like graphics page 2 with photos page 3, graphics page 2 with photos page 4, etc.)? Because I could probably do that... Or is there an easier way? Thanks in advance!

Your solution here is custom components, each of which can have its own states. Make the graphics portion a custom component, and the photos portion a custom component. That way, you can add buttons to the photos component that change its states, while leaving the graphics component unchanged.

Rob is correct on the approach. You can read a tutorial I wrote on creating a component like that here: http://chrisgriffith.wordpress.com/2010/09/25/using-custom-components-in-flash-catalyst/
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863949.27/warc/CC-MAIN-20180521043741-20180521063741-00565.warc.gz
CC-MAIN-2018-22
1,082
5
http://japanese.stackexchange.com/questions/tagged/te-form+conjunctions
code
なく vs. なくて and stem form vs. て form as conjunctions: I have been wondering about this, since every time I hand in a 作文 in a Japanese class, I'm corrected on conjunctions. It seems to me that whenever I use a て form as a conjunction, a response comes back ... Aug 29 '11 at 14:46
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653402/warc/CC-MAIN-20140305060733-00084-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,129
53
https://www.zdnet.com/article/microsoft-slashes-product-key-allowances-for-technet-subscribers/
code
Microsoft’s TechNet program is one of the best bargains in personal computing. For an annual subscription price of $349, TechNet Professional subscribers get access to nearly every release of every operating system (desktop and server) and Office suite. The licenses are valid for evaluation purposes only, but the downloadable products and product keys are typically Retail products, indistinguishable from shrink-wrapped products. Two years ago, a TechNet Professional subscription entitled you to 10 product keys for every version of Windows and every version of Office. In September 2010, citing concerns over piracy, Microsoft cut those allotments to five keys. Now, according to an announcement at the TechNet Subscriptions home page (link available only to signed-in TechNet subscribers), the number of product keys has just been slashed again: Beginning in mid-March 2012, subscribers to TechNet Subscriptions (excluding TechNet Standard which are entitled to 2 keys per product) may access a maximum allocation of three (3) product keys for Microsoft Office and Windows Client products in connection with their subscription. The allotted keys may only be used for software evaluation purposes. Once the maximum keys have been activated, no more keys will be made available. Additional product keys may be acquired through the purchase of an additional subscription. In addition to that restriction, Microsoft has also imposed restrictions on the number of keys that can be claimed on any given day. As another support page notes, a TechNet Professional (Retail) subscriber can claim 44 keys in a 24-hour period. Reaching your limit means that you have claimed the maximum number of keys allowed for your program benefit level within a 24 hour period. Every 24 hours you may claim another set of keys, up to your program levels maximum. The same document includes an explanation of sorts for the sudden spate of changes: Why has Microsoft limited my access to product keys? 
We are acting to protect the value of your subscription. If we did not act to prevent abuse of subscriptions we would eventually have to either limit the products available in a subscription or raise the price of your subscription. We believe that this is the best compromise to continue to deliver the highest value to you while limiting abuse at the same time. Over the past few years, I have encountered countless examples of unauthorized resellers hawking Windows and Office product keys on legit-looking websites. It was a lucrative business for scammers, whose $349 got them 10 licenses for Windows 7 Home Premium and Windows 7 Professional and Windows 7 Ultimate. It was practically a license to print money, as long as their customers activated the resold product keys before Microsoft cut them off. Even at five keys per product, the economics made it worth trying. The question now is whether the newly reduced allotment of product keys will actually reduce piracy or simply annoy TechNet subscribers.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945473.69/warc/CC-MAIN-20230326142035-20230326172035-00650.warc.gz
CC-MAIN-2023-14
2,995
13
https://techgenix.com/enabling-front-door-managed-certificates/
code
If you are working on your infrastructure-as-code (IaC) and having a hard time configuring the Front Door managed certificate feature in Azure, you are not alone. At the time of this writing, the parameters to configure it are not exposed in the API, so we cannot use ARM templates for this. I have noticed a lot of questions about this topic on the Internet, but there is no complete solution at this time. I will be writing a longer article on this topic, but this Quick Tip at least lets you know you are not the only one having a problem with it. There are a couple of ways to work around the current issue. First, you can enable it manually (not ideal). Second, you can use PowerShell or the Azure CLI in your Azure DevOps pipeline to enable it. As I mentioned, stay tuned for a complete tutorial on the workaround for configuring the Front Door managed certificate feature in Azure.
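For the Azure CLI route, a pipeline step along the following lines is one possible sketch. This is an illustration, not the article's own solution: the service connection and resource names (MyResourceGroup, MyFrontDoor, myEndpoint) are placeholders, and the `front-door` commands live in an Azure CLI extension, so verify `az network front-door frontend-endpoint enable-https --help` against your CLI version before relying on it.

```yaml
# Hypothetical Azure DevOps pipeline step; all names are placeholders.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Enable Front Door managed HTTPS on a frontend endpoint
      az network front-door frontend-endpoint enable-https \
        --resource-group MyResourceGroup \
        --front-door-name MyFrontDoor \
        --name myEndpoint \
        --certificate-source FrontDoor
```

Running this as a step after the ARM template deployment keeps the rest of the infrastructure in templates while covering the one setting the API does not yet expose.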
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104432674.76/warc/CC-MAIN-20220704141714-20220704171714-00468.warc.gz
CC-MAIN-2022-27
886
3
http://in-pursuit-synonym.psgroove.com/
code
If the synonyms for "in pursuit" are wrong, please inform the site editors by writing a comment. You can search for "in pursuit" synonyms and other synonyms at psgroove.com. Do you know a different synonym for "in pursuit" besides the ones listed? Comment to add another "in pursuit" synonym. 1. as in following: "in pursuit" synonym for an adjective meaning happening, being next or after.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516194.98/warc/CC-MAIN-20181023132213-20181023153713-00130.warc.gz
CC-MAIN-2018-43
373
5
https://www.sapnaaz.com/author/krrishkhedia/
code
Are your emails going to SPAM? Don't worry, it's OK. It is very common for emails to land in the spam folder for a new sender with a new SMTP server (email server), new IP & domain. Even if all technical aspects such as SPF, DKIM, DMARC, rDNS, no blacklists, and a 10/10 score on Mail-Tester.com are in place, your emails may land in spam.

When you build a new SMTP server with a new IP address, or when you subscribe to an SMTP service like Amazon SES with a dedicated IP, then before you can start sending any email marketing campaigns, you have to warm up your IP address. You also need to warm it up if you left your IP without sending any email for more than 30 days or so. So, SMTP warmup is somewhat of a continuous process.

What is Warming Up an IP?

When you have a brand-new IP address for your SMTP server, this IP will have no reputation on the internet, and ISPs (internet service providers) don't know it. So, IP warmup is the practice of building your reputation on the internet by gradually increasing the volume of emails sent with your IP address according to a predetermined schedule.

When an ISP notices email suddenly coming from a new IP address, it will immediately begin evaluating the traffic coming from that IP. Since ISPs treat email volume as a key factor in detecting spam, it is best to start by sending a low email volume, then work your way up to more significant amounts. This gives the email providers a chance to carefully observe and analyze your sending habits and volumes, and to record how your recipients engage with your email. In general, warming up takes between 2 and 10 weeks, depending on your scenario and the number of emails you want to send per day.

How do ISPs evaluate your emails and reputation?

When you start the warmup process, ISPs will evaluate your reputation according to four main factors:

- Bounce Rate: When you send an email campaign, you need to ensure that your recipient addresses are valid; a high bounce rate will destroy your reputation.
- Spam Traps: Even a very low percentage of spam traps can get you blacklisted!
- Spammy Content: Your message content is essential. ISPs will check whether you are using any spammy keywords or blacklisted links.
- User Interaction: How recipients are interacting with your emails; if they are reporting you as spam, then this is a real problem!

Warming Up an IP in Action (Examples)

OK, now that we have the big picture of warming up an SMTP IP, let's go through some examples and real scenarios and understand how the operation works. Please note: these sample schedules are intended as a suggestion only. Every sender is different.

1. 1,000 Emails Per Day

This is not a big deal: you start by sending 14 emails the first day, then increase gradually to reach 1K in about 10-15 days. The schedule is described in the following table:

|Day||No. Of Emails||Increment Emails Per Day|

2. 10K Emails Per Day

In this scenario, we do the same thing, but with a more extended schedule; check the following table:

|Day||No. Of Emails||Increment Emails Per Day|

3. 50K Emails Per Day

Now, 50K per day is considered a fairly big number. As a piece of advice, and to make things easier, always split large-volume warmup campaigns into smaller ones. In our case, we split it into three schedules:

- Reach 10K emails.
- Reach 30K emails.
- Reach 50K emails.

You may say this will make the warmup schedule longer. Maybe so, but in our experience, this way you can monitor and manage your warmup campaigns easily and get better results. If you were warming up to 10K, the number of emails would be smaller, and you could watch the user interaction and bounce rate on a smaller amount of email, which makes the picture clearer. I hope you got the point.

Free Warmup Tool: with our Warmup Schedule Generator, select your email list size and click Generate!

Email Volume and Timeline

For IP warmup, the warmup schedule and the sending volume are different for all senders.
The number of emails you send depends on your own total email volume: some senders may need to send 100 emails per day, and others may need 1M per day! But in any case, you must send enough emails at enough frequency that your email reputation can be tracked. Also, you have to know something very important: most reputation systems only store data for 30 days, so you should not go 30 days or more without sending on an IP. If you do, you will need to warm it up again. This is why we said above that warming up an IP is a continuous process.

Transactional vs. Marketing Emails

If you want to use your SMTP server to send transactional emails (password resets, invoices, welcome emails…), you may be an established business or a new business. If you are already sending a lot of emails and you decide to move to an ESP with a dedicated IP for the first time, or to build your own SMTP server, you should migrate your sending a little bit at a time. One way to do this is to split your traffic and move small portions of it to the new IP over time. Alternatively, if you are already maintaining multiple mail servers, you can move your servers over to your new IP one at a time.

Typically, the organic growth of your business will, by its nature, create an ideal ramp. Since transactional email volume usually depends on the number of users you have, the growth in your customer base will create a nice, comfortable growth curve in your email volume. So you will not need to worry about warming up in this case. But it is still important to monitor your reputation and your system to see how it's performing. (We will talk about monitoring reputation later.)

Email Marketing Campaigns

The simplest approach is to estimate your total monthly email volume and divide that number by 30. Then, try to spread your sending evenly over the first 30 days, based on that calculation. For example: if you will send 90,000 emails/month, you should start off sending 3,000 per day over the first month, and so on.
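The divide-by-30 arithmetic, combined with the gradual ramp described earlier, can be sketched in a few lines of shell. This is an illustrative sketch only: the 50-email starting point and the roughly 50% daily increase are assumptions for the example, not an official schedule from any ISP.

```shell
#!/bin/sh
# Flat daily target: spread the monthly volume evenly over 30 days.
MONTHLY_VOLUME=90000
DAYS=30
DAILY=$((MONTHLY_VOLUME / DAYS))   # 90,000 / 30 = 3,000 per day

# Hypothetical warmup ramp: start small and grow about 50% per day
# until the flat daily target is reached.
vol=50
day=1
while [ "$vol" -lt "$DAILY" ]; do
  echo "Day $day: send $vol emails"
  vol=$((vol + vol / 2))
  day=$((day + 1))
done
echo "Day $day onward: send $DAILY emails per day"
```

With these numbers, the ramp reaches the 3,000-per-day target in under two weeks; a gentler growth rate simply stretches the schedule out, which is often safer for a brand-new IP.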
Maintain Warmup Across ALL ISPs

It's important to remember that you must maintain a steady volume during the entire warmup period for each ISP. So remember to split up your warmup schedule so that each ISP receives a comparable amount of mail each day – don't warm up Gmail on Monday, Yahoo! on Tuesday, etc. – evenly disperse your mail to each ISP on each day of the warmup. If not, your sending activity looks sporadic and you won't be able to build a solid reputation. Just mix things up! Don't send to one ISP at a time.

Warming Up an IP: Tips

There are some crucial tips that you have to follow while warming up an IP:

- Don't ever start before you get a high sending score: ensure this by configuring SPF, DKIM, rDNS, and the other technical details.
- Don't ever send promotional emails in the warmup period. You need the highest engagement rates, so send transactional emails or maybe some valuable info.
- Mail only to your top active subscribers first. Ensure almost 0% bounce rates.
- Don't rotate or switch IPs during warmup. Rotation is a sign of spam.
- In your emails, add a clear link for people to unsubscribe.
- Add an email signature that makes your emails look trusted.
- Mix your campaigns with premium SMTP services; this will give better user interaction and domain reputation.
- Join newsletters. This will bring a lot of email to your inbox and give you a higher domain reputation.
- Send to your friends' list and ask them to report you as non-spam and to reply.
- Try your best to build an audience and warm up to that audience. In this way, you will achieve the best user interaction, which will make the warming-up process a lot easier.
- Monitor your campaigns accurately, and be sure to keep your bounce rate below 2% by validating your emails (you can use our service to validate the emails: Bulk Email Verification).

How to Monitor IP Reputation?

You need to monitor your bounce rate, user feedback, and reputation score.
Services like Amazon SES and SparkPost have a built-in reputation system that shows your bounce rate and user interaction. If you are using a custom SMTP server, as we explained in How To Set Up SMTP Server With Postal (Step By Step Guide), or whatever SMTP you use, you will in some cases be able to monitor bounces and user feedback through the mailing system, as in MailWizz. You can also monitor open and unsubscribe rates. This will give you an indication of how users are interacting with your messages.

Having the right tools for checking IP reputation is halfway to success. Here are some tools and services you can use:

- senderscore.org by Return Path. The score ranges from 0 to 100, 100 being the best. It tells you how you're performing; typically it's recommended that you maintain a sender score of 90 or better.
- talosintelligence.com by Cisco. It tells you what your reputation is across all the network providers Cisco manages. The reputation score is grouped into Good, Neutral, and Poor.
- postmaster.live.com. Microsoft's Smart Network Data Services gives you information about the traffic originating from your IP address, such as the volume of sent emails, complaint rates, and spam traps.
- postmaster.google.com. Gives access to your domain's sending data through Google Postmaster Tools.
- postmaster.aol.com. Checks your IP reputation and rates it as "bad", "neutral", or "good".

IP warmup is the act of increasing your sending gradually to build a good reputation and reach the recipient's inbox. So be careful, and follow the guidelines and tips listed above to achieve the best results. Good luck! If you have questions or clarifications, you can comment below or open a question on the Sapnaaz Forum. We wish you all the best!

Postal is a fully-featured open-source mail delivery platform for incoming & outgoing e-mail that gives you all the tools and features needed to build a full mailing system for your business.

What is the SMTP Server?
We have described this in detail in another blog post, "What is an SMTP Server?". But if you want to know it in brief, here it is: SMTP is the thing that allows you to send emails over the internet. SMTP stands for "Simple Mail Transfer Protocol". It's a networking concept that you don't have to worry about; just know that SMTP is the technical piece responsible for delivering emails over the internet.

What do we mean by Sending Unlimited Emails?

When we say unlimited emails, this means that we can send unlimited emails from our server; there are no restrictions imposed by companies, no monthly plans to buy, and so on. It's your server: you can send as much as your server can handle in terms of resources. So when you have more CPU and RAM, you can send more emails, and so on.

Let's Start The SMTP Server Setup

OK, let's start the real work! But before we start, you need to know what is required.

Requirements to Set Up an SMTP Server

In order to build and set up an SMTP server, you will mainly need two things:

1. Domain name

When you send emails, you will be sending from an email address like this one: So if you don't have a domain yet, go and get one now in order to continue the setup. How do you get a domain? Simply buy one! It costs around $10/year, so it's not that big a deal. In this course, we will be using Namecheap to get our domain, but you can use any other service if you want; they all work in the same way, and if you need any help, we will be here 🙂.

Got a domain? Great! ✔️ Let's continue.

2. VPS Server with Port 25 opened

What is a VPS server? If you don't know, it's simply a computer (a server) running in the cloud that you buy from a web hosting or cloud services company, publicly accessible with a public IP. A VPS can be used to host your websites with higher performance, and it can be used to run a machine 24/7 in the cloud to do any task you want.

Port 25 open??
We mentioned that the VPS must have port 25 opened. What does this mean? We don't want to bother your head with technical stuff, but in short, any network service or software uses a certain port to communicate over the internet or a network. For example:

- Connecting remotely to another Windows machine using RDP software works over port 3389.
- SQL database systems like MySQL on our computer work over port 3306.
- Connecting to a Linux machine remotely to manage it with SSH uses port 22.
- When you surf the web and open websites, you use port 80.

And so on. By default, all servers and computers have a firewall running which blocks all ports except the ones you want. So, to use a certain service, we need to open that port in the firewall. You should also know that ports can be blocked and opened in two directions, incoming and outgoing; the following diagram will make things clearer:

When you get a VPS server, just make sure that the company allows port 25 and doesn't block it, because some companies do this to protect against spammers. Here is a list of some companies that allow port 25 by default:

Are there any other companies? Yes. You can contact the support team of any company you want and ask whether they block any ports by default. If not, then perfect, you can go with it. Feel free to use any VPS company you want; it's up to you! You can also sign up on DigitalOcean through this coupon link to get a free $100 to test everything for free. Or start with Contabo, which we think is the cheapest VPS service that you can use; we have 2 servers configured with Contabo.

- Ubuntu 22.04, 20.04, or 18.04 x64 as your operating system
- You can start with 1 CPU / 2 GB RAM (and resize later)

Got your VPS? Great! ✔️ Let's continue.

VPS Server Basic Configuration

Now that we have our new Ubuntu VPS server, let's prepare it for setup.

- Press Win + R on your keyboard.
- Type cmd, and press Enter. A new window will open.
- In the window, type ssh username@ip-address (replace username with the username of your server and ip-address with the IP address of your server).

First, check your hostname. If you don't see a form of ANYTHING.YOURDOMAIN.COM, then you need to change the hostname using the following command:

sudo hostname server.domain.com

where the host part is anything you want. Our domain for this tutorial is sapnaaz.com, so the command will look like this:

sudo hostname postal.sapnaaz.com

Map your domain name:

Host: postal (whatever you have chosen)
Points To: YOUR_SERVER_IP

Done? ✔️ Let's continue.

Install Postal Mail Server on Ubuntu 22.04/20.04/18.04

The installation of Postal Mail Server on Ubuntu is not as complicated as others say. By sparing some minutes and following the few steps below, you should have Postal Mail Server running on your Ubuntu 22.04/20.04/18.04 server.

Step 1: Update your system

Like all other installations, we start by ensuring our system is updated:

sudo apt update
sudo apt -y upgrade

Perform a reboot if it is required:

[ -f /var/run/reboot-required ] && sudo reboot -f

Then install git and a few helpers once the system is up:

sudo apt -y install git curl jq

Step 2: Install Docker & Docker Compose

Import the Docker repository GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg

With the GPG key imported, you can add the Docker repository to your Ubuntu system:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Finally, install Docker CE on Ubuntu 22.04/20.04/18.04:

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

Add your user account to the docker group.
sudo usermod -aG docker $USER
newgrp docker

Confirm successful installation by checking the Docker version:

$ docker version
Client: Docker Engine - Community
 Version: 20.10.16
 API version: 1.41
 Go version: go1.17.10
 Git commit: aa7e414
 Built: Thu May 12 09:18:18 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true

Server: Docker Engine - Community
 Engine:
  Version: 20.10.16
  API version: 1.41 (minimum version 1.12)
  Go version: go1.17.10
  Git commit: f756502
  Built: Thu May 12 09:16:22 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.4
  GitCommit: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc:
  Version: 1.1.1
  GitCommit: v1.1.1-0-g52de29d
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

Download the latest Compose release:

curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -

Make the binary file executable:

chmod +x docker-compose-linux-x86_64

Move the file to your PATH:

sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose

$ docker-compose version
Docker Compose version v2.6.1

Step 3: Install MySQL / MariaDB database server

The other requirement of the Postal mail server is a database server. You can run MariaDB in a Docker container:

docker run -d \
  --name postal-mariadb \
  -p 127.0.0.1:3306:3306 \
  --restart always \
  -e MARIADB_DATABASE=postal \
  -e MARIADB_ROOT_PASSWORD=postalpassword \
  mariadb

Step 4: Install and Configure RabbitMQ

Postal uses RabbitMQ for queueing.
RabbitMQ is necessary to process messages and distribute the load. To install it, run the following command:

docker run -d \
  --name postal-rabbitmq \
  -p 127.0.0.1:5672:5672 \
  --restart always \
  -e RABBITMQ_DEFAULT_USER=postal \
  -e RABBITMQ_DEFAULT_PASS=password \
  -e RABBITMQ_DEFAULT_VHOST=postalvhost \
  rabbitmq:3.8

Now that we have installed all the prerequisite packages, it's time to install Postal.

Step 5: Install and Configure Postal on Ubuntu 22.04/20.04/18.04

Clone the Postal repository:

sudo git clone https://postalserver.io/start/install /opt/postal/install

Create a symlink for the Postal binary:

sudo ln -s /opt/postal/install/bin/postal /usr/bin/postal

You'll need DNS records for accessing the API, management interface & SMTP server. You can configure a global SPF record for your mail server, which means domains don't each need to individually reference your server IPs; this allows you to make changes in the future.

|spf.postal.yourdomain.com||TXT||v=spf1 ip4:Your_Server_IP ~all|

If you wish to receive incoming e-mail by forwarding messages directly to routes in Postal, you'll need to configure a domain for this to point to your server using an MX record.

|rp.postal.yourdomain.com||TXT||v=spf1 a mx include:spf.rp.postal.yourdomain.com ~all|

Run the command below, replacing postal.example.com with the actual hostname you want to access your Postal web interface at:

$ sudo postal bootstrap postal.example.com
Latest version is: 2.1.1
=> Creating /opt/postal/config/postal.yml
=> Creating /opt/postal/config/Caddyfile
=> Creating signing private key

This will generate three files in /opt/postal/config:

- postal.yml is the main Postal configuration file
- signing.key is the private key used to sign various things in Postal
- Caddyfile is the configuration for the Caddy web server

Open the Postal configuration file.
sudo vim /opt/postal/config/postal.yml

At the minimum, have the following settings:

web:
  # The host that the management interface will be available on
  host: postal.yourdomain.com
  # The protocol that requests to the management interface should happen on
  protocol: https

main_db:
  # Specify the connection details for your MySQL database
  host: localhost
  username: root
  password: postalpassword
  database: postal

message_db:
  # Specify the connection details for your MySQL server that will house the
  # message databases for mail servers.
  host: localhost
  username: root
  password: postalpassword
  prefix: postal

rabbitmq:
  # Specify the connection details for your RabbitMQ server.
  host: 127.0.0.1
  username: postal
  password: password
  vhost: postalvhost

dns:
  # Specifies the DNS records that you have configured. Refer to the documentation at
  # https://github.com/atech/postal/wiki/Domains-&-DNS-Configuration for further
  # information about these.
  mx_records:
    - mx.postal.yourdomain.com
  smtp_server_hostname: postal.yourdomain.com
  spf_include: spf.postal.yourdomain.com
  return_path: rp.postal.yourdomain.com
  route_domain: routes.postal.yourdomain.com
  track_domain: track.postal.yourdomain.com

smtp:
  # Specify an SMTP server that can be used to send messages from the Postal management
  # system to users. You can configure this to use a Postal mail server once
  # your installation has been set up.
  host: 127.0.0.1
  port: 2525
  username: # Complete when Postal is running and you can
  password: # generate the credentials within the interface.
  from_name: Postal
  from_address: [email protected]

Edit the file to fit your Postal settings. When done, initialize the database by adding all the appropriate tables:

$ sudo postal initialize
[+] Running 5/5
 ⠿ smtp Pulled 0.5s
 ⠿ cron Pulled 0.5s
 ⠿ requeuer Pulled 0.5s
 ⠿ worker Pulled 0.5s
 ⠿ web Pulled 0.5s
Initializing database ......
Create your initial admin user. Add your email, name & a suitable password (remember this email and password, as they will be required to log in to Postal):

$ sudo postal make-user
Postal User Creator
Enter the information required to create a new Postal user.
This tool is usually only used to create your initial admin user.

E-Mail Address : [email protected]
First Name : Admin
Last Name : User
Initial Password: : ********

User has been created with e-mail address [email protected]

Starting the application

Run the following command to start the Postal application:

$ sudo postal start
[+] Running 5/5
 ⠿ Container postal-cron-1 Started 0.4s
 ⠿ Container postal-web-1 Started 0.2s
 ⠿ Container postal-requeuer-1 Started 0.3s
 ⠿ Container postal-smtp-1 Started 0.3s
 ⠿ Container postal-worker-1 Started 0.3s

This will run a number of containers on your machine. You can look at their status at any time using:

$ sudo postal status
NAME COMMAND SERVICE STATUS PORTS
postal-cron-1 "/docker-entrypoint.…" cron running
postal-requeuer-1 "/docker-entrypoint.…" requeuer running
postal-smtp-1 "/docker-entrypoint.…" smtp running
postal-web-1 "/docker-entrypoint.…" web running
postal-worker-1 "/docker-entrypoint.…" worker running

Step 6: Configuring the Caddy Web Server

A web proxy is required for all web traffic and SSL termination. There are many options for the proxy: Nginx, Apache, HAProxy, etc. In this guide, we're going to use Caddy.
We can run the Caddy web server using Docker:

docker run -d \
  --name postal-caddy \
  --restart always \
  --network host \
  -v /opt/postal/config/Caddyfile:/etc/caddy/Caddyfile \
  -v /opt/postal/caddy-data:/data \
  caddy

Check that it's running:

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
67461a34ae39 caddy "caddy run --config …" 20 seconds ago Up 20 seconds postal-caddy
1bc1bac79c15 ghcr.io/postalserver/postal:2.1.1 "/docker-entrypoint.…" About a minute ago Up About a minute postal-smtp-1
c5a04cea4211 ghcr.io/postalserver/postal:2.1.1 "/docker-entrypoint.…" About a minute ago Up About a minute postal-worker-1
e188ee4c2844 ghcr.io/postalserver/postal:2.1.1 "/docker-entrypoint.…" About a minute ago Up About a minute postal-requeuer-1
d7335cd48fa1 ghcr.io/postalserver/postal:2.1.1 "/docker-entrypoint.…" About a minute ago Up About a minute postal-web-1
c4b1cc7a852e ghcr.io/postalserver/postal:2.1.1 "/docker-entrypoint.…" About a minute ago Up About a minute postal-cron-1

Once this has started, Caddy will issue an SSL certificate for your domain and you'll be able to immediately access the Postal web interface and log in with the user you created in one of the previous steps.

Once you have installed Postal, you can upgrade it to the latest release by running this command:

$ sudo postal upgrade
warning: redirecting to https://github.com/postalserver/install/
From https://postalserver.io/start/install
 * branch main -> FETCH_HEAD
Already up to date.
No version specified, using latest available version...
Upgrading to 2.1.1
[+] Running 5/5
⠿ worker Pulled 1.5s
⠿ cron Pulled 0.5s
⠿ web Pulled 0.4s
⠿ smtp Pulled 0.5s
⠿ requeuer Pulled 0.5s
Migrating database
[+] Running 5/0
⠿ Container postal-worker-1 Running 0.0s
⠿ Container postal-web-1 Running 0.0s
⠿ Container postal-smtp-1 Running 0.0s
⠿ Container postal-requeuer-1 Running 0.0s
⠿ Container postal-cron-1 Running 0.0s

Step 7: Access Postal Admin Web Dashboard
Access the Postal administration page at https://postal.yourdomain.com. If your installation was successful you should see a Let's Encrypt SSL certificate in place; log in with the admin user email created earlier. You'll get a dashboard which looks like this:
Create your first organization, then provision a mail server to start sending and receiving messages using Postal. Give your mail server a name and choose live mode. With the basic configuration in place, you can now use the Postal email delivery software. Refer to the Postal administration guide for further configuration.
Congratulations! You have completed setting up your own free SMTP server using Postal. But before you start sending emails, you have to understand some crucial concepts and follow these guidelines to get the best delivery rates and reach the inbox:
- Warming up your SMTP server – we will publish a blog post about this very soon.
- rDNS (Reverse DNS)
- DMARC (Domain-based Message Authentication, Reporting & Conformance)
- MX Record
- Avoid spam factors – IP reputation, SPF, DKIM, DMARC, message body, email list health.
Email is among the most widely used communication channels, and most internet systems use the Simple Mail Transfer Protocol (SMTP) to transfer email from their server to the recipient's server. It's a standard communication protocol used by mail servers and other message transfer agents, and it can impact the deliverability of your campaigns.
What is the Simple Mail Transfer Protocol (SMTP)?
The Simple Mail Transfer Protocol (SMTP) is a technical standard for transmitting electronic mail (email) over a network. Like other networking protocols, SMTP allows computers and servers to exchange data regardless of their underlying hardware or software. Just as the use of a standardized form of addressing an envelope allows the postal service to operate, SMTP standardizes the way email travels from sender to recipient, making widespread email delivery possible.
SMTP is a mail delivery protocol, not a mail retrieval protocol. A postal service delivers mail to a mailbox, but the recipient still has to retrieve the mail from the mailbox. Similarly, SMTP delivers an email to an email provider's mail server, but separate protocols are used to retrieve that email from the mail server so the recipient can read it.
What is a protocol?
A protocol consists of a set of rules and procedures which govern the exchange of data between two or more devices. Protocols define how data transmission occurs between electronic devices such as computers. They set the standard procedures for communication and the exchange of information. The International Organization for Standardization established the Open Systems Interconnection (OSI) model, a widely used reference that lays down the standard for communication across different networks; it divides the process of data transmission into a series of seven layers. Important internet protocols include TCP/IP, HTTPS, DNS, and SMTP.
What is an SMTP server?
An SMTP server is a mail server that can send and receive emails using the SMTP protocol. Email clients connect directly with the email provider's SMTP server to begin sending an email. Several different software programs run on an SMTP server:
- Mail submission agent (MSA): The MSA receives emails from the email client.
- Mail transfer agent (MTA): The MTA transfers emails to the next server in the delivery chain.
As described above, it may query the DNS to find the recipient domain's mail exchange (MX) record if necessary.
- Mail delivery agent (MDA): The MDA receives emails from MTAs and stores them in the recipient's email inbox.
Common SMTP server providers & settings
| SMTP Provider | URL | SMTP Settings |
How does SMTP work?
The Transmission Control Protocol (TCP) provides the underlying connection for communication between the mail sender and the mail receiver. In SMTP, the mail sender sends the data as command strings over this reliable, ordered data stream channel. The SMTP client (the initiating agent, sender, or transmitter) initiates the communication session: it issues the command strings and the SMTP server (the listening agent, or receiver) responds to each of them. A session may involve zero or more SMTP transactions. Usually, an SMTP email transaction follows four command/reply sequences:
HELO/EHLO – tells the email server that the client wants to start a mail transaction. The client mentions its domain name after this command.
MAIL FROM – lays down the bounce address/return address, defining the return or reverse path.
RCPT TO – specifies a recipient of the message. Since the sender's envelope contains the addresses of all the recipients, the RCPT command can be issued multiple times, once for each recipient.
DATA – shows where the content of the message starts, as opposed to its envelope. An empty line separates the message header and body in the message's text. DATA is not just one command but a group of commands, in which the server has to reply twice:
- First, the server acknowledges the command and replies that it is ready to take the message.
- Then, after the end-of-data sequence completes, it either accepts or rejects the entire message.
For each command, including DATA, the server can reply in a positive way (2xx reply codes) or a negative way. Negative responses can further be permanent (5xx codes) or transient (4xx codes).
If a server sends 'reject,' it is a permanent failure, and the client needs to send a bounce message to the respective server. A 'drop', on the other hand, is a positive reply after which the message is discarded rather than delivered.
What is an SMTP envelope?
The SMTP "envelope" is the set of information that the email client sends to the mail server about where the email comes from and where it is going. The SMTP envelope is distinct from the email header and body and is not visible to the email recipient.
What port does SMTP use?
In networking, a port is a virtual point where network data is received; think of it as the apartment number in the address of a piece of mail. Ports help computers sort networking data to the correct applications. Network security measures like firewalls can block unnecessary ports to prevent the sending and receiving of malicious data.
- Port 25 is most commonly used for connections between SMTP servers. Firewalls for end-user networks often block this port today, since spammers try to abuse it to send large amounts of spam.
- Port 465 was once designated for use by SMTP with Secure Sockets Layer (SSL) encryption. But SSL was replaced by Transport Layer Security (TLS), and modern email systems therefore do not use this port; it only appears in legacy (outdated) systems.
- Port 587 is now the default port for email submission. SMTP communications via this port use TLS encryption.
- Port 2525 is not officially associated with SMTP, but some email services offer SMTP delivery over this port in case the above ports are blocked.
Difference between SMTP, IMAP, & POP3
The Internet Message Access Protocol (IMAP) and Post Office Protocol (POP) are used to deliver the email to its final destination. The email client has to retrieve the email from the final mail server in the chain to display the email to the user. The client uses IMAP or POP instead of SMTP for this purpose.
To understand the difference between SMTP and IMAP/POP, consider the difference between a plank of wood and a rope. A length of wood can be used to push something forward, but not pull it in. A rope can pull an item, but cannot push it. Similarly, SMTP "pushes" email to a mail server, but IMAP and POP "pull" it the rest of the way to the user's application.
What is Extended SMTP (ESMTP)?
Extended Simple Mail Transfer Protocol (ESMTP) is a version of the protocol that expands upon its original capabilities, enabling the sending of email attachments, the use of TLS, and other features. Almost all email clients and email services use ESMTP, not basic SMTP. ESMTP has some additional commands, including "EHLO", an "extended hello" message that enables the use of ESMTP at the start of the connection.
SMTP is essential for sending and receiving emails. However, as an email marketer, you need to choose and configure an SMTP service provider that suits your requirements. At Sapnaaz, we provide easy SMTP setup and integration with any SMTP server you might like to use for your email marketing campaigns. Feel free to reach out to the Sapnaaz team to learn more, or check out our SMTP setup service page.
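Looping back to the transaction walk-through earlier: the command sequence and the reply-code classes can be sketched in a few lines of Python. This is an illustrative model only, not a real SMTP client (a real client waits for the server's reply after every command, and modern clients open with EHLO); the function names and example addresses are made up for the sketch.

```python
def transaction_commands(sender, recipients, helo_name="client.example.com"):
    """Build the client command sequence for one SMTP transaction.

    Illustrative only: a real client interleaves these commands with
    server replies instead of emitting them all at once.
    """
    cmds = [f"HELO {helo_name}", f"MAIL FROM:<{sender}>"]
    # RCPT TO is issued once per recipient in the envelope.
    cmds += [f"RCPT TO:<{r}>" for r in recipients]
    cmds.append("DATA")  # header, blank line, and body follow here ...
    cmds.append(".")     # ... terminated by the end-of-data line
    return cmds


def reply_class(code):
    """Classify a server reply code into the classes described above."""
    if 200 <= code < 300:
        return "positive"
    if 400 <= code < 500:
        return "transient failure"   # safe to retry later
    if 500 <= code < 600:
        return "permanent failure"   # the client should bounce
    return "other"


for cmd in transaction_commands("[email protected]", ["[email protected]", "[email protected]"]):
    print(cmd)
print(reply_class(250), reply_class(421), reply_class(550))
```

Note how two RCPT TO commands appear for the two envelope recipients, matching the rule that RCPT can be issued once per recipient.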
http://englishrivierageopark.org.uk/section_sub.cfm?section=13&sub=77
Saltern Cove: Permian Triassic
Designations: GCR (No: 1503), SSSI
A key site showing Permian rocks resting directly on top of Lower Devonian rocks, a regionally significant unconformity. Fossil burrows found near Waterside are evidence of life within the Permian deserts, perhaps the place for a primitive reptile to hide away from the sun.
GCR block / key theme: Permian-Triassic
Associated SSSI: Saltern Cove SSSI
GCR Statement of Interest: "At this locality coarse Permian fluvial breccias rest unconformably on Devonian slates. The unconformity surface is very clearly seen as a cast on the base of the breccias. These contain much locally-derived material and are arranged in poorly organised, fining-upwards sedimentary sequences. The coarsest Permian beds occur immediately above the unconformity. A key site showing a regionally significant unconformity."
http://stackoverflow.com/questions/6213595/dynamics-controls-lost-on-postback
This old chestnut again. My page is constructed as follows; I have a dropdownlist which is databound on first load. When the user selects a value from this, a postback is performed which then databinds a repeater control. The ItemTemplate of this repeater control contains a placeholder control. In code behind in the ItemDataBound event of the repeater, I am adding two controls dynamically to this placeholder, a hiddenfield and a checkbox. When the user clicks the save button, I then want to iterate over all those dynamically created hiddenfields and checkboxes and determine their values. However when the user clicks the save button, those controls no longer exist as shown in the page trace. I know this is a lifecycle issue and the articles I've seen on this suggest using Init methods to dynamically create your controls but I can't because of the way my page works, e.g. the repeater control only appears and binds after a value is chosen from the dropdownlist. What do I need to do to maintain the dynamic controls through the postback caused by clicking on the save button?
http://www.activeworlds.com/help/aw51/document.php?messages
All of the text messages displayed by the Active Worlds Browser itself, including the text in menus and on buttons in dialog boxes, are contained in an external ASCII file in order to make support for languages other than English possible. As the AW browser is developed, the text messages used are exported and stored in the default.awm message file. Message files have the extension ".awm" and are found in the "Messages" folder inside the Active Worlds main folder. You can create and save your own message file, but do not save over the default.awm file. It is not necessary to include entries in your file which you plan to leave exactly as they are in the default.awm file, since Active Worlds will look in default.awm for any entries it fails to find in your selected message file. You can select which message set to use from the Advanced Settings dialog. Click here for more information on creating your own message files for Active Worlds. If you are interested in creating a message file for a particular foreign language, please email us.
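The fallback behaviour described above (look in the selected message set first, then fall back to default.awm) amounts to a two-level lookup. The sketch below illustrates only that lookup order; the key=value file format, the message keys, and the function names are assumptions for the illustration, not the actual .awm format.

```python
def load_messages(text):
    """Parse a minimal key=value message file.
    Hypothetical stand-in format; real .awm files may differ."""
    messages = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            messages[key.strip()] = value.strip()
    return messages

def lookup(key, selected, default):
    """Look in the selected message set first, then fall back to the
    default set, mirroring the behaviour described above."""
    return selected.get(key, default.get(key, key))

default_awm = load_messages("menu.file=File\nmenu.help=Help")
german_awm = load_messages("menu.file=Datei")   # overrides only one entry

print(lookup("menu.file", german_awm, default_awm))   # Datei
print(lookup("menu.help", german_awm, default_awm))   # Help (fallback)
```

This is why a translation file only needs the entries it actually changes: anything missing is served from the default set.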
https://gis.stackexchange.com/questions/172139/get-the-intersection-of-a-polygon-with-a-generated-buffer-in-postgis
I have two tables. One has points in EPSG:4326 and the other has polygons, also in EPSG:4326. I loaded these two as layers in GeoServer and show them using OpenLayers (as shown in the image).
Now I want to write a query in PostGIS which finds the intersection area of each green point (buffered by 200 meters) with the polygons. I have written this query:
SELECT c.name, d.name
FROM points c, polygons d
WHERE ST_INTERSECTS(d.geom, ST_BUFFER(c.geom, 500))
But I get no results. If I increase the buffer size to 5000000 then I get results.
- So the question is whether there is a better way to do this without using a buffer.
- And whether there is a projection issue that I am missing.
I followed the advice of @HimBromBeere and transformed my geometries to EPSG:2100 (which is in metres). The problem remains: executing this query I get 0 as a result, which doesn't make any sense.
SELECT ST_AREA(
  ST_INTERSECTION(ST_Buffer(ST_Transform(ST_SetSRID(c.geom, 4326), 2100), 200),
                  ST_Transform(ST_SetSRID(d.geom, 4326), 2100)))
FROM categoriesdata c, dimoi d
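For reference, when geometries carry EPSG:4326 coordinates, ST_Buffer interprets the distance in degrees, not metres, so a metre distance must either be converted or the geometries transformed to a metric CRS first (as attempted above). A rough back-of-the-envelope conversion, sketched in Python (planar approximation; the constant and function names are mine, not PostGIS):

```python
import math

METERS_PER_DEGREE = 111_320.0  # approx. length of one degree of latitude

def meters_to_degrees(meters, latitude_deg=0.0):
    """Convert a metre distance into an approximate degree distance for
    use with degree-based (EPSG:4326) coordinates.  East-west distances
    shrink by cos(latitude); this is a rough planar approximation."""
    return meters / (METERS_PER_DEGREE * math.cos(math.radians(latitude_deg)))

# A 200 m buffer around a point at ~38° N needs a radius of roughly
# 0.002 degrees, not 200 "units":
print(meters_to_degrees(200, 38.0))
```

In PostGIS itself, a common alternative that avoids buffering altogether is the geography type, e.g. ST_DWithin(c.geom::geography, d.geom::geography, 200), where the distance argument is in metres.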
https://www.avertx.com/faqs/how-to-use-network-diagnostics-on-a-proconnect-nvr-/
ProConnect network video recorders (NVR) can function as stand-alone units, but provide more features and user flexibility when connected to the AvertX Connect cloud service. ProConnect network access can be diagnosed to confirm connectivity, for example when the network works fine for most internet tasks but the NVR isn't accessible via AvertX Connect.
To view Network Diagnostic details
- Log into the ProConnect NVR's user interface (UI)
- Go to Setup > System Settings > Networking > Diagnostics tab
- When properly connected to the network, GREEN check marks are visible under the Status, Ports, and Relay sections:
- Status addresses the following:
Network Connection (this pings the gateway set up on the recorder). Can fail if internal ICMP is disabled on the gateway/router. There are cases where this can fail while the network is still functioning.
Internet Connection (this attempts to ping 8.8.8.8, Google's DNS server). Can fail if behind a firewall that does not allow ICMP traffic to 8.8.8.8. In this case, AvertX Connect may still work if proper whitelisting has been done.
DNS resolution is required for many aspects of the AvertX Connect connection. This should NOT be in a failed state at any time.
- Ports addresses the following:
80 Outbound (HTTP connection to port 80 of the 'cloudApiUrl' branded host; default is restapi.gp4f.com). The URL may change due to branding within AvertX Connect. * A GREEN check mark is required for communication.
443 Outbound (HTTPS connection to port 443 of the 'cloudApiUrl' branded host; default is restapi.gp4f.com). The URL may change due to branding within AvertX Connect.
- Relay addresses the following:
Relay registration is required for many aspects of the AvertX Connect connection. For AvertX Connect access, this should NOT be in a failed state at any time.
- Progress through the following items to help convert a red X to a GREEN check mark:
a. Check the physical connection (i.e. cables, connectors, power, etc.)
b. Confirm the network cable connector is firmly plugged into the Internet or Client port on the back of the NVR.
c. Verify that the NVR has a valid local IP address on the network it's connected to by going to Setup > System Settings > Networking > Adapters
d. Confirm network devices (i.e. switch, router, etc.) have all cables firmly connected and show the NVR is attached.
e. Perform a power drain cycle to refresh the NVR's networking hardware components.
f. Check that network devices like the router/firewall don't have "ping" (ICMP) traffic blocked or disabled.
g. Check that the network is using a reliable DNS address, like 8.8.8.8 (Google DNS).
h. Verify there are no firewalls or web filters between the recorder and the internet that would block port 80 and 443 traffic.
i. Visit the AvertX Connect network requirements document for a complete list of URLs which must be added to a network's firewall/web-filter approved list to allow network traffic.
- Click on the Setup > Networking > Diagnostics > Refresh button after connections and settings are changed or confirmed.
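The triage rules above (gateway/internet ping failures can be benign when ICMP is blocked, while DNS resolution, the outbound ports, and relay registration must never fail) can be summarised in a short function. This is not AvertX code; the check names and return format are invented for the sketch.

```python
# Checks whose failure can be benign (e.g. ICMP blocked by a firewall),
# per the notes above; every other check is treated as critical.
SOFT_CHECKS = {"network_connection", "internet_connection"}

def summarize(results):
    """`results` maps a check name to True (green check) or False (red X).
    Returns (status, sorted list of critical failures to chase first)."""
    hard = sorted(name for name, ok in results.items()
                  if not ok and name not in SOFT_CHECKS)
    if hard:
        return "fail", hard
    if all(results.values()):
        return "ok", []
    return "degraded", []   # only benign ping checks failed

print(summarize({"network_connection": False, "internet_connection": True,
                 "dns_resolution": True, "port_80": True,
                 "port_443": True, "relay": True}))   # ('degraded', [])
```

A "degraded" result here mirrors the note in the docs: ping checks can fail while AvertX Connect still works, so only the critical checks warrant immediate escalation.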
https://tv.gab.com/watch?v=63af89d2a3049898b4488e64
Visit our Threadless store for Free Speech merch! Visit our flagship sites at: https://codoh.com and https://castlehillpublishers.com Browse the Holocaust Handbooks collection at: Check out an interactive commercial for the Holocaust Handbooks at: Browse our catalog at: https://castlehill.shop We're ringing in the new year with a resolution that EVERYONE can keep, brought to you by our growing collection of free-speech-themed merchandise. Learn more at https://merchandise.codoh.com Happy New Year from all of your friends and compatriots at CODOH and Castle Hill Publishers.
https://lists.denx.de/pipermail/u-boot/2005-January/008573.html
[U-Boot-Users] Diagnostic Tools in U-Boot
wd at denx.de
Sat Jan 8 01:18:30 CET 2005
In message <1104876780.23236.46.camel at cashmere.sps.mot.com> you wrote:
> I have a few board-specific tools that help with some
> board diagnostics and manufacturing processes. These
You're not the first one...
> tools are currently built in the "examples" directory,
> but are highly board-specific. (EEPROM programming,
> memory checking, Serial Number stamping, etc.)
I always thought the name "examples" was clear enough. This is NOT a place for general purpose code, especially not for board-dependent code.
> Currently, the examples directory has board-specific
> binaries that it builds, but it does so based on the
> the ARCH variable, like this:
This may be inelegant or even ugly, but is IMHO ok for the simple cases.
> It's sort of a rats nest all thrown together in one directory
> where you just happen to build the parts that apply to your ARCH
yes, board no.
> I would like to propose a new scheme that fits in with
Why a new scheme instead of simply using what's already in place and being used by others?
> 1) Make the examples directory be a sub-directory structure
> that allows a board-specific break-down beneath it.
No. "examples" means example code, and to be more specific: examples for standalone programs. Board dependent code belongs to board directories.
> 2) Make a "diagnostics" (or "tools") sub-directory beneath
> each board directory if it can build tools in the style
> of the current "examples" directory.
If you really feel you must create a sub-directory in your board directories, feel free to do this. To me it makes little sense.
> Personally, I think 2) would be a cleaner distribution,
> though I am curious to know if you have any thoughts in
> this area too.
For example, have a look at the board/trab directory; the Makefile there includes rules to build a custom tool "trab_fkt" which is used by the board manufacturer for burn-in tests. This is totally transparent to the rest of the U-Boot tree. Feel free to do something similar. I see no real need for a new level.
Software Engineering: Embedded and Realtime Systems, Embedded Linux
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd at denx.de
A verbal contract isn't worth the paper it's written on. -- Samuel Goldwyn
More information about the U-Boot
https://www.libhunt.com/r/WinRing0
Similar projects and alternatives to WinRing0 based on common topics and language
A library for querying connected PCI devices and a pci.ids parser.
HackSys Extreme Vulnerable Windows Driver (HEVD)
Windows kernel-mode Bluetooth Profile & Filter Drivers for PS3 peripherals
Windows File System Proxy - FUSE for Windows
Windows kernel-mode driver for controlling access to various input devices.
Windows File System Proxy - FUSE for Windows [Moved to: https://github.com/winfsp/winfsp] (by billziss-gh)
WinRing0 reviews and mentions
We haven't tracked posts mentioning WinRing0 yet. Tracking mentions began in Dec 2020.
GermanAizek/WinRing0 is an open source project licensed under GNU General Public License v3.0 only which is an OSI approved license.
https://www.khanacademy.org/computing/computer-science/cryptography/crypt/v/caesar-cipher
The Caesar cipher
The Caesar Cipher, used by Julius Caesar around 58 BC, is a substitution cipher that shifts letters in a message to make it unreadable if intercepted. To decrypt, the receiver reverses the shift. Arab mathematician Al-Kindi broke the Caesar Cipher using frequency analysis, which exploits patterns in letter frequencies. Created by Brit Cruise.
- Why would Caesar use ciphers? (27 votes)
- Caesar used ciphers so that important information, such as the location of an attack or the date it would be carried out, would be unknown to enemies but known to the rest of his troops. If his messages were ever intercepted, the enemy wouldn't immediately understand what the cipher meant. (20 votes)
- At 2:18, what if one uses a 'random' shift, like A is D and D is K, rather than a shift of 3 or 4? Then the frequency fingerprint won't work, right? (16 votes)
- It doesn't really matter as long as all letters receive the same treatment, meaning ALL As turn to D and ALL Ds to Ks. The key thing here is universality for each letter. (3 votes)
- Decoding the Caesar Cipher based on the "fingerprint" requires a large sample space. If the message contains few words, or only a single word, the frequency distribution won't help in that case. (25 votes)
- To the original question: yes, shorter messages make it harder to detect the frequency distribution, but you'd be surprised how quickly it shows up. To Skylear's comment: a Caesar Cipher does have a sample space. The random variable is the number used for the shift.
In your example, you encoded JASON IS BLUE using a shift of 2, but 2 could have been 1 or 23 or 14. In fact, it could have been any number from 1 to 26. So the sample space has 26 possibilities (there are 26 different ways to apply a Caesar cipher to the message). (12 votes)
- So if the shift is 3, wouldn't a Z be a C? (7 votes)
- Yes it would, since there are no more letters after Z, so you'll have to start from the beginning: A, then B, and finally C. (8 votes)
- A cipher method I found quite interesting is the "Pig Pen Cipher", which is, according to Wikipedia, "a geometric simple substitution cipher which exchanges letters for symbols which are fragments of a grid". I found the basic substitution quite simple, but what about Pig Pen Ciphers that contain code words that disrupt the order of letters in the grid? (9 votes)
- I do not quite understand what you are saying, but if you are asking about cracking the Pig Pen cipher I can give you some advice. If you want to know how the Pig Pen cipher works, skip to the next paragraph. The Pig Pen cipher is one of the many symbol ciphers, where a symbol is designated to each letter in the alphabet. Now, the Pig Pen cipher is a very common code, so many people might know it, but say a random code is made up. You would collect the sample and analyze it the same way. You would make a chart of the symbol frequencies, and compare each symbol's frequency to a letter. If you want to know how a Pig Pen cipher works, read below. A Pig Pen Cipher is a symbolic cipher, with a different symbol representing each letter of the alphabet. It is a set code, and never changes. The link below shows a chart with each letter in its part of the grid. (Obviously you do not draw the letter in the secret note.)
- How would you find the frequency of an encrypted message? (3 votes)
- Hello Josiah, you could do it manually by printing the note on paper, then using scissors to cut out the individual letters.
Next, sort the letters into 26 bins (a through z). Finally, count the number of times each letter occurred. Most would argue this is an inefficient way to do the operation, but that is what the statistical analysis tells us to do... In reality you would use a computer program to count the number of times a letter occurred. The computer could do the calculation for the entire dictionary in minutes.
- At 0:37, MEET is encrypted as PHHN. M is shifted "ahead" 3 letters to P, E is shifted "ahead" 3 letters to H (twice over), but then T is encrypted by shifting to the letter that PRECEDES it (by three)... why? (5 votes)
- Because it's wrong; the 't' in "at" or "elephant" should be the same letter, but it's W. The sequence should be 'p', 'h', 'h', 'w', 'p', 'h', 'd', 'w', 'h', 'o', 'h', 's', 'k', 'd', 'q', 'w', 'o', 'd', 'n', 'h' (2 votes)
- When did Caesar start using this cipher? (3 votes)
- He used it whenever he needed to communicate with his officers, but I'm not sure when he started to use it. (5 votes)
- I am not able to understand how shifts are determined using frequency. (3 votes)
- Well, for example, E is the most common letter in the English alphabet. So if you look at some cryptotext and see that C is the most common symbol, you might infer that C actually represents the letter E. Shifting by -2 would give that result (E --> D --> C), so you might try shifting by +2 to decrypt. (4 votes)
- Is Caesar's code still used in today's world? If so, what sort of things use it? (3 votes)
- What does "obvious" mean? Because I'm in 1st grade. (1 vote)
SPEAKER 1: The first well known cipher, a substitution cipher, was used by Julius Caesar around 58 BC. It is now referred to as the Caesar Cipher. Caesar shifted each letter in his military commands in order to make them appear meaningless should the enemy intercept them. Imagine Alice and Bob decided to communicate using the Caesar Cipher. First, they would need to agree in advance on a shift to use -- say, three.
So to encrypt her message, Alice would need to apply a shift of three to each letter in her original message. So A becomes D, B becomes E, C becomes F, and so on. This unreadable, or encrypted message, is then sent to Bob openly. Then Bob simply subtracts the shift of three from each letter in order to read the original message. Incredibly, this basic cipher was used by military leaders for hundreds of years after Caesar. JULIUS CAESAR: I have fought and won. But I haven't conquered over man's spirit, which is indomitable. SPEAKER 1: However, a lock is only as strong as its weakest point. A lock breaker may look for mechanical flaws. Or failing that, extract information in order to narrow down the correct combination. The process of lock breaking and code breaking are very similar. The weakness of the Caesar Cipher was published 800 years later by an Arab mathematician named Al-Kindi. He broke the Caesar Cipher by using a clue based on an important property of the language a message is written in. If you scan text from any book and count the frequency of each letter, you will find a fairly consistent pattern. For example, these are the letter frequencies of English. This can be thought of as a fingerprint of English. We leave this fingerprint when we communicate without realizing it. This clue is one of the most valuable tools for a codebreaker. To break this cipher, they count up the frequencies of each letter in the encrypted text and check how far the fingerprint has shifted. For example, if H is the most popular letter in the encrypted message instead of E, then the shift was likely three. So they reverse the shift in order to reveal the original message. This is called frequency analysis, and it was a blow to the security of the Caesar cipher.
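The shift-and-count procedure from the video can be written in a few lines of Python. Note the assumption raised in the questions above: frequency cracking needs enough text that the plaintext's dominant letter (E in typical English) actually dominates, so short messages defeat it. The function names and the sample message are mine, not Khan Academy's.

```python
from collections import Counter
from string import ascii_uppercase as ABC

def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping Z around to A;
    anything that is not a letter passes through unchanged."""
    return "".join(ABC[(ABC.index(c) + shift) % 26] if c in ABC else c
                   for c in text.upper())

def crack_shift(ciphertext, expected="E"):
    """Guess the shift by assuming the most frequent cipher letter is
    the encryption of `expected` (E for typical English plaintext)."""
    counts = Counter(c for c in ciphertext.upper() if c in ABC)
    top_letter = counts.most_common(1)[0][0]
    return (ABC.index(top_letter) - ABC.index(expected)) % 26

message = "MEET ME AT ELEVEN NEAR THE THEATRE"
ciphertext = caesar(message, 3)      # MEET becomes PHHW with a shift of 3
print(ciphertext)
print(crack_shift(ciphertext))       # frequency analysis recovers 3
print(caesar(ciphertext, -crack_shift(ciphertext)))
```

Note that caesar("MEET", 3) gives PHHW, the corrected sequence from the comment thread above, not the PHHN shown in the video.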
http://forums.linuxmint.com/viewtopic.php?f=49&t=16031
Sanva wrote:Hello everybody! I have an Asus Eee 900 with Linux Mint and I don't know how could I do to use the webcam. I have installed a modified version of Linux Kernel (it supposed to has the correct drivers to Eee's hardware: http://www.array.org/ubuntu/index.html), but its the same... it doesn't work. What could I do?? Thanks a lot for your time. Sanva wrote:Is not uvcvideo module a driver to manage the webcam?? (In 2.6.24-20-eeepc is now available! at http://www.array.org/ubuntu/index.html). Sanva wrote:what make is the camera in your EeePC ? Sorry, but I don't understand the question... I don't understand what make is. Could you ask me the same with another words??
https://finanznachrichten.online/deploy-ghost-in-a-spot-namespace-on-rsaas-from-your-browser/
In this post I'm going to show you how you can run Ghost, a headless CMS, or your own apps in a namespace by defining the proper requests and limits in the deployment manifest file of Ghost or your own app on RSaaS.
Click on the project, and in the navigation menu bar click on Namespaces; then in the top right corner click on the blue "Add Namespace" button. Provide your user id, or any other name you'd like, for the namespace. If a namespace like my-namespace already exists, you'll get a warning notice from the system and you should provide another name. We recommend providing your user id as the namespace name. In our next release we'll create the namespace with your user id for you.
Now that we have created a namespace, we're ready to deploy our Ghost app. First you need to select the cluster and click on the "Launch kubectl" button. With that we'll have shell access to the cluster in our browser. With the following kubectl command, you can switch to your namespace:
kubectl config set-context --current --namespace <your namespace>
And deploy Ghost like this:
kubectl create -f https://raw.githubusercontent.com/kubernauts/practical-kubernetes-problems/master/3-ghost-deployment.yaml
Please note that in the spec part of the manifest we're setting the limits for CPU and memory in the resources section of our ghost image; otherwise you will not be able to deploy the app.
- image: ghost:latest
To see if our deployment was successful, we can run:
kubectl get deployment
Now we can create an L7 Ingress to Ghost easily in Rancher by selecting Resources → Workloads → Add ingress. In this example we're going to tell Rancher to generate a .xip.io hostname for us. After a couple of seconds the L7 Ghost ingress is ready to go; click on the ghost-ingress-xyz-xip.io link and enjoy!
If you have any questions, we'd love to welcome you to our Kubernauts Slack channel and be of help. We love people who love building things, New Things!
If you wish to work on New Things, please do visit our job offerings page. Deploy Ghost in a Spot Namespace on RSaaS from your browser was originally published in Kubernauts on Medium, where people are continuing the conversation by highlighting and responding to this story. Read more about Kubernetes services, Kubernetes training and Rancher dedicated as a Service at https://blog.kubernauts.io/deploy-ghost-in-a-spot-namespace-on-rsaas-from-your-browser-d9376db342e0?source=rss—-d831ce817894—4
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00135.warc.gz
CC-MAIN-2020-29
2,463
22
http://serverfault.com/questions/tagged/storage-spaces+windows
code
Can Storage Spaces be used for Hyper-V VMs?
I'm planning to update my server from Win Server 2008 R2 to 2012 when it is released. I'm very interested in the new "Storage Spaces". The main question I have is whether I can mount a storage space ...
Jul 13 '12 at 14:57
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776401292.48/warc/CC-MAIN-20140707234001-00022-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
2,282
53
http://slashdot.org/~Fesh/firehose
code
And I already stated in my first reply that IMHO your success has little to do with the training and a lot to do with the continuous follow-ups you do. Also with an environment that is not business-focussed. This does not match what you state later, which in essence claims that all 3,000 people in your company need in-depth knowledge of your security policy. That is, plainly, nonsense. Corporate "Security Awareness Training" has to address the needs of _many_, and not everyone needs that level of detail. In fact very few do, and a small percentage could even understand them. Which could explain your repeated claims of bad experiences. Jane and John, the new accountants, need to know what phishing is, not what your encryption policy for tape backup is. You previously complained that for you it was redundant and so "stupid" (your words). Stop moving the goalposts. What I mean is that we replace actual security with trainings and think it's a solution. Security awareness training is not a replacement for security. If a company believes it does, this matches what I stated repeatedly about a broken culture. Not a security or training deficiency. Sure I have my own view and experiences and my attitude is the result of what I've seen and what I think about it. Also the result of knowing a lot of people in the IT consulting business privately, where they tell you what they really think. I know plenty that underscore how bad corporate cultures are and can be. Any corporate-level trainer will tell you the same thing. You have to train everyone in the basics. After they have a grasp of basics, reminders and nudges from audits work. A reminder about phishing attacks will be ignored by people that don't know what phishing is or how it works. Reminders to follow the password policy will be ignored by people that don't know the policy. Finally, as stated previously, there are plenty of people that contribute to poor culture.
The guys that talk smack about the training because they know it all are a huge issue. You have to build a culture of security if you want to be secure. That will never happen with a crew of sexual intellects (F'king know-it-alls) discouraging knowledge sharing and personal growth.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266597.23/warc/CC-MAIN-20140728011746-00316-ip-10-146-231-18.ec2.internal.warc.gz
CC-MAIN-2014-23
2,227
9
https://apisyouwonthate.com/jobs/junior-backend-engineer-constellr-1581125
code
Europe, Germany, Belgium Constellr · Junior Backend Engineer A career at constellr is for those who are seeking the extraordinary. We are operating in the new space and agri-tech sectors, combining engineering, Earth Observation and agronomy to address one of the biggest challenges of our time: how to ensure global food security. We are a fast-growing team of highly motivated engineers, scientists, and business professionals with a shared vision to rebase decision making in tomorrow’s smart farming industry. constellr has offices in the European capital, Brussels, situated between the attractive Etterbeek and Watermael districts and in beautiful Freiburg, in south west Germany, bordering the Black Forest and near France and Switzerland. About the team To achieve our goals, we are building a state-of-the-art data platform, which transfers our satellite data to the cloud, where we process, enrich, and deliver analysis-ready geospatial data to internal and external customers. Our team consists of cloud, Data and backend engineers, reporting into our Head of Data Platform. Ideally you will be based in Belgium or Germany, however we can be flexible across Europe for specific geospatial, earth observation and remote sensing experience. The team are mainly based across Germany & Belgium. About this role You will have a key role in designing and implementing the data platform, the backbone of the business using modern technology. This will contribute to bringing the world’s first commercial infrared constellation into orbit / enabling an entirely novel set of smart farming capabilities / building a new biophysical atlas of the biosphere, driving an unprecedented understanding of our planet. As our Junior Backend Engineer, you will be: - Working closely with other backend engineers and stakeholders from the product, infrastructure, and image segment teams to ensure their requirements are being met. 
- Implement the services necessary to run our Data Platform, as well as our development environment.
- Keep up to date with and evaluate new technologies, frameworks and standards.
- Bachelor or higher degree in Computer Science or related field, or relevant work experience.
- Good programming skills; familiar with shell.
- An understanding of services and databases.
- Familiar with the complete software development process.
- Good knowledge of the different CI/CD tools available in the market (e.g., GitLab CI, GitHub Actions, CircleCI).
Nice to have:
- Experience in the Earth Observation sector and/or NewSpace startups.
- Concrete experience in microservices, APIs, REST, RPC.
- Understanding of cloud services (e.g., AWS, GCP).
- Knowledge of containerisation and orchestration (Kubernetes and Docker).
What we offer you here at constellr
You will be part of a fast-growing, multi-disciplinary and diverse team with a strong desire to change the world. You will be given the autonomy to set and deliver against your own goals, develop new skills and capabilities and lead others to do the same. You can directly influence your contribution to achieving several of the UN's sustainable development goals, including Zero Hunger, Clean Water, No Poverty and Climate Action. You will tackle challenges and contribute to delivering the world’s first commercial infrared constellation into orbit, enabling an unprecedented understanding of our planet and its resources.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00798.warc.gz
CC-MAIN-2023-14
3,370
29
https://help.approvalbot.com/en/articles/6375595-managing-admins
code
For subscriptions on Starter or Pro you can create a group of users who will have admin permissions to the Approvals App. On the Free plan only the account owner has admin access. This allows them to view all reports and configure all settings.
To create an admin group:
- Navigate to Settings -> Groups.
- Name your admin group, e.g. Approvals Admins.
- Toggle the Admin flag to on.
- Use the search box to add users to your admin group.
You can update the users in your admin group at any time via the Group Settings.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00471.warc.gz
CC-MAIN-2023-14
504
7
https://vectormap.info/vector_map_manual/working-with-projections-2/
code
QGIS has support for approximately 2,700 known CRSs. Definitions for each CRS are stored in a SQLite database that is installed with QGIS. Normally, you do not need to manipulate the database directly. In fact, doing so may cause projection support to fail. Custom CRSs are stored in a user database. See section Custom Coordinate Reference System for information on managing your custom coordinate reference systems.

The CRSs available in QGIS are based on those defined by the European Petroleum Survey Group (EPSG) and the Institut Geographique National de France (IGNF) and are largely abstracted from the spatial reference tables used in GDAL. EPSG identifiers are present in the database and can be used to specify a CRS in QGIS.

In order to use OTF projection, either your data must contain information about its coordinate reference system or you will need to define a global, layer or project-wide CRS. For PostGIS layers, QGIS uses the spatial reference identifier that was specified when the layer was created. For data supported by OGR, QGIS relies on the presence of a recognized means of specifying the CRS. In the case of shapefiles, this means a file containing the well-known text (WKT) specification of the CRS. This projection file has the same base name as the shapefile and a .prj extension. For example, a shapefile named alaska.shp would have a corresponding projection file named alaska.prj.

Whenever you select a new CRS, the layer units will automatically be changed in the General tab of the Project Properties dialog under the Project (Gnome, OS X) or Settings (KDE, Windows) menu. QGIS starts each new project using the global default projection. The global default CRS is EPSG:4326 - WGS 84 (proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs), and it comes predefined in QGIS. This default can be changed via the [Select...] button in the first section, which is used to define the default coordinate reference system for new projects, as shown in figure_projection_1.
This choice will be saved for use in subsequent QGIS sessions. When you use layers that do not have a CRS, you need to define how QGIS responds to these layers. This can be done globally or project-wide in the CRS tab under Settings ‣ Options. If you want to define the coordinate reference system for a certain layer without CRS information, you can also do that in the General tab of the raster and vector properties dialog (see General Menu for rasters and General Menu for vectors). If your layer already has a CRS defined, it will be displayed as shown in Vector Layer Properties Dialog . QGIS supports OTF reprojection for both raster and vector data. However, OTF is not activated by default. To use OTF projection, you must activate the Enable on the fly CRS transformation checkbox in the CRS tab of the Project Properties dialog. If you have already loaded a layer and you want to enable OTF projection, the best practice is to open the CRS tab of the Project Properties dialog, select a CRS, and activate the Enable ‘on the fly’ CRS transformation checkbox. The CRS status icon will no longer be greyed out, and all layers will be OTF projected to the CRS shown next to the icon. The CRS tab of the Project Properties dialog contains five important components, as shown in Figure_projection_2 and described below: If you open the Project Properties dialog from the Project menu, you must click on the CRS tab to view the CRS settings. This manual describes the use of the proj.4 and related command line utilities. The cartographic parameters used with proj.4 are described in the user manual and are the same as those used by QGIS. The Custom Coordinate Reference System Definition dialog requires only two parameters to define a user CRS: To create a new CRS, click the Add new CRS button and enter a descriptive name and the CRS parameters. Note that the Parameters must begin with a +proj= block, to represent the new coordinate reference system. 
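As an illustration (not taken from the manual), a custom Transverse Mercator CRS could be defined with a proj string such as:

```
+proj=tmerc +lat_0=0 +lon_0=9 +k=0.9996 +x_0=500000 +y_0=0 +ellps=GRS80 +units=m +no_defs
```

The parameter values here are hypothetical; the required part is the leading +proj= block naming the projection.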
You can test your CRS parameters to see if they give sane results. To do this, enter known WGS 84 latitude and longitude values in North and East fields, respectively. Click on [Calculate], and compare the results with the known values in your coordinate reference system. QGIS asks which transformation to use by opening a dialogue box displaying PROJ.4 text describing the source and destination transforms. Further information may be found by hovering over a transform. User defaults can be saved by selecting Remember selection.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886124662.41/warc/CC-MAIN-20170823225412-20170824005412-00267.warc.gz
CC-MAIN-2017-34
5,713
9
http://www.forensicswiki.org/w/index.php?title=Document_Metadata_Extraction&direction=prev&oldid=8520
code
Document Metadata Extraction Revision as of 00:35, 5 October 2008 by Simsong Here are tools that will extract metadata from document files. - Extracts metadata from various Microsoft Word files (doc). Can also convert doc files to other formats such as HTML or plain text. - pdfinfo (part of the xpdf package) displays some metadata of PDF files.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864148.93/warc/CC-MAIN-20180621114153-20180621134153-00337.warc.gz
CC-MAIN-2018-26
346
5
https://technsight.com/the-devops-dilemma-can-sourcegraph-handle-the-heat-of-90-repositories/
code
“Given our complex infrastructure, which encompasses over 90 repositories, and a DevOps team of approximately 15 members, would you advise the implementation of Sourcegraph to manage our codebase?” In the realm of software development, managing a sprawling codebase can be a daunting task. For a DevOps team overseeing 90+ repositories, the challenge is not just about maintaining the code but also ensuring efficient navigation, comprehension, and collaboration. Sourcegraph: A Beacon in the Chaos: Sourcegraph emerges as a beacon of clarity in this chaos. It’s a tool designed to make sense of the tangled web of code that teams like yours grapple with. By indexing your codebase, Sourcegraph allows developers to search across all repositories with the speed and precision that simple text searches can’t match. One of Sourcegraph’s standout features is its code intelligence capabilities. It provides hover tooltips, definitions, and references for symbols within your code. This feature alone can save countless hours that your team might otherwise spend on trying to understand how different parts of the codebase interact with each other. Enhanced Code Reviews: For a team of 15, code reviews are critical. Sourcegraph’s code review features integrate with your existing tools, making the review process more thorough and less time-consuming. It allows for better-informed reviews and discussions, which is vital when dealing with a large number of repositories. Continuous Integration (CI) and Continuous Deployment (CD): In a DevOps setting, CI/CD pipelines are the heartbeat of the operation. Sourcegraph can integrate into these pipelines, providing automated checks that can help prevent bugs from reaching production. This integration can be a game-changer for maintaining high-quality code in a complex system. Collaboration Across Teams: Sourcegraph also fosters collaboration. It allows team members to share queries and insights, creating a shared understanding of the codebase. 
This shared context is invaluable, especially when onboarding new team members or when teams are distributed. Given the complexity of your infrastructure and the size of your DevOps team, Sourcegraph is not just recommended; it’s practically essential. It’s a tool that scales with your codebase, bringing order to chaos and fostering an environment of efficiency and collaboration. Implementing Sourcegraph could be the turning point for managing your codebase more effectively. In conclusion, Sourcegraph is well-suited for managing large, complex codebases and can significantly improve the productivity and collaboration of DevOps teams. Its implementation in your infrastructure seems not only beneficial but necessary for maintaining a high standard of code quality and development practices.
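As an illustration of the kind of cross-repository search described above, a Sourcegraph query might look like this (the repository prefix and symbol are made up):

```
repo:^github\.com/acme/ lang:go context.WithTimeout
```

A single query like this spans all indexed repositories matching the regex, which is what makes code search practical across 90+ repositories.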
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817103.42/warc/CC-MAIN-20240416155952-20240416185952-00257.warc.gz
CC-MAIN-2024-18
2,810
13
http://sanjaykumar.adaantest1.com/category/artist-dating-nl-brand1-app-2/
code
Mocospace: go online to socialize, chat, & play games 100% free. Mocospace lets you play video games, make friends, and chat online at no cost. Do you want to learn how to set up your Mocospace account online for free? If that is the case, you are welcome on our page. We will take you step by step through what is required to create a free account, log in, and recover a lost password. Table of contents: About the Mocospace signup; communicate with new people once you register with Moco, a large, varied online dating platform.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00495.warc.gz
CC-MAIN-2022-40
641
7
https://www.frontiersin.org/articles/10.3389/frobt.2022.1024594/full
code
BRIEF RESEARCH REPORT article
Sec. Human-Robot Interaction
Volume 9 - 2022 | https://doi.org/10.3389/frobt.2022.1024594
Safety considerations for autonomous, modular robotics in aerospace manufacturing
- Fraunhofer Institute for Factory Operation and Automation IFF, Robotic Systems, Magdeburg, Germany
Industrial robots are versatile machines that can be used to implement numerous tasks. They have been successful in applications where–after integration and commissioning–a more or less static and repetitive behaviour in conjunction with closed work cells is sufficient. In aerospace manufacturing, robots still struggle to compete against either specialized machines or manual labour. This can be attributed to complex or custom parts and/or small batch sizes. Here, applicability of robots can be improved by enabling collaborative use-cases. When fixed protective fences are not desired due to handling problems of the large parts involved, sensor-based approaches like speed and separation monitoring (SSM) are required. This contribution is about how to construct dynamic volumes of space around a robot as well as around a person in such a way that their combination satisfies the required separation distance between robot and person. The goal was to minimize said distance by calculating volumes both adaptively and as precisely as possible given the available information. We used a voxel-based method to compute the robot safety space that includes worst-case braking behaviour. We focused on providing a worst-case representation considering all possible braking variations. Our approach to generate the person safety space is based on an outlook for 2D camera, AI-based workspace surveillance. Our goal is to use the SSM method to implement collaborative applications that allow humans and robots to work as spatially close to each other as possible.
To achieve this, the necessary safety distances should be determined as precisely as possible, uncertainties should be minimized, and areas should be dynamically adapted to the situation. Many original works around SSM implementations address one of the sub-problems: collision or distance calculations Glogowski et al. (2019), robot control in terms of stopping or avoiding Ubezio et al. (2021), workspace monitoring in terms of detecting and sensing approaching objects or people Ferraguti et al. (2020). A good overview can be found in Miro et al. (2022). We are trying to work towards a feasible approach for co-design of aspects. This mainly concerns the computation of relevant spaces as well as the sensory workspace monitoring with the help of cameras in a form that promotes the interaction of the two aspects. With respect to the control of the robot, we do not rely on any kind of avoidance behavior, which in turn results in more complexity in the interaction with approaching persons and can thus be a source of uncertainty. Instead, we use the approach of stopping the robot as fast as possible. This approach should be implemented by all other behaviors anyway as a fallback possibility (failure of the equipment during the safety-monitored avoidance movement), whereby these systems can basically not be better in terms of improving the cooperation with the human by further reduced distances. In aerospace manufacturing, the handling of large parts is a common occurrence Caterino et al. (2021). There is also low throughput compared to other industries. This leads to parts being stationary for some time while work is taking place around. A lot of work is carried out by human workers. When introducing robot-based automation for some of the tasks, the capability of close human-robot-collaboration and co-existence is beneficial Costanzo et al. (2022), Meißner et al. (2018). 
When reducing the overall robot speed is not desired, this leaves the options of minimizing separation distance by eliminating uncertainties, making it dynamic, and using capable sensors (German Commission for Electrical, Electronic and Information Technologies of DIN and VDE (2021)). Sensors should then be able to monitor position and movement of the persons in question in detail. Investigations of separation distances using different approaches were covered in the past Lacevic and Rocco (2010), Vicentini et al. (2014). More recent work also tries to exploit advances in pattern detection and recognition for safety applications Costanzo et al. (2022). In the following section of this work we present a brief analysis of a particular use-case of intelligent robotics applicable to pre-assembly as well as final assembly of aircraft structures. The use of both fixed as well as mobile robots are being considered. The application is covering the fastening of HI-LOK™ collars. Here, it is beneficial to employ human co-workers in parallel with robots Caterino et al. (2021). Next, we propose a method for the dynamic generation of first spatial volume around the robot based on pre-planned movement. This is a vital step for implementing a flexible and safe SSM-system. The need for dynamic generation of separation distance is also emphasized by the dynamic behaviour of the robot using autonomously generated actions based on models and environment perception. In the final part, we discuss how to detect and monitor the presence of persons in the vicinity using optical sensors. We discuss the possibility of using artificial intelligence (AI) based detection of humans using cameras. Furthermore, we present our current approach to construct another spatial volume representing the human based on a projection of the convex hull of the image space silhouette onto the ground floor. 
2 Analysis of a collaborative application
When we consider safety of robotics systems, it is mandatory to follow the principles laid out in the Machinery Directive EC (2006). The risk assessment is therefore specific to a particular implementation of a robot system, but contains recurring risks and mitigation measures. A major source of risk is mechanical hazards, like the collision of the robot with a person. Speed and separation monitoring aims to mitigate that risk by preventing the robot from contacting a person close by while in motion. To better understand safety requirements, and in particular to evaluate implementations and possible improvements of speed and separation monitoring, we considered several possible implementations of the same application. We decided on the fastening of HI-LOK™ collars as the application. We consider this application because it is a common type of fastener used on many different parts of the fuselage. Some are more difficult to reach than others. Therefore it presents a suitable case for combining the different strengths of human workers and automatic solutions for working on the same product and in conjunction with shared work spaces (Figure 1).
FIGURE 1. Co-existence with human workers using fixed sensor placement for separation (A); Autonomous robot with fully dynamic safety space (B); Implementation using a non-collaborative setup (C).
We considered three possible variants: A non-collaborative implementation using fences (Figure 1C), a fixed robot with light curtains (Figure 1A), and an autonomous mobile robot with dynamic safety space (Figure 1B). The first variation would employ a large robot or a robot with a workspace extension via a linear unit in order to cover large shell pieces. It has no further implications for SSM. The other two variations use a small or medium-sized robot which needs to be relocated multiple times in order to cover a large part.
The third variation in particular would use a smaller type of robot because the mobile platform can easily move between each fastening step, which in turn requires less reach of the actual robot arm. Here, power and force limiting would also be a strategy to mitigate collisions. In our case, SSM is still preferable since it better covers a wider range of tool-related hazards, including non-mechanical ones. In this implementation, the robot needs to be made autonomous. It can reposition itself along the whole part as required based on situation-dependent decision making. A human worker follows along at a distance in order to cover remaining work (parts not reachable for the robot). This, however, requires dynamic repositioning of monitored safety zones. Furthermore, it is beneficial for the separation between robot and co-worker to be as small as possible in order for the worker to finish work shortly after the robot. This also minimizes the risk of inadvertently triggering a safety stop by the worker. To summarize, the fully automated version behind fences is cumbersome when it comes to moving the part in and out. The other two versions show that it is beneficial to reduce the necessary space between worker and robot in order to cut total time required. This is because a larger separation distance results in less work to be done in parallel on the same part.
3 Robot safety space generation
In our approach we split the separation distance into two parts: “robot safety space” and “person safety space”. Both terms are not to be confused with other terms found in the literature or in robot manuals like “maximum space”, “operating space”, “restricted space”, and so on. Here, robot safety space identification is the task of calculating the volume of space that may be occupied by a moving part of the robot at a certain point in time in case of occurrence of a stop condition.
For a typical time triggered system this is aligned with activation times and is true for the duration of the cycle time. The stopping motion can be described by a swept volume. Similar swept volumes have been used for ensuring safety of whole movements in the past Täubig et al. (2011), but are usually used to cover a planned trajectory instead. The volume includes also the tool and parts attached to the tool. The robot safety space depends on the state of the robot, that is to say, on its point in time on the executed trajectory. It also depends on the performance characteristics of the equipment, like reaction times and braking capabilities Marvel and Norcross (2017). The contributing factors are laid out in the technical specification ISO/TS 15066 ISO (2016). For SSM, it provides a formula (Eq. 1) consisting of several summands for calculating the minimum required distance between any human worker and the robot system. It aims to stop the robot before an approaching human can touch the robot. It does not consider evasive movements. Nevertheless, a remark should be made that SSM cannot prevent humans from colliding with a robot that is in a stationary position after stopping. The separation distance of Eq. 1 is the sum S_p(t_0) = S_h + S_r + S_s + C + Z_d + Z_r, where S_h is the contribution from the operator's change in location, S_r the contribution from the robot system's reaction time, S_s the contribution from the robot's stopping distance, C the intrusion distance, and Z_d and Z_r the position uncertainties of the operator and the robot, respectively.
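To make the distance budget of ISO/TS 15066 concrete, here is a small illustrative calculation of the protective separation distance; all numeric values below are hypothetical examples, not values from the paper.

```python
# Illustrative ISO/TS 15066 protective separation distance for SSM.
# All numbers are hypothetical examples, not values from the paper.

def separation_distance(v_h, v_r, t_r, t_s, s_s, c, z_d, z_r):
    """S_p = S_h + S_r + S_s + C + Z_d + Z_r (worst-case directed speeds)."""
    s_h = v_h * (t_r + t_s)  # operator travel during reaction + stopping time
    s_r = v_r * t_r          # robot travel during the system reaction time
    return s_h + s_r + s_s + c + z_d + z_r

# 2 m/s stipulated operator speed, 1 m/s robot speed, 0.1 s reaction time,
# 0.3 s stopping time, 0.2 m stopping distance, 0.1 m intrusion distance,
# 0.05 m / 0.02 m position uncertainties for operator / robot.
s_p = separation_distance(v_h=2.0, v_r=1.0, t_r=0.1, t_s=0.3,
                          s_s=0.2, c=0.1, z_d=0.05, z_r=0.02)
print(round(s_p, 2))  # → 1.27
```

The operator term S_h dominates, which is why the text below argues that the stipulated 2 m/s assumption leads to considerable distance requirements.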
In contrast, workers may actually move their limbs, especially the arms, quite rapidly. This results in transient high speeds, exceeding the stipulated 2 m/s while being limited by the reach of the particular limb if the torso is not starting to move in the same direction as well. This leads to exaggerated separation distances. However, this only becomes a problem when actually performing live speed monitoring. In our approach, we consider the robot and the person safety space separately. The advantage is, that each part can be adapted to the needs or circumstance associated with either the robot or the sensor system used for detecting persons. However, the robot safety space is not completely independent from the sensor used. The sensors response time is also a contributing factor for the safety space. During sensor latency, the robot would move according to its designated trajectory. This means that we have to distinguish between occurrence of a stop event, i.e., the intersection of robot and sensor safety space, a trigger signal between both sub-systems, and the start of a stopping motion. Knowledge of the robot safety space is important for setting up an SSM-based HRC application. For a static set of pre-programmed trajectories, it is possible to consider the overall worst-case volume whereby all possible behavior variations of the robot are covered when stopping at any time during motion. In this case, safety barriers like light curtains can be placed at design-time to encapsulate the safety space. Although sensor performance, including spatial resolution and latency, need to be considered as well, this is a straight forward process. In the case of a dynamically generated movement, safety space is ideally done at run-time. Another possibility of handling dynamically generated motions would be to design it for a border-case and to perform a run-time check, whether or not the generated motion would be within these limits. 
The third case is the use of more complex sensor systems, which introduce constraints like occlusion. Here, the combination with dynamically generated motions is also possible. In order to deal with this general case, an online safety space calculation seems the most promising approach. In our case, we propose a voxel-based discretization in conjunction with a braking model that covers not only a controlled stop, but also handles the case of departing from the pre-determined trajectory by using dedicated (friction) brakes. This leads to larger safety spaces than assuming only the ideal braking situation. By using a precise geometric model as well as the exact trajectory followed by the robot, we can minimize the respective terms of the separation distance calculation.

We consider two different object types: environment objects that can be considered static, and dynamic collision objects (DCOBJ). The latter are links of serial robots, for which the voxelization is done by additionally applying braking calculations based on the specific robot model as well as its current motion state. They also include attachments like tools or large parts. To process DCOBJs, we implemented a multi-stage approach. It is based on a fast voxelization capability, as illustrated in Figure 2. At first, the swept volume representing the part of the trajectory covered during the reaction time of the detection system is generated. Here, the links are incrementally moved according to the pre-planned trajectory and the corresponding voxels are marked as occupied. Next, in an iterative process starting from the tool and working its way link by link backwards to the robot's base, the swept volume of the actuated link is generated and saved to a separate voxel structure. For all consecutive links, the previously generated swept volume is added to the polygonal model of the currently actuated link.
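This link-by-link accumulation of occupied voxels over sampled poses can be illustrated with a toy model. The sparse set-of-voxel-coordinates representation and the one-dimensional "pose" are simplifications for illustration only; the actual implementation works on dense voxel structures with conservative rasterization.

```python
def voxelize_points(points, voxel_size):
    """Map sampled surface points of a link to occupied voxel coordinates."""
    return {tuple(int(c // voxel_size) for c in p) for p in points}

def swept_volume(poses, sample_link, voxel_size):
    """Union of the link's voxelization over all sampled poses of the motion.

    poses: iterable of poses along the (braking) trajectory
    sample_link: function mapping a pose to surface sample points
    """
    occupied = set()
    for pose in poses:
        occupied |= voxelize_points(sample_link(pose), voxel_size)
    return occupied

# Toy example: a single surface point swept 2 m along x in 0.5 m steps,
# rasterized into 1 m voxels
sweep = swept_volume([i * 0.5 for i in range(5)],
                     lambda x: [(x, 0.0, 0.0)],
                     voxel_size=1.0)
# sweep now covers voxels (0,0,0), (1,0,0), (2,0,0)
```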
Rasterization of polygonal models, as well as resampling of the voxel structure from the previous iteration, is done by applying conservative rasterization. This prevents thin primitives from partially disappearing because they may not cover a voxel center. These steps aim to create a volume structure that represents not only the robot geometry at a single point in time on its trajectory, but also the space potentially required when braking from that exact moment until standstill, in all combinations of braking distances for each link. This gives us the worst-case volume of space that may be occupied by a moving part during braking.

For this, information on the braking performance of the robot under the specific circumstances is required. The robot type, attached payload, joint configuration, and speed of movement influence the braking time and the residual movements of each of the individual joints. While braking at slow speed can be nearly instantaneous, the kinetic energy that needs to be dissipated when braking at full speed is much higher. This puts stress not only on the motors or brakes and on the overall structure of the robot, but also on the mount or fixture where the robot is attached. The many contributing factors lead to typically conservative specifications of worst-case braking distances by the robot manufacturer. Usually, the documentation contains a table that provides the necessary information for exemplary payloads, speeds, and extensions. The extension basically refers to the distance of the payload from the base. For the given starting point on the trajectory, we look up the worst-case bracket from the provided table and use the resulting information as input for our swept volume calculation. In a final step, uncertainty of the robot position is added to the voxel structure by also marking all voxels within that distance of occupied voxels as occupied. The algorithm is provided in pseudo-code in Algorithm 1.
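Two details of this stage can be sketched in isolation: the conservative bracket lookup in a manufacturer braking table, and the final dilation of the voxel structure by the position uncertainty. The table values and the one-parameter table are made up for illustration; real values come from the robot's safety documentation and are indexed by payload and extension as well.

```python
import bisect

# Hypothetical braking table: joint braking distance (rad) per speed bracket
SPEEDS = [0.33, 0.66, 1.00]      # fraction of maximum joint speed
BRAKE_DIST = [0.10, 0.30, 0.80]  # rad

def worst_case_brake_distance(speed_frac):
    """Pick the next-larger speed bracket so the result stays conservative."""
    i = min(bisect.bisect_left(SPEEDS, speed_frac), len(SPEEDS) - 1)
    return BRAKE_DIST[i]

def dilate(voxels, r):
    """Mark all voxels within Chebyshev distance r of occupied voxels."""
    return {(x + dx, y + dy, z + dz)
            for (x, y, z) in voxels
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            for dz in range(-r, r + 1)}
```

For example, a joint running at 50% speed falls into the 66% bracket and is assigned the corresponding, conservative braking distance.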
In the case of a sensor-guided movement, the planned trajectory is perturbed during execution by the sensor input. Our approach could handle such applications as well by sampling from all the possibilities of the typically 2D sensor input. Even though inefficient, this covers the amplification of Cartesian deviations at the tool by the robot structure and could be optimized in the future.

FIGURE 2. Intermediate steps of the robot safety space generation (A–F); Polygonal representation (A), voxel representation (B), simplification using bounding boxes (C), swept volume when braking the 3rd joint (D), combined result of braking the 1st and 3rd joint (E), space covered by an ideal controlled stop (F).

To summarize the robot safety space generation, two points are notable: The use of detailed polygonal models of the robot and attachments is beneficial compared to using coarse approximations in the form of a few geometric primitives (see A and C in Figure 2). That is to say, simplification is implicit when converting the polygonal models to a voxel representation. The advantages are that no additional collision models are needed and no unnecessary padding is included in the generated volume. Another notable fact is that our algorithm easily generates the volume that covers all possible variations of braking behavior. While the resulting volume is usually larger than the ideal behavior in stop category 1 or 2 (see E and F in Figure 2), the result provides better safety because it covers category 0 stops as well.

4 Workspace monitoring and person safety space

For generating a detailed representation of the state of the person, we considered the capabilities of camera-based detection. Cameras deliver rich information with high spatial resolution. They can also be built with high frame rates and thus small reaction times, which is of particular importance for workspace monitoring, as we laid out in the previous section.
Significant progress has been made in object detection by applying artificial intelligence (AI) based on machine learning. The perception of humans in the workspace of robots is required to rate situations differently. Camera-based systems like 2D cameras (color, gray scale) or depth cameras (RGB-D cameras) are used to capture the robot's environment. The evaluation of the data from the camera system can be done with the latest AI-based systems. Here, machine learning (ML) methods such as deep learning are better suited than classical image processing methods for solving the various tasks in image recognition, such as identifying a variety of objects in cluttered environments or under changing lighting. There is a wide range of tasks in computer vision, and to determine which model can solve which task, we need to define the tasks we want to solve (Figure 3). The simplest task for camera-based data is image classification (Figure 3A), in which only a single camera image is considered: if a person is recognized, the system must activate an emergency stop. Classification models were introduced by Krizhevsky et al. (2012), Simonyan and Zisserman (2014), and Huang et al. (2017); their result refers to the whole image. To extract more detailed information from the image data, other methods for detection (Figure 3B) and segmentation (Figure 3C) can be used to analyze the robot's environment. The detection networks localize objects in the image with the additional information of the classification within the estimated 2D pixel coordinates (Ren et al. (2015), Liu et al. (2016), Redmon et al. (2016), Duan et al. (2019), Tan et al. (2020)). In Long et al. (2020), these models were tested on the MS-COCO data set of Lin et al. (2014) with various objects and people. As ML methods and models are constantly evolving, this provides a general overview of the methods' performance.
Next, algorithms for pixel-wise classification (segmentation) are used to separate objects from the background. Representatives of segmentation networks were introduced by Ronneberger et al. (2015), He et al. (2017), Badrinarayanan et al. (2017), and Wang et al. (2020). Other models determine the segmentation of individual body parts, such as Lin et al. (2017), Güler et al. (2018), and Oved (2019). Additionally, recognizing the human's kinematic state is beneficial, so that the estimation is not based on the human's position alone. Here, it can be determined where the limbs of the human are and whether they are in a vulnerable position. However, the main advantage of acquiring the person's kinematic state is to differentiate between different implications for possible separation distance violations: rapid movement of the hand is limited by arm length, but movement of the whole torso is not. Network architectures that are able to capture the person's limbs to generate a topological skeleton (Figure 3D), as in Kendall et al. (2015), Cao et al. (2017), Güler et al. (2018), and Li et al. (2019), are available. Information is typically generated as 2D key-point coordinates, so an additional distance estimation is required in order to generate world coordinates. Here, deep learning methods can directly determine a 3D position of the human in world coordinates, or they can accomplish the construction of volumetric models (Saito et al. (2020), Suo et al. (2021)). Neural Radiance Fields (NeRFs) Mildenhall et al. (2020), Gao et al. (2022) are a recent technique to generate 3D-like representations from a set of 2D images of an object or scene.

FIGURE 3. Various computer vision tasks for human detection: classification (A), detection (B), segmentation (C), skeleton (D).

The extent of the problem to be solved - detection by simple classification up to 3D reconstruction of limbs - has implications for accuracy, required computing power, and remaining uncertainty.
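The benefit of knowing the kinematic state can be quantified with a simple bound: a hand moves fast but only as far as the arm reaches, while torso motion is not bounded by reach. The following sketch, with hypothetical speed and reach values, shows why a limb-aware bound can yield smaller intrusion distances than a blanket whole-person speed.

```python
def intrusion_bound(v_torso, v_limb, reach, t):
    """Worst-case distance a body part can close on the robot within time t.

    The torso term v_torso * t is unbounded by reach; a limb can add at
    most min(v_limb * t, reach) on top of the torso motion.
    """
    return v_torso * t + min(v_limb * t, reach)

t = 0.5  # hypothetical reaction plus stopping time in seconds
blanket = 2.0 * t  # blanket 2 m/s assumption for the whole person
limb_aware = intrusion_bound(v_torso=1.0, v_limb=4.0, reach=0.3, t=t)
# limb_aware (0.8 m) stays below blanket (1.0 m) although the limb is faster
```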
Other aspects relate more to hardware issues: camera resolution and mounting distance, dynamic range, frame rate, and integration time. The dynamic range of standard cameras is still too small to easily cope with shadows, artificial light, and direct sunlight in the same scene. The resolution is a trade-off between clearly resolving limbs, the frame rate, and the input resolution of the network, and thus the available compute resources. To ensure a large viewing area and to avoid occlusion, at least two synchronized cameras should have an overlapping viewing area. In order to avoid occlusion problems and for a simplified distance calculation, we favor ceiling-mounted cameras facing downwards. This approach is applicable to both whole-body detection and pixel-wise classification. However, it needs to be extended for differentiating limb movements.

We generate the person safety space again in multiple stages (Figure 4). The first step is selecting an available camera that is not obstructed by either the robot safety space or other structures. We then detect the presence of a human using multiple models running in parallel. Next, the human is segmented from the background (Figure 4A) and we compute the convex hull of that silhouette. A spatial volume is then constructed by projecting lines from the camera point of view through the generated hull onto the ground floor. In a final step, the resulting pyramid (Figure 4B) is thickened on all sides by adding the sensor uncertainty as a safety margin, and finally the hypothetical distance the person could move during the combination of reaction and braking time. The braking time is dynamic and is taken from the previous step of calculating the robot safety space. Both volumes of space can then be used together to check whether or not they touch each other. If this is the case, the minimum separation distance would be reached and the stopping of the robot would need to be commenced.
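The projection of a 2D hull point onto the ground plane, and the margin added afterwards, can be sketched with a pinhole model. The camera parameters and margin values below are hypothetical; the sketch only shows the geometry of casting a ray from a downward-facing ceiling camera through an image point to the floor.

```python
def hull_point_to_floor(u, v, fx, fy, cx, cy, cam_height):
    """Project image point (u, v) of a downward-facing ceiling camera
    onto the ground plane, with the camera at (0, 0, cam_height)."""
    # Ray direction in camera coordinates (camera looks straight down)
    dx = (u - cx) / fx
    dy = (v - cy) / fy
    # Scale the ray until it reaches the floor, cam_height below the camera
    return (dx * cam_height, dy * cam_height)

def safety_margin(sensor_uncertainty, v_person, t_reaction, t_braking):
    """Thickening applied to all sides of the projected pyramid."""
    return sensor_uncertainty + v_person * (t_reaction + t_braking)

# Hypothetical setup: camera 3 m above the floor, principal point (320, 240)
floor_pt = hull_point_to_floor(480, 240, fx=600, fy=600, cx=320, cy=240,
                               cam_height=3.0)
margin = safety_margin(0.05, v_person=2.0, t_reaction=0.1, t_braking=0.3)
```

Each hull vertex projected this way spans the footprint of the pyramid; inflating that footprint by `margin` yields the final person safety space.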
When multiple workers are present in the work area, an individual person safety space is generated for each of them and checked for intersection with the robot safety space. We presented results of an experimental setup for detecting humans in an industrial setting using machine learning techniques in Bexten et al. (2020).

FIGURE 4. Experiment for detecting a person using a top-down perspective and general concept of the workspace setup (A); Segmentation result with convex hull and top of 3D pyramid (B).

5 Conclusion and outlook

We have discussed the need for workspace monitoring and detailed separation distance calculation in order to enable intelligent robots in aerospace manufacturing. Application scenarios like the one mentioned in this paper benefit from the capability of human-robot collaboration, at least in the sense of co-existence in shared workspaces. The proposed method for 3D safety space generation, which covers all possibilities of braking modes, can already be used for analyzing static robot programs at design-time. Our approach to generating the person safety space is based on generating a 3D representation out of a 2D segmentation of a top-down image in a post-processing step using a silhouette-based algorithm. A future, safety-rated implementation of 2D image-based human detection would open up the possibility of deploying our approach. With further development, two interesting improvements are possible. The first one is related to the 3D conversion of detected 2D image regions. Here, recent AI-based techniques like NeRFs show promising results in generating 3D representations directly. When considering the reliability requirements of safety applications, a combination of multiple AI techniques in a redundant fashion seems to be the most promising approach for future implementations. In these scenarios, the presented algorithm can be applied to a combined 3D representation without any changes.
The second improvement is related to differentiating between the individual body parts of recognized persons. This would avoid unnecessarily large separation distances that result from treating every point on the body of a person the same. It would require a future safe implementation of techniques that observe the (approximate) state of a person's kinematic structure, like body part recognition, 3D key-point tracking, or similar. In order to implement the robot side of the approach in real-world applications, the typical commercial robot controllers currently in use need to be replaced. They lack both features and processing power. A robot controller with the capability of pre-planning the trajectory is needed. We also require the robot to safely monitor trajectory execution. The voxel-based computations are expensive in the sense that they require more compute power on the controller, in conjunction with a high-bandwidth interface to the sensor for volume data exchange.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

CW, SB, TF, MS, and NE contributed to conception and design of the study. CW implemented the safety space algorithms. SB performed the AI-recognition algorithm. CW wrote the first draft of the manuscript. CW, SB, MS, and TF wrote sections of the manuscript and provided images. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

This work was in part funded under grant number 20X1720C by the German Federal Ministry for Economic Affairs and Climate Action.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495. doi:10.1109/tpami.2016.2644615

Bexten, S., Saenz, J., Walter, C., Scholle, J.-B., and Elkmann, N. (2020). “Discussion of using machine learning for safety purposes in human detection,” in Proceedings of the 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 8-11 Sep. 2020, 1587–1593. doi:10.1109/ETFA46521.2020.9212028

Cao, Z., Simon, T., Wei, S.-E., and Sheikh, Y. (2017). “Realtime multi-person 2d pose estimation using part affinity fields,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 21-26 July 2017, 7291–7299.

Caterino, M., Chiacchio, P., Cristalli, C., Fera, M., Lettera, G., Natale, C., et al. (2021). Robotized assembly and inspection of composite fuselage panels: The LABOR project approach. IOP Conf. Ser. Mat. Sci. Eng. 1024, 012019. doi:10.1088/1757-899x/1024/1/012019

Costanzo, M., De Maria, G., Lettera, G., and Natale, C. (2022). A multimodal approach to human safety in collaborative robotic workcells. IEEE Trans. Autom. Sci. Eng. 19, 1202–1216. doi:10.1109/TASE.2020.3043286

Duan, K., Bai, S., Xie, L., Qi, H., Huang, Q., and Tian, Q. (2019). “Centernet: Keypoint triplets for object detection,” in Proceedings of the IEEE/CVF international conference on computer vision, Seoul, Korea, 27 Oct. 2019-02 Nov. 2019, 6569–6578.

EC (2006).
DIRECTIVE 2006/42/EC OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast). Luxembourg, Europe: Publications Office of the European Union.

Ferraguti, F., Talignani Landi, C., Costi, S., Bonfè, M., Farsoni, S., Secchi, C., et al. (2020). Safety barrier functions and multi-camera tracking for human–robot shared environment. Robotics Aut. Syst. 124, 103388. doi:10.1016/j.robot.2019.103388

Gao, K., Gao, Y., He, H., Lu, D., Xu, L., and Li, J. (2022). Nerf: Neural radiance field in 3d vision, a comprehensive review. Preprint. doi:10.48550/ARXIV.2210.00379

German Commission for Electrical Electronic and Information Technologies of DIN and VDE (2021). Safety of machinery - electro-sensitive protective equipment - Part 1: General requirements and tests. (IEC 61496-1:2020); German version EN IEC 61496-1:2020.

Glogowski, P., Lemmerz, K., Hypki, A., and Kuhlenkötter, B. (2019). “Extended calculation of the dynamic separation distance for robot speed adaption in the human-robot interaction,” in Proceedings of the 2019 19th International Conference on Advanced Robotics (ICAR), Belo Horizonte, Brazil, 02-06 Dec. 2019, 205–212. doi:10.1109/ICAR46387.2019.8981635

Güler, R. A., Neverova, N., and Kokkinos, I. (2018). “Densepose: Dense human pose estimation in the wild,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Salt Lake City, UT, USA, 18-22 June 2018, 7297–7306.

He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). “Mask R-CNN,” in Proceedings of the IEEE international conference on computer vision, Venice, Italy, 22-29 Oct. 2017, 2961–2969.

Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. (2017). “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 21-26 July 2017, 4700–4708.

ISO (2016). ISO/TS 15066:2016.
Robots and robotic devices—collaborative robots. Geneva, Switzerland: ISO.

Kendall, A., Grimes, M., and Cipolla, R. (2015). “Posenet: A convolutional network for real-time 6-dof camera relocalization,” in Proceedings of the IEEE international conference on computer vision, Santiago, Chile, 7-13 Dec. 2015, 2938–2946.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Adv. neural Inf. Process. Syst. 25, 1097–1105. doi:10.1145/3065386

Lacevic, B., and Rocco, P. (2010). “Kinetostatic danger field - a novel safety assessment for human-robot interaction,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18-22 Oct. 2010, 2169–2174.

Li, J., Wang, C., Zhu, H., Mao, Y., Fang, H.-S., and Lu, C. (2019). “Crowdpose: Efficient crowded scenes pose estimation and a new benchmark,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, Long Beach, CA, USA, 15-20 June 2019, 10863–10872.

Lin, G., Milan, A., Shen, C., and Reid, I. (2017). “Refinenet: Multi-path refinement networks for high-resolution semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 21-26 July 2017, 1925–1934.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., et al. (2014). “Microsoft coco: Common objects in context,” in Proceedings of the European conference on computer vision (Springer), Zurich, Switzerland, 6-12 Sep. 2014, 740–755.

Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., et al. (2016). “Ssd: Single shot multibox detector,” in Proceedings of the European conference on computer vision (Springer), Amsterdam, Netherlands, 8-16 Oct. 2016, 21–37.

Long, X., Deng, K., Wang, G., Zhang, Y., Dang, Q., Gao, Y., et al. (2020).
Pp-yolo: An effective and efficient implementation of object detector. arXiv preprint arXiv:2007.12099.

Marvel, J. A., and Norcross, R. (2017). Implementing speed and separation monitoring in collaborative robot workcells. Robotics Computer-Integrated Manuf. 44, 144–155. doi:10.1016/j.rcim.2016.08.001

Meißner, J., Schmatz, F., Beuß, F., Sender, J., Flügge, W., and Gorr, E. (2018). Smart human-robot-collaboration in mechanical joining processes. Procedia Manuf. 24, 264–270. doi:10.1016/j.promfg.2018.06.029

Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., and Ng, R. (2020). Nerf: Representing scenes as neural radiance fields for view synthesis. CoRR abs/2003.08934.

Miro, M., Glogowski, P., Lemmerz, K., Kuhlenkoetter, B., Gualtieri, L., Rauch, E., et al. (2022). “Simulation technology and application of safe collaborative operations in human-robot interaction,” in Proceedings of the ISR Europe 2022; 54th International Symposium on Robotics, Munich, Germany, 20-21 June 2022, 1–9.

Oved, D. (2019). Introducing BodyPix: Real-time person segmentation in the browser with TensorFlow.js. Available at: https://medium.com/tensorflow/introducing-bodypix-real-time-person-segmentation-in-the-browser-with-tensorflow-js-f1948126c2a0 (Accessed August 15, 2022).

Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, USA, 27-30 June 2016, 779–788.

Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. neural Inf. Process. Syst. 28, 91–99. doi:10.48550/ARXIV.1506.01497

Ronneberger, O., Fischer, P., and Brox, T. (2015).
“U-net: Convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical image computing and computer-assisted intervention (Springer), Munich, Germany, 5-9 Oct. 2015, 234–241.

Saito, S., Simon, T., Saragih, J., and Joo, H. (2020). “Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13-19 June 2020, 84–93.

Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.

Suo, X., Jiang, Y., Lin, P., Zhang, Y., Wu, M., Guo, K., et al. (2021). “Neuralhumanfvv: Real-time neural volumetric human performance rendering using rgb cameras,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, Nashville, TN, USA, 20-25 June 2021, 6226–6237.

Tan, M., Pang, R., and Le, Q. V. (2020). “Efficientdet: Scalable and efficient object detection,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, Seattle, WA, USA, 13-19 June 2020, 10781–10790.

Täubig, H., Bäuml, B., and Frese, U. (2011). “Real-time swept volume and distance computation for self collision detection,” in Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25-30 Sep. 2011, 1585–1592. doi:10.1109/IROS.2011.6094611

Ubezio, B., Schöffmann, C., Wohlhart, L., Mülbacher-Karrer, S., Zangl, H., and Hofbaur, M. (2021). “Radar based target tracking and classification for efficient robot speed control in fenceless environments,” in Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 Sep. 2021-01 Oct. 2021, 799–806. doi:10.1109/IROS51168.2021.9636170

Vicentini, F., Giussani, M., and Tosatti, L. M. (2014).
“Trajectory-dependent safe distances in human-robot interaction,” in Proceedings of the IEEE International Conference on Emerging Technology and Factory Automation, Barcelona, Spain, 16-19 Sep. 2014, 1–4.

Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., et al. (2020). Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 43, 3349–3364. doi:10.1109/tpami.2020.2983686

Keywords: human robot collaboration, operator safety, aerospace manufacturing, safety space, artificial intelligence (AI)

Citation: Walter C, Bexten S, Felsch T, Shysh M and Elkmann N (2022) Safety considerations for autonomous, modular robotics in aerospace manufacturing. Front. Robot. AI 9:1024594. doi: 10.3389/frobt.2022.1024594

Received: 22 August 2022; Accepted: 01 November 2022; Published: 18 November 2022.

Edited by: Marcello Valori, Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing (CNR), Italy

Reviewed by: Jose Antonio Mulet Alberola, National Research Council (CNR), Italy; Luca Gualtieri, Free University of Bozen-Bolzano, Italy

Copyright © 2022 Walter, Bexten, Felsch, Shysh and Elkmann. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christoph Walter, [email protected]
Using Keyword Discovery for Keyword Research, Some Commonly Overlooked Features and Functionality

By Glenn Gabe |Back to reviews page|

If you've read any of my posts about SEO or SEM, then you probably know how strongly I feel about keyword research. I believe performing extensive keyword research is critical to understanding what people are actually searching for versus what you think they are searching for. Opinions are nice, but you should always try and back your decisions with real data (at least as much as possible). In case you are interested in learning more about keyword research, you can read my blog post about using Keyword Discovery and WordTracker. I'm a fan of both tools, but I must admit that I'm a bigger fan of Keyword Discovery (KD). Actually, I couldn't imagine focusing on Search and not having KD by my side. But something hit me about a month ago. I was overlooking some of the outstanding functionality included in Keyword Discovery. Actually, based on my conversations with other marketers, I believe many aren't using all of the power of Keyword Discovery. So I'm going to help you (and them) by identifying some of the functionality that might be easily overlooked. Let's get started.

Global Premium Database, Historical View (Past 24 Months)

Keyword Discovery enables you to choose various databases to tap into while performing keyword research. Their Global Premium database holds a few billion searches, looking back 12 months. But did you know you could actually look back 24 months? Yes, you can, and it's simple to do. Just click the checkbox for "Historical" while searching for keywords. Why would you want to search historical data? Depending on the keywords you are researching, there are times you would definitely want to look back past a year. There might have been specific things happening in the past 12 months that would skew your data (think about a presidential election) or a new movie that comes out.
X-Ref (Cross Reference Tool)

I love this tool. Let's say you are researching a prospective client's website and want to check a competitor's site for the keyword set you just searched for. Easy: just click the X-Ref tab and Keyword Discovery will prompt you for a URL. Enter a competitor's URL (the exact page you want to check) and KD will display how many times those keywords show up in the title tag, meta keywords, meta description, and in the page copy on your competitor's webpage. Keep in mind, the cross reference tool will check at the page level and not at the domain level. This is important. You wouldn't want to run back to your client and show them one page's data thinking it was for the entire site. However, it's a great way to check other pages that rank highly for the terms you are targeting. For example, let's enter the keyword "Halloween" and cross reference BuyCostumes.com (my favorite online Halloween shop). Keyword Discovery returns the following results for the homepage: Click the image below to see a larger version:

There are times where you want to see the volume for several keywords working together, but ordered in a different sequence. This tool will enable you to target your selected terms (only those terms) and show you all the permutations in the database. This can help you decide which permutations to target (based on the volume of searches you find). To use the tool, simply enter the keywords you want to target, separated by commas, i.e. keyword1,keyword2,keyword3. For example, let's enter apple,nano,video:

There are times you will be targeting languages other than English. Well, if you are setting up projects in Keyword Discovery to organize your work, then you can also translate your projects into other languages. Yes, this is a very cool piece of functionality that KD provides (although it's somewhat hidden). Simply create a project, research keywords, and populate that project with those keywords.
Then open your project and scroll down to view the icons at the bottom of the results. You will see the Babel Fish icon (a yellow fish icon). When you hover your mouse over the icon, it will say "Translate Keywords". When you click the icon, you will be prompted to translate your project from English to either Spanish, French, German, or Italian (or vice versa). Select which translation you want to perform and click submit. Voila, your keywords have been translated. Note, you probably wouldn't want to just take these translations at face value. It's a good starting point, but I would try and work with someone fluent in that language before implementing a campaign. ;-)

This feature isn't overlooked as much as the others, but it's worth mentioning here. Whenever you perform research in Keyword Discovery, there is an option to view trended data for each keyword (as shown below). This enables you to view keyword data over the past 12 months graphically and is extremely important if you are targeting terms that are seasonal. Think about "roses" and Valentine's Day. You can view charts based on historical data, monthly, trended, and you can see market share by engine. This data can help you and your clients map out strategies for targeting groups of keywords throughout the year.

Now Don't Overlook These Great (But Commonly Overlooked) Features!

If you are currently using Keyword Discovery and don't use these features yet, I think you will be pleasantly surprised. If you aren't using Keyword Discovery, you should be. I don't view it as a nice-to-have; it's a required tool in my arsenal. Once you are comfortable researching keywords, working in the interface, and understanding what the data means, then definitely test out the features I listed in this post.

Glenn Gabe, President of G-Squared Interactive (formerly Director of Search Strategy, MRM Worldwide) - hmtweb.com
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348511950.89/warc/CC-MAIN-20200606062649-20200606092649-00087.warc.gz
CC-MAIN-2020-24
5,806
20
https://community.act.com/t5/Act-version-6-x-Prior/Migrating-from-Symantec-ACT-4-0/td-p/180123
code
02-14-2012 07:17 AM I am in the process of migrating a client to a new computer. The system is old, but still functional. All of their contact information is saved in Symantec ACT! 4.0. I have the install media for it, but the software is so old (fourteen years) that it cannot run in Windows 7 64-bit (no real surprise there). Is Sage ACT! compatible? Should I be able to directly import the old information into the new system? If I cannot, does anyone have any ideas? I've never run across this particular software before, so I'm kind of stumbling in the dark here. 02-14-2012 07:26 AM
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487634576.73/warc/CC-MAIN-20210617222646-20210618012646-00369.warc.gz
CC-MAIN-2021-25
591
5
https://www.experts-exchange.com/questions/28300362/protect-sheet-with-delphi.html
code
What I need to know: how can I control properties like "DrawingObjects:=True, Contents:=True, Scenarios:=True, AllowFiltering:=True" ... I think the Protect method has more parameters than just the password. But where can I find a description of these parameters?
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00389.warc.gz
CC-MAIN-2021-43
362
2
https://dreamhosting.xyz/mongodb-dba-homework-43-answer-81/
code
Mathematics Fall Second homework due Friday, September 24 This assignment will require much calculation, but not much thought. Download the handout Homework 4. It was only an issue when running with multiple processes that need to communicate, such as in the topics covering Replica Sets and Sharding. In this homework assignment you will be adding some indexes to the post collection to make the blog fast.. One way to assure people vote at most once per posting is to use this form of update: MongoDB for Java Developers. Once you have it right and are ready to move on, ctrl-c terminate the shell that is still running the homework. Reconfigure the replica set so that the third member can never be primary. All the Answer for week 4 mongodb course 1 Homework 4. M – HomeWork 2. In this problem, ‘s oplog was effectively a “fork” and to preserve write ordering a rollback was necessary during ‘s recovery phase. MathWinterHomework 7 1 Find the standard matrix of the linear transformation T: Which of the following options will allow you to ensure that a primary is available during server maintenance, and that any writes it receives will replicate during this time? MathIntermediate Algebra. Then when you start your MongoDB processes they will function correctly. MongoDB for NodeJs devs week4: Once done with that, run homework. Check all that apply. Play next; Play now; Homework 2. En la tarea 2. Andhra Pradesh Industrial Infrastructure Corporation. I installed the latest version of MongodDB version 3. Please insert into rba m collection, Homework 4. MongoDB has a data type for binary data. What is the output? Recognize a digit represents 10 times the value of what it represents in the place to its right. Daily MongoDB Blog: MongoDB University Course M MongoDB for DBAs Newer Post Older Post Home. Triggering Rollback In this problem, you will be causing a rollback scenario. In this problem, you. 
The questions were all quite straightforward and covered in the online course material. Write a program in the language of your choice that will remove the grade of type “homework” with the lowest score for each student from the dataset in the handout. Download Handout Step 2: MongoDB is a document database that provides high performance, high availability, and easy scalability. Create an index on. Once you have it right and are ready to move on, ctrl-c terminate the shell that is still running the homework. Newer Post Older Post Home. Week4 – Homework Homework: This will simply confirm all the above happened ok. homework 4.1 m102 MongoDB allows you to choose the storage engine separately for each collection on your mongod. You only need to do one. Crud; MongoDB course for developers. The mongo db for hw 3. Play next; Play now; hw 2 2 by MongoDB. MongoDB is “multi-master” — you can write anywhere, anytime. What result does this expression give when evaluated? Connect to the mongos with a mongo shell. As for the calculations.
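The "remove the lowest homework grade per student" exercise quoted above can be sketched without a live MongoDB instance. A minimal plain-Python version, assuming documents shaped like the course's grades collection ({student_id, type, score}); the sample data here is made up for illustration:

```python
def drop_lowest_homework(grades):
    """Remove each student's single lowest-scoring 'homework' document."""
    lowest = {}  # student_id -> document with the lowest homework score so far
    for doc in grades:
        if doc["type"] != "homework":
            continue
        sid = doc["student_id"]
        if sid not in lowest or doc["score"] < lowest[sid]["score"]:
            lowest[sid] = doc
    # Keep every document that is not one of the per-student minima
    return [d for d in grades if d not in lowest.values()]

grades = [
    {"student_id": 0, "type": "exam", "score": 54.6},
    {"student_id": 0, "type": "homework", "score": 14.8},
    {"student_id": 0, "type": "homework", "score": 63.9},
    {"student_id": 1, "type": "homework", "score": 74.0},
    {"student_id": 1, "type": "homework", "score": 22.9},
]
cleaned = drop_lowest_homework(grades)
```

Against a real collection you would run the same selection logic over a cursor and then delete each lowest-scoring document (e.g. with pymongo's `delete_one`), rather than rebuilding the list.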
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494331.42/warc/CC-MAIN-20200329105248-20200329135248-00138.warc.gz
CC-MAIN-2020-16
2,976
14
https://www.servethehome.com/asus-rs700-e10-rs12u-1u-intel-xeon-ice-lake-server-review/2/
code
ASUS RS700-E10-RS12U Internal Overview Inside the system, we are going to work from the front to the rear again. Before we get to the fans, there is a small feature that we wanted to point out. There are two fan boards in the system. One can see that the fans are connected via 4-pin PWM fan cables. These cables are very short so servicing is easy. Something that is perhaps more notable is that there are 6-pin fan headers designed for hot-swap fan modules. We typically do not see 1U servers have hot-swap fans just due to component dimensions. It appears as though these fan boards could also be used for 2U hot-swap fan chassis. There are a total of nine fans in the system. We should point out that these are single fan units so these are not counter-rotating fan modules or anything like that. Perhaps the next interesting feature is the heatsink design. To support up to 270W TDP CPUs, the heatsinks have large heat pipes from the main heatsink to auxiliary heatsink sections. These sections are placed in front of the motherboard next to the fans. In many systems, we see these auxiliary heatsink sections placed after the memory. This placement means that these extra sections can be deeper in the chassis since they extend beyond the motherboard. It also means we get relatively longer heat pipes. The system is powered by two Intel Xeon Ice Lake generation processors. This system can support the full stack of up to Platinum 8380 processors. For those who have not migrated to EPYC, this is a big jump in core counts while also bringing features such as more PCIe Gen4 lanes. Since we get many questions around the SKU stack, we have a video explaining the basics of the various Platinum, Gold, Silver, letter options, and the less obvious distinctions. You can also see our Installing a 3rd Generation Intel Xeon Scalable LGA4189 CPU and Cooler guide for more on the socket, as well as our main Ice Lake launch coverage. Around the CPU socket, we get a total of eight DIMM slots.
These can accept up to DDR4-3200 so long as the CPU supports it (see our 3rd Gen Intel Xeon Scalable Ice Lake SKU List and Value Analysis piece for more on that.) Since ASUS is using a proprietary form factor motherboard (common in servers these days) it is able to fit the full set of 16 DIMMs per CPU and 32 total. We only had 8 DIMMs installed for the photography session because DDR4-3200 DIMMs are in high demand in the lab. One can also add up to four DDR4 DIMMs alongside four Optane PMem 200 modules. We have more on how Optane modules work, since much of the documentation is not clear on that, here. Taking a second to note here that we are seeing the trend towards more cabled PCIe connectivity in servers. With the NVIDIA A100 riser removed we can see more of these cabled connections. We previously covered the rear I/O risers, but there is one more in this system. We have an internal PCIe Gen4 slot here for applications such as installing an ASUS PIKE SAS/ SATA HBA or RAID controller. Since a RAID controller will typically service the front bays, it can be housed internally in the system without taking up rear I/O space. With the middle riser removed we can see the BMC and the MicroSD card slot in the system. One other small feature that would be easy to miss just looking at photos is that there are two M.2 slots that sit just in front of the power supplies. These M.2 slots get a bit of airflow from the power supply fans as well as the chassis fans. Overall, this is a fun ASUS spin on a mainstream 1U server. It is certainly a bit different if we compare it to how others are designing servers with the POST code LED, cooling design, and PCIe riser configuration. It is always fun to see how innovation happens even on relatively mainstream platforms. Next, we are going to check out the block diagram as well as the management before moving on with our review.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817158.8/warc/CC-MAIN-20240417142102-20240417172102-00194.warc.gz
CC-MAIN-2024-18
3,882
16
https://www.ergoforum.org/t/ergo-mainnet-4-0-0-release-please-update-your-nodes/553
code
This 4.0.0 release represents "The Hardening" protocol upgrade and thus contains breaking changes! The Hardening protocol upgrade will be activated on block # 417,792. Initial difficulty for the block is set to "6f98d5555555" (in hex), which corresponds to a hashrate of roughly 1 TH/s.
- Autolykos 2 PoW scheme. It has non-outsourceability switched off, a table size growing with time, and possible memory optimizations fixed.
- Merkle tree of transactions now also committing to transaction witnesses (a SegWit-like construction).
- possibility to enhance the header structure via velvet forks added
Also, the sigma-interpreter (ErgoTree interpreter) dependency was updated to 4.0. Full details: ScorexFoundation/sigmastate-interpreter#712 ; the most important changes for the Ergo protocol are:
- activatedScriptVersion field added to execution context (ErgoLikeContext)
- the ErgoTree interpreter now skips validation for scripts if the activated script version is higher than the interpreter supports (so old nodes skip validation on soft-forks once 90+% of mining power has activated the ErgoTree upgrade). See changes in Interpreter.verify()
- v4.0 can support the AOT -> JIT switch as a v5.0 soft-fork
Upgrade from 3.3.4 and on - just replace the old jar with the new one. 3.3.0 - 3.3.3 - a full resync is needed for MacOS X (leave the /wallet/keystore folder where the encrypted seed is stored). Also, please see the upgrade notes for 3.3.4 if you are restoring a pre-3.3.4 mnemonic: Release Ergo Protocol Reference Client 3.3.4 · ergoplatform/ergo · GitHub 3.2.x - unlock the wallet on the first 4.0.0 node run and do a wallet rescan with the /wallet/rescan API call 3.0.x && 3.1.x - a full resync is needed (leave the /wallet/keystore folder where the encrypted seed is stored)
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103347800.25/warc/CC-MAIN-20220628020322-20220628050322-00327.warc.gz
CC-MAIN-2022-27
1,717
13
http://www.shsu.edu/~ldg005/
code
- Department of Mathematics and Statistics - Sam Houston State University - Huntsville, TX 77341-2206 - Phone: (936) 294-1581 - Fax: (936) 294-1882 - 2004: PhD, Mathematics, Virginia Polytechnic Institute and State University - 1999: BS, Mathematics, Universidad Nacional Autónoma de México, México - Research Interests - Algebraic Statistics, Computational Algebraic Geometry, Combinatorial Commutative Algebra - Associate Professor, SHSU Department of Mathematics and Statistics (2013 - ) - Assistant Graduate Coordinator of SHSU's MS in Mathematics Graduate Program (2014 - ) - Assistant Professor, SHSU Department of Mathematics and Statistics (2007 - 2013) - SAMSI New Researcher Fellow, Statistical and Applied Mathematical Sciences Institute (Spring 2009) - Visiting Assistant Professor, Texas A&M University (2005 - 2007) - Postdoctoral Research Fellow, Mathematical Science Research Institute (Fall 2004) - Postdoctoral Research Fellow, University of California, Berkeley (Summer 2004) - Graduate Research Assistant, Virginia Bioinformatics Institute (Spring 2003) - Graduate Research Fellow, Physical Science Laboratory (Summer 2000) Math 1332 (online) during Spring 2017. - National Alliance for Doctoral Studies in the Mathematical Sciences pre-doctoral mentor. - Early Career Mathematician Mentor as part of the MAA Early Career Mentoring Network and the Project NExT program. - Editorial Activities - Associate Editor of the Journal of Algebraic Statistics. - Associate Editor of the American Mathematical Monthly. - Contributing Editor of the AMS blog On Teaching and Learning Mathematics. - Events of Interest The Algebraic Statistics workshop at the Oberwolfach Research Institute for Mathematics will take place on April 16 - 22, 2017. The International Conference on Effective Methods in Algebraic Geometry MEGA 2017 will take place in Nice (France), June 12 - 16, 2017. The second Mathematical Congress of the Americas will take place in Montreal, Canada on July 24 - 27, 2017. 
The SIAM Conference on Applied Algebraic Geometry will take place at the Georgia Institute of Technology on July 31 - August 4, 2017. Watch my Numberphile video on sandpiles. Featured mathematician in the American Mathematical Society's Lathisms project. - Rebecca Funke (University of Richmond) is a 2015 Goldwater Scholar. - Megan Chambers (Youngstown State University) won a MAA Undergraduate Student Poster Session Outstanding Presentations award at the 2015 Joint Mathematics Meetings. - Vince Longo (College of New Jersey) won a 2014 Student Research Presentation Recognition at the 2014 SACNAS National Conference. - Gautam Webb (Colorado College) won a 2014 Barry Goldwater Honorable Mention. - Alex Diaz and Andy Howard (Sam Houston State University) won a 2009 MAA Undergraduate Poster Session Prize at the 2009 Joint Mathematics Meetings. - SIAM News July/August 2007 Issue: Algebraic Geometers See Ideal Approach to Biology.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120187.95/warc/CC-MAIN-20170423031200-00123-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,935
39
https://community.pulsesecure.net/t5/Pulse-Connect-Secure/Upgrade-SA4500-Client-side/m-p/1395
code
I would like to upgrade my SA4500 cluster (currently on version 7.3R1). First, I have some questions about the installers. - Will the installers be updated automatically too? - Will it be possible to connect with an earlier client version after the upgrade, or will the client side update automatically? (for example, Juniper Installer Service, Host Checker, Network Connect) - What is your procedure for updating users who are not admins of their computers? Thank you! I have my SA configured to automatically upgrade the clients... and it runs well if the user has admin privileges on the machine. If they don't, they must have the Juniper Installer Service already installed or they will be in trouble. Another problem: if you use rewriting, you must warn your users to clear the cache. Especially when upgrading across major releases, it is a nightmare... Where can I specify not to upgrade all the client installers automatically? If I do this, will both the new installers and the 7.3 installers be compatible with the portal at the same time? This would give me time to install the new version on the computers of users who don't have admin rights. I've double-checked what I said, and only WSAM has role-based configuration for auto-update or not. Junos Pulse is a system-wide configuration. What almost all of them have is auto-launch... apologies for misleading you. I think if you don't have the new client versions, it will not be possible to use the new SA version. At least, HC and Network Connect fail... You can have several versions of Network Connect installed, but not HC. Coming at the problem from the other side: is it possible to find and manually install all the client-side components before the upgrade of the SA4500 (this requires that client versions newer than 7.3R1 be compatible with my portal)? It's a puzzle... I want to upgrade my SA4500 but I'm stuck because my customers have no admin rights on their computers... Send the Juniper Installer Service package to the users...
It is in Maintenance -> System -> Installers... You have an .exe for single installations or an .msi to deploy across the organization. After all users have this service installed, you won't have any problems... just install new updates the Windows way... next, next, next. Anyone else able to help me? Yes, but it would be easier if I could do it before updating the server. That's my question, finally.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499697.75/warc/CC-MAIN-20230129012420-20230129042420-00072.warc.gz
CC-MAIN-2023-06
2,371
20
https://community.filemaker.com/thread/91443
code
Cannot Access Database Using Admin Password I managed to turn off access privileges to my FM9 database using the client. Now I cannot access the database using my login. After reading about privileges I now know I should have a master login as a backup. Unfortunately, I've been using my admin account to do layouts, etc. All users have access via web publishing and client, but since I don't have an active Admin account I am desperate for a solution without scrapping the whole database. Does anyone know how I can get to the privileges so I can reinstate my Admin access via FM client? Or, how do I use my Admin access in web publishing to access admin features?
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890514.66/warc/CC-MAIN-20180121100252-20180121120252-00639.warc.gz
CC-MAIN-2018-05
665
5
https://triplelband.com/coursera-eveloping-innovative-ideas-for-new-companies-the-first-step-in-entrepreneurship/
code
Coursera Developing Innovative Ideas For New Companies: The First Step In Entrepreneurship If you are considering a career change, consider enrolling in a Coursera course. While you may not get a degree from this online university, Coursera is a great option for those who wish to earn a diploma or advance their career in a specific field. You can earn a certificate of completion through the specialization, even though these courses do not carry college credit. For more information about Coursera, see our review of the service. To register, you must be at least 13 years of age. Coursera also offers mobile apps for iOS, Android, and Windows. Although you can view the courses on your mobile device, some require flash plug-ins to view. After signing up, you can download course materials to your computer to view offline. Coursera currently offers the majority of its courses in English, although there are a growing number of courses offered in other languages. Coursera certificates can be used in the workplace, unlike those from many other online learning platforms. They are as valuable as a college diploma and employers are often impressed with them. Coursera's online courses allow you to advance your career and land your dream job. Coursera certificates are recognized by most employers, and they can even be added to your resume. Coursera certifications are valid and valuable, even though some online learning platforms charge thousands of dollars for certificates. Coursera is one of the most reliable and secure online learning platforms. The site requires certain information, including your payment information, but it keeps your information secure. Because Coursera has a proven track record of not mishandling information, you can rest assured that your information and money are secure. In addition to providing free courses, Coursera is also safe to use, with their security measures and partnerships with prestigious universities.
Whether you want to improve your skills or obtain a college degree, Coursera is a great place to start. They offer free and affordable courses, as well as specialized, industry-relevant courses from the world's best universities. With over four thousand courses to choose from, there is bound to be one that suits your interests and budget. Coursera offers over 1,700 courses you can take for free. You can also upgrade to a certificate if you'd like. Coursera's founders are Stanford University professors. The company does not create its own courses, but rather collaborates with leading universities and educational institutions to put existing educational offerings on the online platform. Coursera's courses are accredited by the university that organized them. Coursera has more than 3,900 courses to choose from, broken down into 6 categories. The courses are also available offline on the Coursera app. And, of course, there are accredited certificates for those who successfully complete the courses. Some people prefer to pay Coursera for a certificate. Free courses are fine for hobbyists. However, those who are looking to get a job might prefer a paid course. Paid courses also let you add a certification to your resume; the free courses do not offer peer-to-peer grading, instructor feedback, or professional feedback. If you are serious about your career and want to be a leader in your field, then Coursera is a good option.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00236.warc.gz
CC-MAIN-2022-27
3,414
8
https://www.godaddy.com/help/configure-google-pay-for-godaddy-payments-41586
code
Configure Google Pay for GoDaddy Payments Step 2 of the Set up Google Pay for GoDaddy Payments Series Once you activate Google Pay, you can customize the settings. Some settings are pulled from your GoDaddy Payments account, and they are displayed on the Settings page. Review and adjust the following options as needed: - Under the General section, you'll see the settings that are pulled from your GoDaddy Payments account settings: - Transaction type: Authorization or Charge. - Charge Virtual-Only Order: Enabled or Disabled. - Detailed Decline Messages: Enabled or Disabled. - Debug mode: Off, Show on the Checkout page, Save to log or Both. - Pages to enable Google Pay on: Single Product, Cart, and/or Checkout. - This will display the Google Pay button and allow customers to check out via Google Pay on any or all of the options that you choose. - When Google Pay is enabled on a Single Product, customers will be able to order from the product's page. - Button label: Optionally choose a label for the Google Pay button using Google’s button options (Subscribe is not supported for the moment). - Button style: Choose a style for the Google Pay button between white and black. - Button size: Change the size for the Google Pay button, the default value is 45.
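As a rough illustration only (the field names here are hypothetical, not GoDaddy's actual option keys), the settings above can be pictured as a simple settings map:

```python
# Hypothetical sketch of the Google Pay settings described above.
google_pay_settings = {
    "transaction_type": "Charge",          # "Authorization" or "Charge"
    "charge_virtual_only_order": False,    # Enabled / Disabled
    "detailed_decline_messages": True,     # Enabled / Disabled
    "debug_mode": "Off",                   # Off, Checkout page, Save to log, or Both
    "enabled_pages": ["product", "cart", "checkout"],
    "button_label": "Buy",                 # any Google button label except Subscribe
    "button_style": "black",               # "white" or "black"
    "button_size": 45,                     # default value per the steps above
}
```

The real plugin stores these through its own settings page; this dict just summarizes which values each option can take.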
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00438.warc.gz
CC-MAIN-2023-50
1,271
15
https://cdcvs.fnal.gov/redmine/issues/22162
code
cloning sample files needs to fix experiment= and actually fix the name Right now, cloning the sample workflows only works for samdev, because the sample .ini files say We have a clone button, but it doesn't fix this... #3 Updated by Marc Mengel over 1 year ago After some finagling, cloning sample campaigns is working in release/v4_1_1 and develop branches. - mark the items "dirty" (by putting a wrong hash value in the form attributes) so they would get saved - add login_setups and job_types from the .ini file dump to the pulldowns list, as well as the ones from the databases. - some debug logging and error handling in save_campaigns
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107865665.7/warc/CC-MAIN-20201023204939-20201023234939-00656.warc.gz
CC-MAIN-2020-45
641
8
https://www.goretro.ai/post/sprint-planning-meeting-guide
code
Sprint retro. Sprint review. Sprint planning. Just what do these all mean? In this quick guide, we take a deep dive into everything you need to know about sprint planning. What is a Sprint Planning Meeting? As the name suggests, this is the Scrum planning session that is conducted to plan the upcoming sprint, using your collective knowledge of what happened in the past sprint (gathered during the sprint review and sprint retrospective, which are two separate meetings that are also part of the Scrum cycle). The goal of the sprint planning meeting is to define a set of outcomes, which is never a bad thing (in sprints, but in life too!). The Hows, Whats, Whos and More Sprint planning does not have to be a complicated thing! Here's a quick breakdown of who does what, when, where and how: The product owner will share their objective for the sprint - for example, to develop a new feature. They'll then sort the backlog items related to that objective. The scrum team will have to decide how to make the objective happen. The Dev team works together to plan out what they need to make the objective happen on their end. It's worth mentioning here that the sprint planning meeting is often a sort of negotiation between the outcomes set by the Product Owner and what the Dev team can realistically deliver. The Product Owner, Dev team and any other relevant stakeholders will be present at the sprint planning meeting. It's incredibly important that all relevant stakeholders are present to say their piece and offer realistic goals… or someone could end up with more work (and unattainable goals!) than they'd bargained for. Perhaps the most important aspect of the whole agile sprint planning meeting is the set of outcomes that are agreed on. Anything agreed upon will be broken down into stories and added to the sprint backlog, ready for everyone to start working once the sprint commences!
Sprint Planning Meeting Agenda Unlike other areas of the Scrum process, sprint planning meetings require a solid structure, patience and collaboration. You can expect a mix of: - A refined product backlog, ready to go with the Product Owner's needs. - Outcomes and action items from the last sprint review, for reference. - Calculations of commitments - both past and present. - Determining the velocity and capacity of the team during the upcoming sprint, according to stories and overall goals. In general, the sprint planning meeting will follow this structure (but you may do it differently, depending on your team's needs): - Overview and big picture sharing. - New information for the upcoming sprint. - Capacity and velocity discussion. - Known issues discussion (according to sprint review outcomes). - New backlog items to add to the sprint. - Set estimates for work. - Discussion of the overall plan and confirmation from stakeholders. Sprint Burndown Chart This is an integral part of the agile sprint planning meeting. It's a graphic of how quickly the backlog of a sprint is completed, which should (if all goes according to plan) slope downwards and reach zero by the sprint's completion. As stories are completed, the graph curves downward more sharply. It shows how much work remains, NOT how long it takes to do it! In the best case scenario, the sprint burndown chart has a very sharp slope downward; in the worst case scenario, the burndown chart remains pretty flat. User Stories, Estimations, Splitting and Mapping There's a lot of estimation (based on experience and past sprint reviews) during the agile planning meeting. The team obviously needs to estimate what they can legitimately do during the sprints, according to their capacity and planned velocity of completed items. But - and it's a big 'but' - estimation is NOT the same as 'commitment'. Here's why: - The product backlog will be split into user stories.
This will likely be sorted according to the product owner, not the actual Devs who will be working on them. - These stories will need to have an estimation assigned to them, regarding the time they will take to complete. - These user stories will likely need to be split down into smaller chunks of manageable user stories. - Once these have been split, they need to be mapped out according to time, assigned owner, and overall priority. Only once all stakeholders have signed off on these do they become a commitment. Sprint planning meeting best practices really run the gamut of anything and everything, and depend on the way your team does things. To read more about it, check out our guide to sprint planning best practices. Solid agile sprint planning relies on certain rules, whether spoken or unspoken: - Always have a sprint goal - defined, out loud, and (if possible) written down for all to see and keep at the forefront of all discussions. - All stakeholders need to agree and be happy with the outcomes. - The sprint planning meeting needs to remain focused on the upcoming sprint - it's not the place to point fingers or discuss past sprints, other than in a truly constructive manner that is highly related to the upcoming sprint (and even then, only if it directly relates to it). - It's always better to overestimate than underestimate when it comes to capacity and velocity. Using a sprint planning template will help the entire team to align in sprint goal setting and capacity estimation, and make each sprint as effective as it can be. Using a sprint planning template will allow you to: - Close the previous sprint decisively, and with learning outcomes - Get your team aligned with the upcoming sprint's goals - Plan the team's capacity - Share the team's projected velocity The sprint planning meeting activities revolve around dissecting the product backlog and assigning user stories: - Discussing and setting the overall sprint goal.
- Reviewing the highest priority stories and the product backlog. - Determining what the Dev team can and can't do (estimating) according to their capacity and velocity. The sprint planning meeting is a precursor to the sprint. If it's not up to scratch, the outcome of the sprint will suffer. The planning session should leave each team member with two outcomes: detailed information and focus. Firstly, make sure the team is aligned on all aspects of the sprint - ownership of tasks, the sprint backlog and the sprint goal should be known by each team member, otherwise the team faces the problem of being disconnected. Secondly, each team member should be aware of their own exact tasks and what it takes to complete them - this will ensure that team members don't stray from the path of the sprint. Remember that planning is also just that: planning. Unplanned events almost always come up during the sprint, but a good sprint plan should allow these events to not derail the whole sprint - so allow room for unexpected items that might pop up.
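The sprint burndown chart described earlier is straightforward to compute. A minimal sketch, assuming you track story points completed per day:

```python
def burndown(total_points, completed_per_day):
    """Remaining story points at the end of each sprint day (ideal: hits 0)."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# A 40-point sprint over 5 days
curve = burndown(40, [10, 5, 10, 10, 5])
# curve == [40, 30, 25, 15, 5, 0]
```

A healthy sprint produces a curve that slopes steadily toward zero; a flat curve is the early warning the guide above describes.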
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00614.warc.gz
CC-MAIN-2022-27
6,898
55
https://www.mannmansion.com/2016/so-i-walked-to-the-gym-without-carrying-a-phone/
code
Started this morning with looking at why my SQL server had decided to fail again. Same cause: out of memory. It even managed to corrupt the database, so it wouldn't even start on a reboot. Managed to get it rebuilt and the service started. Then, looking at the firewall, I noticed I was getting attacked by one IP address about every ten milliseconds. So I blocked that one and found another one later. I expect all those hits on Apache were overloading the SQL server. It's been running fine since, but I need to keep a very close eye on it. Work was shite. Walked the dog. Walked to the gym; as I'm now almost at my ultimate goal on Pokemon Go I didn't take the phone, I just walked instead. Did Combat, walked home. Did a bit more work. I'm going to change my working hours. I think two hours on a Thursday afternoon should cover it.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00502.warc.gz
CC-MAIN-2024-18
839
1
https://www.open-mpi.org/community/lists/users/2011/11/17700.php
code
Sorry for the delay in replying. We don't have any formal documentation written up on this stuff, in part because we keep optimizing and changing the exact makeup of wire protocols, etc. If you have any specific questions, we can try to answer them for you. On Oct 21, 2011, at 2:45 PM, ramu wrote: > I am trying to explore more on technical details of MPI APIs defined in OpenMPI > (for e.g., MPI_Init(), MPI_Barrier(), MPI_Send(), MPI_Recv(), MPI_Waitall(), > MPI_Finalize() etc) when the MPI Processes are running on Infiniband cluster > (OFED). I mean, what are the messages exchanged between MPI processes over IB, > how does processes identify each other and what messages they exchange to > identify and what all is needed to trigger data traffic. Is there any doc/link > available which describes these details. Please suggest me. > Thanks & Regards, > users mailing list
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00121-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
918
14
https://gompertlab.com/category/uncategorized/page/4/
code
We recently published a paper in Molecular Ecology Resources describing a method to determine ploidy in mixed-ploidy samples from low- to modest-coverage GBS-like data (paper). This relies on both estimates of heterozygosity and allele-specific read depth. The method we describe is implemented in the R package gbs2ploidy. You can read more about it in this informative blog from the Molecular Ecologist. Lauren's paper on patterns and rates of gene flow within four narrowly endemic, spring-associated species is now out in Freshwater Biology. An image from the study of the amphipod Stygobromus made the cover (see above). The paper makes nice use of approximate Bayesian computation to contrast rates and models of gene flow among distantly related sympatric taxa. Vladimir Nabokov is probably best known for his work in fiction (he wrote Lolita, among other things), but he was also a Lepidopterist. In fact, he was responsible for much of the early systematic work on the butterflies we study, Lycaeides. Lauren's book (she contributed a chapter; Stephen Blackwell and Kurt Johnson were the editors) connects his work in art and literature to his science. It's out now (you can get it here)! Her chapter highlights how our own research has been influenced by Nabokov. Vladimir Lukhtanov wrote a nice review of the book for Nature. Molecular Ecology published a special issue on detecting selection in natural populations. It included our paper, which introduced a new method for estimating selection from genetic time-series data (allele frequencies), along with interesting papers on detecting polygenic selection (Stephan 2016), the promises of combining experimental evolution with population genomics (Bailey & Bataillon 2016), and considerations for conducting genome scans (Haasl & Payseur 2016), along with many interesting empirical papers. Check it out!
Stacy Krueger-Hadfield wrote a nice blog piece for The Molecular Ecologist covering some of our work on modelling uncertainty in DNA sequence data. As she notes, we have long advocated for propagating genotype uncertainty into inferences of population genetic parameters (e.g., measures of differentiation, introgression, etc.). This provides more honest measures of overall uncertainty and allows one to take advantage of lower-coverage sequence data. Check out her post here. We seek a Master's student to conduct research on evolutionary responses to climate change. The student will be co-advised by Peter Adler and Zach Gompert at Utah State University. Together we will investigate changes in the genetic diversity of two perennial grass species in a long-term precipitation manipulation experiment in an eastern Idaho sagebrush steppe. We will compare the experimental responses with patterns across an elevation and precipitation gradient. Field sampling will begin in May 2016. Stipend support will consist of both research and teaching assistantships. To apply, please email a 1) cover letter, 2) CV, 3) description of research experience, and 4) contact information for three references to Peter Adler ([email protected]) by Dec. 1. We have developed a new computer program (spatpg) to infer variance effective population size and environment-dependent selection based on genetic time-series data (allele frequencies) from multiple populations. The software, including the source code and manual, can be downloaded here or using the link on our software page. The paper describing the method will be published in a forthcoming special issue in Molecular Ecology on detecting selection in natural populations. The accepted article is available here: Gompert-2015-Molecular_Ecology. We hope to release an updated version of the program that supports parallel processing and low-coverage DNA sequence data shortly.
I was invited by Lacey Knowles and her lab group to give a seminar at the University of Michigan a few months ago. It was a fun trip. The seminar was recorded and is now on YouTube. It covers our work on hybridization and diversification in Lycaeides butterflies and includes some discussion of statistical approaches for the analysis of admixed populations. Here is a link to the recorded seminar. Our paper examining constraints on the evolution of host use in the Melissa blue butterfly (Lycaeides melissa) was published in today's issue of Molecular Ecology. We were interested in the potential for limited genetic variation, or genetic trade-offs in performance on different host plants, to slow or prevent adaptation to a novel host, in this case alfalfa. To address this question we conducted a massive larval rearing experiment (see the image below) and generated and analyzed partial genome sequences (GBS data) from the larvae (we also sequenced and assembled a draft whole genome for L. melissa to provide genomic context for our results). We found that L. melissa harbor genetic variation for performance on alfalfa and that genetic variants that affect performance on alfalfa do not affect performance on a native host plant. In other words, we found no evidence that genetic trade-offs limit diet breadth (i.e., cause host-plant specialization) in these butterflies. I think this study shows how genomic data can be used to better test a classic hypothesis in evolutionary genetics. Alberto de Rosa and Samridhi Chaturvedi joined the lab this fall. They are both PhD students. We are very excited to have them here. Samridhi earned her B.Sc. in Chemistry, Botany, and Zoology from Christ College, Bangalore, India, and then earned her Masters in Applied Microbiology from the Vellore Institute of Technology, India. Her project focused on the genomic basis of asthma. She then worked at the Indian Institute of Science on molecular phylogenetics of bufonid toads before joining the lab here.
Alberto de Rosa completed his M.Sc in Biodiversity and Evolution from the Università di Bologna in Italy. His thesis was on inter-specific pecking order of mixed groups of birds in semi-natural areas. He has since established an international network of collaborators to investigate the determinants of diversification in island Barn Owls in the Caribbean, with a focus on conservation applications of his research.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300805.79/warc/CC-MAIN-20220118062411-20220118092411-00360.warc.gz
CC-MAIN-2022-05
6,229
12
https://support.huawei.com/enterprise/en/knowledge/EKB0000270329
code
Version: R3680e, VRP 3.3 Release 0001. The R3680 uses different sub-interfaces to terminate different VLANs and connects the users to different VPN instances. Symptoms: packet loss occurs when users ping the gateway (the interface address of the R3680). Troubleshooting: 1. Check the links: no error-packet counters increase when packet loss occurs while pinging the gateway, which indicates the problem is independent of the physical links. 2. Move the user to the network segment managed by the S3026: no packet loss occurs when the S3026 pings the user, which indicates the problem is independent of the PC (i.e., the PC is not infected by a virus). 3. Connect the user to the original VPN; on the R3680, execute the debug ip packet ACL (user address and gateway address) command and perform a ping test. It is found that the first and third packets pass on the user side, while the second and fourth packets do not. According to the ICMP replies printed by the R3680, the first and third packets are sent out eth 1/0.4, while the second and fourth packets are sent out eth 1/0.2. 4. Check the configuration of eth1/0.2 on the R3680: one slave (secondary) address of the interface is the same as one on eth1/0.4. 5. Delete the slave address from eth1/0.2, and the problem is solved. Possible causes considered: 1. The links are problematic. 2. The user's PC is infected by a virus. 3. The R3680 fails in forwarding. Conclusion: R3680 VRP 3.3 0001 cannot distinguish private-network addresses with different MPLS VPN RDs; the version needs to be upgraded.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889173.38/warc/CC-MAIN-20201025125131-20201025155131-00300.warc.gz
CC-MAIN-2020-45
1,479
12
https://libraries.io/github/mscoutermarsh/exercism_coveralls
code
Application to support working through sequential programming problems, with crowd-sourced code reviews.

This is an experiment, and the code reflects that. Many features have been thrown in, only to be deprecated shortly thereafter, and there's scar tissue throughout the system. Features may be here today, gone tomorrow. Things do seem to have settled a bit in the past couple of months, so there's a chance that we'll reach some sort of 1.0 in early 2014.

The messaging right now is a disaster. The site is confusing, the process is opaque, and it's hard to figure out where you need to look to figure stuff out.

What we think we know

This is a process with two parts:
- practice (writing code, iterating)
- nitpicking (looking at code, providing insights and asking questions)

It's not about getting code perfect or right, but about using the pieces of code to talk about the little details of what makes code simple, readable, and/or expressive.

The warmup exercises are collected from all over the web. The common data for the assignments include some metadata that gets sewn into a README. Not all assignments will be appropriate for all languages. The actual assignment consists of a test suite, where all tests are pending except the first one.

The list of assignments for each language path is just a really big array of assignment slugs in the order that they will be assigned. Different languages/trails do not need to have the same assignments or the same order.

- Install PostgreSQL with brew install postgresql or apt-get install postgresql-9.2.
- Use the Ruby given in .ruby-version if you use a Ruby version manager such as RVM, rbenv or chruby.
- Install gems with bundler.
- Get a client id/secret from GitHub at https://github.com/settings/applications/new:
  - Name: whatever
  - URL: http://localhost:4567
  - Callback url: http://localhost:4567/github/callback
- Presuming you have Postgres installed (if not: brew install postgres):
  - create the db user
  - create the database with createdb -O exercism exercism_development
- Run the database migrations.
- Run the database seed with rake db:seed (use a bigger seed number if you want LOTS of data).
- Edit .env to fill in the correct values.
- Start the server.
- Log in at http://localhost:4567.
- You can view the mails sent in MailCatcher in your browser at localhost:1080.
- Work through 'Frontend development setup' below and run lineman for correct styling at http://localhost:4567.

Frontend development setup
- Install node and npm (OS X: brew install node; others see https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager).
- Install lineman via sudo npm install -g lineman.
- cd frontend and start lineman.
- Note: lineman watches for file changes and compiles them automatically; it is not required to be running for the server to run.

If you want to send emails, you will need to fill out the relevant environment variables in .env and uncomment the lines so that the variables get exported.

There's a script in bin/console that will load pry with the exercism environment loaded.

- Prepare the test environment with RACK_ENV=test rake db:migrate.
- Run the test suite with rake test.

To run a single test suite, you can do so with ruby path/to/the_test.rb.

If it complains about dependencies, then either we forgot to require the correct dependencies (a distinct possibility), or we are depending on a particular tag of a gem installed directly from github (this happens on occasion). If there's a git dependency, you can do this: bundle exec ruby path/to/the_test.rb

For the require, you'll need to figure out what the missing dependency is. Feel free to open an issue on github. It's likely that someone familiar with the codebase will be able to identify the problem immediately.

To enable code coverage run: COVERAGE=1 rake test

Browse the generated results in your browser.

Let Heroku know that Lineman will be building our assets. From the command line: heroku config:set BUILDPACK_URL=https://github.com/testdouble/heroku-buildpack-lineman-ruby.git

Thank you for wanting to contribute! Fork and clone. Hack hack hack. Submit a pull request and tell us why your idea is awesome. For more details, please read the contributing guide.

GNU Affero General Public License

Copyright (C) 2013 Katrina Owen, [email protected]

This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494349.3/warc/CC-MAIN-20200329140021-20200329170021-00492.warc.gz
CC-MAIN-2020-16
4,788
75
https://www.filmgoss.com/reviews/2000/the-new-new-thing
code
Author Michael Lewis sets out on a safari through Silicon Valley to find the world's most important technology entrepreneur, the man who embodies the spirit of the coming age. That man is Jim Clark, who is about to create his third billion-dollar company: he first created Silicon Graphics, then Netscape, and now Healtheon, a startup that may turn the $1 trillion healthcare industry upside down. We also get the inside story of the battle between Netscape and Microsoft.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00004.warc.gz
CC-MAIN-2023-14
480
1
https://bitcoin.stackexchange.com/questions/81273/dynamically-change-transaction-fee-using-delegate-service
code
My setup has three parts:
1. API service holding a bitcoind full node.
2. Client wallet - I have no control over that system.
3. User application - which gets the barcode for payments.
So my API (1) only has the receiving address of the client wallet (2). Let's say my client is a hotel: I want the API (1) to set the maximum fee for the transaction if the user (3) wants to get a room today, and the minimum fee available if the user sets a reservation for two weeks from now. Is it possible without any control over my client's (2) wallet?
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00374.warc.gz
CC-MAIN-2022-05
513
6
https://www.electrondepot.com/electrodesign/pic-dspic-development-904817-.htm
code
Dunno what a 'ds'PIC is; I only program PICs (12F 16F 18F) in asm. For that I use gpasm in Linux (it is part of gputils). Designed my own programmer (based on noppp by somebody else), and wrote my own programming software, which is on my site. I do not use Microchip MPLAB, I do not use ICE, I do not use any other slimulator. I debug via the PIC's serial port, and if it does not have one I write a soft one in ... PIC asm. Does this help? Over time you build a library of fast asm; there is the piclist website too, where you can find many asm code examples. The PICs I use are too small or impractical for any C programming. Sometimes I write the code in C on the PC and, if that works, translate it into 32 bits asm. Normally floating point is not needed, 32 bits is enough for most purposes; I use a 32 bit asm math library from somebody I found on PICLIST. But then I started coding in binary a long time ago, so for me asm is a high level language. I started with PICs cracking TV smartcards that had those in it. So the dirty secrets I should know... Was still legal back then. It is actually fun. PIC asm, and I do not even claim I know the whole 16F instruction set. Have fear? MPLAB, unless you prefer some GNU. I never had a problem with 8.x (8 bit). MPLAB X is C90/C99 and some things don't port over directly. If you use the Code Configurator, be sure you check to see if the device is supported; older devices are not. I'm sure the 32 bit dsPIC compiler has similar quirks. So if you inherit a project in 8.x, you'll need Win7 and ICD3; otherwise MPLAB X and ICD4. MPLAB X with their compiler XC16. It's a bit greedy with resources but okay on a modern machine. There is a free version with no optimization. I am not sure how well the DSP functionality is supported by the compiler; we just used it as a relatively fast 16-bit micro. The compiler is $995, but there is a 50% off discount coupon good to the end of this month.
Use Coupon Code : TP1932 This family is a bit long in the tooth, but if your customer wants it.. I wonder how much of a future 8-bit AVR has, unfortunately. I love 'em and use the ATtiny series all the time; they're IMO a significantly superior architecture to PIC wrt high-level C/C++ development. My time from concept to working prototype on breadboard is astonishingly low, sometimes just tens of minutes. But 8-bit PIC beats them on the long-term availability/support/"legacy" front. And 32 bit ARM is coming down in price and power consumption all the time, also astonishingly so. Thanks, those dev boards do look good and not at all expensive. So apparently there's no particularly good option for an optimizing C compiler plugged into 3rd party tools, e.g. Code::Blocks or Eclipse et al., for the platform? Or rather one that costs sub $1000 or $500. $500 is not unreasonable for a good closed-source optimizing C compiler if I'm going to end up doing development for this platform routinely, but more so for a one-off job. That may well be more about the people doing the design than anything inherent in the design process. The people I've met who like PICs seem to do so just because they "like" them rather than being able to explain any particular features or benefits of the PICs. While devices like the AVR often are justified because of a "modern" instruction set (which is not really an engineering evaluation), advocates can usually point to some real advantages. I'm not saying the PICs don't have advantages. I'm saying that when advocates can't justify their preference it makes me suspect other aspects of their work.
PIC microcontrollers are horrible to use and program, but their long-term availability is unrivalled. They are also incredibly robust. (We once had a customer who wanted help improving the temperature range of their board. It ran fine up to 135 C, but they wanted to get to 150 C. It turned out that the board had an 85 C qualified PIC on board - but that was not the problem, and it was quite happy at 150 C.) AVRs are not going away any time soon, but they might be getting more specialised. They are extremely popular in ASICs and custom chips - that was Atmel's main business, I believe. It is because of a zillion PICs versus 1 AVR sort of thing. And PICs were (are?) very popular with 'amateur' programmers. Those had questions - many real programmers too - and got good help. These days it is 'duinos, and moving towards 'berries. If your application has any serious I/O to the world, including internet, WiFi, USB, keyboard and monitor etc., raspis are the way to go now. I just configured an old raspi as a 4G access point for my LAN: 4G USB stick in it, 10 lines of code, bash script that is. Canceled my cable subscription and saved loads of $$$$$ a year. So far I found no difference in performance / speed; even youtube works. Lots of I/O via GPIO. Only for things that need very small space and / or very low power consumption maybe a PIC. Or for fun.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00390.warc.gz
CC-MAIN-2023-40
5,248
31
http://www.linuxquestions.org/questions/linux-newbie-8/suid-and-sgid-883721/
code
Hi, I have a problem with understanding setgid on a binary executable. I know that when the sgid bit is set on a binary executable file, it will run with the group permission of the binary file instead of that of the user who runs it. There are lots of examples available on the internet demonstrating suid permissions, but not sgid permissions. I was able to demonstrate suid permissions by calling a bash script from a compiled C program with the suid bit set. I have a file /tmp/1.txt with the following permissions:
ls -l /tmp/1.txt
-rwxr----- 1 root root 5 May 31 11:50 /tmp/1.txt
As you can see, only the owner and group users can read this file. I wrote a bash script '/tmp/read':
chmod u+x /tmp/read
ls -l /tmp/read
-rwxr--r-- 1 root root 28 May 31 11:50 /tmp/read
Then a small C program, call.c:
setuid( 0 );
system( "/tmp/read" );
make call
chmod u+s call
ls -l call
-rwsr-xr-x 1 root root 4828 May 30 05:55 call
Now normal users can execute '/tmp/call' with elevated privileges and read 1.txt. But I am unable to do the same with the sgid bit set. Can anyone provide me with an example like the above script to demonstrate sgid permissions? Please help ...
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170940.45/warc/CC-MAIN-20170219104610-00629-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,115
19
https://archive.sap.com/discussions/thread/146180
code
Current date in file name
I export a report daily to CSV using RSCRM_BAPI. I need to put the date of the extraction in the file name (e.g. export_csv_120506.csv). Is that possible? Something like export_csv_ + "sysdate"? Thanks in advance for your help,
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480272.15/warc/CC-MAIN-20190216105514-20190216131514-00609.warc.gz
CC-MAIN-2019-09
266
4
https://elastisys.com/enable-open-policy-agent-opa-policies/
code
To ensure that applications deployed in the Compliant Kubernetes environment follow operational best practices, Compliant Kubernetes includes a service called Open Policy Agent (OPA). It ships with a few policies that help ensure your application runs according to those best practices. None of these policies are currently enforced in your cluster, but we would like to enable them, so that all of your applications must follow them. The following are the policies we would like to enforce for now:
- Require resource requests: This requires all of your pods to have resource requests for CPU and memory. This helps ensure that your Kubernetes nodes are not overused and that pods are better spread out across all nodes. It also gives you an indication when additional nodes are needed, or existing nodes need to be resized. Read more about resource requests at our public docs: https://elastisys.io/compliantkubernetes/user-guide/safeguards/enforce-resources/
- Require network policies: This requires all of your pods to have a network policy attached. This helps ensure that your pods are protected from unexpected network communication to and from malicious actors. We do not place any restrictions on how these network policies are written, but our recommendation is to make them as restrictive as possible and only allow communication that is needed. Read more about network policies at our public docs: https://elastisys.io/compliantkubernetes/user-guide/safeguards/enforce-networkpolicies/
- Require trusted registries: This limits your pods to only use images from a specific list of container registries that you know and trust. This helps ensure that you (or a malicious actor in case of a breach) don't start to run unsafe images. You get to choose what is on that list; you can include whole registries (such as our Harbor) or specific images in specific registries (e.g. nginx from Docker Hub). Read more about limiting container registries at our public docs: https://elastisys.io/compliantkubernetes/user-guide/safeguards/enforce-trusted-registries/
- Disallowed tags: This disallows specific container tags from running in your pods. This helps ensure that you don't run unsafe tags or tags that might change image over time. You get to choose what tags are on this list, and you can disallow tags with a prefix using "*" (e.g. "dev-*"). Two common reasons:
  - The "latest" tag will often change, which can result in unknown behaviors if a pod restarts and downloads a newer image version.
  - Tags made for development should not be added to production clusters.
  Read more about disallowed tags at our public docs: https://elastisys.io/compliantkubernetes/user-guide/safeguards/enforce-no-latest-tag/
To see which of your resources are violating these policies, log in to your OpenSearch instance and, on the Discover page, filter constraint_kind = K8sRequireNetworkPolicy, K8sDisallowedTags (one policy at a time) in the kubernetes* index pattern, and then select the relevant fields on the left side.
Once we enable all or some of these policies, if you try to start a pod that does not follow the enforced policies you will get an error stating which policy you are violating and why.
To enable OPA policies please file a Jira ticket with the following information:
- The environment name to enforce OPA policies on (prod, dev, …)
- Which container registries you would like added to the list of trusted registries (we would recommend just using our Harbor registry).
- Which container tags should be disallowed (we would recommend disallowing "latest").
Get in touch with us: As customers, you can log in to the service desk, send us a message over Slack, email us, or call us!
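As an illustration of the "require network policies" safeguard described above (not from the original page; the namespace, labels, and ports below are placeholder assumptions), a restrictive NetworkPolicy for a hypothetical web pod might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-minimal      # placeholder name
  namespace: my-namespace      # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: web                 # applies to pods labeled app=web
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only the frontend may reach these pods
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database    # these pods may only talk to the database
      ports:
        - protocol: TCP
          port: 5432
```

Any pod covered by such a policy satisfies the safeguard; in practice you would tighten or extend the rules (e.g. allow DNS egress) to match what your application actually needs.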
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817491.77/warc/CC-MAIN-20240420060257-20240420090257-00161.warc.gz
CC-MAIN-2024-18
3,761
22
https://de.wikibooks.org/wiki/Serlo:_EN:_Overview:_Objects_in_Measure_Theory
code
Authors wanted for measure theory! We believe that mathematics is a living science and more than mere formalism. We want to carry this over to measure theory and are writing a new book on the topic here. Would you like to join? Then get in touch at [email protected]! In the following articles on measure theory, we will gradually introduce various mathematical objects. These articles tell a story in which we follow the possible considerations of a mathematician, so we introduce new objects only when we really need them. This article summarizes the objects in a compact way, so you can easily compare them. The basis of measure theory is always a large basic set Ω, for whose subsets A ⊆ Ω we want to define a measure μ(A), that is, a number that indicates how large A is. In many cases, however, not every subset is suitable for such an assignment; for instance, the Banach-Tarski paradox and the Vitali sets show this. Those sets to which we can assign a suitable measure will simply be called measurable. We put them into a set system 𝒜. So 𝒜 is a set containing sets itself (like a bag in which there are even more bags), e.g. 𝒜 = {∅, {1}, {2}, {1,2}} with Ω = {1,2}. To do computations with measures (i.e., addition, subtraction), we would like to perform operations with the sets, like unions A ∪ B, intersections A ∩ B or taking complements Ω∖A, and this without "getting kicked out of 𝒜", if possible. In mathematics, one therefore classifies set systems into different types, depending on how many operations we can perform without getting kicked out of 𝒜 and on other nice properties which they may satisfy: The σ-algebra is the most special and most frequently encountered set system type. Here we can "afford relatively many operations", which avoids problems of the kind "μ is not defined on this set". 𝒜 is a σ-algebra if and only if 1. Ω ∈ 𝒜; 2. for A, B ∈ 𝒜 we also have A∖B ∈ 𝒜; 3. if a sequence of sets A₁, A₂, A₃, … is in 𝒜, then also ⋃ₙ₌₁^∞ Aₙ ∈ 𝒜. An algebra (of sets) also satisfies these 3 axioms, but the 3rd is only required to hold for finite sequences A₁, …, Aₙ.
The "σ" here stands for a countably infinite union of sets. If one omits it, then "only" finite sequences are allowed and one gets a more general set system. That is, there are "more algebras than σ-algebras". A set system 𝒜 is an algebra if and only if 1. Ω ∈ 𝒜; 2. for A, B ∈ 𝒜 we also have A∖B ∈ 𝒜; 3. if a finite sequence of sets A₁, …, Aₙ is in 𝒜, then also A₁ ∪ … ∪ Aₙ ∈ 𝒜. A σ-ring (also denoted ℛ) satisfies all conditions of the σ-algebra, except for 1. That is, we also allow set systems containing only smaller sets; e.g., there could be a maximal set R ≠ Ω containing all A ∈ ℛ. A system ℛ is a σ-ring if and only if 1. for A, B ∈ ℛ we also have A∖B ∈ ℛ; 2. whenever a sequence of sets A₁, A₂, … lies in ℛ, then also ⋃ₙ₌₁^∞ Aₙ ∈ ℛ. Sometimes it is required as an additional condition that ℛ is not empty, i.e. ℛ ≠ ∅. As soon as a ring contains any set A, it always contains also the empty set ∅ = A∖A. A ring (of sets) is obtained equivalently by taking away from the σ-ring the σ-property. That means, we allow only finite unions in our condition. From the axioms of the σ-algebra only the 2nd and the 3rd in "finite" form are valid: for A, B ∈ ℛ we have A∖B ∈ ℛ, and if a finite sequence of sets A₁, …, Aₙ is in ℛ, then also A₁ ∪ … ∪ Aₙ ∈ ℛ. The Dynkin system 𝒟 is a separate type of set system. We will need it later to describe when measures coincide. The 3 axioms are: 1. Ω ∈ 𝒟; 2. for every pair of sets A, B ∈ 𝒟 with A ⊆ B we have B∖A ∈ 𝒟; 3. for countably many, pairwise disjoint sets A₁, A₂, … ∈ 𝒟 we have ⋃ₙ₌₁^∞ Aₙ ∈ 𝒟. Further set systems, which do not appear in the articles, are: The monotone class ℳ: a type of set system containing all limits of monotonically increasing or decreasing set sequences. That means: if A₁ ⊆ A₂ ⊆ … form a monotonically increasing sequence in ℳ, then ⋃ₙ Aₙ ∈ ℳ; if A₁ ⊇ A₂ ⊇ … form a monotonically decreasing sequence in ℳ, then ⋂ₙ Aₙ ∈ ℳ. The semi-ring 𝒮 is a generalization of the ring. The essential point is that A∖B no longer needs to lie in 𝒮, but only needs to be representable as a finite disjoint union of sets from it. The condition ∅ ∈ 𝒮 (which always holds on non-empty rings) must thus be required separately. Instead of union stability one demands intersection stability: for A, B ∈ 𝒮 we also have A ∩ B ∈ 𝒮. On the set systems defined above, we now try to define functions μ (or μ*) that intuitively measure the "size" of a set.
The intuition "measure a size" can be translated into several desirable properties. For example, an empty set should have size 0, so μ(∅) = 0. The more of these properties hold, the more the function matches our intuition of measuring the size of sets. Depending on how many and which of these desirable properties are satisfied by μ, we divide these functions into different classes. The most specific class is the measure, which has relatively many good properties and is therefore often used in mathematics, e.g., with μ(Ω) = 1 in probability theory and statistics. A measure is a function μ on a σ-algebra 𝒜 with the quite intuitive property that the empty set has measure 0 and, when joining sets that do not overlap, their measures must also be added: μ(∅) = 0, and μ is σ-additive, i.e., μ(⋃ₙ₌₁^∞ Aₙ) = Σₙ₌₁^∞ μ(Aₙ) for pairwise disjoint A₁, A₂, … ∈ 𝒜. A pre-measure is in principle the same as a measure, but needs to be defined only on a σ-ring ℛ. So the set Ω can be in ℛ, but doesn't need to be. Thus it holds that μ(∅) = 0 and μ is σ-additive, i.e., μ(⋃ₙ₌₁^∞ Aₙ) = Σₙ₌₁^∞ μ(Aₙ) for pairwise disjoint A₁, A₂, … ∈ ℛ. A volume on a ring ℛ is a kind of "measure without the σ-property for additivity". So we require the additivity only for finite unions: μ(∅) = 0, and μ(A₁ ∪ … ∪ Aₙ) = μ(A₁) + … + μ(Aₙ) for pairwise disjoint A₁, …, Aₙ ∈ ℛ. The following two classes of functions are not additive, but only sub-additive, and therefore get their own letter μ*. I.e., if one unites, for example, a set with μ*(A) = 1 and one with μ*(B) = 1, there could be μ*(A ∪ B) = 3/2 (or we could have any number between 0 and 2)! This contradicts our intuition of "measuring a size". Therefore we give these functions a separate symbol μ*. The outer volume on the power set 𝒫(Ω) is defined in analogy to the volume above. But instead of additivity, one only requires sub-additivity: μ*(∅) = 0, and μ* is sub-additive, so from A ⊆ A₁ ∪ … ∪ Aₙ we get μ*(A) ≤ μ*(A₁) + … + μ*(Aₙ). An outer measure on the power set 𝒫(Ω) is the σ-version of the outer volume. That is, sub-additivity is required even for countably infinite unions instead of finite unions. Thus the sub-additivity becomes a σ-sub-additivity, where the σ stands for "countably infinitely many".
μ*(∅) = 0, and μ* is σ-sub-additive, so from A ⊆ ⋃ₙ₌₁^∞ Aₙ we get μ*(A) ≤ Σₙ₌₁^∞ μ*(Aₙ). Examples: separating the set system definitions. The definitions of the set systems above are quite abstract, and it is not obvious why some of them might not be equivalent. In the following examples, we will see how to separate the set system types. That means, we find examples that are of one but not of a second type, respectively. In addition, you may find some (out of many possible) visualizations for those abstract definitions. Example (rings vs. σ-rings): We construct a set system in which "limiting sets" are not contained. Let a₁ < a₂ < a₃ < … be a strictly increasing sequence of real numbers converging to 1; these are elements of the interval [0, 1). We choose the basic set to be slightly larger, Ω = [0, 2] (this is perfectly allowed), and define the set system ℰ = {[aₘ, aₙ) : m ≤ n}, i.e., all half-open intervals with endpoints from the sequence. This is not yet a ring. But we can make it a ring: let ℛ be the set system of all finite unions and intersections of intervals from ℰ (which are again finite unions of half-open intervals with the aₙ as endpoints). The system ℛ is a ring (the "ring generated by ℰ"), but it is not a σ-ring: the limiting sets [aₘ, 1) = ⋃ₙ₌ₘ^∞ [aₘ, aₙ) are not included. However, we can turn ℛ into a σ-ring by including the "limiting sets" [aₘ, 1) for all m. The resulting set system ℛ_σ is again a σ-ring (and thus also a ring). So the system ℛ is "only" a ring (of sets), while the system ℛ_σ is at the same time a σ-ring. Example (σ-algebras vs. σ-rings): We copy the setting from the previous example: the set system ℰ consists of all intervals [aₘ, aₙ) with endpoints in the sequence (aₙ). We choose the basic set to be Ω = [0, 2] (see figure). The set system 𝒜 is a σ-algebra; the set system ℛ_σ is "only" a σ-ring. The set system ℰ generates a ring ℛ (by finite unions and intersections); joining it with the limit sets [aₘ, 1) yields a σ-ring ℛ_σ. However, the set system ℛ_σ is not a σ-algebra, since it does not contain the basic set Ω = [0, 2]. In a sense, the set system ℛ_σ is "too small". We can make it a σ-algebra: for this we add Ω and the empty set ∅, as well as the complement sets Ω∖A for all A ∈ ℛ_σ.
From these sets we again form all arbitrary countable unions and intersections and add them to the set system. The result is a \(\sigma\)-algebra \(\mathcal{A}\) consisting of arbitrary countable unions of half-open intervals, with endpoints in \(\{a_m : m \in \mathbb{N}\} \cup \{1, 2\}\). The only difference to \(\mathcal{R}_\sigma\) is that 2 is also allowed as an endpoint and the empty set is included. \(\sigma\)-algebras can hence be thought of as larger than mere \(\sigma\)-rings. We can make \(\mathcal{A}\) even larger by adding a finite set \(E\) of arbitrary points to the set of allowed endpoints. From this we can construct a "larger \(\sigma\)-algebra" \(\mathcal{A}_E\) by choosing as a set system the arbitrary countable unions of half-open intervals between the endpoints (which is shown in the figure above).

Example (\(\sigma\)-algebras vs. algebras)

We use again the setting of the two examples above: On the interval \([0, 2)\) we define a sequence of endpoints \(a_n = 1 - \tfrac{1}{n}\). The set system \(\mathcal{R}\), which consists of arbitrary finite unions of intervals of the form \([a_m, a_n)\), is a ring of sets. However, it is not a \(\sigma\)-ring. If we add 1 to the possible endpoints (that is, \(a_1, a_2, \ldots\) and 1 are allowed as endpoints of half-open intervals), and also allow countably infinite unions, we obtain a \(\sigma\)-ring \(\mathcal{R}_\sigma\). However, it is not a \(\sigma\)-algebra, since it does not contain \(\Omega = [0, 2)\). If we add an additional 2 to the possible endpoints, we get a \(\sigma\)-algebra \(\mathcal{A}\). Now, if we add finitely many more arbitrary endpoints, we still end up with \(\sigma\)-algebras, which are, however, larger, and will be denoted by \(\mathcal{A}_E\). The \(\sigma\)-algebras \(\mathcal{A}\) and \(\mathcal{A}_E\) can be very easily transformed back into an algebra by taking out the endpoint 1. We define the set system \(\mathcal{A}'\) to contain any countable unions of intervals with the same endpoints as those from \(\mathcal{A}\), except 1. Then \(\mathcal{A}'\) is an algebra, because it contains \(\Omega\). But taking out the endpoint 1 destroys the \(\sigma\)-property. That is, a countable union of sets (which has, say, 1 as an endpoint of its union) need no longer be in \(\mathcal{A}'\). Similarly, we can transform the \(\sigma\)-algebra \(\mathcal{A}_E\) into an algebra by removing the endpoint 1. The \(\sigma\)-property will then no longer hold. The system \(\mathcal{A}\) is a \(\sigma\)-algebra. If the endpoint 1 is taken away, "only an algebra" remains.
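As a concrete sanity check of the \(\sigma\)-additivity requirement from the definitions above (this example is not part of the original article), consider the counting measure on the power set \(\mathcal{P}(\Omega)\), which assigns to a set the number of its elements:

```latex
% Counting measure: \mu(A) = |A| for finite A, and \mu(A) = \infty otherwise.
% It satisfies \mu(\emptyset) = |\emptyset| = 0, and for pairwise disjoint
% sets A_1, A_2, \ldots it is \sigma-additive:
\mu\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr)
    = \Bigl|\bigcup_{n=1}^{\infty} A_n\Bigr|
    = \sum_{n=1}^{\infty} |A_n|
    = \sum_{n=1}^{\infty} \mu(A_n)
% since the cardinality of a disjoint union is the sum of the cardinalities.
```

Because it is defined on all of \(\mathcal{P}(\Omega)\), which is itself a \(\sigma\)-algebra, the counting measure is at once a measure, a pre-measure, and an outer measure.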
This article is licensed under the free license CC BY-SA 3.0. You may use, modify, or share it freely, as long as you name "Serlo" as the source and put your changes under CC BY-SA 3.0 or a compatible license.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100873.6/warc/CC-MAIN-20231209071722-20231209101722-00504.warc.gz
CC-MAIN-2023-50
10,715
71
http://www.ifans.com/forums/threads/new-way-to-read-ebooks-pdfs.23651/
code
I just downloaded the zombie survival guide PDF from a different thread but couldn't convert it to HTML. But then I remembered I could read PDFs on my iPod. So here's how to do it:
- install Apache from Installer.
- go to localhost in Safari.
- a little Apache info page should appear. If so, continue.
- using any method, go to library/webserver/documents and delete everything (you can keep apache_pb.gif, but that's all).
- now put your PDFs into this folder and go to 127.0.0.1 in Safari (you don't need to have wifi here, you are connecting to your own iPod). You will see a list of all your PDFs.
I found that the zombie book isn't hard to read, and it numbers the pages for you! Plus, of course, there's pictures in books now!
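For anyone following along, the delete-and-copy steps can be sketched as a shell session. This is just an illustration: a scratch directory stands in for library/webserver/documents so it is safe to run anywhere, and the PDF is a placeholder file, not the real book:

```shell
# Scratch directory standing in for library/webserver/documents.
DOCROOT="$(mktemp -d)"

# Pretend Apache's default pages are already there.
touch "$DOCROOT/index.html" "$DOCROOT/apache_pb.gif"

# Delete everything except apache_pb.gif, as the post suggests.
find "$DOCROOT" -mindepth 1 ! -name 'apache_pb.gif' -delete

# Put your PDFs into the folder (placeholder file here).
printf '%%PDF-1.4' > zombie_guide.pdf
cp zombie_guide.pdf "$DOCROOT"/

# Apache's directory index at http://127.0.0.1/ would now list these files:
ls "$DOCROOT"
```

Browsing to 127.0.0.1 then shows the directory listing, and tapping a PDF opens it in Safari's built-in viewer.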
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540915.89/warc/CC-MAIN-20161202170900-00347-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
720
1
https://www.express.co.uk/showbiz/tv-radio/997086/John-Cleese-Monty-Python-BBC-comedy-Fawlty-Towers-Terry-Gilliam-Michael-Palin
code
Monty Python’s brand of humour was revolutionary and was the source of inspiration for many of today’s modern comedies. The British sketch comedy show was groundbreaking and unlike anything that had come before it. The sketches were written and performed by Graham Chapman, John Cleese, Terry Gilliam, Eric Idle, Terry Jones and Michael Palin and they went on to make five films. The show was broadcast on the BBC between 1969 and 1974 and was a hit around the world and has remained revered until this day. However, the BBC hasn’t shown a single episode from the show, or any of the movies, for nearly twenty years. Actor John has now claimed exactly why this is the case. John, 78, has recently revealed the real reason why the BBC has been so averse to airing the classic comedy show. Cleese revealed that the “only explanation” is because the show is funnier than the BBC’s latest shows. “It's not been shown for 17 years, maybe it's too funny,” Cleese told Radio Four’s Today programme. “It might not contrast well with some of the comedy they're doing now, that's the only explanation I've got. John Cleese reveals REAL reason why the BBC won’t air Monty Python “People might not laugh at modern comedy.” The actor also took the time to defend claims that Monty Python wasn’t diverse enough. “Terry has decided he is a black lesbian,” he said. “And Graham Chapman - I'm not allowed to use the word p***, what do I say? “Graham was homosexual and also dead, so there's a certain amount of diversity,” he finished. The head of BBC comedy, Shane Allen, revealed recently that Monty Python wouldn’t be made today. “If we're going to assemble a team now it's not going to be six Oxbridge white blokes. It's going to be a diverse range of people who reflect the modern world and have got something to say that's different and we haven't seen before,” he said. Cleese also recently revealed that he’s moving to the Caribbean, stating that he’s “fed up” with the political climate in the UK. 
The comedian claimed that he was angered that the United Kingdom failed to bring in proportional representation or order a second Leveson inquiry. He revealed that he'd be swapping the grey British skies for the Caribbean’s tropical weather sometime in November. Express.co.uk has contacted the BBC for comment.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00013.warc.gz
CC-MAIN-2019-47
2,355
22
http://www.mikelisi.me/2015/09/derbycon-ctf-crypto-challenge.html
code
Discovering the challenge took a bit. On one of the hosts in the /16 scoped network there was a mocked-up university website, pwnedu. In a subdirectory of the site was a list of faculty pages. One of the faculty members had a few subfolders in their personal directory. One of these was a folder called 'crypto' in the 'homework' directory. There were 3 files of interest here: Two of these files, Assignment.cipher.txt and secret.txt, were encrypted and thus unreadable. The other file, Assignment.plain.txt, contained readable text, but the layout was... interesting. In order to solve the challenge, you needed to decipher the encrypted message in the secret.txt file. The problem was that you had no information on exactly what was used to encrypt the message, or any key information. You had to figure out how to use the two 'Assignment' files to extract the secret message. Let's start with what we can actually read, the Assignment.plain.txt file. As I mentioned previously, the layout was interesting at first glance. Each of the words in the file seemed to be laid out in columns, which made it difficult to read. So I made a copy and stripped it down to understand what was written inside. Stripping out the extra whitespace yielded the following: For this assignment your task is to take on the role of a code breaker. You will use what we learned in class about crypto analysis , and some comon cryptographic operations such as XOR. You will also need to utilize your knowlege of block ciphers such as AES and DES and the various modes these ciphers can be utilized in , especially ECB or , electronic code book. It will also help to understand other modes such as CTR , or counter. An encoded copy of this assignment is provided. You must use this plain text to perform a know plain text attack. Comparing the content of this message with the provided cipher text copy will allow you to discover enough information to enable you to decipher the contnet secret.txt.
Follow the instructions in secret.txt to reveal the FLAG. Come prepared to identify the flag in class Tuesday. A Cryptographers tale. Once a upon a time Alice had a message she wanted to send Bob. Alice did not want Jim to be able to read the message. She also need to send a meesage to Jim , that ideally Bob would not read. Alice decided the best solution was to protect her message with AES. Unfortunitly for Alice her computer is very slow and has no dedicated cryptographic hardware. It was a wire wrap hand built affir using individual transistors and a number of toggle switchs for Alice 's father was convinced all integrated circuits were bugged with listening devices and would not all Alice to have anything in the house that utilized them. He had not been right since the war. Alice accepted this though because she felt all the tin foil clothing he provided her was quite stylish , she liked her Sunday hat in particular. Things are what they are she thought. The letter she needed to send Bob was very long. After several long afternoons writing out her AES implementation in assembly , desk checking it two times , she was tired. Alice knew she should probably implement CBC or CTR modes but the thought of many hours ahead of her converting her assembly to binary before she could even start entering the program on the toggles lead Alice to decide to just go with ECB and hope Jim would not be able to break the code. Jim was the sort of idiot who could hardly count anyway. Alice finished entering her program and letters one character at a time. clearing the register each time. Finally Alice was able to send her messages to both Bob and Jim in relative safety. She skipped a number of spaces and punctuation to save time. Alice was so excited by Bobs reply she could hardly put it down. Alright, so there's a bunch of information here, but the key pieces to solving it are as follows: Block Ciphers, Electronic Codebook, and known-plaintext attack. 
When you're encrypting something using an ECB cipher, each block of data is encrypted individually with the provided key. This is different from other encryption modes like Cipher Block Chaining (CBC), where the results of encrypting the previous block are used when encrypting subsequent blocks. This means that, in ECB mode, if you have two blocks of data that are identical, the resulting ciphertext will be the same if the key used to encrypt the plaintext blocks is the same. The original formatting yields the first clue. Each of the words is padded to 16 characters (128 bits), which is a typical block size. Any space not utilized by the letters of the word is filled with spaces. That means that when the message is encrypted, each word then becomes a separate block. We can use the contents of Assignment.plain.txt and Assignment.cipher.txt to map a plaintext word to its encrypted equivalent. To verify, let's take a look at Assignment.cipher.txt. Here, I read in the file and spat out the ciphertext in hexadecimal representation, separated into blocks so I could match up the plaintext word to the ciphertext output. The first word in the plaintext file is "For". Using the first block of ciphertext from the encrypted file, the result was "e2ca86791386af2771f05dc2fcb15eac". Unfortunately the word "For", including the capital F, doesn't show up again in the plaintext. Case matters! So another block was needed to compare. There are multiple instances of the word "for" in all lowercase in the plaintext file. And in both cases, the corresponding encrypted block showed up in the ciphertext file as "9973ea1b6d5fdec957798914dce2b09d". Progress! Using that information, each plaintext and ciphertext block was then mapped out so that I could do a lookup to determine what plaintext word was being represented by the ciphertext block. With the hard part done, the decryption of the secret message was now possible.
The process was something like this: read in each encrypted ciphertext block from secret.txt, and match the ciphertext hex output with the mapped file above. Whichever word matched the encrypted block was the same word in secret.txt. The plaintext contents of secret.txt were: "count the number of times Bob, Jim and Alice are in the tale. the FLAG is Alice Jim Bob and the count of each with no spaces." Therefore, the flag was AliceJimBobxyz, where x, y and z represent the number of times the words Alice, Jim and Bob appear in the Assignment.plain.txt file. A quick search yields the following: 12 instances of the word Alice, 5 instances of the word Jim, and 5 instances of the word Bob, resulting in the final flag of AliceJimBob1255, netting a cool 350 points. Thanks to Anthony (@amillerrhodes) for helping me with the Python-fu required for parsing all the text for this challenge.
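For illustration, the lookup attack described above can be sketched in a few lines of Python. Since no key is needed to demonstrate the principle, a toy deterministic block function stands in for AES-ECB here (the only property the attack exploits is that identical plaintext blocks always produce identical ciphertext blocks), and the word lists are made up, not the real challenge files:

```python
import hashlib

BLOCK = 16  # 128-bit blocks, as in the challenge


def toy_ecb_block(block: bytes) -> bytes:
    """Deterministic stand-in for AES-ECB encryption of one block.

    Same input block -> same output block, which is the ECB property
    the attack relies on; the real challenge used actual AES."""
    return hashlib.sha256(block).digest()[:BLOCK]


def pad_word(word: str) -> bytes:
    # Each word is padded to 16 characters with spaces, so it fills one block.
    return word.ljust(BLOCK).encode()


# Known plaintext words and their ciphertext blocks (one block per word).
known_words = ["For", "this", "assignment", "for", "the", "FLAG"]
known_cipher = [toy_ecb_block(pad_word(w)) for w in known_words]

# Build the codebook: ciphertext block -> plaintext word.
codebook = {c: w for c, w in zip(known_cipher, known_words)}

# "Decrypt" a secret message built from the same word-blocks via lookup.
secret_cipher = [toy_ecb_block(pad_word(w)) for w in ["the", "FLAG", "for", "this"]]
recovered = " ".join(codebook[c] for c in secret_cipher)
print(recovered)  # -> "the FLAG for this"
```

With the real files, the codebook would be built by zipping the whitespace-stripped words of Assignment.plain.txt with the successive 16-byte blocks of Assignment.cipher.txt, then looking up each block of secret.txt.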
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804965.9/warc/CC-MAIN-20171118132741-20171118152741-00618.warc.gz
CC-MAIN-2017-47
6,773
48
https://www.natmedtalk.com/threads/lance-armstrong.21949/
code
As much as I don't like DCA, I save the most interesting sites related to alternative cancer treatments for that just-in-case moment. Minds can be changed when the situation arises. This might give you some info: http://www.thedcasite.com/

I have just sourced DCA, which I have been very keen to try for a while now, but I need to know if I can take it alongside the laetrile, as the laetrile produces cyanide when consumed and I'm not that understanding of the chemistry involved. If anyone can give me advice it would be greatly appreciated. Best regards, paule.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00225.warc.gz
CC-MAIN-2021-04
559
1
https://www.2ukev.com/
code
Hi, I'm Kevin Corcoran. I am an experienced E-Learning Developer, and Instructional Designer. The goal is to make learning motivating, memorable, meaningful, and measurable for all learners. This Storyline 360 contains xAPI, an array of branching scenarios, variable loops, and a four-digit timer to provide an engaging Star Wars trivia game and quiz. Microlearning: Data Security Breach This Storyline 360 is an interactive drag-and-drop with video. This drag and drop uses an array of advanced variables to engage the viewer using the S.T.A.R interview method in a whole new way!
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819067.85/warc/CC-MAIN-20240424045636-20240424075636-00006.warc.gz
CC-MAIN-2024-18
581
5
https://www.mapleprimes.com/questions/227540-Hints-On-When-NOT-To-Use-Simplify
code
I have been using simplify() in a number of places, not really expecting it to do any harm. At worst, it will have no effect, or it will change the expression to a different form, but the semantics will remain the same. Until I noticed that odetest() fails on some of my solutions because I called simplify on the solution before. One example of why this happens: Maple simplifies cos(2*x)*sqrt(1/cos(2*x)^2) to csgn(1/cos(2*x)), and this makes odetest fail. Adding assuming x::real has no effect on making odetest happy. So now I have changed simplify(sol) to simplify(sol, size), and this seems so far not to have this adverse effect. My main reason for calling simplify is to make the expression smaller. That is what I do in Mathematica, but in Mathematica there is no "size" option to Simplify. So now I am very worried about calling simplify() as is. Could some Maple experts share some of their experience on this? Should one call simplify() only with an explicit option, like size, trig, exp, etc., and not call simplify as is?
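For illustration, here is a minimal sketch of the behavior described above (the csgn result is the one reported in the post; exact output may vary between Maple versions):

```maple
sol := cos(2*x)*sqrt(1/cos(2*x)^2):

# Plain simplify may rewrite the expression using csgn,
# which is what made odetest fail here:
simplify(sol);        # reported to return csgn(1/cos(2*x))

# Restricting simplification to size keeps the semantics
# while still shrinking the expression:
simplify(sol, size);
```

The size option only rearranges the expression to reduce its length, so it cannot introduce sign functions like csgn the way a full simplify can.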
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336880.89/warc/CC-MAIN-20221001163826-20221001193826-00042.warc.gz
CC-MAIN-2022-40
1,044
7
http://www.softwareok.com/?page=Windows/Tip/AlwaysMouseWheel/1
code
Master sound volume control via the mouse wheel on the taskbar in Windows 11/10! You know the volume control in the taskbar: the volume can be adjusted there under Windows 11, 10, 8.1, ..., so why not, if it's that easy! The only drawback is that if you are not accurate, adjusting the volume is really awkward. Otherwise you can adjust the volume between 0% and 100%. If you find the volume control too impractical, change it over the entire taskbar area. Regardless of whether you are listening to music or watching YouTube videos, this may turn out to be a practical volume adjustment for the Windows sound.

Change the Windows 11, 10, ... volume over the entire taskbar area!
1. Start AlwaysMouseWheel
2. Activate the mouse wheel volume controls on the taskbar
3. If you now turn the mouse wheel over the taskbar, the volume will be adjusted.

(Image-1) Master sound volume control via mouse wheel on the taskbar!

The free tool lets you use the mouse wheel as a volume control. Just hold the mouse over the taskbar and turn the mouse wheel, and the music will start to get louder or softer. Thus, you can adjust the volume of your system quickly and easily by simply turning the mouse wheel.

Updated on: 13 October 2021 21:42
Questions for this:
- Simply turn the tool without installation as a free program with the mouse pointer on the screen and turn the volume down or up with the mouse wheel?
- Change the Windows 10 volume without a new add-on program?
- About WheelsOfVolume?
- Do you need a brief description to control the volume on the PC using the mouse wheel?
- Actually I noticed that the percentage popup *only* stops appearing for good if I put the mouse over the secondary monitor's taskbar and scroll the wheel (which again does not work to adjust the volume at all, but it seems to be breaking that pop-up when I try to, in that the popup stops appearing after that, even when I do change the volume scrolling over the main taskbar). Hopefully this makes sense. Also I noticed that sometimes when using the volume scroll on the main taskbar that does work, randomly the little volume percentage pop-up window will stop showing. I can see that scrolling the wheel is still changing the volume, but no more popup unless I close and re-open the app completely. - Control the volume faster with the mouse wheel? - Can I adjust the volume on my computer with loud music (high tones) in seconds? - Can you use the mouse wheel as a volume control? - Can I just hold down a button and turn it with the mouse wheel and turn the music up or down? - Is the volume control faster than with Windows Tools? - Is it possible if I use the mouse wheel when scrolling playback device and sound settings to change the volume on the PC? - Looking for a free tool that can control sound volume or volume using the mouse wheel as a volume control? - The volume control in the Windows taskbar is a small loudspeaker symbol there I can adjust the volume very badly with the slider, is there anything better? - Loud and quiet with the mouse wheel on Windows?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00423.warc.gz
CC-MAIN-2024-18
4,826
39
http://aramaicnt.org/articles/problems-with-peshitta-primacy/
code
For those of you who are not familiar with Peshitta Primacy, it is the belief that the Syriac Peshitta (the Syriac Bible) is the original text of the New Testament. 1 It is a movement that first gained traction with the works of the late George Lamsa, and is primarily a position popularized by individuals within the growing Messianic Judaism movement in North America as well as the official position of the Assyrian Church of the East. On its face, there are some rather compelling arguments to the layman that have to do with places where the Syriac text displays some interesting quirks of idiom (such as wordplay, puns, or ambiguous meanings) that the Greek text of the New Testament, as we have it today, misses or potentially mistranslates. In truth, many of these phenomena are due to the fact that the tradition they represent dates back to an Aramaic source that was translated and incorporated to the New Testament some time during its transmission, and since the Peshitta is in a dialect of Aramaic they come across more clearly; however, this does not necessarily point to the Peshitta being the original. When we look at the New Testament in light of Jesus’ own dialect (early Galilean Aramaic, a dialect quite different from Syriac), we can find places where such phenomena (wordplay, puns, mistranslations, etc.) exist that are not present in the Peshitta. This article will grow, over time, to house an annotated list of such possibilities. He Who Lives By The Sword In Matthew 26:52 we have a scene where Jesus rebukes Peter for being rash: Then said Jesus to him, Put up again your sword into his place: for all they that take the sword shall perish with the sword. In the Greek New Testament, the bolded part above reads thus: παντες γαρ οι λαβοντες μαχαιραν εν μαχαιρη απολουνται pantes gar hoi labontes mahairan en mahairê apolountai For all who did take a sword, by a sword they shall die. 
And in the Peshitta it reads thus: kulhun ger hanun da-nsab saife, b-saife n’muthun For all of those who take up swords, with swords they shall die. Note how the Peshitta renders “swords” (in the plural) rather than “sword.” This is very curious, because a plain retro-translation back into Galilean Aramaic reads: bagin kal d-nsab saiyp, b-saiyp 2 yimuthun For everyone who took up a sword, by a sword (OR “in the end”) they shall die. In Western Aramaic dialects the word saiyp can mean either “sword” or “end.” Given the context, this wordplay is undoubtedly intentional, and the use of b-saiyp as “in the end” is well attested in Rabbinic literature. 3 The Greek, of course, misses this right off the bat. Furthermore, this double meaning does not occur in Syriac, or other Eastern dialects 4 from the era, so the Peshitta misses it completely, instead choosing to render both instances of saipa in the plural (which makes the pun impossible). This wordplay also does not occur in Hebrew. As such, the “original” version of Matthew cannot be from the Syriac Peshitta.

1. In contrast, most scholars conclude that the Peshitta is a ~4th-century translation from the Greek.
2. Or /b-sof/, same root.
3. For both /b-saiyp/ and /b-sop/: the Palestinian Talmud, Targum Neophyti, and others.
4. Both “Eastern” and “Western” Syriac are actually Eastern Aramaic dialects. Eastern and Western in their case are relative to each other rather than the Aramaic language family as a whole.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701943764/warc/CC-MAIN-20130516105903-00003-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
3,491
24
https://twogreyhoundtown.blogspot.com/2008/09/gas-shortage.html
code
I am not sure how I missed the problem that some of the southern states are having getting gas. People are waiting in very long lines to fill up. I heard that 1 out of 6 stations are dry. This is due to Hurricane Ike. We were warned that this could happen in my state, but thankfully it didn't. My heart goes out to all of you that are facing this problem.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.62/warc/CC-MAIN-20180718213135-20180718233135-00192.warc.gz
CC-MAIN-2018-30
356
2
https://cboard.cprogramming.com/cplusplus-programming/27042-random-numbers-cplusplus.html
code
I noticed that in BASIC, if you used the RND function to fill an array of, say, 10 elements, it would randomly fill the array with numbers. However, if you ran the program again, those same numbers would always show up. Does this happen in C++? I don't know what the function is, but I can find that pretty easily, so don't worry about that. Thanks if you can help.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00505-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
361
1
https://forumfiles.thegamecreators.com/thread/226818
code
Quote: "I think this has got a bit too "hot" when nothing nasty was meant." I never got overly upset, just disappointed. I am not trying to continue any disagreement or cause further trouble here, just trying to explain my position as I did in my last post when addressing the comment and question quoted. The "Any ideas?" part of his post was taken on my part that any kind of help offered would be acceptable. In my first response, I not only made a point to make it clear I was guessing, but also made a point to state why my guess was being made. Only to have the logistics of my idea questioned to the point that I must have been thoughtless and even to the point of asking if I were suggesting things that I did not even say. This may have not been intended to give offense, but was certainly not any measure of appreciation for any idea offered, and also implied that it may have caused further frustration for some reason. I was surprised at the slap, and I did not request it or complain about his response to anyone in any way. However, when reading it, I felt compelled to at least explain my stance pertaining to the quote given, and to make it clear that if my ideas or attempts to help are unwanted, then I will oblige to such a request. Not in an effort to not be helpful in the future, but to make a point that any help offered, even if it doesn't resolve the issue, was being done so in a friendly gesture, and while appreciation is not expected, ridicule or persecution for not being right may leave the one trying to help ( and/or others ) in a position of not wanting to help if they know they will be questioned for their sincerity or logistics. 
( a simple "No, that was not it" would suffice) Had I said, you need to do this, or you should do this, then I would have better understood his frustration with my idea not helping, but when I had made clear it was a guess and what led me to guessing it, a response of that makes no sense seemed like an attempt to insult, especially when taken to the point of asking if I were suggesting flags that I never mentioned. Point is, when you specifically ask for "Any Ideas?" then you may get some offered that might not be the best, but that does not mean the person's idea is stupid or not intended to be helpful, which is what his response implied. Quote: "Can we all be a bit more accepting and helpful?" Many times I have tried to be helpful unsuccessfully, but once in awhile I do have a good idea or suggestion that does resolve an issue someone is having. Just one instance of being helpful makes up for countless other instances of not finding a solution, as the reward is seeing someone find a way to resolve their problem. I am not the only one who has tried to help others and failed, and I expect no praise for success, but ridicule for trying is not going to get many offers of help from others in the future, and that was the point I was trying to get across. Understood he is a fairly new user, and that many times text can be misinterpreted, and you all know I struggled with forum etiquette as a new user. So while my comment of not offering help in the future may have sounded a bit harsh, believe it or not, it too was in an effort to help him. Yes Rick, I will try to be more accepting and helpful, and I apologize if my comments made in an effort to explain my position added any fuel to the fire as that was not my intention. Coding things my way since 1981 -- Currently using AppGameKit V2 Tier 1
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00308.warc.gz
CC-MAIN-2023-14
3,483
20
https://www.gethabitcoach.com/idea/414/
code
The current work system forces us to be at work eight hours daily, no matter how much we actually work. This has resulted in people being only as productive as they need to be. People have gotten into the habit of being busy and not necessarily productive. Here is a new way of looking at it: laziness is not when you do not work, but when you work for work's sake. Instead of being at work for eight hours and keeping yourself busy, spend five hours there being as productive as possible. You can save a huge amount of time if you focus on things that are important instead of keeping yourself busy. Time is the most precious asset; don't waste it! When you work, make this time as productive as you can. For example, don't waste hours handling unimportant emails. Work fewer hours while producing more. It might be hard, because our culture tends to reward personal sacrifice instead of personal productivity.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578636101.74/warc/CC-MAIN-20190424074540-20190424100540-00377.warc.gz
CC-MAIN-2019-18
907
6
https://www.cita.utoronto.ca/events/presentation-archive/?talk_id=904
code
Constraining the reionization history with quasar absorption lines Laura Keating (CITA) December 02, 2019 Abstract: The epoch of reionization marks the last major phase transition of the Universe, when photons emitted by the first structures ionized and heated the gas surrounding them. A complete understanding of reionization would reveal the properties of the first stars and galaxies, as well as increasing the precision to which the high-redshift intergalactic medium can be used as a cosmological probe. In this talk I will present results from radiative transfer simulations of cosmic reionization, which were carefully calibrated to reproduce the mean flux of the Lyman-alpha forest below redshift 6. I will show that matching the observed mean flux requires reionization to have ended later than previously thought. I will demonstrate that future observations of high redshift quasars have the potential to constrain the entire latter half of reionization from quasar absorption line data alone.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00396.warc.gz
CC-MAIN-2023-14
1,004
4
https://blog.jetbrains.com/phpstorm/2023/10/the-pest-plugin-is-now-maintained-by-jetbrains/
code
The Pest Plugin Is Now Maintained by JetBrains You’ve probably heard about the Pest test framework developed by Nuno Maduro and the community. There is also a dedicated PhpStorm plugin for Pest, which until now has been brilliantly developed by Oliver Nybroe. In this post, we will talk about how JetBrains is going to maintain the plugin and what that means for the community. The Pest plugin will be bundled with PhpStorm starting with v2023.3. It will be developed by JetBrains but stay open source. We want PhpStorm users to get as much use and value out of the box as possible. That’s why we offered Oliver Nybroe, the author of the Pest plugin, the chance to bundle it in the next release of PhpStorm. Oliver supported this idea, and we came to an agreement to move the plugin repository under the JetBrains organization. Our team will continue the development, but the plugin will remain fully open source. This means that any developer can still send a pull request or just use the plugin for inspiration and as a basis for their own development, while the PhpStorm team will conduct code reviews. The plugin will continue to be distributed under the MIT license. We have also closed the existing issues on GitHub and suggest using YouTrack, our public issue tracker, instead. Since the plugin will be a part of the IDE, issues and bugs related to it should be reported in the same place as all others for clarity and consistency. A thank you to Oliver Nybroe We sincerely thank Oliver for his dedication and hard work on the Pest plugin and his contributions to the PHP community. His expertise and creativity have been instrumental in making Pest a beloved tool for PHP developers. We’re committed to building upon this foundation to provide developers with the best experience possible. You can expect regular updates and full-fledged support to simplify your testing processes. What’s coming for Pest in PhpStorm?
As we take over work on the Pest plugin, our primary goal is to ensure that PHP developers have access to a robust and seamlessly integrated solution for their testing needs. The PhpStorm team has already implemented a noticeable improvement, which is available in the latest version of the plugin. Reworked custom expectation support engine Reworking the engine allowed us to fix some technical issues with projects stuck on closing and thus streamline performance. On top of that, you can now go to a custom expectation declaration and find its usages as expected. Simply Cmd+Click on one or press Cmd+B. It’s also possible to perform the Rename refactoring for custom declarations, and PhpStorm will automatically rename all occurrences. This is already available in PhpStorm 2023.2.3 with the latest version of the Pest plugin. To enjoy these new improvements, download the latest plugin version or update to it. Starting with PhpStorm 2023.3, this will be supplied out of the box. As we embrace this transition, we are eager to hear your feedback. Your insights, comments, bug reports, and contributions are not only welcome but highly appreciated. Once again, thanks to Oliver, the Pest community, and PHP developers around the world for their invaluable contributions to this plugin.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00263.warc.gz
CC-MAIN-2023-50
3,941
29
http://oeis.org/A106808
code
The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation. Please make a donation to keep the OEIS running. We are now in our 56th year. In the past year we added 10000 new sequences and reached almost 9000 citations (which often say "discovered thanks to the OEIS"). These are primes with square digits and they have all been certified. No more terms up to 37000. Primality proof for the largest: PFGW Version 20041001.Win_Stable (v1.2 RC1b) [FFT v23.8] Primality testing 4440011*10^14463-1 [N+1, Brillhart-Lehmer-Selfridge] Running N+1 test using discriminant 3, base 1+sqrt(3) 4440011*10^14463-1 is prime! (268.8294s+0.0548s)
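The defining property ("primes with square digits") is easy to check with a short script. A minimal sketch, assuming the sequence lists primes whose every digit is a perfect square (0, 1, 4 or 9):

```python
def is_prime(n):
    """Simple trial division; fine for small terms."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def square_digit_primes(limit):
    """Primes below `limit` whose digits are all perfect squares (0, 1, 4, 9)."""
    return [n for n in range(2, limit)
            if set(str(n)) <= set("0149") and is_prime(n)]

print(square_digit_primes(200))  # → [11, 19, 41, 101, 109, 149, 191, 199]
```

The certified term quoted above fits the same pattern: 4440011*10^14463 - 1 is the digits 4440010 followed by a run of nines, all drawn from {0, 1, 4, 9}.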
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00498.warc.gz
CC-MAIN-2020-50
681
4
https://forums.adobe.com/thread/1937536
code
Yes, the raw image converter in LR does different things than the raw image converter in other software. So this is nothing to worry about. Because ACDSee applies different default settings. However, being a newcomer I'm a bit confused. You are talking about "raw image converter". But I didn't convert the image. I just view them as they come out of the camera. Lightroom is a raw converter, ACDsee is a raw converter. If the raw file wasn't converted then you wouldn't be able to look at it. Lightroom has its own default settings. And so does ACDsee. If you don't like Lightroom's default settings then change them and save new defaults.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892802.73/warc/CC-MAIN-20180123231023-20180124011023-00480.warc.gz
CC-MAIN-2018-05
640
4
https://community.atlassian.com/t5/Bitbucket-Pipelines-questions/Does-Bitbucket-Pipelines-support-self-hosted-runners/qaq-p/1315806
code
Would like to know if Bitbucket Pipelines supports self-hosted runners and/or a hybrid hosted and self-hosted runner setup. Bonus if integrating with AWS spot is possible. As far as I know as a non-user of Atlassian Bitbucket Server, it does not have the Pipelines plugin. So, I might be wrong believing that there is no runner in the scope of the options that Atlassian Bitbucket Server (or Data Center) gives; as far as I can tell the server documentation does not have it (I searched it for "pipeline/s"). Apart from Atlassian itself, there are a couple of independently available runners. Most that I know of are somewhat hack-ish (no bad feelings here) and mostly for development (which makes sense), which means not feature complete. And a shameless self-plug here, as I have one such utility on Github as well, which I normally extend to the level of features I need on my own. But I guess you've already looked around a bit and might have stumbled over these. What I've always missed is that Bitbucket never came on par with GitLab, which has offered the gitlab-runner command-line runner since more or less forever, or with other more CI/CD-specific offerings like Codeship (jet), which also always offered a very well integrated command-line utility, or CircleCI (CircleCI CLI), which I never used personally, but I've heard the utility does run the jobs locally. As far as I know, Bitbucket Pipelines is available only for Bitbucket Cloud and doesn't support any sort of runners; you can buy only "minutes". We're working on a self-hosted alternative that provides native integration with the Bitbucket interface and allows you to run any number of runners — snake-ci.com Also, we're offering private beta builds, drop me a line if you have any questions: [email protected]
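For context on what the cloud product does offer: the build definition lives in a `bitbucket-pipelines.yml` file at the repository root and runs on Atlassian's hosted infrastructure (the "minutes" mentioned above). A minimal sketch — the image and script are placeholders, not part of the original question:

```yaml
image: atlassian/default-image:2   # build container used by the hosted runner

pipelines:
  default:                 # runs on every push unless a more specific trigger matches
    - step:
        name: Build and test
        script:
          - ./build.sh     # placeholder for your actual build command
```

Every step here consumes hosted build minutes; there is no field in this schema for pointing a step at your own machine, which is what the question is asking about.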
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201826.20/warc/CC-MAIN-20200921143722-20200921173722-00183.warc.gz
CC-MAIN-2020-40
2,319
13
https://jobs.hyperisland.com/job/814394-biz-dev-summer-interns-digilab-digilab
code
Biz Dev Summer Interns @ digiLab Are you looking to break into the world of tech startups? Want to accelerate your career with an internship developing the sales and marketing strategy for a Machine Learning scale-up? Your internship could also turn into a full-time job offer, as we initiate our graduate hiring programme. digiLab is an Exeter-based start-up developing cutting-edge AI software to tackle the biggest and most exciting challenges of today: nuclear fusion, air traffic control, and environmental sustainability. So far, our customers have included the UKAEA, NATS, Jacobs, Airbus and South West Water. We are a friendly and inclusive team, and we are particularly proud of our: - Supportive and collaborative work environment - Commitment to designing and delivering quality products - Passion for developing and sharing new skills - Sociable working culture and lively sense of humour As an intern, you’ll be working with the founding team on live projects from day one: we believe in giving you the chance to do things that matter, every day – except Fridays (we’re a four-day week company!). We have no formal requirements. The most important qualities that you bring to the team will be your appetite for collaborative problem-solving, your desire to learn new skills and your ability to make a meaningful impact! You can choose whether to work remotely or relocate to Exeter for the internship - we're flexible! :) We offer £20k pro-rata for the internship. If you would like, we will also offer free accommodation in Exeter for the duration of your internship. This opportunity closes on the 31st March, but we are recruiting on a rolling basis, so please apply early :) The internship will run for 8 weeks, from the start of July to the end of August 2023. Precise dates to be finalised. Your application has been successfully submitted. Trusted Machine Learning for Safety-Critical Engineering.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00860.warc.gz
CC-MAIN-2023-06
1,924
17
https://dynamicsuser.net/ax/b/hatlevik/posts/d365f-o-community-driven-engineering
code
I have previously blogged about the importance of reporting new ideas, issues and bugs to Microsoft, and also why the community will benefit from sharing. I have often found that experienced engineers already have the solution available and are more than willing to give it away for free to get the fixed-up code into the standard solution. But the formalized support path requires time and energy, and remember that not all Microsoft support consultants are engineers you can discuss X++ topics with. So how can the process of contributing to the D365 community be made easier? Did you know that Microsoft has a program for Community Driven Engineering with Dynamics 365 F&O? This covers not only bugs, but also new features. Community driven engineering (CDE) is a Microsoft effort to make external engineers more efficient at providing recommended bug fixes and minor features to Microsoft, as well as to make Microsoft more efficient in accepting fixes from the community. If the fix is accepted, it will be merged into the main Dynamics 365 F&O branch. I have tried the program and reported a fix for auto-report as finished; the fix was accepted, and hopefully in the near future the entire community can benefit from it. If you have the right skills and the willingness to share and give away your fixes (or features) you can sign up at https://aka.ms/Communitydrivenengineering. You need to be accepted into the program, and your user must be whitelisted before you can access it. The CDE also has a private Yammer group that you get access to when accepted. But I must warn you: this program is meant for the most experienced and technical people in our community, who are deep into X++ and Azure DevOps. You must have approval from CxO level in your organization that you can share code with Microsoft. (Lawyer stuff) Here is the overall flow for the external engineer: The following text is copied from the onboarding documentation of the CDE.
It takes approximately one hour to get started with CDE, the majority of which is the initial build time. At this point you can start development (in the SYS layer, actually). Changes submitted by the community are committed to the same REL branch matching the version on the dev VM. Once the pull request (PR) is completed, that signals that Microsoft has officially accepted the change and it will show up in a future official release, usually the next monthly release (depending on what day of the month the release closes). The change will only enter the master branch of msdyncde through a future official release. Syncing to the tip of a REL branch will pull in other community changes submitted from that version. Any feedback from Microsoft reviewers (or other community reviewers) will show up in the PR. Changes can be made to the PR by editing in Visual Studio, and doing git add / commit / push again. Once Microsoft has signed off, all comments have been resolved, a work item is linked, and all other policies have been met, then you can click Complete to complete the pull request. When a PR is completed, that is official acceptance by Microsoft that the change will become part of a future official release, usually the next monthly release. In addition to having an accelerated approach to get fixes into the main branch, participants also have some more benefits. You will have access to the latest and greatest code changes through all code branches that Microsoft makes available. You can search through the code and see if there are code changes that affect extensions or code that is local to your installations. You can also see how the Microsoft code is evolving and how improvements are made available in the standard application. You will also gradually build a very valuable network with some of the best developers in the world, where you will discuss technical topics with the actual people creating the world’s best ERP system.
One final joke for those considering going into this program: Git and sex are a lot alike. Both involve a lot of committing, pushing and pulling. Just don’t git push --force
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107909746.93/warc/CC-MAIN-20201030063319-20201030093319-00596.warc.gz
CC-MAIN-2020-45
4,097
12
https://www.staffrostersolutions.com/complexity.php
code
The staff rostering problem is classified according to mathematical complexity theory as NP-Hard. For problems within this complexity group there is no known algorithm for finding provably optimal solutions within practical time limits. Theoretically there could be instances of the problem which the best known algorithms may take as long to solve as, say, the estimated age of the universe! In fact, whether or not an algorithm exists which can solve these problems in practical (polynomial) computation times is one of the major open questions in mathematics and computer science (P vs. NP). One of the reasons why these problems are so difficult to solve is their exponentially large search spaces (a search space is all the possible solutions for a particular instance). Consider a problem instance with 19 employees, 3 shift types and a planning horizon of 28 days. Even this relatively small instance has a search space of 10^320 (one followed by 320 zeros). To put that into perspective, the estimated age of the universe is approximately a mere 10^18 seconds and planet Earth is thought to contain around 10^50 atoms. Both are tiny numbers in comparison to an average search space size. Fortunately however, there are algorithms which can find optimal and near optimal solutions for these challenging problems very quickly (often within a few seconds). They just cannot guarantee an optimal solution on every possible instance. Also, even near optimal solutions are almost always much better than those produced by expert human planners. Different algorithms solve the problem in different ways but they all need to be able to differentiate between good rosters and bad rosters. The user provides the algorithms with the information to do this by modelling the problem using constraints and objectives.
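The search-space figure quoted above is easy to reproduce. Assuming each of the 19 employees has 4 options per day (one of the 3 shift types, or a day off — the day-off option is an assumption, not stated in the text), the instance has 4^(19×28) possible rosters:

```python
import math

employees, days = 19, 28
options_per_day = 3 + 1  # 3 shift types plus a day off (assumed)

# Total number of distinct rosters for the instance
search_space = options_per_day ** (employees * days)

# Order of magnitude: how many zeros follow the leading 1
print(math.floor(math.log10(search_space)))  # → 320, i.e. roughly 10^320
```

Python's arbitrary-precision integers make this exact; the instance really does have on the order of 10^320 candidate solutions.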
For example, there are usually laws and regulations related to night shifts, so the model may include a constraint which imposes a certain number of days of rest after a series of night shifts. There may be constraints to ensure each employee gets their requested days off. There will be constraints to ensure there are the correct number of employees working during each shift, and so on. In most staff rostering problems, though, there are so many conflicting constraints that if they all had to be satisfied then a feasible solution would not exist. Therefore, in practice, some of the constraints are relaxed by the user. These are called soft constraints or objectives. These soft constraints may also be given different priorities by using weights, which are simply number values: the more important the constraint, the higher the number. Using this information the algorithm can now differentiate between good and bad solutions. It does this by counting the number of violations of each soft constraint within a solution, multiplying these counts by each constraint's weight and then adding it all up. This gives a score for each solution which represents its quality. The lower the score the better the roster, as a low score indicates that most of the important constraints have been satisfied. A solution with a score of zero has no violations at all. As mentioned though, those perfect, zero-score rosters rarely exist due to the over-constrained nature of the problem. Instead the algorithms try to produce solutions with a score as close to zero as possible. For more information on creating the model and the variety of constraints that can be used see the XML format documentation and the modelling FAQs.
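The weighted scoring rule described above is just a weighted sum of violation counts. A minimal sketch — the constraint names and weight values are invented for illustration:

```python
def roster_score(violations, weights):
    """Weighted penalty for a candidate roster: lower is better, 0 is perfect."""
    return sum(weights[name] * count for name, count in violations.items())

# Hypothetical soft constraints; higher weight = more important
weights = {
    "requested_day_off": 10,     # employee preference
    "rest_after_nights": 100,    # working regulation
    "cover_shortfall": 1000,     # not enough staff on a shift
}

# Violation counts found in one candidate solution
violations = {
    "requested_day_off": 2,
    "rest_after_nights": 0,
    "cover_shortfall": 1,
}

print(roster_score(violations, weights))  # → 1020 (2*10 + 0*100 + 1*1000)
```

An optimizer compares candidate rosters by this single number, so raising a constraint's weight makes the search prefer solutions that satisfy it, exactly as the paragraph describes.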
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057417.10/warc/CC-MAIN-20210923044248-20210923074248-00526.warc.gz
CC-MAIN-2021-39
3,534
27
https://medium.com/radiant-earth-insights/join-the-cloud-native-geospatial-outreach-day-and-sprint-286f6fd553c3?source=post_internal_links---------1-------------------------------
code
Join the ‘Cloud Native Geospatial’ Outreach Day and Sprint On September 8th we will be continuing SpatioTemporal Asset Catalog (STAC) Sprint #6, but we decided to expand its scope to include more of the ‘Cloud Native Geospatial’ ecosystem. The core idea of Cloud Native Geospatial is articulated in this ‘blog series’, with the first post positing the question: ‘what would the geospatial world look like if we built everything from the ground up on the cloud?’. The second post introduced the Cloud Optimized GeoTIFF (COG), and it has since emerged as a default format for anyone doing geospatial on the cloud. So the ‘Outreach Day’ will aim to introduce STAC, COG, and other emerging cloud-native geospatial formats and tools to new audiences. Our hope is to make the day accessible even to those who do not have deep geospatial or software development knowledge. We want to welcome new people into our community, as we believe that having all geospatial data in the world on the cloud, with tools that can process it, can help us tackle the largest problems that face us. But the technology won’t have a real impact on our planet unless we enable a diversity of users to learn it, contribute to it, and use it in their work. The full agenda will be announced soon, but our plan is to have a number of ‘Introductory Sessions’ that run in various timezones through the whole day of September 8th. These will enable people who want to learn more about Cloud Native Geospatial topics and tools to learn and ask questions in smaller groups. We’ll even have some sessions that introduce the basics of geospatial and remote sensing, so please join us even if everything is new! We’ll have many people who can welcome you and answer your questions. We will also try to organize a ‘virtual job fair’ of some sort since there are many organizations looking to hire people to work in Cloud Native Geospatial. 
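For a sense of what STAC looks like in practice: an Item is plain JSON with a small set of required fields. A minimal sketch — the id, coordinates, datetime and asset URL are made-up placeholders, but the field names and the COG media type follow the STAC conventions:

```python
import json

# Minimal STAC Item pointing at a (hypothetical) Cloud Optimized GeoTIFF
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-scene-001",
    "geometry": {"type": "Point", "coordinates": [-105.0, 40.0]},
    "bbox": [-105.0, 40.0, -105.0, 40.0],
    "properties": {"datetime": "2020-09-08T00:00:00Z"},
    "links": [],
    "assets": {
        "visual": {
            "href": "https://example.com/scene-001.tif",  # placeholder COG URL
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
}

# Items are just JSON documents, so a catalog is nothing more exotic than this
print(json.dumps(item, indent=2)[:60])
```

Because an Item is only JSON, converting a dataset to STAC during the sprint mostly means generating documents like this one for each scene and linking them into a catalog.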
Starting at 13:30 UTC (in the ‘middle’ of the day globally) We’ll do a couple of hours of ‘core’ sessions that everyone should try to join. And we will aim to record and post these as soon as possible for those who can’t join live. This will include a ‘State of Cloud Native Geospatial’ talk, with brief overviews of STAC & COG, along with a variety of ‘lightning talks’ from various community members. We’ll hear about companies large and small embracing Cloud Native Geospatial, dive into various datasets that are becoming available in the formats, and survey the ecosystem of tools that supports working with them. After the core sessions, we’ll continue the beginner sessions. If you are interested in presenting a lightning talk or a beginner session please submit your ideas. After the talks, we’ll also hold the kick-off for the ‘Data Sprint’. This sprint runs for a week, and the goal is for everyone to team up and convert as much interesting data as possible to STAC and COG formats, and to do projects that show off the data. There will be a wide variety of tasks that people can do to help out — no matter what skills you come with we’ll be able to find a way for you to help out! And our hope is that people can take what they learn in the beginner sessions and actually apply it during the sprint, since the best way to learn is by actually doing. We’re also going to run a Data Labeling contest as part of the sprint. This should be a great task for those who are new to geospatial, we’ll use Azavea’s Groundwork to provide a nice user interface to trace geospatial imagery, automatically creating geospatial labeled data for Machine Learning applications. The results of everyone’s work during the sprint will be a new STAC + COG dataset that we release to the world on Radiant Earth’s MLHub. We’ve had really amazing sponsorship, with Planet and AI for Earth at Microsoft leading the charge, and Arturo also providing substantial support. 
Digital Earth Africa and the World Bank are first-time sponsors and are helping push us to be globally inclusive in our outreach. Maxar, SparkGeo, Element84, Amazon Web Services, Pixel8, CosmiQ Works, TileDB, and Azavea round out our great sponsors. Normally we’d use the sponsorship money to fund travel grants to welcome new people to our community. But with a purely virtual event our plan is to give out a variety of prizes to recognize the work done and help encourage new contributions. We will post the list of prizes soon, but there will be several that are available to groups less represented in STAC+COG today. There will be some nice prizes, including an openly licensed Planet SkySat 50cm image tasked to the location of your choosing, and $15,000 in Azure credits for top projects focused on sustainability. Anyone who makes a meaningful contribution during the sprint will get a t-shirt (with some limit if we get an unexpected amount of contribution), and there will be at least one top prize of $2500 or above. If you are interested in attending please sign up on this Google Form and you’ll receive email updates on all the details of the event. Thanks! UPDATE: More information on how we are working to welcome new people and prize specifics is available here.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604794.68/warc/CC-MAIN-20220526100301-20220526130301-00538.warc.gz
CC-MAIN-2022-21
5,231
11
https://www.samanthaludlow.co.uk/post/vtalk
code
VTALK with Marco Bertozzi – Vice President for Europe (EMEA), Spotify People were able to submit their questions for Marco to answer. Here is a sample of the questions... Marco Bertozzi gave advice on future careers. In the photos below are copies of the notes I took from the talk. The talk was interesting to a degree, but I did not feel it was appropriate to my circumstances.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662625600.87/warc/CC-MAIN-20220526193923-20220526223923-00759.warc.gz
CC-MAIN-2022-21
374
5
http://bunakepi.tumblr.com/how-does-a-variable-pitch-propeller-work
code
Hamilton-Standard soon began selling its first two-position variable-pitch propellers to engine. The adjustable propeller’s pitch can be changed only by a. I’m trying to figure out how to properly use the variable pitch control on a plane with this feature. He then continued working on the constant-speed variable-pitch propeller. Planes with variable-pitch propellers (including World-War fighter planes) have another useful. A CPP does not produce more or less wear or stress on the propeller shaft or. There are two types of variable-pitch propellers: adjustable and controllable. Learn about controllable or variable pitch propellers. But nobody could make it work until 1910, when the first variable-pitch propellers were used on some airships. I thought you had actually read my post - I mean the one you quoted from. A controllable pitch propeller (CPP) or variable pitch propeller is a type of propeller with blades that can. To learn how to change power with a variable pitch propeller. Q: Name the components of a constant speed unit - how does it work?
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657138017.47/warc/CC-MAIN-20140914011218-00211-ip-10-234-18-248.ec2.internal.warc.gz
CC-MAIN-2014-41
2,340
5
http://ose.vn/tin-tuc/page/77/
code
Jungle Position Evaluation information: the main task for the participants is to look for the Pyramids, which lie deeper in the forest. In this game, up to 112 may be activated, with a maximum multiplier of 10 times the bet. The Idol icon acts as the wild, and it can triple the payout of any winning combination it appears in. Instant Gaming Casinos with no deposit. Best payout: 50,000 times the bet. Details
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00489.warc.gz
CC-MAIN-2021-49
470
4
http://www.lqliquidhealth.com/ambassadors
code
WORK WITH LQ! Love LQ products? Want to work with us as an influencer? ARE YOU AN INFLUENCER? We work with lots of influencers! Any size, any platform, it doesn't matter to us. You just have to believe in the brand. We prefer to work with influencers who fit into what LQ is all about. If you are looking for a quick paycheck, or a load of free stuff, then partnering with us is probably not for you (sorry). We do give away free products, and on occasion we also pay our influencers, but this tends not to be for a single post, or on the first time we talk to them. Our influencers love the brand (and the results) and they share their experiences of using LQ over a long period of time. If you think that this is something you would like to do, then why not get in touch! IF YOU ARE, AND YOU LOVE LQ, WE WOULD LOVE TO HEAR FROM YOU... JOIN OUR AMAZING INFLUENCERS NOW!
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00555.warc.gz
CC-MAIN-2020-05
867
8
http://sourcedigit.com/20204-how-to-listen-internet-radio-on-ubuntu-16-04/
code
Internet Streaming Radio for Linux Ubuntu. Install Gradio to listen to Internet radio on Ubuntu. Gradio is a GTK3 app for finding and listening to Internet radio stations on Linux Ubuntu systems. It can easily play Internet radio streams online.
- Search radio stations (worldwide)
- Add them to your library
- Vote for radio stations
- Visit their homepage
- MPRIS support
- Showing a cover for a station
Install Gradio On Ubuntu
For Ubuntu-based distros (16.04) run the following commands to install Gradio via PPA:
sudo add-apt-repository ppa:haecker-felix/gradio-daily
sudo apt update
sudo apt install gradio
Once installed, open the Gradio app from the Ubuntu Dash or the terminal. For Ubuntu-based distros (16.04) you can add the daily PPA:
deb http://ppa.launchpad.net/haecker-felix/gradio-daily/ubuntu xenial main
deb-src http://ppa.launchpad.net/haecker-felix/gradio-daily/ubuntu xenial main
sudo apt-get update
sudo apt-get install gradio
If you install from source you must have the original compiled source to uninstall. cmake does not provide a make uninstall, but it lists all the files installed on the system in install_manifest.txt. You can delete each file listed separately or run:
sudo xargs rm < install_manifest.txt
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812665.41/warc/CC-MAIN-20180219131951-20180219151951-00600.warc.gz
CC-MAIN-2018-09
1,256
22
http://sto-forum.perfectworld.com/showthread.php?t=386361
code
I added a list of acknowledgements at the end so people could see what tutorials and software I used to create them, but very basically the steps I take are: 1. Use the inbuilt STO demorecord feature to record segments of gameplay. 2. Use demorecord to playback the gameplay, and use its camera sweeps. 3. Use fraps to record the playback to video files. 4. Import the video files into Windows Live Movie Maker to organise and trim them (and add soundtrack). 5. Upload to YouTube.
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657041.90/warc/CC-MAIN-20150417045737-00104-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
480
6
https://www.epfl.ch/labs/la/page-52970-en-html/page-52971-en-html/page-53006-en-html/
code
- Robust Control of Plants with Parametric Uncertainties
- Robust Controller Synthesis for Linear Systems with Parameter Uncertainty Using Convex Optimization
- Multi-model Robust Control Design
KUNZE Marc, KARIMI Alireza, LONGCHAMP Roland
The high-precision position control of linear direct-drive motors requires modeling the system, including the friction nonlinearity and the reluctance and cogging forces, together with the model parameter uncertainties. Besides, the model parameters may change at different operating points. In order to obtain a stable, high-performance control system, several robust control methods will be considered: 1. A discrete-time identified model is considered as the nominal model. All nonlinearities and parameter variations are treated as model uncertainties and disturbances in the frequency domain. Then a robust controller is designed based on the principle of loop shaping. 2. Several models are identified at different operating points, and a robust multi-model approach is used to stabilize all models. Another approach is to use a different controller for each model. 3. Each problem is treated locally. This means that the friction nonlinearity is compensated with feedforward or adaptive control. The cogging and reluctance forces, which are position dependent, are compensated using the ILC approach or a look-up table. Finally, the variation of the load inertia is compensated with a robust controller with respect to parametric uncertainty.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540488620.24/warc/CC-MAIN-20191206122529-20191206150529-00127.warc.gz
CC-MAIN-2019-51
1,483
8
http://webmademovies.lighthouseapp.com/projects/63272/tickets/802
code
merge conflict causing regression
This commit right here: Reverted the fix that says + src + "?" + query +
Popcorn.js is an HTML5 video framework that lets you bring elements of the web into your videos. Popcorn.js is a project of Web Made Movies, Mozilla's Open Video Lab.
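The reverted line concatenated `src + "?" + query`, which yields a malformed URL whenever `src` already carries a query string. Here is a hedged sketch of the pitfall and a safer merge, written in Python rather than the project's JavaScript, with hypothetical URLs:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def add_query(src, query):
    """Merge extra query parameters into a URL without producing the
    double-'?' URLs that naive string concatenation creates."""
    parts = urlparse(src)
    params = parse_qsl(parts.query) + parse_qsl(query)
    return urlunparse(parts._replace(query=urlencode(params)))

# Naive concatenation breaks when the source URL already has a query:
print("http://example.com/video.mp4?t=10" + "?" + "cache=1")
# -> http://example.com/video.mp4?t=10?cache=1   (two '?', malformed)

print(add_query("http://example.com/video.mp4?t=10", "cache=1"))
# -> http://example.com/video.mp4?t=10&cache=1
```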
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00510-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
441
9
https://advance.sagepub.com/users/719893/articles/704791-roma-seminomadic-tradition-between-social-inclusion-and-the-protection-of-natural-resources-a-case-study-of-the-toplica-district-serbia
code
Roma seminomadic tradition between social inclusion and the protection of natural resources: a case study of the Toplica District (Serbia)
In this research, we try to connect sociology with GIS (Geographical Information Science). The main problem in South-East Serbia is the low integration of the Roma minority group into society. In this context, a stronger collective ecological consciousness may lead to better inclusion results.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817095.3/warc/CC-MAIN-20240416124708-20240416154708-00637.warc.gz
CC-MAIN-2024-18
417
6
http://mmgn.com/Users/Codeman34
code
Codeman34 has been a member of MMGN since Jun 01 2009. PSN ID: CodemanPSX Steam ID: SuperCodeman Wii Friendcode: 000000100000011000000100001111 (Not real) DS Friendcode: 12 13 1 15 (Also Not real) Hey I'm Codeman34 from MyWii I'm on Little Big Planet 1 and 2 and also Cod:Black ops My Psn id is... Want to add me on 3DS? Just PM me ;) 1GB GeForce GT 440 Graphics Card Intel Core i5-2500 CPU @ 3.30 GHz
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00063-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
439
12
http://www.rpi.edu/datawarehouse/dw-req-decisionworks.html
code
To ensure a requirements-driven approach to building the Rensselaer data warehouse, the Division of the Chief Information Officer (DotCIO) contracted with DecisionWorks Consulting, Inc. to conduct an Institute-wide business requirements analysis. DecisionWorks consultants visited Rensselaer for a 10-day period from September 10th to September 19th, 2001. During this analysis period, 31 interviews were conducted with 49 individuals representing a wide variety of portfolios across Rensselaer, as well as a diverse vertical span of the Institute. Interviews focused on the vision, goals, performance plan objectives, and supporting information and analysis requirements of the participants regarding their data warehousing needs. The fundamental goals of this effort were to:
- identify and summarize data warehouse requirements
- facilitate the prioritization of warehouse deliverables
- determine success criteria for the data warehouse project
- ensure that we are properly focused before expending resources and making significant investments.
requirements analysis deliverables
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00450.warc.gz
CC-MAIN-2018-05
1,082
9
http://vegananaliz.xyz/archives/22884
code
Novel–Release that Witch–Release that Witch 1455 Pioneer roasted therapeutic “Hus.h.!.+” Balshan placed a finger to her mouth. “Should you listen to that appear?” Charms quit his respiratory and targeted. Now, he read a faint hum. It absolutely was small and turbid and sounded such as a whistle combined with the flapping in the wildlife. He looked over Balshan’s phrase and suddenly sensed his self confidence wane. “Endure, you’ve attained His Majesty?” “Uhm… just what are they?” Just when he thought that she obtained halted replying to, Balshan whispered, “My capacity will be to kill.” After hacking and coughing double, Charms arrived at into his pocket. “Anyway….We have two passes to the new enjoy tonight.” Charms got two actions lower back. “Then why aren’t you fighting against the monsters?” Release that Witch “No, the sound is all the more faraway” Her phrase has become significant. He viewed Balshan’s term and suddenly observed his self-assurance wane. “Hold up, you’ve attained His Majesty?” Charms uncovered himself in a dilemma and could not bring to mind anything useful for a purely destructive power aside from conflict, even if racking his minds. His Majesty essential been vexed in those days. But Charms noticed which it was not Master Roland’s fault, and admitting this time was far a whole lot worse than him not honoring a guarantee. He viewed Balshan’s manifestation and suddenly felt his trust wane. “Endure, you’ve satisfied His Majesty?” Loving In Silver: Hot, Wild And Crazy Each checked up and spotted 1000s of birds capturing over their heads. It turned out Charms’ first knowledge in witnessing a flock of migratory birds that resembled black clouds which blotted your natural light. “Yep.” Balshan threw the black color lump to the ground. “The Witch Union arranges do the job in accordance with expertise initially ahead of seeking their individual opinions. Then both sides may come to the binding agreement. Dusk is the ideal illustration. 
Individuals with skills without recognizable use will likely be pa.s.sed to His Majesty to control in person. In line with him, a variety of capabilities can be useful to Graycastle’s creation, and that we now have no pointless capabilities,” she paused for a second, “I belong to the latter.” “Yep.” Balshan threw the black lump to the floor. “The Witch Union arranges operate based on skills primary right before demanding their personal ideas. Then each side will come in an agreement. Dusk is the best illustration. People who have expertise without any evident use will likely be pa.s.sed to His Majesty to take care of in person. In line with him, all kinds of proficiency will be appropriate to Graycastle’s improvement, and there presently exists no ineffective skills,” she paused for just a moment, “I participate in the second.” Simply speaking, she was great. Not wanting him to confess his emotions and thoughts, Balshan was dumbstruck for a moment. “Wh… what lovable, that’s not the point! She is a Witch, and you ought to know what a Witch cannot do!” “I only discover their wings flapping what other appears are there any?” best area to live in rome italy “The Witch Union doesn’t accept from it. They are accountable for the delegation of labor on the Witches, but my ability demands bodily speak to to become executed. They recognize that the potential risks are way too terrific and also there are incredibly handful of spots to me to complete my ability. Eventually, they allowed me to pick out some tips i want to do, in addition to combat.” Balshan laughed outside in self-mockery. “So verbal assurances usually do not make sure anything… along with his Majesty Roland is no exception.” Charms found himself in a very challenge and can even not consider anything else ideal for a purely destructive potential apart from conflict, even though racking his brains. His Majesty should have been vexed in the past. 
But Charms felt it had not been Queen Roland’s problem, and admitting this time was far more serious than him not keeping a assure. Under the Mendips “Whichever.” Charms shrugged. “Proper, in the event you didn’t mention that you had been a Witch, I might have overlooked it. What electrical power are there? So why do I actually feel that you are currently purely making use of your power to transport the equipment?” It was subsequently not only her overall look not surprisingly, her facial characteristics ended up extremely lovely, which was a common function from the Witches. It decided to go the same for Balshan, who constantly brought him the chilly treatment method. It turned out extremely hard to work with the idea of ugly to refer to them, a lot of that even her serious deal with actually… covered some kind of distinctive and specific style. Charms had taken two methods back. “Then why aren’t you battling with the monsters?” “So what.” Charms stuck his torso out and disclosed the ‘war hero’ badge worn out on his s.h.i.+rt. “We have an elder brother, so my dad wouldn’t head regardless if I don’t possess any youngsters! And this is a badge personally awarded by His Majesty it happens to be definitely more than enough to ensure her long term livelihood, what exactly other doubts have you?” “However I have a thing on tonight, and doubt I will make it…” He hesitated. “Why don’t you like the play with Dusk it’ll definitely be better than squandering the tickets…” The two searched up and found several thousand birds capturing over their heads. It was actually Charms’ primary knowledge of witnessing a flock of migratory wild birds that resembled dimly lit clouds which blotted out the natural light. Dusk was unlike some of the other females he had come across and was extremely special. If everyone else was grayscale, she will be green-orange, exactly like her small and curly red hair. It is going to just generate a lot more cold and dangerous glares. 
Before she could solution him, the skies suddenly echoed by helping cover their crackling looks. Novel–Release that Witch–Release that Witch
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00221.warc.gz
CC-MAIN-2022-40
6,212
35
https://frederickgrove.com/products/tormilated-quartz-python
code
Tourmalated Quartz Python - Hand carved - Approximate weight: 10.5g - 12x10 Tourmalated Quartz Hand made in London, cast from 925 Sterling. Each piece is hand made, sized and finished to order. Please allow up to 7 days for your piece to be made. Please refer to our size guide for help with sizing.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202471.4/warc/CC-MAIN-20190320210433-20190320232004-00037.warc.gz
CC-MAIN-2019-13
299
8
https://www.forgov.qld.gov.au/information-and-communication-technology/qgea-policies-standards-and-guidelines/data-encryption-standard
code
Data encryption standard
Final | June 2019 | v1.0.2 | OFFICIAL - Public | QGCDG
The Queensland Government uses a range of information and communications technology systems to process, store and transmit information. The Queensland Government is responsible for ensuring it applies adequate security for this information. The Data encryption standard outlines the minimum requirements for encryption and management of encrypted, Queensland Government owned data (in use, in transit, and at rest). The Data encryption standard is enforced by the Information security policy (IS18:2018) requirement 3: Agencies must meet minimum security requirements, with all information transmitted over data communication networks secured in line with the Data encryption standard. The Data encryption standard corresponds to the ISO/IEC 27001:2013 control domain of cryptography (A.10). Conformance with ISO 27001 requires consideration of the development and implementation of policies on cryptographic controls and a policy on cryptographic key management where appropriate. Agencies must:
- implement policy on the use of encryption, cryptographic controls, and key management
- implement controls at least equivalent to those outlined in the appendix A.1 "Required Controls" of the Data encryption standard.
This standard provides direction and processes for choosing and implementing encryption for data-in-transit, data-in-use, and data-at-rest. The standard also sets the minimum required standard for encryption of Queensland Government data. The Data encryption standard is mandated through the Information security policy (IS18:2018). For further information on applicability see the Information security policy (IS18:2018). By design, this standard does not provide specific guidance for handling national security information, classified material or systems that are assessed to have confidentiality requirements above PROTECTED.
Where an agency has cause to handle such material/systems, it should refer to the Australian Government Protective Security Policy Framework (PSPF) and the Security and Counter-Terrorism Group in Queensland Police Service. Telephone 07 3364 4549 or email [email protected]. For more details on information security classification, please refer to the Queensland Government Information Security Classification Framework (QGISCF). The Data encryption standard is intended for use by:
- network and security architects, project managers, information security professionals, and those responsible for Queensland Government data and information
- third-party service providers developing or providing systems and services that will be storing and managing data/information on behalf of the Queensland Government.
Readers should be familiar with the concepts and workings of the QGISCF. The Data encryption standard supersedes the Network Transmission Security Assurance Framework (NTSAF). References to the NTSAF in other QGEA documents should be taken to refer to the Data encryption standard. Data can exist in various states or locations throughout its life-cycle. The following terms and definitions have been used within this document to describe the state or location of data:
- Data-at-rest: the stored location of data, be it on a storage device, server or other storage system.
- Data-in-transit: data that is currently being transmitted between locations.
- Data-in-use: data which is in use on a client device or session.
The Data encryption standard has been designed and written to replace the NTSAF. This document has changed from focusing primarily on the security of network transmission.
It now covers the security of data and information in all its forms, for the following reasons:
- to remove the minimum-security assurance levels applied to networking technologies, encouraging agencies to independently risk assess technologies
- to focus the document on the topics of encryption, cryptography, and key management, removing extraneous topics
- to simplify implementation and understanding of the standard
- no other industry standards or frameworks require controls to the same detail as the NTSAF on the topic of network transmission security
- to provide clearer mapping to the ISO/IEC 27001:2013 control domains to assist with implementation of agency ISMSs
- to align with the Australian Government's minimum encryption control sets and support information sharing
- to expand the scope of the standard to incorporate data in its various states.
Overview of use
This standard is intended to be used to:
- assist agencies in developing and implementing policies on encryption, cryptographic controls, and key management
- determine appropriate encryption requirements considering the security classification of information and data
- ensure that the risk of data security being breached is effectively reduced through the appropriate implementation of cryptographic controls
- identify acceptable configurations and supplementary controls which must or should be applied to cryptographic algorithms and protocols when being implemented.
Each of the above is explained in more detail in the following sections. Dot point 4 is further discussed in the control set sections Cryptographic algorithms and Cryptographic protocols.
Assist agencies in developing and implementing policies on encryption, cryptographic controls, and key management
In order to conform with ISO 27001 requirements, agencies must consider and, where appropriate, develop and implement a policy on the use of cryptographic controls for the protection of information.
The Data encryption standard enforces that agencies must implement policy on the use of encryption, cryptographic controls, and key management. The Data encryption standard should be used as a basis for department and agency policies regarding encryption, cryptographic controls, and key management. Iterating upon the minimum requirements and controls described in the control sets to align with internal departmental requirements should effectively fulfil the cryptography policy requirements of ISO 27001. Agencies may also consider reviewing National Institute of Standards and Technology (NIST) SP 800-53, ISO/IEC 27002, ISO/IEC 11770, and the Australian Cyber Security Centre (ACSC) Information Security Manual (ISM) when developing their policies and supplemental controls.
Determine appropriate encryption requirements considering the security classification of data
The encryption requirements for data are determined by their confidentiality security classifications, assessed against the QGISCF. If the data does not have a security classification, refer to the QGISCF for details on classification. When considering the security classification of a system, it is important to consider the highest level of confidentiality security classified information being processed. When processing PROTECTED information or data, the cryptographic requirements greatly increase. Agencies must consider the integrity requirements of the information/data/information system when implementing controls. The Data encryption standard only enforces minimum control sets based on the confidentiality classification; however, encryption can also provide additional assurance when handling information / data / information systems with higher integrity requirements.
Where agencies are expected to align controls with the current ACSC ISM:
- ACSC ISM OFFICIAL (O) controls are applied to QGISCF classified information/data at the OFFICIAL and SENSITIVE levels
- ACSC ISM PROTECTED (P) controls are applied to QGISCF classified information/data at the PROTECTED levels.
Ensure the risk of data security being breached is effectively reduced through the appropriate implementation of cryptographic controls
Based on the security classifications derived from the QGISCF confidentiality assessment, agencies must:
- implement the mandatory controls and requirements described in Control sets
- consider all recommended controls described with a should statement
- record the gaps where a mandatory or recommended control has not been implemented in the agency risk register.
Agencies should also:
- compare the mandatory requirements of the Data encryption standard against their currently implemented control sets to identify any control gaps in their existing systems
- develop and implement additional supplementary cryptographic controls where deemed necessary to provide a higher level of assurance or mitigate an existing risk
- keep a comprehensive identification of the controls that have been implemented in an auditable format, reviewed regularly through the ISMS process
- select the appropriate means of securing data with consideration of a range of factors including cost, ease of use, and appropriateness to the business.
Cryptography is the practice and study of techniques for secure communication in the presence of third parties, including adversaries. The application of cryptographic processes is designed to provide confidentiality, integrity, authentication, and non-repudiation of information and data. Cryptography is used to encrypt information and data to provide additional assurances to their security.
Organisations that use encryption for data at rest, or in transit, are not reducing the sensitivity or classification of the information. However, when information is encrypted, the consequences of the encrypted information being accessed by unauthorised parties is considered lower. This enables reduction in handling, storage and transmission requirements. There are two key aspects of cryptography: the cryptographic algorithm, and the cryptographic protocol. The cryptographic algorithm is the mathematical means for concealing data and verifying integrity, whereas the cryptographic protocol is a transmission mechanism that applies additional security to data transmission using cryptography. Cryptographic algorithms can be used to secure data during storage and, when used within an appropriate network protocol, can provide a trusted communications channel through untrusted communication paths. A cryptographic algorithm creates a cipher by performing a set of mathematical functions using keys, which are then used to encrypt data. There are several categories of cryptographic algorithms used for information security. The most widely adopted forms are asymmetric (AKA public-key), symmetric-key (AKA secret key), hash and key-hash message authentication code (HMAC) cryptography. To ensure security, cryptographic algorithms that have been subjected to rigorous testing by cryptographers in the international community should be selected over lesser known algorithms. The fundamental security principle for selecting cryptographic algorithms is to only use algorithms where the security is given through the computational difficulty of the algorithm. Cryptographic algorithms that rely on the secrecy of the algorithm itself to provide security are considered vulnerable to having their secret revealed, stolen or inadvertently discovered. The strength of cryptographic algorithms is generally influenced by two factors. 
The first factor is the structure of the algorithm in providing computational complexity. The second is the length of the key fed into the algorithm to create the unique cipher. A longer key generally equates to a stronger cipher and requires an exponentially greater time to decipher. It should, however, be noted that equivalent key strengths can differ substantially between different types of algorithms. Cryptographic strength is measured by the number of computing cycles required to decipher information. The length of time it takes to run the computing cycles has dramatically decreased as the speed of new processors has exponentially increased. In this way, advancements in computing power have rendered several once-strong algorithms obsolete, and new algorithms or key lengths continue to be required to maintain strength in the face of improving hardware capabilities.
Cryptographic algorithm requirements
When implementing cryptography or a cryptographic product, the algorithm must have been reviewed by the Australian Government and currently have approval as an ASD Approved Cryptographic Algorithm (AACA), unless the associated risks are assessed, understood, and formally accepted at the departmental level. AACAs fall into three categories: asymmetric/public key algorithms, hashing algorithms and symmetric encryption algorithms.
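The exponential relationship between key length and brute-force effort described above can be made concrete: each additional key bit doubles the search space, so doubling the key length squares it. A small illustration (key sizes chosen from common standards):

```python
# Each extra key bit doubles the brute-force search space, so strength
# grows exponentially with key length.
for bits in (112, 128, 192, 256):
    print(f"{bits}-bit key -> {2 ** bits:.3e} candidate keys")

# Doubling the key length squares the number of candidate keys:
assert (2 ** 128) ** 2 == 2 ** 256
```

This is also why equivalent strengths differ between algorithm families: a 256-bit elliptic-curve key and a 256-bit symmetric key do not resist the same attacks with the same margin.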
The currently approved asymmetric/public key algorithms are:
- Diffie-Hellman (DH) for agreeing on encryption session keys
- Digital Signature Algorithm (DSA) for digital signatures
- Elliptic Curve Diffie-Hellman (ECDH) for agreeing on encryption session keys
- Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signatures
- Rivest-Shamir-Adleman (RSA) for digital signatures and passing encryption session keys or similar keys.
The approved hashing algorithm is:
- Secure Hashing Algorithm 2 (SHA-224, SHA-256, SHA-384 and SHA-512).
The approved symmetric encryption algorithms are:
- AES using key lengths of 128, 192 and 256 bits
- Triple Data Encryption Standard (3DES) using three distinct keys.
Where there is a range of possible key sizes for an algorithm, some of the smaller key sizes do not provide an adequate safety margin against intrusion methods that might be found in the future, for example, future advances in integer factorisation methods rendering smaller RSA moduli vulnerable to applicable attacks. If the information to be secured is expected to retain high confidentiality requirements over long periods (e.g. decades), this should be taken into account.
ASD approved cryptographic algorithms 2018 [Official and Protected*]**
Table 1: ASD approved cryptographic algorithms requirements
| Algorithm category | Minimum key strengths |
| Asymmetric/public key algorithms | 160+ bits field/key size |
| Hashing algorithms | SHA-224, SHA-256, SHA-384 and SHA-512 |
| Symmetric encryption (AES) | 128, 192 and 256 bits |
| Triple Data Encryption Standard (3DES) | 3 distinct keys |
* See appendix B for mapping to QGISCF classified information
** This list is current as of December 2018; refer to the ACSC ISM for the current list of AACAs and supplementary controls in this section.
When using AACAs, agencies must ensure the implementation is aligned with the associated controls in the current ACSC ISM.
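The approved SHA-2 hashing variants listed above are all available in Python's standard library. This sketch (the payload is example data only, not an endorsement of any particular implementation) shows the digest size of each variant matching the number in its name:

```python
import hashlib

# The four approved SHA-2 variants, all in the standard library.
data = b"OFFICIAL: example payload"
for name in ("sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, data).hexdigest()
    # Each hex character encodes 4 bits, so the digest length in bits
    # matches the number in the algorithm's name.
    print(name, "->", len(digest) * 4, "bits")
```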
Agencies should use ECDH and ECDSA in preference to DH and DSA due to an increase in successful sub-exponential index-calculus based attacks on DH and DSA. ECDH and ECDSA offer more security per bit increase in key size than either DH or DSA and are considered more secure alternatives. When using elliptic curve cryptography, agencies must use a curve from the USA Federal Information Processing Standard 186-4 (FIPS 186-4). When using RSA for digital signatures, and for passing encryption session keys or similar keys, a key pair for passing encrypted session keys that is different from the key pair used for digital signatures must be used. AES and 3DES must not use Electronic Codebook (ECB) mode. Electronic Codebook (ECB) mode in block ciphers allows repeated patterns in plaintext to appear as repeated patterns in the ciphertext. Correctly implementing cryptographic protocols is the primary way to protect against network-based attacks and provide encryption-in-transit. Although many cryptographic protocols use strong standards-based cryptographic algorithms, they may still be vulnerable to weaknesses in the protocol structure or weakness in the implementation. Like cryptographic algorithms, the most secure protocols are typically based on mature industry standards as they have undergone international scrutiny to ensure there are minimal vulnerabilities. It is more likely that lesser known cryptographic protocols will contain vulnerabilities that could potentially be exploited. Many secure protocols rely on digital certificate technology to provide entity authentication. Digital certificates are created using a combination of public-key and cryptographic hash algorithms, and their security relies on a trust-based infrastructure model known as public key infrastructure (PKI). The most commonly used digital certificate standard is X.509 and this is considered the industry default. 
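The ECB weakness noted above (identical plaintext blocks producing identical ciphertext blocks) can be demonstrated with a deliberately toy stand-in cipher: a keyed XOR, which is not secure and is not AES. Only the structure of the mode matters here, not the cipher itself.

```python
import secrets

BLOCK = 16
key = secrets.token_bytes(BLOCK)

def toy_encrypt(block, k=key):
    # Toy stand-in for a block cipher (keyed XOR). NOT secure, NOT AES:
    # it exists only to show how ECB mode leaks plaintext structure.
    return bytes(a ^ b for a, b in zip(block, k))

plaintext = b"ATTACK AT DAWN!!" * 3   # three identical 16-byte blocks
blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]

# ECB mode: every block is encrypted independently, so repeated
# plaintext blocks appear as repeated ciphertext blocks.
ecb = [toy_encrypt(b) for b in blocks]
print("ECB repeats visible:", ecb[0] == ecb[1] == ecb[2])   # True

# CBC-style chaining: XOR each plaintext block with the previous
# ciphertext block (or the IV) before encrypting, hiding the repetition.
iv = secrets.token_bytes(BLOCK)
prev, cbc = iv, []
for b in blocks:
    c = toy_encrypt(bytes(x ^ y for x, y in zip(b, prev)))
    cbc.append(c)
    prev = c
print("CBC repeats visible:", cbc[0] == cbc[1])   # almost surely False
```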
Guidelines for establishing a PKI based on digital certificates are out of the scope of the Data encryption standard. Transmission networks consist of many protocol layers working together to ensure that information is delivered to its intended recipient and in an appropriate way, based on the type of information. There are many cryptographic protocols that operate at different layers of granularity based on how information is secured for transmission.
Cryptographic protocol requirements
The Data encryption standard is aligned with the ACSC ISM and, where appropriate, recommends the use of ASD Approved Cryptographic Protocols (AACP). The current ASD Approved Cryptographic Protocols are:
- Transport Layer Security (TLS)
- Secure Shell (SSH)
- Secure Multipurpose Internet Mail Extension (S/MIME)
- OpenPGP Message Format
- Internet Protocol Security (IPsec)
- Wi-Fi Protected Access 2 (WPA2).
When implementing AACPs, agencies must ensure the implementation is aligned with the associated controls in the current ACSC ISM (this includes cryptographic protocol versioning). Cryptographic protocol implementation must align with the QGISCF confidentiality classification of the information being transmitted; the required controls can be found in the ASD Approved Cryptographic Protocols section of the Cryptography chapter in the current ACSC ISM. Agencies should establish and maintain end-to-end encryption for all applications, and where an agency does not have physical control over the network infrastructure used for transmission, data should be encrypted by default.
Encryption at rest
Encryption at rest (or data-at-rest) is the result of encrypting data that is not considered moving (i.e. not being transmitted). Data-at-rest includes data that resides in databases, file systems, hard drives, USB keys, memory, and any other structured storage method.
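As one hedged example of applying an AACP such as TLS in practice, Python's standard `ssl` module can express a client-side policy of the kind the ISM controls describe. The minimum-version choice below is illustrative only; agencies should take the actual required protocol versions from the current ACSC ISM.

```python
import ssl

# Client-side TLS policy: certificates validated against trusted CAs,
# hostnames checked, and protocol versions below TLS 1.2 refused.
# The exact minimum version is an illustrative policy choice.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True
print(ctx.check_hostname)                     # True
```

`create_default_context()` already enables certificate and hostname verification; pinning the minimum version additionally rules out legacy SSL/TLS handshakes.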
There are different methods to implement encryption at rest; the primary methods include full disk encryption, partial disk encryption, and file-based encryption. Encryption of data at rest can be used to reduce the physical storage and handling requirements for media or systems containing sensitive information. When implementing encryption at rest, full disk encryption is the preferred implementation method. Full disk encryption provides a greater level of protection than file-based encryption. File-based encryption can be used to encrypt individual files; however, there is the possibility of temporary copies remaining on the device in an un-encrypted form. Partial disk encryption can also be used to ensure specifically sensitive data can be stored in a secured manner. Partial disk encryption can be implemented by partitioning the storage in a device or database and encrypting a specific partition(s). Partial disk encryption must be accompanied by appropriate access control that will only allow writing to the encrypted partition. Encryption at rest can be implemented to protect files and data from external attackers and malicious insiders. If encryption at rest is implemented appropriately alongside access control measures, it should mitigate the likelihood of inappropriate access to information and reduce the impact of data theft.
Encryption at rest requirements
There are legitimate reasons why information owners choose not to encrypt data at rest, including:
- difficulties in recovery of any corrupted data, especially if there is cipher block chaining
- conflict with compression algorithms
- increased use of CPU cycles to undertake tasks.
Following an assessment of risk, Queensland Government agencies should, where possible, implement full disk encryption to protect data at rest and reduce the impact of device theft and data leakage.
In all other cases, encryption at rest may be implemented with a business owner's approval to mitigate an existing risk or reduce the physical storage and handling requirements of the data/information. Cryptographic systems are comprised of equipment and keying material. Keying material is the data (e.g., keys and initialisation vectors) necessary to establish and maintain cryptographic keying relationships. Keying material is either symmetric or asymmetric in nature, although there are several different types of keys defined. Keys can be identified according to their classifications as public (asymmetric), private (asymmetric), or symmetric, and according to their use; for more information regarding types of keys see NIST SP 800-57 Pt. 1 section 5.1.1. Cryptographic equipment is the aspect of the cryptographic system that allows users to encrypt and decrypt data/information while keyed. While cryptographic equipment is usually a physical device, it can be part of encryption software. Key management is susceptible to several threats and vulnerabilities. These can have significant effects on the confidentiality or integrity of the encrypted information; some of these threats and vulnerabilities include:
- disclosure of the keying material: either the keying material is in plain text, is not protected and can be accessed, or is enciphered and can be deciphered
- modification of keying material: changing the keying material so that it does not operate as intended
- unauthorised deletion of keying material: removal of the key or key related data
- incomplete destruction of keying material: this may lead to the compromise of current or future keys
- unauthorised revocation: the direct or indirect removal of a valid key or keying material
- masquerade: the impersonation of an authorised user or entity.
- delay in executing key management functions: this may result in a failure to generate, distribute, revoke or register a key, update the key repository in a timely manner, maintain a user's authorisation levels, and so on. The delay threat may result from any of the previously mentioned threats or from physical failure of the key-related equipment
- misuse of keys: the use of a key for a purpose for which it is not authorised, excessive use of a key, provision of keys to an unauthorised recipient, and the use of a key management facility for a purpose for which it is not authorised.

Key management requirements

An agency policy on the use, protection and lifetime of cryptographic keys must be developed and implemented through their whole lifecycle. Agencies must develop their key management policies to comply with the ACSC ISM key management requirements and ISO 27002:2015. ISO 27002:2015 and the ACSC ISM both require the implementation of a key management plan that must include an agreed set of standards, procedures and secure methods for the following topics:
- generating keys for different cryptographic systems and different applications
- issuing and obtaining public key certificates
- distributing keys to intended entities, including how keys should be activated when received
- storing keys, including how authorised users obtain access to keys
- changing or updating keys, including rules on when keys should be changed and how this will be done
- dealing with compromised keys
- revoking keys, including how keys should be withdrawn or deactivated, e.g. when keys have been compromised or when a user leaves an organisation (in which case keys should also be archived)
- recovering keys that are lost or corrupted
- backing up or archiving keys
- destroying keys
- logging and auditing of key management related activities.
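A few of the plan topics above (generation, revocation, destruction, and audit logging) can be sketched as a minimal, hypothetical key register. The class, states and field names are invented for illustration and do not come from any standard.

```python
# Minimal, hypothetical key-register sketch mirroring a few key management
# plan topics: generation, revocation, destruction, and audit logging.
import os, time

class KeyRegister:
    def __init__(self):
        self._keys = {}      # key_id -> {"material": bytes, "state": str}
        self.audit_log = []  # logging of key management related activities

    def _log(self, event, key_id):
        self.audit_log.append((time.time(), event, key_id))

    def generate(self, key_id, length=32):
        self._keys[key_id] = {"material": os.urandom(length), "state": "active"}
        self._log("generate", key_id)

    def revoke(self, key_id):
        # e.g. key compromise or a user leaving the organisation
        self._keys[key_id]["state"] = "revoked"
        self._log("revoke", key_id)

    def destroy(self, key_id):
        # overwrite the material before marking it destroyed, so no
        # usable copy lingers in the register
        entry = self._keys[key_id]
        entry["material"] = b"\x00" * len(entry["material"])
        entry["state"] = "destroyed"
        self._log("destroy", key_id)

    def material(self, key_id):
        entry = self._keys[key_id]
        if entry["state"] != "active":
            raise PermissionError(f"key {key_id} is {entry['state']}")
        return entry["material"]
```

A real register would also cover distribution, escrow/backup and certificate handling, and would persist the audit log to tamper-evident storage.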
When utilising a third party for the storage and/or transmission of information deemed to require encryption, agencies should, where possible, appropriate and economical, seek to control the encryption keys. When developing key management policies and plans, agencies should review ISO/IEC 27002:2015, AS 11770-1, the ACSC ISM, NIST SP 800-57, and FIPS 140-2. These industry standards provide a holistic view of key management and offer minimum control sets.

Appendix A Required controls

|Policy||Implement policies on the use of encryption, cryptographic controls, and key management||Must|
|Implement controls at least equivalent to those outlined in the Data encryption standard||Must|
|Any required control with a should or must statement that is not implemented must be recorded in the agency risk register||Must|
|Algorithm||When implementing cryptography or a cryptographic product the algorithm must have approval as an AACA, unless the risk is formally accepted by the agency accountable officer (i.e. Director-General or equivalent executive delegate)||Must|
|Agencies must ensure the implementation of AACAs is aligned with the associated controls in the current ACSC ISM||Must|
|ECDH and ECDSA should be used in preference to DH and DSA||Should|
|Elliptic curve cryptography must use a curve from FIPS 186-4||Must|
|RSA must use a key pair for passing encrypted session keys that is different from the key pair used for digital signatures||Must|
|AES and 3DES must not use Electronic Codebook (ECB) mode||Must|
|Protocols||Where an agency does not have physical control over the network infrastructure used for transmission, data should be encrypted by default||Should|
|Where appropriate agencies should use ASD Approved Cryptographic Protocols (AACPs)||Should|
|When implementing AACPs agencies must ensure the implementation is aligned with the associated controls in the current ACSC ISM||Must|
|Protocol implementation must align to the confidentiality classification of the information being transmitted||Must|
|At-rest||Where possible full disk encryption at rest should be implemented||Should|
|Partial disk encryption must be accompanied with appropriate access control||Must|
|Key management||Policy on the use, protection and lifetime of cryptographic keys must be developed and implemented through their whole lifecycle||Must|
|Agencies must develop their key management policies to comply with the ISM key management requirements and ISO 27002:2015||Must|
|Agencies should control encryption keys when storing and/or transmitting information deemed to require encryption on a third-party system||Should|
|Agencies must implement a key management plan that includes an agreed set of standards, procedures and secure methods for the listed topics in section 4.1.4||Must|

Appendix B Control classification mapping

|Information Security Manual 2017||Information Security Manual|
|UD: Baseline controls||O: Official controls|
|P: Protected controls||P: Protected controls (depending on risk assessment)|
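The Appendix A rule that AES and 3DES must not use ECB mode exists because ECB encrypts equal plaintext blocks to equal ciphertext blocks, leaking structure. The following toy demonstration uses an HMAC-based stand-in for a block cipher (encryption direction only, not a real cipher) purely to show the ECB pattern leak and how chaining hides it.

```python
# Why ECB mode is prohibited: equal plaintext blocks produce equal
# ciphertext blocks. Toy keyed block transform stands in for AES here.
import hmac, hashlib

KEY = b"\x01" * 32
BLOCK = 16

def E(block: bytes) -> bytes:
    # toy keyed block transform (illustration only, not invertible)
    return hmac.new(KEY, block, hashlib.sha256).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ecb(pt: bytes) -> bytes:
    # each block encrypted independently -- repeats stay visible
    return b"".join(E(pt[i:i + BLOCK]) for i in range(0, len(pt), BLOCK))

def cbc(pt: bytes, iv: bytes = b"\x00" * BLOCK) -> bytes:
    out, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = E(xor(pt[i:i + BLOCK], prev))  # chain in previous ciphertext
        out.append(prev)
    return b"".join(out)

pt = b"ATTACK AT DAWN!!" * 2                   # two identical 16-byte blocks
assert ecb(pt)[:BLOCK] == ecb(pt)[BLOCK:]      # ECB: repetition leaks
assert cbc(pt)[:BLOCK] != cbc(pt)[BLOCK:]      # chaining hides it
```

The same structural leak is what makes the well-known "ECB penguin" image recognisable after encryption.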
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100651.34/warc/CC-MAIN-20231207090036-20231207120036-00072.warc.gz
CC-MAIN-2023-50
26,917
176
http://gailcarmichael.com/work/projects/streetsom
code
I chose the paper Selection of Streets from a Network Using Self-Organizing Maps as the basis of my final project for the GIS grad course (COMP 5204) I took fall 2007. I implemented a simple street selection demo application based on the research described within the paper using Java and some open source libraries.

There are two levels of work to be done when determining what roads should be shown at a particular level of detail. This whole process is known as generalization, since we must decide what subset of all the available information will be shown. The first step of generalization involves examining the model of the road network and using information about street connections, types, and so on to pick out the most important or relevant pieces to show. Once we have the data that is to be visualized, the second step analyzes the geometric properties of the chosen roads and uses the results to simplify the graphical rendering of the roads. For example, a road with many small bends that can't be rendered onto the screen as such will be simplified to a straight line instead.

It is in the first step that self-organizing maps (or SOMs) can play a role. These maps are actually neural networks that are used to cluster and visualize data. To make a longish story short (and simplified), you create vectors with as many dimensions as you have numerical properties about each item you want to cluster. Using these numbers, the SOM sorts out the data so that items that are most alike end up being "close" together. You might, for example, have a SOM that is represented by a two dimensional grid, and the input vectors will be organized into the cells based on their similarities.

In the case of selecting roads from a network, there are some obvious numerical values that can be used to describe each street for use in the SOM, such as length. There are, however, also some less obvious choices that seem to help categorize roads in a more useful way.
A good list of attributes for each road is as follows:
- Number of streets that intersect this one
- A measure of closeness between this road and all others around it, using shortest path distances
- A measure of betweenness, the proportion of shortest paths between all other roads that pass through this one
- Length and width (i.e. number of lanes)
- A numerical representation of the class of road (e.g. avenue, street, highway, freeway, etc.)
- Speed limit

Using these properties, roads can be clustered into a SOM. We can then pick a group of similar roads very easily. If we are careful about using the numerical properties so that when the value increases, so does the corresponding importance of the road, we can simply pick the SOM cells with the highest average values for the vectors stored within. These roads will be used in the map's final generalization.

The main component of the project's implementation is the GeoTools open source Java toolkit. This library is fully featured for the general manipulation of geospatial data and adheres to the OpenGIS® Specifications. It is capable of opening several common data formats including ESRI shapefiles, geography markup language, and PostGIS, and has built in rendering capabilities. JavaSOM, the implementation component of a student's work, is a basic self-organizing map library that supports input and output of data via XML files.

Looking at the shapeReader package, the main starting point of the implementation is the ShapeFile class. It can load an ESRI shapefile and provide access to the features found within. In the example test run, a shapefile from Statistics Canada was obtained. The file contains a road network for the province of Ontario. The AreaOfInterest class facilitates a new view of a set of features filtered either by geometric bounds or a list of feature identifiers.
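The closeness and betweenness attributes listed above can be computed from a connectivity graph with breadth-first shortest paths. Here is a rough, hypothetical Python sketch on a toy unweighted graph (the project itself used the GeoTools graph framework, not this code):

```python
# Closeness and betweenness on a toy unweighted road graph.
from collections import deque

def closeness(adj, v):
    """(n - 1) / sum of shortest-path distances from v to every other node."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return (len(adj) - 1) / sum(dist.values())

def betweenness(adj):
    """Brandes' algorithm; each undirected path is counted from both ends."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:          # w reached via v
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order[1:]):               # accumulate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            bc[w] += delta[w]
    return bc

# three roads in a line: "b" is the only through-road
roads = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
```

On this tiny graph the middle road scores highest on both measures, which matches the intuition that through-roads are the important ones to keep.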
The example creates an instance of this class in order to work with a smaller set of data (the original file has 473,252 features and the new area of interest contains just 239). The area of interest can be rendered from this class as well. The screenshot below shows the road network contained in the example's area of interest.

The shapeReader.graph package contains classes that are used to build a connectivity graph from a set of features and extract information about the graph (namely, degree, closeness, and betweenness). The graph is implemented as a part of the GeoTools graph framework, and as such, the algorithms for computing this information are limited by the capabilities of GeoTools.

Finally, the streetClassifier package contains the main driver for the street selection. The main class of note here is the StreetSelector class. It can perform the whole process beginning with an area of interest and a percentage of important roads to select. A new area of interest with only the selected streets is returned. The next screenshot shows a reduced map with the top 80% of roads selected. Most of the general shape has been preserved.

There are some improvements that can be made to my implementation, particularly the classification of roads, which I had to do somewhat arbitrarily. Somebody with more knowledge could certainly provide a better assignment of numerical values to the various classes.
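The clustering-and-selection step can be sketched as a toy, pure-Python SOM. The map size, feature values and training schedule below are invented for illustration; the real project used JavaSOM and GeoTools rather than this code.

```python
# Toy 1-D self-organizing map: cluster road feature vectors, then keep
# the cell whose vectors have the highest average value.
import math, random

def train_som(vectors, cells=2, epochs=300, seed=1):
    rng = random.Random(seed)
    dim = len(vectors[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(cells)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                 # decaying learning rate
        v = rng.choice(vectors)
        bmu = min(range(cells),                      # best matching unit
                  key=lambda c: sum((w - x) ** 2
                                    for w, x in zip(weights[c], v)))
        for c in range(cells):
            influence = math.exp(-abs(c - bmu))      # neighbourhood falloff
            for d in range(dim):
                weights[c][d] += lr * influence * (v[d] - weights[c][d])
    return weights

def assign(vectors, weights):
    cells = [[] for _ in weights]
    for v in vectors:
        bmu = min(range(len(weights)),
                  key=lambda c: sum((w - x) ** 2
                                    for w, x in zip(weights[c], v)))
        cells[bmu].append(v)
    return cells

# toy roads as (degree, closeness, betweenness, length), scaled so that
# a larger value always means a more important road
roads = [(0.9, 0.8, 0.9, 0.7), (0.8, 0.9, 0.8, 0.9),
         (0.2, 0.1, 0.1, 0.3), (0.1, 0.2, 0.0, 0.2)]
weights = train_som(roads)
clusters = assign(roads, weights)
# selection step: pick the cell with the highest average vector value
selected = max((c for c in clusters if c),
               key=lambda c: sum(map(sum, c)) / len(c))
```

Scaling every attribute so that "bigger means more important" is what makes the final pick-the-highest-average-cell step valid, as the write-up notes.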
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708144156/warc/CC-MAIN-20130516124224-00020-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
5,237
20
http://qconlondon.com/london-2012/presentation/Code%20to%20Cloud:%20The%20Cloud%20Foundry%20in-depth,%20hands-on%20tutorial%20from%20the%20experts
code
Tuesday 09:00 - 16:00

CloudFoundry is the open source Platform as a Service (PaaS) project initiated by VMware. Among the most exciting aspects of CloudFoundry is its openness and developer-friendliness: CloudFoundry allows applications written in a range of popular languages and frameworks to be deployed with virtually no restrictions and in a variety of environments: public or private clouds, potentially reusing existing hardware.

This tutorial aims to introduce CloudFoundry in a very practical way. It is broken down as follows:
• Introduction to CloudFoundry
• Tooling: VMC and STS
• Working with Micro CloudFoundry to build Private Clouds
• The CloudFoundry Development lifecycle
• Writing CloudFoundry enabled apps with Spring, Lift, and Node.js
• Creating and managing services: MySql, MongoDB, Redis, etc.
• Scaling, Autoscaling and Clustering your Cloud
• Management and Monitoring of Cloud Foundry and Hosted Applications

This tutorial is perfect for people who want to get their hands dirty building their own Private Clouds and learn how to build and scale a Production-Strength Cloud Foundry environment. Full of war stories from the trenches, and given by contributors to the Cloud Foundry project, this tutorial is a no-holds-barred, in-depth take on Cloud Foundry and how to make the most of it for your own applications.

Attendees must bring their own laptop with the following software installed:
- Ruby 1.9.2
- The latest version of RubyGems
- VMware Workstation, Player or Fusion to be able to run Micro Cloud Foundry (only one of those 3 is needed)
- Optionally, STS (SpringSource Tool Suite) should be installed as well

Colin Humphreys is Director of Technology at Carrenza. He has spent the last twelve years sitting on the fence between development and operations, delivering solutions for eBay, Volkswagen, Paypal, Cineworld, and others. Colin and the team at Carrenza won a UK IT Industry Award in 2009 with Comic Relief, and were medallists in 2010 with Tribal DDB.
He is passionate about "Infrastructure as Code", "Continuous Delivery", "DevOps", "PaaS", "Agile Testing", and various other buzzwords.

Dr David Syer works for VMware in London providing Identity Management solutions for the CloudFoundry platform. He is also an active Spring committer in many areas (Spring, Spring Security, Spring AMQP, Spring Integration) and is the technical lead on Spring Batch, the batch processing framework and toolkit from SpringSource. He is an experienced, delivery-focused architect and development manager, and has designed and built successful enterprise software solutions using Spring, implementing them in major financial institutions worldwide. David is known for his clear and informative training style and has deep knowledge and experience with all aspects of real-life usage of Java and the Spring framework. He enjoys creating business value from the application of simple principles to enterprise software.

Tareq is principal consultant at OpenCredo. He is a pragmatic, hands-on technical leader with a strong interest in areas such as programming languages, NoSQL persistence engines and PaaS. He has extensive experience with the Spring stack and is a committer on the Spring Web Services project. He is also the co-author of 'Programming Spring' and 'Spring in a Nutshell'. Tareq has been delivering innovative solutions based on a variety of modern technologies and platforms, most recently Scala and Cloud Foundry. Tareq regularly updates his personal blog found here: cafebabe.me.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132007.18/warc/CC-MAIN-20140914011212-00288-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
CC-MAIN-2014-41
3,549
34
https://www.frequencyfoundation.com/product-tag/free/
code
Ongoing research shows most COVID-19 virus strains can be managed by an Ivermectin/Doxycycline/Zinc protocol, and these strains have specific protein disruptor frequencies which disable the protein mechanisms that allow the SARS-CoV-2 virus to replicate. Some strains are resistant to Ivermectin and may be responsive to Hydroxychloroquine. These strains have a separate protein pathway. A few viral strains are resistant to both Ivermectin and Hydroxychloroquine and follow a third protein pathway mechanism. This update includes protein disruptor frequencies for the mutant UK strain B.1.1.7. These frequencies are free and released under a Creative Commons License. #COVID-19 Protein Disruptor Frequencies #copyright 2005-2020 Frequency Research Foundation, USA. Offered for license under the Attribution Share-Alike license of Creative Commons, accessible at http://creativecommons.org/licenses/by-sa/4.0/legalcode and also described in summary form at http://creativecommons.org/licenses/by-sa/4.0/. By utilizing these frequencies you acknowledge and agree that you have read and agree to be bound by the terms of the Attribution ShareAlike license of Creative Commons.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991829.45/warc/CC-MAIN-20210514214157-20210515004157-00284.warc.gz
CC-MAIN-2021-21
1,172
4
http://www.linuxquestions.org/questions/linux-newbie-8/how-could-i-remove-the-trash-icon-in-kde-i-have-two-10447/
code
This is what I do: Create a directory in /home called trash1. Go to the KGear/App launcher > Preferences > Look and Feel > Desktop > Paths and change the trash path to /home/trash1. The icon will disappear from the desktop automatically and you'll retain the right-click > "Move to Trash" functionality without the annoying Trash icon. Now someone tell me how you do this for Gnome
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172902.42/warc/CC-MAIN-20170219104612-00494-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
363
5
https://rdrr.io/github/DnI-Institute/r4ds/
code
This R package for Data Science has a few commonly required functions. It helps to get summary statistics into a data frame. Also, Information Value and Weight of Evidence are calculated for all variables of a data frame, along with a few more useful functions needed for practical data science on a daily basis.
|License||What license is it under?|
|Package repository||View on GitHub|
Install the latest version of this package by entering the following in R:
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592778.82/warc/CC-MAIN-20180721203722-20180721223722-00615.warc.gz
CC-MAIN-2018-30
575
6
https://www.dartmouth.edu/hr/benefits_compensation/coming_going/index.php
code
Coming & Going Enroll in Benefits Enroll in Benefits as a new employee. Life Event Changes Information on updating benefits due to life status changes. Leaves of Absence Apply for leaves of absences including disability, FMLA, and leaving Dartmouth. Information about benefit coverage when leaving Dartmouth. Dependent & Event Verification Required documentation for dependent and event changes.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650409.64/warc/CC-MAIN-20230604225057-20230605015057-00026.warc.gz
CC-MAIN-2023-23
395
10
http://oraclevsmicrosoft.blogspot.com/2005/02/
code
Oracle sequences and ADO .NET.

A few days ago, I twisted my brain on a tricky problem. The point was: "the tables in the database have triggers which insert a sequence value when inserting a row. The client application uses a dataset and datatable objects with a master-detail relation, which requires the primary key to be stored in the datatable objects."

That is not a brand new problem. The sequence mechanism is database oriented. That is a good idea: when there are numerous users, it is better to use a centralized id generator. But an advanced cache-oriented data access API like ADO .NET needs to manage this unique identifier without synchronizing the entire dataset.

So, last week, I desperately tried to build a master/detail dataset model that does not need to know the primary keys of new entries. That was not an elegant solution. Neither is it to use the ADO .NET auto-id system (as it is client-side oriented). I found the following solution.

Let us have a table named EMP with the following fields:

ID_EMP Number(6) ...

And the following sequence:

CREATE SEQUENCE PSI_ADMIN.S_EMP START WITH 1 INCREMENT BY 1

A typical way to retrieve and manipulate values in .NET is to build a dataAdapter object with the insert, delete, update and select commands. For instance:

OleDbDataAdapter da = new OleDbDataAdapter();
da.SelectCommand.CommandText = "SELECT ID_EMP,EMP_NAME FROM EMP";
da.UpdateCommand.CommandText = "UPDATE EMP SET EMP_NAME= ? WHERE ID_EMP= ?";
da.UpdateCommand.Parameters.Add("ID_EMP", OleDbType.Numeric, 6, "ID_EMP");
da.InsertCommand.CommandText = "INSERT INTO EMP(ID_EMP,EMP_NAME) VALUES(?,?)";
da.InsertCommand.Parameters.Add("ID_EMP", OleDbType.Numeric, 6, "ID_EMP");
da.DeleteCommand.CommandText = "DELETE EMP WHERE ID_EMP= ?";
da.DeleteCommand.Parameters.Add("ID_EMP", OleDbType.Numeric, 6, "ID_EMP");
da.SelectCommand.Connection = cnx;
da.UpdateCommand.Connection = cnx;
da.InsertCommand.Connection = cnx;
da.DeleteCommand.Connection = cnx;

...where "cnx" is a valid OLEDB connection to the Oracle database. The "da" object may be used, for instance, to fill a dataset that can itself be manipulated by a datagrid, where "ds" is a valid dataset.

This code will work fine as long as you can provide a valid ID_EMP. If you want to use the Oracle sequence, and that is my point, you need something more. You need to handle the "update" event and retrieve the sequence's nextval from Oracle. Here is how I implement this.

First, handle the event in the dataAdapter's construction:

da.RowUpdating += new OleDbRowUpdatingEventHandler(da_RowUpdating);

In addition, somewhere else in the class, put the following method:

private void da_RowUpdating(object sender, OleDbRowUpdatingEventArgs e)
{
    // Is it an insertion?
    if (e.StatementType == StatementType.Insert)
    {
        da.InsertCommand.Parameters["ID_EMP"].Value =
            new OleDbCommand("select S_EMP.nextval from dual", cnx).ExecuteScalar();
    }
}

When the dataAdapter's Update method is called, this event fires for each new, deleted or updated row, just before the SQL action is performed. If the action's type is "insert", the sequence's nextval replaces the current ID_EMP parameter. That's all folks! Hope this helps. Comments are welcome.

Back to work

Waow! I didn't post for about 20 days! Sorry about this. I wasn't able to post because I was here and also there, I've also been to this place and this one. Oups! I nearly forgot this island and this one. Sorry, that's hard to post code samples today.
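The underlying pattern in the post, fetching the next id server-side immediately before each insert instead of relying on a client-side auto-id, can be sketched outside of Oracle/ADO .NET too. Here is a hedged Python illustration where sqlite3 stands in for Oracle and a one-row counter table emulates the S_EMP sequence (sqlite has no native sequences; table and function names are invented):

```python
# Sketch of "fetch sequence nextval just before INSERT" using sqlite3
# as a stand-in for Oracle. A one-row counter table emulates S_EMP.
import sqlite3

cnx = sqlite3.connect(":memory:")
cnx.execute("CREATE TABLE emp (id_emp INTEGER PRIMARY KEY, emp_name TEXT)")
cnx.execute("CREATE TABLE s_emp (nextval INTEGER)")   # fake sequence
cnx.execute("INSERT INTO s_emp VALUES (1)")

def s_emp_nextval(conn):
    """Equivalent of 'SELECT S_EMP.nextval FROM dual'."""
    val = conn.execute("SELECT nextval FROM s_emp").fetchone()[0]
    conn.execute("UPDATE s_emp SET nextval = nextval + 1")
    return val

def insert_emp(conn, name):
    # this plays the role of the RowUpdating handler: fill the id
    # parameter from the "sequence" on each insert
    new_id = s_emp_nextval(conn)
    conn.execute("INSERT INTO emp (id_emp, emp_name) VALUES (?, ?)",
                 (new_id, name))
    return new_id

ids = [insert_emp(cnx, n) for n in ("scott", "tiger")]
```

Because the id comes from the shared counter rather than from each client, concurrent writers never hand out the same primary key, which is exactly why the post prefers the server-side sequence over ADO .NET's client-side auto-id.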
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256494.24/warc/CC-MAIN-20190521162634-20190521184634-00445.warc.gz
CC-MAIN-2019-22
3,491
48
https://www.graffitiboeke.co.za/index.php?lang=af&id=m&page=Soek&c_id=107&sort=title%20ASC&cat=Education&subcat=Grade-9&limit=15&pageno=5
code
Education: Grade 9 Education - Grade 9 Pythagoras 9 is complementary to learners' class work and normal textbooks and provides them with the opportunity to do extra exercises. The exercises will help learners when doing revision in preparation for tests and examinations. Each of the five learning outcomes and their subsections, as prescribed in the CAPS document, are covered comprehensively in this book. Each module consists of four sections: A brief discussion of the most important theory and typical examples. Exercise A: This exercise consists of simpler questions and is directed towards consolidating and securing learners' knowledge. Exercise B: This exercise consists of more complex questions and is directed towards extending learners' knowledge. Test: Each module concludes with a test. Learners should set a time restriction when they are completing such tests. A reasonable norm is 1 minute for every mark. Please note: Some modules also contain an Exercise C. This mainly applies to modules which contain a large amount of content and the aim is to cluster the work better so as to enhance understanding. (This ranking had not been followed in Module 1.1, though, as this module covers a wide variety of subjects.) The book as a whole comprises two sections: The brief discussion, exercises (questions) and test for each of the five learning outcomes and their subsections form the first part of the book. Complete answers to the exercises (questions) and tests (with mark allocation) form the second part.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153854.42/warc/CC-MAIN-20210729074313-20210729104313-00233.warc.gz
CC-MAIN-2021-31
1,524
12
https://coherentlabs.recruiterbox.com/jobs/fk0mhlk/
code
Coherent Labs is building a next-generation Virtual Reality/Augmented Reality web browser and platform. If you want to become part of the VR/AR revolution, join our team in our beautiful centrally-located office in Sofia.
- work in a team of the best C++ experts in gaming, rendering, web, and VR technologies;
- advance and leave a mark on a proprietary multi-threaded C++ web rendering technology that outperforms traditional browsers more than ten times;
- get your hands on all released and *unreleased* VR/AR hardware - GearVR, Oculus Rift, HTC Vive, Playstation VR, Microsoft HoloLens, and more :);
- develop professionally by solving complex problems in advanced deep technology
We are looking for Senior C++ developers that are interested in performance optimizations, networking, graphics, security and web/HTML rendering. Our company is rooted in the game technology industry where our products are used by hundreds of AAA game developers to create amazing games and experiences. The company maintains the highest levels of professionalism, agile process, code reviews and software design practices.
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170380.12/warc/CC-MAIN-20170219104610-00529-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,109
8
http://kylerbmjkw.diowebhost.com/10277234/helping-the-others-realize-the-advantages-of-java-project-help
code
We're going to help you to achieve great functionality and high quality in your Java project. We'll show you how to get mini Java projects done well.

Every single person, right from the live chat support to the one who was writing, was very knowledgeable, skillful and also very humble.

In other words, the source code instructions in Java are converted to bytecode rather than machine code. While executing the instructions, the Java virtual machine interprets the bytecode and translates it into executable machine language. Our Java programming assignment help delivers swifter service than any other provider.

Well, we're here to show you that you can become a Java guru much faster than you think with proper Java course help. To provide assignment assistance, we have made a few provisions, such as segmented services, independent editing and proofreading support, and paraphrasing help. To supply these services with utmost proficiency, we have employed only the finest assignment writers from Australia.

I am a mechanical engineering student from Hong Kong, China. I am excited about devices, but in our second semester I got a programming subject. Programming is a very trying task for me.

If the argument value is already equal to a mathematical integer, then the result is the same as the argument. If the argument is NaN or an infinity or positive zero or negative zero, then the result is the same as the argument.

Our Java programming assignment help experts are highly committed to your success and will take unending measures to help you with any programming application. According to our Java programming assignment help experts, Java helps students to attain the standard functions of various high-level and low-level languages and gives a firm understanding of variables, data types, arrays, control flow and operators.

Java provides a set of collection classes, which are similar to the STL in C++. There are abstract collections, such as Set and List, which provide an interface, and implementations such as TreeSet and ArrayList. There are methods such as contains which are provided by all the collections, although the speed of checking contains depends on the type of collection: a TreeSet is much faster than an ArrayList. Sets are unordered while Lists are ordered, meaning that if you insert the values 1, 2, 3 into a Set and into a List, then you can get them back in the same order from a List, but in a Set the order is not preserved, so you can tell you have those values, but you cannot say anything about the order in which they were added to the Set.

I asked here for help. Sarfaraj promised me that he would complete my C programming assignment ahead of time, and he finished it perfectly. I got 95% marks in my assignments. I highly recommend him; he is very co-operative.
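The collections point above (ordered lists versus unordered sets, and the cost of `contains`) can be sketched in Python rather than Java: `list` plays the role of `ArrayList` and `set` the role of a hash-based `Set`, with membership checks O(1) on average for the set versus O(n) for the list.

```python
# Ordered vs unordered collections, Python rendition of the Java point:
# a list preserves insertion order; a set only answers membership.
items = [3, 1, 2]
ordered = list(items)          # like java.util.ArrayList
members = set(items)           # like a hash-based java.util.Set

assert ordered == [3, 1, 2]          # insertion order preserved
assert sorted(members) == [1, 2, 3]  # a set only knows what it contains
assert 2 in members                  # average O(1) membership check
assert 2 in ordered                  # O(n): scans the list
```

The analogue is not exact (Java's TreeSet is sorted rather than hashed), but the ordering and membership trade-off is the same.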
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583700734.43/warc/CC-MAIN-20190120062400-20190120084400-00412.warc.gz
CC-MAIN-2019-04
3,804
14
http://gamedev.stackexchange.com/questions/tagged/physics-engine+efficiency
code
Why do we use physics engines for collision testing or raycasting? There is a thing I don't understand about game engines: why it is so common to use physics engines to do raycasting or collision testing? Say that you have a 3D scene loaded in your scene manager ... Sep 15 '10 at 10:33
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678690318/warc/CC-MAIN-20140313024450-00044-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,171
52
http://askubuntu.com/questions/tagged/jockey+graphics
code
We have chosen to use the Nouveau driver for our PCs with nVidia graphic cards. Unfortunately every user who gets connected to those PCs gets the message "Additional drivers available" ... even if they ...

On my Windows 7 installation an Intel HD Graphics 3000 card/driver shows. However in Ubuntu 12.04, System → Administration → Hardware Drivers shows no proprietary drivers available for the system. I ...

When I move my windows around they are really laggy. I had a feeling it's because Ubuntu isn't taking full advantage of my graphics card, so I went to install the additional drivers. The normal ...
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704933573/warc/CC-MAIN-20130516114853-00069-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
608
3
https://ruddergroup.com.au/careers/
code
We Need You Now! Electrical & Electronics / Telecommunications / Biomedical Intern

Rudder Technology, in collaboration with top universities and research institutions in Australia, focuses on the research & design, production and application of artificial intelligence, medical devices and interactive chips. Medical application is one of the main carriers of our R&D direction. Elderly care involves a series of data processing and real-time monitoring tasks, such as non-contact heart rate and blood pressure monitoring, fall warning, etc. Sensor interaction is combined with artificial intelligence data analysis for dynamic behaviour analysis and cloud data computing, to achieve behaviour prediction and real-time monitoring of physiological indices. We are a pioneer in the aged care market, developing the next-generation solution and product for the supervision and care of senior citizens.

The successful candidate will be the primary developer (hardware – radar system) on this exciting new project. Reporting to the Principal Engineer, your major responsibilities will include:
- programming on hardware to meet agreed specifications, in coordination with our research team and software developers
- device integration and system testing
- developing requirements and designing specifications for new modules of hardware supporting our R&D activities
- testing and supporting our product trial conducted with our hospital and aged care centre partners

To be successful in securing this position you will ideally possess:
- PhD degree in Electrical/Electronic Engineering, Mechanical Engineering, Computer Science, or other relevant fields
- experience in programming and development with an integrated system (e.g. robot)
- good knowledge of radar
- experience working on digital transmit and receive hardware
- strong communication skills and cooperation with other development members

Other desirable skills include:
- understanding of AI and machine learning models
- Python programming

In return we offer:
- Competitive salary
- Unique opportunity of joining prestigious research teams of UNSW and USYD
- Central work location in Sydney CBD
- Professional and friendly environment
- Flexible working arrangements

If you are interested in being considered for this rewarding and challenging opportunity with our team, please apply with your resume and an academic transcript via the form below. We will contact you if you fully meet our requirements.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103646990.40/warc/CC-MAIN-20220630001553-20220630031553-00601.warc.gz
CC-MAIN-2022-27
2,471
24