url | tag | text | file_path | dump | file_size_in_byte | line_count
stringlengths 13–4.35k | stringclasses 1 value | stringlengths 109–628k | stringlengths 109–155 | stringclasses 96 values | int64 112–630k | int64 1–3.76k
---|---|---|---|---|---|---|
http://smashessays.com/2021/08/29/scenario-4-you-are-a-sales-representative-for-a-company/ | code | You are a sales representative for a company that encourages staff to log time in the field and away from the office. You are expected to begin and end your day at the office. You notice that each day when you arrive and return another coworker is already there, and you wonder whether this person spends most of their time at the office. At your weekly sales meeting, you are informed of your coworker’s outstanding sales performance. You suspect that this coworker is spending more time flattering the boss instead of working leads in the field and as a result is getting the best client referrals. Your own sales numbers have steadily decreased since this other sales representative was hired. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103915196.47/warc/CC-MAIN-20220630213820-20220701003820-00256.warc.gz | CC-MAIN-2022-27 | 698 | 1 |
https://devblogs.microsoft.com/microsoft365dev/new-ways-to-use-apps-in-microsoft-teams-and-a-tool-to-help-you-build-them/ | code | New ways to use apps in Microsoft Teams and a tool to help you build them
Today we announced new features in Microsoft Teams that make it an even more powerful hub for teamwork – enabling teams to use apps in new ways and allowing them to take quick actions from wherever they are in Teams. What does this mean for developers? Teams apps will be more visible and accessible than ever! Learn more about these new app features.
Introducing the Teams App Studio:
As we launch this new app experience for Teams users, we also want to introduce the Teams App Studio, a new tool, currently in preview, to help you build your own apps. The Teams App Studio makes it easy to start developing or integrating your own service, whether you develop custom apps for your enterprise or SaaS applications for teams around the world. The Teams App Studio streamlines creation of the manifest for your app, and provides other useful tools like the Card Editor and a React control library. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00638.warc.gz | CC-MAIN-2024-18 | 973 | 4 |
https://www.blackhatworld.com/seo/url-slug-with-parent-page-does-not-display-url-in-google-serps-good-or-bad.783344/ | code | Hi, I changed the structure of my URL slugs from this (just an example, the sites do not exist): www.asker1.com/car-insurance-online/ to this: www.asker1.com/car-insurance/online (car insurance is the parent page). I noticed that now I don't see the URL in Google SERPs. Just like this example: look at the second page. You just see the parent page (Companies) under the title. No URL in Google SERPs = no bold exact-match words in the SERPs except the title. Do you think it's a huge disadvantage? That's the first question. The second question: if I try to target a long-tail keyword like "car insurance online", is there any difference between these? 1) asker1.com/car-insurance-online/ vs 2) asker1.com/car-insurance/online vs 3) asker1.com/car-insurance/car-insurance-online Or doesn't it matter? Thanks! | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947693.49/warc/CC-MAIN-20180425041916-20180425061916-00155.warc.gz | CC-MAIN-2018-17 | 798 | 1
http://lauderdale.lch.schoolinsites.com/?PageName='OrganizationPage'&OrganizationID='37012' | code | Each year County Industries (Lauderdale County High School’s robotics team) competes in the BEST Robotics competition. In this competition, each team is shown the course, given two very extensive rule books, and most importantly we are given the materials with which to build our robot. Our programming team is using MATLAB Simulink software to program the movements of our robot. Our engineering team is in charge of building the robot and using the code provided by our programming team to make the robot's wheels, arm, and claw move. The marketing team is in charge of a marketing booth and presentation for the BEST Robotics competition, along with many other things. This past Saturday, October 8, we competed in the local BEST competition at Northwest Shoals Community College. This year, with all of this effort put in, we placed 2nd in the robotics competition! This means we will move on to compete at South’s BEST in Auburn! South’s BEST competition will be December 3-4. We are so excited about the competition at Auburn! Wish us luck!
Written by Anna Gautney, Head of Marketing, County Industries
2016 "County Industries" Robotics Team Members
Jackson Smith, President and CEO
Clint Newton, Lead Engineer
Anna Gautney, Head of Marketing
Jarod Barksdale, Kevin Brown, Samuel Fink, Lydia Martel, Kelly Maton, Brandon McCafferty, Savannah Patterson, Trenton Sharp, Brylee Sinyard, Braden Spencer, Tyler Veal, Duncan Williams, Kaylee Word
BEST stands for Boosting Engineering, Science, and Technology. The program supplies students with a kit of supplies and challenges them to build a functioning robot to perform a given task. To learn more about BEST, visit bestinc.org.
The 2016 BEST Robotics Competition is named BET THE FARM. Our local hub's season started on August 27 and ended on Game Day at NWSCC on October 8. South's BEST Regional Competition is scheduled for December 2-3 at Auburn University.
---Link to Northwest Alabama BEST Robotics Hub at NWSCC
2015 Video from NWSCC Game Day This is a video of our team's second seeding round in the 2015 Northwest Alabama BEST Robotics competition. The driver is Morgan Grisham and the spotter is Jarod Barksdale.
2015 Robot "Highlight" Video Produced by Jackson Smith | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187206.64/warc/CC-MAIN-20170322212947-00117-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 2,235 | 12 |
https://generalassemb.ly/education/python-programming/online/learn-more/33655 | code | Python is a versatile and widely-used programming language with many applications. Come learn how you can become a proficient and confident Python programmer.
During the upcoming online information session you will:
If you are considering enrollment in an upcoming cohort, it is strongly advised to contact the Admissions team prior to attending this event.
You will receive a link to tune in live in your email before the event.
Participants who complete this workshop will not receive a certificate or any other designation of completion.
This workshop is not designed to prepare individuals to follow or pursue a trade, occupation or profession. It is not designed to improve, enhance, or add to the skills and abilities of an individual relative to occupational responsibilities or career opportunities.
This workshop is not approved by the Division of Private Business and Vocational Schools at the Illinois Board of Higher Education and participation in this workshop/event will not be counted towards an approved IBHE program/course of instruction.
Wednesday, 1 November
You’re on the list!
Keep an eye on your inbox for your ticket and we’ll see you at the event. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510284.49/warc/CC-MAIN-20230927071345-20230927101345-00730.warc.gz | CC-MAIN-2023-40 | 1,175 | 10 |
https://rsprivatizacija.com/enterprise-manager-and-real-application-testing-help-csx-corporation-upgrade-databases-twice-as-fast/ | code | With more than 400 databases supporting critical, packaged and proprietary business applications, including payroll, dispatch and a customer-facing order entry system, international transportation company CSX wanted to take advantage of the enhanced functionality of Oracle Database while minimizing business impact and downtime during migration. To ensure the process went smoothly, CSX used Oracle Real Application Testing and Enterprise Manager.
Using Oracle Real Application Testing helped CSX streamline the upgrade process and enabled it to complete the database upgrade in less than half the time required for the previous upgrade. CSX also shrank its enterprise database, which meant a 30% smaller database footprint. Providing critical insights, Oracle Real Application Testing allowed CSX to fully assess the impact of infrastructure changes and refine queries in a test environment before deploying the change to production.
CSX also used Oracle Enterprise Manager to analyze performance data from Oracle Real Application Testing’s SQL Performance Analyzer to assess the impact of prepackaged and custom SQL workloads during upgrades of the Oracle database in its Oracle E-Business Suite environment. By capturing SQL workloads for different peak periods into SQL tuning sets, CSX was able to create a comprehensive library of SQL queries that can be used for change validation.
“Oracle Enterprise Manager and Oracle Real Application Testing provided us with the necessary testing environment that allowed us to mitigate post-upgrade performance issues and ensure a successful upgrade of 400 databases. The fact that the upgrade was completed with no downtime and in half the time of our last upgrade was a critical win for our organization,” said Maritza Gonzalez, Technical Director, Data Management, CSX Corporation.
Additionally, CSX implemented Oracle Enterprise Manager 12c to monitor and manage a combination of over 500 Oracle databases and Oracle Real Application Cluster instances. Oracle Enterprise Manager provides centralized, standardized, and reliable monitoring, which has enabled CSX to manage the growth in the number of servers and databases. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571056.58/warc/CC-MAIN-20220809155137-20220809185137-00464.warc.gz | CC-MAIN-2022-33 | 2,162 | 5 |
https://discourse.webflow.com/t/hide-my-web-site-on-a-free-account/42661 | code | Is there any way to hide my Webflow website from search indexing? Or maybe just hide it completely?
I have a free account on which I am holding a development website. But it is currently showing up in searches and degrading the SEO of the current active site.
All the options I see to hide the site seem to be limited to paid accounts. If I have to get a paid account to hide the site, then I have no choice but to delete the site and move on from Webflow. | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00205.warc.gz | CC-MAIN-2021-31 | 462 | 3 |
http://forums.linuxmint.com/viewtopic.php?f=46&t=84628 | code | arjot wrote:I have 4GB ram on my laptop and I don't use hibernation function on my laptop so do I really need swap?
proxima_centauri wrote:I would just use this guide to create a SWAP file instead of a partition.
arjot wrote:The space I reserved for swap partition is 4GB & it is shown as unusable space in partition s/w.
/dev/sda*/ swap swap defaults 0 0
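For reference, the usual steps for the swap-file approach suggested above look roughly like this (a sketch assuming a 4 GB file at /swapfile; adjust the size and path as needed):

sudo fallocate -l 4G /swapfile        # create the file (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile              # restrict permissions so only root can read it
sudo mkswap /swapfile                 # format the file as swap space
sudo swapon /swapfile                 # enable it for the current session
# to make it permanent, add this line to /etc/fstab:
# /swapfile none swap sw 0 0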
Users browsing this forum: No registered users and 12 guests | s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701167599.48/warc/CC-MAIN-20160205193927-00102-ip-10-236-182-209.ec2.internal.warc.gz | CC-MAIN-2016-07 | 416 | 5 |
https://www.inesc-id.pt/events/sm-545-spoken-dialogue-systems-progress-and-challenges/ | code | Spoken Dialogue Systems: Progress and Challenges
University of Cambridge –
The potential advantages of statistical dialogue systems include lower development cost, increased robustness to noise and the ability to learn on-line so that performance can continue to improve over time. This talk will briefly review the basic principles of statistical dialogue systems including belief tracking and policy representations. Recent developments at Cambridge in the areas of rapid adaptation and on-line learning using Gaussian processes will then be described. The talk will conclude with a discussion of some of the major issues limiting progress. Bio: Steve Young received a BA in Electrical Sciences from Cambridge University in 1973 and a PhD in Speech Processing in 1978. He held lectureships at both Manchester and Cambridge Universities before being elected to the Chair of Information Engineering at Cambridge University in 1994. He was a co-founder and Technical Director of Entropic Ltd from 1995 until 1999 when the company was taken over by Microsoft. After a short period as an Architect at Microsoft, he returned full-time to the University in January 2001 where he is now Senior Pro-Vice-Chancellor.
His research interests include speech recognition, language modelling, spoken dialogue and multi-media applications. He is the inventor and original author of the HTK Toolkit for building hidden Markov model-based recognition systems (see http://htk.eng.cam.ac.uk), and with Phil Woodland, he developed the HTK large vocabulary speech recognition system which has figured strongly in DARPA/NIST evaluations since it was first introduced in the early nineties. More recently he has developed statistical dialogue systems and pioneered the use of Partially Observable Markov Decision Processes for modelling them. He also has active research in voice transformation, emotion generation and HMM synthesis.
He has written and edited books on software engineering and speech processing, and he has published as author and co-author, more than 250 papers in these areas. He is a Fellow of the Royal Academy of Engineering, the IEEE, the IET and the Royal Society of Arts. He served as the senior editor of Computer Speech and Language from 1993 to 2004 and he was Chair of the IEEE Speech and Language Processing Technical Committee from 2009 to 2011. In 2004, he received an IEEE Signal Processing Society Technical Achievement Award. He was elected ISCA Fellow in 2008 and he was awarded the ISCA Medal for Scientific Achievement in 2010. He is the recipient of the 2013 Eurasip Individual Technical Achievement Award.
Date: 2013-Jun-24 Time: 14:30:00 Room: Anfiteatro do Pavilhão Interdisciplinar, IST Alameda
For more information:
Workshop “Metabolism and mathematical models: Two for a tango” – 2nd Edition
Title: Workshop Metabolism and mathematical models: Two for a tango – 2nd Edition
Dates: October 25-26, 2022
Location: This workshop will be held in a virtual way
The topic of this workshop is metabolism in general, with a special focus, although not exclusive, on parasitology. Besides an exploration of the biological, biochemical and biomedical aspects, the workshop will also aim at presenting some of the mathematical modelling, algorithmic theory and software development that have become crucial to explore such aspects.
This workshop is being organised in the context of two projects, both with the Inria European Team Erable. One of the projects involves a partnership with the University of São Paulo (USP), in São Paulo, Brazil, more specifically the Institute of Mathematics and Statistics (IME) and the Institute of Biomedical Sciences – Inria Associated Team Capoeira – and the other involves the Inesc-ID/IST in Portugal, ETH in Zürich and EMBL in Heidelberg – H2020 Twinning Project Olissipo.
The workshop is open to all members of these two projects but also, importantly, to the community in general.
The program and more details are available here. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00364.warc.gz | CC-MAIN-2022-40 | 4,002 | 15 |
https://stat.ethz.ch/pipermail/bioc-devel/2005-November/000335.html | code | [Bioc-devel] [BioC] Bioconductor1.6(Rgraphviz etc.) install problem with red hat9.0
xnyang at seu.edu.cn
Tue Nov 22 18:13:20 CET 2005
I tried the graphviz-2.6.0, the default installed path is still
"/usr/local/lib/graphviz". and the worse is Rgraphviz-1.8.0 "unable to
load shared library '/usr/local/lib/R/library/
Rgraphviz/libs/Rgraphviz.so': " again.
Then I re-tried graphviz-2.2 as mentioned in last email, it works, but
need to type the path for R.
Where is the default path for graphviz that Rgraphviz looks for?
need more help to simplify that.
Byron Ellis wrote:
> Yes, you've chosen to install graphviz in a place not specified in
> the default library path so you need to set an environment variable.
> Why is this a surprise?
> On Nov 22, 2005, at 8:13 AM, xinan yang wrote:
>> Hi Robert and Byron,
>> Thanks for your prompt reply. It seems work using "sudo make
>> install" !
>> because, in the shell, it "R CMD INSTALL" works, which did not work
>> %whereis graphviz
>> graphviz: /usr/local/lib/graphviz
>> %export LD_LIBRARY_PATH=/usr/local/lib/graphviz
>> %R CMD REMOVE Rgraphviz
>> % R CMD INSTALL Rgraphviz
>> Then I can load the package.
>> But , after open a new shell, it still doesn"t work!
>> > library(Rgraphviz)
>> Loading required package: graph
>> Loading required package: cluster
>> Loading required package: Ruuid
>> Creating a new generic function for 'print' in 'Ruuid'
>> Error in dyn.load(x, as.logical(local), as.logical(now)) :
>> unable to load shared library '/usr/local/lib/R/library/
>> libdotneato.so.0: cannot open shared object file: No such file or directory
>> Error: .onLoad failed in 'loadNamespace' for 'Rgraphviz'
>> Error: package/namespace load failed for 'Rgraphviz'
>> > sessionInfo()
>> R version 2.2.0, 2005-10-06, i686-pc-linux-gnu
>> attached base packages:
>> "methods" "stats" "graphics" "grDevices" "utils"
>> "base"
>> other attached packages:
>> graph Ruuid cluster
>> "1.8.0" "1.5.3" "1.10.2"
>> I must re-type "export LD_LIBRARY_PATH=/usr/local/lib/graphviz" to
>> let it works!
>> What should I do if I want to update the graphviz-2.2 to graphviz 2.6?
>> Simply download the graphviz-2.2.0, then "./configure","make","sudo
>> make install" in the directory of graphviz-2.6?
>> Should I uninstall the previous one?
>> The same question for R, How to install R-2.2.0 while keep the old
More information about the Bioc-devel mailing list | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00759.warc.gz | CC-MAIN-2022-21 | 2,376 | 53
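For anyone hitting the same libdotneato.so error: the fix the thread converges on, exporting LD_LIBRARY_PATH, only lasts for the current shell. A minimal sketch of making it persistent, assuming bash and the /usr/local/lib/graphviz path used in the thread:

# per-user: append the export to ~/.bashrc so every new shell picks it up
echo 'export LD_LIBRARY_PATH=/usr/local/lib/graphviz:$LD_LIBRARY_PATH' >> ~/.bashrc
# or system-wide: register the directory with the dynamic linker
echo '/usr/local/lib/graphviz' | sudo tee /etc/ld.so.conf.d/graphviz.conf
sudo ldconfig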
https://www.experts-exchange.com/questions/20413636/Fonttransparent-not-working-when-printing.html | code | Printer object appears to ignore .fonttransparent, and still prints white blocking around the text. I have read a routine on this site provided by Paul Hews in year 2000 using:
Public Const TRANSPARENT& = 1
Public Declare Function SetBkMode& Lib "gdi32" (ByVal hdc As Long, ByVal nBkMode As Long)
lngRet = SetBkMode(Printer.hdc, TRANSPARENT)
...where do I place the lngRet statement in my code, and how do I declare lngRet? Or is there another answer to this problem? Many thanks for help. | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682998.59/warc/CC-MAIN-20191018131050-20191018154550-00463.warc.gz | CC-MAIN-2019-43 | 489 | 5 |
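One possible arrangement, as a sketch only (the Sub name, coordinates, and sample text are made up; the declarations go in a standard module, and SetBkMode is called once the Printer object has a device context, and again after each NewPage):

' In a standard module:
Public Const TRANSPARENT& = 1
Public Declare Function SetBkMode& Lib "gdi32" (ByVal hdc As Long, ByVal nBkMode As Long)

' In the printing routine:
Public Sub PrintOverlay()
    Dim lngRet As Long
    Printer.Print "";                              ' touch the Printer so its hDC exists
    lngRet = SetBkMode(Printer.hdc, TRANSPARENT)   ' returns the previous mode, 0 on failure
    Printer.CurrentX = 100
    Printer.CurrentY = 100
    Printer.Print "Text without the white box"
    Printer.EndDoc
End Sub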
http://librarianinblack.net/librarianinblack/2004/02/geek_pagan_hier.html | code | Which geeks look down upon which other geeks?
Which pagans look down upon which other pagans?
Frighteningly, as both a science fiction fan and someone who studied mythology, I rank rather highly on both charts.
Email (Required for LIB's eyes only)
Notify me of follow-up comments by email.
Notify me of new posts by email. | s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657128304.55/warc/CC-MAIN-20140914011208-00263-ip-10-196-40-205.us-west-1.compute.internal.warc.gz | CC-MAIN-2014-41 | 322 | 6 |
https://docs.crisp.chat/ | code | Welcome to the Crisp Developer Hub
Build apps for 400k+ Crisp users, or just for your own private use.
Start building on the top of the Crisp Platform in a matter of minutes.
Guides & tutorials on how to integrate Crisp with your systems.
In-depth technical references of all available Crisp APIs. | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100535.26/warc/CC-MAIN-20231204214708-20231205004708-00117.warc.gz | CC-MAIN-2023-50 | 297 | 5 |
https://www.wavelengthglobal.com/about | code | The Founding Team
Wavelength was founded by a team of experts in AI, computer vision, and imaging. We got together to develop Artificial Intelligence solutions for ADAS and autonomous vehicles (L2-L5) because we believe that self-driving cars will impact people's lives the most in the next 50 years, and we believe in edge computing.
Chief Executive Officer
Chief Technology Officer
We are a team of passionate creative people who believe in edge computing and using AI to solve problems.
We are constantly looking for like-minded software developers to join us. | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355937.26/warc/CC-MAIN-20210225211435-20210226001435-00282.warc.gz | CC-MAIN-2021-10 | 562 | 6 |
http://community.wikia.com/wiki/Thread:1401735 | code | When I type in a comment, it types it in wikitext; for example, when I do Bold, it puts it in triple apostrophes. It happens on every wiki I comment on, including this one. It bothers me, and I would like to know how to fix it. Is this a glitch?
Please give a link to an example page you are trying to add comments to in VisualEditor. It doesn't behave the same on every page, so knowing a specific example page will help us figure out what's going on.
As I already said, that is the MiniEditor. What I neglected to say is that its modes are analogous to the visual and source modes of the classic editor. If your browser is not capable of supporting visual mode, it will not support MiniEditor's visual mode.
Since you are using Edge, I am guessing it has been disabled along with the rich-text editor. The reason the rich-text editor was disabled for Edge is that there have been a lot of bugs specifically in Edge and Wikia wanted to save users the hassle of dealing with them while they try to get it fixed. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00195.warc.gz | CC-MAIN-2018-51 | 1,012 | 4 |
https://hasjob.co/scroll.in/4s0mm | code | This post is over 30 days old. The position may no longer be available
Posted by Scroll.in Development (@scrolldev)
Help power the next generation of publishing by joining Scroll.in, a digital media company focused on the intersection of editorial and technology. Our first offering is a digitally-native news publication, Scroll.in, which brings readers news that matters in an elegant, responsive newsfeed.
About technology team
Tech team is a bunch of fun loving geeks who love open source. We have built products and maintain open source projects that we are proud of.
We provide Mac Air/Pro to all our developers. We have flexible work times.
Learn more about the team here
- Designing and building responsive web applications
- Collaborating with cross-functional teams to define, design, and push new features
- Unit-testing code for robustness, including edge cases, usability, and general reliability
- Working on bug fixing and improving app performance
- Continuously discovering, evaluating, and implementing new technologies to maximize development efficiency
The ideal candidate should have:
- Relevant development experience of at least 1+ years
- Familiarity with version control system such as git
- Open source contributions
- Familiarity with React.js
- Familiarity with Vue.js
Apply for this position
Login with Google or GitHub to see instructions on how to apply. Your identity will not be revealed to the employer.
It is NOT OK for recruiters, HR consultants, and other intermediaries to contact this employer
Welcome to Hasjob!
Since 2011, Hasjob has been the place for Indian tech startups to list job opportunities. This is where startups hire before they become big and famous rocketships. All jobs on Hasjob are posted directly by founders or core team members. We do not accept listings from third-party recruiters.
Apply to a job here to join the startup scene in Bangalore, Delhi/NCR, Mumbai, Pune, Chennai, Hyderabad or one of the many other cities.
You can browse all jobs posted in the last 30 days, but logging in is recommended to make the most of Hasjob. Your identity is safe with us. | s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511872.19/warc/CC-MAIN-20181018130914-20181018152414-00480.warc.gz | CC-MAIN-2018-43 | 2,120 | 25 |
http://books.mozdev.org/html/mozilla-chp-6.html | code | Until your project is packaged for distribution, it can't be fully considered a finished application (unless it was designed to work only on the computer where it was created). Making your application distributable, installable, and registrable allows others to use what you have created.
This chapter is divided into four main sections. It starts with a quick overview of the basics of packaging and installing applications. The second section provides details about how to get your application packaged and described so that Mozilla recognizes what it is. The next section specifies how to put your package into a cross-platform installation file that can be installed over the Web onto other machines. The last section provides tips for customizing how your application will look once it is installed.
Several different pieces comprise Mozilla's distribution technology. In fact, Mozilla may have a few more moving parts than other packaging systems because it needs a way to package and install new software uniformly across several different platforms. Figure 6-1 shows the major components of Mozilla's packaging system outlined in black.
As you can see in Figure 6-1, the Cross-Platform Installer (XPI), pronounced zippy or X-P-I, is the archive format used to distribute Mozilla applications. The XPI file contains a script that downloads and installs the application. The package inside the XPI has a manifest that is used to register the new Mozilla-based software with the Mozilla chrome registry.
When a XPI contains a Mozilla-based package such as the xFly sample discussed in Chapter 2 and the following chapters, the installation script also takes care of the package registration process, described in the Section 6.2.2 section later in this chapter. Example 6-1 shows a simple installation script and the kind of information it contains. The Section 6.3.2 section, also later in this chapter, discusses other scripts that may need to be used in the installation process, such as trigger scripts.
Example 6-1. Package installation script
var myFile = "xFly.jar";

initInstall(                           // initialize the installation
    "Install xFly",                    // display name of installation
    "xFly",                            // package name
    "0.0.1",                           // version of install
    1);                                // flags - an optional argument, reserved for future use

f = getFolder("Chrome");               // specify a target directory
setPackageFolder(f);

addFile(myFile);                       // add software to the installation

registerChrome(
    PACKAGE | DELAYED_CHROME,          // chrome switch (i.e., type)
    getFolder("Chrome","xFly.jar"),    // destination of package
    "content/xFly/");                  // location of manifest in package

if (0 == getLastError( ))              // if there have been no errors:
    performInstall( );                 // install "xfly.jar"
else                                   // otherwise
    cancelInstall( );                  // cancel the installation.
The installation process requires a few different steps. First an installation must be initialized. Then the software to be installed is added to the specified target directory. Finally, packages in the installation are registered. At this point, the application is installed on a user's computer.
When you install new packages or Mozilla-based software, the chrome registry on the Mozilla side brokers the deal -- reading the manifest, executing the installation script(s), and updating the package information that it maintains internally (storing this information using RDF).
The relationship of the packaging, installation, and registration -- and all pieces involved -- may seem a little complex and idiosyncratic at first, but bear with it. The upshot of this powerful but somewhat diffuse packaging technology is that you can bundle your software, put it on a server, and have users install it by simply clicking a link on a web page when using Mozilla.
It is possible to use this packaging system to bundle any sort of application or extension to an existing Mozilla application. You can install a XPI that adds functionality to the Mozilla browser, such as Mouse Gestures (http://optimoz.mozdev.org/gestures/ ), which enables the execution of common browser commands with mouse movements. You can package new Mozilla development tools and libraries like JSLib (see Chapter 5). You can also create installations for entirely new Mozilla applications. | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148375.36/warc/CC-MAIN-20200229022458-20200229052458-00287.warc.gz | CC-MAIN-2020-10 | 4,224 | 11 |
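For context, a trigger script of the kind referred to above (covered in Section 6.3.2) can be as small as the following sketch; the file name and display name are illustrative:

// On the download page, hand the XPI to Mozilla's XPInstall engine.
function installXFly() {
  if (!InstallTrigger.enabled())
    return false;                      // XPInstall is switched off in this browser
  var xpi = new Object();
  xpi["xFly"] = "xfly.xpi";            // display name -> URL of the XPI archive
  return InstallTrigger.install(xpi);
}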
https://www.hologram.io/guides/data-messages | code | Hologram's Data Engine is a message queue and protocol translation layer designed to route data from your embedded devices to other internet-connected services. It features a low-bandwidth ingestion protocol designed for resource-constrained microcontrollers. The Hologram Nova's standard library exposes the Data Engine as an Arduino Serial-compatible object for sending data.
A message consists of a data payload and metadata. The payload is typically a text string, but can be arbitrary binary data. Most metadata is automatically added on the server based on the message source, but you can also specify one or more topics to control how the Data Engine routes the message.
Topics provide the Data Engine with context on what the data represents, where the data originated, or why the data was generated. The Data Engine uses topics to conditionally route messages to their proper destinations via routing rules.
Topics are arbitrary strings, and a message can have more than one topic associated with it. You should develop your own conventions on how to use topics. Thoughtful use of topics can make it easier to set up new routing rules and integrations without needing to update device firmware.
User topics cannot begin with an underscore character. The Data Engine reserves these names for system topics such as _DEVICE_ID_.
Here are some examples of good topics to add to messages:
It's not always obvious whether to include some information as a topic or encoded in the message payload itself. The Data Router is only able to route messages based on topics, and messages to downstream services can include topics. Therefore, for the most flexibility, it's a good idea to use topics for classifying messages.
In addition to the topics you specify when writing a message, the Data Router will automatically append certain topics based on the source and protocol of the message.
All system topics begin and end with an underscore character, e.g. _SIMPLESTRING_. For security and consistency reasons, you are not able to specify these system topics explicitly.
The most important system-generated topic is the device topic. Every message written to the Data Engine from a device gets a topic in the form of _DEVICE_1234_, where 1234 is the Hologram device ID.
For a complete listing of system tags that the Data Engine may add to your messages, please refer to the System Topics reference.
When sending a message to Hologram's Data Router via the TCP API, you must authenticate as a specific device. The authentication credentials consist of an 8-byte device key. You may view these credentials under Cloud & messaging on the device dashboard page.
If you believe that a device's credentials may have been compromised, you can regenerate them. Each device has one active set of credentials at a time, so the old credentials will no longer work after regenerating.
The TCP API also supports a separate authentication method based off of the SIM card. This method is used by the Hologram Nova if you use it with the Hologram Python SDK in its default sending mode. With this authentication method, there is no need to generate credentials on the dashboard. | s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519784.35/warc/CC-MAIN-20210119201033-20210119231033-00538.warc.gz | CC-MAIN-2021-04 | 3,161 | 14 |
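For illustration, a rough sketch of sending a message with topics through the Hologram Python SDK mentioned above (the device key, message, and topic names are placeholders, and the exact API can differ between SDK versions):

from Hologram.HologramCloud import HologramCloud

# Authenticate with the 8-byte device key from the dashboard; pass an empty dict
# instead to use the SIM-based authentication described above.
credentials = {'devicekey': 'ABCD1234'}
hologram = HologramCloud(credentials, network='cellular')

hologram.network.connect()
result = hologram.sendMessage('22.5', topics=['temperature', 'greenhouse-1'])
print(hologram.getResultString(result))   # human-readable send status
hologram.network.disconnect()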
https://soen.ghost.io/choosing-the-right-health-and-performance-monitoring-for-your-sitecore-solution/ | code | In this blog post, I'll be talking about some of the learnings I've been gathering over the past few months on how to choose and incorporate health and performance monitoring (APM) tools into your Sitecore solutions.
The Big Why
Although a slightly different topic than what I normally blog about, I've always considered monitoring of software solutions to easily be one of the most important things to consider when working with medium to large sized solutions. This is usually a topic most technically minded people can relate to and agree should be highly prioritized.
From my daily work, I see that this is something that tends to be neglected, one way or another, and there are many reasons for this. One reason is that the team of developers doesn't have the knowledge to determine which tools to use, or even how to use them. Another reason might be that the client concludes that it's too costly to invest time and money in such tools in terms of the return on investment, despite the fact that the development team has most likely recommended having such tools available to help track down errors when they occur (and believe me, they will occur eventually; even standard off-the-shelf products have errors in them), or to predict upcoming failures based on trends in previous failure history.
From the perspective of Sitecore, a whole other argument that developers seem to think is feasible is that, if an error occurs, why not simply review the log files in Sitecore; that's what they are for, and you can just download the log files and run them through the Sitecore Log Analyzer (SCLA), right?
Let me start by saying that I really appreciate all the work that has gone into making the SCLA, and that I use this (awesome) tool in my daily work. However, I would also like to point out that I prefer not having to log into every Sitecore server, pull down every questionable log file, run each of them through the SCLA and from here compare the log entries across each of the logs from the individual servers, hoping to see some sort of pattern in what might actually be correlated to the error(s).
There has to be a different way, right?
As mentioned, the problem is scaling and, in the context of a large solution, getting correct information about the actual state of the solution. In practice, when you have a medium to large sized solution, not to mention adding Sitecore into the mix, it will take time to manually scan through the log files in order to pin down which server(s) might be causing problems. Moreover, it will be very tricky to get an overall picture of how your solution performs as a whole, since you are lacking a centralized place for gathering and aggregating all information about the current health and performance state of the solution.
Instead, what we want to be able to do is to:
- Monitor the performance of the application to make sure that it is healthy
- Rapidly diagnose applications or systems that are failing
- Monitor live applications, individually or across the entire solution as a whole
- Log events that do not necessarily relate to errors in an application
Let me put your mind to ease by saying, that all of the above are exactly what health and performance monitoring tools are meant to help achieving.
When you decide on using a health and performance monitoring tool, you should be aware that there is a plethora of tools available at your disposal, some free, others priced in the range of tens to hundreds of dollars per month. Although this can be quite overwhelming, the trick is to understand which of the available tools you should be giving a closer look, and how they differ from each other.
To get you started, I've listed some of the tools I've been working with lately to give you an idea of what kinds of solutions are available on the market.
Application Insights is Microsoft's answer to a fully-fledged APM, which can be used to monitor your application(s) while they are running live. Application Insights is aimed at the development team, to help you understand how your app is performing and how it's being used. Application Insights will automatically detect performance anomalies and notify the development team when they occur.
Once configured, Application Insights will monitor the following about your solution:
- Request rates, response times, and failure rates
- Dependency rates, response times, and failure rates
- Page views and load performance
- AJAX calls from web pages
- User and session counts
- Performance counters from your server machines, such as CPU, memory, and network usage
- Diagnostic trace logs from your app (so that you can correlate trace events with requests)
As for pricing, this definitely depends on whether you go with Microsoft's cloud-based solution in Azure, or use it on-premises. The cloud-based solution offers two pricing options: Basic and Enterprise. With Basic, you pay based on the volume of data your solution sends to Application Insights, with a 1 GB free allowance per month. In the Enterprise pricing option, you pay for the number of nodes that host your application, and you get a daily allowance of data per node. Each node will cost you around $15 and lets you send 200 MB of data per node each day. If you need to send more data than either the 1 GB included in Basic, or the 200 MB per node in Enterprise, this will cost you $2.30 per additional GB sent.
I've personally been using the cloud solution for the past 4-5 months, and I've been very satisfied with the results. It requires very little effort to get Application Insights up and running, and once you have data being sent to Application Insights, the different analytics and diagnostics are very easy to use, and you quickly get a very good overview of the health and performance of your solution.
Although one of the older players on the field, Elmah.io is a really nice product, which I recommend you check out.
In a nutshell, Elmah.io helps you monitor your solution for crashes. In doing so, it helps you get an overview of the quality of your solution. If an error occurs, you'll get a notification over a variety of communication channels (mail, Slack or such) - heck, Elmah.io even assists you in fixing your bugs by combining error diagnostic information with quick fixes and answers from Stack Overflow (I was quite amazed by this feature when I first used Elmah.io).
Once the Elmah.io bits have been dropped into your solution and configured appropriately, you get the following facilities without changing a single line of your code:
- Logging of nearly all unhandled exceptions
- A view of the entire log of recorded exceptions
- A view of the full details of any one logged exception
- An e-mail notification of each error at the time it occurs
The pricing is reasonable, ranging from $17 to $89 a month, and depends on how many logs you send to Elmah.io each month - not the amount of data, as is the case for Application Insights.
I've used Elmah.io on different solutions, including a large-sized Sitecore solution, and it really helped me get an overview of what was going on in the different logs - however, I should emphasize that Elmah.io only provides an overview of the crashes and errors, no more, no less. This is not necessarily a bad thing, as Elmah.io might be enough for you to get started getting a better overview of your solution's overall quality state. Then later on, once you have other needs as well, you may consider switching over to a fully-fledged APM tool.
Other tools worth checking out
Apart from the tools mentioned, I also recommend that you check out the following:
Important: I highly recommend that you check out each of the tools mentioned and review them closely before picking out which tool to use. Most of the tools come with a free trial period of around 14 days, which should give you enough time to get a good feel for the tool and decide if this is something you want to continue moving forward with.
Solution X, I'm choosing you! Now, how do I get you to work (nicely) together with Sitecore?
At this point, you've now settled on the tool you want to use. Naturally, the next question to ask is how do you get it working with Sitecore?
The first thing to do is to go over the contributions from the community (see further down below), and check if there is a "ready to use" implementation you can use straight off the shelf. If you can't find such an implementation, you need to implement it yourself. In this case, you have to be aware that there are a few things you need to do when you want to implement custom logging in Sitecore.
The troubles that arise due to Sitecore's log4net implementation
I think it goes without saying that Sitecore's log4net wrapper implementation is known for its limitations. Bas Lijten gives a pretty good summary in one of his blog posts, where he writes:
... Because the log4net implementation is a) outdated and b) being hosted in a Sitecore assembly, it's not possible to easily use 3rd party solutions for log4net with Sitecore. The 3rd party solutions generally use the newer implementations of the LogEventInformation class (which has been altered over time) and they can't find the log4net assembly, because it isn't there ...
In practice this means that if you try to install a log4net extension created for the tool you have chosen (like the Application Insights log4net appender), you'll quickly see that it won't work once you run your Sitecore solution.
To work around this issue, you have to implement your own log4net appender by inheriting from Sitecore's AppenderSkeleton implementation. From here, the easiest way is to grab the log4net appender you want to use, decompile it, and re-implement it using Sitecore's appender implementation - you can see examples of how this can be done in the different community contributions.
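To make that concrete, here is a rough sketch of what such an appender can look like; the namespaces come from Sitecore's bundled log4net assembly and may differ between Sitecore versions, and the forwarding call is a stand-in for whichever monitoring client you chose:

using log4net.Appender;
using log4net.spi;

namespace MySite.Logging
{
    // Hypothetical appender that forwards Sitecore log events to an external monitoring tool.
    public class ApmAppender : AppenderSkeleton
    {
        protected override void Append(LoggingEvent loggingEvent)
        {
            var level = loggingEvent.Level.ToString();
            var message = loggingEvent.RenderedMessage;

            // Stand-in for the real client call, e.g. TrackTrace(...) on an
            // Application Insights TelemetryClient or a call into Elmah.io.
            System.Diagnostics.Trace.WriteLine(level + ": " + message);
        }
    }
}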
It's expensive to send data to cloud hosted tools
By default everything in Sitecore is logged, which can be a bit of an issue with cloud-based tools, since they typically restrict the amount of data to be logged (or, said another way, you will have to pay big bucks to keep a log of everything Sitecore logs by default).
As such, you should restrict the log levels so that you only log messages with level WARN and above. To give some perspective, we went down from logging 200,000 log entries to around 2,000 by simply filtering out info logs.
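As an illustrative sketch, the restriction is typically applied where the appenders are wired up in the log4net section of the Sitecore configuration; element names and file locations vary between versions, and ApmAppender refers to the hypothetical appender sketched above:

<log4net>
  <appender name="ApmAppender" type="MySite.Logging.ApmAppender, MySite" />
  <root>
    <priority value="WARN" />                <!-- only WARN, ERROR and FATAL are forwarded -->
    <appender-ref ref="ApmAppender" />
  </root>
</log4net>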
In order for you to quickly get started, I've listed the different community contributions I've found available online:
As always, if you have additional details on the content explained in this blog post, or if you know of other contributions that should make it to the list, please drop me a note in the comment section below. | s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00559.warc.gz | CC-MAIN-2023-06 | 10,744 | 55
https://nlphighlights.allennlp.org/117_interpreting_nlp_model_predictions_with_sameer_singh/ | code | We interviewed Sameer Singh for this episode, and discussed an overview of recent work in interpreting NLP model predictions, particularly instance-level interpretations. We started out by talking about why it is important to interpret model outputs and why it is a hard problem. We then dove into the details of three kinds of interpretation techniques: attribution based methods, interpretation using influence functions, and generating explanations. Towards the end, we spent some time discussing how explanations of model behavior can be evaluated, and some limitations and potential concerns in evaluation methods. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. Some of the techniques discussed in this episode have been implemented in the AllenNLP Interpret framework (details and demo here: https://allennlp.org/interpret).
Hello, and welcome to the NLP highlights podcast, where we talk about interesting work in natural language processing. The hosts are Matt Gardner and Predeep Dasigi from the Allen Institute for Artificial Intelligence.
All right, today our guest is Sameer Singh, who is an assistant professor at the University of California, Irvine. It turns out I actually have an office down the hall from him, and I work with Sameer a lot. It's good to have you on the program with us today, Sameer.
Nice to be here. Thanks for inviting me.
Sameer has done a lot of work over many years on interpretation methods for neural net models or machine learning models in general, trying to figure out why models make the predictions that they do. And so today we thought it would be interesting to have Sameer on talking to us about why this problem is interesting and how people solve it. So I guess, Sameer, can you tell us, what do we mean when we talk about interpretations in NLP or machine learning generally?
I think generally speaking interpretations of a model can mean a lot of different things. I think people have been looking at trying to design models that provide interpretability, trying to get a global understanding of what the model is doing, and things like that. But the stuff that I've been most interested in is what, I guess, some people are calling instance level interpretations, or instance based predictions, where what you're really interested in is why did a model make a specific prediction? And that's been the focus of a lot of what we've been doing.
And why is this something that people should care about?
So, instance level predictions or interpretations in general tend to be useful for many different things. I think from when we started doing this work, we realized that these black box models or machine learning models tend to make really accurate looking predictions. But in many cases, that's just not enough as an evaluation metric. And I think the community really understands that now, and it's ingrained into more recent PhD students, but when we started doing this stuff, it wasn't quite as obvious. So initially we were thinking of it as a really good additional evaluation metric. Like if I have the predictions, yes, that's a signal for how good the model is doing. But if I also know why the model is making a prediction, that could potentially be another way to evaluate the models. So that was how we started doing it. Soon after, a different use case came up where instead of evaluation, we started thinking of it as debugging, where anytime a model made an error, we were able to go in and see why it made that error.
And that helped us understand what were potentially problems either in the training data or the model, and so on. I think a more general use case, a broader use case, which people are striving for is just, from a user centric view, to get users more confident or more informed about how the model is making its decision. So this is useful for many different things, but you can imagine in most key applications of machine learning, the user is still in the loop. And often they're looking at the output of the model and trying to make the higher level decision based on those predictions. So if they have more information about why the model is doing something, it would just lead to a much better human computer collaboration. So there's been some work on that end also, not so much in NLP, there are a few standouts, but I think from an HCI or even a user interface point of view, interpretations can be pretty useful.
I guess a canonical example of the last thing that you talked about is medical use cases. If I'm making a prediction that a doctor is going to use, then you really want to be confident that the model predicted something for the right reason.
Yes. That’s a good application of it. Yes.
And so then to summarize what you said, there are a few different ways, a few different motivations for why you might care about these instance level predictions. Like maybe there are users, say in a medical use case, like we just talked about, or maybe the model is predicting something for the wrong reason. I think you hinted at this though you didn't say it explicitly, but if a model is predicting something for the wrong reason, then even from a machine learning, academic, looking-at-toy-problems kind of perspective, the model probably won't generalize as well if it has caught onto some pattern, what you might call a spurious pattern, in the input data and is making the right prediction for the wrong reason.
Yeah. That’s a good way to phrase it. I think we use accuracy as a proxy for generalization, and we know that that’s not quite enough and explanations or interpretations of predictions can give you a little bit more insight. And yes, if there are spurious relations, accuracy might look good, but explanations, good explanations would not. And that’s one of the use cases.
Okay. So I hope we’ve convinced people that understanding why a model makes the predictions that it makes is an interesting and important problem. Why is this hard? Why can’t we just know apriori what’s going on inside the model?
Yeah. So this is a very interesting question because it has increasingly become harder and harder to do these things. But I think at the very first step, even describing or defining what an explanation is, is quite difficult, especially when you start thinking about, okay, what is the optimization problem that one is trying to solve? I think at a sort of higher level, what the user needs is quite important, but somewhat easy to define. So you can say things like, you know, what is important for the model. That's very easy to say in English, but when you start thinking about, okay, what is the equation that defines importance, then it gets a little bit tricky. So I think the biggest challenge in interpretability is to define what interpretability is in itself. And I think one of the reasons that makes it difficult is because, since we talked about these different use cases, many different use cases need different kinds of explanations. So when you come up with an explanation technique, it may be more useful for evaluation than for increasing a user's trust in the model. Sometimes these can be at odds, because for evaluation you want something that's very, very accurate for the model, but from a user centric point of view, something that's too accurate may actually show more of the problems with what the model has done than increase the user's trust. So there are all these tradeoffs in just defining the problem.
And on that point, the most accurate description of what the model is doing is just the model weights themselves and all of the internal computations. And that clearly is incomprehensible to most users. So yeah, like there's definitely this trade off.
Definitely. Yeah. I think the trade off is even more than that. There's a spectrum of even what the user knows and what the user likes, and a prediction is something that anybody can understand. Maybe you're not really good with probabilities, but let's keep that aside. But you can imagine some people just want the English sentence version of the explanation. Some people like decision trees, some people like a nice flow chart, some people like to read code. And when you have all these different, I guess, modalities of what people like to consume, it's tricky to define what the good explanation may be, and of course this might depend on the task itself. There is no reason to think that an explanation for NLI, even if it's perfect, would be the same form of explanation that would work for something like reading comprehension or machine translation and so on.
Is this a problem that’s like unique to modern, deep learning neural net kinds of methods? Or did we have this problem back in the days when everything was a linear model, are those easier to interpret or is there’s something else missing?
I think there are a couple of different reasons why interpretability has sort of taken everybody’s attention recently. One of them is definitely what you bring up. I think linear models, at least by having a proficient for each of your input features seem like they would be interpretable. And you could probably build on top of that to have approximations that would give you other sort of things that you need from a visualization or interpretability point of view. But I think the biggest sort of push for interpretability has been when things have become nonlinear, where you must have seen in your introductory neural network models, essentially what’s happening is input is getting projected through some nonlinear transformation. And then you have a linear bound being the space well in that nonlinear projection, a lot of things might be happening. And that’s what interpretability really cares about, not just about the sort of final decision boundary.
So interpreting something like that becomes difficult, especially when you talk about universal function on box simulators will, then they can get fairly complicated and the users so need to be able to understand that. The other main reason interpretability has sort of gotten a lot of focus is honestly, these systems have been getting really, really accurate and their use case in real world applications is just increasing. And we as machine learning people, some people are excited by this, but some people are a little bit like, wait, they’re using machine learning for that application. That seems a little bit dangerous. And so I think interpretability has been another way to sort of think about bringing some more sanity to applying machine learnings to more applications. And I think that’s been another push for why explainability has been the center of focus.
Yeah. That’s a really good point pushing, real quick going back to the linear models, you get the, the, I guess you hear a lot that linear models are inherently just better. Like we want to linear approximation because it’s more interpretable or we want to like distill some complex model into a linear version because then I can interpret everything. But is that really true? Because it feels like even a linear model there, if you have overlapping features in any way, then you could get correlations that are hard to interpret. What do you think about this?
Yeah, that is true. I think it sort of all depends on how many features are going into your linear model. I think I would say linear models that get too many features when you’re talking about thousands of features and looking at coefficients of this, a lot of things get quite complicated. So like you brought up the fact that yes, there may be overlapping features, feature correlations and the logistical regression or whatever, but it’s sort of spread out the weight over all of them. So you, as a interpreter needs to understand what the data supervision is like, needs to understand what the feature correlations might be in order to even understand what that explanation is. And yes, that’s one of the big problems. I think some of the other problems that I faced when, I tried to interpret linear models back in the day was mostly the features themselves.
I think not all features are interpretable and it was very common practice to take a bunch of features and take cross products and take out a bunch of features and then do all sort of combinations. And when you have features that get more and more complex themselves, it’s not clear what it means when the coefficients sort of look at one of the cross products, but not the feature itself. And scannables and interpretables overlapping stuff. Also, people sometimes define features by turning a different model and taking its output and creating a feature. And sometimes that’s what neural networks do is they have this they have this information that is not linear. So even though the model is a logical regression, if your features are not interpretable, it makes it very difficult as well.
Yeah. Okay. So we’ve talked about like, why this is interesting. You mentioned it at the end, we didn’t hit on this quite enough at the beginning, but like for any kind of like social application of machine learning, like this is huge, like there’s potential to cause real harm with our models. And so we really want to be sure using some kind of interpretation, maybe like hopefully we have methods that can figure this out for like why the model is doing what it’s doing and that it’s not doing things for the wrong reasons. And we’ve talked about why it’s hard to get this. I think now’s a good time to talk about. Approaches that people have taken to solve this problem and actually interpret these complex models.
One thing I do want to mention about the challenging aspects of this before we go to solutions and some of this will become evident when you talk about solutions is to even think about what it looks like for NLP versus for other tasks and machine learning, other domains and machine learning. And one of the reasons we’ve been focusing a lot on text is because there are some properties of NLP that really aren’t properties of language that really make it difficult to do interpretability, which is not sort of, they don’t quite translate across all domains. So some of those things, the basic thing is that we have discreet inputs as opposed to devalued inputs, but just what sort of you can think of what’s happening in computer vision, but apart from just the inputs being discreet themselves, which makes it a comment or space, the tricky thing is that not all imports are valid, right? So just because the input is discrete doesn’t mean you can take any possible combination of tokens and treat it as like a valid input. So that makes it very tricky to do interpretability research. And finally, it’s very difficult to come up with a notion of distance between inputs as well. So you can’t use Euclidean distance and do some at a very fundamental level. Some of the mathematical tools that are common across machine learning just sort of fail when you apply it to NLP making the solutions a lot more trickier.
Yeah. And when you talk about linear models, a lot of the explanation methods that we're going to look at give you some kind of weighting on the input text, and it's not at all clear that that's really what you want. But again, I think we'll probably hit more on this a little bit later in the discussion. So yeah, thanks for bringing that up, but now it seems like a good time to segue into what methods people use to approach interpretability.
Yeah, so I think there's been a lot of active research. I'm going to give sort of a high level categorization, which may not be the best one, but we'll go with that. So the first one I'm just going to call feature attribution, which is a family of methods that look at the instance that you are making a prediction on and attribute importance to the input itself, right? So either to tokens in the input, or potentially phrases and combinations and things like that, but they're mostly focusing on what parts of this input are important for my prediction. So those are the attribution based ones. Another family which is recently getting traction is called training data influence methods, where you're not quite looking at the features of the input, but instead you're trying to find instances from the training data that are most relevant for the prediction that you made.
So what was most influential, from the training data, when this model was trained, that would lead to this prediction? And finally, I think this is a bigger category, which we may not go into in detail, but I would call it explanation generation, essentially, where you have a model that's trained in some way to generate an explanation itself, which could be a feature-attribution-like one, but it doesn't have to be; it could be natural language, it could be anything. The idea is you train something, or you try to create an interpretable model. So those are the three high level categories.
Yeah. I think that’s a nice categorization. Do you want to tell us about this feature attribution method first?
Yeah, so feature attribution ones are probably the ones that everybody thinks of when they think of explanations. I think it's very easy to pose the question: what is the most important part of the input that is used by the model for the prediction? And I think the problem itself is slightly trickier to pose when you start thinking about how to do this mathematically. So one way to pose it is to say, okay, if we change the input very, very slightly, what would be the effect on the output? And that gives us one of the earliest methods of interpretability, which was just to take the gradient of the output with respect to the input and see which parts of the input have the highest gradient. Mathematically, what that means is that if that part of the input were changed slightly, the prediction would change a lot.
Minor clarification point. You say gradient of the output, but it’s a loss function that we compute gradients on. Can you be a little bit more specific?
So people have tried a bunch of different variations, including using the loss function that was used in training, but you can also look at the output probability itself and take the gradient of that. I think there have been variations where people take gradients of different things; there's a huge line of research as to what the gradient should be of. But the idea is that some function of the output is what you're taking the gradient of.
Okay. Yeah. The, thing that I’m most familiar with though, as you say, there are other options here is that you take the model’s prediction, you pretend that’s a label, and then you compute the loss that the model would have gotten with that as the label. And that’s what you mean by computing the gradient of the output, right?
Yes. That’s a good way to, I think that’s the most common interpretation of that, yes.
Okay, you were telling us about how these methods work.
Yeah. So the gradient based ones are pretty interesting. You know, if some part of the input is clearly having an effect on the prediction, they're pretty good at finding that, but this is also taking the gradient at a single input instance. And we know, from things like adversarial attacks, that the local region around the prediction may not be quite as flat as one imagined it would be. So it's very possible that the gradient is quite noisy and behaves in ways that don't make for a good explanation, is what I would say. And so there have been a couple of variations of these; I'll bring up only two. I think the easiest one to understand is SmoothGrad, where instead of taking just the gradient at the instance itself, you actually sample a little bit around the instance.
So you add some epsilon noise to the embeddings, look at the prediction, compute the gradient of that, and then average out the gradient with respect to each token and treat that as the interpretation. That tends to give smoother gradients. There has also been some really interesting work on integrated gradients, where instead of taking gradients just at the instance, or in the neighborhood around the instance, they look at accumulated gradients over a whole path through the space. So you say something like: I'm going to start with an input that's all zero embeddings, and then I'm going to slowly increase those embeddings until I get to the original instance, and in the process of going from zero to the original instance, what was the gradient for each part of the input along this path? It gives explanations some nice properties that are useful, but that's sort of one way to integrate gradients into explanation techniques.
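As an illustration of the path idea just described, here is a rough integrated-gradients sketch over token embeddings, again assuming a HuggingFace-style classifier and using the all-zeros baseline mentioned above (other baselines are possible); SmoothGrad would instead average gradients over noisy copies of the original embeddings.

```python
import torch

def integrated_gradients(model, tokenizer, text, steps=50):
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    baseline = torch.zeros_like(embeds)  # the all-zero reference discussed above
    with torch.no_grad():
        target = int(model(inputs_embeds=embeds,
                           attention_mask=enc["attention_mask"]).logits.argmax())
    total_grad = torch.zeros_like(embeds)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (embeds - baseline)).requires_grad_(True)
        logits = model(inputs_embeds=point, attention_mask=enc["attention_mask"]).logits
        logits[0, target].backward()
        total_grad += point.grad
    # Riemann approximation of the path integral, then collapse embedding dims to one score per token.
    attributions = ((embeds - baseline) * total_grad / steps).sum(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, attributions.tolist()))
```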
So just a clarification question about the process you just described. SmoothGrad and the gradient based methods that you just described, they are dependent on the distribution of inputs, right? And it sounds like the solution you get is heavily dependent on your sampling procedure as well.
That’s right. And in fact, like the way they’re defined in some sense are sort of distribution independent too. So with small grad, you just add some epsilon likes to the dopamine embeddings and sort of envision and things like that. That sort of makes more sense. And it’s not clear what if you change an embedding slightly? Is it a different word? But sometimes it might be, sometimes it’s not often, it’s not. With integrated gradient. Also there is this notion of taking a reference instance to start with. So for images, it might be something like an all blank image, like a whole black image that you tried to get in, but should it be black should be white even there it’s a bit tricky. And with a NLP we have sort of decided all single embeddings is the way to start, but it’s not clear if that’s, that’s the one, because if that has never been seen as input during training, it may not be a very meaningful thing to be looking at for the model. So yeah, there are, these sort of concerns, that definitely show up.
Yeah. I wonder if, using a mask token for current transformer models, if that makes more sense than a zero token or an all zero vector.
Yeah. That’s right, another one that might be a good one. I think an UNK token for cases where the model has support for that would be another potentially useful thing to use.
Yeah, though with an unknown word token, say you replace "the" or some closed class function word with UNK, then you totally change the grammatical quality of the sentence.
Yes. That’s true.
In a lot of cases, yeah. And this is a related point: you've talked about sampling in input space, but you actually did a little switcheroo there. You talked about embeddings, but the actual input space is this discrete language space. I think a lot of these methods were developed for vision, where you actually can change things in the actual input pixel space, because those are not quite real valued, but close enough that it makes sense. Whereas in text, it seems a lot more problematic.
Yes. So that’s actually a good point. That brings me to the second family of techniques within the attribution explanations, which are a little bit more validity for perturbation based if I can use that word. I think line, which is something we did quite a while ago was one of the first versions of this, where we literally took the input and put and using some perturbation function, which would be domain specific. And by perturbation in many, many different times, we would see what the effect on the output would be. And so it’s important to know that the perturbation is at the input level, we are dropping tokens for the most part and things like that. And then trying to see what the effect on the output would be and create a linear model of this. Right? So for each, I guess, for each of the inputs that was dropped, how often did it change the prediction?
I think there’s an earlier version of this that’s even simpler to understand which is just for prediction difference, where you would only drop one token at a time, for example, and literally look at what the difference in the output would be. And the drop token that causes the biggest change in the output is the most important one. LIME sort of generalized as not to be, to take some form of correlations into account, but in the end it creates a linear model. There’s another variation of this called SHapley values, which has been used for text, I guess increasingly more recently where it uses similar notion as LIME for interpreting it, but in some sense, it tries all possible, define over all possible perturbations input and trying to understand what the SHapley contributions are for each of the input tokens. It’s a little bit more aware of the fact that the perturbation that you’ve made exists in this space of possible perturbation some of them can be bigger changes. Some of them can be smaller changes and gives us some nice properties.
That seems very hard to define for text. Like how can we, can you even talk about the, the space of perturbations? Isn’t this exponential?
Yeah, so when you’re making perturbations and trying to compute the SHapley values, it’s taking into account what tokens appear in each perturbation. Whereas LIME was somewhat agnostic of it. It just looks at a bunch of tokens and assumes that they have a uniform contribution, whereas SHapley takes it into account how many different subsets it has appeared.
So, I guess you’re assuming then a particular kind of perturbation. And so like you can control that set. Like if you allow like arbitrary word substitutions, then like it’s the size of your vocab is like the base of your exponential.
Yeah, so all of these perturbation techniques, or most of them, assume that you're just dropping words. And if you're using more complicated perturbations, I think they would be difficult to define.
Okay. And then what are the other perturbation methods?
So there have been some more perturbation techniques that are a little bit more focused on NLP. One of the ones that we worked on was called Anchors, which also was applied to images, but I think NLP was a good application for it, but it was trying to identify what were the sufficient conditions, where conditions here are, you can think of them as tokens. So what are the sufficient tokens for the instance for the prediction to remain the same? Right? So if I give you a sentence, can I pick out a few tokens where as long as those tokens appear in the instance, and you substitute other tokens by similar tokens from your vocabulary, your prediction would remain the same, with pretty high confidence, that was one sort of technique. There was another one that came out based on similar idea called input reduction where the idea was to find the minimum subset of the input that gives the same prediction.
I think the main difference between Anchors and input reduction was that Anchors considers substitutions with other tokens, whereas input reduction is primarily focused on finding the reduced input. Like, if your input were just a few tokens, which few tokens would it be so that you get the same prediction? Yeah. So all of these attribution based techniques, including the gradient and perturbation ones, build upon variations of very similar ideas. So we implemented a bunch of them in AllenNLP Interpret, which allows you to compare all of these next to each other. And I think that's been pretty useful to understand what the differences are between them.
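To give a feel for input reduction, here is a naive greedy sketch (the original method uses gradient-based importance and beam search rather than trying every single removal; `predict` is an assumed black box that returns the predicted label and its confidence for a list of tokens).

```python
def input_reduction(predict, tokens):
    original_label, _ = predict(tokens)
    reduced = list(tokens)
    while len(reduced) > 1:
        best = None
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            label, confidence = predict(candidate)
            # Only consider removals that leave the predicted label unchanged.
            if label == original_label and (best is None or confidence > best[1]):
                best = (candidate, confidence)
        if best is None:  # every single-token removal flips the prediction, so stop
            break
        reduced = best[0]
    return reduced  # often surprisingly short, which is discussed a bit later
```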
Yeah, that was a fun little project that I was involved in. I guess you described it once as, this is what happens when you put Sameer and Matt in the same room, because you brought in all of this experience on interpretability methods. And I brought in the library and like, how do we, how do we make common APIs to make this easy to use across any model that you want. Yeah, that was, that was a fun project. There’s one thing I want to talk about on the perturbation methods still though, which is, you mentioned this earlier, when you perturb texts, you don’t necessarily get something that’s valid or grammatical. So like how, how can we even understand how, like how accurate or, valid the method is if it’s changing the text in a way that it produces ungrammatical text.
Yeah. That’s, that’s one of the key challenges. I think we are starting with a lot, but these perturbation based techniques is firstly yes. In how we even define a perturbation function that results in valid inputs. And secondly, even if you are able to come up with a perturbation technique that results in balanced sentences say you’re doing background translations or some kind of phrasing. And some of people have been doing word substitutions, as long as the word embeddings are similar to that. How do you communicate to the user, what perturbation function under which this, explanation was generated? So explanation for the same instance using background’s relation, it might differ a lot from something that uses the different perturbation function. And so all of these make it incredibly tricky. I think it ends up being an empirical question in some sense, we will get to that towards the end of the talk as to what makes for a good explanation and what doesn’t.
In practice it's possible that, yes, the inputs might be invalid, but the model's behavior on them is still useful to understand what's going on.
Yeah, I guess the challenge you're talking about is related to my question about the definition being dependent on the sampling procedure itself, right? And in a sense, it sounds like these explanations explain why the model chose to do this specific thing for this input, as opposed to all these other things that you could sample from around it. There's some discriminative nature to these explanations, correct?
Yes, that’s a good way to put it. In fact, for Anchors, that’s kind of what we did. We would give them the explanation, but we would also give them examples of instances that we generated along the way, in some sense, that show that okay, for these inputs, look, they’re so different, but since they shared the same tokens, that output is the same. And that’s one way to communicate it. I wouldn’t say we’ve managed to successfully solve this problem. It is a little bit more daunting than to be looking at all these perturbations and from a purely understanding the explanation point of view, it adds more overhead. So it’s unclear whether that’s useful enough.
Yeah. And another way of thinking about this too: the paper that introduced input reduction, for example, used it on SQuAD, the Stanford Question Answering Dataset, and SNLI, the Stanford Natural Language Inference dataset, and showed that even with very large reductions of the input, the model's predictions stayed the same. And we can say that, yeah, the input is no longer valid English or whatever language you're starting with, but at the same time, if the model still makes the same prediction, then this is highlighting something that is pathological in our model, because a person wouldn't be able to make the same prediction from that input. And so we think our models perhaps are doing complex grammatical reasoning, like they need to actually understand the grammar of English, but at some level, at least when you force them to make simple predictions, these methods seem to show that they're not actually leveraging much of the grammar of English at all. They're focusing on small things that give away the answer.
Yeah, that’s a good point. I think that with the used input one of the most interesting observations was that not just the prediction stays the same because that, in some sense, is not surprising because you’re forcing the model to make a prediction it has to make one, but the fact that the confidence actually goes up. So even when you remove things that humans would find very important for answering the question and humans would get increasingly more confused when you remove all these important tokens the model on the other hand, keeps getting more and more confident when we remove these tokens that to us seem very important. And that can really indicates the pathological nature of this stuff.
Great, yeah. Okay. I think we've covered pretty well this whole area of figuring out what parts of my input led to a prediction, that whole class of interpretation methods. The second class of methods that you brought up is about what parts of my training data led to a particular prediction. Do you want to tell us about those?
Yeah, so this is some exciting work, I think, sort of reintroduced to the machine learning community by Percy Liang's group at ICML, I would say 2017, called influence functions. And the idea there is to think about how influential each training data point was for a specific prediction. I think it's a little bit more difficult for us machine learning people to conceptualize, because it requires such a huge amount of computation just to compute how important each training point was for a specific prediction, but it makes for a really useful explanation, because you know exactly, okay, we predicted this to be a positive review because it looks so similar to this other positive review that was in the training data. I think this notion of example based explanation has been studied a lot in other machine learning tasks, not so much in NLP, but yeah, it's been incredibly exciting to see a resurgence of this.
There have been a few other approximations of this too. There was a paper at NeurIPS a few years ago on representer point selection that's also pretty useful. And increasingly, over the last couple of years, we've seen more and more application of these ideas in NLP itself. One of my students did work on graph completion, understanding why graph completion models are making certain predictions, and in those models the gradient based ones aren't quite as useful and the attribution ones don't quite make sense, but influence functions were really key to figuring out, okay, what else in this graph was responsible for the model making a certain prediction. More recently, I think ACL had a paper by Byron Wallace's group that was looking at influence functions and evaluating them against some of these attribution based techniques for a bunch of applications. So I'm excited about seeing influence based methods take off.
When you described influence functions, it sounded a whole lot to me like just K nearest neighbors. Like, can I just find the nearest neighbor of my input? And is that sufficient? Like how is, what’s different here?
I think the main difference from just using nearest neighbor on the input is to try and understand what the model thinks is the nearest neighbor, as opposed to just what your raw embeddings would give you. But also, in some sense, you want to attribute a little bit more to the training process itself, or look at the parameters inside the model and ask things like: if that training point were not in the training data, how much would my parameters actually change? And that becomes pretty key when, for example, you have a wrong input in the training data, just one instance of it. It's possible that that one instance is changing the prediction of a lot of different inputs, just because it has a single word or a single token, right? And it's very difficult to imagine nearest neighbors and things like that would catch this specific single token that's causing a bunch of predictions to change. So you can definitely imagine cases where nearest neighbor just wouldn't work.
But what if I did nearest neighbor on, say, the final layer before I do a softmax over class predictions, say for sentiment analysis, my final encoding layer before what's essentially a logistic regression on these learned features? So I take that final feature representation and I do nearest neighbors on that. Would it give me essentially the same importance weights on training data as influence functions?
I think so; the representer point selection work sort of shows that you could imagine a version that does something quite similar. But I think there are still key differences, where in the final layer you might lose a lot of information about what makes an instance important, and that information might be key for isolating what was most influential. We know this from BERT and things like that: a lot of things that happen in the initial layers, at least by our probes, don't really show up in the final layers, but might be key for actually making a decision about why to predict something. And actually, talking about BERT, it's also really interesting to see what this influence function stuff looks like with BERT and other pre-trained language models in the picture, where I think the focus so far has been to say, okay, we are going to fine tune this, and let's just look at what about the fine tuning training data was most influential. But I think an exciting research problem is to understand why BERT did something, and not think about just the fine tuning training data, but see what the influence of the pre-training data is as well.
Yeah. That seems really complicated, to go not just through the fine tuning data, but through two separate training steps with different loss functions that you have to trace influence through. That seems really hard to find. Yeah, we didn't talk very much about how exactly influence functions work. Talking about math is hard in a podcast, but can you give a high level description of what's going on when you're computing influence functions?
Yeah. So I can give a couple of sentences of intuition for how influence functions work, but this is just one of the methods; they vary in how they do this. So what influence functions do is, you have a specific prediction, call it the test prediction, in mind. The first thing you compute is what the effect of changing any of the parameters of the model would be on that prediction, right? You can think of this as just a gradient, in some sense: if I were to change parameter number 1723, how much would it affect the output? Once you have this information, you can go back and see, okay, for every training point in my training data, if I were to remove that training instance, how much would parameter 1723 change? And that's, in some sense, another sort of gradient style update. This is an approximation of what would happen if you were to retrain to convergence, but taken together this gives you a pretty good approximation. Especially, the ICML paper shows that if you actually do the oracle experiment of leaving out these training data points and retraining the model, this ends up being a really good approximation to that.
And so basically we're talking about two gradient steps here. And so you're computing a Hessian,
Over your entire training data, which if you have a lot of training data, like say BERT pre-training data, this could be a nightmare.
Actually it’s worse than that. I mean you are talking about training data times the number of parameters.
Yes. And this is all for a single instance, sometimes. So if you're trying to do this for the whole test set, right, if you want to find out what the most influential training data points were over all of my test set, or a given test set, then it becomes even slower. Yeah. There have been approximations that try to get around this, some of them specific to the model and some more general.
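For a model small enough to form the Hessian explicitly, the computation described above can be sketched roughly as follows (illustrative only; `loss_fn(theta, example)` is an assumed function returning the loss of a flat parameter vector on one example, and real implementations approximate the Hessian-inverse-vector product rather than materializing the Hessian).

```python
import torch
from torch.autograd.functional import hessian

def influence_scores(loss_fn, theta, train_set, test_point, damping=1e-3):
    theta = theta.detach().requires_grad_(True)

    def mean_train_loss(p):
        return torch.stack([loss_fn(p, z) for z in train_set]).mean()

    # Hessian of the training loss at the current parameters, damped so it is invertible.
    H = hessian(mean_train_loss, theta) + damping * torch.eye(theta.numel())
    grad_test = torch.autograd.grad(loss_fn(theta, test_point), theta)[0]
    h_inv_grad_test = torch.linalg.solve(H, grad_test)  # H^{-1} grad L(z_test)
    scores = []
    for z in train_set:
        grad_z = torch.autograd.grad(loss_fn(theta, z), theta)[0]
        # Roughly: how the test loss would change if this training point were up-weighted slightly.
        scores.append(-torch.dot(grad_z, h_inv_grad_test).item())
    return scores
```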
Okay, cool. This sounds like an interesting direction. I get the feeling from what you’ve said, that this is still pretty early in its application, especially in NLP, but a really interesting potential Avenue for a bunch of interesting work.
Yes. I would agree with that.
Good. You talked about influence functions and gradient based methods. Can you give me examples of specific problems where one of these is a better way of generating explanations?
Between influence functions and gradient based ones, it's kind of difficult to say. I think the gradient based ones are potentially really good when your instances themselves are pretty long. So if your inputs have paragraphs and things like that, an influence function will give you another paragraph and a question from the training data, and that level of information may not be as useful as just telling you, hey, this is the sentence or the phrase that led to the answer. So in those sorts of situations, I think the gradient based techniques would be more useful. For things like NLI, and tasks where it's very difficult to figure out a single word or a few words that are the most important, where you sort of want to say the whole sentence captures what's going on, in that case I imagine the influence function techniques would be a lot more useful. Again, I would point to the ACL 2020 paper that actually compares these two in a certain way to see how good they are. And that might be one of the first steps in trying to see how these tools interact.
Good. So I think we’ve covered the first two classes of interpretation methods that you brought up. So understanding what parts of my input led to a particular prediction and understanding what parts of my training data led to a particular prediction. The last thing that you talked about you called generating explanations. I think the way I might phrase this is instead of taking an existing model as it is, and trying to understand what parts of an input or training data lead to a particular prediction. This third class tries to say, let me bake in interpretability or explanation somehow into my model. So I’m changing my model architecture somehow. Do you want to tell us about this?
That’s a good way to put it? I think in some sense, it makes sense. We’ve been talking a lot about the problems with explainability techniques and like how explanation’s failed to capture one thing or the other, these methods sort of start from the focus of maybe the explanations are as important as the prediction itself and sometimes even more. And so if that’s the case, why not just design models around it. And I think there’s been a lot of work in this area. I’m just going to mention a few, but there are quite a few in this area. One of the more prominent ones recently that came up was this ENLI dataset where they took an NLI dataset and sort of paired it with human explanation. So sentences that some human code as to why a specific pair of sentences was labeled to be contradiction or entailment and so on.
And so there have been a bunch of papers that took this dataset and trained a model to try and generate this explanation. So you get an NLI system at the end that not only tells you what the label should be, but also why the model thinks that label was reached. And again, the idea is to generalize beyond just the instances that it was provided. I think this whole direction has also been called rationalizing, where the goal of the explanation is, in some sense, even higher than the prediction, so that you don't necessarily even care about what the model is doing, but you want to generate an explanation at the end. The idea of rationalizing, as opposed to explaining, is to say we want to come up with some rationale for why the model did something, and as long as it's useful, as long as users like it, that's a successful rationalization, even if it's not exactly true to what the model was doing. So those are two sorts of works in this area. One last one that we did recently, again at ACL, and there were a bunch of papers looking into this, was to start looking at discrete explanations, where your model first tries to generate an explanation and then, based on that explanation, tries to make a prediction, making the explanation a really key component of the model itself.
Yeah. Thanks for the overview. I think we’re running a little bit low on time, and this is a large area that could maybe use its own entire episode of the podcast. Cause there’s a lot that could be covered here. So I think maybe we should leave that section as it is. And go on to my final area that I want to talk about, which is how do you know if these explanation methods are actually any good?
That’s a really good question. And we don’t. And honestly, the reason I like to read explanation papers now is mostly to focus on how did they do the evaluation? How did they, what new thing did they come up with to show that these explanations might be useful or might not be? And it’s been really interesting, especially recently in NLP where people have been looking at evaluating and there, I think it is two or three, three papers at ACL, purely focused on evaluating explanations and whether they’re useful or not. So I think like we started, we talked about at the start of the podcast, there are many different use cases of these explanations and each of them bring their own set of evaluation techniques. And I can sort of talk about a few of them that are very easy to understand, but just to know that it’s a sort of ongoing field and it’s probably going to continue for a long time.
There is no standard metric for explanation. For me, I think the most useful metric is: does anybody find it useful? And so anything that involves user studies, or a recreation of a user study, that shows why these explanations are useful is a good evaluation technique. I think the easiest one to understand is when you do have gold explanations. This is most relevant when you're generating explanations, but I think it can be used to evaluate attribution techniques like LIME as well, where you ask humans to give explanations, or ask humans to judge explanations, purely on whether they reflect what they think the model should be doing or what a human itself would do, depending on how you've gathered this dataset. So what this looks like for machine translation is: what would the alignment between the words be if you were to ask a human, and then you see if the explanation technique, or the attention, is bringing up the same alignment.
For classification, people focus on what the most important words are and evaluate models this way. And I think everybody understands that this is an evaluation that's mostly focusing on whether humans agree with what the explanations are saying; it doesn't check whether what the model is actually doing matches the explanation or not. And I guess this is going to be the theme in most of the evaluations I bring up: these are all a bunch of what I call necessary properties for evaluations. Each by itself you can always attack and say, oh, this evaluation doesn't target that part of the explanation, but the hope is that if you have enough distinct necessary evaluations, you're going towards something that actually shows that your explanation technique is good.
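As a tiny illustration of this kind of agreement check (a hypothetical helper, not from any specific paper): compare the top-k tokens an explanation technique highlights against the tokens human annotators marked as important.

```python
def explanation_agreement(explained_scores, gold_tokens, k=5):
    """explained_scores: list of (token, score); gold_tokens: tokens humans marked as important."""
    top_k = {tok for tok, _ in sorted(explained_scores, key=lambda p: -abs(p[1]))[:k]}
    gold = set(gold_tokens)
    precision = len(top_k & gold) / max(len(top_k), 1)
    recall = len(top_k & gold) / max(len(gold), 1)
    return precision, recall
```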
Yeah. When you were talking about this, it made me worried. Like, it sounds very dangerous because if the explanation you’re talking about here is just highlighting words, then maybe it’s not as dangerous. But if you’re, if you started with like generating an explanation, if my model like outputs a sentence that says why it predicted something, and that thing is supposed to match what a human would say, that actually doesn’t constrain at all the model to actually be doing what it said. Like you could imagine, for instance, some, again, I’m not recommending that anyone actually build an NLP system that does this, but like some kind of like a lending decision that is like for a mortgage application, something that you would hope doesn’t use race at all. And the model might output a description that says, I did not look at these particular sensitive attributes or whatever, but internally the model just did whatever it wanted and it was like totally unfair and biased. And so like, I don’t understand how this is adequate at all. Like this, this has nothing to do with explaining the model behavior, right?
Yes, that’s right. And so, if you are interested in expanding the model behavior, then you have to start thinking of evaluations that are focusing purely on that. Right? So there are a bunch of evaluations that people have done where you don’t even think about the user in the loop or anything like that. What you try to do is figure out using some other techniques, things that are definitely not important for the model or things that are definitely used for the model. So either by controlling the training data a certain way, or looking at the test label and trying to do some reasoning, you come up with these situations where there are cases that the model definitely should not be using and cases where there are things that models should definitely be using. And then trying to see how many times they show up or don’t show up in the explanation and using them as an evaluation.
That’s one way to sort of, if you’re able to set this up, one way you can evaluate it. I think there are other variations of this where you try not to be, you don’t try to construct this artificially, but instead you start at the explanation side of things and then start removing things based on the explanation and see how much the prediction changes. So if the explanation thinks that these two tokens are most important for the model. If I remove them, their predictions could change a lot. There are evaluation techniques that are based on all these ideas, these are all sort of automated and give you some numbers. And again, there is the caveat that is something that looks good on them doesn’t necessarily mean it’s a good evaluation explanation system. So you shouldn’t be creating explanation systems that are really good at these metrics because you want to make sure the other ones that go with it as well.
I do want to bring up a few that we focused on that are a little bit more end-to-end. And this goes back to why these explanations are needed. One of the ones that we started with was evaluating models: is a model good or not. One of the evaluations we've done is to take, say, two models that are very different from each other, say their test set performance is significantly different, and then show the explanations from each of these two models for the same instance to a user and ask them to say which one they think makes more sense, or which one they think is doing the right thing. And at least the way we set this up, this one was, I think, a pretty promising way to evaluate these explanations, because it comes closest to how we expect they might get used. But of course it requires humans and things like that, which makes it complicated.
I guess, on evaluating explanations, at some level you could say: I don't need any external evaluation. I'm computing a gradient of the loss; this mathematically tells me, right, what parts of my input affected my prediction. What do you say to that?
I think that’s somewhat I can just saying we can just print out all the parameters of the model clearly that tells you what the model would be doing. So therefore that’s a good explanation. Obviously that example doesn’t make sense to us because that print out will be a hundred pages long. But the idea is that you need to be aware of the user. You need to be aware of what they’re going to be thinking about when they look at an explanation and how are they going to interpret it. And the mathematical interpretation may not be the one that ends up being useful or ends up being how the user interprets it when you give it to them.
Right. But I can do these gradient based methods. I can backprop and figure out which token, if I changed it, would actually change my loss the most. And isn't that just, by definition, the thing that was important to the model?
I think the tricky thing is that what you're computing is: if you change the token by an epsilon that limits to zero, which is the accurate definition of the gradient, then how useful that is for what you're actually trying to do is unclear. And I would say not very much.
Yeah, also, like we talked about earlier, the way I just described this, I was setting up a toy problem that I know was flawed, but you're looking at essentially a linear version of what's going on here and looking at single tokens independently. And that's not actually how any of this works, not for a person, not for a model that has any kind of contextualization or any kind of notion of grammar. You might think that, yeah, I'm just computing gradients and looking at aggregated gradients, and so this should just work, but no, that's not how a person would understand what you're looking at. And you're summing in ways that throw away a lot of information when you aggregate just at the token level.
Yes that’s that’s right. Yeah. And then that’s one of the reasons why I think the influence space direction might be useful because it sort of gets away a little bit from those kind of assumptions, but of course ends up making a bunch of different conclusions, it’s a different level of computational models.
Yeah, there’s even a worse problem, which is that you can fake the gradients. Right. Do you want to tell us about this?
Yeah. So there has been some work in computer vision, and some work that we've been doing, that I won't elaborate too much on, but yes, even the gradient at a very local level can be controlled and manipulated. This is sort of a pathological case, but it allows someone to manipulate what the gradient might look like. And in fact, this is not unique to gradients. We've done some work in collaboration with [name] from Harvard that showed that even things like LIME and Shapley values can be manipulated by an adversary. So if somebody wants to make sure that race never shows up in the explanation as the main decision making feature, but the model is still using race, you can create models that do so and are able to fool LIME and Shapley values and other explanation techniques. So that brings into question almost another evaluation of explanation techniques.
How manipulable are they? Are they robust to these kinds of classifiers? It's, again, a good discussion.
So people could actually design models in an adversarial way to fool the explanation.
That is true.
And I guess a naive question here is why would people want to do that?
That’s a good question. It all depends on why, how sort of critical these explanation techniques end up being, right? So the goal is that if these explanation techniques are really, really good, they will be deployed and available as often as predictions are. So if a bank wants to reject your loan, instead of just saying, this is why the loan is rejected, this is apart from saying that your loan was rejected, they might also want to explain why the loan was rejected and you expect them to be accurate. So you would say well we trust LIME so you should use LIME to show me what the explanation was. And the bank would very well be like, okay, fine. I’m just going to create this model that when I run LIME on gives me a nice looking explanation, but actually the model might be doing something else.
Yeah. There are governments that are considering, or maybe have already imposed, regulations that when a model makes a prediction, you need to have some kind of explanation for it. And so then the question is how that explanation gets generated. And if there's a government regulation that has to be satisfied, there's an incentive to bypass the intent of the regulation. And so, yeah, you create this problem where you need to be really careful with deploying any of these explanation methods if they are susceptible to these attacks. At the same time, that doesn't mean that they're bad or that they don't work for cases that are not adversarial. There's been some interesting work on this; there was a series of papers that are interesting: Attention is Not Explanation, and then Attention is Not Not Explanation.
The second one here, well, I guess the first one was saying, hey, look, I can spoof stuff. And the second one was saying, well, yeah, if you intentionally try to spoof stuff, it breaks, but that doesn't mean that a model that was not intentionally trying to spoof has the same problem. I have now done too much negation and can't recover. But anyway, the point of the second paper is that models that are not adversarial still have useful explanations: there are interesting correlations you can find, with attention as a simple explanation, with actual phenomena in the data, in interesting ways. So these explanation methods, no matter what they are, can still be useful, even if they can be broken in adversarial cases.
Yeah. And this is sort of true of machine learning in general: machine learning is useful, but there are always caveats, and I always try to give those caveats as well. So even though we worked on LIME, and LIME is incredibly useful, or I hope it's useful, I do also want to make sure people understand the caveats: it's not some magic wand that just gives you exactly what's inside the model. That wouldn't be correct.
Okay, this is great. This has been a long, interesting conversation, a little bit longer than we normally do. We've covered a whole lot, but Sameer, as I always do, I want to give you the opportunity, if there's anything you want to talk about that we missed, or any final thoughts, before we conclude.
I think all I want to bring up is that I'm looking for PhD students and postdocs. If you're interested in that, contact us. And of course, with Matt being down the hall, there are a bunch of interesting research topics that we're looking at, a lot of them looking at machine learning and NLP from a pretty introspective perspective: are we asking the right questions, how do we even know we're doing a good job, and things of that nature. So if any of those things interest you, you should get in touch with me.
Great. Thanks. This has been fun. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00341.warc.gz | CC-MAIN-2021-43 | 59,511 | 107 |
https://community.splunk.com:443/t5/Getting-Data-In/Forwarder-configuration-to-forward-os-data/m-p/21856

I have installed the forwarder on a Windows machine, and my perfmon data is being shown in my indexer when I perform a search by IP address.
The problem I am getting is that the data is not being shown in the *nix app, which you have answered: Windows data is not supported in the *nix app.
I have deployed another forwarder on a Solaris machine, but its data is also not being shown in the *nix app. As I understand it, the problem might be in the configuration.
What I did is just install the universal forwarder on the machine and configure the port in its outputs.conf file. The data from this machine is also shown when I perform a search by IP; however, the host is not listed under the host list in the *nix app. Do I have to make any further configurations?
Did you configure any inputs on the Solaris machine? If not, you can deploy the full Unix app to the Solaris machine and enable the inputs (i.e., copy the desired stanza headers from default/inputs.conf to local/inputs.conf and set disabled = false).
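For example, local/inputs.conf on the Solaris host might end up looking something like the following (the stanza names here are illustrative; copy the exact ones from the app's own default/inputs.conf):

```
[script://./bin/cpu.sh]
disabled = false

[script://./bin/vmstat.sh]
disabled = false
```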
https://blog.adafruit.com/2017/06/30/foosball-status-and-reservation-system-with-slack-integration-piday-raspberrypi-raspberry_pi/

In a company where I work there is a kicker table. The company occupies many floors, and for some of the employees it takes up to 3 minutes to get to the table and… to realize that the table is already occupied.
Therefore an idea arose to build a kind of simple reservation system with some status information available in real time.
The company uses the Slack communication tool, where every employee has an account. We even have a #kicker channel just for discussions about… kicker. The channel could be used as a kind of “entry point” for reservations and for being informed about the table's current status.
As usual, there were many concepts for how to approach such a system. But generally, one basic rule appeared in all of them: it has to be simple to use, without any unimportant steps to perform in order to reserve and work with the system.
The device and the service are not tied to the kicker table and can be used for any “common resource” (like a ping-pong table, console, etc.) which needs some kind of reservation solution.
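The write-up does not show code at this point, but just to illustrate the "Slack channel as entry point" idea, a minimal slash-command handler could look something like this sketch (Flask-based; the endpoint name and the in-memory table state are made up for illustration):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
table = {"occupied_by": None}  # toy in-memory state; a real service would keep this on the device/backend

@app.route("/kicker", methods=["POST"])
def kicker_command():
    # Slack slash commands POST form fields such as user_name and text.
    user = request.form.get("user_name", "someone")
    action = request.form.get("text", "").strip().lower()
    if action == "reserve" and table["occupied_by"] is None:
        table["occupied_by"] = user
        message = f"Table reserved for {user}."
    elif action == "free":
        table["occupied_by"] = None
        message = "Table is free again."
    else:
        status = table["occupied_by"] or "nobody"
        message = f"Table is currently used/reserved by: {status}."
    # response_type "in_channel" makes the reply visible to the whole #kicker channel.
    return jsonify({"response_type": "in_channel", "text": message})
```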
https://www.kidswithanedge.com/computer-coding-programming-class-descriptions-for-kids-kids-with-an-edge/

Python is a great programming language for children to learn due to its readability (simpler syntax) and a smaller learning curve than Java. It is also a very powerful language and is used extensively all over the world, including by NASA!
Python is taught at the college/community college level. It is often the pre-requisite to learning Java.
They will learn these concepts by working on small fun projects like puzzles or creating simple games.
Object Oriented Programming
Apply skills towards making games, building applications and/or solving complex problems.
Kids will learn to apply their coding skills in a fun and motivating environment. Small classes will allow for more individualized instruction.
They are intended for Middle Schoolers and High Schoolers who wish to learn in a relaxed, fun, personable and yet motivating environment. They will have plenty of opportunity to ask questions and be creative with their new skills.
This class is an introductory course to web development, intended for kids entering 6th grade and above.
It is structured in a friendly manner where children will enjoy gaining the tools to build their own web page and, eventually, their own website. Kids love to express themselves, and this is a great way to merge creativity with technology to potentially reach a worldwide audience!
This course will help your child learn the basics of HTML. It is intended for users who have little or no knowledge of HTML and who wish to acquire the basic concepts of this language.
After completing this class your child will be able to create a static content page, deepen their knowledge of HTML, and learn the first concepts of CSS.
This course is designed for beginners in the world of web development who are looking for excellent instruction in the fundamentals.
It is intended for Middle Schoolers and above (including those entering 6th grade) who wish to learn in a relaxed, fun, personable and yet motivating environment where they will have plenty of opportunity to ask questions and be creative with their new skills. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510941.58/warc/CC-MAIN-20231001205332-20231001235332-00385.warc.gz | CC-MAIN-2023-40 | 2,060 | 13 |
http://www.tnttt.com/viewtopic.php?f=22&t=954

Hi... First post, very exciting.
I see 'so-cal teardrops' has posted on this site. And there is an Australian co. that makes some fiberglass ones.
I too am trying to figure out who all the manufacturers are. Maybe we could have a thread dedicated to manufacturers, testimonials, and their contact info... It'd be a good way to consolidate all the info floating around. The question is who's got the time?
Happy to join the boards. And I look forward to owning my own TD soon.
edit: check this link out http://www.teardrops.net/mfgs | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00328-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 524 | 5 |
https://groupprops.subwiki.org/wiki/Approximate_normalizer

This is a variation of normalizer.
This article defines a term that has been used or referenced in a journal article or standard publication, but may not be generally accepted by the mathematical community as a standard term.
Definition with symbols
Let G be a group and g ∈ G be an element. Then, the approximate normalizer of g is defined as the set of all elements x ∈ G for which there exist nonzero integers m and n such that x g^m x^{-1} = g^n.
The approximate normalizer of any element is a subgroup of the whole group. It equals the whole group if the element has finite order (viz, is a torsion element). Thus, the notion makes sense to study only for elements of infinite order.
- On a certain infinite permutation group by Graham Higman, J. Algebra 131 (1990), 359-369 | s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00505.warc.gz | CC-MAIN-2021-25 | 798 | 6 |
https://codestudio.io/fullstack

Full Stack Developer
Code Studio is a software development company based in Sarasota FL. We are a small team that is very passionate about building custom software. We work on a wide range of software projects but our main focus is building and maintaining custom CRM’s for our clients. All projects are built using Laravel with light Vue.js.
We are looking for a motivated full stack developer who loves Laravel/PHP to help grow our team! Ideally we are looking for a candidate close to our office, but we are open to remote. We are a laid back crew that is very flexible. We have one goal, and that's to build quality products!
Skills We Are Looking For:
- In-depth practical experience with Laravel
- Strong knowledge of MySQL
- Web RTC
- Tailwind CSS
- Experience with GIT
- RESTful Web APIs and JSON
- Strong debugging skills
- Client support skills
- Experience in working remotely
- Laravel Forge
- Laravel Livewire
- Server management
If this seems like the right fit, we would love to hear from you! Please send an email to [email protected] with a few simple items:
- Why would you like to work with us?
- A few sample projects that you are very proud of (Github links would be awesome!).
- Your resume.
- Your ideal salary.
We hope to hear from you soon!! | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228707.44/warc/CC-MAIN-20200925182046-20200925212046-00139.warc.gz | CC-MAIN-2020-40 | 1,272 | 22 |
http://artifactsandrelics.blogspot.com/2018/01/

Anyway, I picked up a mess of interesting games recently. Too many to read, much less catalog in a single post. And that also includes a sprinkling of Steam games (you may have heard about the winter sale, no?).
(Image caption: Apparently, I haven't played all the games...)
And from that material I draw the topic of this post i.e. comparing sandbox/railroad concepts between then and now. Specifically, I'd like to compare two very different games: Shadows Over Bögenhafen, the classic Warhammer Fantasy adventure, and The Escapists 2, a recent videogame.
Weird comparison, huh? | s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00469.warc.gz | CC-MAIN-2019-43 | 566 | 4 |
https://community.oracle.com/message/8585844
FYI, This error only happens on my PC running windows 2000. It does not occur on the QA box, also running windows 2000. It does not occur with our production environment (app server 6.5 running on Solaris)
Hi! Not sure if your problem got resolved or not... But please ensure that these steps were followed before trying to access the fortune application. Also, after following the steps, please ensure that the request is sent from your browser as http://<server-machine>:<server-port>/NASApp/fortune/fortune.
Hope it helps.
1. Go to the machine which runs the web server.
2. Go to <webconnector install directory>/ias/bin
3. Take a backup of the registry.
4. Make sure that LDAP is running
5. Open the registry by doing ./kregedit
6. Navigate to the following key
7. Note that the value is specified in seconds. This can be changed if you want to. The basic rule is to keep this timeout value lower than the value you set for your firewall timeout.
8. Restart the web server for the changes to take effect.
Also, following is the setup which you were asked to follow for setting up the iAS web connector to run with IIS. Please verify the settings and ensure the same.
1. Registering the Plug-in on IIS running on Windows.
To register the Plug-in on IIS
Go to Start >Settings> Control Panel > Administrative Tools > Computer Management
The Computer Management window will open.
- In the left pane, click on the + sign to expand Services and Applications > Internet Information Services > Default Web Site.
- Right-click Cgi-bin and select Properties
- Select the Virtual Directory tab
- Under Applications Settings, click Create.
- Select the pull-down menu next to Application Protection.
- Select Low(IIS Process)
- Click OK.
- Rename the gxisapi.dll library to gx.dll and leave it in the cgi-bin directory of the IIS wwwroot (inetpub/wwwroot/cgi-bin/).
- Configure the ISAPI filter file, gx.dll, in the following Windows registry entry ( regedit ):
A string key, Filter DLLs, should be added under Parameters, with the following value:
3. Grant permission on the Windows registry. Following are the instructions that you need to follow to grant the permission:
- Open the Windows registry with the command "regedt32". This is the machine where the iAS web connector is installed.
- Select key \\local machine\\software\\iplanet\\Appserver\\6.0\\CCS0\\HTTPAPI.
- From the menu, open Security -> Permissions.
- In the dialog box, check whether a user called "Everyone" is there or not. If it is not there, add a user called "Everyone".
- By default the access permission for "Everyone" is Special Access. Change it to "Full Control". Be sure that the change of permission applies to subkeys also.
- Restart the webserver(IIS) to get the effect.
4. When you try to open http://<domain-name>:<port>/GXApp, it will throw a 403 Forbidden error. To get rid of it:
Traverse to control panel ->Administrative tools ->computer management ->IIS->DefaultWebSite->right click on iAS-Samples ->
Do the same for GXApp under the Default Web Site.
After doing the above restart the machine and restart the iPlanet services .
This is still an issue. I am running Iplanet web server 6.0 and app server 6.5 on windows 2000.
1) I can run servlets such as /NASApp/fortune/fortune. The servlet successfully forwards to the JSP page.
2) I cannot run JSP pages directly such as /NASApp/fortune/fortune.jsp. Then I get the GX error.
Looking around the registry I noticed that the System application is not configured. The files exist in the IAS_HOME/ias/APPS/System, but the install did not load these into the registry. I have uninstalled and reinstalled the app server several times and this information is not loaded.
The System files should be listed under, HTTPAPI/ServletPatternTrans, J2EEModule,
ClassDef, ClassImpl and NameTrans, but they are not.
Why was System not loaded upon installation? I did a typical install with all the suggested values. I already had an instance of the web server on the box; I just upgraded the app server by uninstalling the 6.0 version and installing the 6.5 version.
Is there any way to load this short of manually configuring all the entries in the registry?
Thanks for your help,
This was not a showstopper until recently. I installed the 6.0 version of the web server on my PC and was able to compile, install and run the application. Now that I have changed the application to take advantage of features only available in Java3 and app server 6.5, it no longer runs on version 6.0.
The web server is Iplanet6. | s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00157.warc.gz | CC-MAIN-2017-26 | 4,566 | 49 |
https://nscomputing.com.au/blog | code | Debian is the preferred choice of an operating system for all our IT Solutions for Small Business and Home users.
Debian is a free operating system (OS) for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian provides more than a pure OS: it comes with over 59000 packages, precompiled software bundled up in a nice format for easy installation on your machine. Read more about Debian ...
Debian is under continual development. The latest release is Debian 11.3. It is also (currently) known as stable or by its codename "Bullseye". Each version also corresponds to a set of named software repositories (at least one per CPU architecture).
At any given time, there is one stable release of Debian, which has the support of the Debian security team. When a new stable version is released, the security team will usually cover the previous version for a year or so, while they also cover the new/current version. Only stable is recommended for production use.
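For illustration, a typical /etc/apt/sources.list pointing at the current stable release looks something like the sketch below (the default Debian mirror is shown; your system may use a local mirror instead):

# Debian 11 "Bullseye" - main archive, security updates and point-release updates
deb http://deb.debian.org/debian          bullseye          main
deb http://deb.debian.org/debian-security bullseye-security main
deb http://deb.debian.org/debian          bullseye-updates  main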
There are also two main development repositories unstable and testing which are continually updated during the development of the next stable release. The latest packages arrive in unstable (which always has the codename "Sid"). Packages are automatically copied from unstable to testing when they meet criteria such as lack of release-critical bugs, and dependencies being satisfied by other packages in testing.
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.
Thus the Debian LTS team takes over security maintenance of the various releases once the Debian Security team stops its work. Read more about Debian Long Term Support here.
Extended Long Term Support (ELTS) is a commercial offering to further extend the lifetime of Debian releases (after the 5 years offered by the LTS project). It is not an official Debian project. Debian's infrastructure and other Debian resources are not involved. Read more about Debian Extended Long Term Support here.
Debian release timeline
Note: Only Debian versions with the current Standard, Long Term Support (LTS) or Extended Long Term Support (ELTS) are included in this timeline.
Servers work around the clock doing their job as expected. But like any machine they require some attention and maintenance to prevent disastrous server failures and data loss. An important part of server maintenance is server monitoring, performed by system administrators to ensure that the server is performing as expected and that all problems are discovered and solved before they become serious.
Server monitoring can be done using either manual techniques or automated server monitoring software tools. Even if you are responsible for only one server, you will very soon realise that you need a monitoring tool. Yes, we humans need a healthy night's sleep.
It is important for a system monitoring tool to just work - all the time, and you should be able to trust it to do so. A system monitoring tool needs to be non-intrusive and you should be able to forget about it once it's installed.
Our server monitoring tool of choice is Monit ( https://mmonit.com/monit/ ). Monit is a small Open Source utility for managing and monitoring Unix systems. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.
That's what is exciting about Monit. Monit is more than just a passive monitoring tool. Suppose in the middle of the night apache is using too many resources (e.g. if a DoS attack is in progress): Monit can stop or restart apache and send you an alert message.
- Proactive: Monit can act if an error situation should occur, e.g. if Exim is not running, Monit can start it again and send you an alert.
- Monitoring daemon processes: Monit is particularly useful for monitoring daemon processes, such as those started at system boot time from /etc/init/. For instance postfix, sshd, apache, mysql, fail2ban, etc.
- Monitoring Files, Dirs and Filesystems: Monit can monitor these items for changes, such as timestamps changes, checksum changes or size changes. This is also useful for security reasons - you can monitor the md5 or sha1 checksum of files that should not change and get an alert or perform an action if they should change.
- Monitoring Network Connections: Network tests can be performed on a protocol level; Monit has built-in tests for the main Internet protocols, such as HTTP, SMTP etc. Even if a protocol is not supported you can still test the server as you can configure Monit to send any data and test the response from the server.
- Monitoring Programs and scripts: With Monit you can test programs or scripts at certain times, much like cron, but in addition, you can test the exit value of a program and perform an action or send an alert if the exit value indicates an error. This means that you can use Monit to perform any type of check you can write a script for.
- Monitoring General System Resources: Finally, Monit can be used to monitor general system resources on localhost such as overall CPU usage, Memory and Load Average.
- Built-in a lightweight HTTP(S) interface: You can use it to browse the Monit server and check the status of all monitored services. From the web-interface you can start, stop and restart processes and disable or enable monitoring of services.
Logging and Alerts
Monit can log status and error messages to a file or via syslog. We are using a dedicated Monit log file on our Debian based systems ( /var/log/monit.log ).
If an event occurs Monit will raise an alert. By default, Monit only sends alert notifications via email. Additionally, a script can be added to send alerts using other means. In our solutions we are using a customised Monit2Telegram script to send Monit alerts to Telegram messenger using a Telegram bot.
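To make this concrete, here is a minimal control-file sketch in /etc/monit/monitrc syntax (illustrative only - the paths, addresses, credentials and thresholds are placeholders, not our production values):

set daemon 120                          # poll the monitored services every 2 minutes
set logfile /var/log/monit.log          # dedicated Monit log file
set mailserver localhost                # mail relay used for alert e-mails
set alert admin@example.com             # default alert recipient

set httpd port 2812                     # built-in lightweight web interface
    allow admin:changeme                # basic authentication (placeholder credentials)

check process apache with pidfile /var/run/apache2/apache2.pid
    start program = "/usr/sbin/service apache2 start"
    stop  program = "/usr/sbin/service apache2 stop"
    if cpu > 80% for 5 cycles then restart
    if failed port 80 protocol http then restart

check file sshd_config with path /etc/ssh/sshd_config
    if changed checksum then alert      # flag unexpected changes for security reasons

check system $HOST
    if loadavg (5min) > 4 then alert
    if memory usage > 85% then alert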
Malware (short for malicious software) is any software designed to cause damage to or spy on a computer, server, client, or computer network for the benefit of some third party. The most common types of malware are: viruses, worms, Trojan horses, ransomware, spyware, adware, rootkits and keyloggers.
Malware exploits security defects (security bugs or vulnerabilities) in the design of the operating system, in applications such as browsers, or in vulnerable versions of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java SE.
Malware authors exploit software security defects such as bugs or vulnerabilities. Some closed source operating systems feature deliberate back-doors that may be exploited by attackers.
What is Ransomware?
Ransomware is malware that encrypts files on an infected computer, then demands payment in exchange for the decryption key. In the majority of cases the infected computer is running a Microsoft Windows operating system.
Ransomware attacks are typically carried out using a Trojan that is presented as a legitimate file that the user is tricked into downloading or opening when received as an email attachment.
The tricky part of getting your files back is not just having to pay for the ransom, but getting the ransomware authors to honour their promise by decrypting the files. As of October 2013, a strain of ransomware called Cryptolocker was infecting around 150,000 computers each month. In a period of nine months, it is thought to have generated about $3 million in ransom payments.
How to protect yourself from Ransomware?
- Keep your system up-to-date - Install all security updates from trusted sources for operating system, applications and third-party device drivers.
- Develop a good cyber hygiene - Be cautious when opening e-mail attachments and links.
- Design network separation - Keep critical computers isolated from networks.
- Implement a good backup system - To be able to restore all encrypted files after a ransomware attack, you must have at least one file version backed up that is not affected by the ransomware attack.
Our Backup Server Solution offers you possibility to restore your lost files from multiple restore points in the past.
This example has 12 possible restore points of different ages, ordered from oldest (371.8 days) to the newest (less than 1 day old):
"Source code" is a computer program in its original, human readable form written in a computer programming language. To be executable this source code must be translated (compiled or interpreted) into non-human readable, computer machine code, also called object code.
The majority of end users never see the source code of programs that they run on their computers. Thus they are not able to see what these programs are doing on their computers or how their personal data is used by these programs.
Most likely you have heard or read about the terms Free Software, Proprietary Software, Open Source Software, Closed Source Software, Shareware, Freeware, etc. But what do all these terms mean? How are they different from one another, and what implications do these differences have for the security and privacy of your computers and personal data?
What is Free Software?
The creators/founders of the GNU Project - Free Software Foundation explain that:
“Free software” means software that respects users' freedom and community. Roughly, it means that the users have the freedom to run, copy, distribute, study, change and improve the software. Thus, “free software” is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer”. We sometimes call it “libre software,” borrowing the French or Spanish word for “free” as in freedom, to show we do not mean the software is gratis.
We campaign for these freedoms because everyone deserves them. With these freedoms, the users (both individually and collectively) control the program and what it does for them. When users don't control the program, we call it a “non-free” or “proprietary” program. The nonfree program controls the users, and the developer controls the program; this makes the program an instrument of unjust power.
What is Proprietary Software?
Proprietary software, also called closed-source software, is a non-free computer software for which the developer or owner retains intellectual property rights exclusively.
Only the original owner of the software is legally allowed to view and modify the source code. Users of proprietary software must unconditionally trust them that there is no malicious code running on their computers and misusing their data.
A proprietary program puts its developers or owner in a position of power over its users. This power is in itself an injustice. The initial injustice of proprietary software often leads to further injustices: Malicious functionalities.
Some examples of malicious functionalities:
- Back doors: Any feature of a program that enables someone who is not supposed to be in control of the computer to send it commands. Examples: Spying, altering users data or settings, installing, deleting or disabling other programs.
- Digital Rights Management, or “DRM”: Functionalities designed to restrict what users can or can’t do with the data on their computers.
- Proprietary Incompatibility of a program with third party software that operates on the same data types. A fairly common sort of incompatibility is the use of secret formats or protocols. This directly blocks or hinders users from switching to any other program and, in particular, from switching to free software which can liberate the device the software runs on.
- Proprietary Surveillance: Collecting user data and sharing it with third parties.
- Proprietary Tethers: Tethering a product or program means designing it to work only by communicating with a specific server. That is always an injustice since it means you can't use the program without a connection to that server. It is also a secondary injustice if you can't communicate with the server in an alternative way. In some cases, tethering is used to do specific nasty things to the users: eBooks “bought” from Microsoft's store check that their DRM is valid by connecting to the store every time their “owner” wants to read them. When Microsoft closes this store, it will brick all DRM'ed eBooks it has ever “sold” unless they become generous enough to deactivate this aspect of the DRM code.
What is Freeware?
Freeware is closed source software available free of charge. ZERO $, but you aren't allowed to know exactly what this program, running on your computer, is really doing with your data.
- Adobe PDF Reader
- Kik Messenger
- Google Chrome
What is Shareware?
Shareware is proprietary closed source software distributed free of charge to users, either with limited features or on a time limited trial basis. To use it after the time limit, you have to pay for the software.
Shareware limitation examples:
- Adware - Contains ads for generating revenue to developers
- Donationware - Offers optional payment option
- Nagware - Often begs users to pay for a licence to continue using the program
- Demoware - A feature limited demonstration version of the software
What is Open Source Software?
People very often confuse open source with free software. They are close, but not interchangeable. All Free Software is Open Source Software, but not all Open Source Software is Free Software.
Open source means you can see the source code, but without the free software aspect there can be restrictions on how you use the source code. Open source developers may let you look at the source code, but you may not be allowed to actually run binaries that are compiled from it: Look but don't touch, and don't run. You may also be allowed to build binaries, but only with limited features. Finally, and most important in practice, many products containing computers check signatures on their executable programs to block users from installing different executables; only privileged companies can make executables that can run on the device, or can access its full capabilities.
Many Android products contain non-free executables of Linux, even though its source code is under GNU GPL version 2.
The criteria for open source without the free software aspect are concerned solely with the licensing of the source code. Thus you can end up with non-free executables that were compiled from free and open source code.
Only Free Software gives users (not just the developer) ultimate control over the software and, subsequently, over their devices and data.
Here you'll find the most recent and useful information about IT solutions and services for personal and small business users.
Covering a range of IT topics with a focus on security and privacy of IT solutions for small business, we aim to help you to select the most appropriate custom IT solutions to satisfy your small business needs. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473370.18/warc/CC-MAIN-20240221034447-20240221064447-00477.warc.gz | CC-MAIN-2024-10 | 15,028 | 76 |
https://news.ycombinator.com/item?id=21403091 | code | | Hello Community!
My name is Charles-Eugene Loubao, I am a software developer who recently turned into an Indie Maker and I am sharing my new product with you today.
Micro CRM is a Customer Relationship Management web app built to be easy to use and intuitive. Most CRMs can be complicated to use and come with an expensive price tag. Micro CRM is built to fill that need for a much simpler and cheaper contact management platform that offers compelling features without being overwhelming.
What can I do with it ?
- Keep all your contacts in one place
- Timestamped notes can be used to keep track of events associated with your contacts, or as a call log.
By getting the Premium Plan you can also:
- Import your existing contacts from Excel CSV files
- Organize easily with tags
- Create email reminders to help you remember follow-ups
What's next ?
Micro CRM is in its early days and I am planning on adding the following features:
- Search
- Sorting and Filtering
- Custom fields
- Contact attachments (files, links, images, etc)
- Team Collaboration
- Possible Integrations (email, calendar, Slack, etc)
How much does it cost ?
Micro CRM is free to use for manual entry and simple contact management. The premium plan is $5/month and for a limited time I am offering a free 30-day trial with no credit card required when you create your account.
Head to https://microcrm.cc and create your free account today! | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548537.21/warc/CC-MAIN-20191213020114-20191213044114-00296.warc.gz | CC-MAIN-2019-51 | 1,419 | 13 |
https://github.com/mj1856/ravendb.contrib | code | RavenDB Community Contributions
These projects are for community contributions to RavenDB, such as extension methods, base classes and helper methods. Feel free to fork this project, add your own code, and submit a pull request.
This project is maintained and supported by the RavenDB community, not by Hibernating Rhinos. For questions, please visit the RavenDB Google Group. | s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823482.25/warc/CC-MAIN-20171019231858-20171020011858-00572.warc.gz | CC-MAIN-2017-43 | 376 | 3 |
https://digcouponcodes.com/is-offers-com-legit-reddit-free-music/ | code | Updated: 0 sec ago
Apr 01, 2021 · So, that is the list of sites you can join if you want to get paid to listen to music. All the sites listed above are legit and free to join. So, all you really need to invest is a bit of your time and some effort. And as mentioned earlier, if you want to maximize your earnings, you can join around 5 to 7 sites to gain access to more opportunities. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585997.77/warc/CC-MAIN-20211024111905-20211024141905-00125.warc.gz | CC-MAIN-2021-43 | 385 | 2 |
https://www.wizbangblog.com/2007/04/14/shes-not-the-messiah-shes-a-ve/ | code | Every now and then, Scott Adams performs a little sociological experiment on his blog readers. He’ll post a position, a question, a dilemma, designed to elict certain responses. It’s always done so cunningly that almost no one smells the setup, and he usually ends up proving just what he’d argued for a little while ago — and had been roundly denounced for. Here’s a perfect example.
I’m nowhere near that clever. If I was, then my recent posting on Mary Cheney and other children of politicians would have been a perfect example.
In that, I argued that, pending exceptional circumstances, the children of politicians should be off-limits in discussions about their parents’ political views. There was quite a bit of disagreement.
But the one odd thing I noticed, and should have realized before writing it (so I could pull a Scott Adams-style punking), was that there is an amazing phenomenon behind those who want to make hay out of Mary Cheney.
They are arguing that Mary Cheney is a legitimate, valid political figure and target — and wish to use that right to praise her.
It’s almost Pythonesque. Remember “Life Of Brian?” There were hordes of people convinced that Brian was the Messiah, despite his protests, and demanded the right to worship him — no matter how much he did to discourage them. (“How shall we fuck off, O Lord?”)
What keeps this from being perfectly Pythonesque is that their admiration for Mary Cheney is not sincere. She’s not a role model to them, she’s a cudgel. They’re objectifying this woman, refusing to recognize her as an individual and instead simply as a tool — the only thing that matters is that she’s gay and involved with a partner and they’re expecting a child. Not all that different from a lot of other women around the country — she just happened to poorly choose her father.
And these days, that’s all that’s necessary. | s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00597.warc.gz | CC-MAIN-2022-27 | 1,911 | 8 |
https://ravelys.github.io/possession-student-boy-using-his-power-to-possess-a-girl-and-become-he.html | code | Hi! I’m back with my new STORY! - So please, Enjoy the stories! Thanks
My grammar is bad :D. I hope you all can understand the plot. Cheers
► SUBSCRIBE ►►► https://goo.gl/nJu5S2
► SUPPORT ME ►►► https://www.patreon.com/pstgclip
► MORE VIDEO ►►► https://goo.gl/jR7wSS
I also have other fiction series and short stories.
Please check it out in my playlist if you are interested.
Please like and subscribe if you enjoyed it ♡.
Feel free to comment!
(Feel free to subscribe if you are new!)
Music from Youtube Library :)
Ghost Story by Kevin MacLeod is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/) | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00750.warc.gz | CC-MAIN-2022-40 | 677 | 12 |
https://netty.io/4.1/api/io/netty/handler/codec/compression/Lz4XXHash32.html | code | implementation for use with
Lz4FrameEncoder and Lz4FrameDecoder.
StreamingXXHash32.asChecksum() has a particularly nasty implementation that allocates a single-element byte array for every invocation of update(int).
In addition to that, it doesn't implement an overload that accepts a ByteBuffer as an argument.
Combined, this means that we can't use ReflectiveByteBufChecksum at all, and can't use SlowByteBufChecksum because of its atrocious performance with direct byte buffers (allocating an array and making a JNI call for every byte checksummed might be considered sub-optimal by some).
Block version of xxHash32 (XXHash32), however, does provide a hash(ByteBuffer, seed) method that is efficient and does exactly what we need, with a caveat that we can only invoke it once before having to reset. This, however, is fine for our purposes, given the way we use it in Lz4FrameEncoder: one update(), followed by one getValue(), followed by reset().
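As a rough illustration of that block-style usage (this is not Netty's internal code - it calls the lz4-java library directly, and the seed and payload are made up for the example):

import java.nio.charset.StandardCharsets;

import net.jpountz.xxhash.XXHash32;
import net.jpountz.xxhash.XXHashFactory;

public class BlockXxHash32Example {
    public static void main(String[] args) {
        // Block-style hasher: one call per block, no per-byte update() overhead.
        XXHash32 blockHash = XXHashFactory.fastestInstance().hash32();

        byte[] block = "one frame's worth of payload".getBytes(StandardCharsets.UTF_8);
        int seed = 0x12345678; // arbitrary seed, chosen only for this example

        // Hash the whole block in a single call; the block hasher itself keeps
        // no streaming state, so each call is independent.
        int checksum = blockHash.hash(block, 0, block.length, seed);
        System.out.printf("xxHash32 = 0x%08x%n", checksum);
    }
}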
https://forums.sqlteam.com/t/multiple-criteria-happens/9302 | code | Ok, I guess what I was trying to do will not work. I assumed if I kept the question easy, I could get an answer and then edit it to fit what I actually need but the way the answer is built, I can't figure it out on my own.
What I actually need is this.
Count the Doc_No that have partnumber '10-38' as line_type 01 or 15 and also has a line_type 9 on the same Doc_No but the partnumber in Line_type 9 cannot be partnumber 'CH92' or 'CH93'
SELECT COUNT(Doc_NO) AS Count
WHERE (PartNumber = '10-38' AND Line_type in ('01','15'))
and not PartNumber in ('CH92','CH93') AND LINE_TYPE = '09'
Thank you for the help and I'm sorry I was not clear in my first question. | s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743011.30/warc/CC-MAIN-20181116111645-20181116133645-00195.warc.gz | CC-MAIN-2018-47 | 660 | 7 |
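For reference, one way to express that requirement is with a correlated EXISTS. The sketch below assumes a hypothetical table SalesLines(Doc_No, PartNumber, Line_Type) and reads the condition as "the document has at least one line_type 09 row whose part number is not CH92 or CH93" - adjust the names (and that reading, if it differs) to the real schema and intent:

SELECT COUNT(DISTINCT t.Doc_No) AS DocCount
FROM SalesLines t
WHERE t.PartNumber = '10-38'
  AND t.Line_Type IN ('01', '15')
  AND EXISTS (SELECT 1
              FROM SalesLines x
              WHERE x.Doc_No = t.Doc_No
                AND x.Line_Type = '09'
                AND x.PartNumber NOT IN ('CH92', 'CH93'));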
https://devoxxuk19.confinabox.com/speaker/julia_malasok | code | Devoxx UK 2019
from Wednesday 8 May to Friday 10 May 2019.
Julia serves as a software engineer for UK-based fintech startup Monese in Tallinn, Estonia. After quitting her boring job she learned how to code independently, experiencing the profound joy delivered from crazy monoliths, live database transformations, alpha-stage framework utilisations in production and forever much more. Driven by a burning question of “How money works”, Julia has been swimming in the fintech pool for the past several years, presently contributing to the Monese success story as a part of the company’s core banking team.
See also https://monese.com/
Do you have 100 microservices on your backend all exposing an API? And Android and iOS apps trying to communicate with them?
You would probably create a funnel-like API which forwards app calls, collects and returns the data. Then the app guys ask you to return one more field to the endpoint or create a new endpoint. A simple GET request (DB-wise). You plan, you build, you release. App engineers are waiting. Product guys are waiting. Users are waiting!
Then you wake up one day and see that you have an age-old endpoint which returns half of the database. You have to put an end to this mess! What options are there?
In this session we will share the result of our GraphQL and Atlassian Braid exploration journey. We are building a new distributed services architecture which will combine GraphQL backends into one schema. The result should be a fast, safe and easily maintainable cluster of microservices. | s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526446.61/warc/CC-MAIN-20190720045157-20190720071157-00141.warc.gz | CC-MAIN-2019-30 | 1,551 | 8 |
http://www.eventid.net/display-eventid-11197-source-Dnsapi-eventno-1528-phase-1.htm | code | The "specific error code" mentioned in the event description is in most cases 0x2751 (or 10065 in decimal) and this means that the computer was unable to determine a route to the IP address that it tried to reach. For example if there is no default gateway and the computer is asked to connect to a host that it's not located on the same network segment a similar error code will be generated by the TCP/IP stack. So, to return to this message, what this means is that the computer tried to update its records in DNS but for some reason was unable to determine how to get there (i.e. if the network card is disabled surely there is "no route" to any host).
Check the permissions of the computer in the Forward Lookup Zone on the DNS. Make sure DOMAIN\COMPUTERNAME$ has the write permission. A symptom is the presence of an account that is no longer authenticated by the server.
Look at your zone files, forward and reverse lookup and make sure that there is only one entry for reverse IP and only one forward name lookup. Also, check the security of these entries to make sure the user can update them.
As per Microsoft: "The update request for an A record could not be completed. Possible causes include:
- There is no network connectivity.
- The zone file is not configured to accept updates.
- The zone could not be found.
- The server is unavailable". See MSW2KDB
for more details on this event.
As per Microsoft: "Event ID 11197 is generated every time that the network protocol stack is rebuilt". If this problem appears when you install Microsoft Virtual Server 2005, Microsoft states that this problem appears because the installation of Virtual Server 2005 networking causes the network protocol stack to be rebuilt in Windows. See ME843237
for details on this issue.
This error is generally reported when a network card has been disabled. | s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146809.98/warc/CC-MAIN-20200227191150-20200227221150-00191.warc.gz | CC-MAIN-2020-10 | 1,848 | 12 |
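If the adapter is enabled and the zone permissions look right, a quick way to re-trigger and verify the dynamic update from the affected machine is shown below (standard Windows command-line tools; the host name and IP address are placeholders):

rem Clear the local resolver cache, then re-register this host's A and PTR records
ipconfig /flushdns
ipconfig /registerdns

rem Verify that forward and reverse lookups now resolve
nslookup myhost.example.local
nslookup 192.0.2.25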
https://www.thevaluechain.eu/en/blogs/migrating-dangerous-goods-data-in-sap-s-4-hana | code | Contact the expert
When switching from one system to another, it’s important to make sure that all data is transferred correctly. We ran into some interesting challenges when migrating our own dangerous goods data.
Unfortunately, because these processes are relatively new in S/4HANA, only a small amount of information about them could be found online. This inspired us to write this blog post to contribute to the community and help others who are struggling with the same difficulties. Think of it as a step-by-step guide for your own data migration.
Below we explain each step in the process of migrating dangerous goods data.
The first step is to create a migration project.
We use the Migrate Your Data app in the Migration Cockpit.
Figure 1: Migrate Your Data – Migration Cockpit
Once you’ve created your migration project and its various migration objects, you can start migrating your dangerous goods data.
Make sure you upload this data in the following order:
- PC – Product compliance info
- DG – Assessment for unpackaged products (content-based)
- DG – Assessment for packaged products
Select your template from the drop-down list for the migration object you want to upload.
Select Download Template -> Download CS Files.
Figure 2: Download the data migration template
Make sure the material is compliance-relevant before you start uploading. You can change this setting in the material master data. Open the Manage Product – Master Data app.
Figure 3: Manage Product – Master Data
Select your material and click Edit. You can now make changes in the Product Compliance tab.
Figure 4: Decide if your product is compliance-relevant or not
Product compliance Info
The first two tabs (Introduction and Field List) contain useful information about this process, so be sure to read through it carefully. Because this information is so thorough, I won’t discuss it here.
The most important tab in this file is Chemical Compliance View, which contains the following columns.
Figure 5: Excel file Product Compliance Info
Start by entering the Internal Number. This is a unique code containing letters (A-Z) and up to 80 characters. This number is for internal use only and will not be shared with clients or external parties.
You can enter the number in the following path:
Figure 6: Path to customising internal number range
Figure 7: Define internal number range
The second column (Responsible Unit) shows the group of employees responsible for the specific packaged or unpackaged product. You can configure this information in the following path:
Figure 8: Path to customising responsible units for dangerous goods
Figure 9: Define responsible units for dangerous goods
The third column (Responsible Unit for Dangerous Goods) can have the same value as the previous column.
The next tabs (until the Product Master Assignment) can be populated with information about the product itself.
In the tab Product Master Assignment you can link the internal number you just entered to the product number. This product number is a unique code derived from the product master that can be shared with clients.
In the next column you can choose to enter an X or to leave it blank. If you enter an X, it will retrieve the name from the product master linked to the product number.
The next three tabs contain optional information that does not have to be entered for a successful upload process. In the last tab you can assign a ‘goal’ to your product. Enter the unique internal number (A-Z) and link it to a goal. This can be a goal defined in the system.
Once you have double-checked all entered data, save it as an XML file. You can also save it as an XLS file if you want to make changes to it later.
Now you can start the upload process.
Go to the migration object and click Upload File.
Figure 10: Step 1 in the data upload
Figure 11: Upload your XML file
Select the file you just created.
If you selected the right template, a successful transfer notification will appear.
Figure 12: Success notification
Click on Prepare. The following message will appear:
Figure 14: Preparing staging tables
Click on Prepare Staging Tables.
The following message will appear:
Figure 15: Information message during the preparing of the staging tables
Click on the Monitoring tab to see the status.
Figure 16: How to open the monitoring view
If any errors appear, they can be viewed in detail here.
These errors must first be resolved before the process can continue.
Figure 17: Success notification
If the staging tables were successfully uploaded, the system will suggest the next step:
Figure 18: Step 3 in the data upload
Click here to confirm any statuses. This is not always necessary.
Once you’ve completed this process, the system will suggest the next step:
Figure 19: Step 4 in the data upload
Select Start Simulation.
Figure 20: Starting the simulation
A message will appear when the process has started.
Return to the Monitoring tab to view the status and check for any errors.
Figure 21: Simulation notification
If there are no errors, the following notification will appear:
Figure 22: Success notification
In the Options column, click Show Messages for a detailed overview of the upload process.
Figure 23: Additional option to see all the details
Open the My Unpackaged Dangerous Goods app
and click To Be Classified for an overview of your materials.
Figure 24: App to see result after uploading the product compliance file
Figure 25: Result after uploading the product compliance file
Once you complete this step, you can continue uploading the next file: Assessment for Unpackaged Products (Content-Based).
Figure 26: Excel file Assessment for Unpackaged Products (Content-Based)
Before the release of SAP S/4HANA 2020, you had to upload the text-based regulation and enter the dangerous goods texts in different languages. The current content-based regulation contains all information for the UN numbers.
Like the previous file, this one also includes an introduction tab. Be sure to read this information carefully.
In the first tab (Product), enter the internal number of the unpackaged product. This is a unique code containing letters (A-Z) and up to 80 characters. This number is for internal use only and will not be shared with clients or external parties. It is the same number as the one in the first tab of the Product Compliance file (Chemical Compliance View). The same number must be entered in the Purpose Management tab.
In the second tab (Basic Classification), enter the same internal number as in the previous tab.
In the second column (Compliance Requirement Version ID), enter an R value for the transport type. For example: R00925 is linked to ADR and R00926 to IMDG.
This information can be found in the Manage Compliance Requirements app. The field containing these values is called the ID of the Business Configuration Object. This is hidden by default and can be opened in the Settings tab (see Figure 28).
Figure 27: Manage Compliance Requirements – Dangerous Goods app
Figure 28: How to add the ID of the business configuration object
The third column must be populated according to the process status of the classification. This is a required field with two possible options: RE for ‘released’ and IP for ‘in progress’. In the Transport Permissions column, 01 stands for ‘allowed’ and 02 stands for ‘not allowed’.
The next column concerns whether a product can be classified as a dangerous good. If you enter 01, it will not be classified as a dangerous good. If you enter 02, it will be classified as a dangerous good. The ID is a four-digit number used to identify dangerous substances during transport.
In the next column, enter the prefix that precedes the identification of the dangerous good. This is usually UN, NA or ID.
In the Packaging Group field, enter a Roman numeral from the Product Safety Data Sheet. Finally, enter the variant from the Dangerous Goods List. This is only required if a UN number is linked to multiple variants in the regulation (e.g. if a product is transported by road and water, it is subject to two different regulations). You can find this information in the Manage Compliance Requirements – Dangerous Goods app.
Figure 29: Manage Compliance Requirements – Dangerous Goods app
Go to the tab Dangerous Goods List. This setting is hidden by default but can be made visible in settings.
Figure 30: How to add the variant
Repeat the same migration steps you took for the Product Compliance Info file. Use the migration object DG – Assessment for Unpackaged Product (Content-Based).
The technical product name can also be added. In our example we chose not to do this. This information can be found in the Product Safety Data Sheet.
You can find your material in the My Unpackaged Dangerous Goods app.
Figure 31: My Unpackaged Dangerous Goods app
Go to the app Analyse Unpackaged Dangerous Goods.
Here you will find your material and the associated data you uploaded.
Figure 32: Analyse Unpackaged Dangerous Goods app
Figure 33: Details unpackaged dangerous goods
When this data has been successfully migrated, you can move on to the final step: migrating the packaged product. Just like the previous files, the first tab contains an introduction with a lot of useful background information.
The Product tab contains the following fields:
Figure 34: Excel file assessment for packaged product
Enter the internal number here. In the next column you can describe the outer packaging. The column after that displays the total quantity of the outer packaging.
Enter the units of measure in the next table. You can also enter a description of the inner packaging, if applicable. Sometimes, dangerous goods are transported in single-layer packaging, depending on the quantity and the mode of transport.
In this case, there is no need to mention the outer packaging. This information can be found on the Product Safety Data Sheet.
As with the outer packaging, enter the quality and units here, as well as the number of units in a single outer packaging.
Go to the Regulations tab.
Figure 35: Excel file: Assessment for Packaged Product
In the first column, enter the internal number to create the link. The other columns can be filled in the same way using the information about the unpackaged file.
Go to the Modes of Transport tab.
Figure 36: Modes of Transport tab in the file Assessment for Packaged Product
Create a link to the internal number. Enter the ID number, depending on the transport mode. See the previous sections above to find out where this information can be located.
Complete the other columns as described.
Once you have entered all information, you can start the upload process. Follow the same steps as the previous files. Use the migration object DG – Assessment for Packaged Product.
For results, go to the app My Packaged Dangerous Goods – to Be Classified.
Figure 37: My Packaged Dangerous Goods – To be Classified
Here you can follow your progress. Once everything has been uploaded correctly, it will be removed from the My Packaged Dangerous Goods – To Be Classified list.
Figure 38: Progress during upload packaged products
This blog post was written to share information about uploading dangerous goods. I hope it helps! If you have any questions, don’t hesitate to contact us.
Feel free to share your thoughts and feedback in a comment and of course, follow my profile for more content! 🙂 | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00063.warc.gz | CC-MAIN-2023-50 | 11,450 | 117 |
https://menoforder.com/blogs/blog/coding-languages-for-crypto | code | Are you new to crypto and want to dive into the technical side? Or maybe you want to learn how to develop your own blockchain technology. Here are some coding languages for crypto you can learn to educate yourself.
C# is similar to Java and C++. Initially created as a Microsoft language, now it's a popular language for blockchain technology.
For example, Stratis is a "Blockchain-as-a-Service" provider that helps businesses create apps on Blockchain platforms.
C# developers can build apps that run across all devices and operating systems.
Also, C# is an OOP programming language, so it's built for speed and efficiency.
Fun fact: The original developers wrote Bitcoin in C++.
C++ is useful for smart contract development on the EOS blockchain.
A good reason to learn C++ is its large use case.
Video game developers love C++. It's the backbone for most high-end games. Safari, Chrome, Firefox, etc. use C++ because of its speed and efficiency.
Core banking systems use C++ as the backend coding language. Banking applications need to process millions of transactions daily and require high concurrency and low latency support.
And C++ is the core language for a lot of machine learning libraries in the backend because of its speed.
Python is a great place to start if you've never learned any coding languages and want to learn more about blockchain development.
In the crypto space, Python is used heavily for testing and development.
It also has a huge community to help with any questions or projects.
There are tons of Python libraries, plugins, and other resources to help you start.
If you're interested in Ethereum or any Ethereum-based blockchains, check out Solidity.
Solidity is one of the fastest-growing coding languages for crypto. It was created for writing smart contracts that run on ETH.
You might recognize some of the syntax. It's statically typed with curly braces.
If you're a complete newbie, the Solidity community has a vast amount of resources and projects to help you start.
There are tons of documentation available on GitHub, StackExchange, and the Solidity Gitter Chat group.
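To give a feel for that curly-brace, statically typed syntax, here is a minimal illustrative contract (a toy example, not production code):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// A tiny contract: stores a counter on-chain and lets anyone increment it.
contract Counter {
    uint256 public count; // statically typed state variable

    function increment() public {
        count += 1;
    }
}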
Go or Golang is a coding language developed by Google. It's fast and lean to allow multitasking without killing resources.
It's a beginner-friendly coding language if you're new to programming.
Go is a great programming language for building fast and efficient Blockchain systems.
It is also the language behind Hyperledger Fabric, which is a foundation for developing applications for Blockchain.
Geth, or “Go-Ethereum,” is an Ethereum client written in Go.
Whichever option you choose, it’s crucial to stay consistent with it. If it’s a website, a video, or an app on your phone, commit to daily lessons.
Learning code is like learning a foreign language; practice it every day.
Also, stick with one learning tool if you can.
You may want to jump into the next coding language when you finish a course. Instead, build projects based on what you just learned to reinforce your knowledge. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474569.64/warc/CC-MAIN-20240224212113-20240225002113-00874.warc.gz | CC-MAIN-2024-10 | 3,018 | 29 |
https://www.gconsult.us/dvdcd/your-dvdcd-is-not-working/ | code | I was on a Fort Wayne computer service call last month and one of the things they wanted to do was play a CD on their computer, but they were having problems. I tried to eject the tray of the CD/DVD player and it slowly opened. The CD was placed in the tray and closed. The audio CD wouldn’t play and the CD LED wouldn’t light up. I opened the Windows 7 Device Manager and didn’t even see a CD/DVD drive listed which isn’t good. I ran the “Scan for hardware changes” part of Device Manager and it didn’t find the CD/DVD drive. It looked like it had died but to verify, I tried one of my tricks to see if Windows was the problem or it it was hardware. The computer was booted into the BIOS setup and it didn’t even see the CD/DVD drive so I knew the CD/DVD drive had failed. One more thing was tried: The computer cover was removed and I disconnected the drive and reattached it to a different power cable and data cable and data port. It still didn’t respond so the owner was informed and they said to leave it alone for now. These CD/DVD drives don’t last forever and do fail eventually after heavy use.
https://docs.joomla.org/Help310:Menus_Menu_Item_Menu_Item_Alias/fr | code | Menus Menu Item Menu Item Alias/fr
From Joomla! Documentation
Used to create a link from one Menu Item to another Menu Item. This link can be to another Menu's Menu Item or to one within the same Menu. See Quick Tips for use.
How To Access
To create a new Alias Menu Item:
- Select Menus → [name of the menu] from the drop-down menu on the back-end of your Joomla! installation (for example, Menus → Main Menu).
- Click the New Toolbar button to create a new menu item.
- Click the Menu Item Type Select button and then click the Menu Item Alias link under System Links.
To edit an existing Menu Item Alias, click its Title in Menu Manager: Menu Items.
These are the required settings:
- Menu Title: The title that will display for this menu item.
- Menu Item Type: The Menu Item Type selected when this menu item was created. This can be one of the core menu item types or a menu item type provided by an installed extension.
- Menu Item. Select the Menu Item that the Alias Menu Item should point to. The drop-down list is sectioned by Menu Name, with a list of Menu Item names under each Menu Name.
- Menu. Shows which menu the link will appear in.
Leave the Alias blank if the Menu Item Alias has the same Parent (in the same Menu Name).
- Alias. The internal name of the item, also used in the URL when SEF is activated. Normally, you can leave this blank and Joomla will fill in a default value. The default value is the Title or Name in lower case and with dashes instead of spaces. You may enter the Alias manually. The Alias should consist of lowercase letters and hyphens (-). No blank spaces or underscores are allowed. Non-Latin characters can be allowed in the alias if you set the Unicode Aliases option to 'Yes' in Global Configuration. If this option is set to 'No' and the title includes non-Latin characters, the Alias will default to the current date and time (for example "2021-09-22-12-04-10").
- Use Redirection. If set to Yes then visitors will be redirected to the linked menu item.
See Menu Item Manager: Edit/New Menu Item for help on fields common to all Menu Item types which includes:
Module Assignments Tab
- Leave the alias field empty if the menu item alias and the menu item linked to by the alias have the same parent.
- A Main Menu Item Alias could link to an Article Menu's 'Some Menu Type' Menu Item. By using Module Assignments, a possible use would be to replace the Main Menu with the Article Menu when the Alias Menu Item is clicked. The Main Menu can then be restored by clicking another Alias Menu Item that points back to a Menu Item in the Main Menu.
At the top left you will see the toolbar:
The functions are:
- Save. Saves the menu item and stays in the current screen.
- Save & Close. Saves the menu item and closes the current screen.
- Save & New. Saves the menu item and keeps the editing screen open and ready to create another menu item.
- Cancel. Closes the current screen and returns to the previous screen without saving any modifications you may have made.
- Help. Opens this help screen. | s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585768.3/warc/CC-MAIN-20211023193319-20211023223319-00293.warc.gz | CC-MAIN-2021-43 | 3,027 | 28 |
https://meta.askubuntu.com/questions/1394/what-is-the-private-beta | code | There is a silver badge Beta, it says Actively participated in the private beta.
What is the private beta?
Ask Ubuntu once started off as a simple proposal on Area 51, during which people helped to shape and define the parameters the site would operate under. Once these were defined, the proposal went through a commitment phase to see how much traction it would gain and to test how many people were interested in the idea. After that, Stack Exchange launched https://askubuntu.com/ - a private beta to see if the site could survive a public beta launch. As you can imagine, it then launched into public beta, where anyone could join, and shortly after, the public beta became the site you see today: fully launched as Ask Ubuntu.
You can check out each part of the phase, from definition to launch on the Area 51 profile. | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00787.warc.gz | CC-MAIN-2024-18 | 815 | 4 |
http://wearables.sys-con.com/?q=node/2535087 | code | |By Business Wire||
February 11, 2013 03:29 PM EST
Go Daddy, the Web’s top platform for small businesses, has acquired M.dot Inc., the leading mobile app for small business website creation and management. The addition of M.dot to Go Daddy’s extensive product offerings gives Go Daddy’s 11 million customers an easy way to create and manage compelling mobile websites right from their smartphones.
“We’re pleased to welcome M.dot to our growing Go Daddy family,” said Go Daddy CEO Blake Irving, who made the formal announcement at today’s ribbon-cutting ceremony for the new Go Daddy office located in Sunnyvale, Calif.
“M.dot’s global vision of a mobile future for small businesses fits beautifully with what our customers need right now,” Irving said. “Go Daddy wants to help businesses connect with their customers wherever they are in the world, and that means providing killer mobile technology. It also means helping our customers manage their business from anywhere, anytime with the simplicity of their smartphone or tablet. With M.dot, small businesses can compete on a big-time level without spending big-time money. We’re completely stoked to have M.dot on board.”
Over the last three years, mobile Internet traffic has grown more than 10 times from one percent to 13 percent of total Web usage1. In that same time period, smartphone and tablet shipments surpassed PCs. Go Daddy customers are using their smartphones to manage their online presence more and more every day as a means to find and acquire customers. The M.dot acquisition provides Go Daddy’s customers with an innovative platform to capitalize on these trends.
M.dot was founded by Dominik Balogh and Pavel Serbajlo in June 2012. They have the backing of leading Silicon Valley investors including Floodgate, SV Angel and Archimedes Labs, and have been featured by Apple in the iTunes App Store on multiple occasions. The company will operate from Go Daddy’s Silicon Valley office, which is a major new center of engineering and product design for the company. Over the next year, Go Daddy plans to double its current Sunnyvale-based staff, from 40 to 80.
“We’ve been impressed with Dominik and Pavel throughout this process,” said Go Daddy President of Products and Technology Jason Rosenthal. “These guys understand, like no others we’ve seen, how mobile is top-of-mind for small businesses right now. They’ve solved some very important usability and technology problems around mobile site creation and management, which make them unique. Dominik and Pavel embody the kind of creativity and entrepreneurial drive we value at Go Daddy and most importantly, they’ve built the foundation for an extraordinary product our customers will love.”
M.dot co-founder Dominik Balogh pointed out the shared values of the two companies. “Go Daddy is exactly the kind of technology company we wanted to join because they cater to and understand the small business customer,” Balogh said. “We’ve always believed small business is essential for a healthy economy, and Go Daddy has extensive experience as the industry’s leading platform for small enterprises. Joining Go Daddy means our mobile software will now be accessible through much larger distribution channels and will help even more small businesses achieve greater success. It also gives us the benefit of integrating with Go Daddy’s immense customer support foundation, which I believe is a significant competitive advantage in this segment.”
“Go Daddy was built from scratch and is now a multi-billion dollar company – their industry expertise and achievements are inspiring,” said M.dot co-founder Pavel Serbajlo. “We're looking forward to taking mobile software for small businesses to the next level with Go Daddy. We share the same vision for mobile. We’re passionate about small businesses, start-ups and entrepreneurs … we all want to build software products that enable our customers to succeed online.”
The M.dot app offers mobile website creation based on pre-loaded templates with easy-to-use custom features, such as store location, driving directions, tap-to-call, galleries, business hours and price lists, all in a user-interface designed for mobile device screens. It also includes a blog feature, which allows users to write and insert photos and video with rich text capabilities. It integrates with Facebook, Twitter, Flickr, YouTube and Dropbox.
M.dot is available in the iTunes App Store at: https://itunes.apple.com/us/app/m.dot/id565017826?ls=1&mt=8
Financial terms of the M.dot acquisition have not been disclosed.
In late 2011, Go Daddy received an investment from KKR, Silver Lake and Technology Crossover Ventures designed to build upon Go Daddy’s commitment to providing high quality products and services to its customers.
Since then, the company has also embarked on international growth, including expansion into India. Go Daddy also has facilities in Arizona, California, Colorado, Iowa, Toronto, Amsterdam and Singapore.
To apply to join the Go Daddy team, please visit www.GoDaddy.com/Jobs.
Read why our customers recommend Go Daddy.
About Go Daddy
Go Daddy is the world's largest domain name provider, Web hosting provider and new SSL certificate provider, focused on helping small businesses grow larger. Go Daddy provides dozens of cloud-based services and is the largest worldwide mass-market hosting provider by annual revenue, according to 451 Research (Mass-Market Hosting Report-Fall 2012), and is the #1 provider of net-new SSL certificates for 2012, according to the Netcraft, LTD Secure Server Survey. To learn more about the company, visit www.GoDaddy.com/PR.
- Go Daddy Operating Company, LLC -
Copyright © 2013 GoDaddy.com, LLC All Rights Reserved.
The buzz continues for cloud, data analytics and the Internet of Things (IoT) and their collective impact across all industries. But a new conversation is emerging - how do companies use industry disruption and technology enablers to lead in markets undergoing change, uncertainty and ambiguity? Organizations of all sizes need to evolve and transform, often under massive pressure, as industry lines blur and merge and traditional business models are assaulted and turned upside down. In this new data-driven world, marketplaces reign supreme while interoperability, APIs and applications deliver un...
Oct. 9, 2015 08:00 AM EDT Reads: 287
Too often with compelling new technologies market participants become overly enamored with that attractiveness of the technology and neglect underlying business drivers. This tendency, what some call the “newest shiny object syndrome,” is understandable given that virtually all of us are heavily engaged in technology. But it is also mistaken. Without concrete business cases driving its deployment, IoT, like many other technologies before it, will fade into obscurity.
Oct. 9, 2015 08:00 AM EDT
Today air travel is a minefield of delays, hassles and customer disappointment. Airlines struggle to revitalize the experience. GE and M2Mi will demonstrate practical examples of how IoT solutions are helping airlines bring back personalization, reduce trip time and improve reliability. In their session at @ThingsExpo, Shyam Varan Nath, Principal Architect with GE, and Dr. Sarah Cooper, M2Mi's VP Business Development and Engineering, will explore the IoT cloud-based platform technologies driving this change including privacy controls, data transparency and integration of real time context w...
Oct. 9, 2015 07:30 AM EDT
"Matrix is an ambitious open standard and implementation that's set up to break down the fragmentation problems that exist in IP messaging and VoIP communication," explained John Woolf, Technical Evangelist at Matrix, in this SYS-CON.tv interview at @ThingsExpo, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
WebRTC converts the entire network into a ubiquitous communications cloud thereby connecting anytime, anywhere through any point. In his session at WebRTC Summit,, Mark Castleman, EIR at Bell Labs and Head of Future X Labs, will discuss how the transformational nature of communications is achieved through the democratizing force of WebRTC. WebRTC is doing for voice what HTML did for web content.
The IoT is upon us, but today’s databases, built on 30-year-old math, require multiple platforms to create a single solution. Data demands of the IoT require Big Data systems that can handle ingest, transactions and analytics concurrently adapting to varied situations as they occur, with speed at scale. In his session at @ThingsExpo, Chad Jones, chief strategy officer at Deep Information Sciences, will look differently at IoT data so enterprises can fully leverage their IoT potential. He’ll share tips on how to speed up business initiatives, harness Big Data and remain one step ahead by apply...
Nowadays, a large number of sensors and devices are connected to the network. Leading-edge IoT technologies integrate various types of sensor data to create a new value for several business decision scenarios. The transparent cloud is a model of a new IoT emergence service platform. Many service providers store and access various types of sensor data in order to create and find out new business values by integrating such data.
The broad selection of hardware, the rapid evolution of operating systems and the time-to-market for mobile apps has been so rapid that new challenges for developers and engineers arise every day. Security, testing, hosting, and other metrics have to be considered through the process. In his session at Big Data Expo, Walter Maguire, Chief Field Technologist, HP Big Data Group, at Hewlett-Packard, will discuss the challenges faced by developers and a composite Big Data applications builder, focusing on how to help solve the problems that developers are continuously battling.
There are so many tools and techniques for data analytics that even for a data scientist the choices, possible systems, and even the types of data can be daunting. In his session at @ThingsExpo, Chris Harrold, Global CTO for Big Data Solutions for EMC Corporation, will show how to perform a simple, but meaningful analysis of social sentiment data using freely available tools that take only minutes to download and install. Participants will get the download information, scripts, and complete end-to-end walkthrough of the analysis from start to finish. Participants will also be given the pract...
WebRTC services have already permeated corporate communications in the form of videoconferencing solutions. However, WebRTC has the potential of going beyond and catalyzing a new class of services providing more than calls with capabilities such as mass-scale real-time media broadcasting, enriched and augmented video, person-to-machine and machine-to-machine communications. In his session at @ThingsExpo, Luis Lopez, CEO of Kurento, will introduce the technologies required for implementing these ideas and some early experiments performed in the Kurento open source software community in areas ...
Internet of Things (IoT) will be a hybrid ecosystem of diverse devices and sensors collaborating with operational and enterprise systems to create the next big application. In their session at @ThingsExpo, Bramh Gupta, founder and CEO of robomq.io, and Fred Yatzeck, principal architect leading product development at robomq.io, discussed how choosing the right middleware and integration strategy from the get-go will enable IoT solution developers to adapt and grow with the industry, while at the same time reduce Time to Market (TTM) by using plug and play capabilities offered by a robust IoT ...
Today’s connected world is moving from devices towards things, what this means is that by using increasingly low cost sensors embedded in devices we can create many new use cases. These span across use cases in cities, vehicles, home, offices, factories, retail environments, worksites, health, logistics, and health. These use cases rely on ubiquitous connectivity and generate massive amounts of data at scale. These technologies enable new business opportunities, ways to optimize and automate, along with new ways to engage with users.
The Internet of Things (IoT) is growing rapidly by extending current technologies, products and networks. By 2020, Cisco estimates there will be 50 billion connected devices. Gartner has forecast revenues of over $300 billion, just to IoT suppliers. Now is the time to figure out how you’ll make money – not just create innovative products. With hundreds of new products and companies jumping into the IoT fray every month, there’s no shortage of innovation. Despite this, McKinsey/VisionMobile data shows "less than 10 percent of IoT developers are making enough to support a reasonably sized team....
“In the past year we've seen a lot of stabilization of WebRTC. You can now use it in production with a far greater degree of certainty. A lot of the real developments in the past year have been in things like the data channel, which will enable a whole new type of application," explained Peter Dunkley, Technical Director at Acision, in this SYS-CON.tv interview at @ThingsExpo, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Through WebRTC, audio and video communications are being embedded more easily than ever into applications, helping carriers, enterprises and independent software vendors deliver greater functionality to their end users. With today’s business world increasingly focused on outcomes, users’ growing calls for ease of use, and businesses craving smarter, tighter integration, what’s the next step in delivering a richer, more immersive experience? That richer, more fully integrated experience comes about through a Communications Platform as a Service which allows for messaging, screen sharing, video...
SYS-CON Events announced today that Dyn, the worldwide leader in Internet Performance, will exhibit at SYS-CON's 17th International Cloud Expo®, which will take place on November 3-5, 2015, at the Santa Clara Convention Center in Santa Clara, CA. Dyn is a cloud-based Internet Performance company. Dyn helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Through a world-class network and unrivaled, objective intelligence into Internet conditions, Dyn ensures traffic gets delivered faster, safer, and more reliably than ever.
The IoT market is on track to hit $7.1 trillion in 2020. The reality is that only a handful of companies are ready for this massive demand. There are a lot of barriers, paint points, traps, and hidden roadblocks. How can we deal with these issues and challenges? The paradigm has changed. Old-style ad-hoc trial-and-error ways will certainly lead you to the dead end. What is mandatory is an overarching and adaptive approach to effectively handle the rapid changes and exponential growth.
Mobile messaging has been a popular communication channel for more than 20 years. Finnish engineer Matti Makkonen invented the idea for SMS (Short Message Service) in 1984, making his vision a reality on December 3, 1992 by sending the first message ("Happy Christmas") from a PC to a cell phone. Since then, the technology has evolved immensely, from both a technology standpoint, and in our everyday uses for it. Originally used for person-to-person (P2P) communication, i.e., Sally sends a text message to Betty – mobile messaging now offers tremendous value to businesses for customer and empl...
Can call centers hang up the phones for good? Intuitive Solutions did. WebRTC enabled this contact center provider to eliminate antiquated telephony and desktop phone infrastructure with a pure web-based solution, allowing them to expand beyond brick-and-mortar confines to a home-based agent model. It also ensured scalability and better service for customers, including MUY! Companies, one of the country's largest franchise restaurant companies with 232 Pizza Hut locations. This is one example of WebRTC adoption today, but the potential is limitless when powered by IoT.
You have your devices and your data, but what about the rest of your Internet of Things story? Two popular classes of technologies that nicely handle the Big Data analytics for Internet of Things are Apache Hadoop and NoSQL. Hadoop is designed for parallelizing analytical work across many servers and is ideal for the massive data volumes you create with IoT devices. NoSQL databases such as Apache HBase are ideal for storing and retrieving IoT data as “time series data.”
https://www.math.lsu.edu/~bourdin/defectmechanics/backtracking/ | code | One of the issues in the numerical implementation of Francfort and Marigo’s energy lies in its non-convexity. Indeed, this energy (and the regularizations used in its implementation) can sometimes possess many local minimizers, in which the numerical method might get trapped. Because -convergence preserve only global and some local minimizers, local minimizers of the regularized energy may have little physical relevance.
The goal of the backtracking algorithm (Bourdin, 2007) is to detect a type of local minima frequently encountered in numerical experiments, using the crack growth condition and the form of the elastic potential. Considering a family of monotonically increasing loads $t_1 \le t_2 \le \dots$, it is easy to show that for any two load increments $t_i \le t_j$, the deformation fields $u_i$, $u_j$ and crack sets $\Gamma_i$, $\Gamma_j$ must satisfy the condition
$E_b(u_i) + E_s(\Gamma_i) \le (t_i/t_j)^2\, E_b(u_j) + E_s(\Gamma_j),$
where $E_b$ and $E_s$ correspond to the bulk and surface part of the fracture energy, respectively.
If this condition is not met, then $(u_i, \Gamma_i)$ cannot be a global minimizer of the fracture energy. In the Backtracking Algorithm, if such a violation is detected, one returns to load increment $t_i$, and initializes the minimization algorithm with a crack field built from $\Gamma_j$.
The following figures illustrate the Backtracking on a simple traction experiment on an elongated beam. For this problem, it is possible to show that there exists a critical load increment below which the optimal configuration corresponds to an un-cracked domain. Above this threshold, any configuration with a transverse crack is a global minimizer. The leftmost figure represents a typical energy evolution. The red line represents the theoretical energy, the dashed line the numerical total energy. For loads in the 2.5-8.5 range, the numerical scheme fails to bifurcate towards the cracked solution and converges to a local minimizer that violates the condition above. The rightmost figure illustrates the outcome of the backtracking algorithm on this example. At first (steps 1, 2 in the figure), the algorithm converges to local minimizers. When it eventually bifurcates towards the cracked solution (step 3), a violation of the backtracking condition is detected and the algorithm returns to the critical load increment. As the load is incremented, the algorithm now converges to the proper solution (step 5).
Backtracking algorithm applied to a uniaxial tension experiment from (Bourdin, 2007). | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480622.9/warc/CC-MAIN-20190216145907-20190216171907-00394.warc.gz | CC-MAIN-2019-09 | 2,351 | 6 |
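For readers who want the algorithm in pseudocode, here is a minimal sketch of the load loop with the backtracking test described above. It is written in Python purely for illustration; the function and variable names are invented, and solve() stands in for whatever alternate-minimization solver is applied to the regularized energy at a given load.

def evolution_with_backtracking(loads, solve, v0):
    # loads: monotonically increasing load increments t_1 <= t_2 <= ...
    # solve(t, v_init): one solve at load t from initial crack field v_init,
    #                   returning (bulk energy, surface energy, crack field)
    states = []
    v = v0
    j = 0
    while j < len(loads):
        bulk_j, surf_j, v = solve(loads[j], v)
        states = states[:j] + [(bulk_j, surf_j, v)]
        restart = None
        for i, (bulk_i, surf_i, _) in enumerate(states[:j]):
            scale = (loads[i] / loads[j]) ** 2
            if bulk_i + surf_i > scale * bulk_j + surf_j + 1e-12:
                restart = i   # state i violates the condition: it was only a local minimizer
                break
        if restart is None:
            j += 1            # condition satisfied for all earlier increments: accept step j
        else:
            j = restart       # return to t_i; the crack field found at t_j seeds the new solve
    return states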
https://qwiklabs.com/focuses/2554/reviews?locale=en&page=4 | code | Using Open Data with Amazon S3Go to Lab
When the user is prompted to enter his one-time code, make it auto-jump to the next 4-digit bucket. Besides that, great and very explanatory lab.
This is very informative and helpful.
I could not download the items and the page kept crashing. | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647901.79/warc/CC-MAIN-20180322170754-20180322190754-00330.warc.gz | CC-MAIN-2018-13 | 273 | 4 |
https://www.labcognition.com/onlinehelp/en/change_password.htm | code | The command "Change Password" invokes a dialog which enables
the user to change his personal password. This password is used for all
security related actions in the software. The command is only available
if the user has the permission to change his password. The permission to
change passwords is issued by the administrator using the User
Management in the security setup. | s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202688.89/warc/CC-MAIN-20190322180106-20190322202106-00538.warc.gz | CC-MAIN-2019-13 | 373 | 6 |
http://www.thefoodsection.com/shoppinglist/bakeware/ | code | Broom for Baking
The Swiffer has dealt a serious blow to brooms everywhere, but one thing it can't do is test whether your cake is fully baked. The Amish Cake Tester Broom (found via Book of Joe) is comprised of corn husk straws that can be broken off and used as cake testers. $6.99, including a poem, at CHEFS.
If you've ever had to resort to using an empty wine bottle as a rolling pin (I have), you can appreciate the playful design of this rolling pin that evokes the shape of a bottle of Bordeaux, but won't leave any wine stains in your biscuits. $21.40 at Atypyk. | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00290-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 571 | 3 |
http://3dog.ru/user/view/5309 | code | After this, the 404 error started showing up every time I tried to open the dashboard.I researched the issue and tried all the usual fixes.After all, you probably won’t want to spend an enormous amount of time to build a website using Word Press and end up starting all over again with Squarespace (or vice versa).In this review, we’ll benchmark Squarespace vs Word Press in the following 5 categories plus our conclusion: Simplistically, Word Press is an open source platform, meaning that their codes are open to everybody to use and customize.
This site runs on WordPress hosted through a Go Daddy account.
I cleared my Internet cache, I tried installing and using Firefox to access the dashboard, and I tried posting a new post (since the 404 error indicated that it wasn't finding any content).
I also tried accessing the dashboard from a second computer and I got the same 404 error. The problem was simple: my Web site was working and delivering content but I couldn’t access the dashboard to add any new content or manage the site in any way shape or form.
The strange thing was that I could also access site statistics but I couldn’t get to the dashboard.
I was settling in for a long couple of days rebuilding the site and manually reloading the Word Press software, my theme, and disabling plugins. | s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209040.29/warc/CC-MAIN-20180814131141-20180814151141-00485.warc.gz | CC-MAIN-2018-34 | 1,321 | 6 |
http://stackoverflow.com/questions/6575079/is-there-a-generic-term-for-chained-programming | code | Most obvious example is jQuery where the result of a function is the modified object, to which you can apply same or different functions. An example below is from dan's blog, but examples are abundant over the internet.
var $inner = $("<div>inner</div>")
    // append it to a new outer div
    .appendTo("<div>outer</div>")
    // next change the jQuery chain to the "outer" div
    .parent()
    // append the outer div to the body
    .appendTo("body")
    // finally, go back to the last destructive command,
    // giving us back a pointer to the "inner" div
    .end();
Is there literature available on this technique? I've only seen this being used in an imperative way (if you call $inner.append($moreInner), $inner is modified). Would it make sense to use a functional approach with this kind of programming (i.e., keep the state of the objects unaltered and return a clone of the modified object)?
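(Illustration added for clarity, not part of the original question; the class and method names are made up.) In a functional variant, each call would leave the receiver untouched and return a new value, so the chain builds successive copies instead of mutating one object, e.g. in Python:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Node:
    tag: str
    children: tuple = ()

    def append(self, child):
        # non-destructive: build and return a new Node, leaving self unchanged
        return replace(self, children=self.children + (child,))

inner = Node("inner")
outer = Node("outer").append(inner)   # chaining works on the returned copies
assert inner.children == ()           # the original "inner" was not modified

The trade-off is that every step allocates a new object, which is one reason jQuery-style APIs mutate in place and return the wrapped set itself.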
http://alexbea.com/blog/ergonomic-breakpoint-variables/ | code | Ask 20 front end developers what they like to name their responsive breakpoints and you’ll get… probably 10 or so answers. Most people use similar ones, but many of us like to switch it up. For me, I’ve landed on a system I like based on screen interaction context.
To be more clear, what I’m talking about here are variable names for responsive breakpoint values while using CSS preprocessors. I use bracketed Sass, but all the pre- or post-processors I’m aware of (and worth using) have some system for using variables. They allow us to say that
$color-primary is always
#663399, then never need to remember in what order those 3s, 6s, and 9s go.
This is arguably even more useful for responsive breakpoints, where there might be three to six (or more?) values to remember. And what if we decide that one of the values should probably be about 100px higher? The point is that using variables for these are important.
I think it’s fair to say that the most typical set of breakpoint variables is some language variation of:
$small $medium $large $extra-large
Maybe those are prefixed with
$breakpoint-. Maybe they’re abbreviated to
$sm, $md, $lg, $xl. But that’s usually what I’ve seen.
My main issue with this is that it’s difficult to expand. I find that depending on the design it can be useful to have more than four breakpoints. In English at least, adding more involves lots more prefixing, such as
$medium-large. This gets clunky, and more importantly it’s very subjective. What’s “small” to me might be “medium” to the next person. My
$xl might be your $lg.
The second most popular system is probably the device route.
$phone or $mobile $tablet $tablet-portrait $laptop $desktop
This definitely provides more options for the mid-range values and is also less subjective. Great improvements. My issue with this is somewhat pedantic. Phones can get pretty big, and tablets can be fairly small… as well as bigger toward laptop sizes. Laptops can be really tiny. You get my point.
Worse is when breakpoints are named after specific devices, such as
$iPad (it’s almost always Apple stuff).
Ergonomic context variable names
It’s important I mention that I didn’t come up with this myself. I remember a while back hearing a discussion of breakpoint naming on the Shop Talk Show podcast and they referenced a CSS-Tricks post that shared some breakpoint ideas and solicited comments for more. I browsed the comments with suggestions both simple and silly (e.g., gollum, bilbo, gimly, aragorn, gandalf, balrog). Then I came on one comment by Kevin Powell referencing a tweet by Luke Wroblewski.
@TrentWalton wrist, palm, lap, desk, wall, mall sized screens. human ergonomics won't change. devices will.— Luke Wroblewski (@lukew) November 27, 2012
I liked Luke’s point that human ergonomics were, if not unchanging, more stable than device names and types. So without further ado, my breakpoint variables are:
$bp-palm $bp-hands $bp-hands-wide $bp-lap $bp-desk $bp-wall
Luke’s list included “wrist” and “mall sized screens”. I’ve yet to be called on to make a site for anything larger than a wall-sized screen and anything lower than “palm” should be suitable for wristwatch size screens (though I admit I rarely test for anything that small). I also like some short prefixing for clarity.
I also added some in-between levels. People interact with devices in their palm, but also devices they need both their hands to hold (e.g., most tablets). My
$bp-hands-wide is essentially a landscape tablet width. I don’t love the name, but I’ve not thought of anything better yet.
I can hear the argument that all I’ve done is taken the device-type approach to variables and renamed them. Totally valid. And a screen built into a kiosk might be the size of one that I’d hold wide in my hands while it’s embedded into a refrigerator-sized machine—so we’re no longer interacting while holding in our hands.
As with anything in web development, there are trade-offs. Also, since these variables are part of the build process and not the end product, it's most important that these be meaningful for the current and future developers. After that, personal preference takes us the rest of the way. The device-type approach works for many people, but I like this better. I still do think that the subjective small-medium-large approach is problematic, however.
If you're curious about my actual sizes, these are mine from a project I'm working on now:
$bp-palm: 20em; // 320px $bp-hands: 37.5em; // 600px $bp-hands-wide: 53.125em; // 850px $bp-lap: 75em; // 1200px $bp-desk: 100em; // 1600px
The em relative unit is based on the current standard 16px default font size. I also haven't had need for a
$wall size yet. I might start that around 2400px or something. Totally pulling that number out of the air in the moment.
If you want a deeper dive into that, check out the provocatively named, but thoughtful post, The 100% correct way to do CSS breakpoints. Gilbertson’s focus on ranges for devices rather than hard, well, breakpoints spoke to me. Clearly his thoughts on variable naming had less of an effect.
Also check out the inspiring CSS-Tricks post and its comments for more brainstorming and front end dev silliness, too.
Updated (2017-03-14): Initially incorrectly stated that
$bp-hands-wide was related to portrait tablet orientation. The breakpoint is meant to reflect landscape tablet orientation. | s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00632.warc.gz | CC-MAIN-2020-40 | 5,469 | 36 |
https://guia.unl.pt/en/2019/fcsh/program/4356/course/722131042 | code | Linguística Computacional (not translated) - 2nd semester
Acquisition of knowledge in the field of Computational Linguistics allowing students:
a) to develop skills for the analysis and understanding of natural languages aiming at computation;
b) to understand the levels of computer-aided treatment involved in the processing and computation of natural languages;
c) to know the different current strategies and tools for the processing and computation of natural languages;
d) to acquire methodologies for analysis of linguistic data envisaging the modelling of linguistic knowledge for computational purposes;
e) to make use, in practical scenarios, of the acquired knowledge.
Raquel Fonseca Amaro
Weekly - 3 teaching hours + 1 tutorial
Total - Available soon
Allen, J. (1995) Natural Language Understanding, Menlo Park, CA: Benjamim Cummings.
Baldwin, T. (2005) General-Purpose Lexical Acquisition: Procedures, Questions and Results, in Proceedings of the Pacific Association for Computational Linguistics 2005, Tokyo, Japan.
Bolshakov, I. & Gelbukh, A. (2004) Computational Linguistics: Models, Resources, Applications, México: IPN, UNAM, FCE.
Branco, A., Mendes, S. & Ribeiro R. (eds.) (2004) Language Technology for Portuguese: shallow processing tools and resources, Lisboa: Edições Colibri.
Manning & Schütze (1999) Foundations of Statistical Natural Language Processing, MIT Press.
Mitrov, R. (2003) The Oxford Handbook of Computational Linguistics, Oxford: Oxford University Press.
Pustejovsky, J. (1995) The Generative Lexicon, The MIT Press.
Sag & Wasow (1999) Syntactic Theory - A Formal Introduction, Stanford: CSLI Publications.
Theoretical and practical classes and tutorial guidance, drawing on case studies and practical application of the acquired knowledge, including:
i) topic presentation and explanation by the teacher;
ii) discussion and analytic analysis of relevant literature on the addressed topics;
iii) practical application of acquired knowledge in individual and collective essays within specific tasks.
Continuous evaluation, including the following components: individual and collective essays, with presentation and discussion in class (accounting for 40% of the final grade); and final individual test (accounting for 60% of the final grade).
Content in English
1. Linguistics, computation and natural language processing
1.1 Computation vs. natural language processing
1.2 Linguistic modelling, computation and processing
2. Areas and levels of analysis of Computational Linguistics
2.1 Shallow processing
2.2 Information retrieval and summarization
2.3 Machine translation
2.4 Natural language interface
3. Computational Linguistics theoretical grounding and strategies
3.1 Structuralist approach and Chomsky
3.2 Context-free grammars and transformational grammars
3.3 Valencies, interpretation and constraints
3.4 HPSG and unification
3.5 Corpus Linguistics
3.6 Automatic acquisition
4. Practical application to Portuguese
4.1 Regular expressions
4.2 Computational grammars for generation and parsing
4.3 Computational lexica
Programs where the course is taught: | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00100.warc.gz | CC-MAIN-2022-40 | 3,108 | 44 |
http://www.scrapbookmax.com/forums/threads/8760-Photoshop-Elements-Wordart | code | I am trying to create wordart in Photoshop Elements and I haven't a clue what I am doing. I have read dozens of tutorials but still can't get it to work out for me. When I do get my wordart created and I save it, there is always a white background behind it. How do I remove the background? Is there anyone here who could maybe walk me though step-by-step on how to do this? Please.
Crystal aka inspiredmommie | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00303-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 409 | 2 |
https://physics.stackexchange.com/questions/1816/orbital-mechanics-of-dragons-egg/1828 | code | In the novel Dragon's Egg, the human crew use one asteroid to swing other asteroids in place to counter the gravity of the neutron star. I understood that it was similar to a gravity sling shot, but I wasn't able to fully get how the crew were able to move the smaller asteroids in place using the big one. Can anyone explain that further?
I know nothing of this book, but I do know a little about N-body gravitational interactions. When N >= 3, you can do just about anything you want with a little propulsion, but it may take a very long time. This has been proposed by NASA (good approximate for hard SF) as a way of sending probes to the outer solar system: http://en.wikipedia.org/wiki/Interplanetary_Transport_Network
As far as the stability questions, it is known that you can put 7 or 8 equal-mass objects on a stable circular orbit around their center of mass: http://adsabs.harvard.edu/abs/1988A%26A...205..309S
After introducing an external mass and its accompanying tidal forces, I imagine there is still some stability regime, which could be pretty close to the NS, especially if the "asteroids" have WD density, their constellation could be on the order of 1km, while the NS radius is probably closer to 15km.
When a small spacecraft slingshots around a large moon or planet, it can greatly increase its speed at the expense of very slightly decreasing the speed of the larger body. Given a method by which to move a larger body, you could very simply cause it to move smaller bodies by traveling near enough to attract them. More complicated would be moving it in such a way that the smaller body was given a specific increase in absolute velocity, which would be the "reverse slingshot".
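To put rough numbers on that (a standard textbook result, added here for context): in the rest frame of the large body a flyby only rotates the small body's velocity, so in the star's frame the best-case (head-on) exit speed is

$v_{\text{out}} \approx v_{\text{in}} + 2U,$

where $U$ is the orbital speed of the large body. Momentum conservation, $M\,\Delta U = -\,m\,\Delta v$, is why the large body's slowdown is negligible when $M \gg m$. A "reverse slingshot" is the same exchange arranged so that it is the small body whose final velocity you are trying to dictate.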
Not an answer but to give context:
There are eight asteroids: 2 large and 6 smaller ones. All the asteroids have been collapsed by injecting their core with magnetic monopoles that makes them collapse to white dwarf density. The large ones were originally 250km in diameter and collapsed to 100m.
The larger asteroids are on highly elliptical orbits. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00427.warc.gz | CC-MAIN-2022-21 | 2,054 | 8 |
https://forum.playhive.com/t/is-there-a-plancke-io-for-the-hive/11981 | code | Like an unofficial improved stats checker for this server? I know there’s a bot but is there a site?
50/50 shot this gets deleted but yes there is https://hive.paroxity.net/
This post was flagged by the community and is temporarily hidden.
Thanks! (29 characters)
(no offense) why are year old accounts posting spam?
Looks ugly but the official api works. | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655929376.49/warc/CC-MAIN-20200711095334-20200711125334-00407.warc.gz | CC-MAIN-2020-29 | 357 | 6 |
https://mmo-indie.com/download/PVPZoneAndRealms | code | Download PVP Zone And Realms
- Restore old condition system (I was informed of an incompatibility with certain older versions of Unity)
- Added an idea to improve (disabled at the moment)
- Fixed: a random problem if a player connects while already in a pvp zone
- Fixed an error when entering or exiting a "PVP Zone Area" and playing with a client build in Client&Server
- minor edit, change realms in Resources
- Add support for uMMORPG 2D
- Add item for change REALM
- Removed the client-side "is PVP Zone" check; it now runs server-side only!
- Removed server logic from client-only builds
- For this version : See changelog on discord
Add various PVP types to your game and limit the regions where PVP combat can take place! | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00725.warc.gz | CC-MAIN-2023-40 | 703 | 12 |
http://bartwullems.blogspot.com/2016/06/vsts-buildprivate-build-slots-you-only.html | code | By default when you are using VSTS(aka Visual Studio Online), you get the following extra services as part of your subscription:
Build and Deployment: Use this task-based service to create, queue, and monitor cross-platform builds. Use Hosted Agents that Microsoft runs, or Private Agents that you run so that you can install custom software.
Build(XAML)/ Build and Deployment: Create build definitions and run them in Visual Studio Team Services. Configure builds to run on demand at specific intervals, or for continuous integration (CI). Builds are charged per minute for the actual amount of computing time used to build your project.
Cloud-based Load Testing: Create load tests using Visual Studio Ultimate 2013, Visual Studio Enterprise 2015, or later. Run those tests in Visual Studio Team Services. Load tests are measured and billed in virtual user minutes: the number of virtual users multiplied by the number of minutes that you set up for the load test run.
Your Visual Studio Team Services account includes free amounts of these additional services:
Build (XAML) / Build and Deployment: combined 240 minutes (4 hours) per month
Cloud-based Load Testing: 20,000 virtual user minutes per month
With the new Build and Deployment you get 2 things:
- One hosted build agent that runs on a build server provided by Microsoft
- One private build agent that you can install on your own hardware(virtual or physical)
I wasn't aware of the second thing, so I got into trouble when I tried to add a second private build agent and ended up with some weird errors. Luckily it is easy to buy extra private agents through the Azure Portal. Here are the steps you need to follow: https://www.visualstudio.com/docs/setup-admin/team-services/get-more-build-or-load-testing-vs#AzurePortal
Remark: The hosted XAML build service will be retired by September 2016. | s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00330-ip-10-233-31-227.ec2.internal.warc.gz | CC-MAIN-2017-13 | 1,858 | 12 |
https://tyronline71.ru/forum/?serial=8019 | code | To extract the RSA private key from the PEM, run the following command: openssl rsa -in.
If your certificate is secured with a password, enter it when prompted. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. For more information about the team and community around the project, or to start making your own contributions, start with the community page. EVP & TLS: Add necessary EC_KEY data extraction functions, and use them libssl code uses EVP_PKEY_get0_EC_KEY() to extract (https://tyronline71.ru/forum/?serial=6212) certain basic data from the EC_KEY. This fixes a crash in the unwinder when this function is i. Create an elliptic-curve public/private key pair with genpkey. OpenSSL is an open source SSL implementation. AWS Documentation AWS CloudHSM User Guide. I installed on first server, all good.
The -label parameter specifies the certificate's label within the key database. This is the password you gave the file upon exporting it. Format PEM_KEY_FILE using a text editor Remove "Bag attributes" and "Key Attributes" from this file and save. I understood everything but not the format of the private keys. In this tutorial, we demonstrate how to extract a private key from the Java KeyStore (JKS) in your projects using OpenSSL and Keytool. Open a command prompt, and move to the OpenSSL-Win32\bin directory, using: cd C: \OpenSSL\bin Execute the following command to export Private Key file: openssl pkcs12 -in. If your private key is encrypted, you will be prompted for its pass phrase. If you have landed on this tutorial and do not have PFX certificate file please visit: Migrate (move) SSL certificate from Windows to Linux. He has my password to some website I visited. The private key is used to decrypt, and to sign things.
- Openssl - CSR: Extract PKCS#10 contained in a PKCS#7
- Use RSA public key to generate private key in Openssl
- The Hitchhiker's Guide to Using OpenSSL for Managing
- How to use OpenSSL: Hashes, digital signatures, and more
- Generate OpenSSL RSA Key Pair from the Command Line
- Extract a private key from a Gnu Keyring file
Encryption - How to decrypt a '.enc' file that has been
A product key generator is very important as it is responsible for the generation of the special product key. The RSA key file can either be a PEM format private key or a PKCS#12 keystore (typically a file with a. pem; type "quit", followed by the "ENTER" key. The Certificate Authority (CA) provides you with your SSL Certificate (public key file). The private key consists of numeric values, two of which (a modulus and an. Convert p12 private key to PEM, if necessary. This includes creating customized forms and controls. The windows implementation has been done by Shining Light. With the Federal Reserve set to. 2020 Download Free BitCoin Private Key Finder and generator and try your luck. What appears to be a decrypted private key has been extracted from a Cyberoam UTM certificate and published online in a move that could place businesses.
Your web server creates two cryptographic keys, a Private Key and a Public Key. Use the password you specified earlier when exporting the pfx. Please Sign Up or Login to. To extract only public key first we need to convert the pfx file to pem which contains both private and public key, and. New ways of enabling machines to wirelessly talk with each other are propagating as fast as blockchain startups. The private key is kept secret and is never shared with anyone. GeoStudio 2020 7.1 + Crack Keygen/Serial Date added: Jan 2020. If you are annoyed with entering a. Using a private key to attach a tag to a file that guarantees that the file was provided by the holder of the private key is called signing, and the tag is called a signature.
Making sense of SSL, RSA, X509 and CSR
Powerquest Drive Image 7 Freeware. OpenSSL 3.0 is the next major version of OpenSSL that is currently in development and includes the new FIPS Object Module. Windows 7 Operating system will not install without the. How do I convert and export key/certificate pair from jks to pkcs12 format. To do this, please use the following commands to convert your files into different formats. If you only want the private (https://tyronline71.ru/forum/?serial=1345) key file, you can skip steps 5 and 6. Q&A for information security professionals. To check that the public key in your cert matches the public portion of your private (https://tyronline71.ru/forum/?serial=1345) key, you need to view the cert and the key and compare the numbers. As per survey Havij is listed as one of the finest and widely used tool used for finding SQL Injection vulnerabilities on a web page.
- How to extract private key from pfx file using openssl
- Errata Security: Verifying the Comodo Hacker's key
- GitHub - nyteshade/KeyCertExtractor: A hacked together GUI
- OpenSSL - User - how to extract the private key out of the
- Easeus Data Recovery Wizard 10.8 License Key Generator
- Extract Public key from Private Key
- Mass Effect 3 Cd Key Generator
- Licensing - Looking for a license key algorithm
- How to recover the private key of an SSL certificate in an
- Exporting a certificate's private key to file (pem, cert, pfx)
In the simplest case, this can be a self-signed certificate, but if you want to distribute a website to a wide audience, the certificate must be signed by a certificate authority trusted by your entire target audience. Convert PEM PEM to DER openssl (hop over to this website) x509 -outform der -in. The Knox website emphasizes that individuals and organizations use. A resource for generating RSA public keys. A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. Openssl documentation: Generate RSA Key. Copy the PFX or P12 file to the same location as your OpenSSL program (or specify the location in the command line). For a list of vulnerabilities, and the releases in which they were found and fixes, see our Vulnerabilities page. I have a program that was created for me, and it has a 7 day trial.
Click Next, and then click Finish. Remove Private key password openssl rsa -in [HOST] -out [HOST] Enter the passphrase and [[HOST]] is now the unprotected private key. In Confirm password, type the same password again, and then click Next. The new requirements that I have are that I also need to extract a CRL from that PKCS12. Run the following OpenSSL command to generate your private key and public certificate. The following output is displayed. Using OpenSSH in Windows 10. The first thing I tested was using the OpenSSH utilities normally to generate a few key-pairs and adding them to the ssh-agent. Next step is extracting the public key certificate from the pfx file, there is a direct command in OPENSSL to extract the public key certificate from the pfx file but the generated file will contain public key certificate and some. After entering import password OpenSSL requests to type another password twice.
Public/Private key encryption is a method used usually when you want to receive or send data to thirdparties. I can use the Export-PFXCertifiacte cmdlet to get [HOST] file with a password that contains both the certificate and the key, but I need to have the key as a separate file. To verify this open the file using a text editor. It will also cover th. This reveals the RSA parameters, as labelled below in red. How to generate RSA keys for use with encrypted forms On Windows. The 3 files I need are as follows (in PEM format): an unecrypted. Exporting the public key from a JSK is quite straightforward with the keytool utility, but exporting the private (look what i found) key is not allowed. Right now, I'm generating keys via ssh-keygen which I put [HOST].
|1||Generate public key and private key with OpenSSL in||37%|
|2||OpenSSL - OpenSSL Tutorials||91%|
|3||Openssl - Load Private Key||17%|
|4||How to use openssl for generating ssl certificates private||80%|
|5||Dreamweaver private key sftp||58%|
|6||Hack dmg x7 private||18%|
|7||Hack sign extra ost||32%|
|8||Dmg hack metin2 private||17%|
More than 25, 000 alumni work and serve throughout the United States and the world. These key pairs are encoded in base64, and their sizes can be specified during this process. After you have downloaded [HOST] file as described in the section above, run the following OpenSSL command to extract the private key from the file: openssl pkcs12 -in [HOST] -out [HOST] –nodes. Then convert into a CSR via. PEM certificate for you as long as you know the passphrase. First, back up your IIS server certificates to [HOST] file using the following OpenSSL command: openssl pkcs12 -export -out [HOST] -inkey [HOST] -in [HOST] -certfile [HOST] This will combine your primary certificate, intermediate (CA) certificate, and your private key file into [HOST] backup. Therefore, we need to get the support of the openssl utility. It makes no sense to encrypt a file with a private key. And signatures cannot be re-used, so they have gained nothing.
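As an alternative to the openssl CLI calls quoted above, the same PKCS#12 extraction can be scripted; the sketch below uses the Python cryptography package, and the file names and password are placeholders of mine, not values from this page.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.serialization import pkcs12

# Load the .pfx/.p12 bundle; the password is whatever was set when the file was exported.
with open("backup.pfx", "rb") as f:
    key, cert, extra_certs = pkcs12.load_key_and_certificates(f.read(), b"export-password")

# Write the private key unencrypted (the equivalent of -nodes) and the certificate as PEM.
with open("private.key", "wb") as out:
    out.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption()))
with open("certificate.crt", "wb") as out:
    out.write(cert.public_bytes(serialization.Encoding.PEM))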
How to convert PFX to separate .key/.crt file
Then tell the CA what. Download GeoStudio 2020 7.1 + keygen crack. These are the same files originally shared at the Computer History Museum on March 25th 2020 and are being (re)published in this repo to make them easier to find, reference-to in external writing and works, and to allow exploration and experimentation for those interested in early PC Operating. It should not be used in production. DER SEQUENCE (binary or PEM encoding) * PKCS#1 RSAPublicKey DER SEQUENCE (binary or PEM encoding) * OpenSSH (textual public key only) An RSA private key can be in any of the following formats: * PKCS#1 RSAPrivateKey DER SEQUENCE (binary or PEM encoding) * PKCS#8 PrivateKeyInfo DER SEQUENCE (binary or PEM. Extract the hash from the private key file (id_rsa), this page will do it for you; 2) Give this hash to JohnTheRipper or Hashcat to start the crack. Verify a Private Key. For example, if we need to transfer SSL certificate from one windows server to other, You can simply export it [HOST] file using IIS SSL export wizard or MMC console. The CSR is created using the PEM format and contains the public key portion of the private key as well as information about you (or your company).
- How To Find The Private Key for SSL Certificate - SSL Key
- How to Decrypt an Enrypted SSL RSA Private Key (PEM / KEY
- Remove Private Key Password From PFX (PKCS12) File
- Verifying that a Private Key Matches a Certificate
- Ssl - No certificate matches private key while generating
- Leaked Bitcoin Private Keys
- Open Security Research: Extracting RSAPrivateCrtKey and
Generate public key certificate for SSL pinning
In order to generate an RSA key, an EVP_PKEY must first be allocated with EVP_PKEY_new. This is an optional element. Tags and branches are occasionally used for other purposes such as testing experimental or unstable code before it is merged. OpenSSL is a versatile command line tool that can be used for a large variety of tasks related to Public Key Infrastructure (PKI) and HTTPS (HTTP over TLS). I then encrypted the private (https://tyronline71.ru/forum/?serial=472) key itself using regular mcrypt with the human-memorizable key of my choice and converted it to ACSII using base64_encode. Some details added past list and wrestling to renovieren solutions. PKCS#12 format, contains the SSL certificate (public keys) and the corresponding private (https://tyronline71.ru/forum/?serial=472) keys. Right now, I'm generating keys via ssh-keygen which I put [HOST], respective somewhere on the client-side. Solved: I created a CSR from the first VCS server and received my SAN cert for both VCS-E servers.
Openssl extract private key from pem
Although this algorithm has some weaknesses, the complexity is still high enough to make it reasonably secure. To make this available to Windows, you need to combine the private and public keys into. File transfer is an essential and important activity in the day-to-day computing world. Generate RSA public key and private key without pass phrase. In order to prove his identity, the person claiming to have hacked Comodo published the private key of his forged certificates. Sometimes, you might have to import the certificate and private keys separately in an unencrypted plain text format to use it on another system. Cost Objects in an RCA Model; Planning Outputs and Primary Costs; Relationships in an RCA Model; Storyboard of the Get Wel. Use the following command to extract the certificate private key from the PFX file. Follow the onscreen instructions to complete the process (a challenge password is optional).
Install MathWorks MatLab R2015b serial code to talk your indicator for ipv6 clean der trish 11 itunes played on the hat. These characters help in the completion of the window 7 OS installation process. I'm looking for someone that can crack the program. With your private key in hand, you can use the following command to see the key's details, such as its modulus and its constituent primes. According to the ssh-keygen man page, the private key is encrypted using 128bit AES. Step 6. Extract the Certificate and Private Key from the Generated Certificate with the Use of OpenSSL. I am attempting to use OpenSSL to Convert a PEM File and RSA Private Key to a PFX file. You need both the public [ ]. New transform media: backup completes baru if he means 400 office could set the addition motion: photography, design property causeway if you are definitely brilliant, you can view both.
It includes: A library method to generate secure random passwords in recipes, using the Ruby SecureRandom library. Linked Documentation: Make sure your certificate matches the private key; Extract the private key and its certificate (PEM format) from a PFX or P12 file (#PKCS12 format) Install a certificate (PEM / X509, P7B, PFX, P12) on. Password Forgot your password. It will also contain the expiration date of the Certificate and details of. You'll just need to make sure that you update the names in the sample code above to match your certificate/private key. Information Exchange) May 15, 2020 46 Comments PFX: PFX defines a file format commonly used to store private with accompanying public key certificates, protected with a password-based symmetric key (standard-PKCS12). The Public key does not need to be secret and is placed in a Certificate Signing Request (CSR) that is a data file also containing your details like your domain name, your company name, your address, your city, your state and your country. Upon success, the unencrypted key will be output on the terminal. There is one popular cryptosystem (textbook RSA) where a simplified (insecure) algorithm uses has public and private keys of the same type, and decryption is.
Extract the X509 certificate. I get this error: unable to load Private Key 139914348455592: error: 0906D06C: PEM routines: PEM_read_bio: no start line. Note: First you will need a linux based operating system that supports openssl command to run the following commands. Introduction Limitations of ROSA RCA Modeling. We highly recommend enabling two-factor authentication. The OpenSSL PRNG was removed (and replaced with ChaCha20-based implementation of arc4random) Preprocessor macros that. Decode OpenSSL PrivateKey with user salt in C#. // read rest of base64 data and you have the RSA key. Wheat: have the english object download. Windows servers [HOST] files to contain the public key file (SSL Certificate) and its unique private key file.
Sometimes you need public / private key encryption though, below will show you how to do it using just OpenSSL. The signature created by OpenSSL could not be verified on other systems with other RSA implementations but verification itself work. Cross platform application development is a challenging task. Follow the steps below to export your Certificate and Private (https://tyronline71.ru/forum/?serial=875) Key. You can view the project here. OpenSSL "ans1parse" - Configuration File for RSA Public Key. However, since specific extensions are not obligatory for simple text files on Linux systems, the private key code can be put. Security Concerns, Backup, and Storage. A [HOST] and [HOST] can be extracted from your Personal Information Exchange file ([HOST]) using OpenSSL.
For most use cases, the secret key need not be exported and should not distributed. Viewed 132k times 39. 13. I have an end-entity/server certificate which have an intermediate and root certificate. We can use OpenSSL command to extract these details from the pfx file. Fix aesni_cbc_sha256_enc_avx2 backtrace info We store a secondary frame pointer info for the debugger in the red zone. SSL will clearly explain the nature of the key block with a - BEGIN RSA PRIVATE KEY - or - BEGIN PUBLIC KEY. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you. If the outfilename is specified, it should be a string holding the name of a file into which the certificates of the persons that signed the messages will be stored in PEM format. Store the CA private key on an encrypted USB stick, such as an IronKey. In some circumstances there may be a need to have the certificate private key unencrypted.
Amazon EC2 stores the public key, and you store the private key. If this has been impossible for you, rest assured, our SSL converter ensures you complete protection of your data, which is never stored. This can be useful if you want to export a certificate (in the pfx format) from a Windows server, and load it into Apache or Nginx for example, which requires a separate public certificate and private key file. There is a problem with this: if your private key is stored unprotected on your own computer, then anybody who gains access to that will be able to generate signatures as if they were you. The first section describes how to generate private keys. Extract a private key from a Gnu Keyring file Oct 4, 2020. In this section, will see how to use OpenSSL commands that are specific to creating and verifying the private keys.
You are about to be asked to enter information that will be incorporated into your certificate request. Crt and key files represent both parts of a certificate, key being the private key to the certificate and crt being the signed certificate. Refer to EVP Key and Parameter Generation for information on generating new keys and associated. It is important to visually inspect you private and public key files to make sure that they are what you expect. Commands related to the generation of private key material for asymmetric encryption. Just change it to PEM encoding before creating the PKCS#12. OpenSSL is a powerful cryptography toolkit that can be used for encryption of files and messages. WSO2 products are shipped with jks key store. Now I would like to add safe connection.
DSA Keygen, Sign File, Verify Sig; Elliptic Curve Encryption/Decryption; Openssl Extracting Public key from Private key RSA. After the 7 days the program stops working. Once you have the public key, the process is to verify that client has a hold on the corresponding private half. Instructions on generating key-pairs using OpenSSL software can be found at this ODK site. When libssh2 is compiled using OpenSSL as the crypto backend, passing this method "undef" as the public key argument is acceptable (OpenSSH is able to extract the public key from the private one). To generate key pairs with OpenSSL, we generate first a private key, then generate a public key depending on the private key. This post describes how. If you just need to generate RSA private key, you can use the above command. It's only one of the ways to generate certs, another way would be having both inside a pem file or another in a p12 container. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00512.warc.gz | CC-MAIN-2021-21 | 20,055 | 57 |
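If a scripted alternative to the CLI is preferred, the two steps described above (create a private key, then derive the public key from it) look like this with the Python cryptography package; the key size and file handling here are illustrative assumptions, not taken from this page.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # the public half is derived from the private key

private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption())
public_pem = public_key.public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo)

print(private_pem.decode())
print(public_pem.decode())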
https://www.andiamogo.com/our-team/cameron-posillico | code | 17 State Street, 8th floor
New York, NY 10004
“In my walks, every man I meet is my superior in some way, and in that I learn from him.” - Ralph Waldo Emerson
Growing up on Long Island and studying at Fordham University in the Bronx, I have always lived in New York. As a student-athlete at Fordham, I learned that I love to work in teams, meet new people, and learn something new every day. At Andiamo, I am excited to join and bring value to a great team of people and help others reach their own career goals.
B.S. Finance, Fordham University
Some of my favorite hobbies are traveling, powerlifting, playing tennis, reading, and, last but not least, eating.
A weird combination of Avatar Aang and Naruto.
Books: Infinite Jest by David Foster Wallace, The Lord of the Rings by J.R.R. Tolkien, Meditations by Marcus Aurelius, Slaughterhouse V by Kurt Vonnegut, The Brothers Karamazov by Fyodor Dostoevsky, The Catcher in the Rye by J.D. Salinger, The Alchemist by Paulo Coelho, The Little Prince by Antoine de Saint-Exupery, The Agony and the Ecstasy by Irving Stone, and In Cold Blood by Truman Capote.
Movies: Good Will Hunting, Spirited Away, and The Lord of the Rings Trilogy
Music: The Beatles, Red Hot Chili Peppers, Green Day, and Sublime
Other things: My dog, Kip
David Foster Wallace | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00007.warc.gz | CC-MAIN-2021-31 | 1,296 | 12 |
https://www.urban-rivals.com/en/community/forum/?mode=viewsubject&forum_page=0&id_subject=10793&subject_page=2 | code | Before contacting the support team, you should first check our knowledge base, the answers to the frequently asked questions (FAQ) are there.
Your browser doesn't support the file upload. You are always able to send your request, but you cannot join screenshot. If you want to send screenshot, you may update your browser.
Fraggle - Tuesday 01/08/2006, 12:26
Imperator - HK's fox on typewriters
In order to fight against unfair players who let the games run out, each player's unfairness will be evaluated at the end of the day.
We take into account expired games on your turn vs fully played games.
Example: if I play 100 games and let 10 run out, I will have a value of 10.
Values between 0 and 5 (inc) will be considered "green".
Values between 5 and 10 (inc) will be considered "orange".
Values above 10 will be considered "red".
Those colors will appear very soon in the players list and on every player's profile.
TouchMeSoftly - Wednesday 04/10/2006, 18:41
Master - Code of Chivalry | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347415315.43/warc/CC-MAIN-20200601071242-20200601101242-00173.warc.gz | CC-MAIN-2020-24 | 983 | 13 |
http://www.techadvisor.co.uk/forum/helproom-1/no-boot-disk-154765/ | code | Just installed a new DVD/CD RW. Switch on pc and get message "Searching for boot record from floppy...not found" and "insert boot disk in A". I don't have a boot disk; where do I go from here? Help!(OS is ME)
A couple of suggestions that will only take a short time to try.
firstly check that you have not left a non-system disk in the floppy drive as this will cause similar messages to the ones you have.
Secondly, try and boot up from a boot disk. This will then allow you to check if the hard drive (and the cd etc) can be accessed (type C: then hit the return key, then type dir and hit return, and the contents of the hard drive should be displayed). If the hard drive cannot be accessed then try the bios to see if it has been detected there. If not then auto detect to see if it is picked up.
if still no joy then it is probably a bad connection as suggested earlier.
You should have set the DVD-RW to Slave using the jumper on the back of the drive before you connected it. That is, if the hard drive and DVD-RW are on the same cable, they may currently both be set to Master. The hard drive should be set to Master and the DVD-RW to Slave on the same cable.
https://talesofotherworlds.wordpress.com/2017/03/25/daily-progress-unity-uvs-and-substance-designer/ | code | I’ll be honest, when it comes to 3d art, making geometry has always been the easiest for me. The hardest part has always been UVs and texturing. To be fair, I haven’t met anyone who likes UVs. The general consensus is that they are a gigantic pain in the ass to do. After all, it is very counterintuitive to figure out, since the whole idea is to translate the 2d coordinates of a texture map to 3d coordinates of a model. It’s not easy to wrap (pun intended) your head around it.
Texture mapping has been an Achilles heel of mine for a bit. That’s for a couple of reasons. The biggest reason is that my Photoshop skills are pretty useless. I’ve never been very good at Photoshop, and nor do I like it very much. Now some people are really good at Photoshop, and have used it to make some amazing textures. I generally like to focus on geometry and less on tools like Photoshop, to my own disadvantage, I do suppose. Up until now, I had been using a free program called CrazyBump (it’s an amazing program, but it’s only as good as what you put into it) to help me make textures based off of what I can download off of the Internet. It has limits, namely, the texture itself. Beyond a point, it’s only as good as the texture itself, and sometimes, you just can’t get a specific texture.
However, the texturing issue does have at least a partial solution. Recently I stumbled across Allegorithmic (it was actually a suggestion of the helpful folks on gamedev.net and of colleagues of mine). Substance Designer and Substance Painter are what I’m talking about. I haven’t started using Substance Painter just yet (mainly because of those pesky UVs), but Substance Designer has made my life much much easier recently. It’s a node/flow-chart based texture designer, and since my background is mostly in computer science and programming, it’s really a nice program for someone like me. I’ve been able to generate better textures than with just using CrazyBump (again, it’s still a great program, and very good at what it’s meant for). I’ve heard that with Substance Painter, it’s even better. Pretty soon I will use Substance Painter once those damned UVs are unfolded better.
So far, my progress has been focused on UVs and textures. I’ve more or less finished designing the base textures for both planes. Here’s what they look like in the program:
And this texture is for the engines of the Aerial Skirmisher:
The best part of Substances is how easily they integrate into Unity. I’ve tried out all the textures on my models in Unity. Admittedly, the UVs are something I’m still working on, but here’s some initial pictures of what they look like:
So while the UVs still need some work (those damned UVs!), the initial results of my textures do look fairly good. I will probably also tweak the textures in Substance Designer as I continue to polish the UVs. The other thing I will certainly do is paint these models in Substance Painter, where I hope to add weathering effects, edge wear, and decals to both aircraft.
Stay tuned for more updates on the Mahayudh Chronicles Episode 0. I will be releasing a 30 second short soon! | s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650262.65/warc/CC-MAIN-20180324112821-20180324132821-00617.warc.gz | CC-MAIN-2018-13 | 3,167 | 8 |
http://reviews.llvm.org/D45203?id=149965 | code | This patch handles back-end folding of generic patterns created by D48067 in cases where the instruction isn't a straightforward packed values rounding operation, but a masked operation or a scalar operation.
Can we just do this with isel patterns like we do for ADDSS?
Can you generate %k from a compare instruction rather than passing in a X x i1 type. It will make the code a little cleaner since we won't have to extend and split the mask in such crazy ways.
I've considered that, but decided to fold it here. To do it in .td patterns we'd need to add 4 new patterns in 2 separate files. 32 and 64 bit patterns would need to be added for VROUNDS* on AVX and ROUNDS* on SSE4.1. Writing this pattern here both makes it easier to track and produces less check complexity.
Added zero extension of mask to i32 in the masked scalar tests and added more ways to represent the mask, testing the 8-bit mask pattern among others. 16-bit mask patterns removed due to scalar_to_vector errors.
Corrected the scalar pattern predicates, added packed zero-masked instruction patterns and tests to cover zero-masking. Changed the RUN line of vec_floor.ll to give different results for AVX512F and AVX512VL where needed (e.g. in 128- and 256-bit masked operations). | s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00179.warc.gz | CC-MAIN-2021-10 | 1,251 | 6 |
https://forum.openwrt.org/t/ipv6-6in4-tunnel-connectivity-breaks-when-openvpn-client-active/47483 | code | My ISP in the UK doesn't have native IPv6 connectivity so I use a 6in4 tunnel from Hurricane Electric. I've recently come from DD-WRT where I had various setups and configurations. I'm slowly learning applying the same with OpenWRT but hitting a few road blocks. I have configured my 6in4 tunnel on OpenWRT and it's working fine.
The main issue I've found is that, when used in conjunction with an OpenVPN configuration, it breaks my 6in4 tunnel connectivity. My VPN provider is Private Internet Access; I loosely followed the steps below, but ended up using the advanced configuration option and editing /etc/config/openvpn to get the right config.
When this OpenVPN connection is active, the VPN works, but it breaks my IPv6. Private Internet Access doesn't currently handle IPv6 traffic itself, only IPv4.
Under DD-WRT I was able to have traffic over IPv4 going through the VPN and IPv6 going over Hurricane Electric without any issues; however, it seems I need additional configuration to allow this to happen, possibly firewall or routing related?
Any ideas what areas I should be looking at to maintain connectivity to my IPv6 tunnel with the VPN active?
Thanks. I added the static route as below, but it doesn't appear to be the issue, or it's something else. Checking test-ipv6.com shows no IPv6 address. When I disable the VPN, it shows my Hurricane Electric IPv6 address.
The OpenVPN setup seems a little different to DD-WRT. The VPN appears to include the router itself: looking at traceroute, I can see the traffic going through the VPN when I SSH into the router. Under DD-WRT the router itself did not have its traffic routed through the VPN, but other connected clients did. Is this the normal behaviour with OpenWRT?
I'm obviously missing some configuration somewhere, but I don't really know where to start.
Are you initialising your henet interface through scripts? I just followed the general OpenWRT example guidance from HE.net and swapped my /64 prefix for the /48, so I could create subnets for other interfaces.
I don't actually use an IPv6 tunnel broker, because I provide IPv6 via VPN from my VPS, which has native IPv6, and my router has no public IPv4 and no IPv6.
I usually use scripts to make the solution reproducible, but it doesn't really matter.
On the other hand, if you have dynamic IPv4 gateway, you probably need to utilize hotplug to make the route work properly.
I figured it wouldn't, given it works fine without OpenVPN active. I think there's further configuration required outside of my IPv6 problem that I'm not used to here. The OpenVPN behaviour is different on OpenWRT. Under DD-WRT, the router itself does not have its traffic routed through the VPN. Even when OpenVPN is active, traceroute would use the WAN connection no matter what. It was clients connected via policy-based routing that then had their traffic routed through the VPN.
It seems the router itself goes through VPN also here. Under DD-WRT this would cause all kinds of hell. This seems to be where the problem is here, because it seems the IPv6 tunnel traffic is possibly being routed through the VPN which isn't the intended goal.
Furthermore, I'd imagine that with the router itself going through OpenVPN, this will cause problems for the HE tunnel side being able to ping and establish a connection with the WAN side when OpenVPN is active. I've somewhat confirmed this could be the case. I also have a broadband monitor set up through thinkbroadband.com: when the VPN is active, it cannot ping the WAN IP, probably because the traffic isn't being routed to the WAN anymore.
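To confirm whether the HE tunnel endpoint is being pulled into the VPN, one option is to ask the kernel which device it would use for that destination. A rough sketch follows; the endpoint address is a placeholder, and on a stock OpenWrt image without Python the underlying command, ip route get <address>, can simply be run on its own:

import subprocess

TUNNEL_ENDPOINT = "203.0.113.1"  # placeholder: substitute your HE tunnel server's IPv4 address

# Ask the kernel which route (and therefore which device) this destination would take right now.
out = subprocess.run(["ip", "route", "get", TUNNEL_ENDPOINT],
                     capture_output=True, text=True, check=True).stdout
print(out.strip())
if " dev tun0 " in out:
    print("Endpoint would be routed via the VPN; add a static route via the WAN gateway.")
else:
    print("Endpoint is routed outside the VPN tunnel.")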
Here's the info. Applying the static route with a gateway does seem to change things on closer inspection. I get 0/10 on test-ipv6.com, but it can see my HE IPv6 with OpenVPN connected; before, it couldn't see it at all.
0.0.0.0/1 via 10.8.10.5 dev tun0
default via 220.127.116.11 dev eth1.2 proto static src 82.15.36.xxx
10.8.10.1 via 10.8.10.5 dev tun0
10.8.10.5 dev tun0 proto kernel scope link src 10.8.10.6
18.104.22.168/24 dev eth1.2 proto kernel scope link src 82.15.36.xxx
22.214.171.124 via 126.96.36.199 dev eth1.2
188.8.131.52/1 via 10.8.10.5 dev tun0
192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1
192.168.10.0/24 dev wlan1-1 proto kernel scope link src 192.168.10.1
184.108.40.206 via 220.127.116.11 dev eth1.2 proto static
Still some configuration issues though:
root@linksys-wrt3200acm:~# traceroute ipv6.google.com
traceroute: bad address 'ipv6.google.com'
root@linksys-wrt3200acm:~# traceroute6 ipv6.google.com
traceroute6: bad address 'ipv6.google.com' | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00199.warc.gz | CC-MAIN-2022-21 | 4,558 | 31 |
https://github.com/YaleSTC/Service-Now-Tweaks | code | STC Service Now Tweaks is a Chrome Extension which improves the experience of using Service Now for students employed by the Yale Student Technology Collaborative.
- Descriptive tooltips appear for form item labels on the "New Incident" page
- Pressing 'enter' within Short Description should automatically search for a knowledge article (tab should still go to the next field like usual).
- The title of the ticket should appear at the top in the blue bar
- The student's NetID should appear directly on the incident page somehow
- Next to Client Name, there should be an icon which links to the Yale Facebook entry for that person. Perhaps this could even appear as a tooltip? | s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00461.warc.gz | CC-MAIN-2021-49 | 677 | 6 |
https://devforum.roblox.com/t/having-an-error/1664471 | code | Sup guys im having a problem with a script it is an respawning npc script it works but in the output it says an error.
my code is this
local model = script.Parent
local humanoid = model.Humanoid
while true do
if humanoid.Health<1 then
You’re trying to :clone() a reference that is itself a :clone() and never has its parent set, so you’re potentially attempting to reference something the engines’ garbage collector has already cleaned up.
Try changing the reference npc=script.Parent:clone() to just npc = script.Parent and you’ll be able to properly clone it. | s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646652.16/warc/CC-MAIN-20230610233020-20230611023020-00311.warc.gz | CC-MAIN-2023-23 | 569 | 8 |
https://db0nus869y26v.cloudfront.net/en/Consonant | code | In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract. Examples are [p] and [b], pronounced with the lips; [t] and [d], pronounced with the front of the tongue; [k] and [g], pronounced with the back of the tongue; [h], pronounced in the throat; [f], [v], and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels.
Since the number of speech sounds in the world's languages is much greater than the number of letters in any one alphabet, linguists have devised systems such as the International Phonetic Alphabet (IPA) to assign a unique and unambiguous symbol to each attested consonant. The English alphabet has fewer consonant letters than the English language has consonant sounds, so digraphs like ⟨ch⟩, ⟨sh⟩, ⟨th⟩, and ⟨ng⟩ are used to extend the alphabet, though some letters and digraphs represent more than one consonant. For example, the sound spelled ⟨th⟩ in "this" is a different consonant from the ⟨th⟩ sound in "thin". (In the IPA, these are [ð] and [θ], respectively.)
The word consonant comes from Latin oblique stem cōnsonant-, from cōnsonāns 'sounding-together', a calque of Greek σύμφωνον sýmphōnon (plural sýmphōna, σύμφωνα).
Dionysius Thrax calls consonants sýmphōna (σύμφωνα 'sounded with') because in Greek they can only be pronounced with a vowel.[a] He divides them into two subcategories: hēmíphōna (ἡμίφωνα 'half-sounded'), which are the continuants,[b] and áphōna (ἄφωνος 'unsounded'), which correspond to plosives.[c]
This description does not apply to some languages, such as the Salishan languages, in which plosives may occur without vowels (see Nuxalk), and the modern concept of 'consonant' does not require co-occurrence with a vowel.
The word consonant may be used ambiguously for both speech sounds and the letters of the alphabet used to write them. In English, these letters are B, C, D, F, G, J, K, L, M, N, P, Q, S, T, V, X, Z and often H, R, W, Y.
In English orthography, the letters H, R, W, Y and the digraph GH are used for both consonants and vowels. For instance, the letter Y stands for the consonant/semi-vowel /j/ in yoke, the vowel /ɪ/ in myth, the vowel /i/ in funny, the diphthong /aɪ/ in sky, and forms several digraphs for other diphthongs, such as say, boy, key. Similarly, R commonly indicates or modifies a vowel in non-rhotic accents.
This article is concerned with consonant sounds, however they are written.
Consonants and vowels correspond to distinct parts of a syllable: The most sonorous part of the syllable (that is, the part that's easiest to sing), called the syllabic peak or nucleus, is typically a vowel, while the less sonorous margins (called the onset and coda) are typically consonants. Such syllables may be abbreviated CV, V, and CVC, where C stands for consonant and V stands for vowel. This can be argued to be the only pattern found in most of the world's languages, and perhaps the primary pattern in all of them. However, the distinction between consonant and vowel is not always clear cut: there are syllabic consonants and non-syllabic vowels in many of the world's languages.
One blurry area is in segments variously called semivowels, semiconsonants, or glides. On one side, there are vowel-like segments that are not in themselves syllabic, but form diphthongs as part of the syllable nucleus, as the i in English boil [ˈbɔɪ̯l]. On the other, there are approximants that behave like consonants in forming onsets, but are articulated very much like vowels, as the y in English yes [ˈjɛs]. Some phonologists model these as both being the underlying vowel /i/, so that the English word bit would phonemically be /bit/, beet would be /bii̯t/, and yield would be phonemically /i̯ii̯ld/. Likewise, foot would be /fut/, food would be /fuu̯d/, wood would be /u̯ud/, and wooed would be /u̯uu̯d/. However, there is a (perhaps allophonic) difference in articulation between these segments, with the [j] in [ˈjɛs] yes and [ˈjiʲld] yield and the [w] of [ˈwuʷd] wooed having more constriction and a more definite place of articulation than the [ɪ] in [ˈbɔɪ̯l] boil or [ˈbɪt] bit or the [ʊ] of [ˈfʊt] foot.
The other problematic area is that of syllabic consonants, segments articulated as consonants but occupying the nucleus of a syllable. This may be the case for words such as church in rhotic dialects of English, although phoneticians differ in whether they consider this to be a syllabic consonant, /ˈtʃɹ̩tʃ/, or a rhotic vowel, /ˈtʃɝtʃ/: Some distinguish an approximant /ɹ/ that corresponds to a vowel /ɝ/, for rural as /ˈɹɝl/ or [ˈɹʷɝːl̩]; others see these as a single phoneme, /ˈɹɹ̩l/.
Other languages use fricative and often trilled segments as syllabic nuclei, as in Czech and several languages in Democratic Republic of the Congo, and China, including Mandarin Chinese. In Mandarin, they are historically allophones of /i/, and spelled that way in Pinyin. Ladefoged and Maddieson[page needed] call these "fricative vowels" and say that "they can usually be thought of as syllabic fricatives that are allophones of vowels". That is, phonetically they are consonants, but phonemically they behave as vowels.
Many Slavic languages allow the trill [r̩] and the lateral [l̩] as syllabic nuclei (see Words without vowels). In languages like Nuxalk, it is difficult to know what the nucleus of a syllable is, or if all syllables even have nuclei. If the concept of 'syllable' applies in Nuxalk, there are syllabic consonants in words like /sx̩s/ (/s̩xs̩/?) 'seal fat'. Miyako in Japan is similar, with /f̩ks̩/ 'to build' and /ps̩ks̩/ 'to pull'.
Each spoken consonant can be distinguished by several phonetic features:
All English consonants can be classified by a combination of these features, such as "voiceless alveolar stop" [t]. In this case, the airstream mechanism is omitted.
Some pairs of consonants like p::b, t::d are sometimes called fortis and lenis, but this is a phonological rather than phonetic distinction.
Consonants are scheduled by their features in a number of IPA charts:
IPA: Pulmonic consonants
IPA: Non-pulmonic consonants
IPA: Co-articulated consonants
The recently extinct Ubykh language had only 2 or 3 vowels but 84 consonants; the Taa language has 87 consonants under one analysis, 164 under another, plus some 30 vowels and tone. The types of consonants used in various languages are by no means universal. For instance, nearly all Australian languages lack fricatives; a large percentage of the world's languages lack voiced stops such as /b/, /d/, /ɡ/ as phonemes, though they may appear phonetically. Most languages, however, do include one or more fricatives, with /s/ being the most common, and a liquid consonant or two, with /l/ the most common. The approximant /w/ is also widespread, and virtually all languages have one or more nasals, though a very few, such as the Central dialect of Rotokas, lack even these. This last language has the smallest number of consonants in the world, with just six.
In rhotic American English, the consonants spoken most frequently are /n, ɹ, t/. (/ɹ/ is less common in non-rhotic accents.) The most frequent consonant in many other languages is /p/.
The most universal consonants around the world (that is, the ones appearing in nearly all languages) are the three voiceless stops /p/, /t/, /k/, and the two nasals /m/, /n/. However, even these common five are not completely universal. Several languages in the vicinity of the Sahara Desert, including Arabic, lack /p/. Several languages of North America, such as Mohawk, lack both of the labials /p/ and /m/. The Wichita language of Oklahoma and some West African languages, such as Ijo, lack the consonant /n/ on a phonemic level, but do use it phonetically, as an allophone of another consonant (of /l/ in the case of Ijo, and of /ɾ/ in Wichita). A few languages on Bougainville Island and around Puget Sound, such as Makah, lack both of the nasals [m] and [n] altogether, except in special speech registers such as baby-talk. The 'click language' Nǁng lacks /t/,[d] and colloquial Samoan lacks both alveolars, /t/ and /n/.[e] Despite the 80-odd consonants of Ubykh, it lacks the plain velar /k/ in native words, as do the related Adyghe and Kabardian languages. But with a few striking exceptions, such as Xavante and Tahitian—which have no dorsal consonants whatsoever—nearly all other languages have at least one velar consonant: most of the few languages that do not have a simple /k/ (that is, a sound that is generally pronounced [k]) have a consonant that is very similar.[f] For instance, an areal feature of the Pacific Northwest coast is that historical *k has become palatalized in many languages, so that Saanich for example has /tʃ/ and /kʷ/ but no plain /k/; similarly, historical *k in the Northwest Caucasian languages became palatalized to /kʲ/ in extinct Ubykh and to /tʃ/ in most Circassian dialects.
The following pages include consonant charts with links to audio samples. | s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510208.72/warc/CC-MAIN-20230926111439-20230926141439-00465.warc.gz | CC-MAIN-2023-40 | 9,257 | 24 |
https://github.com/DrTom/digraph-demo | code | DAG (Directed Acyclic Graph) - Demo
This project demonstrates the implementation of a directed and acyclic graph structure. It is essentially an improvement over the encoding, queries and some technical issues as present in the act-as-dag gem.
Read the article Encoding and Querying Graphs in the Relational Model for more information.
The inner workings of this implementation are such that it won't fall over for large and even huge graphs. It essentially (unlike the act-as-dag gem) scales very well. I would recommend replacing some queries with CTEs and using DB triggers for maximum performance on really huge networks. Read the article for a start.
Extending to General Graphs
The acyclic structure, and other graph properties are enforced by hooks in the
app/model/arc.rb. The implementation is flexible and any of the constraints
can be disabled at will:
before_save :prevent_back_loop
before_save :prevent_cycle
before_save :prevent_multi_arc
before_save :prevent_self_loop
Usage By Example
Node.create
Node.first.successors << Node.create
Node.first.successors
Node.first.graph_descendants
Node.last.graph_ancestors
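Conceptually, graph_descendants and graph_ancestors answer reachability queries over the arcs. A naive in-memory equivalent (purely illustrative; the gem answers these with SQL over the arc encoding described in the article) could look like this:

from collections import deque

def descendants(node, successors):
    # successors: dict mapping a node to the list of its direct successors
    seen, queue = set(), deque(successors.get(node, []))
    while queue:
        n = queue.popleft()
        if n not in seen:
            seen.add(n)
            queue.extend(successors.get(n, []))
    return seen

print(descendants("a", {"a": ["b"], "b": ["c"], "c": []}))  # {'b', 'c'}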
There are a few generators to create graphs to explore or play with.
rake generator:twostar N=10, creates two connected stars in a particular manner, see the corresponding figure in the article.
rake generator:cycle N=3, tries to create a cycle; results in a chain if before_save :prevent_cycle is not disabled
rake generator:er N=100 M=500, creates an Erdős–Rényi-like random network
N is the target number of nodes, and likewise M the target number of arcs (if applicable). Defaults are in place.
The customary descendants and ancestors terms as used in graph theory clash with existing ruby methods. They are replaced by graph_descendants and graph_ancestors respectively. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823872.13/warc/CC-MAIN-20181212112626-20181212134126-00027.warc.gz | CC-MAIN-2018-51 | 1,813 | 20 |
http://security.sys-con.com/node/2628667 | code | By Pete Mastin
April 27, 2013 03:00 PM EDT
In our Internet-driven world, both organizations and consumers have come to expect fast, always-on data access from any device. As a result, content providers are tasked with delivering massive files and streaming media to tablets and smartphones while simultaneously ensuring superior website performance. To meet the challenges of this digital data deluge, Content Delivery Networks (CDNs) are often used to efficiently distribute large amounts of content to online users.
The emergence of cloud computing has allowed companies to embrace new, cost-effective approaches to building out their IT infrastructure. The challenge of scaling is no longer prohibitively expensive, and the ability to do so in near-real time allows small and medium-sized businesses to more effectively compete with larger enterprises for market share.
While the cloud can deliver important agility benefits, it still requires careful, strategic planning to address potential challenges, including security and data bottlenecks. CDNs can help to address these issues, enabling cloud networks to meet today's content delivery demands while satisfying customers' expectations for an optimal online experience. When used together, the cloud and CDNs can actually offer the best of both worlds.
The Role of CDNs
A Content Delivery Network is a distributed network of servers and file storage devices deployed to place content and applications in geographic proximity to users, which reduces the load on origin site infrastructure and bandwidth. CDNs are highly flexible and address a wide range of needs - making it possible to simulate a broadcast video network over the Internet, cache large files for faster delivery or optimize entire websites. A CDN is a critical element of modern infrastructure deployment to create a satisfying online user experience.
One of the core functions of a CDN is to optimize media delivery, which involves the streaming of live events and prerecorded video and audio content. A CDN provides content creators and publishers with a robust infrastructure solution for online media distribution to geographically dispersed end users.
Another key benefit of a CDN is its ability to deliver large files and software without the capital expense of building a global network to achieve sufficient bandwidth. Static site caching (also known as reverse proxy caching) is also a prime feature that allows the CDN to point its distributed network at an origin server and cache the content of that origin in geographically diverse areas through a mechanism called GEO-DNS. Though many factors such as current loads contribute to this routing process, the main factor is the proximity of information. The result is much faster page loads and an improved end-user experience.
CDNs offer a multitude of ways to create a dependable, high-quality online user experience by addressing single-point-of-failure, global delivery and scalability concerns. This raises the question of whether the opportunities created by the cloud impact or even overlap with the capabilities of CDNs. If the cloud offers a more economically feasible way to bring content closer to end users, is the CDN still a useful delivery platform? In a word, yes.
Enter the Cloud
In the days before cloud, the main way to address issues regarding performance, availability and scale was by decreasing the physical distance between the origin servers and the end users. Existing infrastructure was optimized and then physically replicated in other geographic locations. Aside from the large capital investments required by this approach, there were other drawbacks and challenges - while companies controlled their infrastructure, they had no control over the network between their servers and the end user, and also had to determine how to replicate their data globally.
The cloud offers businesses a less costly way to expand infrastructure - the ability to scale virtually, on demand, without having to build or buy costly hardware. Businesses can now replicate their infrastructure in the desired geographic locations by purchasing a virtual machine with the required specifications - essentially, buying a "slice" of someone else's pre-built infrastructure. This proves a more cost-effective way to scale and reduce latency for geographically dispersed areas.
Both the cloud and CDNs have evolved into utility platforms, each designed for specialized purposes. The cloud is a utility computing platform consisting of large physical stacks of computational resources or multi-tenant slices of a pre-built mass computational array. This type of dynamic computing power is ideal for processing big data and business intelligence problems, and evolved from the concept of mainframes.
Conversely, a CDN is a utility delivery platform, specializing in one-to-many distribution as opposed to the two-way interactive exchange performed by utility compute platforms. In contrast to the cloud, CDNs are designed specifically to deliver content from servers to the end users as part of a repeatable process. While dynamic content needs to be computed, large amounts of static content also need to be delivered, but only the dynamic portion needs to come from the origin.
Using cloud and CDNs together creates a holistic system that meets the demands for content delivery as well as economical computing power.
CDN POPs and Cloud Availability Zones
Contrary to what the name implies, the cloud has a physical structure, and the proximity and placement of equipment will have an impact on the results. Users in different latitudes/longitudes will have different online experiences depending on their physical distance from the point of origin. Regardless of the provider, most choose to offer cloud availability by region as opposed to state or metropolitan area. Architecturally, having multiple availability zones for cloud and CDNs is beneficial to localize transactions and reduce latency.
Using a CDN extends the reach of the origin server and places cached website content, multimedia or other large files in closer proximity to the end user. CDNs accomplish this by using origin and edge POPs (Points of Presence) that have storage, caching and transfer capabilities. Incoming requests for content are intercepted by the DNS service, which verifies the user's location, and the content is then delivered from the closest POP. By distributing content via a one-to-many repeatable process, end users can consume content more efficiently without increasing the load on origin site infrastructure.
If a POP becomes overwhelmed, the request is routed to the next available POP, which then fulfills the request for content. In either scenario, the POP distributes the local copy via the most efficient route without placing any burden on the origin server. CDN POPs allow for scale during traffic spikes, whereas a server can become overwhelmed and vulnerable once a certain threshold of concurrent interactions is reached.
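As a rough illustration of that selection step (not any particular CDN's implementation; the POP names and coordinates below are invented), the DNS layer can be thought of as mapping a resolved client location to the closest healthy POP:

# Hypothetical POPs: name -> (latitude, longitude)
POPS = {"nyc": (40.7, -74.0), "lon": (51.5, -0.1), "sgp": (1.35, 103.8)}

def nearest_pop(client_lat, client_lon, healthy=("nyc", "lon", "sgp")):
    # Squared coordinate distance is enough to show the selection idea;
    # a real implementation would use geo-IP data, health checks and current load.
    def dist(pop):
        lat, lon = POPS[pop]
        return (lat - client_lat) ** 2 + (lon - client_lon) ** 2
    return min(healthy, key=dist)

print(nearest_pop(48.9, 2.4))  # a client near Paris resolves to the "lon" POP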
How CDNs Can Accelerate Cloud Deployments
Without CDNs, the cloud would not be able to meet the performance expectations of today's online users. In fact, CDNs can help alleviate many obstacles to cloud adoption by addressing several key concerns:
Security. A CDN can help ward off raw volume DDoS attacks that can leave web servers inaccessible to users. CDNs essentially absorb the load and prevent the servers from becoming overwhelmed by abnormally high traffic volume. Without a CDN to act as a buffer against these attacks, cloud servers would be exponentially more vulnerable. This is particularly important for eCommerce websites with servers that store personal data and account information.
Availability of service. If an eCommerce server goes down, the effect will not be immediately apparent if content is cached in CDN POPs. By setting Time to Live (TTL), content providers can control how long a piece of static content will remain cached. Determining TTL depends on the nature of the content and how often it needs to change. CDN edge POPs will continue to deliver the cached content for this duration and will check with the server after this time period expires to see if the content has changed. (A toy sketch of this check appears after this list.)
Data transfer bottlenecks. CDNs help prevent data transfer bottlenecks by efficiently delivering content through multiple egress points to distribute the load. By leveraging a CDN, businesses can scale the egress throughput, which allows the core infrastructure to use its bandwidth for the compute traffic.
Performance assurance. With the growing use of tablets, smartphones and other devices, content providers must be able to deliver streaming media and large amounts of data with minimal latency, or risk losing customers to the competition. Though the smartphone and tablet industry owes its existence to the capabilities of the cloud, delivering a high-quality user experience would not be possible without a CDN. Once content is cached in a CDN POP, a repeatable process delivers content from one-to-many, resulting in lower latency for end users and better server performance.
Scalable storage. CDN file storage devices offer flexibility options that scale as needed. In contrast, cloud storage is available in fixed amounts that can only be scaled up or down by contacting your cloud storage provider. CDN storage devices can scale up based on the size of the content packet to be distributed, resulting in increased operational agility for the business.
Scaling. A CDN increases the capacity of infrastructure, which means servers won't get overwhelmed when video goes viral or when an eCommerce website experiences unexpected traffic spikes. The ability to offload rich media to the CDN allows the compute platform to run more efficiently, and by shouldering the load, the CDN reduces the risk of web servers becoming overwhelmed.
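Returning to the TTL behaviour described under 'Availability of service' above, here is a toy sketch of the edge logic; the function names are illustrative, and a real CDN would revalidate with a conditional request rather than simply refetch:

import time

cache = {}  # url -> (content, fetched_at, ttl_seconds)

def fetch_from_origin(url):
    return "<content of " + url + ">"  # stand-in for a real origin request

def get(url, ttl=300):
    entry = cache.get(url)
    if entry and time.time() - entry[1] < entry[2]:
        return entry[0]               # still within TTL: serve the cached copy
    content = fetch_from_origin(url)  # expired or missing: go back to the origin
    cache[url] = (content, time.time(), ttl)
    return content

print(get("https://example.com/banner.png"))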
Cloud-CDN Use Cases
Organizations within most industries can benefit from using the cloud and CDNs simultaneously, particularly those with high-performance and low latency requirements as well as a geographically dispersed user base. Below are several common use case examples:
Online Gaming. Low latency, high performance and scalability are mission-critical for multi-player online video games. If players experience downtime or delays, they will quickly abandon the game in search of a new one. Using a CDN can create a high-quality game-playing experience through more efficient content delivery and by helping to manage traffic spikes.
Media and entertainment/OTT content providers. More consumers are choosing to watch their favorite shows and movies via online channels instead of traditional distribution networks. As a result, the ability to efficiently and securely stream video to global locations is critical for the media and entertainment industry as well as OTT (over-the-top) video providers.
Online retail. Website performance, availability, security and scalability are critical factors for online retailers. If content is slow to load or unavailable, consumers will simply take their business elsewhere. For companies that generate most of their revenue online, even minimal downtime can have drastic effects on profits and consumer loyalty. CDNs improve website availability by allowing consumers to browse online catalogs with little server interaction. Offloading the rich media content onto the CDN allows the cloud servers to perform better, resulting in a more efficient purchasing process for consumers.
Cloud and CDN: Symbiotic Relationship
Even though the cloud revolutionized IT infrastructure from a cost perspective, cloud adoption has actually created an increased need for CDNs. The massive amounts of computing power now available via the cloud requires efficient content distribution to meet user expectations. While the cloud allows companies to extend the reach of origin sites into new geographic areas, the result is greater demand for improved performance.
The line between the cloud and CDNs has indeed become blurry, and whether they will continue to exist as we know them today remains to be seen. If they eventually merge into a single platform for the deployment of global applications, the resulting combination of massive computing and delivery capabilities will fundamentally change the face of the Internet.
Regardless of the technology platform and changes that may occur, in today's global economy, high-performance content delivery is a must for any website or online application serving geographically dispersed end users. Using CDNs and cloud together - in whatever form this ultimately takes - ensures a best-of-both-worlds combination for an optimal online user experience. | s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274985.2/warc/CC-MAIN-20160524002114-00154-ip-10-185-217-139.ec2.internal.warc.gz | CC-MAIN-2016-22 | 23,378 | 77
https://www.sccoa.com/forums/threads/the-answer-given-for-the-random-question-was-incorrect.140062/ | code | See that the search function isn't working anymore. I keep getting "The answer given for the random question was incorrect". I would have searched for that to see if someone else reported it but I can't.
I'm having problems trying to answer this random question on landcforum.com
The topic in this forum is lift and ... ? I don't know the answer to that question, and it still comes up even when I attempt to get information on the forum administrator.
http://www.wiichat.com/forum/entertainment-lounge/30574-videogame-music-guitar.html | code | Caleb Elijah is possibly the best guitarist who performs videogame music that I've ever seen (in a solo setting.) If you enjoy acoustic/electric guitar, and you enjoyed music from NES/SNES games, give him a look.
My predictions are that no one here will have a bad word to say about the young man.
"My life is a chip in your pile -- Ante up!" ~Setzer Gabbiani
D/P Name: Will // Code: 5240 8536 3936 | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719041.14/warc/CC-MAIN-20161020183839-00188-ip-10-171-6-4.ec2.internal.warc.gz | CC-MAIN-2016-44 | 398 | 4 |
https://www.oscommerce.com/forums/topic/58541-contribution-simple-template-system-sts/page/233/ | code | soul21mate Posted December 11, 2012 Share Posted December 11, 2012 OK, i like this but could somebody helps me please ? I am confused with the instructions. I fresh installed with simple script from the host cpanel. How do I know which version I got ? Then what steps I have to do please after ? I do not have much time to muck around and not really that great with computer programming ? I got into this contribution because I want to add adsense. Is it possible to add like amazon product link here too ? How do I add the amazon product link the easiest way into the new product box containing image and link if possible ? Thank you for your information. Kind regards Al Quote Link to comment Share on other sites More sharing options...
| s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00562.warc.gz | CC-MAIN-2023-23 | 861 | 3
https://www.consul.io/downloads | code | brew tap hashicorp/tap
brew install hashicorp/tap/consul
Note for ARM users:
- Use Armelv5 for all 32-bit armel systems
- Use Armhfv6 for all armhf systems with v6+ architecture
- Use Arm64 for all v8 64-bit architectures
The following commands can help determine the right version for your system:
$ uname -m
$ readelf -a /proc/self/exe | grep -q -c Tag_ABI_VFP_args && echo "armhf" || echo "armel"
A beta for consul v1.10.0 is available! The release can be downloaded here
Follow step-by-step tutorials on the essentials of Consul. | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00449.warc.gz | CC-MAIN-2021-21 | 555 | 12 |
https://madrasathletics.org/three/decimal/4812-qdz9.php | code | Board Policies And Guidelines
But opting out of some of these cookies may have an effect on your browsing experience. In Python, the function to implement the ceiling function is the math. Can have a number of these advanced calculations can convert it? Look at this example in which two sums of money are compared. Optional number than lists are various multiplication to decimal places to the same even instead of places in the place value, try smartick for any level. Multiplication of decimals relies on knowing how to multiply whole numbers, understanding place value, and appreciating the various multiplication situations involving decimals. Using the lambda function in the sort method, the key extends its capabilities and becomes more versatile.
All the free Rounding decimal places worksheets in this section support the Elementary Math Benchmarks. Your profile picture is used as the logo for your personal space. We start by placing a zero to the left of the decimal and continue by filling in the numbers to the right, as we did above. This article type requires a template reference widget. It is due to regulations set forth by the local government. If a number has no place accuracy and there is a string of zeroes ending the number on the right, the significant digits are those digits to the left of the string of zeroes. Estimating the product lets us verify that the placement of the decimal point is correct, and that we have a reasonable answer. Even though tuples are defined using brackets, you still use square brackets to index and slice tuples, just like strings and lists. Each challenge involves using rounding knowledge and properties of numbers to work out the correct answer. Round the number off to the nearest whole number. Python that is helpful when an iterable feature needs to be added to and reduced to a single cumulative value. Trailing zeros are those zeroes that are at the right of the decimal number. The answer probably depends on the regulations set forth by the local government!
If the whole number parts are both equal, we compare the decimal portions of the numbers. Returns a value rounded down to a specific number of decimal places. The first is compact and the second easier to work with. To solve this, we can make use of numpy module and use numpy. Always make sure you understand what you need to round to before you round. This may negatively impact your site and SEO.
This is the place where the Muscovite criminals are banished to, if they are not put to death. The amount of digits has now been changed for our entire R session. Name of the column, numeric literal, or numeric expression. Wycliffe Preparatory School in Stonehouse, Gloucestershire. How about measuring distance using the metric system. The Excel ROUND function returns a number rounded to a given number of digits.
You can not unpublish a page when published subpages are present. Was a rounding decimals off your modeling skills, three of a bias. You randomly pick three coins and place them on a counter. For example, round a number up to three decimal places. If it helps you to line up the columns, you can write a zero in the hundredths column of the first number, or you can leave that box empty. The code snippet below shows how to display n digits.
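The snippet itself did not survive in this copy; a minimal Python sketch of rounding and displaying a value to three decimal places (the value is just an example) might look like this:

import math

value = 2.71828

print(round(value, 3))                   # 2.718  - round to 3 decimal places (ties go to even)
print(f"{value:.3f}")                    # '2.718' - fixed-point display with 3 digits
print(math.floor(value * 1000) / 1000)   # 2.718  - always round down
print(math.ceil(value * 1000) / 1000)    # 2.719  - always round up (ceiling)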
All the place values of the numbers depend on position on the left or right side of a decimal point. These types of decimal numbers are also known as exact decimal numbers. Create a specific place value to solve this page, the next we can hide the video is what he said, three of decimal places. Before your start: if you round a number, you lose precision. Therefore, there must be three decimal digits in the product. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website. All three of these techniques are rather crude when it comes to preserving a reasonable amount of precision for a given number. Zeros to the left of the decimal must be included. Find the sum of the measurements and round it off to the nearest whole number. Mozart: Which few did you have in mind, Majesty?
FIX will round down, FUP will round up.
How do you handle situations where the number of positive and negative ties are drastically different? In the above example we made new lists with the list comprehension. You may create Decimal instances from strings containing the decimal numbers you need in order to maintain exact precision. Would you write the decimal number out or leave it in numerals? The second is the number of decimal places to round to. It is important to always keep in mind that the common fraction form, the decimal form and the percentage form are just different ways to represent exactly the same numbers. Round a negative number to the nearest integer. What are the units used for the ideal gas law? Round a negative number up to the nearest integer.
These are all primitive functions.
It is mandatory to procure user consent prior to running these cookies on your website. RMP is a registered mark of the Project Management Institute, Inc. This article is free for everyone, thanks to Medium Members. In my example, the positive numbers are the values above one. We multiply the number of digits after the most qualified tableau on this example of three decimal places you end of how many of displaying data. When asked to round to a specified place value, the answer will erase all the digits after the specified digit. The image below shows the makeup of a decimal number.
This function has strange.
There are many types of biases such as selection bias, reporting bias, sampling bias and so on. We will never share your email and you can unsubscribe at any time. It is important to quickly and readily round numbers while you are working with floats which have many decimal places. The two main data structures in Pandas are Dataframe and Series. There are dealing with fractions relate to the number at some fractions written using the three decimal places you just as decimal places and on. Formats numbers really trust this picture of places of the two, our website uses two separate printable answer? Rather, it is recommended to use a context manager. Convert primary school math skills into code. | s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00459.warc.gz | CC-MAIN-2022-49 | 6,403 | 16 |
https://wiki.gentoo.org/wiki/User:Whissi | code | From Gentoo Wiki
Whissi (www.g.o link)
This user is a Gentoo developer.
Gentoo user since 2013
This user is a native speaker of German.
includes various packages I'm interested in, which are not ready for the tree yet.
Use it at your own risk and feel free to report bugs! | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473735.7/warc/CC-MAIN-20240222061937-20240222091937-00348.warc.gz | CC-MAIN-2024-10 | 273 | 7 |
http://www.devsplanet.com/question/35280774 | code | tt_Gantz February 2016
The problem was that I ran out of elastic IP addresses (you are only given 5 by default). Single instance elastic beanstalk environments WITHOUT a load balancer use an elastic IP.
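For anyone hitting the same wall, a quick check of how many Elastic IPs are already allocated in the region can confirm the diagnosis. This boto3 sketch assumes credentials are configured, and the region name is only an example:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
addresses = ec2.describe_addresses()["Addresses"]

print("Elastic IPs allocated:", len(addresses), "(the default limit is 5 per region)")
for addr in addresses:
    # An address with no AssociationId is allocated but not attached to anything.
    print(addr["PublicIp"], "associated" if "AssociationId" in addr else "unassociated")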
I fixed this by deploying my elastic beanstalk as load balanced but set it to just have 1 instance.
This error should be displayed in the recent events. | s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170609.0/warc/CC-MAIN-20170219104610-00567-ip-10-171-10-108.ec2.internal.warc.gz | CC-MAIN-2017-09 | 355 | 4 |
https://community.oracle.com/blogs/editor/2013/11/07/javaone-2013-slideaudio-sessions-now-available-online | code | Should you have been unable to attend JavaOne 2013, content from 60 JavaOne 2013 sessions is now available for your viewing, with much more to come. It's a very different world compared with even five years ago, when if you didn't attend a conference, and you wanted to gain insight to what happened there, your only options were to read blog posts from people who attended, or (possibly) download slides uploaded somewhere by individual speakers. Those options pale in comparison to today, where you can actually view a slide set and listen to audio synchronized with each slide.
I attended JavaOne 2013, and during the conference I focused on attending as many sessions as possible. I blogged about some of the sessions that stood out for me even as the conference proceeded, late at night in our hotel room after very long conference days:
Some great sessions I wish I could have attended that are now available include (I can't find a way to provide a direct link due to the way the web site is designed, but if you go to the main site you'll be able to find these):
Lambda: A Peek Under the Hood
Home Automation for Geeks
Advanced JVM Tuning
Internet of Things with Java
But there's so so much more! Importantly, no matter where you live, no matter if you are among the 99.95% of Java developers who had no opportunity to attend JavaOne 2013: you can still access the vast majority of the information and knowledge that was imparted at JavaOne 2013. That's a great thing!
The natural progression of technology is that the spread of information to ever wider audiences at reduced latency is increasingly facilitated over time. If you're old enough, think the dial-up internet of decades ago versus today's broadband internet. If you're a student of history, think about the invention of the printing press and Gutenberg's movable type. With respect to technology conferences, an enormous change has occurred in recent years, led by innovation pioneered by Parleys.com.
For the Java technology historians among us, the fact that it's 2013 today doesn't mean presentations from earlier JavaOnes have no value. If you're interested, you'll find great content from JavaOnes 2010-2012 at Parleys. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450559.94/warc/CC-MAIN-20151124205410-00093-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 2,196 | 10 |
https://uk.sports.yahoo.com/mb/?bn=964dca2b-245e-3c97-863f-03b21d1f3dae&tid=1238596686000-a92da435-ff41-3d0b-b51a-aa0847448c99&mid=00002I00002I000000000000000000-bc8eac45-f83a-33f3-8077-2f7c7f87fb80 | code | ...and where does that then place Heskey - would you have Heskey in your team? And even more surprisingly what about old Berbatov (ok - so that has nothing to do with the Parker article)? Is he now a non-entity as well?
I say it again - Bent will never set a club alight - but he will get 15 goals a season for you (given that he's played - and not just as a sub).
I'm not a fan of Bent - but I find it annoying when people ignore what's in plain view and believe whatever 'hype' is flying around at the time. | s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446248.56/warc/CC-MAIN-20151124205406-00291-ip-10-71-132-137.ec2.internal.warc.gz | CC-MAIN-2015-48 | 509 | 3 |
https://www.producthunt.com/alternatives/jetpack-app | code | Intelli Hash is a powerful app to get hashtag suggestions from images. It uses powerful image recognition API from Imagga. The app is open source.
Using the Intelli Hash app, you can not only get suggestions for hashtags but also copy and share these hashtags on other media.
Tagsdock 3.0 is a keyboard for Instagram.
RiteTag 2.0 suggests relevant, engaging hashtags from the images and text on your devices so you don't have to guess anymore. | s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828448.76/warc/CC-MAIN-20181217065106-20181217091106-00094.warc.gz | CC-MAIN-2018-51 | 443 | 4 |
https://forums.unvanquished.net/viewtopic.php?f=8&t=704&p=7278 | code | We've added color grading to the codebase yesterday. If you compile from source regularly, you should already have it. If not, either do so or wait until the next release. In either case, here is a short, simple guide on how to play around with the feature, in as few steps as possible.
What you need:
- The code. Obviously, compile it if you haven't already.
- A screenshot. It should preferably be of the area you want to work with.
- The neutral color grade. You should have this in main/gfx in your source directory.
- An image editing program. I use GIMP, but it doesn't matter what you use.
Here is the screenshot I am using for the purposes of this tutorial. It's of the alien spawning view on Parpax, and it has a nice variety of colors to work with:
Load the screenshot into GIMP. In the Colors menu, select Curves. You will get a window popping up with a histogram and a diagonal line running through it. Feel free to play around with the channel settings, because you can easily reset them. You have a choice of Value, Red, Green, and Blue. The value channel will brighten or darken the image, while the color channels will add or subtract that color, depending on whether you're going above or below the diagonal line.
In the example below, I've made it slightly brighter, and I've added to the red and blue channels while taking away some green. It probably looks a bit garish, and I'm sure that Viech will kill me for butchering Parpax, but this is simply to demonstrate an example.
Once you've saved your settings, load up the neutral color grade. By default, it is neutral.webp in your main/gfx directory in the source folder. It doesn't necessarily have to be webp, as you can use any image format recognized by the engine. I use png, for instance. In any case, apply your settings to the color grade, as shown:
Now, save your edited color grade with a memorable name. In my case, I am an extremely boring individual and decided to call it test1.png. You can call it whatever you want. Place it in a location where Unvanquished will recognize it. In my case, what I did was create a grading.pk3dir in my data directory, and inside of it, I made a gfx folder that I put my color grades into. It doesn't matter what you name the pk3dir, but make sure that there is a gfx folder inside of it that you place the color grades into. Once it's there, load Unvanquished and type the following command, substituting my example for whatever you named your color grade:
If all went well, you'll be seeing your color grade applied to the entire map. Congratulations! You are now testing a significant renderer feature. Make sure to give us ample feedback in the appropriate forum sections. We will also be adding more features to color grading shortly. For the meantime, here are screenshots of Parpax with the color grade I just used in this tutorial. Feel free to post your own experiments in this thread as well. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00210.warc.gz | CC-MAIN-2020-45 | 2,920 | 12 |
https://www.stefan-michaelis.name/contact/en?b=11 | code | This homepage originated as a simple private project out of curiosity.
For requests please use this email address primarily:
Tel.: +49 173/43 71 531
This data protection declaration informs users about the nature, scope and purpose of the collection and use of personal data on pages under the domain stefan-michaelis.name.
Every time you access this website, the user's browser transmits data to the service provider's web server. These include: the name of the accessed website and file, date and time of access, transferred data volume, notification of successful retrieval, browser type including version, the operating system of the user, the URL of the previously visited page, and the IP address. This data is deleted after a short period (typically by the service provider after four hours). Only in exceptional cases, if necessary for the analysis of error messages, can the storage period be extended.
Otherwise no individual user data is stored, but are only stored in aggregated form for statistical purposes.
Cookies are small files that the server uses to store information in the browser of the user. Most browsers offer possibilities in the settings to manage cookies and their storage.
This site does not use tracking cookies. The only cookie set is a security cookie from the provider Cloudflare, through which these web pages are delivered. The cookie does not contain any user information.
This website does not use tracking scripts or pixels. Some posts contain links to social networks like Facebook, Twitter or Google+. When displayed, no request is made via the server of these providers, only by clicking on one of these links (as well as for any other external link on the web pages outside of this domain) will the web servers of external providers be contacted.
It is also possible that content from third parties (e.g. videos from YouTube) is embedded into pages of the website. In these cases the respective provider is responsible for the transmission of the contents. The external provider receives the IP address of the user. Where possible, providers are selected that use the IP address only for the purpose of delivering the content or that give the user the option to consent to its use. | s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00298.warc.gz | CC-MAIN-2020-29 | 2,243 | 10
http://www.rcgroups.com/forums/printthread.php?t=1345728 | code | Esky Lama V4 2.4 GHZ For Parts
I need an Esky Lama V4 2.4 ghz With the 4 in 1 rx and motors in working condition. I don't need the Tx and charger. Let me know what you have.
Hi, why not just step up to the Cx2/Cx3 frame and use your electronics? I have 2 complete CX3 frames, only one missing servos. I'll ship them to you cheap. I also have extra metal rotor hubs for the Lama 4.
|All times are GMT -5. The time now is 12:14 PM.| | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065324.41/warc/CC-MAIN-20150827025425-00351-ip-10-171-96-226.ec2.internal.warc.gz | CC-MAIN-2015-35 | 425 | 4 |
https://stackoverflow.com/questions/7973/user-interface-design | code | Where do you turn when creating user interface? I am a programmer, not a designer. Any ideas? My "UI" is usually terrible, as I just want to make it work, what do you do?
I usually do it all myself - just because my budget is quite limited.
However there are some books that might be worth reading:
- Homepage Usability - 50 Websites Deconstructed by Jakob Nielsen & Marie Tahir
- Don't Make Me Think by Steve Krug
- Designing Interfaces by Jenifer Tidwell
- Prioritizing Web Usability by Jakob Nielsen & Hoa Loranger
And it's always a good thing to look what other sites do that you like :)
I usually start by copying something else, and then changing/improving it until it looks how I want. I'm not a designer either, and don't have much artistic sense, and honestly cannot be bothered spending days (weeks) creating a UI for an application; it is much easier to just take something else and spin it how I like.
The MSDN blogs are usually a good place to look for inspiration, since many of the writers like to pimp applications/websites that use their favourite technologies (Brad Abrams' blog is good if you're looking for WPF-ish interfaces).
If you're writing desktop applications, simply following the UI guidelines for your chosen platform will take you a long way.
If it's on the web then you're broadly screwed, you just need a designer.
That said, don't get fooled into thinking that UI design is all about the visual appearance. Having the right interaction model is probably more important. A graphic designer isn't going to help you with that. If you don't have access to a UI specialist then try starting with User Interface Design for Programmers.
Many (graphic) designers do not understand the needs of a user interface; one needs to do quite a lot of research and ask people to try things out - 'hands off' - and see what they do, what confuses them, and what mistakes they make.
Most of the advice gives three steps to user-interface design: content or wireframe (what is in the interface), flow or relation (how the content links together), and style (how it looks).
The topic is huge, there are good links previously posted, Coopers book 'About Face' although a bit wordy has explanations of various gotchas.
It seems pretty obvious but I'd suggest "User Interface Design for programmers" by Joel Spolsky. Versions available on paper and online. You can read it in half a day and get a good understanding on UI.
You don't have to be a great designer to come out with a decent UI and a great user experience for your application.
I think there are certain principles you can follow that can dramatically improve your application.
At a high level this includes:
- Identifying your top 3 use cases
- Measuring and reducing the number of clicks it takes to get through the top use cases
- Sketch, Prototype, Throw it away, and challenge yourself to do it with less
I've written a blog entry that attempts to write out some principles related to GUI design. Check it out and let me know what you think.
See this thread for a few tips.
To sum it up: get someone with skills to do it or keep it very clean and simple.
In the home automation world there are plenty of independent designers. I prototype with a very simple interface and then use the graphics and layout from GuiFX.
Back before I knew there was an internet I read the Apple Human Interface Guidelines which it would seem they've been keeping up to date.
You can also read some totally different takes, like Raskin's
So I suppose the answer is read. AFTER you've thought about what you want to do and how, you hire the graphic designers to make it look good while it's happening. But I haven't yet found a designer who did more than tweak the application once I'd described it.
The best book I've ever read on Usability/Interaction Design, and one of the best books I've read period, is a book called About Face 3: The Essentials of Interaction Design by Alan Cooper.
It's a fantastic book because it talks about a lot of fundamental concepts behind interface design for any type of interface, not just on the web. Understanding these concepts will help you make better creative decisions, especially when designing something that hasn't been design yet (like a new product or type of social website), not just help you copy what's already been done.
You can go through these Ten Design Heuristics:
• Show system status
• Familiar metaphors & language
• Control & freedom
• Consistency
• Error prevention
• Recognition over recall
• Flexibility & efficiency
• Aesthetic & minimalist design
• Recognize, diagnose, & recover from errors
• Help
Read this article about Heuristic Evaluation, HE
If you're looking to do something web based, check out the references in this thread.
I like to use sites like these for complete sites and bits and pieces that I put together when doing my own design. Just make sure to give credit where credit is due. | s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00343.warc.gz | CC-MAIN-2020-45 | 4,926 | 36
https://support.cloudflare.com/hc/en-us/articles/200170056-What-is-CloudFlare-s-Basic-Security-Level- | code | The Security Level uses the IP reputation of a visitor to decide whether to present a challenge. A Cloudflare internal algorithm calculates an IP's reputation and assigns a threat score that ranges from 0 to 100. The security levels and the challenge display criteria:
- High - for scores greater than 0
- Medium - for scores greater than 14
- Low - for scores greater than 24
- Essentially Off - for scores greater than 49
You can adjust the Security Level for your domain in the Settings tab of the Firewall app within the Cloudflare dashboard.
Cloudflare recommends starting at a medium security level (the default setting) to adequately protect your site.
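As a rough illustration only (this is not Cloudflare's code), the thresholds listed above boil down to comparing the visitor's threat score against the configured level:

# Illustrative mapping of the security levels listed above.
THRESHOLDS = {"high": 0, "medium": 14, "low": 24, "essentially_off": 49}

def should_challenge(threat_score, security_level="medium"):
    # A visitor is challenged when their score exceeds the level's threshold.
    return threat_score > THRESHOLDS[security_level]

print(should_challenge(20))           # True: 20 > 14 at the default medium level
print(should_challenge(20, "low"))    # False: 20 <= 24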
I'm Under Attack Mode should only be used when a site is under a DDoS attack. Visitors will receive an interstitial page for about five seconds while Cloudflare analyzes the traffic and behavior to make sure it is a legitimate human visitor trying to access your website. I'm Under Attack Mode may affect some actions on your domain, such as using an API. You're able to set a custom security level for your API or any other part of your domain by creating a page rule for that section. | s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530040.33/warc/CC-MAIN-20190420200802-20190420222802-00220.warc.gz | CC-MAIN-2019-18 | 1,146 | 8 |
https://www.fierceelectronics.com/embedded/xilinx-highlights-compute-intensive-and-software-defined-vision-systems-at-embedded-vision | code | SAN JOSE, CA -- Xilinx, Inc. will highlight platforms for compute intensive vision systems, programmed with new software defined development environments at the Embedded Vision Summit 2016. Applications will include 3D object recognition and machine vision enabled by Zynq® All Programmable SoCs and MPSoCs. The Embedded Vision Summit is a three-day educational event that concentrates exclusively on deployable computer vision technology. Attendees can visit Xilinx's demonstrations at the Embedded Vision Summit, May 2-4, 2016, at the Santa Clara Convention Center.
Xilinx In-Booth Demonstrations
• 3D Object Recognition in Smart Cameras with Zynq UltraScale+ MPSoC— Presented with iVeia – This demonstration is a machine vision solution for spatially and rotationally invariant multi-object recognition. The algorithm fully leverages the Zynq SoC quad-core 64-bit ARM architecture for statistical processing and the programmable logic for accelerating compute-intensive operations.
• OpenCV Hardware Acceleration with SDSoC for Machine Vision – Presented with Avnet, Auviz and OKI – This demonstration showcases Avnet's Smart Vision Development Kit and uses the SDSoC™ Development Environment to rapidly implement feature detection algorithms that were built by OKI using the AuvizCV library. Users can quickly develop highly differentiated software accelerators in programmable logic, enabling improved system performance over just the ARM™ core based processing system.
For more information, visit http://www.xilinx.com | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00545.warc.gz | CC-MAIN-2020-24 | 1,541 | 5 |
https://www.qovery.com/blog/what-is-an-internal-developer-platform/ | code | What's an Internal Developer Platform?
Internal Developer Platform is a trendy concept in the Platform Engineering world that solves clear problems you might have already faced. In this article, I will define what's an Internal Developer Platform, and I assume you're at least familiar with concepts like DevOps and Platform Engineering, but don't worry - I won't assume too much. The goal is to ensure we're all on the same page when discussing Internal Developer Platforms. So let's go :)
Romaric PhilogèneDecember 23, 2023 · 3 min read
An Internal Developer Platform is an ecosystem that empowers developers to autonomously manage the entire application lifecycle from development to deployment. It emerged in response to the inefficiencies and interdependencies observed when engineering teams heavily relied on IT and DevOps for application deployment and management.
By offering a comprehensive suite of tools and services, Internal Developer Platforms enable developers to independently configure, deploy, and maintain applications, significantly enhancing efficiency and productivity.
Before the advent of Internal Developer Platforms, the software development process was often crippled by delays and inefficiencies. Developers depended on external teams (IT department / DevOps engineers / SRE... it depends on the organization. I will not get into the debate here) for crucial aspects of their workflow, leading to a slower pace of development, inconsistent environments, and limited autonomy.
Internal Developer Platforms have revolutionized this landscape by providing a streamlined, integrated environment that minimizes dependencies and fosters a more agile and responsive development process.
An Internal Developer Platform seamlessly integrates into the traditional DevOps and engineering stack, acting as a unifying layer that enhances and streamlines existing processes. It fits into this ecosystem by providing a centralized platform where various tools and services used in software development, deployment, and management converge. This integration allows for more efficient workflows, better resource management, and a smoother transition from development to production. By bridging the gaps between coding, testing, deployment, and infrastructure management, an Internal Developer Platform elevates the traditional stack, making it more cohesive, agile, and developer-friendly.
For developers, Internal Developer Platforms are a game-changer, enabling them to:
- Gain Autonomy: Manage projects independently, reducing bottlenecks.
- Enhance Productivity: Streamline deployment processes for faster delivery.
- Accelerate Innovation: Encourage creativity and experimentation in a flexible environment.
Platform engineers turn to Internal Developer Platforms for several reasons:
- Standardization: Ensuring uniform processes and tools across projects.
- Enhanced Security and Compliance: Through features like RBAC and audit logs.
- Operational Efficiency: Automating routine tasks to focus on strategic areas.
- Scalability: Offering adaptable solutions that grow with the organization.
Something important to understand is that Internal Developer Platforms and Internal Developer Portals are different concepts (often used together, which can be confusing).
While Internal Developer Platforms offer comprehensive tools for managing the application lifecycle, Internal Developer Portals primarily focus on knowledge sharing and documentation. This distinction highlights Internal Developer Platforms' role in providing practical, hands-on capabilities beyond information dissemination.
So, to get back to our main subject, let's explain the why.
In this article, I've deliberately chosen not to use the abbreviation 'IDP' to refer to Internal Developer Platforms. The reason is simple yet crucial: in our tech space, 'IDP' can signify multiple things, leading to potential confusion. For Platform Engineers, 'IDP' might equally stand for Internal Developer Platform and Internal Developer Portal. To avoid any ambiguity and ensure clarity in our discussion, I've opted to spell out 'Internal Developer Platform' in full. This approach helps maintain clear communication, especially in a field where precise terminology is key.
Internal Developer Platforms mark a significant advancement in software development, fostering efficiency, collaboration, and innovation. As we continue to delve into the world of Platform Engineering, Internal Developer Platforms stand as essential tools, transforming the way we develop, deploy, and manage software in the modern era. | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475238.84/warc/CC-MAIN-20240301093751-20240301123751-00203.warc.gz | CC-MAIN-2024-10 | 4,589 | 22 |
https://serverfault.com/questions/859674/moving-1tb-1-million-files-across-buckets-in-same-region-cheaply | code | Currently we need to migrate a bucket of about 1 TB but it contains a lot of files split into many layers of subdirectories. If I understand the pricing correctly moving data between buckets in same region should be free according to this:
"Transfers between S3 buckets or from S3 to any service(s) within the same region are free."
From looking at the AWS docs the way the recommend is to use the aws s3 cli to sync the files across.
In this case I would run
aws s3 sync s3://oldbucket s3://newbucket on an EC2 instance running in the same region as the buckets. But wouldn't I still be charged for GET / PUT requests?
The storage cost is not a problem in this case; what I'm worried about is the huge number of small files, which would incur a huge request cost.
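To put rough numbers on that worry, here is a back-of-the-envelope sketch in Python. The per-request prices are example rates only (check the current S3 pricing page for your region), and a same-region bucket-to-bucket sync is billed per request (LIST/GET/COPY), not for data transfer:

# Example rates only; real prices vary by region and change over time.
objects = 1_000_000
put_copy_per_1000 = 0.005   # USD per 1,000 PUT/COPY/POST/LIST requests (assumed)
get_per_1000 = 0.0004       # USD per 1,000 GET requests (assumed)

copy_cost = objects / 1000 * put_copy_per_1000   # roughly one COPY per object
get_cost = objects / 1000 * get_per_1000         # at most one GET per object

print(f"~${copy_cost:.2f} in COPY/PUT requests")   # ~$5.00
print(f"~${get_cost:.2f} in GET requests")         # ~$0.40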
If anyone would have any better insights in this I would be very relieved. | s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00157.warc.gz | CC-MAIN-2022-33 | 848 | 7 |
http://www.top-law-schools.com/forums/viewtopic.php?f=6&t=126867&start=25 | code | thecilent wrote:LSATclincher wrote:My only goal of this post was to provide a suggestion to someone who had LSAT study burnout, like myself. The tutoring has forced me to get back on track since I now have someone to help. I'm not really surprised by the negative posts since this is the internet, but I'm not too sure the negative posters understand the perspective of someone who is scoring under 140 in their initial LSAT efforts.
I understand what you're saying. And I agree; I think tutoring does help one learn (/ study more). No one is really being negative though - I think we are more just baffled that someone would pay someone scoring in the 150s to tutor them. This person is prob just lonely and wants someone to hang out / study with, IMO.
Ok, perhaps I overreacted, I dunno. I just wanted to provide an alternative to endless self-study (while nearly going crazy). | s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865691.44/warc/CC-MAIN-20180523161206-20180523181206-00080.warc.gz | CC-MAIN-2018-22 | 879 | 3 |
https://maffsguru.com/videos/conditional-probability/ | code | So you're interested in conditional probability, are you! Well, you've come to the right place.
This video will explain to you the ideas behind conditional probability by first looking at a Venn Diagram and explaining what it means by the "Given That" wording and notation. We look at how to read the Venn Diagram (and later a two-way table) to ensure that you know how to visually find the conditional probability of two events.
Isn't there a formula? Of course! So, I take the time to look at the two forms of the conditional probability formula before doing some examples showing how it works.
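For reference (this is the standard definition, not a transcript of the video), the two forms of the formula are:

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0), \qquad P(A \cap B) = P(A \mid B)\,P(B)$$

Reading a Venn diagram or two-way table, "given B" simply restricts the sample space to the outcomes inside B.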
So much fun in just the one video! | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819089.82/warc/CC-MAIN-20240424080812-20240424110812-00200.warc.gz | CC-MAIN-2024-18 | 631 | 4 |
http://www.blackberryforums.com/general-blackberry-discussion/72823-whats-your-kernel.html | code | 04-12-2007, 10:51 PM
Join Date: Oct 2004
Location: Fort Worth, Texas
What's in your kernel?
Well if you look in your loader files, you'll see the OS kernel at one point. In my case it is 7130c.bin (c for cdma)
I ran the unix strings command against it to filter out most of the garbage and am reading the human readable stuff now and found some interesting things such as these phrases:
should not happen here!!
should not happen here either!!
GSM not yet supported <--- Ok so does that mean it may be?
No post slam finger???
I also found towards the beginning that there were several apparent diagnostics screens, so that makes me wonder how you could get to them.
Throughout the file you see lots of sections such as things about SMS, GPS, coverage, roaming, modem commands, all in plain text.
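If you want to reproduce that step without the Unix strings tool, a minimal Python stand-in (the file name is assumed to be the kernel image mentioned above) could look like this:

import re
import sys

def strings(path, min_len=4):
    # Print every run of at least min_len printable ASCII characters in the file,
    # which is roughly what the Unix strings command does.
    with open(path, "rb") as f:
        data = f.read()
    for match in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data):
        print(match.decode("ascii"))

if __name__ == "__main__":
    strings(sys.argv[1] if len(sys.argv) > 1 else "7130c.bin")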
I could post the stripped file, I'll just need a place to host it. Hopefully some of you guys here can look through it and see if you find something I missed that might come in handy finding hidden features.
Prior: BES: 7510, 7520, 7290, 7230, 7130, 8220 (No Data Plan), Droid Eris
Post your question on the forum, don't PM me.
Last edited by stonent : 04-12-2007 at 10:55 PM. | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947759.42/warc/CC-MAIN-20180425080837-20180425100837-00112.warc.gz | CC-MAIN-2018-17 | 1,242 | 19 |
http://www.rolia.net/zh/post.php?f=0&p=69192 | code | Surfing here. Take the advantage of Leader's absence. He went to help a friend moving. Have you got permission to kiss little EGG? I haven't got your address. When is the next meeting? | s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948594665.87/warc/CC-MAIN-20171217074303-20171217100303-00792.warc.gz | CC-MAIN-2017-51 | 184 | 1 |
https://wip.chat/questions/if-you-had-to-make-a-complex-piece-of-logic-configurable-think-logic-as-data-what-would-you-do-and-why | code | If you had to make a complex piece of logic configurable. Think logic as data. What would you do and why?
I'm thinking of two options. Either using an eval(), or implementing a DSL. Neither of them makes me really wanna.
What would you do? Got any better suggestions?
I would consider another pattern, Swizec. There is one which is actively used by D3.js: basically you create an object and then each instance method returns that object back. This way you can create a neat chain of settings without any ugly config objects or a time-consuming DSL. Example: instance = new Instance(); instance.setMemory(300).setStorage(1000).setMemoryAlertThreshold(0.8).start(). There is no context on what you work on, but I hope it helps. | s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00322.warc.gz | CC-MAIN-2019-51 | 722 | 4
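A quick sketch of that chaining idea in Python (all names here are made up for illustration): each setter stores its value and returns self, so the calls read as one fluent chain rather than a config object or a DSL.

class Instance:
    def __init__(self):
        self.memory = None
        self.storage = None
        self.memory_alert_threshold = None
        self.running = False

    def set_memory(self, mb):
        self.memory = mb
        return self            # returning self is what enables chaining

    def set_storage(self, mb):
        self.storage = mb
        return self

    def set_memory_alert_threshold(self, ratio):
        self.memory_alert_threshold = ratio
        return self

    def start(self):
        self.running = True
        return self

instance = Instance().set_memory(300).set_storage(1000).set_memory_alert_threshold(0.8).start()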
https://www.exploreconsulting.com/resources/netsuite-provider-newsletter/ | code | With a focus on cloud-based business systems and a long running track record of successful customer engagements, Explore Consulting provides a diverse group of technologists and consultants with backgrounds in many different industries. We specialize in areas such as NetSuite Services, Amazon Services, Web Development and Design, Digital Marketing, Custom Development and Staffing.
Explore has been a leading NetSuite Solution Provider for more than 16 years and has won 28 NetSuite awards including NetSuite Americas Solution Provider Partner of the Year in both 2014 and 2011. In 2016, Explore was honored as the only NetSuite Partner to make Presidents Club 6 years in a row and earned the inaugural SuiteCommerce Partner of the Year. In addition to being a repeat global Top 5 NetSuite Reseller, Explore is a 12-time NetSuite Star Award Winner. | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00328.warc.gz | CC-MAIN-2018-26 | 850 | 2 |
https://discuss.pytorch.org/t/resize-is-not-consistent-with-the-new-torchvision-read-image/112875 | code | I am trying to move my code to the new torchvsion API that uses
read_image to torchscript my preprocessing with the model.
I found that the resize method on a PIL image is different from the resize method now on the tensor_image. Maybe I am missing something; what I am doing on the model now is:
self.transforms = nn.Sequential(
    T.Resize([192, 256]),
    T.ConvertImageDtype(torch.float),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
As you can see, it appears that the resizing on the torch tensor is coarser.
So if my model was trained using the old API with PIL, it produces weird results on these coarser images.
This is very similar to opencv.imread, which has no antialiasing.
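A minimal way to see the difference side by side (the image path is a placeholder, and this assumes an RGB image plus the same torchvision transforms used above):

from PIL import Image
from torchvision import transforms as T
from torchvision.io import read_image

path = "sample.jpg"  # placeholder

# Old pipeline: decode with PIL, resize with PIL (antialiased), then convert to a tensor.
pil_resized = T.ToTensor()(Image.open(path).resize((256, 192), Image.BILINEAR))

# New pipeline: decode straight to a uint8 tensor and resize the tensor.
tensor_resized = T.Resize([192, 256])(read_image(path)).float() / 255.0

# The outputs differ; the tensor path looks blockier because it does not antialias.
print((pil_resized - tensor_resized).abs().mean())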
PD: The cast methods are just to show the tensors, this comes from fastai. | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00302.warc.gz | CC-MAIN-2022-21 | 762 | 8 |
https://lists.stg.fedoraproject.org/archives/list/[email protected]/message/PCE2EZODOFIAX3CCTIX2HB6G3ITDP6RA/ | code | --- Comment #17 from Anders Wäänänen <waananen(a)nbi.dk> ---
(In reply to comment #16)
I just prepped an updated package for Fedora 17:
Could you test if this resolve this issue?
(Note: don't use this package on fedora 15/16 as it isn't compatible)
Yes this update solves the problem in my mock Fedora 17 environment.
You are receiving this mail because:
You are on the CC list for the bug. | s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390755.1/warc/CC-MAIN-20200526081547-20200526111547-00537.warc.gz | CC-MAIN-2020-24 | 393 | 8 |
https://www.marigold.dev/projects | code | We are a team of specialists looking to make impactful contributions in the interpreter, concurrency, storage and other critical aspects of the Tezos blockchain. We currently focus our attention and energy on five different categories of projects:
Tezos Core Development
LIGO (Language & Compilation)
LIGO Package Manager
Poke app trainings
We are a collaborative organization dedicated to Tezos blockchain. Join us to build projects and upgrade the protocol! | s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817455.17/warc/CC-MAIN-20240419203449-20240419233449-00360.warc.gz | CC-MAIN-2024-18 | 459 | 6 |
http://wowprogramming.com/forums/development/198 | code | Posted by goblinsleez on Thu, 05 Nov 2009 00:18:15
Hi everyone. I am new to the scene of wow addon scripting, but i have previous experience with other programming languages. So i started to dabble cause i love wow. I started with a fairly easy addon to learn a few key elements. I have created a simple addon based off the tutorials found within WoWAddonStudio .... where if i select a unit target , or mouse over the unit target , it changes accordingly in the frame i am working with.
The problem i have run into is that when i press escape, which in turn deselects all targets .. the addon throws an error nil value which makes sense to me , cause it can not display a name that does not exist. So having some prior knowledge to how a script works i have tried and tried again using a script like this ...
function Frame1_OnEvent()
    if (event == "PLAYER_TARGET_CHANGED" && UnitName("target") != nil) then
        FontString1:SetText("Hello " .. UnitName("target") .. "!");
    end
    if (event == "UPDATE_MOUSEOVER_UNIT") then
        FontString1:SetText("Hello " .. UnitName("mouseover") .. "!");
    end
end
anyone that knows any amount of scripting should see right away what i am trying to accomplish , but apparently the syntax is completely invalid for what im trying to do.
I need to test that there is a valid unitname or if there is not.... because if i deselect the target , that is when i get the error....
Im sure this is a very simple snippet , but i have had zero success so far. Anyone able to help poor ole me out ? Thanks in advance!
Posted by jnwhiteh on Thu, 05 Nov 2009 10:59:00
You have to give the SPECIFIC text of the error. Most likely you are reading it wrong, or it's saying something other than what you'd suspect. The easiest way to do something if the name exists is:
if UnitName("target") then -- Do something if the name exists else -- Do something if it doesn't exist end
Please post the specific error message. | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886946.21/warc/CC-MAIN-20180117142113-20180117162113-00432.warc.gz | CC-MAIN-2018-05 | 1,919 | 11 |
http://www.wiihacks.com/printthread.php?t=102929&pp=10&page=1 | code | Looking for help with Emu Apps on HDD
So I spent all weekend working on my Wii using all the great tutorials here (thanks to all involved with those).
I finally have the Wii modded as shown in the "Softmod ANY Wii" tutorial from this site. Followed the directions exactly. Went further set up my 2TB Western Digital HD (Fat32, etc etc as per this site's USB HD tutorial). Decided to use WiiFlow, and have been successfully backing up and playing my Wii games flawlessly.
I decided to move on to Emulators for the classic systems. I want to store them all on the USB HD (2gig SD card won't cut it), but I am having trouble. I put all the paths on the HD, roms in the right place, etc. I am stuck because I can't figure out how to get the HBC to launch any of the Apps. So pretty much when I load HBC, it just kind of shows what's on the SD card, can't figure out how to point it at the HD.
Thanks in advance for the help! | s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00624-ip-10-145-167-34.ec2.internal.warc.gz | CC-MAIN-2017-17 | 920 | 5 |
http://shaunwill.blogspot.com/2010/07/interracial-chocolate-ads.html | code | I guess these ads are racy in Ghana. But I got bored halfway through. And the originality of the idea is none existant. But the ad exists...so I post it.
As one reader at copyranter pointed out "At least they didn't call the candy 'Mandingos.'" I'm not sure if that joke is funny or not....but atleast they didnt. | s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591596.64/warc/CC-MAIN-20180720115631-20180720135631-00113.warc.gz | CC-MAIN-2018-30 | 313 | 2 |
https://valentifashion.com/products/signature-navy-blue-suit | code | Whether you’re closing the deal or spicing up a date, empower yourself with a suit that responds to you. This premium-blended wool piece is laced with elastic Lycra allowing movement and comfort. All our suits are:
- Thoughtfully constructed with the highest quality materials to maximize durable comfort.
- Molded to ensure optimal fit and flexibility for your body. | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334332.96/warc/CC-MAIN-20220925004536-20220925034536-00342.warc.gz | CC-MAIN-2022-40 | 369 | 3 |