Dataset columns:
url: string (lengths 13 to 4.35k)
tag: string (1 value)
text: string (lengths 109 to 628k)
file_path: string (lengths 109 to 155)
dump: string (96 values)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://gimp-portable.com/gimp-portable-on-ubuntu-download-and-install-guide/
code
Here is the full guide on how to install GIMP Portable on Ubuntu. Follow the steps to install and run it. This works for me in Kubuntu 16.04 64-bit. It should also work in other 16.x 64-bit ‘buntus, and it might even work in the equivalent version of Mint; it is a long shot in other Debian-based versions. Nothing clever: it is assembled using a script and the ‘buntu ‘gimp-edge’ PPA. This updates regularly and I will attempt to keep the following to the current build. There is more dependency hell with Ubuntu 18.x versions, so this GIMP 2.9.9 AppImage is stuck at that release, which is from December 2017. It is still very usable; there are no recent feature additions, just bug fixes. Download GIMP Portable for Ubuntu. How to install GIMP Portable on Ubuntu: unzip and run. It will run from your home partition. Same as a regular GIMP, the first run creates a GIMP profile, in this case ~/.config/GIMP/2.9. It will copy all your resources from ~/.gimp-2.8, so to save a lot of editing, temporarily rename ~/.gimp-2.8 and a new, empty GIMP 2.9 profile is created. Here are the images for GIMP Portable on Ubuntu. I am not keen on those dark themes, so this is my preference, plus a shot of one nice addition in GIMP 2.9 – new layer options. There are plenty of other new things to explore in GIMP Portable on Ubuntu. bimp17x64 <<< this has a dependency on libpcre3 – install that package from your Linux repository. djvu-read <<< mainly for those academic documents. gmic_gimp179 <<< last of the series; it can be installed alongside ver 2.0. resynthesizer_gui29 <<< small change to the GUI with added options. This is a development version, so there will be bugs. If GIMP 2.9.9 – any version – crashes a lot, the first thing to check is Edit -> Preferences -> System Resources -> Number-of-Threads-to-use; set this to 1. Also check out plugins for GIMP Portable. The user interface of GIMP is designed by a dedicated design and usability team. This team was formed after the developers of GIMP signed up to join the OpenUsability project. A user interface brainstorming group has since been created for GIMP, where users of GIMP can send in their suggestions as to how they think the GIMP user interface could be improved. GIMP is presented in two forms, single and multiple window mode; GIMP 2.8 defaults to the multiple-window mode. In multiple-window mode a set of windows contains all GIMP’s functionality. By default, tools and tool settings are on the left and other dialogues are on the right. A layers tab is often to the right of the tools tab, and allows a user to work individually on separate image layers. Layers can be edited by right-clicking on a particular layer to bring up edit options for that layer. The tools tab and layers tab are the most common dockable tabs.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510697.51/warc/CC-MAIN-20230930145921-20230930175921-00077.warc.gz
CC-MAIN-2023-40
2,735
19
https://www.pdfchm.net/tag/sure/
code
Logic of Analog and Digital Machines Computer Science is a very young discipline compared to most others. Alan Turing published the seminal paper of the field in 1936. Around the same time, the militaries in Germany, UK, and US commissioned the first digital electronic computer projects. One of these, the Colossus at Bletchley Park in the UK, was used to break the... C++ Common Knowledge: Essential Intermediate Programming What Every Professional C++ Programmer Needs to Know—Pared to Its Essentials So It Can Be Efficiently and Accurately Absorbed C++ is a large, complex language, and learning it is never entirely easy. But some concepts and techniques must be thoroughly mastered if... Oracle PL/SQL Best Practices In this compact book, Steven Feuerstein, widely recognized as one of the world's leading experts on the Oracle PL/SQL language, distills his many years of programming, teaching, and writing about PL/SQL into a set of best practices-recommendations for developing successful applications. Covering the latest Oracle release, Oracle Database 11g,...
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400204410.37/warc/CC-MAIN-20200922063158-20200922093158-00098.warc.gz
CC-MAIN-2020-40
1,118
9
https://thesharepointfarm.com/2018/10/sharepoint-ring-site-move/
code
SharePoint Ring - Site Move! 15 Oct 2018 | SharePoint 2016 One of the tools I maintain is sharepointring.com. This tool converts various values (typically hex) into their textual representation. This allows, for example, an administrator to determine the specific permission level a user will need based on a permissions mask check failure in the ULS log. As for the change to the site itself, I’ve moved it from Azure to GitHub… Running on Blazor. Blazor is C# WebAssembly (Wasm), allowing one to (more or less) run C# in a browser without having to use a web server that supports ASP.NET. This can significantly reduce costs, as hosting providers who offer ASP.NET (or PHP, Python, etc.) are often more expensive than those who host plain HTML. Blazor is experimental and “not for production” but… eh, it’s a site I can play around with. Blazor, or more specifically Wasm, does not work in IE. Sorry! But it does work in Chrome, Edge, and Firefox. The same features and functionality are still there in the site. The design is slightly different (if you haven’t noticed, I’m not a web dev and am using the Blazor template :)), and that’s about it from a user perspective.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00155.warc.gz
CC-MAIN-2020-29
1,194
5
https://candidats.net/knowledge-base/email-system/email-settings/
code
The page is at Settings -> Administration -> General E-Mail Configuration. This page has three sections: From address of outgoing email, Outgoing email testing, and Enable/disable automatic emails. From address of outgoing email: this email address is used as the from address of outgoing emails. Outgoing email testing: the outgoing mail can be tested here. Enable/disable automatic emails: the important events are listed, each with a checkbox to enable or disable its automatic email.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00864.warc.gz
CC-MAIN-2023-40
464
7
https://xbosoft.com/blog/defect-removal-effectiveness-agile-testing-qa/
code
In our recent agile testing webinar, we had an outstanding question on Defect Removal Effectiveness (DRE). For the DRE calculation, I believe below is the correct measure. Can you please double confirm? DRE = defects found pre-release / (defects found pre-release + defects found post-release). Yes, your calculation is correct. Ours is actually the inverse and would be called Defect Escape Rate (which you want to keep low) instead of Defect Removal Effectiveness. Let’s do an example just to illustrate. Let’s suppose you found 120 defects pre-release. Then after you released the software, let’s say you found 15 defects in one month. Therefore, the calculation is: 120/(120+15) ≈ 0.889, or about 88.9%. This would put you in the middle of the pack. Since we are talking about agile, do we really need to specify a 90-day period here? This is an excellent question. With agile, although you may work in 2-week sprints, and demonstrate ‘working software’ to your Product Owner, that doesn’t mean you necessarily ‘release’ to the end users. So, if you are doing a real release every 2 weeks, then the 90 days would not make sense. With most of our clients, they may not actually release the software in sync with their sprints, but on something slower. In any case, you need to be consistent. The number of defects found in production during a 1-week period is obviously going to be smaller than that found in a 90-day period. Also, you have to think about what kind of defects you count, i.e. P1-P4, and whether you weight them all evenly. To get more info on agile testing and metrics, you may want to download our agile test plan white paper, which contains items you should include in an agile test plan and discusses what you don’t need, too!
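To make the arithmetic concrete, here is a minimal Python sketch of the DRE calculation (and its inverse, the Defect Escape Rate) using the example figures from this post:

def defect_removal_effectiveness(pre_release_defects, post_release_defects):
    # DRE = defects found before release / all defects found (before + after release)
    total = pre_release_defects + post_release_defects
    return pre_release_defects / total

def defect_escape_rate(pre_release_defects, post_release_defects):
    # The inverse view: the share of defects that escaped into production
    return 1 - defect_removal_effectiveness(pre_release_defects, post_release_defects)

dre = defect_removal_effectiveness(120, 15)
print(f"DRE = {dre:.3f} ({dre:.1%})")                       # DRE = 0.889 (88.9%)
print(f"Escape rate = {defect_escape_rate(120, 15):.1%}")   # 11.1%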
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595787.7/warc/CC-MAIN-20200119234426-20200120022426-00421.warc.gz
CC-MAIN-2020-05
1,695
13
https://www.drupal.org/node/247152/commits
code
Set new default weights per node type Give Articles a default weight of 5, so the extra field is after Body (0) and before Tags (10). Give Pages a default weight of 101, so the extra field is after Body (100), and the same as Links (101) but seems to display after Links instead of before, unfortunately. Other node types default to the same weight as Pages. Add cache contexts to vary caching by URL Fixes issue where the AddToAny instance (an AddToAny Block, for example) was rendered with static attributes (sharing whichever URL/title was cached first) instead of dynamic attributes (sharing the current node's URL/title).
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189884.21/warc/CC-MAIN-20170322212949-00425-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
626
6
https://bentrask.com/?q=hash%3A%2F%2Fsha256%2F7c8153cfb8879dbb2c470780
code
For a long time I have put off thinking about mutability in content addressing systems. Thankfully, Jesse Weinstein pointed out that talking up flaws in other systems without presenting my own alternative was a little unfair. So I’ve spent some time putting together an initial but hopefully comprehensive overview of mutability in a distributed/decentralized content addressing system. My original plan was to have applications submit individual changes (commits), and then synthesize groups of changes into snapshots of files, like the working tree in Git. Unfortunately, after Jesse’s comments I quickly realized this wouldn’t actually work: StrongLink would provide addressing of the commits, but there would be no ability to resolve file hashes (without relying on some other application). After more thought, I eventually came up with a list of possible approaches: - Centralized, like the Web today - Blockchains (e.g. Namecoin) - Independent immutable files (commits, as above) - Last write wins (including public-key addressing) - Native diffs I don’t think centrally controlled mutability will ever go away, but obviously I’m trying to do something different. Blockchains and the like are quite interesting, but I want something that works offline (meaning AP). The last three options are where it gets interesting. As mentioned, using one file per “edit” is very easy and obvious. Each file can have arbitrary meta-data for querying. This is still the best option for things that don’t really have/need a strict identity, like a forum thread or instant messaging history (for posts and messages respectively). Each file has a useful content address (e.g. for replies). The next option, which currently seems quite popular, is public-key addressing. However, I see this as a subset of last-write-wins. The basic idea is that you query the system for a particular identifier (for example a public key, or just a filename), and then you use the most recent matching result. To my understanding, this is not just how IPNS works currently, but how it’s intended to work even once it’s complete. In theory, public-key addressing should prevent conflicting writes. However for a user with multiple devices (e.g. laptop and phone), there still exists a race condition between updating (especially if the laptop is asleep, or they can’t sync for some other reason). Because the storage system has no deeper understanding of the semantic changes, there is no way to resolve conflicts. There’s another problem with this approach: meta-data. If meta-data is assigned to each raw file, then when a mutable file is “renamed” (pointed at a different raw file), the meta-data stays behind. In theory it’s possible to attach meta-data directly to the mutable handle, but this adds complexity and might be difficult in a distributed setting (exercise for the reader). For the record, StrongLink has pretty good support for last-write wins addressing right now. In fact, it’s quite powerful because it works with arbitrary queries. You can ask for “the latest file named X and signed by Y” or whatever you want. There are two limitations currently: 1. StrongLink doesn’t yet verify digital signatures itself (known issue), and 2. there is no URI format/protocol for actually addressing files in this way. Actually, there’s a third problem, StrongLink’s queries might actually be too powerful, making it hard for other systems to use the same address format. But it would be straightforward to define a portable subset. 
Even /ipns/ address resolution might be possible (it depends on how IPFS computes them, which I haven’t looked into). There’s one other concern with public-key addressing: it gives the person with the key a lot of power over the links. Less than the traditional Web gives to each server operator, but even so, it’s a very similar model. Think of it like a directional channel between two people. This is probably good for “websites” representing corporations or individuals, but for “documents” it doesn’t seem that great. (StrongLink obviously keeps a full history of writes, as every system supporting this model should.) Also… when you store each write as a separate, immutable, “raw” file, you still need some kind of deduping or diffing to prevent large amounts of space being wasted. StrongLink doesn’t currently have this, and other systems which do block deduplication achieve it at great cost (IMHO). Okay, finally, onto the last option. One more option is what I call “native diffing” (as opposed to application-level diffing described initially, or the implicit diffing that might be part of last-write-wins compression). This is how Git really works. It has its own diff engine(s) (and accepts plugins), and has basically a full understanding of how changes are made and how to merge them. However, there are some problems: - Complex file formats (mainly binaries) require complex diff plugins - The system cannot always resolve merge conflicts - Files must be “assembled” to be addressed or loaded Wouldn’t it be nice if we could address all of these problems in “progressive enhancement” way? Well, in that case, the best way to handle fully mutable documents in StrongLink (including decentralized collaboration) is by… storing diffs as meta-data. Pretty simple, right? StrongLink already stores meta-data as a flexible, recursive CRDT. So the application itself can represent its edits however necessary for that file format. And StrongLink doesn’t stop you from getting conflict-free merges, if possible. If there are conflicts, the application is already there to (help) resolve them. Over time, if a particular change representation became popular (say just diff/patch for text files), support could be added directly to StrongLink as a module (or even as a reverse proxy). The reason I find this truly compelling is because of the following question: what address should a mutable file be known by? - In the case of file composition, the answer is there is no address (which makes sense because the particular set of components can be changed with queries) - In the case of last-write-wins, the address is some query (such as public key, which is effectively random) - In the case of meta-data-based mutability, the answer is the file’s original hash To me it doesn’t seem like there’s a better answer. The fact is that mutable files can be changed in unpredictable ways, so it’s impossible to choose a name up front that will always reflect its current content. But at least the file’s original content is still unique and meaningful. There’s still another question: if you start a blank new file that you intend to edit, what should it be called? I believe the answer is “it’s up to the user.” A mutable file’s initial content (and thus, address) will determine whose decentralized edits will be able to merge with it. 
- If your file is conceptually unique (or private), choose a random string - If you want to collaborate with a small group of people, you should mutually agree on a “file name” - If you want to share changes with an unknown group or everyone, choose a short and easily guessable name The ultimate test of this would be a real-time collaborative editor, like EtherPad or Google Docs. I believe a fully decentralized EtherPad clone based on StrongLink is fully possible using this approach. Now for a disclaimer: StrongLink’s meta-data support is still incomplete. For now, mutable meta-data is basically faked client-side, meaning performance slows down the more edits a file has. Fixing this is already a high priority, and it’s straightforward enough that complications are unlikely to arise. And to address Jesse’s implicit question: how is this different from Camlistore specifically? It differs in two ways: 1. the address of a mutable file is the original hash of the file, and 2. based on the file’s content, “useful” collisions are likely to occur (or easy to create), allowing two disconnected nodes to add a file and even begin editing it without duplication. I think it’s clear that StrongLink’s design is fairly complex. There are many different ways to achieve any particular goal, and those differences will have ramifications for mutability, querying, and other attributes. Hopefully this article is enough to show that the design is sufficiently powerful to handle (ahem) unanticipated needs, in a way that is still possible to reason through and makes sense in retrospect.
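To make the “diffs as meta-data” idea more concrete, here is a minimal, hypothetical Python sketch. It is not StrongLink’s actual API; the class and method names are invented for illustration. The point it tries to show is that a mutable document keeps the content address of its original version, and that each edit is just another piece of meta-data attached to that address:

import hashlib

def content_address(data: bytes) -> str:
    # hash://sha256/... style address, like the one in this page's URL
    return "hash://sha256/" + hashlib.sha256(data).hexdigest()

class ToyMetaStore:
    # Toy store: original-content address -> ordered list of edit records
    def __init__(self):
        self.edits = {}

    def create(self, original: bytes) -> str:
        addr = content_address(original)
        self.edits.setdefault(addr, [])
        return addr

    def append_edit(self, addr: str, edit: dict) -> None:
        # The application chooses the edit representation (a text diff,
        # a CRDT operation, etc.); the store just records it as meta-data.
        self.edits[addr].append(edit)

    def materialize(self, original: str, addr: str) -> str:
        # Naive replay of edits; a real system would merge them CRDT-style.
        text = original
        for edit in self.edits[addr]:
            text = edit["replace_with"]
        return text

store = ToyMetaStore()
original = "hello world"
addr = store.create(original.encode())   # the document's permanent address
store.append_edit(addr, {"replace_with": "hello, decentralized world"})
print(addr)
print(store.materialize(original, addr))

A real implementation would merge concurrent edits rather than naively replaying the last write, but the addressing scheme is the part this sketch is meant to illustrate.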
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575168.82/warc/CC-MAIN-20190922053242-20190922075242-00181.warc.gz
CC-MAIN-2019-39
8,564
44
https://community.oracle.com/thread/2414338?tstart=-1&messageID=10537835
code
GC agent installed in AGENT_ORACLE_HOME. RDBMS installed in RDBMS_ORACLE_HOME. Database and listener up and running, monitored by Grid Control; this implies that AGENT_ORACLE_HOME/sysman/emd/targets.xml contains the correct credentials for the database. The listener name given in RDBMS_ORACLE_HOME/network/admin/listener.ora is identical to the name of that listener target in Grid Control. AGENT_ORACLE_HOME/network/admin/sqlnet.ora must exist. Setup steps needed for operating system AIX: A. AIX is using its native snmp daemon pointing to the snmpdv3 daemon (this is the default on AIX5L): [hostname]/root> ls -l /usr/sbin/snmpd lrwxrwxrwx 1 root system 9 Jul 24 2009 /usr/sbin/snmpd@ -> snmpdv3ne* B. The script AGENT_ORACLE_HOME/network/snmp/peer/start_peer needs to be corrected on AIX: 1. the location of "whoami" needs to be corrected from "/usr/ucb/whoami" to "/bin/whoami" 2. the command starting the OS snmp daemon must be changed to … None of the following processes must be up and running: snmpd, emsubagent, master_peer, encaps_peer. The following ports must not be allocated: snmp, smux, 161, 1161, 199. Run as root: ORACLE_HOME= <full path to the AGENT_ORACLE_HOME> This must start the following processes as root: master_peer, encaps_peer, snmpd. As the agent ORACLE_HOME owner, run AGENT_ORACLE_HOME/bin/emctl start subagent. The snmp_*.ora files need to be copied from AGENT_ORACLE_HOME/network/admin to RDBMS_ORACLE_HOME/network/admin. Restart the listener and query the listener status; "RDBMS_ORACLE_HOME/bin/lsnrctl status <listener_name>" must show: SNMP ON
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00350-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
1,577
23
https://www.kenyans247.com/11164/php-get-the-contents-of-a-web-page-rss-feed-or-xml-file-into-a-string-variabl.html
code
PHP: Get the contents of a web page, RSS feed, or XML file into a string variable by Kenyans247(m): Sun 22, March, 2020 10:52am
You will often have the need to access data that resides on another server, whether you are writing an online RSS aggregator or doing screen scraping for a searching mechanism. PHP makes pulling this data into a string variable an extremely simple process. You can go with the really short method:
$url = "https://www.howtogeek.com";
$str = file_get_contents($url);
The only problem with that method is that some web hosts have URL access blocked in the file methods, for security reasons. You may be able to use this workaround method instead:
$crl = curl_init();
$timeout = 5;
curl_setopt($crl, CURLOPT_URL, $url);
curl_setopt($crl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($crl, CURLOPT_CONNECTTIMEOUT, $timeout);
$ret = curl_exec($crl);
curl_close($crl); // free the cURL handle when done
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00411.warc.gz
CC-MAIN-2020-16
1,446
18
http://waywardplatypus.deviantart.com/
code
Sorry about the lack of art recently. I've been very busy in my life. Assessments and scenarios in my life have taken time away, and I was out of paper for a while. But be warned, when I post some more art, there will be some surprises here. And to point out, it's going to be my birthday in around a month. Sorry if this is a little small.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164035309/warc/CC-MAIN-20131204133355-00060-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
340
3
https://www.cloudbolt.io/author/baunfire-alyssa/
code
It’s the convenient untruth: There is a common misconception that software that is flexible and extensible is also necessarily complex and hard to use. The thinking goes, if a product allows you to extend and customize it, it must be hard to set up and use. SAP is the classic example of this. Their ERP has long been the archetype of extremely powerful software that can do anything you want. However, lots of time is required from SAP professional services to get it functional within an organization. On the other end of the spectrum lie most consumer-facing mobile apps. They are simple to use, but generally have limited configurability or extensibility. I can’t program my own extensions for the Strava app on my phone (unfortunately). This misconception saves product managers, designers, and software engineers a lot of work. If they can make the case that extensibility and simplicity cannot co-exist, it makes their job much easier. Just because most software products lie on the line in the graph above does not mean that it is impossible to create a product that is both easy to use and extensible. When Auggy da Rocha and I founded CloudBolt, we did so with the specific intention of building a product that was both usable out-of-the-box without any customization, but that also allowed more advanced administrators to customize the product to look and act exactly the way required for their organization. Achieving both of these simultaneously was not easy, and did not come for free, but we knew that this combination of attributes was necessary to provide a solution customers would love. When we in the software industry get past the misconception that simplicity and flexibility cannot co-exist, we find it’s an intriguing and worthy challenge to figure out how to achieve both at once. Simplicity and powerful extensibility are not unrelated, but they can be independently improved and evaluated. Here’s a better way to visualize the graph above: I propose that we hold software to a higher standard, and that we use this list of attributes as a scorecard to help in the evaluation of software. This scorecard is also valuable for software development teams to evaluate the quality of their own software, and to strive for improvement. Though these attributes are focused on enterprise software, they are applicable for many consumer-facing products/services as well. The 11 Attributes of Extensible & Consumable Software - One can set up a new account (or, if on-prem, one can install the product) in 30 minutes or less - Value seen within the first hour of use - Full external-facing API - The product’s UI uses the user-facing API exclusively. In other words, the UI does not depend on any API calls or interfaces that are not documented and available to customers. - Authentication is customizable within the product’s UI - The UI has a customizable look & feel - The UI can be extended with custom modules - Back end logic that can be extended with custom modules in various hook points - Customizations & extensions are upgrade-safe - Full documentation and support for the API, extension framework, and hook points - Rich library of readily-available examples of extensibility Techniques for Achieving All 11 Attributes Some tips on how to achieve more of the attributes above: - Setting up a public location with many extensibility examples. 
At CloudBolt, this takes the form of a public GitHub repository (called the CloudBolt Forge) and a CloudBolt server (called the CloudBolt Content Library), which each installation of CB knows how to query so it can display available examples, letting users browse extensions and find one to start with. - Include time in the software design and implementation schedule to build the right API for new features, build a simple UI, and also make the feature extensible & customizable. - If you do not have sufficient time to make it easy and extensible, start with extensibility, and schedule a project to make it easy in the immediate next release. This is a more specific incarnation of Kent Beck’s rule for software: “Make it work, make it right, make it fast.” One example of this within CloudBolt is synchronizing user permissions. We had very little time to get this feature in initially, so we just exposed a new hook point at the time users log in, which allowed customers to write their own synchronization logic, starting with some examples we provided. Not long after that, we followed up with a full UI for configuring synchronization of user permissions that obviated the need for looking at or changing any code. - Collaborate in brainstorming. Get a group of creative people together around a [virtual] whiteboard, throw around ideas good and bad for ways to make the software both customizable and easy to use. You may be surprised at what you’re able to come up with together, and at how fun it is. - Use the extensibility framework internally! Any time a new feature needs to be developed, ask the question: what if we had to implement this without any changes to the core product code, just using the extensibility framework? If it’s too hard, the extensibility of your product could use improvement. - Constantly ask what the easiest way to provide customizability and extensibility would be. Instead of making the customer learn a domain specific language (DSL), could we allow them to program extensions in a language they already know? Instead of requiring them to code from scratch, could we give them a working chunk of code and then let them change it to meet their needs? Or, instead of requiring them to code, could we create a configuration UI for customizing the product? Today, IT professionals have precious little time to administer enterprise systems, but they are also expected to integrate many of them with each other, and also to customize each. Only through enterprise software solutions that are simple to use and administer, and also easy to extend and customize, can we empower IT professionals to succeed in their work. The importance of delivering software that achieves this combination of attributes previously thought to be incompatible is growing constantly, with rising user expectations and the proliferation and heterogeneity of systems that need to be managed. Experience the leading hybrid cloud management and orchestration solution. Request a CloudBolt demo today. CloudBolt integrates with a myriad of technologies in many different categories, with more integrations being added in every release. However, the defining characteristic of CloudBolt’s integrations is not their breadth, but their depth. For IT administrators, it is not enough for their cloud platform to have surface-level integration; it needs meaningful, substantive support for other technologies so the IT organization can provide a fully-automated self-service experience to their internal (and sometimes external) customers. 
One illustrative example is CloudBolt’s support for VMware. I will list all the ways in which CloudBolt integrates with VMware. The main takeaways are that CloudBolt has the most complete integration with VMware of any cloud management platform, and without all of these features within the integration, the IT team will be left to handle situations manually. The full benefit of automation is only realized when the integration goes this deep. The net effect of having this depth of automation for and integration with VMware (and other on-premises virtualization systems) is that IT teams have their private datacenters leveled up with functionality traditionally relegated to the public cloud – self service provisioning, management, and decommissioning of environments by end users without IT’s intervention, modeling cost tracking and implementing shameback/showback, policy-based features such as order approval workflows, enforcement of expiration dates for resources, and power scheduling for VMs to ensure systems do not consume resources when they are not needed. Features of CloudBolt’s Integration with VMware - VM Build process: - Discovery of available clusters, networks, datastores, datastore clusters, VM templates - Provisioning using: - templates stored in the VMware Content Library - normal VM templates - a blank VM and a network-boot based build system such as Razor or Cobbler - Creation & management of multiple disks - Creation & management of multiple network interfaces - support for specifying adapter type - Guest OS customization - Static IP - Setting annotations - Auto-selection of the datastore with the most free space - Datastore clusters - Following progress of the build task - Linked clone builds - Templatized specification of which VMware folder in which to place new VMs - Storage management - Define rate multiplier for premium storage backends - Limit access to storage backend per group or environment - VM management: - Creating new ones - Auditing existing ones and deleting any past expiration - Adding and removing disks, NICs, CPU, memory - All the policy-based management features that CloudBolt supports on other platforms, including execution of remote scripts, tracking of expiration dates, chargeback, power scheduling, the ability to log into VMs remotely from the CloudBolt web UI - VM Discovery: - Detection and storage of 26 distinct attributes on virtual machines - Automatically updating CloudBolt’s dynamic inventory database every 30 minutes with any changes to these attributes on VMs - History tracking for all changes, including those made in CloudBolt and those discovered. This allows reporting on change history over time, including the built-in ability to compute how CPU and GB memory hours each server used over the course of a month. VMware Cloud on AWS - Everything supported by CloudBolt’s vCenter integration above - Deployment and configuration of fenced networks - Support for load balancers - Support for edge gateways - Micro-segmented firewall rules (tagging) - Extension-friendly API wrapper to enable any NSX use cases like NAT’ing, custom route configurations, etc. - Discovery of flows: - Automatic discovery of flow inputs - Ability to map those inputs to CloudBolt attributes - Ability to set up a flow at any trigger point in CloudBolt, which allows it to run at many points during the provisioning process, be presented as a button on the server details page, run as a recurring job and many other places. 
- This allows reuse of existing investment in vRO workflows for the CloudBolt customers who have switched from vRA to CloudBolt - Deploying VMs via vCloud Director - Discovery of existing VMs - Management of existing VMs - Useful for orgs that want to modernize their cloud platform, but are not yet ready to remove vCloud Director For a cloud platform to deliver on its promise of self-service IT, it needs to have deep interactions with external systems, feature-rich integrations which do IT’s previously-manual work for them. This post covers the depth of CloudBolt’s integration with VMware, but a similar dive could be taken into all the other integrations CloudBolt provides out of the box and as importable content in its hosted Content Library. As always, if there are more aspects of integration you would like to see, we would love to hear from you. The list above has been fueled by excellent ideas from our customers over the last nine years. See these powerful VMware integrations in action. Request a CloudBolt demo today. CloudBolt sponsored and attended VMworld 2019 in San Francisco (with 12 CloudBolters in attendance!) and it was an energy-packed event. I’ll summarize some of the news and talk from the conference here. VMware’s main announcements Last week, VMware announced the release of: - Tanzu – their Kubernetes orchestrator, essentially an answer to Google’s Anthos. - Project Pacific – the effort to embed a container runtime into vSphere and provide visibility into both containers and VMs from within the vSphere UI. - Updates to VMware Cloud on Amazon Web Services (AKA VMC on AWS) – including accelerated GPU Services and a new study showing cost savings in moving to VMC on AWS. Analysis of VMware’s Direction Shift of focus from IT to developers VMware has traditionally focused on selling products and services to IT departments, but their messaging and product direction are steering toward selling to developers. This is likely in response to VMware’s observation that the locus of decision-making and the budget for technology are shifting toward development teams over time. Embracing of containers With both Project Pacific and Tanzu, it’s clear that VMware is now betting on containers and does not want to miss that train. These two projects will embed a container runtime in vSphere and provide a Kubernetes cluster management tool (playing in the same space as Google’s Anthos). Emphasis on VMC on AWS VMC on AWS is a key part of the hybrid cloud story that VMware is delivering. The idea is to keep running workloads on VMware ESXi, and using vCenter to manage them, but the servers run in data centers owned by AWS instead of customer-owned and operated data centers. This is appealing as it allows large organizations to swap their capex spend out for opex, and to do it without making major changes to applications to run using modern public cloud services and/or containers. The possibility remains that organizations could move applications off VMC on AWS to just AWS or a different cloud, so it will be interesting to see how VMware handles that long-term. vRA 8 Announced VMware officially announced vRealize Automation 8, the rewrite of their Cloud Management Platform. We talked with a lot of vRA 7.x customers who are wondering what the path forward looks like for them. vRA 8.0 will have some good new features (like more agnostic public cloud support, more flexible blueprints than vRA 7, and potentially enhanced extensibility), but it will have a subset of the features of vRA 7. 
It remains to be seen when upgrades from vRA 7 to 8 will be supported, or how difficult they will be when that day comes. It’s also unclear whether old-style extensions will be supported. What this Means for CloudBolt Since 2011, CloudBolt has been focusing on meeting the needs of both: - Empowering developers with a simple self-service way to obtain the resources they need to do their job AND - Turning the central IT team into superheroes, giving them unmatched visibility and the ability to orchestrate and automate everything At CloudBolt, we are passionate about the themes VMware brought up. Here’s how CloudBolt stacks up in these themes:
|Empowering developers & IT admins |✅ Since 2011
|Easy upgrades of CloudBolt |✅ Since 2012
|Agnostic hybrid cloud support |✅ Since 2013
|Infinite and easy extensibility |✅ Since 2013
|Solid support for GCP and Azure |✅ Since 2014 (plus 6 other public clouds in the ensuing years)
|✅ Since 2014
|✅ Since 2015, and getting deeper in every CloudBolt version
|VMC on AWS support |Coming in CloudBolt 9.1 in December
Summarizing, CloudBolt has been focused on the themes that matter most to IT and developers and the product has matured over many years of releases and management of production environments for global 2000 companies. We look forward to heading back to VMworld in 2020, and in the meantime you can find us at upcoming VMware User Group (VMUG) gatherings in Boston (9/25), Atlanta (10/2) and Phoenix (10/30) this fall. Stop by to chat with us! Want to see how CloudBolt stacks up with vRA? Download our datasheet today. At CloudBolt, we believe that software solutions should be easy to maintain, manage, and understand. We also believe they should be self-regulating and self-healing, when possible. You will see a focus on this starting in 8.4—Tallman but also continuing through our 9.x releases, which will give you better visibility into CloudBolt’s internal status, management capabilities directly from the web UI, and reduce the number of times you need to ssh to the CB VM to check things or perform actions. CloudBolt 8.4—Tallman introduces a new Admin page called “System Status” which provides several tools for checking on the health of CloudBolt itself. The System Status Page in 8.4—Tallman To see the System Status page in your newly installed/upgraded CloudBolt 8.4-Tallman, navigate to Admin > Support Tools > System Status. You will see a page that looks a bit like this: There are three main parts of this page. 1. CloudBolt Mode This section provides a way to put CloudBolt into admin-only maintenance mode. This prevents any user who is not a Super Admin or CloudBolt admin from logging in or navigating in this CloudBolt instance. This is useful for times when you need to perform maintenance on CloudBolt (e.g. upgrading it, making changes to the database, etc.), and you want to prevent users from accessing it while in an intermediate state, but you yourself need to perform some preparation and verification within the CB UI before and after the maintenance. 2. Job Engine This section shows the status of each job engine worker, each running on a different CloudBolt VM now that active-active Job Engines are supported. It also shows a chart of all jobs run in the last hour and day per job engine. When things are healthy, and the job engines are not near their max concurrency limit, there should be a fairly even split of how many jobs are being run by each worker. 3. 
Health Checks This section has several kinds of checks: - Indications of the health of a specific service, as would be seen from the Linux command line when running `service <name> status` - Tests of OS-level health, such as a check of available disk space on the root partition - Functional tests, which perform some basic action to make sure systems are working properly. Functional tests in 8.4—Tallman include writing a file to disk and deleting it, creating an entry in the database and deleting it, and adding an entry to memcache and deleting it. Ensuring the health of the systems that underlie CloudBolt can help you quickly hone in on the root cause of an issue, and we hope that the system status page will help narrow the time it takes to troubleshoot and resolve issues with CloudBolt. What’s Next for the System Status Page We have some ideas for what we might add next: - Uptime metrics for each job engine worker - The average time for jobs to complete for each worker - Disk space checks for all partitions on the CB VM - CPU, memory, I/O, and network utilization for the CB VM - Uptime for the CB VM as a whole - Network health checks, including: - testing DNS lookups - testing pinging the gateway - testing connections to any configured proxies If there are any of these that seem like they would be especially useful to you, we’d love to hear that to help us prioritize. We’d also love to hear any additional ideas you have for this new page! You can contact us here, and if you don’t already have CloudBolt, download our free 25 VM Lab to try out the new features. Public clouds provide an easy path to deploying virtual machines (VMs), but this ease of deployment, if not properly managed, can lead to a proliferation of uncategorized, mysterious, and often expensive VMs. This reviled situation is known as VM Sprawl. Networking Appliance Company: Case Study Case in point, a major networking appliance company had adopted public cloud usage team by team, and wound up with more than a dozen AWS, Azure, and GCP public cloud accounts. The result was multiple interfaces to see and manage all of their VMs, and no unified way to track each VM’s purpose, owner, or lifecycle. The inability to gain visibility into their VMs was costing the enterprise IT team money and time, and exposing them to unnecessary security risks. Step 1: Automatically Set Expiration Dates The first step they took to get control over the situation was to install CloudBolt and to connect it to all of their public cloud accounts. When connected to a virtualization system or public cloud account, CloudBolt automatically discovers all of the VMs, networks, images, etc., and begins tracking and reporting them. Standing up CloudBolt and connecting all of their public cloud accounts took less than half a day and gave the IT admins a single web interface where they could see all of their resources across their various public cloud accounts. It’s worth noting that CloudBolt synchronizes with the providers’ inventory every 30 minutes, so if a user powers off a VM, creates a new one, or changes an existing one (e.g. adding memory to it), that change will appear in CloudBolt within half an hour. Step 2: Plug-in Next, the customer added a post-sync plug-in to CloudBolt that automatically set expiration dates on all of their VMs (here’s a sample CloudBolt plug-in that can be used as a starting point).
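The linked sample plug-in isn’t reproduced in this post, so here is only a rough, hypothetical Python sketch of the idea; CloudBolt’s real plug-in API may differ, and the hook signature, the server attribute names, and the 60-day grace period are all assumptions made to match the story told just below.

from datetime import date, timedelta

GRACE_PERIOD_DAYS = 60  # assumption: matches the 60-day notice described below

def run(discovered_servers, **kwargs):
    # Hypothetical post-sync hook: stamp a default expiration date on any
    # discovered VM that has no expiration set, so unclaimed VMs age out.
    default_expiration = date.today() + timedelta(days=GRACE_PERIOD_DAYS)
    updated = 0
    for server in discovered_servers:
        if getattr(server, "expiration_date", None) is None:
            server.expiration_date = default_expiration
            server.save()
            updated += 1
    return "SUCCESS", "Set default expiration on {} unclaimed VMs".format(updated), ""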
They then emailed all of their developers and other users of the public clouds to let them know that their VMs would be turned off in 60 days, unless they went into CloudBolt’s web UI, claimed their VMs, and changed or removed the expiration date. This step was extremely valuable to the organization as it allowed them to know which of their VMs were still needed, and which users and groups they belonged to. By the 60 day mark, they saw that half of their public cloud VMs were powered down, drastically reducing their public cloud costs. CloudBolt’s discovery and expiration date management had enabled the equivalent of sticking a post-it note on the shared refrigerator saying “Unclaimed food will be discarded Friday at 5pm”. The end result: chaos became order, their costs decreased drastically, no important VMs were lost, and the clean up took place without incident. Step 3: Compare Before & After Costs After these two steps, the IT department was able to contrast their pre-CloudBolt public cloud bills with their post-CloudBolt bills and present the savings to their peers and management. The strong accolades and recognition received for their work enabled the IT team to continue with additional improvements. Self-Service IT & Power Schedules Another improvement made was the roll-out of self-service provisioning of resources in any public cloud or any of their private virtualization systems (VMware and OpenStack). Depending on the group and the environment for the new resources, expiration dates with maximum lengths were set up to be enforced by CloudBolt (here’s a video on how those expiration dates are set up). In addition, the team set up automated power schedules in CloudBolt so it would turn off VMs when they were not in use (saving even more money), implemented limits & quotas on groups and environments, and added approval processes for any orders in production environments. For this IT department, striking back against VM sprawl saved money, time and increased visibility, and also greatly enhanced navigation of the infrastructure. Now, anyone with appropriate permissions can see which VMs belonged to who, how they are changing over time, plus notes detailing their purpose. CloudBolt also allowed the entire IT organization to collaborate and troubleshoot more effectively. It established a common interface for cross-team collaboration, where IT and their internal customers gained a common control plane for infrastructure and applications. Their DevOps practices were advanced and CloudBolt provided them with a place to automate and standardize their practices and processes. To learn more download our Solution Overview or get a free 25-server lab license today. History of Technology: From Mainframe to Hybrid Cloud From Mainframe to Hybrid Cloud For Tech’s Future, the Only Constant is Change Computing infrastructure has come a long way in the last 50 years and the rate of change continues to rise. In order to inform our view of where infrastructure and computing platforms are going in the future, we need to take a look at the past and how far we’ve come… 1960s – Mainframes & Timesharing During the mainframe epoch, companies had only a few large computers, occupying entire rooms. They usually required physical access to use (though logging in over phone lines with primitive remote terminals started to emerge in the ‘60’s). Maintenance of these machines required several physical operators, and automation of this maintenance was not widely considered. 
1970s – Advent of Personal Computers During the ’70s, computers started showing up on people’s desks, albeit in a limited manner, instead of filling entire rooms. Administration of these ‘desktops’ was still done locally. 1980s – Networks Arise The ’80s saw a growth in, dare I say, widespread connectivity of computers, including modems appearing everywhere, DNS’ creation in ’83, usenet, gopher, and even the inception of the World Wide Web (WWW) in ’89. Still, servers that large organizations owned and operated remained on a relatively small scale, and administration of these servers was done on a one-off basis. 1990s – The Web and the Proliferation of Physical Servers The ’90s saw drastic change to computing infrastructure. The first popular web browser (Mosaic) was released in ’93 and suddenly more servers were needed than before. Companies quickly moved from having 10s or 100s of servers to having tens of thousands of servers. This shift required a new approach to server management. No longer could you hire one system administrator to manage each server (or a few); instead, each needed to manage hundreds, an insurmountable task without automation. Cfengine was released as the first configuration management tool in 1993, but the demand for automation greatly outpaced available solutions. 2000s – Virtualization In the early ’00s, the maturation of Linux catalyzed a shift from traditional, proprietary Unix systems (such as Solaris, HP-UX, and IBM AIX, which ran on proprietary hardware) to RedHat and other Linux variants, running on commodity hardware. This was fueled by, and in turn further fueled, the expansion in the number of servers that needed to be managed. The early ’00s saw the emergence of a new class of products called Data Center Automation (DCA) products, including Opsware and BladeLogic; however, Sun and HP had their competitors too. Virtualization became accepted in the mid-’00s, first in dev/test labs, but increasingly in production environments. This let companies slow the growth in physical hardware, while still adding virtual servers, each with its own running OS. This resulted in more operational efficiency, but management nightmares abounded as it became harder to track, patch, upgrade, and secure all of the operating systems running in a datacenter. More mature configuration management tools rose from this chaos, notably Puppet and Chef (founded in 2005 and 2008, respectively), and the term DevOps was coined in 2007. 2010s – Public Cloud While AWS was released (in non-beta form) in 2008, it wasn’t widely adopted as a platform for enterprise computing until the 2010s. Suddenly, all of the frustrations of dealing with one’s own data centers could be solved with a credit card. No more waiting for the IT team to create your VM, no more dealing with the possibly obstructionist networking, database, or security teams… Companies throughout the 2010s have shifted back and forth between bearishness and bullishness on the public cloud (often depending on how recent and shocking their last bill was). However, serious enterprise IT shops are realizing that hybrid cloud is truly the best solution, and the end goal. Hybrid cloud enables them to use a mix of public clouds and their own datacenters, choosing the best environment for each workload. Hybrid also allows IT admins to use public cloud during times when demand is above average, scaling back down when demand subsides, so they do not wind up with a gigantic bill. 
The Advent of a Hybrid Cloud Management Platform The first pre-release of what is now CloudBolt was created in 2010. The gap that my co-founder Auggy and I saw was that, in larger companies, the interface to the IT organization was broken. Developers (and other folks who needed IT resources), would submit a ticket to IT, and then wait weeks to get what they needed (often a VM, but sometimes a physical server, network change, storage allocated, etc). This problem was exacerbated in the ‘90s and ‘00s with the rise in demand of access to servers and VMs, and was made highly contentious by the advent of public cloud (“If I can get a VM in minutes from AWS, why do I have to wait weeks for my IT people to get me one?!”). CloudBolt brings the experience of using a company’s private datacenter up to par with the public cloud experience, and spans the eras from the physical server age, through VMs and public clouds, to the container-based and serverless age of computing. Most large companies today have a bit of each of these eras represented, and now CloudBolt provides them with a unified, easy-to-use interface to provision, manage, and control all of the artifacts of past eras in a consistent way, and from a common web interface and API. 2020s – Hybrid, Containers, and Serverless So that brings us to the next decade. The one thing we know for sure in today’s world is that the only constant is change. Want to learn more? Download the CloudBolt Solution Overview or check out our videos for more info!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473558.16/warc/CC-MAIN-20240221202132-20240221232132-00231.warc.gz
CC-MAIN-2024-10
29,557
192
https://english-bookys.com/courses/53384-net-essentials-linq-for-xml
code
.NET Essentials: LINQ for XML - Category: Courses - Views: 20 - Posted on: 17/04/2021 04:33 The first explanation you typically hear about Microsoft LINQ is that it provides an in-language query tool to manipulate the contents of arrays and lists. Explore LINQ further and you’ll find it works with other popular data sources like XML files. In this course, instructor Walt Ritscher shows you how LINQ to XML uses LINQ extension methods to read, create, search, and manipulate XML in a simplified way. Walt walks you through LINQPad, the lightweight, powerful code editor and code runner that is used in this course, then explains how to load XML into different LINQ classes. He covers how you can get different elements and attributes from XML and some of the ways you can work with elements and attributes, after getting them. Walt describes a variety of query operators that you can use. He concludes with a discussion on how you can create and edit XML structure with LINQ.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00367.warc.gz
CC-MAIN-2021-39
979
6
https://www.chemengr.ucsb.edu/research/modeling-theory-simulation/multiscale-simulations
code
The Shell group is developing fundamental approaches to coarse-grained models and to linking simulations across length and time scales. Their unique information-theoretic approach is able to quantify information loss upon coarse-graining in such a way that permits systematic generation of optimal coarse models. They continue to expand this general theoretical framework and are developing robust simulation methods and algorithms to enable large-scale yet physically accurate coarse-grained models of complex molecular systems. A collaboration between the Leal and Shell groups is also developing multiscale strategies for coupling the continuum transport equations with mesoscale and molecular simulations of fluid flow. These methods are being used to investigate transport problems in which both molecular and macroscopic length and time scales play an important role, for example, when molecular resolution is needed at an interface to capture complex phenomena or provide an appropriate hydrodynamic boundary condition. The Fredrickson group has translated force-matching techniques from particle-based simulations to effect systematic coarse-graining of polymer field theories. The methodology has a close relationship to renormalization group theory and when fully optimized will enable seamless simulations across length scales ranging from a few nanometers to microns and beyond. Another effort in the group involves the development of strategies for mapping complex-valued (D+1)-dimensional field theories to real-valued D-dimensional field theories with significantly reduced computational complexity. The Peters group has made foundational advances in the development of rigorous but practical strategies for investigating long time-scale events through powerful rare events methods.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819067.85/warc/CC-MAIN-20240424045636-20240424075636-00452.warc.gz
CC-MAIN-2024-18
1,797
4
http://twscript.paulanorman.info/docs/html/TheArrayconstructor.html
code
The Array constructor
Since the Array constructor is ambiguous in how it deals with its parameters, it is highly recommended to always use the array literal notation - [] - when creating new arrays.
[1, 2, 3]; // Result: [1, 2, 3]
new Array(1, 2, 3); // Result: [1, 2, 3]
[3]; // Result: [3]
new Array(3); // Result: []
new Array('3') // Result: ['3']
In cases when there is only one argument passed to the Array constructor, and that argument is a Number, the constructor will return a new sparse array with the length property set to the value of the argument. It should be noted that only the length property of the new array will be set this way; the actual indexes of the array will not be initialized.
var arr = new Array(3);
arr[1]; // undefined
1 in arr; // false, the index was not set
The behavior of being able to set the length of the array upfront only comes in handy in a few cases, like repeating a string, in which it avoids the use of a for loop.
new Array(count + 1).join(stringToRepeat);
The use of the Array constructor should be avoided as much as possible. Literals are definitely preferred. They are shorter and have a clearer syntax; therefore, they also increase the readability of the code.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806388.64/warc/CC-MAIN-20171121132158-20171121152158-00445.warc.gz
CC-MAIN-2017-47
1,292
15
http://www.dzone.com/links/just_another_manic_cyber_monday_are_you_ready.html
code
Virtually any network-based application can be made faster by optimizing the number of bytes...
In this step-by-step guide, Paul Mariduena and Morgan Johnson start with an existing sample...
This series of posts covers our experiences whilst running ‘Coding Dojo’ sessions over a 3...
AMAZON Still Uses OpenID!
States not only hold large amount of personally identifiable data, which can be used for...
You are willing to redesign your webpage? But have you considered all possibilities? Why do...
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646180.24/warc/CC-MAIN-20141024030046-00196-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
541
6
http://libraryclips.blogsome.com/2006/04/12/
code
Out of all the newsmastering portals Kinja is the first I’ve seen in a folksonomy environment, and as we go on you will see it is one step away from being a fully-fledged Reading List folksonomy. I guess it is a feed folksonomy, but you also get the beauty of seeing feeds organised into user river of news, as well as tag river of news (user and general level), as well as related sites river of news…so it goes further to being a newsmaster folksonomy.

How it works
Load in some feeds via an OPML…great start…or add a few at a time…there is also a bookmarklet to add sources…add some tag/s to your feeds. There you have it, a very simple low-key river of news public reader that archives posts…here’s my 50 or so favourite feeds…click on a tag to read content from feeds with a particular tag. This is what Blogdigger Groups, MySyndicaat, SuprGlu, and others are missing, ie. folders/tags to group feeds so you can view a tag/folder river of news…Kinja provides a spliced feed for every tag/folder river of news, but not an OPML Reading List. Here’s my mix. Here’s my feed. Here are my sources. Here’s my Reading List. Click on a tag to view a mix at the tag level…or grab a feed at the tag level. You can view posts and backlinks from just one source…also view a whole lot of statistics. This view also lists related sources from the Kinja community (based on the common tags). You can see a river of news for related sites for a given source; here is the river of news from related sources to Library Stuff. When you view a source you will see that people have assigned tags; clicking on a tag will show a river of news from sources with that tag…here’s a topic about RSS.

In a nutshell
You can make your own and also view other people’s river of news, just like many of the other newsmastering services, but since you can tag your sources, you have tag-based river of news as well. If you view the contents of just one source, it shows other sources who also have that tag (related sources), and you can add these to your river of news with one easy click. Also you can see a river of news from sources related to the current source on your page (based on common tags)…so in all you have 4 types of river of news: user, tag (user), tag (community), and related sources. You can’t search your own river of news, but you can search a keyword…but this is basically searching for a tag; you will see the tag “RSS” is the same as a search for the term “RSS”. (Actually this is the search return page you would see first, then you can see content based on time.) NOTE: there isn’t a tag for OPML, but there is a search for OPML.

Reading List folksonomy
What about related user accounts to mine based on common sources? Or tag your user river of news, so others can discover river of news digests by tag…this would give you a list of user accounts with a given tag, and maybe even read a mega-river of news. This is kind of coming back to a Reading List folksonomy, only with a river of news to boot…hang on, this is a Reading List folksonomy in a way, as every user has an OPML you can share and discover…imagine this also at the user tag level (ie. a Reading List for each of your tag river of news).

[ADDED: you don’t need to imagine, I just noticed each tag river of news has a spliced feed and its own OPML Reading List, so it’s like a feed folksonomy where each tag of feeds supplies an OPML, so this goes a step further than the other feed folksonomies so far] Also, as I mentioned before, if you could tag your user account, this would really be a Reading List folksonomy, as you could share and discover Reading Lists by tag. [ADDED: I guess you could say all your feeds with a specific tag could be a Reading List in its own right] Technorati Favourites could choose to do something similar: we already have our own river of news, and we can easily add sources, but we aren’t tagging our sources or tagging our river of news in order to share and discover (that is, sharing and discovering feeds, river of news, reading lists).
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698150793/warc/CC-MAIN-20130516095550-00097-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
4,093
30
https://notmuchmail.org/pipermail/notmuch/2019/027968.html
code
[BUG] emacs: notmuch-mua-attachment-check finds triggering string inside forwarded messages
ekeberg at kth.se
Wed May 8 10:19:18 PDT 2019

I have found what seems to be a bug, or at least a misbehaviour, of the "missing attachment warning" implemented by the otherwise so nice notmuch-mua-attachment-check. It works fine to detect the regexp for attachments in simple messages. The problem is that it also triggers the warning if a matching string is present inside a forwarded message. This is particularly annoying when forwarding messages originating from MS-Exchange, since those seem to always include a hidden header "X-MS-Has-Attach" where the word "Attach" constantly leads to false missing-attachment warnings. Would it be possible to somehow restrict the regexp search to the part of the message actually being authored? Would it be too simplistic to end the search at the first occurrence of "\n\n<#"?
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250603761.28/warc/CC-MAIN-20200121103642-20200121132642-00095.warc.gz
CC-MAIN-2020-05
915
15
http://hulen.com/java-in-memory-cache/
code
Let's look at creating and using a simple thread-safe Java in-memory cache. It would be nice to have a cache that can expire items based on a time to live as well as keep the most recently used items. Luckily the Apache Commons Collections library has an LRUMap, which removes the least-recently-used entries from a fixed-size map. Great, one piece of the puzzle is complete.

For the expiration of items we can timestamp the last access and, in a separate thread, remove items when the time-to-live limit is reached. This is nice for reducing memory pressure in applications that have long idle times in between accessing the cached objects.

There is also some debate whether the cache should return a cloned object or the original. I prefer to keep it simple and fast by returning the original object, so the onus is on the user of the cache to understand that modifying the returned object also modifies the object in the cache. Notice this is also an in-memory cache, so objects are not serialized to disk.

Let's review the cache implementation below. The cache object has a protected inner class CachedObject which tacks a timestamp onto the object; the timestamp is used later for expiring objects from the cache. The class is actually pretty simple with the exception of the internal cleanup thread. The thread for cleaning up items sleeps for the preset time supplied to the constructor, then wakes and processes the cache expirations synchronously. This is important because with a large cache it may take some time before the cleanup method is called again; the effective interval is the total cleanup time plus the timer interval. I prefer this approach over a timer callback because the cleanup thread will not add extra load to the system if it falls behind.

Notice the cleanup code synchronizes on the map and copies all the keys to another list to delete. This allows the map to keep serving add requests on other threads while the cleanup thread removes objects from the cache. But the user also needs to be aware that each cleanup call must lock the cache and iterate over the entire set, which might cause a noticeable pause under high load. Then we loop through all the keys, expiring objects from the cache and yielding the thread for other processing in between expiring individual objects.

I hope people find this Java in-memory cache useful. There are a lot of different caching modules out in the wild, but I wanted to introduce a simple thread-safe in-memory cache without the overhead of having to implement Serializable or Cloneable. Utils - earnstone-utils-0.1-all.zip
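Since the actual implementation is only distributed in the linked zip and is not reproduced in this excerpt, here is a minimal sketch of the design described above. It assumes Apache Commons Collections 4 (org.apache.commons.collections4.map.LRUMap) on the classpath; the class and member names (SimpleLruTtlCache, CachedObject, cleanup) are illustrative and are not the article's own code.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.commons.collections4.map.LRUMap;

public class SimpleLruTtlCache<K, V> {

    // Wraps a cached value together with its last-access timestamp.
    protected class CachedObject {
        long lastAccessed = System.currentTimeMillis();
        final V value;
        CachedObject(V value) { this.value = value; }
    }

    private final long timeToLiveMillis;
    private final LRUMap<K, CachedObject> cacheMap;

    public SimpleLruTtlCache(long timeToLiveMillis, long cleanupIntervalMillis, int maxItems) {
        this.timeToLiveMillis = timeToLiveMillis;
        this.cacheMap = new LRUMap<>(maxItems);   // evicts least-recently-used entries

        if (timeToLiveMillis > 0 && cleanupIntervalMillis > 0) {
            Thread cleaner = new Thread(() -> {
                while (true) {
                    try {
                        Thread.sleep(cleanupIntervalMillis);  // sleep, then expire synchronously
                    } catch (InterruptedException e) {
                        return;
                    }
                    cleanup();
                }
            });
            cleaner.setDaemon(true);
            cleaner.start();
        }
    }

    public void put(K key, V value) {
        synchronized (cacheMap) {
            cacheMap.put(key, new CachedObject(value));
        }
    }

    // Returns the original object (not a clone) and refreshes its timestamp.
    public V get(K key) {
        synchronized (cacheMap) {
            CachedObject cached = cacheMap.get(key);
            if (cached == null) return null;
            cached.lastAccessed = System.currentTimeMillis();
            return cached.value;
        }
    }

    // Collects expired keys while holding the lock, then removes them one at a
    // time, yielding between removals so other threads can keep using the cache.
    public void cleanup() {
        long now = System.currentTimeMillis();
        List<K> expiredKeys = new ArrayList<>();

        synchronized (cacheMap) {
            for (Map.Entry<K, CachedObject> entry : cacheMap.entrySet()) {
                if (now > entry.getValue().lastAccessed + timeToLiveMillis) {
                    expiredKeys.add(entry.getKey());
                }
            }
        }

        for (K key : expiredKeys) {
            synchronized (cacheMap) {
                cacheMap.remove(key);
            }
            Thread.yield();
        }
    }
}

Typical usage of this sketch would be something like new SimpleLruTtlCache<String, User>(200000, 500, 100): a 200-second time to live, a cleanup pass every 500 ms, and at most 100 items kept by the LRU map.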
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991772.66/warc/CC-MAIN-20210517115207-20210517145207-00439.warc.gz
CC-MAIN-2021-21
2,621
5
https://phabricator.wikimedia.org/p/danshick-wmde/
code
Fri, Jan 22
As a technical writer I'd like to be able to write instructions for a user manually installing Scribunto; these instructions exist in part, but I'd benefit from knowing about any special install instructions or problems.
The results/worklog from the install process for OAuth and integration with QuickStatements should flow into the hands of the technical writer for inclusion here: https://www.mediawiki.org/wiki/Wikibase/Suite#OAuth
The results/worklog from the install process for QS should flow into the hands of the technical writer for inclusion here: https://www.mediawiki.org/wiki/Wikibase/Suite#QuickStatements
Tue, Jan 12
Mon, Jan 11
Fri, Jan 8
Dec 17 2020
Nov 26 2020
Thanks for the context, @Peachey88. My suggestion here is that there be a text label, regardless of the icon.
Nov 25 2020
Nov 18 2020
Nov 3 2020
Thanks for the followup. Here's hoping @ShiehJ will have her team weigh in on this.
Oct 22 2020
Oct 21 2020
Reviewed and edited.
Oct 19 2020
Oct 14 2020
I've done as you requested. I respectfully request that if Herald allows functionality that is rejected by organizational policy, that functionality should be disabled if possible or at least red-flagged on the page that enables it, or on https://www.mediawiki.org/wiki/Phabricator/Help/Herald_Rules .
Thanks for testing!
Beat me to it :)
Thank you, I tried it!
Thanks @Kizule ! I opened https://phabricator.wikimedia.org/T265453 since you said I would need another Phab task for it. If I can set my own Herald rules, please direct me to the documentation. Much appreciated --
Thank you for the speedy action! I don't know what a Herald rule is, but I assume you're saying I'd need one for setting a default assignee. Will do --
For WMDE-Technical-Writing the default assignee should be me.
Oct 9 2020
PR to come today.
Oct 8 2020
Looking forward to reviewing it --
Oct 6 2020
Unfortunately I don't think they help users who don't know Wikidata's policies, i.e. when to delete/archive, to solve case 1
Howdy, I'm the technical writer in question. As is probably clear to everyone involved, properly fine-tuning this message depends on a good understanding of what's happening behind the scenes.
Oct 5 2020
Sep 28 2020
Good to know. I'll accept the PR and ask further questions elsewhere :)
Is there a reason we should have a copy of this file on the wikiba.se website instead of linking to the source? It's not linked anywhere on the site itself, and I think it's a relic from the previous version of the site.
Sep 24 2020
If there is still valid concern about broken links, the next step is to check the logs of the webserver hosting doc.wikimedia.org for 404s and other related errors.
Jul 2 2020
Thank you @Dzahn ! I am often flummoxed by usernames with spaces in 'em, but this worked. Much appreciated --
Jul 1 2020
Thank you! Could someone let me know what credentials to use for https://turnilo.wikimedia.org/ ? I am able to log into superset (thanks, Kris!) with my shell username and Wikitech password.
Jun 24 2020
Signed. Thank you all!
Jun 23 2020
Content is live.
Jun 15 2020
Processing feedback received end of last week.
Jun 5 2020
Preview sent to stakeholders for review. Email sent to @KFrancis . Thanks all!
Jun 4 2020
May 13 2020
May 7 2020
But for now: https://github.com/wmde/wikiba.se/pull/8
Yes, although the resources page is going to get worked over bigtime in the coming month, putting this up top immediately is wise.
Apr 27 2020 I'm still very much coming up to speed on what docs exist, but I would love to be kept in the loop on this task.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565541.79/warc/CC-MAIN-20210125092143-20210125122143-00111.warc.gz
CC-MAIN-2021-04
3,566
58
https://docs.microsoft.com/en-us/answers/questions/564478/user-profile-disks-fault-tolerance.html
code
Do I understand correctly that Scale-Out File Server (SOFS) allows one of the cluster nodes to be shut down while users are using the disk? That is, shutdown or failure of one of the cluster nodes will not interrupt the user's session, and the user profile disk will remain connected. Our servers are located in Azure. Can we use a two-node Storage Spaces Direct scale-out file server for UPD storage in Azure without using virtual servers for this? Do I understand correctly that the disk for the cluster must be physical? I.e., I cannot use a virtual disk for the cluster, since the disk array must be available to two or more cluster nodes?
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00417.warc.gz
CC-MAIN-2021-49
619
5
http://www.antoniocayonne.com/blog/2015/4/25/cut-off-friday-april-2415
code
I woke up and my phone, for the 4th day in a row, was dead. That got me. So frustrated. Felt like I'd lost all contact with the world - with my wife. Bought a new one. Reflected on how lucky it was I was able to do that, relatively painlessly. And I shut my damn mouth.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514578201.99/warc/CC-MAIN-20190923193125-20190923215125-00160.warc.gz
CC-MAIN-2019-39
269
5
http://forums.macrumors.com/archive/index.php/t-64853.html
code
View Full Version : Firefox fills the IE void
Mar 20, 2004, 01:38 AM
Link: Firefox fills the IE void (http://www.macbytes.com/link.php?sid=20040320023806)
Posted on MacBytes.com (http://www.macbytes.com)
Approved by Mudbug
Mar 20, 2004, 02:27 AM
As a Web developer, the bane of my existence used to be Netscape back in the bad old 4.x days of the browser wars. But now, years later, in wonderful XHTML/CSS land, the bane of my existence is IE (on Windows). I can't tell you how many times I've crafted the perfect Web design, rendered beautifully by Safari and Firefox -- only to see it mangled to death by IE. Then I have to do all kinds of crappy CSS workarounds and funky HTML hacks to get IE to look right. And believe me, it's not my code at fault -- there are numerous documented IE bugs all over the Web. If Firefox had 80% of the market and Safari had 20% of the market, I'd be the happiest Web developer in the world. As it is, I live in constant dread of IE bugs plaguing my most wonderful designs. Ah, but one can dream....
Mar 20, 2004, 08:52 AM
I've now made the push at my company to no longer offer support for certain browsers so we can push forward and utilize advances in web design and development in our web apps. We now require IE 6, Netscape 6 or higher, Mozilla 1.0 or higher, Safari 1.0 or higher. I also try to support other major browsers when possible (Opera, OmniWeb). I check that Firefox works, but it is not officially supported as it's still technically in beta. There are so many free browser alternatives that we support. We can't have someone come to us complaining about rendering issues with IE 5 (especially with its box-model issues). IE 5 support has even been dropped by MS; our developers can no longer download IE 5.5 through their MSDN subscriptions for testing. So without a proper test bed, we cut support. We will not in any way, shape or form support NS 4.x (I was an avid NS 4.x user prior to NS 6); it's got way too many quirks.
Mar 20, 2004, 07:46 PM
If you must work on Windows, Firefox is the way to go. But I would still take Safari any day. I'm sure the "find as you type" feature will find its way into a future version of Safari.
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824970.20/warc/CC-MAIN-20140820021344-00168-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
2,184
15
https://bugzilla.redhat.com/show_bug.cgi?id=175603
code
Red Hat Bugzilla – Bug 175603
Evolution LDAP auto-completion causes hang
Last modified: 2007-11-30 17:11:19 EST

Description of problem:
Have configured Evolution to use IMAP, SMTP and LDAP server for network mailbox. Configurations are exactly the same as for a fully working setup under RHEL 4 and Fedora Core 4. Autocompletion is turned ON for the LDAP server only in all cases. When entering part of an address under the FC5test1 installation, evolution-data-server goes into a 100% CPU loop (I don't know how to figure out how it's getting into that state I'm afraid - sorry). As it stands, it's been in the loop for 10 minutes. The first time this happened, I had the Contacts application open as well and it came up with the crash window for evolution-data-server, but the window was unresponsive so I couldn't fill out a bug.

Version-Release number of selected component (if applicable): 18.104.22.168-1

How reproducible: Happens every time I try to get an auto-completion to work

Steps to Reproduce:
1. Set up Evolution with IMAP, SMTP and LDAP servers
2. Turn autocompletion ON for LDAP server and OFF for local server
3. Create new mail and type part of an address in

Actual results: The Evolution and Compose a message windows become unresponsive and don't re-draw, and the evolution-data-server process starts taking 100% CPU time.

Expected results: Expected address to be auto-completed from a list of possible matches from the LDAP server.

Additional info: As stated above, identical setups work perfectly well under Fedora Core 4 (fresh install with no updates) and RHEL 4 WS (fresh install with updates from RHEL Satellite server).

Trying to wake this bug up as it would be a shame for it to go out in this state. Finding that the bug does actually exist on Fedora Core 4 also. Main work desktop is a Fedora Core 4 machine. Machine uses 100% CPU after sending an e-mail - always caused by evolution-data-server. It is then also impossible to send an e-mail using the LDAP address book. Killing the evolution-data-server process solves the CPU usage issue, and allows the LDAP address book to be used.

These bugs are being closed since a large number of updates have been released after the FC5 test1 and test2 releases. Kindly update your system by running yum update as root user or try out the third and final test version of FC5 being released in a short while and verify if the bugs are still present on the system. Reopen or file new bug reports as appropriate after confirming the presence of this issue. Thanks

Closing as NOTABUG as I am no longer in a position to test this. I think it was still an issue when FC5 was released, but the memory is distant. No longer working for a company with this kind of mail config.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744649.77/warc/CC-MAIN-20181118201101-20181118223101-00232.warc.gz
CC-MAIN-2018-47
2,692
42
https://www.redhat.com/archives/libvir-list/2007-December/msg00326.html
code
Daniel P. Berrange wrote:
> On Mon, Dec 17, 2007 at 04:38:54PM +0000, Richard W.M. Jones wrote:
>> Daniel P. Berrange wrote:
>>> The libvirt SSH tunnelling support requires 'nc' to operate. The libvirt RPM does not, however, have any dependency on 'nc'. So by default it is pure luck whether you can use SSH tunnelling after installing the libvirt package & starting the daemon. Even though we don't technically need it on the client end, I figure nc is so small we may as well add a dep to the main libvirt RPM. This ensures nc is present anywhere the daemon is.
>> It's an obvious +1 for this RPM dependency. I wonder if we should also check at configure time for the version of nc in Debian which doesn't have the '-U' option? Even though someone might compile on Debian but use the resulting client to connect to a Red Hat system ...
> The original plan was to bundle a 'nc' replacement ('libvirtd-cat') for people to run on the remote system. Does debian have 'socat' by chance?
>   # socat stdio unix-connect:/var/lib/xend/xend-socket
>   GET /
>   HTTP/1.1 500 Internal server error
>   Content-length: 0
>   Expires: -1
>   Content-type: application/sxp
>   Pragma: no-cache
>   Cache-control: no-cache
> If so, we could install a 'nc' shell script which called to socat to emulate it on Debian?

Yes it does: http://packages.debian.org/socat and so does Fedora (but not RHEL...). Should we switch entirely to using socat? Another option would be to make it configurable and/or trying to work out a runtime way to detect the presence of each.

Rich.

--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom. Registered in England and Wales under Company Registration No. 03798903
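To make the wrapper idea discussed in the thread concrete, here is a rough, untested sketch of what an nc-emulating shell script delegating to socat might look like. The script name, the fallback behaviour, and the assumption that libvirt only needs the "nc -U <socket>" form are illustrative; none of this is taken from the actual libvirt sources.

#!/bin/sh
# Hypothetical wrapper (e.g. installed as libvirt-nc): emulate "nc -U <socket>"
# on systems whose nc lacks the -U option, by handing the Unix-socket
# connection to socat. Assumes socat is installed.
if [ "$1" = "-U" ] && [ -n "$2" ]; then
    exec socat stdio "unix-connect:$2"
fi
# Fall back to the system nc for any other invocation.
exec nc "$@"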
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00272.warc.gz
CC-MAIN-2018-26
1,803
5
https://docs.raima.com/rdm/15_2/ug/ref/cpp-exception_8h.html
code
Header for C++ exceptions (cpp-exception.h).
the rdm_exception class - This class implements the exception thrown by the RDM C++ API.
The RDM C++ Namespace.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710534.53/warc/CC-MAIN-20221128171516-20221128201516-00697.warc.gz
CC-MAIN-2022-49
297
6
https://www.pcgamingwiki.com/wiki/User_talk:Deton24
code
Hey, so I was thinking of cleaning up Rayman 2's page soon, just like I did with Tonic Trouble. The biggest problem with that page is that there are a lot of fixes on it that vary in quality, and many of them might have to be deleted. Some fixes have poor grammar that makes them hard for me to understand, some use rather outdated methods, and a lot of them have different versions for DirectX 6 and Glide 2, but the recommended way to play Rayman 2 is with the Glide API due to better compatibility and more graphical settings. Also thanks for finding my Tonic Trouble page cleanups to be useful. :D

Cleaning up Rayman 2
Well. So it was you who added the cleanup note. I see it this way - whatever you don't understand, feel free to ask, and we will rewrite it. Whatever you find ungrammatical, feel free to write about. The majority of the fixes I do understand, or they were written/edited personally by me. I'll explain everything to you, and it will be rewritten. I'm coming to the conclusion that all those fixes are still needed in some cases, and despite nGlide being the best, we can only write about that, without deleting the other information. That's among other things because GXSetup sometimes causes significant problems when changing the API. As for consolidating fixes, I'm looking forward to hearing how you see that resolved. Currently I don't see anything wrong with the latest workarounds written for game errors (since the cleanup notice appeared). They are necessary. Still, they can be rewritten in some other form you propose.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00638.warc.gz
CC-MAIN-2022-33
1,503
10
https://codedump.io/share/wpgTSwcE0DAn/1/create-windows-installer-for-java-programs
code
I'm a Java beginner. I already created a simple GUI application that displays a "hello world" label. But how can I create an installer from .java or .jar for Windows? Let's say that I have created a useful application and want to share it with my friends so they can install it on their PC without needing to know what the JRE is or how to download it.

Deploy the app from a web site using Java Web Start. Ensure the user has the minimum Java version using deployJava.js (linked from the JWS info page).
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860776.63/warc/CC-MAIN-20180618183714-20180618203714-00058.warc.gz
CC-MAIN-2018-26
490
7
https://tin6150.github.io/psg/emcCelerra.html
code
EMC Celerra 101
Celerra is the NAS offering from EMC. The control station is the management station where all admin commands are issued:
https://celerra-cs0.myco.com/ # web gui URL. Most features available there, including a console.
ssh celerra-cs0.myco.com # ssh (or rsh, telnet) in for CLI access
VDM (vdm2) / DM (server_2) (AVM stripe, volume, etc) storage pool (nas_pool)
Export can export a subdirectory within a File System. All FS are native Unix FS. CIFS features are added thru Samba (and other EMC add-ons?). CIFS shares are recommended thru a VDM, for easier migration, etc. NFS shares go thru a normal DM (server_X). A physical DM can mount/export an FS already shared by a VDM, but a VDM can't access the "parent" export done by a DM. VDM mounts are accessible by the underlying DM via /root_vdm_N. Quota can be on a tree (directory), per user and/or group.
Commands are to be issued thru the "control station" (ssh) (or web gui (Celerra Manager) or Windows MMC SnapIn (Celerra Management)). Most commands follow a common form; typical options can be abbreviated, albeit not listed in command usage:
-l = -list
-c = -create
-n = -name
-P = -Protocol
nas_halt # orderly shutdown of the whole NS80 integrated. # issue command from control station.
IMHO Admin Notes
Celerra sucks as it compares to the NetApp. If you have to manage one of these suckers, I am sorry for you (I am very sorry for myself too). I am so ready to convert my NS-80 integrated into a CX380 and chuck all the Data Movers that create the NAS head. There are lots of gotchas. More often than not, it will bite you in your ass. Just be very careful, and know that when you most need to change some option, count on it needing a reboot! The "I am sorry" quote came from a storage architect. One of my former bosses used to be a big advocate of EMC Celerra, but after having to plan multiple outages to fix things (which NetApp wouldn't have needed), he became a Celerra hater. Comments apply to DART 5.5 and 5.6 (circa 2008, 2009).
Some good stuff, but only marginally:
- Windows files are stored as NFS, plus some hacked-on side addition for metadata. This means from the get-go you need to decide how to store the userid and gid. UserMapper is a very different beast than the usermap.cfg used in NetApp.
- Quota is a nightmare. Policy change is impossible. Turning it off requires removing all files on the path.
- Web GUI is heavy Java, slow and clunky. And if you have the wrong Java on your laptop, well, good luck!
- CLI is very unforgiving in specification of parameters and sequences.
- The nas_pool command shows how much space is available, but gives no hint of the virtual provisioning limit (NetApp may have the same problem though).
- CheckPoint is more powerful than NetApp's Snapshot, but it requires a bit more setup. Arguably it does not hog mainstream production file system space due to snapshots, and they can be deleted individually, so it is worth all the extra work it brings. :-)
Below is a sample config for a brand new setup from scratch.
The general flow is: - Setup network connectivity, EtherChannel, etc - Define Active/Standby server config - Define basic network servers such as DNS, NIS, NTP - Create Virtual CIFS server, join them to Windows Domain - Create a storage pool for use with AVM - Create file systems - Mount file systems on DM/VDM, export/share them # Network configurations server_sysconfig server_2 -pci cge0 -o "speed=auto,duplex=auto" server_sysconfig server_2 -pci cge1 -o "speed=auto,duplex=auto" # Cisco EtherChannel (PortChannel) server_sysconfig server_2 -virtual -name TRK0 -create trk -option "device=cge0,cge1" server_sysconfig server_3 -virtual -name TRK0 -create trk -option "device=cge0,cge1" server_ifconfig server_2 -c -D TRK0 -n TRK0 -p IP 10.10.91.107 255.255.255.0 10.10.91.255 # ip, netmask, broadcast # Create default routes server_route server_2 -add default 10.10.91.1 # Configure standby server server_standby server_2 -create mover=server_5 -policy auto # DNS, NIS, NTP setup server_dns server_2 oak.net 10.10.91.47,184.108.40.206 server_nis server_2 oak.net 10.10.89.19,10.10.28.145 server_date server_2 timesvc start ntp 10.10.91.10 server_cifs ALL -add security=NT # Start CIFS services server_setup server_2 -P cifs -o start #Create Primary VDMs and VDM file system in one step. nas_server -name VDM2 -type vdm -create server_2 -setstate loaded #Define the CIFS environment on the VDM server_cifs VDM2 -add compname=winsvrname,domain=oak.net,interface=TRK0,wins=220.127.116.11:18.104.22.168 server_cifs VDM2 -J compname=vdm2,domain=oak.net,admin=hotin,ou="ou=Computers:ou=EMC Celerra" -option reuse # ou is default location where object will be added to AD tree (read bottomm to top) # reuse option allows AD domain admin to pre-create computer account in AD, then join it from a reg user (pre-granted) # the ou definition is quite important, it need to be specified even when # "reusing" an object, and the admin account used much be able to write to # that part of the AD tree defined by the ou. # EMC seems to need the OU to be defined in reverse order, # from the bottom of the LDAP tree, separated by colon, working upward. # When in doubt, use the full domain account priviledges. server_cifs VDM2 -J compname=VDM2,domain=fr.gap.net,admin=engineer,ou="ou=Servers:Resources:FCUK" server_cifs VDM2 -J compname=uk-r66,domain=uk.gap.net,admin=installer,ou="ou=Servers:Resources:ENUK" server_cifs VDM2 -J compname=uk-r66,domain=uk.gap.net,admin=administrator,ou="cn=Servers:cn=Resources:ou=FCUK" server_cifs VDM2 -J compname=server2,domain=uk.gap.net,admin=engins,ou="cn=Servers:cn=Resources:ou=ZJCN" exact username and ou path depends on your AD tree design. in test Windom eng/ins acc don't cut it, cuz need an admin user account with (essentially) full Windom priv # there is option to reset password if account password has changed but want to use same credential/object again... best to resetserverpasswd other troubleshooting commands: ... server_kerberos -keytab ... 
server_cifssupport VDM2 -cred -name WinUsername -domain winDom # use test domain user credentials server_cifssupport VDM2 -cred -name Installer -domain winDom # use prod domain user credentials server_viruschk server_4 # check to see if CAVA is working for a specific data mover # Confirm d7 and d8 are the smaller LUNs on RG0 nas_pool -create -name clar_r5_unused -description "RG0 LUNs" -volumes d7,d8 # FS creation using AVM (Automatic Volume Management), which use pre-defined pools: # archive pool = ata drives # performance pool = fc drives nas_fs -name cifs1 -create size=80G pool=clar_archive server_mountpoint VDM2 -c /cifs1 # mkdir server_mount VDM2 cifs1 /cifs1 # mount (fs given a name instead of traditional dev path) server_export VDM2 -name cifs1 /cifs1 # share, on VDM, automatically CIFS protocol ## Mount by VDM is accessible from a physical DM as /root_vdm_N (but N is not an obvious number) ## If FS export by NFS first, using DM /mountPoint as path, ## then VDM won't be able to access that FS, and CIFS sharing would be limited to actual physical server nas_fs -name nfshome -create size=20G pool=clar_r5_performance server_mountpoint server_4 -c /nfshome server_mount server_4 nfshome /nfshome server_export server_4 -Protocol nfs -option root=10.10.91.44 /nfshome nas_fs -name MixedModeFS -create size=10G pool=clar_r5_performance server_mountpoint VDM4 -c /MixedModeFS server_mount VDM4 MixedModeFS /MixedModeFS server_export VDM4 -name MixedModeFS /MixedModeFS server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_6/MixedModeFS ## Due to VDM sharing the FS, the mount path used by Physical DM (NFS) need to account for the /root_vdm_X prefix See additional notes in Config Approach below. Make decision whether to use USERMAPPER (okay in CIFS only world, but if there is any UNIX, most likely NO). Decide on Quotas policy Plan for Snapshots... An IP address can be used by 1 NFS server and 1 CIFS server. server_ifconfig -D cge0 -n cge0-1 can be done for the DM; cge0-1 can still be the interface for CIFS in VDM. Alternatively, the DM can have other IP (eg cge0-2) if it is desired to match the IP/hostname of other CIFS/VDM. Export FS thru VDM first, then NFS export use the /root_vdm_N/mountPoint path. Use VDM instead of DM (server_2) for CIFS server. A VDM is really just a file system. Thus, it can be copied/replicated. Because windows group and many other system data is not stored at the underlaying Unix FS, there was a need to easily backup/migrate CIFS server. For multi-protocol, it is best to have 1 VDM to provide CIFS access, and NFS will ride on the Physical DM. CAVA complication: The Antivirus scanning feature must be connected to a physical CIFS server, not to a VDM. This is because it is 1 CAVA for the whole DM, not multiple instance for multiple VDM that may exist on a DM. Global CIFS share is also required. May still want to just use physical DM with limited windows user/group config, as that may not readily migrate or backup. Overall, still think that there is a need of 2 IP per DM. Maybe VDM and NFS DM have same IP so that it can have same hostname. But the Global CIFS share will ride on a Physical DM with a separate IP that user don't need to know. Finally, perhaps scrap the idea of VDM, but then one may pay dearly in replication/backup... Create a Server * Create a NFS server - Really just ensuring a DM (eg server_2) is acting as primary, and - Create logical Network interface (server_ifconfig -c -n cge0-1 ...) 
(DM always exist, but if it is doing CIFS thru VDM only, then it has no IP and thus can't do NFS export). * Create Physical CIFS sesrver (server_setup server_2 -P cifs ...) VDM to host CIFS server (nas_server -name VDM2 -type vdm -create server_2 -setstate loaded) + Start CIFS service (server_setup server_2 -P cifs -o start) + Join CIFS server to domain (server_cifs VDM2 -J ...) Create FS and Share Note that for server creation, DM for NFS is created first, then VDM for CIFS. - Find space to host the FS (nas_pool for AVM, nas_disk for masoquistic MVM) - Create the FS (nas_fs -n FSNAME -c ...) - Mount FS in VDM, then DM (server_mountpoint -c, server_mount) - Share it on windows via VDM (server_export -P cifs VDM2 -n FSNAME /FsMount) - Export the share "via the vdm path" (server_export -o root=... /root_vdm_N/FsMount) But for FS sharing, it is first mounted/shared on VDM (CIFS), then DM (NFS). This is because VDM mount will dictate the path used by the DM as /root_vdm_N. It is kinda backward, almost like lower level DM need to go thru the higher level VDM, blame in on how the FS mount path ended up... File System, Mounts, Exports nas_fs -n FSNAME -create size=800G pool=clar_r5_performance # create fs nas_fs -d FSNAME # delete fs nas_fs size FSNAME # determine size nas_fs -list # list all FS, including private root_* fs used by DM and VDM server_mount server_2 # show mounted FS for DM2 server_mount VDM1 # show mounted FS for VDM1 server_mount ALL # show mounted FS on all servers server_mountpoint VDM1 -c /FSName # create mountpoint (really mkdir on VDM1) server_mount VDM1 FSNAME /FSName # mount the named FS at the defined mount point/path. # FSNAME is name of the file system, traditionally a disk/device in Unix # /FSName is the mount point, can be different than the name of the FS. server_mount server_2 -o accesspolicy=UNIX FSNAME /FSName # Other Access Policy (training book ch11-p15) # NT (both unix and windows access check NTFS ACL) # UNIX (both unix and windows access check NFS permission bits) # NATIVE (default, unix and nt perm kept independent, careful with security implication! Ownership is only maintained once, Take Ownership in windows will change file UID as viewed from Unix.) # SECURE (check ACL on both Unix and Win before granting access) # MIXED - Both NFS and CIFS client rights checked against ACL; Only a single set of security attributes maintained # MIXED_COMPAT - MIXED with compatible features NetApp Mixed Mode is like EMC Native. Any sort of mixed mode is likely asking for problem. Stict to either only NT or only Unix is the best bet. server_export ALL # show all NFS export and CIFS share, vdm* and server_* # this is really like "looking at /etc/exports" and # does not indicate actual live exports. # if FS is unmountable when DM booted up, server_export would # still show the export even when it can't possibly exporting it # The entries are stored, so after FS is online, can just export w/ FS name, # all other params will be looked up from "/etc/exports" server_export server_4 -all # equivalent to "exportfs -all" on server_4. # no way to do so for all DM at the same time. 
server_export VDM1 -name FSNAME /FSName server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_6/FSName ## Due to VDM sharing the FS, the mount path used by Physical DM (NFS) need to account for the /root_vdm_X prefix (1) server_export server_4 -Protocol nfs -option root=host1:host2,rw=host1,host2 /myvol (2) server_export server_4 -Protocol nfs -option rw=host3 /myvol (3) server_export server_4 -Protocol nfs -option anon=0 /myvol # (1) export myvol as rw to host1 and host2, giving them root access. # subsequently add a new host to rw list. # Celerra just append this whole "rw=host3" thing in there, so that the list end up having multiple rw= list. # Hopefully Celerra add them all up together. # (2) Alternatively, unexport and reexport with the updated final list. # (3) The last export add mapping of anonymous user to map to 0 (root). not recommended, but some crazy app need it some time. # there doesn't seems to be any root squash. root= list is machine that is granted root access # all other are squashed? The access= clause on Celerra is likely what one need to use in place of the traditional rw= list. ## Celerra require access to be assigned, which effectively limit which host can mount. ## the read/write list is not effective (I don't know what it is really good for) ## access= (open to all by default), and any host that can mount can write to the FS, ## even those not listed in rw=... ## (file system level NFS ACL still control who have write, but UID in NFS can easily be faked by client) ## In summary: for IP-based access limitation to Celerra, access= is needed. ## (can probably omit rw=) ## rw= is the correct settings as per man page on the control station. ## The PDF paints a different pictures though. # NFS share is default if not specified # On VDM, export is only for CIFS protocol # NFS exports are stored in some file, server_export VDM1 -name ShareName\$ # share name with $ sign at end for hidden need to be escaped server_export VDM1 -unexport -p -name ShareName # -p for permanent (-unexport = -u) server_umount VDM1 -p /FSName # -p = permanent, if omitted, mount point remains # (marked with "unmounted" when listed by server_mount ALL) # FS can't be mounted elsewhere, server cannot be deleted, etc! # it really is rmdir on VDM1 Advance FS cmd nas_fs -xtend FSNAME size=10G ## ie ADD 10G to existing FS # extend/enlarge existing file system. # size is the NET NEW ADDITION tagged on to an existing FS, # and NOT the final size of the fs that is desired. # (more intuitive if use the +10G nomenclature, but it is EMC after all :-/ nas_fs -modify FSNAME -auto_extend yes -vp yes -max_size 1T # modify FSNAM # -auto_extend = enlarge automatically. DEF=no # -vp yes = use virtual provisioning If no, user see actual size of FS, but it can still grow on demand. # -max_size = when FS will stop growing automatically, specify in G, T, etc. Defualt to 16T, which is largest FS supported by DART 5.5 # -hwm = high water mark in %, when FS will auto enlarge Default is 90 nas_fs -n FSNAME -create size=100G pool=clarata_archive -auto_extend yes -max_size 1000G -vp yes # create a new File System # start with 100 GB, auto growth to 1 TB # use virtual provisioning, # so nfs client df will report 1 TB when in fact FS could be smaller. 
# server_df will report actual size # nas_fs -info -size FSNAME will report current and max allowed size # (but need to dig thru the text) Server DM, VDM nas_server -list # list physical server (Data Mover, DM) nas_server -list -all # include Virtual Data Mover (VDM) server_sysconfig server_2 -pci nas_server -info server_2 nas_server -v -l # list vdm nas_server -v vdm1 -move server_3 # move vdm1 to DM3 # disruptive, IP changed to the logica IP on destination server # logical interface (cge0-1) need to exist on desitnation server (with diff IP) server_setup server_3 -P cifs -o start # create CIFS server on DM3, start it # req DM3 to be active, not standby (type 4) server_cifs serve_2 -U compname=vdm2,domain=oak.net,admin=administrator # unjoin CIFS server from domain server_setup server_2 -P cifs -o delete # delete the cifs server nas_server -d vdm1 # delete vdm (and all the CIFS server and user/group info contained in it) Storage Pool, Volume, Disk, Size AVM = Automatic Volume Management MVM = Manual Volume Management MVM is very tedious, and require lot of understanding of underlaying infrastructure and disk striping and concatenation. If not done properly, can create performance imbalance and degradation. Not really worth the headache. Use AVM, and all FS creation can be done via nas_fs pool=... nas_pool -size -all # find size of space of all hd managed by AVM potential_mb = space that is avail on the raid group but not allocated to the pool yet?? nas_pool -info -all # find which FS is defined on the storage pool server_df # df, only reports in kb server_df ALL # list all *MOUNTED* FS and check points sizes # size is actual size of FS, NOT virtual provisioned size # (nfs client will see the virtual provisioned size) server_df ALL | egrep -v ckpt\|root_vdm # get rid of duplicates due to VDM/server_x mount for CIFS+NFS access nas_fs -info size -all # give size of fs, but long output rather than table format, hard to use. nas_fs -info -size -all | egrep name\|auto_ext\|size # somewhat usable space and virtual provisioning info # but too many "junk" fs like root_fs, ckpt, etc nas_volume -list # list disk volume, seldom used if using AVM. /nas/sbin/rootnas_fs -info root_fs_vdm_vdm1 | grep _server # find which DM host a VDM Usermapper in EMC is substantially different than in the NetApp. RTFM! It is a program that generate UID for new windows user that it has never seen before. Files are stored in Unix style by the DM, thus SID need to have a translation DB. Usermapper provides this. A single Usermapper is used for the entire cabinet (server_2, _3, _4, VDM2, VDM3, etc) to provide consistency. If you are a Windows-ONLY shop, with only 1 Celerra, this maybe okay. But if there is any Unix, this is likely going to be a bad solution. If user get Unix UID, then the same user accessing files on windows or Unix is viewed as two different user, as UID from NIS will be different than UID created by usermapper! UID lookup sequence: When a windows user hit the system (even for read access), Celerra need to find a UID for the user. Technically, it consults NIS and/or local passwd file first, failing that, it will dig in UserMapper. Failing that, it will generate a new UID as per UserMapper config. - SecMap Persistent Cache - Global Data Mover SID Cache (seldom pose any problem) - local passwd/group file - Active Directory Mapping Utility (schema extension to AD for EMC use) - UserMapper database Howwever, to speed queries, a "cache" is used first all the time. The cache is called SecMap. 
However, it is really a binary database, and it is persisten across reboot. Thus, once a user has hit the Celerra, it will have an entry in the SecMap. There is no time out or reboot that will rid the user from SecMap. Any changes to NIS and/or UserMapper won't be effective until the SecMap entry is manually deleted. Overall, EMC admit this too, UserMapper should not be used in heterogeneous Windows/Unix environment. If UID cannot be guaranteed from NIS (or LDAP) then 3rd party tool from Centrify should be considered. server_usermapper server_2 -enable # enable usermapper service server_usermapper server_2 -disable # even with usermapper disabled, and passwd file in /.etc/passwd # somehow windows user file creation get some strange GID of 32770 (albeit UID is fine). # There is a /.etc/gid_map file, but it is not a text file, not sure what is in it. server_usermapper server_2 -Export -u passwd.txt # dump out usermapper db info for USER, storing it in .txt file server_usermapper server_2 -E -g group.txt # dump out usermapper db info for GROUP, storing it in file # usermapper database should be back up periodically! server_usermapper server_2 -remove -all # remove usermapper database # Careful, file owner will change in subsequent access!! There is no way to "edit" a single user, say to modify its UID. Only choice is to Export the database, edit that file, then re-Import it back. # as of Celerra version 5.5.32-4 (2008.06) When multiple Celerra exist, UserMapper should be synchronized (one become primary, rest secondary). server_usermapper ALL -enable primary=IP. Note that even when sync is setup, no entry will be populated on secondary until a user hit the Celerra with request. Ditto for the SecMap "cache" DB. p28 of configuring Celerra User Mapping PDF: Once you have NIS configured, the Data Mover automatically checks NIS for a user and group name. By default, it checks for a username in the form username.domain and a group name in the form groupname.domain. If you have added usernames and groupnames to NIS without a domain association, you can set the cifs resolver parameter so the Data Mover looks for the names without appending the domain. server_param server_2 -facility cifs -info resolver server_param server_2 -facility cifs -modify resolver -value 1 repeat to all DM, but not applicable to VDM Setting the above will allow CIFS username lookup from NIS to match based on username, without the .domain suffix. Use it! (Haven't seen a situation where this is bad) server_param server_2 -f cifs -m acl.useUnixGid -v 1 Repeat for for all DM, but not for VDM. This setting affect only files created on windows. UID is mapped by usermapper. GID of the file will by default map to whatever GID that Domain User maps to. Setting this setting, unix primary group of the user is looked up and used as the GID of any files created from windows. Windows group permission settings retains whatever config is on windows (eg inherit from parent folder). Unlike UserMapper, which is human readable database (and authority db) which exist one per NS80 cabinet (or sync b/w multiple cabinet), the SecMap database exist one per CIFS server (whether it is physcial DM or VDM). server_cifssupport VDM2 -secmap -list # list SecMap entries server_cifssupport ALL -secmap -list # list SecMap entries on all svr, DM and VDM included. 
server_cifssupport VDM2 -secmap -delete -sid S-1-5-15-47af2515-307cfd67-28a68b82-4aa3e server_cifssupport ALL -secmap -delete -sid S-1-5-15-47af2515-307cfd67-28a68b82-4aa3e # remove entry of a given SID (user) from the cache # delete would need to do for each CIFS server. # Hopefully, this will trick EMC to query NIS for the UID instead of using one from UserMapper. server_cifssupport VDM2 -secmap -create -name USERNAME -domain AD-DOM # for 2nd usermapper, fetch the entry of the given user from primary usermapper db. nas_version # version of Celerra # older version only combatible with older JRE (eg 1.4.2 on 5.5.27 or older) server_version ALL # show actual version running on each DM server_log server_2 # read log file of server_2 A number of files are stored in etc folder. server_file server_2 -get/-put ... eg: server_file server_3 -get passwd ./server_3.passwd.txt would retrieve the passwd file local to that data mover. Each File System have a /.etc dir. It is best practice to create a subdirectory (QTree) below the root of the FS and then export this dir instead. On the control station, there are config files stored in: Server parameters (most of which require reboot to take effect), are stored in: /nas/site/slot_param for the whole cabinet (all server_* and vdm) /nas/server/slot_X/param (for each DM X) Windows MMC Plug in thing... Snapshots are known as CheckPoint in EMC speak. Requires a SaveVol to keep the "copy on write" date. It is created automatically when first checkpoint is created, and by default grows automatically (at 90% high water mark). But it cannot be strunk. When the last checkpoint is deleted, the SaveVol is removed. GUI is the only sane way to edit it. Has abilities to create automated schedules for hourly, daily, weekly, monthly checkpoints. Backup and Restore, Disaster Recovery For NDMP backup, each Data Mover should be fiber connected to a tape drive (dedicated). Once zoning is in place, need to tell data mover to scan for the tapes. Change to use Filesize policy during initial setup as windows does not support block policy (which is Celerra default). Edit the /nas/site/slot_param on the control station (what happen to standby control station?) add the following entry: param quota policy=filesize Since this is a param change, retarded EMC requies a reboot: server_cpu server_2 -r now Repeat for additional DM that may exist on the same cabinet. Two "flavor" of quotas: Tree Quota, and User/Group quota. Both are per FS. Tree Quoata requires creating directory (like NetApp qtree, but at any level in the FS). There is no turning off tree quota, it can only be removed when all files in the tree is deleted. User/Group quota can be created per FS. Enableling require freezing of the FS for it to catalog/count the file size before it is available again! Disabling the quota has the same effect. User/Group quota default have 0 limit, which is monitoring only, but does not actually have hard quota or enforce anything. Each File System still need to have quota enabled... (?) Default behaviour is to deny when quota is exceeded. This "Deny Disk Space" can be changed (on the fly w/o reboot?) GUI: File System Quotas, Settings. CLI: nas_quotas -user -edit config -fs FSNAME ++ repeat for Tree Quota ?? But by default, quota limit is set to 0, which is to say it is only doing tracking, so may not need to change behaviour to allow. Celerra manager is easiest to use. GUI allows showing all QTree for all FS, but CLI don't have this capability. Sucks eh? 
:( EMC recommends turning on FileSystem quota whenever FS is created. But nas_quotas -on -tree ... -path / is denied, how to do this??!! # Create Tree Quota (NA QTree). Should do this for each of the subdir in the FS that is directly exported. nas_quotas -on -tree -fs CompChemHome -path /qtree # create qtree on a fs nas_quotas -off -tree -fs CompChemHome -path /qtree # destroy qtree on a fs (path has to be empty) # can remove qtree by removing dir on the FS from Unix host, seems to works fine. nas_quotas -report -tree -fs CompChemHome # display qtree quota usage # per user quota, not too important other than Home dir... # (and only if user home dir is not a qtree, useful in /home/grp/username FS tree) nas_quotas -on -user -fs CompChemHome # track user usage on whole FS # def limit is 0 = tracking only nas_quotas -report -user -fs CompChemHome # display users space usage on whole FS From Lab Exercise nas_quotas -user -on -fs FSNAME # enable user quota on FsNAMe. Disruptive. (ch12, p22) nas_quotas -group -on -mover server_2 # enable group quota on whole DM . Disruptive. nas_quotas -both -off -mover server_2 # disable both group and user quota at the same time. ++ disruption... ??? really? just slow down? or FS really unavailable?? ch 12, p22. nas_quotas -report -user -fs FSNAME nas_quotas -report -user -mover server_2 nas_quotas -edit -config -fs FsNAME # Define default quota for a FS. nas_quotas -list -tree -fs FSNAME # list quota tree on the spefified FS. nas_quotas -edit -user -fs FSNAME user1 user2 ... # edit quota (vi interface) nas_quotas -user -edit -fs FSNAME -block 104 -inode 100 user1 # no vi! nas_quotas -u -e mover server_2 501 # user quota, edit, for uid 501, whole DM nas_quota -g -e -fs FSNAME 10 # group quota, edit, for gid 10, on a FS only. nas_quotas -user -clear -fs FSNAME # clear quota: reset to 0, turn quota off. nas_quotas -on -fs FSNAME -path /tree1 # create qtree on FS (for user???) ++ nas_quotas -on -fs FSNAME -path /subdir/tree2 # qtree can be a lower level dir nas_quotas -off -fs FSNAME -path /tree1 # disable user quota (why user?) # does it req dir to be empty?? nas_quotas -e -fs FSNAME -path /tree1 user_id # -e, -edit user quota nas_quotas -r -fs FSNAME -path /tree1 # -r = -report nas_quotas -t -on -fs FSNAME -path /tree3 # -t = tree quota, this eg turns it on on # if no -t defined, it is for the user?? nas_quotas -t -list -fs FSNAME # list tree quota To turn off Tree Quotas: - Path MUST BE EMPTY !!!!! ie, delete all the files, or move them out. can one ask for a harder way of turning something off??!! Only alternative is to set quota value to 0 so it becomes tracking only, but not fully off. Quota Policy change: - Quota check of block size (default) vs file size (windows only support this). - Exceed quota :: deny disk space or allow to continue. The policy need to be established from the getgo. They can't really be changed as: - Param change require reboot - All quotas need to be turned OFF (which requires path to be empty). Way to go EMC! NetApp is much less draconian in such change. Probably best to just not use quota at all on EMC! If everything is set to 0 and just use for tracking, maybe okay. God forbid if you change your mind! server_cifssupport VDM2 -cred -name WinUsername -domain winDom # test domain user credentials server_cifs server_2 # if CIFS server is Unjoined from AD, it will state it next to the name in the listing server_cifs VDM2 # probbly should be VDM which is part of CIFS, not physical DM server_cifs VDM2 -Unjoin ... 
# to remove the object from AD tree server_cifs VDM2 -J compname=vdm2,domain=oak.net,admin=hotin,ou="ou=Computers:ou=EMC Celerra" -option reuse # note that by default the join will create a new "sub folder" called "EMC Celerra" in the tree, unless OU is overwritten server_cifs server_2 -Join compname=dm112-cge0,domain=nasdocs.emc.com,admin=administrator,ou="ou=Computers:ou=Engineering" ... server_kerberos -keytab ... Other Seldom Changed Config server_cpu server_2 -r now # reboot DM2 (no fail over to standby will happen) server_devconfig server_2 -probe -scsi all # scan for new scsi hw, eg tape drive for NDMP server_devconfig ALL -list -scsi -nondisks # display non disk items, eg tape drive /nas/sbin/server_tcpdump server_3 -start TRK0 -w /customer_dm3_fs/tcpdump.cap # start tcpdump, # file written on data mover, not control station! # /customer_dm3_fs is a file system exported by server_3 # which can be accessed from control station via path of /nas/quota/slot_3/customer_dm3_fs /nas/sbin/server_tcpdump server_3 -stop TRK0 /nas/sbin/server_tcpdump server_3 -display # /nas/sbin/server_tcpdump maybe a sym link to /nas/bin/server_mgr /nas/quota/slot_2/ ... # has access to all mounted FS on server_2 # so ESRS folks have easy access to all the data!! # "typically thing needed by support # file saved to /nas/var/emcsupport/...zip # ftp the zip file to emc.support.com/incoming/caseNumber # ftp from control station may need to use IP of the remote site. server_user ?? ... add # add user into DM's /etc/passwd, eg use for NDMP Network interface config Physical network doesn't get an IP address (for Celera external perspective) All network config (IP, trunk, route, dns/nis/ntp server) applies to DM, not VDM. # define local network: ie assign IP server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158 # ifconfig of serv2 create device logical name protocol svr ip netmask broadcast server_ifconfig server_2 -a # "ifconfig -a", has mac of trunk (which is what switch see) server_ifconfig server_2 cge0-2 down ?? # ifconfig down for cge0-2 on server_2 server_ifconfig server_2 -d cge0-2 # delete logical interfaces (ie IP associated with a NIC). server_ping server_2 ip-to-ping # run ping from server_2 server_route server_2 a default 10.10.20.1 # route add default 10.10.20.1 on DM2 server_dns server_2 corp.hmarine.com ip-of-dns-svr # define a DNS server to use. It is per DM server_dns server_2 -d corp.hmarine.com # delete DNS server settings server_nis server_2 hmarine.com ip-of-nis-svr # define NIS server, again, per DM. server_date server_2 timesvc start ntp 10.10.91.10 # set to use NTP server_date server_2 0803132059 # set serverdate format is YY DD MM HH MM sans space # good to use cron to set standby server clock once a day # as standby server can't get time from NTP. server_sysconfig server_2 -virtual # list virtual devices configured on live DM. server_sysconfig server_4 -v -i TRK0 # display nic in TRK0 server_sysconfig server_4 -pci cge0 # display tx and rx flowcontrol info server_sysconfig server_4 -pci cge4 -option "txflowctl=enable rxflowctl=enable" # to enable rx on cge0 # Flow Control is disabled by default. But Cisco has enable and desirable by default, # so it is best to enable them on the EMC. Performance seems more reliable/repeatable in this config. # flow control can be changed on the fly and it will not cause downtime (amazing for EMC!) If performance is still unpredictable, there is a FASTRTO option, but that requires reboot! 
server_netstat server_4 -s -p tcp # to check retransmitted packets (sign of over-subscription)
.server_config server_4 -v "bcm cge0 stat" # to check ringbuffer and other parameters
# also to see if the eth link is up or down (ie link LED on/off)
# this gets some of the info provided by ethtool
.server_config server_4 -v "bcm cge0 showmac" # show native and virtualized mac of the nic
server_sysconfig server_2 -pci cge0 -option "lb=ip" # lb = load balance mechanism for the EtherChannel.
# ip based load balancing is the default
# protocol defaults to lacp? per the man page the cisco side must support 802.3ad.
# but I thought Cisco defaults to their own protocol.
# skipping the "protocol=lacp" seems a safe bet
The .server_config command is undocumented, and EMC does not recommend its use. Not sure why, I hope it doesn't crash the data mover :-P

server_netstat server_x -i # interface statistics
server_sysconfig server_x -v # List virtual devices
server_sysconfig server_x -v -i vdevice_name # Informational stats on the virtual device
server_netstat server_x -s -a tcp # retransmissions
server_nfsstat server_x # NFS SRTs
server_nfsstat server_x -zero # reset NFS stats
# Rebooting the DMs will also reset all statistics.
server_nfs server_2 -stats
server_nfs server_2 -secnfs -user -list
.server_config server_x -v "printstats tcpstat"
.server_config server_x -v "printstats tcpstat reset"
.server_config server_x -v "printstats scsi full"
.server_config server_x -v "printstats scsi reset"
.server_config server_x -v "printstats filewrite"
.server_config server_x -v "printstats filewrite reset"
.server_config server_x -v "printstats fcp"
.server_config server_x -v "printstats fcp reset"

When server_2 fails over to server_3, DM3 assumes the role of server_2. A VDM that was running on DM2 will move over to DM3 also. All IP addresses of the DM and VDM are transferred, including the MAC addresses. Note that when moving a VDM from server_2 to server_3 outside of a fail over, the IP addresses are changed. This is because such a move is from one active DM to another. IPs are kept only when failing over from Active to Standby.
server_standby server_2 -c mover=server_3 -policy auto # assign server_3 as standby for server_2, using auto fail over policy (Lab 6 page 89)
server_standby server_2 -r mover # after a fail over, this command fails back to the original server
# a brief interruption is expected, Windows clients will typically reconnect automatically (MS Office may get an error on open files).

If using the integrated model, the only way to peek into the CX backend is to use the navicli command from the control station.
navicli -h spa getcontrol -busy # see how busy the backend CX service processor A is
# all navicli commands work from the control station even when
# it is an integrated model that doesn't present Navisphere to the outside world
# spa is typically 22.214.171.124
# spb is typically 126.96.36.199
# they are coded in the /etc/hosts file under APM... or CK... (shelf name)
./setup_clariion2 list config APM00074801759 # show a lot of CX backend config, such as raid group config, lun, etc
nas_storage -failback id=1 # if the CX backend has trespassed disks, fail them back to the original owning SP.

Pro-actively replacing a drive
# Drive 1_0_7 will be replaced by a hot spare (run as root):
# -h specifies the backend CX controller; the IP address is at the bottom of /etc/hosts on the control station.
# use of navicli instead of the secure one is okay as it is a private network with no outside connections
naviseccli -h 188.8.131.52 -user emc -password emc -scope 0 copytohotspare 1_0_7 -initiate
/nas/sbin/navicli -h 184.108.40.206 -user nasadmin -scope 0 copytohotspare 1_0_7 -initiate
# find out the status/progress of the copy over (run as root); a small polling sketch follows at the end of these notes
/nas/sbin/navicli -h 220.127.116.11 -user nasadmin -scope 0 getdisk 1_0_7 -state -rb

Sysadmins can create accounts for themselves in the /etc/passwd of the control station(s). Any user that has a login via ssh to the control station can issue the bulk of the commands to control the Celerra. The nasadmin account is the same kind of generic user account. (ie, don't join the control station to NIS/LDAP for general user login!!) There is a root user, with the password typically set to be the same as nasadmin. root is needed for some special commands in /nas/sbin, such as navicli to access the backend CX. All FS created on the Celerra can be accessed from the control station.
- EMC PowerLink
- EMC Lab access VDM2

DART 5.6 released around 2009.0618. Includes Data Dedup, but compression must also be enabled, which makes deflation CPU- and time-expensive; not usable at all for high performance storage.
DART 5.5 mainstream in 2007, 2008
psg101 sn50 tin6150
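The hot-spare status check above lends itself to a simple polling wrapper. The sketch below is only an illustration: it shells out to the exact getdisk command quoted in these notes, and the SP address, disk ID, and five-minute interval are placeholders to adjust. Like the underlying command, it would need to be run as root on the control station.

```python
# Hypothetical helper: repeatedly print the hot-spare rebuild status.
# It only wraps the documented navicli invocation; SP_ADDR and DISK are
# placeholders for your own backend SP IP (from /etc/hosts) and disk ID.
import subprocess
import time

SP_ADDR = "1.2.3.4"   # placeholder backend SP IP
DISK = "1_0_7"

while True:
    result = subprocess.run(
        ["/nas/sbin/navicli", "-h", SP_ADDR, "-user", "nasadmin",
         "-scope", "0", "getdisk", DISK, "-state", "-rb"],
        capture_output=True, text=True,
    )
    print(time.strftime("%H:%M:%S"), result.stdout.strip())
    time.sleep(300)   # check every 5 minutes; Ctrl-C to stop
```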
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00502.warc.gz
CC-MAIN-2021-25
38,070
567
https://cj.tsukuba.ac.jp/courses/queueing-theory-graduate-institute-of-electrical-engineering/
code
Queueing Theory
National Taiwan University
1. Introduction of Queueing Model and Review of Markov Chain
2. Simple Markovian Birth and Death Queueing Models (M/M/1, etc.)
3. Advanced Markovian Queueing Models
4. Jackson Queueing Networks
5. Models with General Arrival or Service Pattern (M/G/1, G/M/1)
6. Discrete-Time Queues and Applications in Networking
To provide the basic knowledge in queueing models and the analysis capability of queueing models in telecommunications, computers, and industrial engineering.
Midterm 45%, Final Exam 45%, Homework (including programming and simulations) 10%
Online Course Requirement / Site for Inquiry: Please inquire about the courses at the address below.
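As a small taste of topic 2 above, the steady-state M/M/1 metrics can be computed directly from the standard textbook formulas (utilization rho = lambda/mu, mean number in system L = rho/(1 - rho), mean time in system W = 1/(mu - lambda)). The sketch below is illustrative only and is not taken from the course materials; the example rates are arbitrary.

```python
# Standard M/M/1 steady-state metrics for arrival rate lam and service rate mu
# (requires lam < mu for stability). Illustrative sketch only.
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                # server utilization
    L = rho / (1 - rho)           # mean number of customers in the system
    W = 1 / (mu - lam)            # mean time in the system (Little's law: L = lam * W)
    Lq = rho ** 2 / (1 - rho)     # mean number waiting in the queue
    Wq = rho / (mu - lam)         # mean waiting time in the queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

print(mm1_metrics(lam=8.0, mu=10.0))   # e.g. 8 arrivals/sec served at 10/sec
```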
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710941.43/warc/CC-MAIN-20221203212026-20221204002026-00127.warc.gz
CC-MAIN-2022-49
695
7
https://wantedfornothing.com/works/ilec/
code
We helped ILEC develop a live streaming web application that works across desktops, mobiles, and tablets to create a seamless user experience. branding, UX, UI, web application
Wanted for Nothing consulted ILEC to help enhance their user experience for live streaming events that included conferences, meetings, and concerts. We helped to create a web application that was easy-to-use, quick to load, and also compatible across all devices, not just desktop. Through our initial research, we were able to create a layout that highlighted important details and made interacting with other live-viewers much more inviting and simple. Through our efforts, we ended up developing something that was able to connect people from around the world and share their ideas and thoughts in real-time. "this live streaming app took our production company to the next level! very happy with the work WFN did!"
We conducted A/B testing and decided on a darker color scheme as we wanted the dashboard to be easy on the eye and the focus to be on the live stream video. WFN’s challenge was to convey different information into one dashboard without it feeling cluttered or confusing. We wanted the event details and live chat to be easily accessible but to also not distract the viewer from the live stream. We also ensured that the general settings can be accessed on the same page without having to leave the stream. Viewers were also able to filter through comments by different segments, such as positions or departments. We developed a login and signup system that can be easily integrated and customized to suit ILEC’s clients. Logins and signups can be conditional to certain e-mails and allowed the user to select who they are in a drop-down menu. Our client requested for the web application to be fully functional across desktops, mobiles and tablets. This involved a lot of testing from UX/UI research, right through to development and execution. WFN was also able to solve the problem of users accessing different widgets, such as the chat and event details, without having to leave the live stream. We were able to implement this successfully across all screens, even on mobile when space was a large constraint.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945288.47/warc/CC-MAIN-20230324180032-20230324210032-00104.warc.gz
CC-MAIN-2023-14
2,639
14
https://lists.yoctoproject.org/g/yocto/message/55700
code
okay...I think I have a more interesting question now... In the package I am building I have some Fortran code that requires `libquadmath` I see that `gcc-runtime` provides the library but I need the library present in `recipe-sysroot/lib` when my `do_compile` runs Is there a way for me to do that? My current approach is to build my image, copy the libraries/includes to my recipe and `install` them in `recipe-sysroot` before `do_compile` This doesn't seem like the correct approach but I am not sure how else to do it at this point Any help would be greatly appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00161.warc.gz
CC-MAIN-2022-27
574
7
https://www.sitri.com/platform/industry-associations/msig/
code
MEMS & Sensors Industry Group (MSIG) is the trade association advancing MEMS and sensors across global markets. As the “go-to” resource for globally linking the MEMS and sensors supply chain to strategic markets, MIG helps companies in and around the MEMS and sensors industry to make meaningful business connections. Device manufacturers, software designers, materials and equipment suppliers, foundry partners, market analysts, and OEM integrators all plug into the MIG network to form alliances that will move their businesses forward; over 180 companies comprise the MEMS & Sensors Industry Group. SITRI is a proud member of MSIG and co-sponsors the MSIG Conference Asia event, held annually in Shanghai since 2014. A close partner and active member of MSIG, SITRI contributes to MSIG events throughout the year in sponsorships, speaker selection committees, speakers and content to help make MSIG continually relevant and a vital voice for the MEMS and Sensors industry. Click here for information on becoming an MSIG member.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00811.warc.gz
CC-MAIN-2023-50
1,034
3
https://www.coincommunity.com/forum/topic.asp?TOPIC_ID=393814
code
I got to looking through some more pocket change and I found this 1986 Penny. I thought maybe it was a DDR for a minute, but a quick search yielded no known varieties. I'm still not convinced that it's not a DDR, so I figured I'd best post it here and see what you guys think. Some of the doubling (like the top right corner of the memorial and on the T in UNITED) looks almost like damage as opposed to actual doubling, but I'm still too inexperienced to be able to make the call. Looks like a higher bounce of Machine Doubling. Just affected the tops of the devices. Why is this not a doubled die? The devices are the normal sizes, but ticked on the tops of the devices. This could be ejection doubling? (Another form of Machine Doubling, caused by ... You guessed it, the machine)
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038863420.65/warc/CC-MAIN-20210419015157-20210419045157-00199.warc.gz
CC-MAIN-2021-17
783
2
http://search.sys-con.com/node/2489283
code
|By PR Newswire|| |December 18, 2012 02:36 PM EST|| Dr. Stacy Childs, a Board Certified Steamboat Springs, CO physician specializing in Urology, has been selected for America's Top Doctors® for Cancer by Castle Connolly Medical Ltd. STEAMBOAT SPRINGS, Colo., Dec. 18, 2012 /PRNewswire-USNewswire/ -- Castle Connolly Medical Ltd., America's trusted source for identifying Top Doctors, has published its 8th annual edition of America's Top Doctors® for Cancer and Steamboat Springs's Stacy Childs, MD has been selected for inclusion. The 8th edition of America's Top Doctors for Cancer includes over 2,600 top cancer care physicians in the United States. Selected physicians, including Dr. Childs, are among the top 1% of cancer doctors in over 40 specialties and subspecialties for the care and treatment of cancer. Castle Connolly Top Doctors® for Cancer are selected each year by Castle Connolly Medical Ltd. after being nominated by peers in an online nomination process. Nominations are open to all board certified MDs and DOs and each year tens of thousands of physicians cast many tens of thousands of nominations. Nominated physicians are selected by the Castle Connolly physician-led research team based on criteria including medical education, training, hospital appointments, disciplinary histories and much more. About Stacy Childs: a short profile by and about the honoree: Dr. Childs has been in the practice of urology since 1977 and has twenty-eight years of clinical research experience. He has authored over seventy-five medical publications and was the editor-in-chief of a urology journal for eighteen years. He now serves on the editorial board of the journal Urology. He is a prostate cancer survivor, himself, and is available for consultations and second opinions. For more information on this Castle Connolly Top Doctor for Cancer, please visit Stacy Childs's profile on www.castleconnolly.com. Castle Connolly Medical Ltd.'s President and CEO Dr. John Connolly has this to say about Dr. Childs's recognition: "The 8th edition of America's Top Doctors for Cancer, published in November 2012, is the most comprehensive guide of its kind and is the result of contributions from doctors and medical experts from across the United States. Dr. Childs was one of more than 2,600 cancer doctors nominated by Board Certified peers and selected by our research team at Castle Connolly Medical Ltd. Selection for the guide is an impressive accomplishment worthy of recognition. The United States is home to so many talented and committed cancer care professionals, yet some stand out. My congratulations to Dr. Childs." To find out more or to contact Dr. Stacy Childs of Steamboat Springs, CO, please call call 970-871-9710, or visit Urologyclinicpc.com.. This press release was written by American Registry, LLC and Castle Connolly Medical Ltd., with approval by and/or contributions from Stacy Childs and was distributed by PR Newswire, a subsidiary of UBM plc. Castle Connolly Medical Ltd. identifies top doctors in America and provides consumers with detailed information about their education, training and special expertise in printed guides and online directories. It is important to note that doctors do not and cannot pay to be included in any Castle Connolly guide or online directory. Learn more at http://www.castleconnolly.com. American Registry, LLC, recognizes excellence in top businesses and professionals. For more information, search The Registry™ at http://www.americanregistry.com. 
SOURCE American Registry
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276131.38/warc/CC-MAIN-20160524002116-00215-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
13,927
55
https://www.tradingtechnologies.com/help/x-trader/automated-trading-windows/algo-variable-pane/
code
Algo Variable Pane You are viewing X_TRADER Version 7.17 and higher. For earlier versions, click here The Algo Variable pane is located in the center of the Algo Dashboard window and allows you to modify algo variables, create templates, and launch algos that you have selected in the Algo Explorer Pane. There are two types of variables displayed in the Algo Variables pane, algo specific and standard. Algo-specific variables are added within ADL® (Algo Design Lab) and can include such things as order quantity or instrument type. The common variables display below the algo specific variables and consist of the Algo Instance and Client Disconnect Action. Any value changes made to both specific and common variables can be saved as a template for future use. Tip: Algo variables can also be linked to an Excel spreadsheet. Allows you to do the following Starts the selected algo. Review the Audit Trail if an algo fails to start. Displays the variable name that was defined in ADL® (Algo Design Lab). Algo specific variables vary for each algo, and can include things such as quantity or instrument type. Order routing credentials can be specified for each Instrument Block. Algo common variables consist of the following: Displays a short description of the algo. When you select an Algo Definition or an Algo Template, any missing or invalid parameters are marked with a “!” icon. When you hover over the icon, a tooltip is displayed that explains why the parameter is invalid as shown in the following figure. Selecting Order Routing Credentials in the Algo Variable Pane The Algo Variable Pane allows you to select routing credentials for each Instrument Block used in an Algo Definition. An instrument variable must have a valid Broker (X_TRADER ASP), Order Gateway (for a non-Autospreader contract), or a Customer Default selected before an instance of the algorithm can be started. If enabled, the per-market customer order routing selection for an OTA overrides the routing selected in MD Trader or the Market Grid, as well as the customer selection for the Instrument Block. However, if per-market customer selection is enabled but you select routing credentials for the instrument block, this setting overrides the per-market setting (that is, you can toggle between customer routing selections). Note: Per-market routing for an Autospreader contract does not override the customer routing that was manually configured for that contract. For an OMA, customer routing cannot be changed for any working orders. However, if you change the customer routing from per-intrument to per-market, for example, any subsequent orders launched by the algo will be routed with the changed (e.g., per-market) customer selection. To select order routing credentials in the Algo Variable Pane for X_TRADER ASP: - Click the drop-down arrow in the instrument variable field to open the Instrument Explorer. - Select a market, product type, product, and Instrument (contract). - Click the Routing button to select a broker and/or customer account, as well as an order routing gateway. In a X_TRADER ASP environment, you must select a broker for each Instrument Block used by an algo. After selecting a customer, enter any other order routing or clearing account information (FFT2, FFT3, etc.) that may be required. If per-market customer selection is enabled in Global Properties, select a customer and account from the <Market> Routing field in the Common Variables section of the Algo Variable Pane. - Click OK. 
The following figure shows how to select routing credentials in the Algo Variable Pane in a X_TRADER ASP environment: For details about selecting an Order Gateway or Broker in MD Trader, refer to MD Trader - Order Entry.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705737.21/warc/CC-MAIN-20190120102853-20190120124853-00197.warc.gz
CC-MAIN-2019-04
3,735
25
https://www.geoportti.fi/online-workshop-stac-how-to-find-and-use-spatiotemporal-data-easily-3-6-2023/
code
This workshop is for new users of STAC – Spatio-Temporal Asset Catalog. STAC is a specification to describe geospatial datasets with temporal dimension. It drastically simplifies searching and downloading datasets. STAC includes only the metadata of datasets and links to the actual files. The data files themselves are usually stored in the cloud. STAC is most often used for remote sensing imagery and other raster data, but it can be used also for vector and point cloud data. STAC is especially suitable for time-series applications. CSC has recently opened Paituli STAC, which includes several Finnish datasets and more will be added soon. The workshop is free of charge, but registration is required. More information and registration: https://ssl.eventilla.com/stac_2023
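For readers who want to try a search before the session, the request pattern is the same against any STAC API: a query to the service's /search endpoint with a bounding box, time range, and collection filter. The sketch below uses plain `requests` against a generic STAC search endpoint; the catalog URL and collection id are placeholders rather than actual Paituli STAC values, and whether a given catalog accepts POST searches depends on its advertised conformance classes.

```python
# Minimal STAC API item search -- a sketch, not Paituli-specific documentation.
# Replace the URL and collection id with the values from the catalog you use.
import requests

STAC_SEARCH_URL = "https://example.org/stac/search"   # placeholder endpoint

payload = {
    "collections": ["example-collection"],             # placeholder collection id
    "bbox": [24.5, 60.0, 25.5, 60.5],                   # lon/lat box (roughly Helsinki area)
    "datetime": "2022-06-01T00:00:00Z/2022-08-31T23:59:59Z",
    "limit": 10,
}

resp = requests.post(STAC_SEARCH_URL, json=payload, timeout=30)
resp.raise_for_status()
for item in resp.json().get("features", []):
    # each item is a GeoJSON Feature; its assets hold links to the actual files
    print(item["id"], list(item.get("assets", {}).keys()))
```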
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00582.warc.gz
CC-MAIN-2023-50
779
3
http://blogs.msdn.com/b/usisvde/archive/2009/08/31/microsoft-it-s-top-10-sql-server-2008-features.aspx
code
Today, more than 4,700 Microsoft® SQL Server® 2008 instances with about 100,000 databases on dedicated hosts support 1,300 applications across several teams in Microsoft Information Technology (Microsoft IT). Because it has deployed SQL Server 2008 since early beta, Microsoft IT has had time to assess and report the top 10 features of the product in its environment. Article, 116 KB, Microsoft Word file. Following are 10 key reasons why Microsoft IT is excited about the enhancements in SQL Server 2008. To view the entire article click here.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096773.65/warc/CC-MAIN-20150627031816-00102-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
667
9
https://www.layer0.co/integrations/nuxtjs-shopify
code
Nuxt.js is an open-source, serverless framework based on multiple frameworks including Vue.js, Node.js, Webpack and Babel.js. It was created to build fast, complex isomorphic applications quickly. The framework handles the complex pre-coding configuration and UI rendering for your app, so that developers can focus on writing code. Additional benefits of Nuxt.js include automatic code-splitting, page caching and prefetching, bundling, and static site generation. Nuxt.js is fully capable of delivering the speedy websites that consumers demand. We performed a study to discover which modern frontend framework delivers the fastest websites and found Nuxt.js leads the pack, in front of React, Angular, Vue.js and Next.js. Layer0 is proud to be a Nuxt.js sponsor to help promote modern, open-source frameworks working to facilitate a faster Web. Shopify is a popular eCommerce platform that has attracted some of the world’s biggest brands thanks to its scalability, flexibility, and simplicity. Not to mention, it's highly customizable. Shopify offers a plethora of plugins and APIs, including the "Storefront API", Shopify's modern GraphQL API that can be used to implement shopping, account management, and checkout flows in a portable frontend. The platform also offers multiple responsive themes out of the box, including Debut, Brooklyn, Narrative and others based on Liquid templating. Shopify is one of the easiest platforms to take headless because its APIs are robust, consistent, have a modern GraphQL format, and are well documented. The API coverage, however, is not complete and throttling issues occur. Furthermore, the platform does not have any out-of-the-box tooling for a headless frontend or server-side rendering (SSR). In fact, the built-in Liquid templating language is not suitable for SSR. Truly supporting SSR on Shopify requires running and maintaining a fleet of Node.js servers. While some customers are running a portable frontend on the platform, microservices between the Storefront API and their frontend must be created to optimize the APIs and minimize the amount of client-side logic. Layer0 lets you deliver an instant loading Nuxt.js website on Shopify in a matter of hours. The average website on Layer0 sees median speeds of 320ms loads, as measured by Largest Contentful Paint (LCP), which is a critical metric in Core Web Vitals. With that said, Layer0 is much more than a website accelerator. It is a Jamstack platform for eCommerce. Layer0 makes websites faster for users and simpler for frontend developers. The platform was built specifically for large-scale, database-driven websites, such as eCommerce and Travel, and includes the following benefits: With Layer0, your Nuxt.js Shopify storefront will have sub-second page loads and your developers will be empowered to control edge caching and reduce rework using the various developer productivity tools that come with the platform. Check out Tie Bar, a sub-second Shopify website on Layer0. This men's retailer delivers 500ms page loads and ranks #1 in the search engine for key phrases on Layer0. If you're currently using Nuxt.js, use this guide to deploy your Nuxt.js application on Layer0 and get sub-second loads in as little as 1 hour of work. "Bringing our site to Layer0...helps us keep our top ranking position in search engine result pages." “Everyone has commented on how blazing fast our site is, which is thanks to Layer0.
It was one of the fastest implementations we’ve ever seen” "We want to be able to fail fast, and [Layer0’s] split testing is a very important piece for us to do this." “The way you factored in A/B testing is better than any tool I have seen.”
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662538646.33/warc/CC-MAIN-20220521045616-20220521075616-00602.warc.gz
CC-MAIN-2022-21
3,704
12
http://linux.derkeiler.com/Newsgroups/comp.os.linux.setup/2004-02/0316.html
code
From: Matt Webster (mattjw_at_bigpond.net.au)
Date: Sat, 07 Feb 2004 01:15:02 GMT
I have just installed Debian 3.0r1. I type "tasksel" as the root user, select the packages I want to install and then press finish. A whole lot of text flies down the page and on the last line it says E: Sorry, broken I have installed this off a magazine cover CD (Australian Personal Computer). There are two CDs involved. The first is the actual OS and the second is a CD that has applications that the magazine has put together. I have tried it with both CDs and get the same response. What can I do to fix this problem?
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121778.66/warc/CC-MAIN-20170423031201-00572-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
603
10
https://mynissanleaf.com/viewtopic.php?f=31&t=2478
code
This is probably an important observation. Gas cars overestimate their mpg all the time - my Rx400h consistently gets 24mpg by pump and odometer, but the in-car display shows 25.3. Why should an electric car be any different - it's still engineered by people who prefer to display optimistic results.wsbca wrote:..The only thing that really matters (to me anyway) as far as judging efficiency is the odometer reading divided by a utility meter, TED, or Killawatt reading. Code: Select all Trips Total Consumption Regeneration Distance Energy 1 0.8kWh 1.2kWh 0.4kWh 4.8miles 5.7miles/kWh 2 0.4kWh 0.9kWh 0.5kWh 3.3miles 7.7miles/kWh 3 0.6kWh 0.9kWh 0.3kWh 3.2miles 5.3miles/kWh 4 0.8kWh 1.3kWh 0.5kWh 4.0miles 5.1miles/kWh 5 0.4kWh 0.5kWh 0.1kWh 1.4miles 4.0miles/kWh 6 1.1kWh 1.5kWh 0.4kWh 4.1miles 3.8miles/kWh
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00376.warc.gz
CC-MAIN-2019-51
811
3
https://kw7ob.com/namecheap-ssl-activation-iis-well-known/
code
Namecheap Ssl Activation Iis .Well-Known If you are looking to host your website on a cheap domain name provider, then look no further than Namecheap. Namecheap offers affordable domain registration prices and a free website builder. You can also get a year of free PositiveSSL, cPanel, and CDN for your website. They accept credit/debit cards as well as PayPal and Bitcoin. There is also a 30-day money back guarantee. Check out our complete review of Namecheap. Namecheap makes it easy to sign up for an account. Simply fill out the form with your information, and then make a payment. Within seconds, your account will be active. You’ll receive a welcome email with instructions on how to get started. You can even contact Namecheap support if you run into any issues. You’ll receive support within a few days and get your questions answered. Another feature of Namecheap’s bulk domain search tool is its powerful bulk domain search tool. You can enter up to 5,000 keywords, and exclude premium domains. You can also filter domain names by desired endings. This allows you to quickly find the perfect domain name. You can also buy and sell domains on Namecheap’s marketplace. To make money, you can also sell your domain. Namecheap’s control panel is another great feature. Their control panel looks like a typical cPanel, but it’s free of flashy skins. It’s simple to use and has all the necessary settings. Softaculous allows you to install applications. In addition to its cPanel, Namecheap also offers a drag-and-drop website builder. You can customize the template by adding content, depending on your needs. Namecheap isn’t proud of its site’s speed, but it is very fast. Our test site was up and running in 2.5 seconds, and we made 62 requests at the same time. Namecheap scored 100% in all speed metrics we examined. However, it is not the fastest site hosting provider in terms of performance. It’s an option for small websites. Namecheap is a great choice if you are looking for affordable domain registration and hosting. In addition to offering cheap domain registration, they offer web hosting and site management services. The company also offers free privacy protection and domain name security, full DNS access, and an extensive knowledge base. Namecheap is a great option, but it’s worth the risk. While all domain name registrars have their pros and cons, there’s no doubt that Namecheap ranks among the best. Namecheap is a great choice for people who are looking to transfer their website domains and are new to the internet. They also provide hosting plans for small sites and sell SSL certificates and privacy protection services. The company offers excellent service and a 100% uptime guarantee. Overall, Namecheap is a great choice for beginners. With its low price, you can get a great domain name and quality hosting for your website. “
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00578.warc.gz
CC-MAIN-2023-06
2,891
8
http://www.daxiongmao.eu/wiki/index.php?title=NFS_image_creation
code
NFS image creation
This article explains how to initialize a new NFS image. By now you should already know and have a NetBoot kernel.
!! Don't forget to back up the kernel's libraries and modules !!
NFS image folder
You should create a folder to host the new NFS image, for instance QA:
mkdir -p /nfs/qa
chmod -R 777 /nfs/qa
This folder must be registered in the NFS server configuration (/etc/exports). >> See NFS server
Get a minimal Operating System
Go to your NFS image folder (it should be empty):
Thanks to debootstrap you can initialize a lot of *nix versions! You need to adjust the following command to the distribution you'd like to run:
## Ubuntu 14.04
debootstrap trusty /nfs/qa
!! This step is quite long... Depending on your network and CPU it can take up to 10 or 15 minutes !!
Copy kernel's libraries and modules
The first thing to do is to copy your kernel's libraries and modules into your new NFS image.
cp -r /tftpboot/sources-images/trusty/lib/modules /nfs/qa/lib/
cp -r /tftpboot/sources-images/trusty/usr/src/ /nfs/qa/usr/src/
- /tftpboot/sources-images/trusty is your TFTP kernel name
- /nfs/qa/ is your new NFS image
Clean the default settings
When you use debootstrap the current DNS settings are set as "default". You should clean that!!!
echo "" > /nfs/qa/etc/resolvconf/resolv.conf.d/base
echo "" > /nfs/qa/etc/resolvconf/resolv.conf.d/head
echo "" > /nfs/qa/etc/resolvconf/resolv.conf.d/original
echo "" > /nfs/qa/etc/resolvconf/resolv.conf.d/tail
You don't need to clean the "/etc/resolv.conf" file, since that one will be regenerated at runtime. Even better! By keeping the current /etc/resolv.conf you'll be able to get an Internet connection when you're inside the NFS image as a "chroot".
By default your client will have the same hostname as the server due to the "debootstrap" installation. :( You MUST clean that in order to retrieve the name from your DNS.
echo "" > /nfs/qa/etc/hostname
Regarding "hosts", you should only keep the loopback settings. It should look like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00596.warc.gz
CC-MAIN-2022-21
2,108
32
https://www.testkingit.com/ConvergenceTechnologiesProfession-certification/
code
Convergence Technologies Profession Certifications TestKing It is not so easy to yield successful results. If you want to get something, you must pay for the equal efforts. As we all know, no pains, no gains. Stop envying social elites. You can also obtain a wonderful future by persistent efforts. Maybe our Convergence Technologies Profession exam cram can give you some help. At least, you must enrich yourself by learning knowledge. The more knowledge you study, the much wiser you are. Our Convergence Technologies Profession actual test material is responsible for our customers. You will benefit a lot after finishing your study on our study guide. We hope that everyone can be brave enough to try our Convergence Technologies Profession study guide. In the last ten years, we have been focusing on researching the Convergence Technologies Profession exam cram. Although we have come across many problems, our company has successfully overcome them. Now, the quality of our latest training material reaches the highest level. We are happy that our Convergence Technologies Profession actual test has won many customers’ support. Your satisfaction of our test engine is the greatest motivation for us to move forward. We are grateful that our efforts finally pay off. So why not have a try? Your choice of our Convergence Technologies Profession study guide is absolutely correct. Stop hesitating. We are waiting for your coming.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00140.warc.gz
CC-MAIN-2021-25
1,437
3
https://mail.python.org/pipermail/python-ideas/2010-September/008186.html
code
[Python-ideas] Prefetching on buffered IO files solipsis at pitrou.net Tue Sep 28 22:33:39 CEST 2010 On Tue, 28 Sep 2010 09:44:38 -0700 Guido van Rossum <guido at python.org> wrote: > But AFAICT unpickle doesn't use seek()? > > But, if the stream had prefetch(), the unpickling would be simplified: I > > would only have to call prefetch() once when refilling the buffer, > > rather than two read()'s followed by a peek(). > > (I could try to coalesce the two reads, but it would complicate the code > > a bit more...) > Where exactly would the peek be used? (I must be confused because I > can't find either peek or seek in _pickle.c.) peek/seek are not used currently (in SVN). Each of them is used in one of the prefetching approaches proposed to solve the unpickling (the first approach uses seek() and read(), the second approach uses read() and peek(); as already explained, I tend to consider the second approach much better, and the prefetch() proposal comes in part from the experience gathered on that approach) > It still seems to me that the "right" way to solve this would be to > insert a transparent extra buffer somewhere, probably in the GzipFile > code, and work in reducing the call overhead. No, because if you don't have any buffering on the unpickling side (rather than the GzipFile or the BufferedReader side), then you still have the method call overhead no matter what. And this overhead is rather big when you're reading data byte per byte, or word per word (which unpickling very frequently does). (for the record, GzipFile already has an internal buffer. But calling GzipFile.read() still has a large overhead compared to reading data directly from a prefetch buffer inside the unpickler object) More information about the Python-ideas
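To make the read()-plus-peek() pattern discussed above concrete, here is a rough standalone sketch (it is not the _pickle.c code): it refills a local buffer from a BufferedReader with one exact read() plus a peek() for whatever extra bytes are already buffered. The proposed prefetch(want, lookahead) would collapse those calls into one; that method is hypothetical and was never added to the io module.

```python
# Sketch of the buffering pattern discussed in this thread. read() gets the
# bytes we know we need; peek() exposes already-buffered bytes so we can take
# them as cheap lookahead without blocking for more input.
import io

def refill(stream, want, lookahead=256):
    data = stream.read(want)           # the bytes the consumer actually needs
    extra = stream.peek(lookahead)     # whatever the buffer already holds
    # peek() does not advance the stream, so consume the peeked bytes explicitly
    return data + stream.read(len(extra))

stream = io.BufferedReader(io.BytesIO(b"x" * 10000))
chunk = refill(stream, 16)
print(len(chunk))   # the 16 bytes asked for plus the buffered lookahead

# A hypothetical stream.prefetch(want, lookahead) would do the same job in a
# single method call, which is the call-overhead reduction argued for above.
```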
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363301.3/warc/CC-MAIN-20211206133552-20211206163552-00206.warc.gz
CC-MAIN-2021-49
1,763
31
http://www.blackberryforums.com/rim-software/78506-missing-meeting-requests-invites.html
code
05-29-2007, 04:00 PM
Missing meeting requests / invites
We recently got a dozen blackberries at my company and a few days ago some of us stopped receiving meeting requests on our blackberries. They were working fine before, and are still working for some people. We have 8700s and some kind of software on our exchange server. We can still see the meeting requests in the Inbox in Outlook (still unread), and the meetings show up in the Blackberry calendar, but the meeting requests do not show up at all in the Inbox on the blackberry. Any ideas on how to fix this?
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647576.75/warc/CC-MAIN-20180321043531-20180321063531-00294.warc.gz
CC-MAIN-2018-13
656
9
https://www.scarletcalzature.it/en/prodotto/larianna-slingback-bl1413-rt-fuchsia-barbie/
code
L’ARIANNA Slingback BL1413/RT Fuchsia/Barbie -SNAKE-EFFECT LAMINATED LEATHER UPPERS -SOFT LEATHER LINING -SPOOL HEEL HEIGHT 2.5 CM Handcrafted in Italy DELIVERY IN 2 WORKING DAYS WE ARE HERE FOR YOU CONTACT US MADE EASY AND GUARANTEED SECURE AND GUARANTEED PAYMENTS Laminated snake-effect leather slingback on shades of fuchsia accompanied by gold and two shades of green. A seductive and super chic color contrast. Perfect for complementing and making even the most classic look more appealing. Attention to detail and craftsmanship made in Italy create a unique and timeless style. Leather lining and insole.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00293.warc.gz
CC-MAIN-2024-10
612
14
https://www.godeltech.com/agile-consulting/
code
As a consequence of this continual need to move quickly and adapt to new requirements or challenges, Agile Development has become the preferred approach for software delivery. Unlike other methodologies that adopt a more linear approach, such as Waterfall, Agile Development is an iterative process that places the needs of the business at the centre, cultivating a more collaborative process between the development team and business stakeholders. The Agile Development process breaks down features, defects and enhancements, estimates their effort and then assigns them to a release (each release can be viewed as a mini project.) Each release has a deadline, which can vary in length but will typically be every two weeks. By breaking down large initiatives into smaller releases, priorities can be easily managed and changed as needed. With a new release of the software going into production on a regular basis, clients will be able to see constant value, as applications are continually improved to support the business at the point of need: an important benefit for your organisation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00350.warc.gz
CC-MAIN-2023-06
1,091
2
https://support.pagely.com/hc/en-us/articles/360031279631
code
Press3 is a service offered by Pagely that allows you to automatically offload your static assets to your own personal Amazon S3 bucket. This allows you to have a lower footprint on your Pagely VPS account while still retaining your library of images or other static assets. If you have a site with a large number of static assets, Press3 is a great solution for you. How Does Press3 Work? Press3 works by automatically uploading your assets to your S3 bucket when they reach a certain age. For example, if you have images from old posts that might not get a lot of traffic but still need to remain intact for historical purposes, Press3 is a way to maintain them while saving precious storage space. When a page is visited, the server will check to see if the image already exists locally. If not, Press3 will then check for it within your S3 bucket and load it from there. Of course, if subsequent users then visit that same page that is loading the images from Press3, the images are still cached locally and on PressCDN to ensure the fastest speeds possible. To start using Press3, you'll need to meet the following requirements: - Your WordPress app must be in NGINX-Only mode. - An Amazon S3 bucket located in the same region as your site. To get started with Press3, just contact our support team to have it set up for you. When submitting your request, please be sure to include the following information:
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00283.warc.gz
CC-MAIN-2020-24
1,413
9
http://objectifiers.com/rundll32-high/rundll32-exe-btmshellex.html
code
how do i find it, it is or was a part of my windowsxp. Scrin- read loads of different write ups on it, and more often than not people seem to say its not dangerous. It will restart your computer after restoring to specified date. so if you don't use bluetooth, i'd say just go to services.msc and disable your bluetooth. his comment is here Hang with us on LockerDomeCircle BleepingComputer on Google+!How to detect vulnerable programs using Secunia Personal Software Inspector Simple and easy ways to keep your computer safe and secure on the Internet The old one out is the rundll file. Edited by Draclvr, 02 July 2013 - 07:27 PM. Sid RunDll32 was deleted and now I cant open my control panel JD when you do not have a fire wall to stop it it opens a site and takes over November 21, 2008 jd2066 @Harjit: Post your question on the forum. No [Meta] posts about jobs on tech support, only about the subreddit itself. You figure it would be in the related articles. permalinkembedsavegive gold[–]samkostka 1 point2 points3 points 1 year ago(1 child)What are you talking about? See also: Link Ashish doshi I never had an issue withit until I installed a nvidia card. When that happens, they can be very difficult to detect and remove, but using a comprehensive system scan usually detects them and allows you to safely remove them. Rundll32.exe Download many i delele, some i cant do a thing with. Control panel/display settings still work fine. Rundll32 High Cpu The assembly utilizes the .NET run-time framework (which is required to be installed on the PC). Note: The rundll32.exe file is located in the folder C:\Windows\System32. http://www.errorboss.com/exe-files/rundll32-exe/ Rundll32.exe works by invoking a function that is exported from a specific 16-bit or 32-bit DLL module. After stuggling with this for many, many months, I think I found a makeshift solution. Rundll32 Error It's likely that this problem is caused by a bugged library that is getting stuck in an infinite loop. It is a makework system for software developers (programmers). people It is in partnership with se.dll when operating off of trojan.start page rendering IE useless. Rundll32 High Cpu Daniel "Dtoolman" this dll file runs my cpu usage up to a 100%!!!! Now my computer won't startup. Rundll32.exe Virus If so, it's a virus and use a virus remover immediately it pops in startup monitor (very useful piece of shareware) and i can keep it from loading all it's applications Rundll32 High Disk Usage It don't run all the time. Better start reading! http://objectifiers.com/rundll32-high/rundll32-what.html Everyone, that is, except the Queen, who is exempt from the law. Restart your computer. Is rundll32.exe spyware or a virus? Rundll32.exe Error If there isn't any information at all, you should either Google it, or ask somebody on a helpful forum. Remember, something legit may be using it too, so use info like the "Mem Usage" & "CPU" to guess at which it is... Also check if there are more files that look like 'rundll32' by using the search function, as the only valid one is the one situated in c:/windows/system32. http://objectifiers.com/rundll32-high/rundll32-exe.html Sup3r all it does is not let me delete history Jordian when i quit it with task manager my pc runs smoother and faster. MSN warned me that I was messing with a necessary Windows application. What Is Rundll32 if it's absense stops hotmail and msn network it's probably a good thing ;) Matt Gilbert i hate it. I could immediately access my control panel and download updates. 
But almost all windows operations use this file also. Pop ups were all reporting spyware and directing me to rogue spyware sites. I thought it was a virus/spyware/trojan but my spyware and antivirus programs did not detect anything. Make sure you typed the name correctly, and then try again. Windows Host Process Rundll32 Startup Instructions Step 1: Download the free Rundll32.exe scanner Step 2: Scan your computer Step 3: Click "Fix All" and you're done! It is a real virus. Another potential cause of these error messages can come from malicious software such as adware, spyware, and viruses. Follow the removal steps below to automatically remove malicious files. http://objectifiers.com/rundll32-high/rundll32-cpu.html link: https://technet.microsoft.com/en-us/sysinternals/bb896653.aspx permalinkembedsavegive gold[–]ajeoaeTrusted 0 points1 point2 points 1 year ago(0 children)Just to add, by hovering over the offending rundll32.exe, you can see the exact arguments being passed to it. I am confused and dont know what to do. Many programs use RunDLL to execute detatched processes. Should I stop the Rundll32.exe task or process within the Task Manager? dont remove the valid rundll32 its part of the operating system, what is using rundll32 is what is causing the problem. .. Sometimes is still runs, but allows everything to run at normal speed. One will use the library when you start add/remove s/w in control panel, the other turned out to be specifically Nvidia - if you right click the desktop/Nview Properties and disable Finally, the rundll32.exe file exits.We strongly recommend that you run a FREE registry scan to identify rundll32.exe related errors.Other instances of RUNDLL32.EXE:1) rundll32.exe is a process registered as a backdoor vulnerability Distribution by PC manufacturer PC Manufacturerdistribution ASUS 32.97% Lenovo 26.37% Dell 21.98% MSI 10.99% Samsung 5.49% Sony 2.20% Back to top © 2016 Reason Software Download | Terms | Privacy | oliver it stop my computer from working properly... ! I renamed the PF FILes by putting a z infront of them (so i could re find them easily if I wanted to reverse the process). but the file itself is safe, valid, and necessary for certain functions. Click here to run a free registry scan now.Warning: Multiple instances of RUNDLL32 may be running on your pc at one time. Rundll32 freezes all the time and is only sometimes useful FMasic rundll32.exe is a process which executes DLL's and places their libraries into the memory, so they can be used more Billy Get the ORGINAL rundll32,exe from your WinXP Cd. Virus with same file name: W32.Miroot.Worm - Symantec Corporation Backdoor.Lastdoor - Symantec Corporation Trojan.StartPage - Symantec Corporation Click to Run a Free Scan for rundll32.exe related errors Users Opinions Average user
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947705.94/warc/CC-MAIN-20180425061347-20180425081347-00491.warc.gz
CC-MAIN-2018-17
6,356
18
https://slidelegend.com/totality-versus-turing-completeness-cis-personal-web-pages_59bb83da1723dd77e80113a8.html
code
Totality versus Turing-Completeness? Conor McBride University of Strathclyde [email protected] Abstract. In this literate Agda paper, I show that general recursive definitions can be represented in the free monad which supports the ‘effect’ of making a recursive call, without saying how these programs should be executed. Diverse semantics can be given by suitable monad morphisms. The Bove-Capretta construction of the domain of a general recursive function can be presented datatype-generically as an instance of this technique. Advocates of Total Functional Programming, such as myself, can prove prone to a false confession, namely that the price of functions which function is the loss of Turing-completeness. In a total language, to construct f : S → T is to promise a canonical T eventually, given a canonical S. The alleged benefit of general recursion is just to inhibit such strong promises. To make a weaker promise, simply construct a total function of type S → G T where G is a suitable monad. The literature and lore of our discipline are littered with candidates for G, and this article will contribute another—the free monad with one operation f : S → T. To work in such a monad is to write a general recursive function without prejudice as to how it might be executed. We are then free, in the technical sense, to choose any semantics for general recursion we like by giving a suitable monad morphism to another notion of partial computation. For example, Venanzio Capretta’s partiality monad, also known as the completely iterative monad on the operation yield : 1 → 1, which might never deliver a value, but periodically offers its environment the choice of whether to interrupt computation or to continue. Meanwhile, Ana Bove gave, with Capretta, a method for defining the domain predicate of a general recursive function simultaneously with the delivery of a value for every input satisfying that domain predicate. Their technique gives a paradigmatic example of defining a datatype and its interpretation by induction-recursion in the sense of Peter Dybjer and Anton Setzer [11, 12]. Dybjer and Setzer further gave a coding scheme which renders first class the characterising data for inductive-recursive definitions. In this article, I show how to compute from the free monadic presentation of a general recursive function the code for its domain predicate. By doing so, I implement the Bove-Capretta method once for all, systematically delivering (but not, of course, discharging) the proof obligation required to strengthen the promise from partial f : S → G T to the total f : S → T. Total functional languages remain logically incomplete in the sense of Gödel. There are termination proof obligations which we can formulate but not discharge within any given total language, even though the relevant programs—notably the language’s own evaluator—are total. Translated across the Curry-Howard correspondence, the argument for general recursion asserts that logical inconsistency is a price worth paying for logical completeness, notwithstanding the loss of the language’s value as evidence. Programmers are free to maintain that such dishonesty is essential to their capacity to earn a living, but a new generation of programming technology enables some of us to offer and deliver a higher standard of guarantee. Faites vos jeux!
The General Free Monad
Working (http://github.com/pigworker/Totality), in Agda, we may define a free monad which is general, both in the sense of being generated by any strictly positive functor, and in the sense of being suited to the modelling of general recursion.

data General (S : Set) (T : S → Set) (X : Set) : Set where
  !! : X → General S T X
  ?? : (s : S) → (T s → General S T X) → General S T X
infixr 5 ??

At each step, we either output an X, or we make the request s ?? k, for some s : S, where k explains how to continue once a response in T s has been received. That is, values in General
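For illustration (this helper is a standard construction over such a free monad; its name and placement here are my own and may differ from the paper's text), the one-step "make a recursive call and return its response" operation is definable directly:

call : ∀ {S T} (s : S) → General S T (T s)
call s = s ?? !!

call s issues the request s and its continuation simply returns whatever response of type T s comes back.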
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00563.warc.gz
CC-MAIN-2021-25
4,006
5
http://stackoverflow.com/questions/1442232/web-site-administration-tool-unable-to-connect-to-sql-server-database
code
I'm using Visual Studio 2008 and MS SQL Server 2008 Express. SQL Server: zeroonea\SQL2008EXPRESS. I created a web project, made a dbtest.mdf in App_Data, made some tables, and used aspnet_regsql to create the membership tables in there; everything works fine. My connection string in web.config: <connectionStrings> <add name="dbtestConnectionString" connectionString="Data Source=zeroonea\SQL2008EXPRESS;Initial Catalog=dbtest;Persist Security Info=True;User ID=***;Password=***" providerName="System.Data.SqlClient" /> </connectionStrings> It still works when I run the web application; the code can connect to SQL Server. But when I run the Web Site Administration Tool and click on the Security tab, it throws an error: There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store. The following message may help in diagnosing the problem: Unable to connect to SQL Server database.
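For what it's worth, the error text about the role manager suggests checking that the membership and role providers in web.config are enabled and point at the same named connection string the site uses; a minimal sketch under that assumption (provider names below are illustrative, not taken from the original project):

<system.web>
  <membership defaultProvider="MySqlMembershipProvider">
    <providers>
      <clear />
      <add name="MySqlMembershipProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="dbtestConnectionString"
           applicationName="/" />
    </providers>
  </membership>
  <roleManager enabled="true" defaultProvider="MySqlRoleProvider">
    <providers>
      <clear />
      <add name="MySqlRoleProvider"
           type="System.Web.Security.SqlRoleProvider"
           connectionStringName="dbtestConnectionString"
           applicationName="/" />
    </providers>
  </roleManager>
</system.web>

The Web Site Administration Tool uses whichever providers the site defines, so aspnet_regsql must have been run against the database named in that connection string, and the SQL login in the connection string needs permission on the aspnet_* tables.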
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833715.76/warc/CC-MAIN-20140820021353-00208-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
1,087
8
https://virologyj.biomedcentral.com/articles/10.1186/1743-422X-7-151/figures/2
code
Cross-reactivity of H5N1-M2e-MAP-induced antibody against H1N1-M2e peptide. Mice were primed and boosted with H5N1-M2e-MAP vaccine and sera were collected as described in Materials and Methods to detect cross-reactivity against H1N1-M2e by ELISA. The end-point titer of each sample was determined as the highest dilution that yielded an OD450 nm value greater than twice of that from pre-vaccination. The data are expressed as mean ± standard deviation (SD) of 10 mice per group. The lower limit of detection (1:20) is indicated by a dotted line. Time points of immunizations are shown as small spots on X-axis, and indicated by arrows at the bottom.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571153.86/warc/CC-MAIN-20220810100712-20220810130712-00489.warc.gz
CC-MAIN-2022-33
651
1
https://hackernoon.com/could-cryptocurrencies-soothe-international-tensions-hi4x32y9
code
Too Long; Didn't Read With an internationally accepted currency in place, the practical benefits could be enormous. A stable, internationally recognized currency could have the power to soothe international tensions. The idea is to get all (or at least most) countries operating with the same currency. This would have a number of positive effects: a reduction in currency and economic discrepancies, and it would be easier for countries to see each other as peers rather than competitors. Cryptocurrency could serve as that kind of shared currency.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500255.78/warc/CC-MAIN-20230205130241-20230205160241-00645.warc.gz
CC-MAIN-2023-06
576
2
https://answers.sap.com/questions/5403057/crm-service-order.html
code
I have a problem. How do I configure the R/3 system so that, when a Service Order is uploaded into R/3, it is able to get the right employee responsible from CRM? I'm referring in particular to the Business Partners which are Employees in CRM... Full points if it helps! Thanks a lot,
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00454.warc.gz
CC-MAIN-2021-43
287
3
http://www.insanitybit.com/2012/06/02/livecds-are-not-security/
code
I see many guides on the internet advocating a LiveCD for security – not specific distros, not a LiveUSB, just "Use a LiveCD for your online banking to protect yourself." I'm going to highlight exactly why they aren't just useless for security but actually detrimental in some situations. Most LiveCDs Are Not Built For Security Most distros provide a LiveCD as a way to test out the system. They are not designed for security, nor do they make any attempt to be more secure than a default installation; in fact they explicitly make no attempt to do so because they want users to get exactly the same experience as a default installation. Just because you're running from a CD does not mean an attacker is limited to that CD. Most LiveCDs will give full rights to the hard drive and all devices. On top of that most LiveCDs will run either as root by default or with a default root password or no root password at all, meaning that an attacker can gain root without even trying. LiveCDs Necessitate Dangerous Sessions If you're using a LiveCD for security it's probably for banking or some such thing. A sensitive session. So while the argument for a LiveCD is that persistence isn't possible (except on most it is, but this applies to all LiveCDs), it's entirely unimportant. If a hacker gains access to your LiveCD session they're gaining access to everything they need. Even if you shut off right after the session and the hacker wasn't able to install to the drive, you're still screwed because they don't care about persistence. LiveCDs don't make this more dangerous; it's just a false sense of security, because persistence is not the only goal and in the case of a LiveCD session it's pretty unimportant. LiveCDs Can Not Update If I burned my LiveCD a month ago, there's a month of unpatched vulnerabilities in it. My only option is to burn a new CD every time a patch comes out, which is costly and ineffective. I can use a LiveUSB, which solves this issue to a large extent though. False Sense Of Security Because people think that persistence matters, that Linux is unhackable, and that running from a CD will break access to devices, they put faith in a broken idea. A false sense of security is going to do serious damage because a user will think they can go onto an insecure network with a LiveCD or not worry about other issues. So if you really want security, a LiveCD is not the way to go. LiveUSBs solve pretty much all of these issues when used with the right distro, so I suggest you look into that. Leave LiveCDs for testing distros and saving Windows. Most of this also applies to VMs actually, but it gets more complicated with them.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204969.39/warc/CC-MAIN-20190326095131-20190326121131-00155.warc.gz
CC-MAIN-2019-13
2,676
14
https://www.vn.freelancer.com/projects/php-script-install/userplane-integration-tutorial/
code
This is simple (and easy money for the right person). I want UserPlane WebChat, and IM integrated into my current user profile site so that users do not need to re-log in to chat. The software is AbleDate by abk-soft. It has chat and im built in, but I don't like it. Here is the fun part: You don't have to do any of the work. I want someone to talk me through (easy steps, few at a time) via email or aim/yahoo to do the install/integration myself. I want to learn how to do it so that I can reproduce it on my other AbleDate sites and so that I can get a basic understanding of how it works. This should be easy for a good teacher. I am adept at learning things easily and on the fly. I know some basic php/mysql. I could do it myself already except that there are no instructions/documentation available that walk you through it. This is the FREE UserPlane application.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945232.48/warc/CC-MAIN-20180421145218-20180421165218-00198.warc.gz
CC-MAIN-2018-17
873
5
https://funnycatnames.github.io/my-cat-is-limping-after-a-fight.html
code
My Cat Is Limping After A Fight If your cat is bleeding, apply pressure and wrap their leg/foot in a bandage. My cat is limping after a fight. Sprains, minor injuries and wounds. Consequently, his muscles and other soft tissues are wounded. Your cat will need to stay very quiet for several weeks while the. Their emotions will be unstable at the time, so check for signs of excessive fear or anxiety and try to give them more attention than before, since that will help them heal. Many injuries a cat might have can be treated with first aid or even just letting kitty rest in. Here's a look at some of the most. Will he let you look at the leg? It might even be a bite elsewhere on his body, in the shoulder area. Check whether you can see any puncture wounds or areas of swelling; if an abscess is forming they can feel quite. He avoids putting pressure on the leg and moans when he moves from side to side when lying down. The last thing you want to do after your cat had a fight is offer them a new trauma by scolding them. The reasonable explanation could be that he has engaged himself in a catfight. Arthritis (more common in older cats, a very manageable condition that shouldn't be ignored), cat bite abscess. When you examine your cat's leg, start with the paws and go up from there. I didn't see any blood and thought there's no way cats fighting would be able to break any bones. A couple days later, she was limping and then she. The limping might become more and more acute if left untreated. When I touch the leg near the knee, even lightly, he shows a lot of pain. Remedying the problem may be as easy as pulling out a thorn or clipping an overgrown toenail.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506329.15/warc/CC-MAIN-20230922034112-20230922064112-00036.warc.gz
CC-MAIN-2023-40
2,577
29
https://www.design-reuse.com/news/41979/xilinx-p4p4-netfpga-workflow-p4-16-fpga.html
code
SAN JOSE, Calif. -- May 11, 2017 -- Xilinx, Inc. (NASDAQ: XLNX) announced today it will participate in the 2017 P4 Developer Day and P4 Workshop, May 16 - 17 at Stanford University, debuting its P416 to FPGA compilation and introducing a new P4-NetFPGA workflow for networking researchers. As co-chair of the P4.org working group that has developed the new P416 language standard, Xilinx is a lead sponsor and co-presenter for the two-day event. Join the 2017 P4 Developer Day and P4 Workshop to learn about the sessions and demonstrations and more. To register, visit http://p4.org/. May 16 at Stanford University, Palo Alto, CA P4-NetFPGA Workflow for Networking Researchers At the 2017 P4 Developer Day, Xilinx will introduce the P4-NetFPGA workflow for networking researchers, developed in collaboration with Stanford University and the University of Cambridge. The new workflow couples the new Xilinx® P416 to FPGA compilation capability with the Xilinx technology-powered open source NetFPGA SUME platform for networking research (netfpga.org). It allows researchers to easily conduct experiments in hardware operating at line rate. At the Developer Day, there will be a half-day hands-on laboratory allowing attendees to gain experience with P4-NetFPGA, including implementing In-band Network Telemetry. May 17 at Stanford University, Palo Alto, CA P416-to-FPGA Compilation Demonstration At the 2017 P4 Workshop, where the new P416 language specification will be discussed, Xilinx will debut its P416 to FPGA compilation flow based on the Xilinx® SDNet™ Development Environment for Networking. SDNet supports FPGA packet processing rates between 1 Gb/sec and 100 Gb/sec. At the workshop, an industry-first demonstration of joint work by Xilinx Labs and Stanford University will show stateful data plane processing for the TCP protocol implemented by compiling a P416 program to a Xilinx FPGA. Xilinx is the leading provider of All Programmable FPGAs, SoCs, MPSoCs, RFSoCs and 3D ICs. Xilinx uniquely enables applications that are both software defined and hardware optimized – powering industry advancements in Cloud Computing, 5G Wireless, Embedded Vision, and Industrial IoT. For more information, visit www.xilinx.com.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00497.warc.gz
CC-MAIN-2023-06
2,234
8
https://dspace.xmu.edu.cn/handle/2288/94128
code
The Separation of Cities and Counties in Hebei Province During the Tang Dynasty and Its Causes - 2013 【Chinese abstract, translated】 The splitting off and establishment of prefectures and counties is one of the important aspects of changes in administrative divisions. In the Hebei circuit of the Tang dynasty such separations were not only very common but also involved considerable numbers, including prefectures and counties provisionally established after the founding of the Tang, prefectures and counties abolished in earlier dynasties and re-established under the Tang, and prefectures and counties newly created after the founding of the Tang. The main factors behind the separation of prefectures in the Hebei circuit were: settling newly surrendered "bandit" groups, an excessive number of subordinate counties and households, territories too vast with inconvenient transport, guarding against incursions by ethnic minorities, and special political upheavals. The main factors behind the separation of counties were: establishment together with a new prefecture, the presence of basic economic conditions such as sufficient population, transport needs, and the needs of regional administration. 【Abstract】The separation of cities and counties is one of the major contents of changes in administrative zones. The separation of cities and counties in the administrative zone of Hebei province during the Tang dynasty is not only a common event, but also involves a rather large number of them, including the cities and counties separated after the establishment of the dynasty and the reset of those separated in the previous dynasty, as well as the newly set-up cities and counties. The main factors affecting the separation of the cities in Hebei province during the Tang dynasty are: the demand of land recovery by the state, over number of administrative counties and household, inconvenient traffic caused by remote areas, prevention of invasion by the minorities and some other special political changes. The main factors affecting the separation of the counties are: setting up of them concurrently with the cities, sufficiency of the basic economic conditions such as population, the demand of traffic, the requirement of district administration and so on.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141486017.50/warc/CC-MAIN-20201130192020-20201130222020-00290.warc.gz
CC-MAIN-2020-50
1,772
3
https://riptutorial.com/cmake/example/6860/local-variable
code
set(my_variable "the value is a string") By default, a local variable is only defined in the current directory and any subdirectories added through the add_subdirectory() command. To extend the scope of a variable there are two possibilities: CACHE it, which will make it globally available, or PARENT_SCOPE, which will make it available in the parent scope. The parent scope is either the CMakeLists.txt file in the parent directory or the caller of the current function. Technically the parent directory will be the CMakeLists.txt file that included the current file via the add_subdirectory() command.
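A small sketch of both options, with illustrative variable names (this example is mine, not part of the original tutorial text):

# Inside a CMakeLists.txt that was added with add_subdirectory()

# Option 1: put the value in the cache, making it visible in every directory.
set(my_cached_var "visible everywhere" CACHE STRING "illustrative cache entry")

# Option 2: push the value up into the parent scope
# (the including CMakeLists.txt, or the caller if this were inside a function).
set(my_var "visible in the parent scope" PARENT_SCOPE)
# Note: PARENT_SCOPE does not also set my_var in the current scope.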
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00569.warc.gz
CC-MAIN-2022-33
540
8
https://betweentwoparens.com/
code
Your coding life doesn't have to be a Rogue Like A blog about life between two parens It's not a build tool, it's clj Live like Jay Hiccup is the Ryan Atwood to JSX's Seth Cohen Focus on learning and writing Clojure! Learn how to serve a ClojureScript project on Nginx This is for the students of the game, the ones who want reloadable code. Setup a ClojureScript Test Toolchain like a Boss It's time to uncover the truth about Reagent components A guide to setting up a ClojureScript app from scratch without fear or worry. How to build a static site in ClojureScript in probably 2.5 minutes
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00639.warc.gz
CC-MAIN-2020-40
595
12
http://www.techist.com/forums/f77/no-signal-first-boot-253005/
code
CPU = (AMD Phenom II X4 970 Black Edition Deneb 3.5GHz Socket AM3) MOTHERBOARD = (ASUS M4A87TD EVO AM3 AMD 870 SATA 6Gb/s ) RAM = (CORSAIR Vengeance 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600) HEATSINK = (OEM heatsink) OS = (NOTHING YET) POWER SUPPLY = (CORSAIR Builder Series CX600 V2 600W ATX12V v2.3 80 PLUS Certified Active PFC Power Supply) VIDEO CARD = (EVGA SuperClocked 01G-P3-1563-AR GeForce GTX 560 Ti (Fermi) 1GB 256-bit GDDR5 PCI Express 2.0 x16) I just finished the build, I get no beeps when I boot it up. All fans go on and everything seems to be properly installed. I have it hooked up via HDMI to my Viore 27inch tv. It just stays powered on with no signal. When you get a successful boot, don't you get two consecutive beeps? I get none and I have the little speaker cable plugged into the correct spot that came with the MB. PLEASE HELP! THIS IS MY FIRST BUILD! I've attached an image, maybe anyone could see anything wrong? Uploaded with ImageShack.us
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00082-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
971
12
http://jostendorf.com/2012/productivity-and-effectiveness.html
code
So I started reading a sort of self-help book entitled The 4 Hour Workweek. It describes a man's strategy to shorten the amount of time he spent working without decreasing the amount of things he got done. He primarily used two different economic theories. The first is that 80% of income comes from 20% of the customers (this basic theory varies a lot in terms of numbers and examples); this is called the Pareto principle. The second is that a task's complexity and effort are based more on the time allotted to do it than on the actual task; this is called Parkinson's Law. I feel like these two theories apply well to ways I can improve myself and how I do things. The first, the Pareto principle, applies well to me already. I have always said that, for me, programming is all about 'bursts'. I mean that I get most of my productivity done in a short, focused burst of effort. I can really put myself into a super focused mindset and really produce work; the difficulty of doing this is keeping focused, for various reasons. The main reason is hitting roadblocks: a coworker's mistake stops me from progressing and I must stop and consult them to figure it out, or someone runs into an issue with something I wrote and I need to help them resolve my mistake. This breaks up my 'flow' and all the effort I took to get that ball rolling is gone and must be re-applied. But also, the largest deterrent in doing this is keeping myself going from the end of one part to another. I may rewrite some code that does A, but once that is done, I do not know, immediately, what to move on to. So I am going to begin planning out my burst periods. I will then allot a set amount of time (for me I am saying 2 hours between the morning stand up and lunch) to accomplish this full set of tasks. (I then have a laundry list of tasks I can work on in the afternoon, but will do so out of a 'burst' mindset). The goal is to start accomplishing most of my work in a very efficient and effective period of time. Doing this will allow me to continue using this strategy with tasks at home, like planning out my goals on a train ride to/from work. This would mean less dicking around on the train and I would actually make progress on various projects. The second, Parkinson's law, is even more applicable. If someone gives me a single task to do and a whole week to do it, I feel little urgency to get it done sooner, so the task expands to fill the available time to do it. Whereas, if I limit the time to something smaller, it will be more urgent and also much simpler, in order for me to meet that deadline. A simple (and slightly relevant) example is planning a wedding. For most wedding plans, people have a year+ to plan the entire thing. So with a whole year, people end up nitpicking the colors of flowers, the seating arrangement of the reception, and all sorts of other small details. Versus, if you only have 4 months to plan a wedding, you end up quickly forgetting or diminishing the importance of such small details and just focus on the important and larger details. This idea can be applied to help simplify various goals that I have and allow for them to be more easily accomplished. First, it requires that I start drawing up deadlines for projects; I fear that if I don't, they will never have the urgency to get finished. Also, doing this along with the previous rule, I can force myself to maximize the use of that focused time. 
If I keep that 2 hour period as fact and pick out goals that are a generous focus for those 2 hours, I feel like I will end up amplifying that time usage rather than diminishing it (so it wouldn't really be 80% work in 20% of the time). So, starting today, I will be setting short-term deadlines for various subparts of projects. My first (and experimental one) is getting this website open sourced. I have broken down the tasks I want to get done to consider it in a "1.0" state and have it be proper for an open source view. I am setting the goal at a month. I have picked time periods of (at most) a week per task and I will begin breaking those down into smaller, day-sized chunks. This allows me to get a feel for whether I am making good use of that time or not. Also, setting the time so short allows me to eliminate various other optimizations I had thought of, a common form of the perpetually moving target syndrome many projects get. We have this issue at work a lot, and I have begun to speak up about us ending up in it. It basically ends up where we are getting near the end, yet we keep wanting to improve things, eventually pushing back the release. The real need is to set in stone the goal to accomplish and set that as the goal for a decided-upon release, then start making a list of improvements. These improvements then become the list of things for the next version, which gets set in stone at the end of the previous deadline and with a new deadline. Wash, rinse, repeat. The real hope here is to eliminate distractions and improve my use of time, allowing me to feel more accomplished in the time that I do spend working on things, without lessening the time I spend doing other things. If I spend 8 hours traveling every week and most of it is dicking around because I have no sense of urgency or need, I am very likely to just waste the time neither doing something entirely enjoyable nor entirely productive; the time really becomes a wash. I would be better spending the 15-30 minutes of actual time used in one go and then spending the rest of the time reading, writing, or sleeping. Hopefully, I will be able to eliminate my chance of being distracted and increase the amount I get done in the time I do work, giving myself more uninhibited free time, along with a greater sense of productivity.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818464.67/warc/CC-MAIN-20240423033153-20240423063153-00213.warc.gz
CC-MAIN-2024-18
5,797
5
https://rdrr.io/cran/brainGraph/man/brainGraph-methods.html
code
These functions are S3 generics; they are generally convenience functions. groups returns the “Group” graph attribute for each graph or observation in the object. region.names is a generic method for extracting region names from brainGraph objects. nregions is a generic method for extracting the number of regions from brainGraph objects. region.names assumes that it contains a factor column named
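A hedged usage sketch based only on the generics named above (g stands for an existing brainGraph graph object and is not defined here):

library(brainGraph)
groups(g)        # the "Group" attribute for each graph/observation
region.names(g)  # region names stored in the object
nregions(g)      # number of regions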
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506623.27/warc/CC-MAIN-20230924055210-20230924085210-00058.warc.gz
CC-MAIN-2023-40
539
11
https://tvstravis.com/tag/holy-grail/
code
Travis is joined by Preston (aka Biocow) to talk about the comedy classic Monty Python and the Holy Grail. Considered by many to be one of the best comedies ever made, how does it hold up? And what does a first-time viewer think? Thanks go out to Audie Norman (@OddlyNormalOne) for the album art. Outro … I'm going on an adventure! Classic Adventure Game Time - Syberia II Continuing Syberia II tonight at 8PM eastern time. Enjoying the visuals of that game so much. Each background is just wonderful to view, and the music is spectacular. Not too shabby. Overall I'm pleased. Need to add the dark wash still.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703521987.71/warc/CC-MAIN-20210120182259-20210120212259-00572.warc.gz
CC-MAIN-2021-04
607
5
http://gamezoom.net/artikel/Astro_Gaming_A10_Gaming_Headset_Test_Review-39348-3
code
Author: Christoph Miklos Date: 13.07.2017 - 19:38 Conclusion & verdict From 65 euros Astro Gaming A10 Gaming Headset price comparison Astro Gaming: A10 Gaming Headset test video 2 months ago
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689975.36/warc/CC-MAIN-20170924100541-20170924120541-00340.warc.gz
CC-MAIN-2017-39
8,329
14
https://brandiscrafts.com/php-get-key-of-array-element-the-14-latest-answer/
code
How do you get the key from a value in an array in PHP? The key() function simply returns the key of the array element that's currently being pointed to by the internal pointer. It does not move the pointer in any way. If the internal pointer points beyond the end of the elements list or the array is empty, key() returns null. If you have a value and want to find the key, use array_search() like this: $arr = array('first' => 'a', 'second' => 'b'); $key = array_search('a', $arr); $key will now contain the key for value 'a' (that is, 'first'). What is array_keys() used for? The array_keys() function returns an array containing the keys. How do you get the first key of an array? You can use reset and key: reset($array); $first_key = key($array); It's essentially the same as your initial code, but with a little less overhead, and it's more obvious what is happening. Just remember to call reset, or you may get any of the keys in the array. How do you check whether a key exists in an array in PHP? The array_key_exists() function checks an array for a specified key, and returns true if the key exists and false if the key does not exist. Which arrays have named keys in PHP? Associative arrays are arrays that use named keys that you assign to them. How do you find the value of an array? get() is an inbuilt method in Java and is used to return the element at a given index from the specified array. Parameters: this method accepts two mandatory parameters: array: the object array whose index is to be returned. What is the array_flip() function in PHP? array_flip() returns an array in flip order, i.e. keys from array become values and values from array become keys. Note that the values of array need to be valid keys, i.e. they need to be either int or string. How are array_keys and array_values functions useful? array_keys() and array_values() are very closely related functions: the former returns an array of all the keys in an array, and the latter returns an array of all the values in an array. How do I view an array in PHP? To display array structure and values in PHP, we can use two functions: var_dump() or print_r() display the values of an array in human-readable format.
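A short runnable sketch pulling the functions above together (the array contents are just illustrative):

<?php
$arr = array('first' => 'a', 'second' => 'b');

$key  = array_search('a', $arr);      // 'first': key for a known value
$keys = array_keys($arr);             // array('first', 'second'): all keys

reset($arr);                          // rewind the internal pointer
$firstKey = key($arr);                // 'first': key currently pointed to

var_dump(array_key_exists('second', $arr)); // bool(true)
var_dump(array_flip($arr));           // array('a' => 'first', 'b' => 'second')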
How can we get the first element of an array in PHP? Alternatively, you can also use the reset() function to get the first element. The reset() function sets the internal pointer of an array to its first element and returns the value of the first array element, or FALSE if the array is empty. What is a key in PHP? The key() function is an inbuilt function in PHP which is used to return the index of the element of a given array to which the internal pointer is currently pointing. What is the index of the first element in an array? The index value of the first element of the array is 0. Does an array key exist? array_key_exists() is an inbuilt function of PHP that is used to check whether a specific key or index is present inside an array or not. The function returns true if the specified key is found in the array, otherwise it returns false. Which is faster, isset or array_key_exists? The takeaway is that isset() is actually faster than array_key_exists() because isset() is actually a language construct, not a function, so it doesn't incur the function call overhead. But both are quite fast, so you probably shouldn't choose one over the other for performance reasons. What is isset() in PHP? The isset() function checks whether a variable is set, which means that it has to be declared and is not NULL. This function returns true if the variable exists and is not NULL, otherwise it returns false. Which array element value is referred to by using a key? While sort() and asort() sort arrays by element value, you can also sort arrays by key with ksort(). What is an indexed array in PHP? A PHP indexed array is an array which is represented by an index number by default. All elements of the array are represented by an index number which starts from 0. A PHP indexed array can store numbers, strings or any object. A PHP indexed array is also known as a numeric array. What is an associative array in PHP? Associative array: it refers to an array with strings as an index. Rather than storing element values in a strict linear index order, this stores them in combination with key values. Multiple indices are used to access values in a multidimensional array, which contains one or more arrays. How do you access values in an ArrayList? The get() method of the ArrayList class accepts an integer representing the index value and returns the element of the current ArrayList object at the specified index. Therefore, if you pass 0 to this method you can get the first element of the current ArrayList and, if you pass list. How do you access an array of arrays? In order to access items in a nested array, you would add another index number to correspond to the inner array. In the above example, we accessed the array at position 1 of the nestedArray variable, then the item at position 0 in the inner array. Which function filters the value of an array? What is the use of array_unshift()? The array_unshift() function inserts new elements into an array. The new array values will be inserted at the beginning of the array. Tip: you can add one value, or as many as you like. Note: numeric keys will start at 0 and increase by 1. How do I fix an undefined array key in PHP? What is the use of the array_flip() function?
Explanation: array_flip() is used to convert the keys to values and values to keys. How do I randomize an array in PHP? The shuffle() function is a built-in function in PHP and is used to shuffle or randomize the order of the elements in an array. This function assigns new keys to the elements in the array. It will also remove any existing keys, rather than just reordering the keys, and assigns numeric keys starting from zero.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816977.38/warc/CC-MAIN-20240415111434-20240415141434-00308.warc.gz
CC-MAIN-2024-18
8,608
87
https://www.laptopsdirect.co.uk/microsoft-visual-studio-premium-with-msdn-software-assurance-1-user-1year-9ed-00058/version.asp
code
Open Value License for Microsoft Visual Studio Premium MSDN SA 1Y AC Y1 There has been a fundamental shift to device and services experiences altering how the industry approaches software development. Consumers, customers, and employees now demand a new breed of applications. They demand applications that provide the best experience across multiple screens and devices, always-connected services for data they need, security, and continuous evolution. Visual Studio 2013 builds on the advances delivered in Visual Studio 2012 and subsequent Visual Studio Updates to provide the solution needed for development teams to embrace this transformation and to develop and deliver new modern applications that leverage the next wave in Windows platform innovation (Windows 8.1), while supporting devices and services across all Microsoft platforms. Below are just some of the highlights in this release, including: innovative features for greater developer productivity, support for Windows 8.1 app development, web development advances, debugging and optimization improvements for native and managed code, and expanded ALM capabilities.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679511159.96/warc/CC-MAIN-20231211112008-20231211142008-00837.warc.gz
CC-MAIN-2023-50
1,749
12
https://itknowledgeexchange.techtarget.com/itanswers/create-powerpoint-presentation-based-off-microsoft-excel-spreadsheet/
code
I hope this is possible but I doubt it. I'm working on a Microsoft Excel spreadsheet and I would like to create several PowerPoint slides based off of it. It's basically a table and a graph. How can I move them over to PowerPoint? Anyone know of a VBA script?
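Not a definitive answer, but one common approach is to copy the range and the chart as pictures and paste each onto its own slide via late-bound automation; a minimal VBA sketch (the sheet name, range address and chart index below are placeholders):

Sub ExportTableAndChartToPowerPoint()
    Dim ppt As Object, pres As Object, sld As Object
    Set ppt = CreateObject("PowerPoint.Application")
    ppt.Visible = True
    Set pres = ppt.Presentations.Add

    ' Slide 1: the table, copied as a picture
    Set sld = pres.Slides.Add(1, 12)                  ' 12 = ppLayoutBlank
    ThisWorkbook.Sheets("Sheet1").Range("A1:D10").CopyPicture
    sld.Shapes.Paste

    ' Slide 2: the chart, copied as a picture
    Set sld = pres.Slides.Add(2, 12)
    ThisWorkbook.Sheets("Sheet1").ChartObjects(1).Chart.CopyPicture
    sld.Shapes.Paste
End Sub

Pasting as pictures keeps the slides static; if the slides should stay linked to the workbook, pasting the objects with links is a different (and fiddlier) route.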
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541288287.53/warc/CC-MAIN-20191214174719-20191214202719-00384.warc.gz
CC-MAIN-2019-51
259
1
https://salesforce.stackexchange.com/questions/113961/salesforce1-lightning-component-not-refreshing-after-content-changed
code
We have a component which has been exposed to Salesforce1 via a custom tab. After making a change to this component and saving, the updates are not reflected in the Salesforce1 mobile application, i.e. adding the word "updated" in the code below. If we include the component within a Lightning application, then the application does render the latest change when previewing from the Developer Console. The following is our simple test component, in which we have been modifying the text message.
<aura:component implements="force:appHostable" > <!-- required -->
  <ltng:require styles="/resource/SLDS100/assets/styles/salesforce-lightning-design-system-ltng.css" />
  <div class="slds">
    <div >
      <div class="slds-notify_container">
        <div class="slds-notify slds-notify--alert slds-theme--error slds-theme--alert-texture" role="alert">
          <span class="slds-assistive-text">Info</span>
          <h2>Some content updated</h2>
        </div>
      </div>
    </div>
  </div>
</aura:component>
Any ideas as to why these changes are not reflecting in the iOS or Android Salesforce1 apps?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00309.warc.gz
CC-MAIN-2023-50
1,016
5
https://www.allodd-itn.eu/esr4.html
code
ESR4: Özge Ergun Curutchet Research Group Computational Photobiology Lab University of Barcelona Prof. Carles Curutchet Excitation energy transfer applied to drug design The Förster resonance energy transfer (FRET) technique is an important tool in structural biology, due to its ability to monitor and measure distances in biological systems. Although FRET is widely used to measure distances in fluorophore-tagged proteins, intrinsic FRET processes in protein-ligand complexes prevent straightforward application of Förster theory due to the lack of rotational freedom of the Trp and ligands involved and their relatively short separations. In this project, we will investigate the application of a novel multiscale computational methodology to characterize FRET data in protein-ligand complexes based on advanced polarizable quantum/molecular mechanical (QM/MM) calculations. The approach will combine Molecular Dynamics simulations with advanced methods for the calculation of FRET couplings (based on transition densities and charges), which overcome the Förster dipole approximation and allow us to account for modulation of FRET couplings by the heterogeneous polarizable properties of the environment. This strategy will be applied to characterize allosteric binding sites and ligand binding modes. The main objectives of the project are as follows: 1) Develop a computational tool to generate FRET observables from MD trajectories. 2) Select a library of promiscuous fragments with tailored FRET properties. 3) Assess the ability of FRET simulations to characterize allosteric binding sites and ligand binding modes for drug discovery targets. Brief Scientific Bio I obtained my Bachelor's Degree from Bogazici University (Turkey). During these studies, I took a computational chemistry course that was a milestone in my career, and I worked on a computational chemistry project about the selectivity in Diels-Alder reactions and spent one semester at Ghent University (Belgium) on Erasmus. After my graduation, I completed an internship at Vrije Universiteit Brussel (Belgium) focused on computational studies of chalcogen & tetrel bonds of linear molecules, a project that led to an article published in 2021 of which I am a co-author. Afterwards, I completed my Master's Degree at KU Leuven (Belgium), which included an internship at Solvay (Belgium) where I worked on finding dielectric properties of polymers by using Molecular Dynamics. My master thesis addressed artificial enzymes and, besides improving my computational chemistry skills, gave me the chance to dive deeper into the world of biochemistry and inorganic chemistry. In April 2022, I joined Prof. Carles Curutchet's lab at University of Barcelona (Spain), where I'm pursuing a PhD project focused on the structural characterization of allosteric binding sites with a combination of FRET spectroscopic measurements and multiscale simulations.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100164.87/warc/CC-MAIN-20231130031610-20231130061610-00825.warc.gz
CC-MAIN-2023-50
2,924
16
https://www.pixelprodisplays.com/elementor-37008/
code
Tonight we did some sub modeling with the Boscoyo Small Fan Arch. We haven't created an official PPD Certified model yet and we wanted some help to create these new submodels! I also want to give a shout out to John Margeri, as he has wanted to do something like this for a while. The original Live version doesn't show the first question asked in the xLights group, as I forgot to change the camera angle (still getting better at that), but we'll also link the Live FB version, which does show it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510334.9/warc/CC-MAIN-20230927235044-20230928025044-00078.warc.gz
CC-MAIN-2023-40
498
1
https://communities.vmware.com:443/t5/vSphere-Storage-Discussions/List-of-VAAI-capable-storage-arrays/m-p/2536191
code
Contrary to the statement in http://kb.vmware.com/kb/1021976, saying that the VMware SAN HCL contains info pertaining to VAAI capability of storage arrays, that info is nowhere to be found (or I'm just blind). Does anybody have a list of which arrays support VAAI (or a certain subset of the VAAI features)? If not, maybe people could post their experiences with which SANs they found to support VAAI? Script for checking and administrating VAAI status from the CLI: VAAI support is listed as a footnote for arrays that have been certified with VAAI. Such arrays would have met the following requirements: 1) Firmware supports the VMware VAAI standard 2) A VAAI plugin is available (shipping with ESX/ESXi 4.1 or available from the storage vendor) An example of an HCL listing of an array that is supported with VAAI Notice footnote 4 (at the time this reply was written) which states: VAAI primitives "Full Copy", "Block Zeroing", and "Hardware Assisted Locking" are supported with the vmw_vaaip_eql plug-in. Updating this old post A new version of the Web HCL will provide search criteria specific to VAAI. As of this date, the new interface is still in "preview" stage. You can access it by clicking the "2.0 preview" button at the top of the page which is at: The criteria are grouped under "Features Category", "Features" and "Plugin's". Features Category: Choice of "All" or "VAAI-Block" Features: Choice of "All", "Block Zero", "Full Copy", "HW Assisted Locking" and more. Plugin's: Choice of "All" and any of the listed plugins. There are also some unofficial lists, like this: Not sure if I agree with that statement. 3PAR had day 1 support for VAAI on their arrays, and at the time were an independent 550-person company....much smaller than the 310,000 that HP is. DISCLAIMER - I work for HP Storage. Big companies, and I can only speak for HP, don't block anything. Like any company, VMware has limited resources so they start with the vendors that have more VMware marketshare. Yes, HP does have engineering-to-engineering meetings with VMware as things like VAAI get developed, long in advance of the release. They even seek our input since teams like HP LeftHand (and now HP 3PAR) have a lot of expertise about storage in a VMware environment. I don't speak for VMware but no company has the resources to work with every storage vendor and startup to the same degree. But I can guarantee that HP doesn't block anything. That's not even possible. Since 3PAR wasn't part of HP when the original VAAI shipped, I don't have insight into how they were able to do that. But I'm aware of a small start-up called Evostor that was embraced by VMware when they came out of stealth mode. I don't believe they made it, but it goes to the point that VMware decides who they work with and the big vendors don't have the power to block even if they wanted to - except maybe EMC. :smileydevil:
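For reference (my addition, not part of the original thread): on ESXi 5.x and later, the per-device VAAI status can also be checked from the CLI with something along these lines; the device identifier is a placeholder:

esxcli storage core device vaai status get
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

The output lists the ATS (hardware assisted locking), Clone (full copy), Zero (block zeroing) and Delete status reported for each device, which is a quick way to verify what the array actually advertises regardless of what the HCL says.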
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104354651.73/warc/CC-MAIN-20220704050055-20220704080055-00351.warc.gz
CC-MAIN-2022-27
2,873
22
https://www.betaarchive.com/forum/viewtopic.php?f=16&t=21973
code
I haven't seen them at the FTP. Certainly, I cannot guarantee that they are genuine. Therefore, could anyone check them?
- MS Windows NT 5.0 Workstation Build 1743 "PnP"
- MS Windows NT 5.0 Server Build 1745
Here is the content of the text file from the WS folder: Okay, a new build of NT, straight from the NT 5.0 plugfest in Burlingame, CA this past week. This is a very special build done just for this event. It is not from the standard Microsoft build tree (thus if someone at Microsoft pulled 1743 from the internal build server, they would NOT get this same release). This release is specifically called the "1743 PnP build" and it has alot of stuff added and enabled for testing at this event that is not in the standard build tree. What this means is that in the near future, if we do more NT5 releases, that there will be items in this release that will not be in the futures ones.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060877.21/warc/CC-MAIN-20210928153533-20210928183533-00061.warc.gz
CC-MAIN-2021-39
883
5
https://hacklog.mu/post/elementary-os-truly-a-beauty/
code
It’s been a month since I’ve been running elementary Luna on my main laptop, an Acer Aspire 4741. I was tempted to have a look initially when I heard about its sleek UI. I thought I should give it a try & see if elementary could be enlisted among distro suggestions. Usually when people ask me about a simple-to-use Linux distribution, I refer them to Linux Mint. With a simplistic design, Linux Mint Cinnamon helps Windows users migrate smoothly. elementary OS has been built on the robust Ubuntu 12.04. It thus inherits the stability and vast hardware support available from the Ubuntu base. Installation was painless as it’s based on Ubuntu’s Ubiquity. Hardware recognition & installation was flawless. Once installed, my jaw dropped at the UI. You don’t often see such a sleek user interface all while preserving speed. Yeah baby! elementary rocks! It comes with a modified GNOME Shell called Pantheon. It’s beautiful. In the beginning the team developed a couple of applications targeted at Ubuntu. They ended up rolling their own distribution. So far I haven’t had any trouble or bug encounter. I like the simplicity & speed. Oh! I need to say elementary does not come with heavy applications. You might not see Firefox, LibreOffice etc. Instead you’ll find lightweight applications such as Midori Browser, Scratch (instead of Gedit), Geary Mail etc. Installing your favorite apps shouldn’t be a pain though. Just fire up a terminal session & apt-get your stuff, as shown in the example after the list below. You may also install through the Software Center; it’s graphical & should be easier if you want to wander around for a while. Applications developed by the team are:
- Pantheon Greeter: Session manager based on LightDM.
- Wingpanel: Top panel, similar in function to GNOME Shell’s top panel.
- Slingshot: Application launcher located in Wingpanel.
- Plank: Dock based on Docky.
- Switchboard: Settings application (or control panel)
- Midori: Web browser based on WebKit.
- Geary: Email client.
- Calendar (a.k.a. Maya): Desktop calendar.
- Music (a.k.a. Noise): Audio player.
- Scratch: Simple text editor, comparable to gedit or Notepad
- Pantheon Terminal: Terminal emulator.
- Pantheon Files: File manager.
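A concrete example of the apt-get route mentioned above; the package names are the standard Ubuntu 12.04 ones and are my assumption, not something the post specifies.

# Refresh the package index, then pull in the heavier applications
# that elementary leaves out by default.
sudo apt-get update
sudo apt-get install firefox libreoffice gedit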
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257553.63/warc/CC-MAIN-20190524064354-20190524090354-00323.warc.gz
CC-MAIN-2019-22
2,197
18
https://answers.sap.com/questions/13156378/endless-loop-when-using-submit-with-rm06eps0-me49.html
code
I am using the following code to get the ALV data into memory from program RM06EPS0 (tcode ME49) and I found out that I am entering an endless loop when doing so.

FIELD-SYMBOLS <lt_data> TYPE ANY TABLE.
DATA lr_data TYPE REF TO data.

cl_salv_bs_runtime_info=>set( EXPORTING display  = abap_false
                                        metadata = abap_false
                                        data     = abap_true ).
SUBMIT rm06eps0 AND RETURN.
TRY.
    cl_salv_bs_runtime_info=>get_data_ref( IMPORTING r_data = lr_data ).
    ASSIGN lr_data->* TO <lt_data>.
  CATCH cx_salv_bs_sc_runtime_info.
    MESSAGE `Unable to retrieve ALV data` TYPE 'E'.
ENDTRY.
cl_salv_bs_runtime_info=>clear_all( ).

After debugging the standard code I found out that everything should work just fine. The standard program is getting the data and also runs REUSE_ALV_GRID_DISPLAY correctly. BUT right after the ALV grid code there is a condition that creates the problem. Standard code for the ALV in program FM06IF03:

WHILE l_leave_sw IS INITIAL.
* build event table
  PERFORM alv_build_event_table USING p_vorgang lt_events.
* get reference for output structure / table
  PERFORM alv_get_table_ref USING p_vorgang CHANGING l_table_ref.
* assign the table reference to the output table
  ASSIGN l_table_ref->* TO <outtab>.
* fill the output table
  PERFORM alv_fill_output_table USING p_vorgang CHANGING <outtab>.
* build layout
  PERFORM alv_build_layout USING p_vorgang CHANGING ls_variant ls_layout l_grid_settings.
* build fieldcatalog
  PERFORM alv_build_fieldcatalog USING p_vorgang CHANGING lt_fieldcat.
  CHECK sy-subrc IS INITIAL.
  l_repid = sy-repid.
* deactivated interface check, as this is not necessary here!    "n1068548
* call the ALV Grid
  CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
    EXPORTING
      i_interface_check       = ' '                               "n1068548
      i_callback_program      = l_repid
      is_layout               = ls_layout
      i_grid_title            = l_grid_title
      i_grid_settings         = l_grid_settings
      it_fieldcat             = lt_fieldcat
      i_default               = 'X'
      i_save                  = 'A'
      is_variant              = ls_variant
      it_events               = lt_events
    IMPORTING
      e_exit_caused_by_caller = l_exit_caused_by_caller
      es_exit_caused_by_user  = ls_exit_caused_by_user
    TABLES
      t_outtab                = <outtab>
    EXCEPTIONS
      program_error           = 1
      OTHERS                  = 2.
  IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
  ENDIF.
  IF ls_exit_caused_by_user = 'X' OR                              "1094328
     sy-batch = 'X' OR
     sy-binpt = 'X'.
    l_leave_sw = 'X'.
  ENDIF.
ENDWHILE.

As you can see, the whole section is in a WHILE loop. This WHILE loop DOES NOT exit when using the SUBMIT. The reason is that the variable l_leave_sw never becomes true. When you run the report normally everything works fine and the ALV is displayed. I tried to set sy-batch or sy-binpt to true in my code but it was unsuccessful. Any ideas on how to make it work?
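One workaround that is sometimes suggested for this pattern, offered here only as an untested sketch rather than as the thread's answer: drive ME49 through CALL TRANSACTION ... USING a BDC table instead of SUBMIT. Inside a transaction started that way, sy-binpt is 'X' by default, which is exactly one of the flags the standard WHILE loop checks before setting l_leave_sw. The screen number and OK code below are placeholders and would have to be taken from a recording of ME49 (transaction SHDB); the data declarations are the ones from the snippet above.

* Hypothetical sketch: run ME49 via batch input so that sy-binpt = 'X'
* inside the called program and the WHILE loop in FM06IF03 can terminate.
DATA: lt_bdcdata TYPE TABLE OF bdcdata,
      ls_bdcdata TYPE bdcdata.

* First screen of the report - the dynpro number is a placeholder.
ls_bdcdata-program  = 'RM06EPS0'.
ls_bdcdata-dynpro   = '1000'.
ls_bdcdata-dynbegin = 'X'.
APPEND ls_bdcdata TO lt_bdcdata.

* Trigger execution - the OK code is a placeholder as well.
CLEAR ls_bdcdata.
ls_bdcdata-fnam = 'BDC_OKCODE'.
ls_bdcdata-fval = '=ONLI'.
APPEND ls_bdcdata TO lt_bdcdata.

cl_salv_bs_runtime_info=>set( EXPORTING display  = abap_false
                                        metadata = abap_false
                                        data     = abap_true ).

CALL TRANSACTION 'ME49' USING lt_bdcdata MODE 'N' UPDATE 'S'.

TRY.
    cl_salv_bs_runtime_info=>get_data_ref( IMPORTING r_data = lr_data ).
    ASSIGN lr_data->* TO <lt_data>.
  CATCH cx_salv_bs_sc_runtime_info.
    MESSAGE `Unable to retrieve ALV data` TYPE 'E'.
ENDTRY.
cl_salv_bs_runtime_info=>clear_all( ).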
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00732.warc.gz
CC-MAIN-2022-40
2,692
10
https://svn.haxx.se/users/archive-2003-09/0392.shtml
code
From: Eric M. Hopper <hopper_at_omnifarious.org> Date: 2003-09-13 07:53:17 CEST I have a particular server certificate I want to accept. I don't care -- There's an excellent C/C++/Python/Unix/Linux programmer with a wide range of other experience and system admin skills who needs work. Namely, me. http://www.omnifarious.org/~hopper/resume.html -- Eric Hopper <[email protected]> This is an archived mail posted to the Subversion Users mailing list.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00263.warc.gz
CC-MAIN-2023-50
463
5
http://www.sciforums.com/threads/unix-flavours-variants-history.5667/
code
As an active UNIX user I felt a thread on UNIX and its flavours would be great. I would be highly indebted to anyone who contributes to this thread about the present and past of UNIX, its flavours, Linux, etc. The following thread attempts to give an introductory talk on UNIX and its features. Xelios, I hope you read this; it might help you in understanding the basics of UNIX and its various flavours. (If you do know a bit, please share your experience with me about the topic.)
========================================================================
Welcome to the world of UNIX. Once the domain of wizards and gurus, today UNIX has spread beyond the university and laboratory to find a home in global corporations and small internet servers alike. This capability to scale up and down, to accommodate small installations or complex corporate networks with little or no modification, is the characteristic of UNIX most responsible for its popularity. UNIX is built on rich, powerful and yet simple elements. Although many more recent operating systems have borrowed concepts and mechanisms from UNIX, those who are most familiar with legacy mainframe environments, or those whose experience is limited to mostly single-user oriented environments, may find UNIX intimidating at first. At its base, UNIX is both simple and elegant, with a consistent architecture that, in turn, underlies and guides the design of its many application programs and languages.
So what exactly is UNIX?
========================================================================
>A TRADEMARK
>A MULTITASKING, MULTIUSER OPERATING SYSTEM.
>THE NAME GIVEN TO A WHOLE FAMILY OF RELATED OPERATING SYSTEMS AND THEIR MOST COMMON APPLICATION, UTILITY AND COMPILER PROGRAMS.
>A RICH, EXTENSIBLE AND OPEN COMPUTING ENVIRONMENT.
==============================================
UNIX, like other operating systems, is a layer between the hardware and the applications that run on the computer. Essentially three layers bind to form a complete UNIX system:
****************************************************
1.) A kernel (innermost software layer)
2.) A shell (interpreter)
3.) User
Thus from the above you can see that the kernel is the one that directly interacts with the hardware, while the shell is an interface that interprets the commands of a user.
HISTORY OF UNIX
==============================================
In the mid 1960s, AT&T Bell Labs was participating in an effort to build a new operating system called Multics. In 1969, Bell Labs pulled out of the Multics effort, and members Ken Thompson, Dennis Ritchie et al. developed and simulated the work that later evolved into the UNIX file system. As the team continued to experiment, they developed their work to do text processing for the patent department at AT&T. Afterwards C (yes, the famous C) was developed through the joint efforts of Kernighan and Ritchie on and for UNIX, and UNIX was then rewritten in C itself. This is what made it the open system that it is today. As a then-regulated company, AT&T wasn't allowed to market computer systems. Nonetheless, the popularity of UNIX grew with internal use at AT&T and licensing to universities for EDUCATIONAL use. By 1977, commercial licenses for UNIX were being granted. Later versions developed at AT&T included System III and several releases of System V. All the versions of UNIX based on AT&T work require a license from the current owners, UNIX System Laboratories.
BSDs (Berkeley Software Distributions)
==============================================
In 1978 (as far as I remember, but it could be 1979 also), the research group turned the distribution of UNIX over to the UNIX Support Group (USG), which had distributed an internal version called the Programmer's Workbench (that's what it was called). In 1982, USG introduced System III, and later System V, etc. The computer science group at the University of California at Berkeley (UCB) developed what were popularly called the BSDs. The original PDP-11 had 1BSD and 2BSD. Support for DEC VAX computers was introduced in 3BSD. The VAX line then continued with 4.0BSD, 4.1BSD, 4.2BSD, 4.3BSD, etc.
UNIX AND STANDARDS
========================================================================
Because of the multiple versions and cross-pollination between variants, many features have diverged in the different versions of UNIX. Standardization has hence become a need, a powerful one. The Institute of Electrical and Electronics Engineers (IEEE) created a series of standards committees to create standards for "an industry-recognised operating system interface standard based on the UNIX operating system". The POSIX.1 committee standardizes the C library interface used to write programs for UNIX. The POSIX.2 committee standardizes the commands available for the general user, and so forth. The US government has specified a series of standards based on XPG and POSIX. Currently FIPS 151-2 specifies the open systems requirements for federal purchases. For more information on the subject you may like to log on to: www.x.org
========================================================================
SOURCE VERSIONS OF UNIX
========================================================================
Several versions of UNIX and UNIX-like systems have been made that are free or inexpensive and also include source code. These versions are particularly attractive to the modern-day hobbyist, who can now run a UNIX system at home for little investment and with great opportunity to experiment with the OS or make changes to suit his own needs. An early UNIX-like system was developed by the famous A. S. Tanenbaum (the author of the great bestseller Computer Networks); he called it MINIX. Today, the most popular version of UNIX is undoubtedly Linux. Linux was designed by Linus Torvalds to be a free replacement for UNIX, and it aims for POSIX compliance. Linus, during his early days, didn't have enough money to buy a UNIX OS, but he was learning the C language at the time. This enabled him to write and compile the source code for his own kernel program, which he christened Linux and put on his college's network. Later on, with add-ons and improvements from various - in fact millions of - users, it went on to become the choice of millions, from small to mid-sized ISPs and web servers alike. However, if you're an active Windows user, then I'd suggest you start with Dragon Linux; it can work as a folder inside your Windows and you don't need partitioning etc. However, that would involve skipping various important concepts like the swap partition etc.
In the end I would say that this thread aims to introduce UNIX as a whole to Science Forums. I would appreciate replies from anyone; anyone can share experiences with me. I'll come up with a little add-on to this thread to describe exactly how the shell and kernel boot and how the whole system works. Later, for now... bye!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591719.4/warc/CC-MAIN-20180720174340-20180720194340-00601.warc.gz
CC-MAIN-2018-30
6,802
1
https://www.jotform.com/answers/400269-IS-it-possible-to-duplicate-a-segment-of-form-the-no-of-times-condiotionally-as-entered-in-a-text-box-
code
- friendspharm Asked on July 07, 2014 at 10:24 AM
I wish an entry to be made in a text box. Then, I want to have a segment of 4-5 boxes (text boxes, radio buttons etc. mixed) duplicated the number of times entered in the box. I assume it will use conditions, but I'm not able to get it done. Please help.
- Carina Answered on July 07, 2014 at 11:54 AM
From what I understood, you wish that, according to the number inserted in a field, that number of fields is shown. You can see the test form: The best way is by adding form collapses and then adding the form fields you wish to appear. You may set them up as Hidden: using form collapses makes adding a condition simple. Let's say you have a first dropdown field like "how many clients". If the user says 1 you will have 1 name, 1 email, ... If the user says 2, you will have 2 name fields, 2 email fields, ... So if the maximum number of clients is 10 you will need to create 10 name, email, ... fields. Now you need to add a condition such as "if the first field is equal to 1, then show (form collapse #1)". You may clone the form to inspect it closer. Let us know if further support is needed.
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891277.94/warc/CC-MAIN-20180122093724-20180122113724-00773.warc.gz
CC-MAIN-2018-05
1,115
13
https://unix.stackexchange.com/questions/56175/implement-java-javafx-on-arm
code
I am working on ARM Linux. I have found this link that says that JavaFX could work on ARM. I have compiled my own kernel and built a functional root file system with BusyBox, the glibc library and ARM cross-compiler toolchains. Do I need to install a JVM to get the J2SE and JavaFX platforms? I just want to build a small Java-based OS, especially using JavaFX. I have the glibc-2.9 library needed to run the framework, as stated in the requirements for running Embedded J2SE, but there is no tutorial about how to install it or set it up to work. Can anyone help me?
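To make the setup question more concrete, here is roughly what the installation step looks like once an ARM build of the JDK/JRE has been obtained; the archive and directory names are illustrative guesses, not instructions from the thread.

# Unpack the ARM JDK/JRE onto the root filesystem built earlier
# (the archive name depends on the Oracle embedded JDK or OpenJDK release used).
tar xzf jdk-arm-linux.tar.gz -C /opt

# Put the runtime on the PATH.
export JAVA_HOME=/opt/jdk-arm
export PATH="$JAVA_HOME/bin:$PATH"

# Sanity check on the target board.
java -version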
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493818.32/warc/CC-MAIN-20200329045008-20200329075008-00032.warc.gz
CC-MAIN-2020-16
548
6
https://altervision.org/what-important-software-program-do-you-install-on-a-new-computer.html
code
Data Logger, which is also referred to as a Data Recorder, is an electrical machine that performs the task of recording knowledge over a time period. Time Inc.’s expansive portfolio of primary manufacturers and a digital enterprise of scale with development potential, complemented by Meredith’s rising tv broadcasting business will produce sturdy money circulation for the blended company. In the first half of the twentieth century , scientist s started using computers, largely as a result of scientists had numerous math to figure out and wanted to spend more of their time occupied with science questions as a substitute of spending hours adding numbers collectively. It is an web neighborhood of people that current to do stuff for $5. I see folks all the time doing promoting and advertising after they know their companies or merchandise isn’t working successfully. ENSEK helps energy suppliers provide a better buyer experience by modern technology and the facility of knowledge. Petri Vuorimaa is the coordinator of the Digital Media Technology programme at Aalto University, Finland. All through the course of the three-12 months program, school college students shall be uncovered to a lot of programming languages and paradigms. Everytime you carry out a blogger internet page on-line on blogspot, chances are high it’s possible you’ll merely improve your buyers’ engagement by along with curiosity, glamour and fairly just some content material material to your pages. As anticipated, the period of time spent utilizing media was significantly associated to the stream and dependancy related to all digital media utilization. The report requires a human-centred agenda for the future of labor that strengthens the social contract by inserting folks and the work they do at the centre of economic and social policy and business practice. Instruksi yang lebih kompleks bisa digunakan untuk menyimpan gambar, suara, video, dan berbagai macam informasi. Computer Science Experience is a 3-yr program that prepares students to work as entry-stage software program builders in small, medium or massive enterprises. In little more than a day, Disney Plus registered greater than 10 million individuals , the company mentioned Wednesday. Illinois Computer Science Smaller, cheaper and quicker than their predecessors, these computers used keyboards for enter, screens for output, and employed programming languages harking back to FORTRAN (Components Translation), COBOL (Frequent Enterprise Oriented Language) and C -Language. Promoting Knowledgeable is a digital market and online promoting greatest multi vendor wordpress theme 2016 3a WordPress theme with 7 demos.Posted on May 19 2016 by Marisa Tracie in Weblog Enterprise Progress WordPress As we focus on we keep in social media market now we have got now gone from paper flyers to on-line adverts and inside the ultimate phrase 12 months on-line product sales have skyrocketed on account of social media selling out there to. Promoting Expert is a digital market and on-line selling best multi vendor wordpress theme 2016 3a WordPress theme with 7 demos.Posted on Might 19 2016 by Marisa Tracie in Weblog Enterprise Enchancment WordPress Proper now we reside social media selling in social media market we now have gone from paper flyers to on-line ads and all by means of the remaining yr on-line product gross sales have skyrocketed as a result of social media promoting accessible to. 
I like technology, the digitalization of the whole society, but I tend to look at folks instead of the information or the technical details, and to all the time put folks first. With the help of the tech group, the programme helps these companies in reaching their world ambitions, creating jobs and alternatives across the UK, and inspiring the following technology of tech entrepreneurs, founders and companies. For the primary time, most younger people aren’t optimistic concerning the future. The Intel® sixty four and IA-32 Architectures Software program Developer’s Manual, Quantity 1, describes the basic architecture and programming surroundings of Intel 64 and IA-32 processors. For months I’ve been evaluating my current life to my life earlier than social media. At this age of latest technology the place new devices and digital apps are developed or created day by day, tech web sites and blogs come in useful.Internet users would know these new technologies via studying blogs. Computer Science Technology is a 3-12 months program that prepares faculty college students to work as entry-degree software builders in small, medium or huge enterprises. There are many causes accounting errors occur in double entry bookkeeping (Also see Accounting – All you Must Study Double-Entry Bookkeeping). Advertising consultants like Eyal Gutentag understand that a advertising and marketing plan can begin out with just a few options and change over time. Inside design is the art work and science of understanding individuals’s habits to create useful spaces inside a constructing. The curriculum will middle on the technical components of information technology, along with database administration, methods evaluation, and expertise planning. Promoting Educated is a digital market and on-line promoting best multi vendor wordpress theme 2016 3a WordPress theme with 7 demos.Posted on May 19 2016 by Marisa Tracie in Weblog Enterprise Enchancment WordPress In the meanwhile we maintain social media promoting in social media market we now have gone from paper flyers to on-line commercials and all by the closing yr on-line product product sales have skyrocketed on account of social media advertising on the market in the marketplace accessible in the marketplace to. Together with the migration to Google servers , an extreme amount of new selections have been launched, along with label group, a drag-and-drop template modifying interface, learning permissions (to create personal blogs) and new Net feed decisions. The schooling we offer familiarises faculty college students with Finnish teacher education, Finnish learning environments and their research, in addition to continued development work in Finnish schools and institutions of upper education. can be the favored technology associated website but it is only confined to the Apple related merchandise. By carefully discounted pupil rates, Street & Smith’s Sports actions Group, College & College Program offers college college college students the possibility to develop a broader understanding of the sports activities actions trade by studying both Sports activities actions Enterprise Journal and Sports actions Enterprise Day-to-day. We could even use this time to cowl research talents, citing sources, realizing if sources are reliable, extra Web security and digital citizenship. 
We use extremely effective deep learning and artificial neural neighborhood technology to research large info from social media and digital platforms, and we meaningfully asses total market and mannequin sentiment. Daniel Kardefelt-Winther of the Innocenti analysis office of Unicef, the United Nations’ kids’s firm, checked out all the proof he may uncover on how children’s use of digital technology affected their psychological effectively-being, their social relationships and their bodily activity, and situated less cause for alarm than is often advised.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00147.warc.gz
CC-MAIN-2023-40
7,402
16
http://www.metalcab.com/ebook/download-Pilot%27s-handbook-of-flight-operating-instructions-for-models-B-25C-and-B-25D-1942/
code
Metal Cabinet and Fixture Company division of Span-O-Matic, Inc. It is we download Pilot\'s handbook of flight operating instructions for models B 25C and B 25D 1942; you&rsquo go what Transplantation; re emerging for. also detailed can Create. something populations; concepts: This l is advisors. By being to remark this password, you know to their import.Seyedsina YousefianmoghadamAdvanced Top chaotic activities 'm using scheduled and an athymic academic download Pilot\'s handbook of flight operating instructions examines presented blocked closing to a induced critical order. electrons of paleontology Y electron( SSI) in Other needs learn requested already broken in the video. It did sent in some hours and businesses like NEHRP( 1997, 2000, and 2003), ASCE 2005, and Even in ATC 3-06. everywhere those needed Luckily be containing month of case. 1818028, ' download ': ' The territory of security or request gift you have using to include is entirely reached for this survey. 1818042, ' AIRE ': ' A Russian l with this community case right takes. The access F opinion you'll reload per list for your library fault. The need of publications your credit sent for at least 3 ia, or for easily its free confidence if it 's shorter than 3 users. Copyright Oxford University Press, 2018. operation, Law, and Rhetorical Performance in the Anticolonial Atlantic. Ohio State University Press, 2016. 95( configuration), ISBN 978-0-8142-5213-0. How to Find Like a Computer Scientist( Interactive Book) - great ' CS 101 ' book Queen of the Summer Stars used in Python that not files on the M of gp120 using. This is beyond the theoretical Stress Corrosion Cracking: Theory built to go powered, but it is such a basic block that we had to sign it down. due book Get It Done:) - Fun browser with 33 books that you can start with Python quality. A Beginner's Guide to SQL, Python, and Machine Learning - We are been with General Assembly to contact you a inner that guy of how these insane machines F Standard variety. double big view La Renaissance européenne 2002 team that you can answer and be the access not from inside RStudio( the most positive application shown to use R). For those who have better by embedding only Do through the solutions. reached for cloning up to embed n't. Series) - powerful Ashes of the Earth: A Mystery of Post-Apocalyptic America 2011 of calculation Y from Harvard. decreased for being deeper More inspiring ideas. many Woman's Change of Life for those with resolution things. We use this Liberalism in Modern Times: Essays in Honour of José G. Merquior (Central European University Press evolution because it has Special useful ia for each processing. How to Learn Statistics for Data Science, The Self-Starter ebook Coupled processes in subsurface deformation, flow, and transport 2000 - Our l that completes these channels in more j. now as a next page is the spatial dynamics, you'll deliver possible scientists. really, you'd understand charged at what you can enable out supposedly.Select Authorize with Esri n't being the download Pilot\'s. The Authorization Information Prologue species. delete the predicted reviews and play different. The Authorization Information( continuous) Y data. help the donated claimants and Ensure young. The Software Authorization Number importance links. be the F configuration for the different contributions) and reload amazing.
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513661.77/warc/CC-MAIN-20190222054002-20190222080002-00478.warc.gz
CC-MAIN-2019-09
3,400
2
http://forum.modrewrite.com/viewtopic.php?f=7&t=1414&p=4776&sid=959f036daf9d3ed44a253a56c42349a1
code
I tried to post a new topic in "Bennigers corner", but I can't; when I submit my post it returns a 412 Precondition Failed error, or something like that. What happens? P.S.: Sorry for my bad English! I'm a Brazilian programmer.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00155-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
273
4
https://dachte.livejournal.com/494779.html
code
Today was awesome, partly because most of the people here are rather cool, and because the event has a lot of good stuff. The day started with Jimbo's plenary. It was one of the "assembled speeches" that shows that people who have enough interesting things to say don't need to organise them strictly (and often are better for it). Larry Wall, with his State of the Onion speeches, is someone I'd call a master of the style, but it seems that a number of the people whose speeches I admire tend to use this style. I believe this is because loosely-written speeches tend to be interesting at every point, like Hofstadter books. I should work on getting good at writing speeches like that. Returning to the narrative, I next went to Identity, Anonymity, and the Wiki, which covered a lot of ground on what goes on in our mind and how that relates to the community. The most interesting single idea I saw (not completely new to me, but better phrased than I had thought of/seen before) was to distinguish pseudonymity and anonymity, suggesting one builds a community and one does not. This is not, strictly speaking, true -- some sites (like 4chan) have a community and a culture despite being more or less anonymous. However, with many observations of this kind, there are few absolute statements that hold water -- what is really meant is that pseudonyms make building a community much easier, which I agree with. Our community certainly does a lot more than 4chan does, and we have to deal with many more challenges than simply sharing media because of our ambition. Next, we had Trust and Wikipedia. The term "social capital" was used a lot in this presentation, which a lot of people didn't like, but I think that it's a mistake to think one can understand what goes on in what we're building without giving some life to that term. It provided an overview of challenges to the community given that we're big enough to attract some trouble (and troublesome folk). This got me thinking about some friends of mine I know who have enormously powerful destructive urges towards anything idealistic and large, projecting their own lack of idealism outwards as a justification to act the way they do (they assume others will do the same). It's possible to be friends with someone, and think they'd be really destructive in some environments one cares about.. which is disappointing. Next, I had a pretty decent lunch. The company was good, and the food, buffet style as it was, was quite good. Speaking with people involved in running it yesterday, it was explained that sponsors helped make up the difference between what we all paid to show up and what we got. Everyone rushed over to see Lawrence Lessig speak after lunch, and that was phonomenal -- I've read Lessig's blog, off and on, for quite some time, but I was still surprised at exactly how powerful a speaker he is. His thoughts on open culture versus closed culture solidified some thoughts I'd been toying with for some time, and provided more of a historical context for my understanding of how commercialisation and culture relate. I am very keen to grab a copy of Lessig's presentation, and might buy a copy of his book for some lawyer friends I have -- his criticism of modern, lawyer-laden society is a bit surprising given who he is, and I feel it might at least interest them. 
The one area I strongly disagree with Lessig is where we go from his criticism -- I don't think the creative commons is necessarily a good project because I think every new license is an abomination -- what we should do is suggest people use the GPL and GFDL (or similar) while we work towards removing the laws that make it necessary, not finding ways to make it easier for people to make new comprimises with things they still think of as property. I understand where Lessig is coming from though, in that he sees comprimise on this matter possible and operates on the maxim that the CC licenses will result in more net freedom. In this manner I'm a bit less comprimising, for better or for worse. I would've liked to have chatted with Lessig on it, but he's notorious for disappearing not long after his presentations are done. After lunch, I had to choose between the Wikipedia criticism plus the 1.0 project track (which I didn't go to) and the social issues track (which I went to). This part didn't go too well -- neither of the people in this track were particularly good speakers, and both of them essentially read long papers, word for word. I largely played on the internet for this part. I also had a quick interview with a finnish newspaper, and that was kind of interesting. The questions asked stirred up memories of the earliest articles I edited on, logged in and not, in years long past. I looked at the entries for Chatham University and Point Park University, among others, things I started with my original account, and tried to see what parts are the same and what have nothing left of my original template. After that double session was finished, I went to the triple-presentation on Wiki projects, covering Wikihow (which looks cool, and I had not seen before), Wikitravel, and diplopedia (an internal wiki being set up for the department of state for internal organisation purposes). These were all excellent presentations on interesting topics. Wrapping up the official stuff, there was a poster session. I had an interesting conversation on links between literacy researchers and groups working directly in increasing adult literacy, using wikis as a way to bridge research and practice. This is an area where I'm glad to see people working -- I've often wondered about this given the vast cultural difference between universities and areas where the knowledge they theoretically build is (hopefully) put to use. I believe the cultural difference will change as new academes grow into the new, collective/collaborative culture of wikis and some of the old traditions of competition and propriety fade away. I also spoke with some folks about a perl-based wiki software (that I'd like to check out) and finally I had my first in-the-flesh meeting with Jimbo, talking with him and another guy about fundraising, a bit of global politics, and the like. Afterwards, a bunch of us went outside to kick some balls around (tennis balls and volleyballs, neither of which were that great given the ground), eventually to become tossing a ball around, each catcher describing a really bad (but funny) idea before tossing it to someone else. My most amusing contribution was the notion that an edit conflict results in a 24-hour autoblock. A much smaller group of us went out for indian food, finding a place that was pretty decent, and we then returned to campus. 
Tomorrow, there's going to be a lot more interesting stuff, including a slight diversion from the conference schedule as I head with aforementioned cute and interesting girl to a local fair for a bit. For the parts I'll be there for, it'll be hard to choose what to see. There's also a chance that I might see Rocky Horror here with her in the evening, which would be pretty awesome. Wandering around a bit, I think Boston's a nice town. I don't know if I'd want to live here (I could probably be happy here, although my cost of living would be considerably higher), but it's awesome to visit. Boston does a good job of pulling my social side out into the open, at least so far. I'm exhausted again, so that's pretty much it for now.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00291.warc.gz
CC-MAIN-2021-31
7,457
5
https://coreteka.com/case-studies/ychamp-fitness-app/
code
Project Main Goals: YChamp is a platform for virtual sports competitions and an activity-tracking application. The system analyzes the user's physical activity to suggest competitions with participants of a similar activity level. One doesn't have to be a pro to bear the palm! Set daily goals, exceed them, and track the results! Get motivated by comparing current results with previous ones, and reach new heights. Analyze your progress and strive for more. You just need to fill out the profile and select the activity. The application will record the distance covered and show your progress. Team members: [ 7 ] Months: [ 4 ]
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100626.1/warc/CC-MAIN-20231206230347-20231207020347-00407.warc.gz
CC-MAIN-2023-50
632
5
https://unix.stackexchange.com/questions/120414/how-to-extract-the-nth-file-using-7-zip
code
I have an archive which, for reasons beyond my comprehension, contains 900 files all with the same name. That means that if I ask 7zip to extract them all, at the end there is just one file. The solution, of course, is to ask 7zip to extract the files one at a time, and rename each one to something else. But how, pray tell, do you ask 7zip to extract one particular file when they all have the same name?? Is there some way to ask 7zip to extract the Nth file in the archive? That would work... (I want to do this from a script, so I don't really want to use 7zip's interactive mode.)
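One way around the name clash that avoids extracting by index entirely, offered as a suggestion rather than something from the thread: 7-Zip's -aou switch auto-renames any extracted file that would collide with an existing one, so all 900 files survive under distinct names.

# List the archive contents first to confirm the duplicates.
7z l archive.7z

# Extract everything into outdir; colliding names are renamed automatically.
7z e archive.7z -ooutdir -aou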
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00059.warc.gz
CC-MAIN-2021-25
586
4
https://community.sophos.com/utm-firewall/f/mail-protection-smtp-pop3-antispam-and-antivirus/125243/where-is-successful-spf-check-documented/457859
code
Where does the UTM document whether it successfully validated SPF records, and with which IP and/or domain it was validated? I have to investigate a phishing campaign and I have access to the email itself as well as the SMTP log file. In neither of them can I see any SPF check results. SPF is and was enabled. The sending domain must specify SPF for it to be checked. In the SMTP Proxy log, search for SPF and spf to see passes and failures. Post the headers from the email here with your private information obfuscated. Cheers - Bob
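As a concrete illustration of that log search (the path is the usual UTM location and is an assumption on my part, not something stated in the thread):

# Search the live SMTP proxy log for SPF verdicts.
grep -i 'spf' /var/log/smtp.log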
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507045.10/warc/CC-MAIN-20210116195918-20210116225918-00666.warc.gz
CC-MAIN-2021-04
630
8
http://www.openhardwarehub.com/projects/88-MBI5030-starter-board
code
MBI5030 starter board
As the title says, it is a small ‘dev board’ that will get you started with the Macroblock MBI5030. Macroblock is a Taiwanese manufacturer whose product line includes a wide variety of very affordable and quite capable LED driver chips. They’re available at kingelectronics, a US-based distributor for them. The MBI5030 is a 16-channel constant-current LED driver with 16/12-bit PWM. You send your data just once, feed the chip with a constant high-frequency ‘grayscale clock’, and it takes care of the rest. Many AVR chips can be programmed (via a fuse setting) to output their system clock on a certain pin. That option is quite suitable for driving the ‘grayscale clock’. The main idea for this board came when I was trying to write some code for that chip. I simply didn’t want to breadboard all of those LEDs; too many wires, and it wouldn’t easily survive transportation in a bag… With 16 onboard LEDs, all you need to wire is power and the SPI interface and you’ll get visual feedback instantly (an AVR-side example sketch follows at the end of this description). When satisfied, pull the jumper, add high(er)-power LEDs and an external power supply to test the real thing. I have found these chips quite valuable if you frequently deal with LED projects and have to keep the cost down. Chips from other major manufacturers (TI…) are nice too, but for a similar set of features they make you pay a lot, especially in small quantities. It may be worth having a look at the MBI chips, maybe even building a small stash of them. That saves shipping costs in the long run. This open source hardware project contains no files.
Bill of Materials
Qty | Part # | Description | Schematic ID | Source
1 | MBI5030 | PWM LED driver, 16-ch, SPI interface. | IC1 | Source
4 | EXB-34V102JV | Res Thick Film Array 1K Ohm 5% ±200ppm/°C ISOL Molded 4-Pin 0606 (2 x 0603) Convex SMD Punched Carrier T/R | | Source
1 | 08055C104KAT2A | Capacitor, 0805, 0.1uF, 50V | | Source
1 | GRM21BR61C475KA88L | Capacitor, 0805, X5R, 16V, 4.7uF | | Source
1 | TC33X-2-202E | Trimmer, 2K, 3mm | | Source
16 | SML-310MTT86 | SML-310 Series Green 0603 16 mcd Tinted Clear 2.2 V LED Surface Mount | | Source
Get yourself a ‘copy’ or have it made. Paying a visit to tindie would be a good start ;-)
There should be markings... some kind of symbol. Don’t assume anything; measure which way they should be placed. Then make the connection with the symbol. Place one bead of solder. Use flux. The LEDs are tiny, so you do need a small pointed tip to get in between them.
3 - More descriptive project description.
2 - Added a link to tindie.
1 - Initial project release
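To show what wiring up power and SPI buys you in practice, here is a minimal AVR-side sketch for pushing one frame of 16-bit grayscale values to the chip. It is an illustration only: the MBI5030's latch behaviour (how long LE must be held relative to the clock) has to be taken from the Macroblock datasheet, and the pin assignments assume an ATmega48/88/168/328-style part.

#include <avr/io.h>
#include <stdint.h>

#define LE_PIN PB1               /* latch pin - placeholder assignment */

static void spi_init(void)
{
    /* MOSI (PB3), SCK (PB5), SS (PB2) and LE as outputs; hardware SPI master. */
    DDRB |= _BV(PB3) | _BV(PB5) | _BV(PB2) | _BV(LE_PIN);
    SPCR = _BV(SPE) | _BV(MSTR);
}

static void spi_send16(uint16_t value)
{
    SPDR = (uint8_t)(value >> 8);        /* high byte first */
    while (!(SPSR & _BV(SPIF)))
        ;
    SPDR = (uint8_t)(value & 0xFF);      /* then the low byte */
    while (!(SPSR & _BV(SPIF)))
        ;
}

/* Shift out one 16-channel frame. The closing latch pulse is deliberately
 * simplified and must be adapted to the data-latch timing in the datasheet. */
static void mbi5030_write_frame(const uint16_t gray[16])
{
    for (uint8_t ch = 0; ch < 16; ch++)
        spi_send16(gray[ch]);
    PORTB |= _BV(LE_PIN);
    PORTB &= (uint8_t)~_BV(LE_PIN);
}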
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122159.33/warc/CC-MAIN-20170423031202-00238-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
3,168
27
http://lists.mplayerhq.hu/pipermail/ffmpeg-user/2012-March/005428.html
code
[FFmpeg-user] FLV to MP4 conversion ben.halhead at quvex.com Thu Mar 8 11:23:38 CET 2012 We are using HDFVR - http://hdfvr.com/ - to record .flv files from the user's webcam. The .flv files appear fine and play back at a good quality; however, we are then trying to convert them to .mp4 files with ffmpeg and this is where we are hitting problems. At first we used a custom build installed by a Rackspace technician with ffmpeg experience; I believe this was version 0.5.2. This worked intermittently, but on some larger files we would get the error "[libx264 @ xxxxxxxx] error, non monotone timestamps xxxxx >= xxxxx" and the encoding would fail. The ffmpeg command used for this conversion was "ffmpeg -i PATH_TO_CURRENT_FILE.flv -acodec libfaac -ab 96k -ac 2 -vcodec libx264 -vpre hq -vpre ipod320 -threads 0 -crf 22 Unsure of whether this was a problem with the input file or the ffmpeg installation, we then fired up a cloud server and installed ffmpeg ourselves via this link - http://ffmpeg.org/trac/ffmpeg/wiki/CentosCompilationGuide. Now we can encode the file fully to .mp4; however, the audio plays back fine whilst the video stutters as if buffering on a dial-up connection. Can anyone give any advice, either on the first issue with the 'non monotone timestamps' or on the second issue and maintaining a good sync between the video and audio? Any advice/instruction is greatly appreciated. More information about the ffmpeg-user
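One workaround that is often suggested for the 'non monotone timestamps' failure, given here as a hedged example rather than a reply from the list (the option spellings are those of later ffmpeg releases): have ffmpeg regenerate presentation timestamps from the FLV and force a constant output frame rate.

ffmpeg -fflags +genpts -i input.flv \
       -r 25 -c:v libx264 -crf 22 \
       -c:a libfaac -b:a 96k -ac 2 \
       output.mp4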
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525046.5/warc/CC-MAIN-20190717041500-20190717063500-00251.warc.gz
CC-MAIN-2019-30
1,434
25
http://riskcraft.xyz/archives/4392
code
Novel–The Mech Touch–The Mech Touch Chapter 2865 – A Little Mercy unlock special Obviously, he couldn’t straight up acknowledge this, so he were required to attire up his words and phrases so that you can maintain help. “The offense of higher treason is just not yet well-defined in your regulations.” Ves admitted to the herd. “We have established a number of our first and rudimentary guidelines around the rulebook of your Dazzling Republic. Still what very little we certainly have is sufficient enough to maintain proper rights in this case. Dr. Redmont acquired the verdict he deserved, and then for that he or she shall have the only abuse for clansmen found guilty of substantial treason.” i’m the evil lord of an intergalactic state wiki A dismembered brain soared away from the remainder of the body system and quickly fell to the top of the podium similar to a 50 %-deflated soccer ball. The ugly squelch noise made this execution experience even more actual to Ves and everybody. Obviously, he couldn’t outright disclose this, so he had to attire up his words and phrases in order to maintain service. The good news is, every thing proceeded to go depending on prepare thus far. With Dr. Redmont subjected to a formidable silencing field that does not only neutralized his tone of voice, and also scrambled his mouth area, he was completely lacking the chance to interrupt the proceedings! Ves stared straight into your eyes of Dr. Redmont. Staying declared guilty had not been a satisfactory blow in itself. Ves realized that a lot of personal-righteous nutcases had been happy to recognize penalties if they been successful in pushing off their harmful systems. “It’s already happening to point out remorse, traitor.” Ves hissed. What Ves got performed would be to pull them in public and uncovered all of their mistakes! He organised the trial run in a way that made everyone’s view with the think. The judges, who occurred to always be significant specialist aircraft pilots, personally led this procedure, in that way making sure that the think would not be for the right part! Ves went back in one of many seminars of the biomech production elaborate when being then Blessed and the recognize safeguard. He started to see why tyrants and dictators were actually so fond of executions. To be able to decide upon the lifestyle and loss of life of other people was this type of strong dash that could also be a lot more addicting than stimulating elements! He was quite certain that other previous Lifers had acquired a significant idea on which would affect them whenever they harmed the clan. What Ves got finished was to drag them in public and uncovered all of their mistakes! He organised the trial run in ways that converted everyone’s thoughts and opinions from the imagine. The judges, who took place to always be powerful expert aviators, personally guided this procedure, and thus making certain that the suspect would never be over the proper area! Fortunately, Ves was without to enact one of the contingency plans he prepared against these sudden times. The tribunal proceeded with no situations and the speeches guided public judgment within the proper course. Ves smirked in response. “I simply really feel you are worthy of a little bit mercy.” A dismembered brain soared clear of all of those other human body and quickly decreased into the surface of the podium similar to a fifty percent-deflated tennis ball. The unattractive squelch sound created this delivery experience additional real to Ves and everyone. 
“The offense of significant treason is absolutely not nevertheless well-determined in this guidelines.” Ves accepted on the audience. “We now have primarily based a number of our first and rudimentary legal guidelines around the rulebook from the Bright Republic. Still what tiny we now have is plenty enough to maintain justice in this instance. Doctor. Redmont gotten the verdict he deserved, and for that he or she shall get the only penalty for clansmen found guilty of significant treason.” Thankfully, every thing journeyed according to system to date. With Dr. Redmont put through a solid silencing area that does not only neutralized his tone of voice, as well as scrambled his lip area, he was completely lacking the ability to affect the procedures! Having said that, Ves still given Dr. Redmont a way of measuring kindness. Even though he realized nothing about swordsmans.h.i.+p, he obtained already practised this motion before the trial offer. He understood precisely how he required to move his left arm and exactly how considerably compel he found it necessary to utilize. He stepped even closer the responsible prisoner until he was just an arm’s size out. Lucky quietly adhered to behind Ves, curious at what was planning to ensue. Ves smirked responding. “I just now sense you are worthy of a bit mercy.” This became the estimated verdict. However the wedding ceremony around it and also the gravitational pressure of your situation caused it to be tone a great deal more significant than it was actually. As he taken care of a good amount of self-assurance that Jannzi and Tusa can have no sympathy for Redmont, he did not dare to your.s.sume the guilty verdict was already occur stone. Skilled aviators tended to assume differently off their individuals and some of their thoughts can be quite severe! Although he realized almost nothing about swordsmans.h.i.+p, he experienced already practised this action prior to the trial. He understood the best way he required to shift his arm and just how considerably power he had to use. He investigated Nigel Redmont’s vision one further time. The more aged man’s tear-streaked eyeballs finally demonstrated true popularity. He valued the mercy of an swift conclude. Ves swung the sword in the fast, sleek action. “I’m… not dead…” Nigel Redmont spoke when touching his neck. Not much of a solo warning sign marred his epidermis! “I.. didn’t perish. As I am happy at the fact that I’m still still living, why do you additional me, Mr. Larkinson?” He stepped nearer to the remorseful prisoner until he was just an arm’s distance absent. Fortunate enough quietly put into practice behind Ves, inquisitive at that which was going to ensue. “It’s far too late to point out remorse, traitor.” Ves hissed. Ves went straight back to among the list of workshops of your biomech manufacturing elaborate when remaining combined with Fortunate enough with his fantastic respect defense. Nevertheless, Ves still given Dr. Redmont a measure of kindness. As a creator, Ves understood effectively that everybody craved reputation. Martyrs only prevailed when others accepted and backed their decisions. It turned out a good deal more challenging so they can go through using their damaging functions if anyone and their mom assumed people were bad! Novel–The Mech Touch–The Mech Touch
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00146.warc.gz
CC-MAIN-2023-06
6,960
37
https://community.oracle.com/message/11178961?tstart=0
code
I've got an 11.2 database with APEX. I see many Virtual Circuit Wait events and I have probably hit the problem described in Doc ID 1136313.1. Looking at the anonymous session, I see that initially it uses a shared session (with high network usage in ASH Viewer). After 60s it changes to dedicated (in v$session) without a process in v$process. Looking at ASH Viewer, it shows me very high CPU usage, with the description "Not waiting, currently on CPU. Time on CPU = -16 seconds, so far". We use the Embedded PL/SQL Gateway.
>I've many Virtual Circuit Wait events
Please quantify: at what value does the count become "many"?
>Looking ASH viewer it show me very high cpu usage
Please quantify: at what value does CPU usage become high? Exactly what problem needs to be solved? How will you, I, or anyone recognize which post contains the correct solution for you?
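To put numbers behind "many" and "high", two quick queries against the standard dynamic performance views can be run while the symptom occurs; this is a generic illustration, not part of the original exchange.

-- How many sessions are currently served by shared vs. dedicated servers.
SELECT server, COUNT(*) AS sessions
FROM   v$session
GROUP  BY server;

-- State of the shared-server virtual circuits; a pile-up of busy circuits
-- usually accompanies Virtual Circuit Wait events.
SELECT status, COUNT(*) AS circuits
FROM   v$circuit
GROUP  BY status;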
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170253.67/warc/CC-MAIN-20170219104610-00633-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
801
12
https://softwareengineering.stackexchange.com/questions/373459/uml-class-diagram-with-reference-data-type
code
For the following UML Class diagram, I have a reference data type Nurse, and I am using it in Hospital Class, is it needed all the time to describe the reference data types in UML Class diagram? Is this diagram created correctly? Also, is it needed for the methods that don't return anything to mention: void? is it needed all the time to describe the reference data types in UML Class diagram? The vast majority of software isn't described using UML, so no. If you find it is useful to describe something in UML or you believe whoever is reading your diagram will find it useful, include it. Most UML classifiers are assumed to represent reference types, a stereotype is usually used if they are not. Is this diagram created correctly? Syntactically, yes. It shows a number of operations and properties of two types, and a composition association between Hospital (whole end) and Nurse (part end). Whether this matches the semantics of your problem domain is not something anyone can answer. Also, is it needed for the methods that don't return anything to mention: void? Partly this is a convention of the implementation language, and partly of your processes. If the return type is not specified, then your process might be to mark that as a missing detail in the model, and it might be that your team has the convention that it is treated as void. It depends how detailed you want your model to be whether this matters.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00477.warc.gz
CC-MAIN-2021-17
1,423
8
https://proceedings.esri.com/library/userconf/proc17/abstracts/a782.html
code
Vital Streets: A Mobility Framework for the Complex Urban Environment Track: Urban and Regional Planning Authors: Oliver Kiley, Jonathan Oeverman, Lilly Shoup Urban environments are challenged to provide mobility for a growing and increasingly diverse population – especially with constrained public right-of-ways. The Vital Streets project in Grand Rapids, MI provides a framework for decision making, enabling the community to make smart, impactful transportation investments. GIS was a critical tool for creating a multimodal network of street types that reflect both the mobility and destination functions of streets in a complex and growing city.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645089.3/warc/CC-MAIN-20230530032334-20230530062334-00076.warc.gz
CC-MAIN-2023-23
653
4
https://www.oreilly.com/library/view/vba-and-macros/9780789744142/ch06.html
code
5. Looping and Flow Control IN THIS CHAPTER Loops are a fundamental component of any programming language. If you’ve taken any programming classes, even BASIC, you’ve likely encountered a For...Next loop. Fortunately, VBA supports all the usual loops, plus a special loop that is excellent to use with VBA. This chapter covers the basic loop constructs: This chapter also discusses the useful loop construct that is unique to object-oriented languages: the For Each...Next loop. For...Next loops are common loop constructs. Everything between For and Next is run multiple ...
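A minimal illustration of the construct the excerpt describes (my own example, not text from the book): everything between the For line and the Next line runs once per value of the counter.

Sub CountToFive()
    Dim i As Long
    For i = 1 To 5
        Debug.Print "Pass number " & i
    Next i
End Sub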
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00484.warc.gz
CC-MAIN-2021-25
533
8
http://c-1.songs.on-planet.com/A_F_I_-_Bleed_Black.html
code
I am exploring the inside I find it desolate. I do implore these confines now as they penetrate, I'm hovering throughout time. I crumble in these days. I crumble, I cannot find reflection in these days. If you listen (listen, listen) listen close, beat-by-beat You can hear when the heart stops. I saved the pieces when it broke and ground them all to dust. I am destroyed by the inside. I hope to destroy the outside. It will alleviate and elevate me Like water flowing into lungs, im flowing through these days. As morphine cuts through deadened veins Im numbing in these days. I know what died that night. It can never be brought back to life once again, I know, I know.(repeat) I know I died that night, and I'll never be brought back to life. Once again, I know(repeat)
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823674.34/warc/CC-MAIN-20181211172919-20181211194419-00476.warc.gz
CC-MAIN-2018-51
774
20
http://biology.stackexchange.com/questions/tagged/breathing+human-biology
code
Is it possible for a device to measure how much air we breathe in and out over the entire day and at what rate? I think if we have access to this data we can compare it across people. I figure this is a rather strange question, however, I noticed this quite some time ago and wanted to make sure that this is in fact a permanent condition before posting. The situation is as follows: ... It could be the tidal volume because it effects how a person inhales and exhales normally. It could be the residual volume and functional residual volume, because it increases its amount. Because it ... I've written a computer program which beeps, then beeps after 10 sec, then beeps after 11 sec, then beeps after 12 sec, etc. I tried the following "experiment" on myself: do only one breath between ... When we ingest food, the epiglottis covers the trachea and the uvula covers the nasal passage. But what happens when we breathe? Why does the air go into our trachea and not the oesophagus? During inhalation, your alveoli expand, creating a pressure difference between the atmosphereic pressure and our lung sacks and therefore air will flow into the repspiratory airways. I am trying to ...
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442420.22/warc/CC-MAIN-20141017005722-00236-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
1,184
6
https://cks.mef.org/space/rtblog/anime/LinuxAnimeDVD
code
Linux DVD players for anime For my future reference if nothing else, based purely on watching Shingu on DVD: Xine on Fedora 8 has much better handling of DVDs than even a bleeding-edge mplayer, but mplayer has better deinterlacing and keyboard controls for pausing and so on, so I used mplayer. Mplayer did turn the subtitles to mush in a few places, so I went back and re-read them with xine. More modern versions of Xine may have better deinterlacing, which would probably make it almost completely superior. Different players have somewhat different renditions of subtitles. My Xine shows them as solid yellow text; mplayer shows them as semi-transparent white text. Most of the time I prefer mplayer's style, at least when it's not garbling the subtitles. (The Fedora 8 xine also has the irritating habit of leveling the left and right audio channels when it starts. I deliberately have mine slightly off balance because that's what it takes to sound right in my setup.) The invocation I used was gmplayer -nocache -vf yadif dvd://N, where N is the episode/title on the disc; at least for Shingu, yadif was the best option for deinterlacing out of the ones that my old computer could do in real time. Occasionally I needed ' -aid ID' as well to make mplayer use the Japanese audio track (by default I believe mplayer picks the first audio track; on most of the Shingu DVDs this was Japanese, but on one it was English). You want to stop and start gmplayer to change between titles or otherwise change parameters; when I did it from the gmplayer menus, the very bottom bit of the picture got this shifting green cast. I could not get mplayer's DVD menu support to work at all well, so I did not attempt to use it. (I suspect that live action may call for quite different deinterlacing options than anime.) I admit that it was periodically tempting to give in and download a fansub for Shingu, despite having the DVDs. I suspect I would have had somewhat better visual quality (since someone who knew what they were doing would have deinterlaced it well) and a better rendition of the subtitles. (As I mentioned in my reactions, modern softsubs are clearly better than DVD subtitles. This should be unsurprising; among other things, modern subtitles are higher resolution.) PS: the sign that one's anime DVD needs deinterlacing is that things moving sideways get this comb effect at the edges, as half the pixel lines are displaced relative to the other half. It's very visible. PPS: if people have opinions on the best Linux DVD player for anime and the best settings for this, I'd love to hear them. I can't say I've done extensive experiments here. Written on 25 May 2011.
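Since the post's command line has to be retyped for every episode, here is a small wrapper sketch; it is a convenience script of my own devising rather than anything from the post, it assumes gmplayer is installed, and it simply passes through the -nocache, -vf yadif, and optional -aid flags described above.

import subprocess
import sys

def play_episode(title, audio_id=None):
    # Build the same invocation the post describes: no cache, yadif deinterlacing.
    cmd = ["gmplayer", "-nocache", "-vf", "yadif", "dvd://" + str(title)]
    if audio_id is not None:
        cmd += ["-aid", str(audio_id)]   # e.g. force the Japanese audio track
    subprocess.run(cmd, check=False)     # start/stop the player per title, as the post recommends

if __name__ == "__main__":
    # usage: play_dvd.py TITLE [AUDIO_ID]
    episode = sys.argv[1]
    aid = sys.argv[2] if len(sys.argv) > 2 else None
    play_episode(episode, aid)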
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00051.warc.gz
CC-MAIN-2023-14
2,645
24
https://jobs-hanesbrands.icims.com/jobs/8572/senior-software-developer---manhattan%2C-ks/job
code
The Senior Software Developer position works as part of a Scrum team with 2-3 developers to build RESTful APIs using a microservice architecture built on Microsoft’s Service Fabric platform. The product is in its beginning stages, meaning you will influence the long-term sustainability of the product through architectural and operational decisions made by the team. The value of the product is to provide our partners with an interactive interface to our internal systems. API partners can leverage our assets, providing additional revenue channels to sell and submit orders for fulfillment. Minimum Education and Experience Required: To qualify, applicants must be legally authorized to work in the United States and should not require now, or in the future, sponsorship for employment visa status. Only applicants requiring reasonable accommodation for any part of the application and hiring process should contact us directly:
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866870.92/warc/CC-MAIN-20180524205512-20180524225512-00598.warc.gz
CC-MAIN-2018-22
926
5
https://debtags.debian.org/reports/taginfo/protocol::ip
code
This page shows all known information about tag protocol::ip. Internet Protocol (v4), a core protocol of the Internet protocol suite and the very basis of the Internet. Every computer that is connected to the Internet has an IP address (a 4-byte number, typically represented in dotted notation like 184.108.40.206). Internet IP addresses are given out by the Internet Corporation for Assigned Names and Numbers (ICANN). Normally, computers on the Internet are not accessed by their IP address, but by their domain name.
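To make the "4-byte number" point concrete, here is a small sketch showing how a dotted-quad address corresponds to a single 32-bit value; it uses Python's standard ipaddress module, and the address is taken from the documentation range purely for illustration.

import ipaddress

# A dotted-quad IPv4 address is just a readable rendering of 4 bytes.
addr = ipaddress.IPv4Address("192.0.2.1")

print(int(addr))                          # 3221225985 -- the same address as one 32-bit integer
print(ipaddress.IPv4Address(3221225985))  # back to 192.0.2.1
print(addr.packed)                        # the raw 4 bytes: b'\xc0\x00\x02\x01'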
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100304.52/warc/CC-MAIN-20231201183432-20231201213432-00702.warc.gz
CC-MAIN-2023-50
520
8
https://mono.github.io/mail-archives/mono-devel-list/2013-August/040664.html
code
[Mono-dev] New Property on System.Web.Hosting.HostingEnvironment monoforum at my2cents.co.uk Mon Aug 5 20:16:54 UTC 2013 I'm not sure of the best place to raise this; it's not a bug really, but a property that seems to be new to the framework. System.Web.Hosting.HostingEnvironment has the property "InClientBuildManager" and it looks like it was added in 3.5. The reason I think it's an important thing to get added soon is the fact that WebActivatorEx uses it, and it's part of just about every NuGet package for MVC. So if people want to move over to mono, this could be a problem. On the plus side, WebActivatorEx handles it well, so you just get an exception on first run. -------------- next part -------------- An HTML attachment was scrubbed... More information about the Mono-devel-list
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651325.38/warc/CC-MAIN-20230605053432-20230605083432-00155.warc.gz
CC-MAIN-2023-23
786
15