url | tag | text | file_path | dump | file_size_in_byte | line_count
---|---|---|---|---|---|---|
https://chcnav.com/about-us/news-detail/chc-navigation-launches-the-new-version-of-copre-software
|
code
|
CHC Navigation launches the new version of CoPre LiDAR Processing Software, an even more powerful and intuitive software suite for seamless 3D LiDAR data and image processing.
Shanghai, China – April 22, 2022 – CHC Navigation (CHCNAV) today released the new CoPre v2.4 LiDAR processing software, a powerful software ecosystem developed by CHCNAV allowing users to process mobile geospatial mapping data quickly and efficiently. CoPre features accurate trajectory processing through a proprietary algorithm, point cloud and image georeferencing, data adjustment for post-processing accuracy, point cloud colorization, filtering and digital ortho model (DOM) generation.
"3D reality capture professionals are increasingly adopting CHCNAV's LiDAR solutions from all over the world. To provide an enhanced user experience, our latest CoPre v2.4 release offers further features and a streamlined workflow for post-processing raw data from CHCNAV's LiDAR systems," said Andrei Gorb, Product Manager of CHC Navigation's Mapping and Geospatial Division. "Whether you want to process data from the airborne AlphaAir 450 LiDAR + RGB system, perform massive data processing from the vehicle-mounted Alpha3D, or get the results of the corridor mapping project with the AA2400 on a helicopter, CoPre will support all your mapping scenarios."
Figure 1. CoPre software: LiDAR scanner raw data processing.
Powered by the accurate and efficient algorithm developed by CHCNAV, CoPre supports POS processing of vehicle-mounted, UAV, or airborne setups. Multiple data sets can be processed simultaneously to increase workflow efficiency.
Figure 2. CoPre software: trajectory processing.
EXTREME DATA QUALITY
CoPre can correct the layering of multiple overlapping point clouds and improve relative accuracy with an efficient strip adjustment algorithm. Advanced calibration and optimization techniques produce point clouds that are up to 30% thinner than those of comparable products.
Automatic point cloud processing, image georeferencing, point cloud colorization, depth maps and results output are available in a single click.
Figure 3. CoPre software: point cloud processing.
EFFICIENT DATA ANALYSIS
CoPre includes various powerful options for analyzing data after processing steps. It supports the visualization of massive datasets with multiple colorization options. Automatic trajectory slicing and stratification checks can be performed, allowing quick detection of misalignment in the entire dataset.
The CoPre software is available worldwide today through the CHCNAV distribution network.
About CHC Navigation
CHC Navigation (CHCNAV) creates innovative navigation and positioning solutions to make customers' work more efficient. CHCNAV products and solutions cover multiple industries such as geospatial, construction, agriculture and marine. With a presence across the globe, distributors in more than 120 countries and more than 1,500 employees, today CHC Navigation is recognized as one of the fastest-growing companies in geomatics technologies. For more information about CHC Navigation [Huace:300627.SZ], please visit: https://chcnav.com/about-us/overview
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00025.warc.gz
|
CC-MAIN-2023-40
| 3,138 | 15 |
https://channel9.msdn.com/Niners/ChrisLinx
|
code
|
I just wanted to thank you publicly for taking the time to answer my query. Those links were helpful, too.
Also, I've just returned from Office DevCon 2009 here in Australia and have to say I was impressed by the way Access is heading.
The guys from Redmond were great & our confidence is restored! Yeah, the ribbon's a pain from a developer's perspective but there are ways around it and lots of great features in Access 2010 that we could utilize to have a very slick commercial product.
Thanks for the offer of direct contact - will take you up on it.
Couldn't agree more with Brice It - thanks for taking the trouble to be so specific (perhaps a little TOO explicit, but it's understandable!)
We are an ISV stuck in Access 2003 because 2007 was such a pain. The ribbon concept being one problem (but there are others) - as an ISV we want to hide anything other than our GUI.
Clint - are we on the wrong track here? Should we be moving away from Access for our application (mainly small-business CRM, mostly up to about 10 users)? We need a front-end DB as well as a back-end & Access has served us well. VB.net would not give us what we want & we're keen to use 2010 if suitable.
Here's my question - are MS really standing behind Access for development of commercial apps or is the focus now just for internal SME use? We're about to spend a squillion dollars on the next release. We really need to know if we're on the right track.
Please be honest - should we quit now and move platforms?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864958.91/warc/CC-MAIN-20180623113131-20180623133131-00486.warc.gz
|
CC-MAIN-2018-26
| 1,496 | 10 |
https://libguides.law.ucdavis.edu/easysecond
|
code
|
What are secondary sources?
What are some examples?
I'm just starting my research. What is the easiest secondary source to start with?
Jury instructions are a little complex in California because of their history. There are two publishers of jury instructions in California: the Judicial Council of California and West.
The instructions from the Judicial Council are somewhat "official". The ones published by West are the instructions that were once published by the council but have since been superseded. For your research purposes, using either one is fine. Remember that jury instructions are designed to accurately reflect the law but are not the law itself.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00086.warc.gz
|
CC-MAIN-2022-21
| 619 | 5 |
https://www.forvis.com/forsights/2022/12/using-solver-as-a-data-source-for-power-bi
|
code
|
The Solver Data Warehouse can be used as a data source for Power BI. By using the Solver Data Warehouse (DW) as a data source, you can gain these advantages:
- Less reconciliation required between report types. Using a single data source helps keep data consistent and can reduce the reconciliation of reports between the systems.
- A consistent data structure. A consistent data structure gives users familiarity with the data, increasing the efficiency of writing reports. The data will look the same in both systems, so users won't have to learn two different data structures.
- Reduction in data modeling efforts. By using the Solver DW as a data source, the data structure will be consistent and users won't have to reinvent the wheel, so to speak, in Power BI.
Requirements for Using Solver as a Data Source
To utilize Solver as a data source, you need to have the Power BI connector from Solver. Once you have the connector license, you can set up Solver to be used as a data source. To do this:
- Go to the Data Warehouse
- Select Configuration
- Select API
- Toggle the Enabled switch
- Select the modules to be available in Power BI – When you select a module, all dimensions used by the module become available, too
Getting Data in Power BI
Now that you’ve enabled Solver for use with Power BI, it’s time to switch to Power BI to get data.
Select Get data from the startup menu.
Enter Solver in the search box.
Select Solver on the right.
Click the Connect button.
You’ll get a message requesting an API URL from the Solver Portal.
Go back to the API page in Solver.
Select the gear icon at the top right.
Copy the URL that appears.
Paste the URL into the field in Power BI.
If you have never used Solver with Power BI on your computer, you will get an authentication message.
Go back to Solver and select the gear icon a second time.
Copy the Access Token.
Paste the information into the field in Power BI.
Then, a list of tables will become available.
You may select as many tables as you want.
Recommended best practice: Only select one module table per Power BI report.
For example, for General Ledger, you might select these tables:
- Module General Ledger
- Dimension Account
- Dimension Category
- Dimension Department
- Dimension Entity
- Dimension Period
- Dimension Scenario
After you’ve selected your tables, click the Load button.
Note: Solver uses an import method of connecting. Therefore, the speed of loading data can vary based on how much data you have and how fast your connection is. In other words, don’t be in a hurry.
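Under the hood, the connector authenticates against the API URL with the access token you copied from the Solver Portal. If you want to sanity-check connectivity outside of Power BI, a minimal PowerShell sketch of that same pattern follows; the URL and endpoint path are hypothetical placeholders for illustration, not documented Solver API details.
# Hypothetical connectivity check against the Solver DW API.
# $apiUrl and $accessToken are the values copied from the Solver Portal;
# the "/modules" path is an assumed, illustrative endpoint.
$apiUrl = "https://example.solverportal.com/api"   # placeholder
$accessToken = "<paste-access-token-here>"
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod -Uri "$apiUrl/modules" -Headers $headers -Method Get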
Here is how the tables will appear:
If you select the Data Model icon on the left, you’ll see this data structure. Notice the dimensions are joined already to the module.
Below is how the data will look in Report Designer.
The items with arrows are the dimensions. We didn’t select all dimensions available so some of the dimensions in Report Designer are not showing as tables in Power BI. However, you could certainly add more dimensions. Use the Get Data button at the top to get more tables.
Now that you have data, the next step is to create a report. The Business Technology Solutions Team at FORVIS can assist. Our professionals have certified experience with Solver corporate performance management and Microsoft Power Platform software. Use the Contact Us form below to get in touch.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679511159.96/warc/CC-MAIN-20231211112008-20231211142008-00263.warc.gz
|
CC-MAIN-2023-50
| 3,370 | 44 |
https://techzone.vmware.com/resource/providing-disaster-recovery-vmware-horizon
|
code
|
Providing Disaster Recovery for VMware Horizon
Disaster Recovery in Horizon Design
VMware Horizon® is a family of desktop and application virtualization solutions that enable organizations to deliver virtualized desktop services and applications to end users. Because organizations depend on Horizon to deliver key services to users, the design of the Horizon services must provide disaster recovery (DR) capabilities to ensure availability, recoverability, and business continuity.
Disaster recovery for Horizon workloads should be viewed from the perspective of the users. Where a user is being delivered Horizon-based desktops or published applications from a particular location, contingencies should be made to provide equivalent services from a separate location.
Simplify Disaster Recovery with Horizon
Disaster recovery plans need to include the ability for users to access desktops and applications. With physical desktops, this was often planned on the assumption that the users could work from another office location, either owned or leased, that had been primed for them. That secondary location would provide replacement imaged physical devices. Times and challenges have changed, and the assumption that users can access an office location no longer holds true. The focus is now on how to provide both continuity for the user, as well as business continuity for the IT systems. When planning for user access, a remote-first approach must now be taken.
A remote-first approach that relies on physical desktops is challenging and introduces many operational and security concerns. Providing disaster recovery for physical desktops is difficult, especially when combined with the need for the flexibility of a remote-first experience.
By contrast, Horizon allows flexibility for providing disaster recovery to desktops and the applications.
- Horizon is designed to allow users a remote and roaming experience, where they can access their desktop and applications from any device in any permitted location.
- Horizon abstracts the desktop from the physical device. This allows the components that make up a user’s desktop and data to be replicated to alternative data centers or locations to provide site resiliency and disaster recovery.
- Horizon offers multiple deployment options, from on-premises data centers to running in a variety of cloud platforms. This offers flexibility with regard to the consumption model and choice about the location of recovery services.
Purpose of This Guide
Whether you have already deployed Horizon or you are looking at deploying Horizon, it is important to understand, plan, and design for a Horizon environment that is resilient at all levels, including the ability to deliver disaster recovery to protect against site or location outages.
This guide covers the considerations and discusses how to approach providing disaster recovery for Horizon-based workloads, including:
- Strategy and approach
- Available deployment options for recovery locations or sites
- Ways in which the resources from multiple Horizon environments can be presented to the user
- Considerations for making user applications and data available in the recovery location
- Considerations when enacting disaster recovery and failing over to the recovery site
Figure 1: Disaster Recovery for VMware Horizon
Although outside the scope of this document, consideration should also be given to business continuity of enterprise applications, databases, and other systems that users will require access to when using their Horizon resources during a DR event.
To understand the terms used throughout this guide see the Glossary of Terms at the end of this document.
Hybrid, Multi-Cloud Consumption Strategy
Many choices and decisions must be made about how to approach disaster recovery, what kind of service to offer to users, and what data to replicate and reproduce. Horizon offers many deployment options for a hybrid consumption of resources to provide both expansion and disaster recovery capabilities.
Figure 2: Multi-Cloud Consumption Strategy
The kind of service delivered from a DR site should be driven by the requirements of the users and the business, balanced with any limitations or cost restrictions. At a high level, DR for Horizon consists of offering users an equivalent service (desktops and published applications) from an alternative site or location. For example:
- Users are normally serviced from site 1 at location A.
- In a disaster recovery event, users will be delivered an equivalent service from site 2 in location B.
Figure 3: Equivalent Horizon Resources Available from a Second Site and Location
When designing a DR solution, choosing a geographical location involves many considerations and has many implications, including:
- Safe distance, to ensure that the DR site is far enough away from the production location to be unaffected by any geographic disasters such as storms or flooding
- Connectivity, so that the DR site can adequately service your users in the event of an outage
- Security and monitoring
- Cost of ownership
Each individual Horizon environment should be built to be highly available from a user’s point of view. High availability is achieved by implementing redundancies and leveraging platform-based functions to service end users in accordance with their expectations. This ensures that the service can still be delivered from that location, even in the event of single-component failures. But even though a single-pod deployment of Horizon might be highly available, it cannot, by itself, provide a DR solution.
If you implement a second Horizon pod in the same site, you can configure the pods to work together to provide users with an always-on experience of service. You can leverage global entitlements with Cloud Pod Architecture or multi-cloud assignments with VMware Horizon® Universal Broker™, as is discussed later in this document, in the section How Horizon Users Leverage Multi-Cloud Resources.
Although deploying multiple pods in a single geographic locality might provide a more highly available service, it typically does not protect against local disasters. For example, if the recovery site is located within the same building, site, or power grid, or is serviced by the same local Internet facilities, it is likely not suitable as a recovery site. A good disaster recovery site should not share any single point of failure with the primary, or production, site.
Just as you can deploy multiple pods in a single site, you can also configure pods in multiple sites to work together to provide users with a cohesive experience. Horizon can be implemented in various types of multiple-site deployments and presented to end users as a single, consistent way of consuming resources.
The primary benefit of a multiple-site deployment is that you have alternate sites that can service users in the event of a disaster at a given site. In multiple-site deployments, the production site or sites contain the Horizon deployment that users access as part of their normal activities, whereas alternate sites contain information and applications that are built from the primary repository information.
For the examples that follow, we assume that multiple-site deployments are geographically distanced from each other to reduce the risk of a disaster affecting both locations.
Types of Sites or Deployments
Another decision that you need to make is the purpose and state of readiness of each individual deployment of Horizon. Will the DR location and the Horizon deployment in it be fully functional and running at full scale, or will it be running a minimal infrastructure that can be scaled up during a recovery event?
Factors to consider include the cost of the recovery site, both during normal operations and during a DR event. The type of recovery deployment chosen has an impact on the recovery time objective (RTO)—how long it takes to bring the system back online—and, potentially, the recovery point objective (RPO)—how much data the business can afford to lose, such as 10 minutes, one hour, or one day of data.
For example, in the event of using a cold recovery site, what number of users would be impacted during the time it takes to bring the cold site online? What would be the revenue loss or other impact to the business as a result of this downtime?
Table 1: Recovery Site Types and Considerations
Other factors to consider include complexity of failover processes, capacity planning, and the tasks that will need to be carried out to enable the recovery site in the event of an outage.
A cold site is equipped with appropriate infrastructure components and with adequate space that allows for the installation or buildout of a set of systems or services by the key staff required to resume business operations. This type of site might consist of core components, such as Horizon Connection Servers, that are normally powered off. During an outage, the servers would need to be powered on, additional servers built to scale out for capacity, and pools of virtual desktops and farms of RDSH servers created to provide services for users.
The term pilot light refers to a small flame that is always lit in devices such as gas-powered heaters and can be used to start the devices quickly when required. Relative to disaster recovery, a pilot-light environment contains all the core components of a distinct system or service, and it is adequately maintained and regularly updated. The implementation is always on and is built to functional equivalency of the production site.
In a pilot-light site, user-based desktop and applications resources are kept at minimal levels for standard operation. When a disaster event is declared, the relevant IT teams are directed to add additional capacity. This implementation allows you to restore a system at an effective scale quickly and efficiently. Scaling up the DR environment might include:
- Acquiring more hardware hosts or capacity
- Adding more components, such as Horizon Connection Servers, to increase the scale and throughput capability
- Increasing the size of desktop pools or RDSH server farms
When utilizing a Software-Defined Data Center (SDDC)-based cloud platform during normal operations, you can choose to keep a small host footprint in the recovery location where you will deploy your Horizon instance. An important consideration for DR then becomes host availability in the cloud platform and the ability to increase the resources available to your recovery environment.
You can opt to have a DR environment that is fully built and scaled out so that it is ready to service users in a recovery event with minimal effort or intervention. In many cases, it makes sense to utilize both the original production environment and this new recovery environment during normal operations, by servicing different groups of users from both locations.
In this type of implementation, each site is accessed by users regularly and is always live. Users are typically spread across the sites based on location (the nearest or a specific site is designated as the default for each user), and users may use other sites in the event of an outage or disaster. When both sites are actively used during normal operations, users should be assigned a default site. This makes operational challenges such as replication of data more manageable.
- Users in group A are normally serviced out of site 1 at location A and can be failed over to site 2 at location B.
- Users in group B are normally serviced out of site 2 at location B and can be failed over to site 1 at location A.
Figure 4: Horizon in Two Sites with Active Users in Both Sites
Although spare hosts and capacity are usually available to expand your recovery site, depending on your RTO and growth requirements, you might not be able to reach your target host count right away. The only way to guarantee the number of hosts you need right away is to reserve them ahead of time, but the trade-off is increased cost.
Note: Different platforms have varying expansion capabilities. New hosts in the same cluster might be created serially or in parallel, while hosts in different clusters are usually created in parallel. Investigate and understand how the selected platform provisions new hosts and whether faster host availability might be achieved by using more clusters.
Carry out tests to understand the time required to acquire capacity, add more components, and provision additional desktop clones or RDSH server clones.
When using VMware Horizon® Cloud Service™ on Microsoft Azure, and the native-Azure infrastructure platform, you need only request additional VM capacity from Microsoft (see Standard quota: Increase limits by region) and once granted, you can expand your deployment. With a native Azure implementation, Microsoft is responsible for maintaining the proper amount of hardware capacity to support the demands of the workloads in any given region.
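If you want to check how much regional capacity headroom you already have before a DR event, the Azure CLI can list current usage against quota limits (this assumes the az CLI is installed and logged in; substitute your own region):
# List current vCPU usage and quota limits for a region.
az vm list-usage --location eastus --output table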
Work with your VMware sales representative to ensure that you will have adequate DR capacity when you need it.
Segmenting Use Cases
For most organizations, it is overly burdensome to provide like-for-like functionality in the event of a disaster. Maintaining duplicate copies of all possible systems might be an impossible task. For practical purposes and to reduce cost, many organizations decide that they will provide limited or reduced functionality for users in the event of a disaster.
Segment end-user populations into tiers in terms of RTO. Some user segments might require a recovery desktop right away or within a very short period. For these user segments, you might have recovery desktops created and on standby for them. Other user segments might be able to tolerate a longer RTO and might require a recovery desktop only after a longer period.
Also consider what basic functionality must be recovered and by when for each of the user segments. Do you need to recover all the functionality or just offer the core apps and data that are considered essential? With some user segments, you might restore additional functionality over a longer period.
By segmenting and prioritizing users, and deciding when and what functionality is needed, you can plan for recovery of essential services and time the phased recovery of others, if required. This approach allows for flexibility in capacity planning and gauging the time it takes to acquire new hosts and provision additional Horizon desktops.
The types and purposes of Horizon clones can affect how you decide to reproduce the service in the recovery location.
- Nonpersistent desktops and published applications – Data and user configuration is extractable from the desktop.
- Full clones – Might contain user configuration and data, which cannot be extracted.
Unlike traditional DR solutions for server applications, where replication of all data from the production site to the recovery site is needed, we recommend a different approach for Horizon when using nonpersistent desktops or published applications. Nonpersistent desktops and published applications use stateless virtual machines that can be created and recreated very quickly. It does not make sense to replicate these VMs across sites.
Separate Horizon pods can be deployed on a variety of platforms, and users can be entitled to both the production and DR versions of their Horizon resources. Using Universal Broker, Workspace ONE Access, or Cloud Pod Architecture can assist in making this easier for users and give them a better experience.
You will need to keep persistent data such as user profiles, user data, and golden VM images synced between the two sites by using a replication mechanism, such as DFS-R in a hub-spoke topology or another third-party file share technology. If you also use VMware App Volumes™ and VMware Dynamic Environment Manager™, App Volumes packages and file share data will also need to be replicated from the production site to the recovery site, as discussed later in this document, in the section Data Replication.
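As a simple illustration of one-way replication for such file share data, a scheduled mirror with robocopy can keep the passive DR copy current. Server and share names below are placeholders, and the DR copy should stay passive until a DR event:
# One-way mirror of user and profile data to the DR file server.
# /MIR mirrors the tree (and deletes extras on the target); /COPY:DATSO preserves security.
robocopy \\fs-prod\UserData \\fs-dr\UserData /MIR /COPY:DATSO /R:2 /W:5 /LOG:C:\Logs\dr-sync.log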
The disaster recovery workflow recommended in the previous section works well for nonpersistent clones. There are some additional considerations for protection of persistent full-clone desktops.
First, consider: Do your users require mirror-image desktops after a production site failure? If the answer is yes, you will need to replicate your production full-clone desktops periodically to the recovery site. This is the most expensive type of protection. For every production full-clone desktop, you will need an equivalent recovery full-clone desktop in a recovery location, always running. You will also need to script the import of recovery full-clone desktops into the Horizon Connection Servers on the recovery site as a manual full-clone pool.
Most customers find that, given the cost of providing a fully mirrored desktop, it is acceptable to give their persistent full-clone desktop users a recovery desktop that offers a similar service by using one of the following strategies:
- Create a new full-clone desktop in the recovery site using the same golden image as was used in production.
- Create instant-clone desktops that can offer a good-enough service in the event of a disaster.
In these scenarios, because you are not replicating the full-clone desktops, any user customization or data not saved in a file share and replicated to the recovery site will be lost, so you will need to ensure that all important user data resides on a file share.
VMware Horizon can be deployed on any VMware vSphere® or VMware Cloud™ certified partner platform. Horizon can also be consumed from Horizon Cloud on Microsoft Azure. To provide business continuity and an alternate location to run Horizon workloads, you can either use the same deployment option that you already employ or use one of the other deployment options.
Figure 5: Multi-Cloud Deployment Options for Horizon
VMware offers the following Horizon deployment options:
- Horizon deployed on-premises in a private data center
- Horizon Cloud Service on Microsoft Azure
- Horizon on VMware Cloud™ on AWS
- Horizon on Azure VMware® Solution
- Horizon on Google Cloud VMware® Engine
- Horizon on Oracle Cloud VMware® Solution
- Horizon on some other infrastructure provided by one of the VMware partners listed in VMware Cloud Providers
If you use the same deployment option for both your production site and your alternative site, place the additional site in a different location or different cloud region to ensure separation. Each deployment option should be designed, deployed, and managed separately.
Figure 6: Horizon Infrastructure in Two Sites for Disaster Recovery
If you choose a vSphere-based Horizon deployment, once you have completed the deployment, you have the option of connecting it to the Horizon Control Plane using the Horizon Cloud Connector. Doing so gives you the ability to manage your hybrid environment from the Horizon Universal Console user interface. If you choose Horizon Cloud on Microsoft Azure, the deployment is connected to the Horizon Control Plane, and you do not need the Horizon Cloud Connector. For more information, see Getting Started with VMware Horizon Service.
To ease user experience and consumption during an outage event, you can deploy Universal Broker, Workspace ONE Access, or both with Horizon.
This document applies broadly to providing disaster recovery with one or more deployment options for Horizon, where sites are placed in two different locations.
Horizon Deployed in a Private Data Center
Many companies start with a private data-center deployment of VMware Horizon, which runs on a vSphere Software-Defined Data Center (SDDC) infrastructure. When designing a DR solution, one deployment option for the recovery site is to create another on-premises Horizon environment in a different location.
For more information about this multi-site approach, see the Horizon Architecture chapter in the VMware Workspace ONE and Horizon Reference Architecture.
Horizon Cloud on Microsoft Azure
Horizon Cloud on Microsoft Azure is a Horizon platform running on Microsoft Azure that leverages a different set of building blocks than Horizon on vSphere to achieve the same goal of delivering virtual desktops and applications. Deploying a Horizon Cloud pod on a Microsoft Azure infrastructure is straightforward. If your organization does not already have access to Azure resources, Microsoft provides details on how to acquire Azure capacity on the Azure portal.
Figure 7: Horizon Cloud Service on Microsoft Azure Logical Architecture
For more information, see the Horizon Cloud on Microsoft Azure Architecture chapter of the VMware Workspace ONE and Horizon Reference Architecture. It is important to understand the components that are deployed, how Horizon Cloud on Microsoft Azure scales, and how it is designed for multiple sites; the relevant sections of that chapter cover each of these topics.
Horizon on VMware Cloud on AWS
Horizon on VMware Cloud on AWS delivers a seamlessly integrated hybrid cloud for virtual desktops and applications. It combines the enterprise capabilities of the VMware SDDC (delivered as a service on AWS) with the capabilities of VMware Horizon.
With this solution, you can provision an entire SDDC, including the Horizon management components, in a matter of hours.
- See Rapidly Build and Scale Horizon 7 Desktops and Applications with VMware Cloud on AWS.
- Watch this brief VMware Cloud on AWS – Feature Walk-through video to see how easy it is to deploy Horizon on AWS.
Figure 8: Horizon on VMware Cloud on AWS
For more information, see the Deploying Horizon on VMware Cloud on AWS guide.
Horizon on Azure VMware Solution
Azure VMware Solution (AVS) is a cloud platform built on the VMware Cloud Foundation™, a comprehensive offering of software-defined compute (vSphere), storage (VMware vSAN™), networking (VMware NSX-T Data Center™), and management (vSphere and VMware HCX®) services. With this option, you deploy Horizon in an Azure VMware Solution private cloud, which also lets you take advantage of Azure’s high availability, disaster recovery, and backup services.
This combination of customer-managed VMware Horizon running on Microsoft-managed vSphere infrastructure gives you control over your desktop virtualization infrastructure (VDI) while removing the need to manage the underlying SDDC and hardware components. You also get access to Azure native management, security, and services as well as the global Microsoft Azure infrastructure.
Figure 9: Horizon on Azure VMware Solution
Horizon on Azure VMware Solution differs from Horizon Cloud on Microsoft Azure in the following way: Horizon Cloud on Microsoft Azure is a VMware-managed Horizon solution that provides desktops and published apps as a service (DaaS) using a Microsoft Azure public cloud infrastructure. Native Azure instances, rather than a vSphere infrastructure, are used.
With Horizon on Azure VMware Solution, because AVS uses the same VMware Horizon and vSphere components that you might have on-premises, you can build a scalable, elastic, hybrid platform without a complicated migration. For more information, see the Horizon on Azure VMware Solution Architecture chapter in the VMware Workspace ONE and Horizon Reference Architecture.
Horizon on Google Cloud VMware Engine
Google Cloud VMware Engine offers a private cloud environment that can be used for Horizon deployments to address use cases such as data-center extension, disaster recovery, and burst capability. Companies that already have an on-premises Horizon environment can use their existing VMware tools, skills, and processes with Horizon on Google Cloud VMware Engine.
With Google Cloud VMware Engine, a VMware Cloud Foundation stack runs on the Google Cloud Platform, meaning that a VMware SDDC is provided as a service on Google Cloud. The VMware software stack includes vSphere, VMware vCenter Server®, vSAN storage virtualization software, the NSX-T Data Center networking platform, and VMware HCX, an application mobility platform for cloud migration.
Figure 10: Horizon on Google Cloud VMware Engine
The VMware SDDC runs natively on a Google Cloud bare metal infrastructure in Google Cloud locations and fully integrates with the rest of Google Cloud. Google takes care of managing the SDDC, providing full end-to-end support, including licensing, software upgrades, and patching.
For more information, see the Horizon on Google Cloud VMware Engine Architecture chapter in the VMware Workspace ONE and Horizon Reference Architecture.
Horizon on Oracle Cloud VMware Solution
Oracle Cloud VMware Solution provides a private cloud environment that can be used for Horizon deployments to address use cases such as data-center extension, disaster recovery, and burst capability. Companies that already have an on-premises Horizon environment can use their existing VMware tools, skills, and processes with Horizon on Oracle Cloud VMware Solution.
With Oracle Cloud VMware Solution, a VMware Cloud Foundation stack runs on the Oracle Cloud Infrastructure, meaning that a VMware SDDC is provided as a service on Oracle Cloud. The VMware software stack includes vSphere, vCenter Server, vSAN storage virtualization software, the NSX-T Data Center networking platform, and VMware HCX, an application mobility platform for cloud migration.
Figure 11: Horizon on Oracle Cloud VMware Solution
How Horizon Users Leverage Multi-Cloud Resources
In the event of a disaster, users need access to the Horizon resources they use daily. These Horizon resources can be virtual desktops, published applications, or both. This section reviews the ways to present resources to users that will allow them to fail over to a recovery site in the event of a disaster.
There are several options to choose from, and in some cases a combination of them may be used. Each option has restrictions and considerations: some present a single view that spans both production and recovery resources, while others present the two sets of resources separately.
Figure 12: Multi-Cloud Consumption Options
Using Multiple Horizon Client Shortcuts
The VMware Horizon® Client software lets users access published applications and VDI desktops. One option for providing users access to two distinct Horizon environments (production and recovery) is to create two shortcuts on the Horizon Client desktop and application selector. One shortcut accesses the normal production Horizon environment. The second shortcut is to be used only in a DR event, to access the recovery Horizon environment. In a DR event you would inform your users to connect to the Horizon environment in the recovery site.
Additionally, you can use uniform resource identifiers (URIs) to create web page or email links that when clicked, open the Horizon Client, connect to a specific Horizon environment, or open a virtual desktop or published application.
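For example, a DR runbook or email could include a link built on the vmware-view URI scheme. The server and desktop names below are hypothetical placeholders, and invoking the URI from PowerShell is just one way to open it:
# Open the Horizon Client and start a session against the recovery environment.
Start-Process "vmware-view://horizon-dr.example.com/Win10-DR?action=start-session"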
Using Multi-Cloud Assignments with Universal Broker
The Horizon Universal Broker is the cloud-based brokering technology used to manage and allocate virtual resources from multi-cloud assignments to end users. It allows users to access multi-cloud assignments in your environment by connecting to a fully qualified domain name (FQDN), which is defined in the Horizon Universal Broker configuration settings. Through the single Horizon Universal Broker FQDN, users can access assignments from any participating Horizon pod in any site.
Multi-cloud assignments are used to assign users or groups to resources. Either Horizon Cloud on Microsoft Azure or cloud-connected Horizon 8 (or Horizon 7) resources can be managed with multi-cloud assignments. In the case of a DR solution, pools at both the production site and the recovery site can be selected as a target of an assignment.
The production site is assigned as the home site for users. If the production site is available, users will be presented resources in the production site. If the production site goes down, users will automatically be routed to the DR site to access their resources. For more information, see Creating Assignments in a Universal Broker Environment.
Figure 13: Multiple Desktop Entitlements Presented and Accessed Through Universal Broker
Universal Broker and multi-cloud assignments work together to give end users the perception of using a single resource while hiding the complexity around where those resources are sourced from. Universal Broker and multi-cloud assignments do not work in the same way for all platforms and capacity types. Refer to Creating Assignments in a Universal Broker Environment and Considerations for Assignments in a Universal Broker Environment When a Pod Goes Offline for details on configuration options and results of configuration items with Universal Broker and multi-cloud assignments.
To connect a Horizon pod to use the Universal Broker, you need to leverage the Horizon Cloud Connector and universal licensing. The Horizon Cloud Connector is a virtual machine that enables the Horizon Service to integrate with your Horizon pods. For more information, see High-Level Workflow When You are Onboarding an Existing Manually Deployed Horizon Pod as Your First Pod to Your Horizon Cloud Tenant Environment.
- For Horizon 8 (and Horizon 7), you must deploy a Horizon Cloud Connector for each Horizon pod that will use the Horizon Service and its features, which include the Universal Broker. The Horizon Cloud Connector is also required when subscription licensing is used. See the Horizon Cloud Connector section of the Horizon Architecture chapter in the VMware Workspace ONE and Horizon Reference Architecture.
- For Horizon 8 (and Horizon 7), you must also install a Universal Broker plug-in on each Connection Server, as described in Horizon Pods - Install the Universal Broker Plugin on the Connection Server.
- With Horizon Cloud on Microsoft Azure, the Universal Broker components are already present and configured on each pod manager. You do not need to install the Horizon Cloud Connector to use Universal Broker features.
- With Horizon 8 (and Horizon 7), when creating a pool, you must select the Desktop Pools Settings option of a Cloud Managed pool in order to use a pool with multi-cloud assignments.
Figure 14: Select Cloud-Managed in the Horizon Pool Wizard
See the following documents for the latest on limitations when using the Universal Broker:
- Known Limitations of Universal Broker
- Introduction to Universal Broker and Single-Pod Broker (Considerations When Selecting a Broker) section
For a list of the prerequisites for using Universal Broker, see System Requirements for Universal Broker.
Using Global Entitlements with Cloud Pod Architecture
Horizon Cloud Pod Architecture (CPA) introduces the concept of a global entitlement (GE) by joining multiple Horizon pods together into a federation. CPA can be used between both on-premises and cloud-based deployments.
As with Universal Broker, use of Cloud Pod Architecture is optional. Both Universal Broker and CPA allow you to entitle users to desktops and applications in multiple pods.
A global entitlement for a user or group can contain desktop pools or published applications from one or more Horizon pods either in the same location or in different locations. When used as part of a recovery strategy, a global entitlement could contain Horizon resources from both the production environment and the recovery environment. Because the user sees a single entitlement in the Horizon Client, the user has a straightforward means of accessing their recovery Horizon desktops or published applications when needed.
The following figure shows a logical overview of a basic two-site CPA implementation.
Figure 15: Cloud Pod Architecture
Additional policy controls, such as home site, home site override, and site scope can be used to control the behavior and placement of a session for the user.
The scope policy determines the scope of the search when Horizon looks for desktops or applications to satisfy a request from the global entitlement. The scope can configure Horizon to search only on the pod to which the user is connected, only on pods within the same site as the user's pod, or across all pods in the pod federation.
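For reference, global entitlements and their scope policy are created with the lmvutil command-line tool on a Connection Server. A minimal sketch follows; the flag names reflect the Horizon Cloud Pod Architecture documentation, but verify them against lmvutil --help for your version, and the entitlement name is a placeholder:
# Run on a Connection Server in the pod federation.
# Creates a floating global entitlement that searches all pods (--scope ANY).
lmvutil --authAs admin --authDomain example --authPassword "****" --createGlobalEntitlement --entitlementName "Win10-GE" --isFloating --scope ANY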
Note: Cloud Pod Architecture cannot be used with Horizon Cloud on Microsoft Azure.
One consideration to be aware of is potential hair-pinning of Horizon protocol traffic through a different Horizon pod than the one from which the user is consuming a Horizon resource. This can occur if the user's session is initially sent to the wrong Horizon pod for authentication. This flow is illustrated in the following figure.
Figure 16: Horizon Protocol Traffic Hair-Pinning Through Another Horizon Pod with CPA
Note: Instead of using CPA, you can use multi-cloud assignments with Universal Broker, as discussed in the previous section, to avoid this potential for protocol traffic hair-pinning.
Using the Workspace ONE Access Catalog
VMware Workspace ONE Access™ (formerly called VMware Identity Manager) provides single sign-on and conditional access to a self-service catalog of Horizon virtual desktops and published applications, in addition to any SaaS, web, cloud, and native mobile applications you might want to configure. The following screenshot shows an example of the self-service catalog displaying shortcuts for launching a production desktop and a DR desktop.
Figure 17: Workspace ONE Access Catalog Opened in a Browser
Workspace ONE Access can be integrated with Horizon 8 (and Horizon 7), or Horizon Cloud on Microsoft Azure. This allows Horizon entitlements to be synchronized to Workspace ONE Access so that when a user logs in, they see the Horizon desktops and applications that they are entitled to. For details on the integration of Workspace ONE Access and Horizon, see High-Level Horizon-Workspace ONE Access Integration Design.
One option for providing users access to their DR resources is to integrate both the production and the recovery Horizon environments into Workspace ONE Access. Users can be assigned resources at both the production and the recovery sites. Users see both Horizon entitlements when they authenticate to Workspace ONE Access. To simplify desktop selection for end users, the pools can be named something like “Windows 10 – Production” and “Windows 10 – DR,” as shown in the figure above.
Alternatively, Workspace ONE Access can also be used in conjunction with multi-cloud assignments using Universal Broker or with global entitlements as part of Cloud Pod Architecture. When these are used with Workspace ONE Access, the user only sees a single shortcut for the resource in the self-service catalog.
Data Replication
Designing your disaster-recovery service involves choosing how to make various components available in both the production and the recovery environments. These components include user data, applications, and golden VM images. It might seem that all you need are the golden images used to recreate desktops, but without applications and the user's data, you do not have a full-service offering.
Figure 18: Data Replication Between Horizon Environments
For each of these components, you need to decide whether your DR strategy will include that component and, if so, which method to use for replicating it across environments or reproducing it in the recovery environment. Determine what data is considered important enough to replicate to your DR location, and how often that replication takes place. You can also look at a phased approach to recovery of data, where the most important data is replicated frequently and made available quickly in a DR event, but less important data is replicated less frequently and might become available only after a longer period.
Figure 19: Data Replication Considerations
Replicating the Golden VM Image
A golden VM image is the base VM from which pools of VDI desktops or farms of RDSH servers are created. You will want to use the same VM image in all locations whenever possible because without a golden image, no pools or farms can be created.
There are various methods that you can use to replicate the golden VM between your production and recovery environments.
Note: The data format (VHD) that Horizon Cloud on Microsoft Azure uses for VMs is incompatible with that used by vSphere (VMDK). Therefore, VM images are not directly interchangeable between Microsoft Azure and vSphere-based platforms.
For more information on how to create a golden VM image, see Creating an Optimized Windows Image for a VMware Horizon Virtual Desktop.
The Image Management Service is a component of the Horizon Control Plane, included with the VMware Horizon Service. The cloud-based Image Management Service simplifies and automates the management of system images used by desktop assignments, such as desktop pools and server farms, across your cloud-connected Horizon pods.
The Horizon Image Management Service has the following features and benefits:
- Centralized catalog of images managed across all cloud-connected Horizon pods.
- Automated replication of images across cloud-connected Horizon pods.
- Automated version control and tracking of images.
- Automated updates to desktop assignments with customized images by using desktop markers. With desktop markers, you can easily update desktop pools and server farms with newer golden images or roll back to older versions of images, as necessary.
The Image Management Service is currently only supported for use with Horizon deployed on-premises and with Horizon Cloud on Microsoft Azure. Availability and support for other cloud platforms is planned.
For more details on what the Image Management Service is and how it works, see the Horizon Image Management Service section of the Horizon Control Plane Services Architecture chapter of the VMware Workspace ONE and Horizon Reference Architecture, and see the Image Management Service section of the Horizon Control Plane Services focus page.
vSphere content libraries are container objects for VM and vApp templates and other types of files, such as ISO images and text files. You can use the templates in the library to deploy VMs and vApps in the vSphere inventory. You can also use content libraries to share content across vCenter Server instances in the same or different locations.
You can create a local content library to store and manage content in a single vCenter Server instance. If you want to share the contents of that library, you can enable publishing. When you enable publishing, other instances can use a subscribed content library that points to the library. Its content can be used if HTTP(S) traffic is allowed between the two systems.
Using a content library is ideal when you are leveraging one or more platforms that do not allow attaching extra datastores, such as Google Cloud VMware Engine, Azure VMware Solution, and VMware Cloud on AWS.
For more details on content libraries, see Using Content Libraries in the vSphere documentation.
To store golden VM images on platforms that allow attaching datastores, such as on-premises vSphere, you can use network-based storage, such as an NFS or iSCSI datastore, attached to every host.
For more details on using a shared datastore, see Mount Datastores in the vSphere documentation.
If using a content library or shared datastore is not desired or possible, you can export the golden VM to an Open Virtualization Format (OVF) template file and import the OVF file on other instances.
You can also use VMware PowerCLI to export a VM to an OVA file.
Get-VM -Name <VM-Name> | Export-VApp -Destination <Export-Directory> -Format Ova -Force
Get-VM -Name Win10-20H2 | Export-VApp -Destination I:\Images -Format Ova -Force
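On the recovery site, the exported OVA can then be imported with PowerCLI. A minimal sketch, assuming you are connected to the recovery vCenter Server and substituting your own host, path, and VM name:
# Import the golden image OVA into the recovery environment.
Import-VApp -Source I:\Images\Win10-20H2.ova -VMHost (Get-VMHost esxi-dr-01.example.com) -Name Win10-20H2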
Replicating Applications
Applications, which can be delivered either in the virtual desktop or as published applications from RDSH servers, need to be replicated or reproduced, and made available in the recovery site.
The methods you choose will affect the way you design and deliver your DR service. If App Volumes is used with your Horizon implementation, you must consider which features are supported on your chosen infrastructure platform before proceeding with your DR planning.
Applications can be installed directly in your golden images. Consider the following benefits and limitations.
Table 2: Considerations for Applications in the Golden Image
With applications installed in your golden image, DR planning should focus on making the golden image available for use at the recovery site. Earlier in this document, the Replicating the Golden VM Image section discussed the different methods of replicating golden images containing applications.
App Volumes abstracts applications from the golden image and dynamically delivers them to users and VMs as needed. Adding App Volumes to your Horizon implementation allows you to separate application lifecycle management from OS lifecycle management, resulting in several operational efficiencies. With the increasing supportability of App Volumes across infrastructure platforms, you might choose to include App Volumes in your DR plan. Consider the following benefits and limitations.
Table 3: Considerations for Applications in App Volumes Packages
Applications are abstracted from the golden image and packaged in application packages. Your DR strategy should include duplication of App Volumes infrastructure at the recovery site, along with any required application packages.
Note: App Volumes is integrated and included as a service with Horizon Cloud on Microsoft Azure. When using Horizon Cloud on Microsoft Azure as your target DR platform, no additional App Volumes infrastructure is required.
You can either replicate existing App Volumes packages or recreate them, depending on the type of source and destination environments. Recreating packages follows the same process used when the original packages were created and works regardless of the types of Horizon deployments being used.
Replicating existing App Volumes packages is currently only supported between on-premises vSphere-based environments. To build and replicate application packages between two vSphere sites, see Multi-Site Design Using Separate Databases in the “App Volumes Architecture” chapter of the VMware Workspace ONE and Horizon Reference Architecture. The section describing the use of storage groups to replicate application packages between App Volumes instances with a non-attachable datastore is currently applicable when your production and recovery infrastructure platforms are both on-premises vSphere.
In a nonpersistent virtual desktop environment, all applications that the user installs are removed after the user logs out of the desktop. Writable volumes configured with a user-installed app (UIA) template store the applications and settings of users and make the writable volume persistent and portable across nonpersistent virtual desktops.
Note: App Volumes user-writable volumes are not supported with Horizon Cloud on Microsoft Azure.
Table 4: Considerations for Applications Delivered Using Writable Volumes
Benefit | Limitation
---|---
Supports use cases such as providing development and test machines for users to install custom applications on nonpersistent virtual desktops. | Replicating writable volumes between sites adds complexity to your DR strategy.
When creating a DR design, you must decide whether replicating writable volumes from the production site to a DR site is necessary. This decision will guide your DR strategy. In the context of replication, there is a key difference between App Volumes packages and App Volumes writable volumes.
- Packages are read-only to users and therefore can be safely replicated. All copies can actively be used.
- Writable volumes can be written to by the user, so that although replication is possible, only one copy will be designated as the live copy that is actively being used.
Figure 20: Replication of Applications with App Volumes
You might find that providing only those applications delivered in the golden image or in App Volumes application packages is sufficient in the case of a disaster, which negates the need to replicate the writable volumes. In this case, no action is required. If you decide not to replicate existing writable volumes, consider providing new writable volumes at the DR site. This provides end users the option to reinstall user-installed applications if absolutely required. In this case, create and assign new writable volumes in the App Volumes instance at the recovery site.
Replicating writable volumes requires careful planning because users can write to writable volumes. You can back up and restore writable volumes and essentially copy them from one site to the other, but you do not want the user to access the copy until a DR event. If you allow the user access to both copies of their writable volume, the writable volumes might become out of sync. From the user’s perspective, only one site and one copy of their writable volume should be active. The copy at the recovery site is a standby and should be made active only in the case of a DR event.
Of course, this also raises questions about RPO and RTO. How often should you copy the writable volumes from site 1 to site 2? How long would it take to make the copy in site 2 active for the user? Some organizations decide not to protect the writable volumes because of the data replication challenges and cost. For them, in a DR event, presenting the desktop with the App Volumes packages and Dynamic Environment Manager data suffices.
You could also consider backing up the writable volumes only very infrequently, or not giving users their writable volumes at first in a DR event, instead focusing on the core components and adding nice-to-have items such as writable volumes later.
Replicating Profile Data
Windows profile data includes user content data and user configuration data. You may choose to manage profile data with one or multiple tools. For additional information about Windows profile components, see Anatomy of a User Profile. The following sections provide an overview of commonly used tools to manage profile data and the options for replicating data between the production and recovery sites.
Many technologies are available to aid in the replication of data. Due to the nature of profile and user data, most replication should be regarded as active-passive, where one copy will be live while the other is passive, standing by to be used in a DR event. You will need to consider how to promote the passive copy in a DR event. Because the DR copy becomes the live copy during a DR event, and users will make changes to that copy, you will also need to understand how to reverse replication when the production site becomes available and how to fail back when the DR event is resolved.
Figure 21: Replication of User and Profile Data
VMware Dynamic Environment Manager provides profile management by capturing user settings for the operating system and applications. User content data is managed through folder redirection, which can be configured either by using Dynamic Environment Manager or by using Windows policy objects. When designing your DR service, you may choose to restore user configuration data, user content data, or both.
Unlike traditional application profile management solutions, Dynamic Environment Manager does not manage the entire profile. Instead, it captures user configuration data for applications and Windows settings that the administrator specifies. This reduces login and logout time because less data needs to be loaded. Alternatively, for specific applications, the settings can be dynamically applied when a user launches the application, rather than at login, making the login process more asynchronous.
Dynamic Environment Manager (DEM) and folder redirection require little infrastructure, making replication relatively easy. The following components should be considered in your DR strategy:
- File server infrastructure – Used to host folder redirection and Dynamic Environment Manager shares.
- Configuration share – Share on a file server containing Dynamic Environment Manager configuration rules.
- Profile archives share – Share on a file server containing user configuration data.
- FlexEngine configuration (GPO or NoAD) – XML- or ADMX-based configuration for the Dynamic Environment Manager (FlexEngine) agent.
- Dynamic Environment Manager management console – Stand-alone management console.
Figure 22: Dynamic Environment Manager Architecture
Replication of the Dynamic Environment Manager configuration share and profile archives share can be accomplished using various file replication technologies such as Microsoft Distributed File System Replication (DFSR).
Table 5: Considerations for Replication of Dynamic Environment Manager Shares
- Configuration share – Users have read access only; only admins make changes. Replication is supported. All copies can actively be consumed by users.
- Profile archive shares – Users can read and write to these shares. Replication is supported. Only one copy should be actively consumed by users; one of the replicated copies can be used in a DR event but will need to be promoted to become the active copy.
See the Disaster Recovery and Multi-site Design sections of the “Dynamic Environment Manager Architecture” chapter in the VMware Workspace ONE and Horizon Reference Architecture for more detail.
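DFSR itself is configured through Windows Server tooling rather than code. Where DFSR is not available, the one-way, active-passive copy described above can be approximated with a scheduled mirror job. The following Python sketch is a minimal illustration, with hypothetical share paths, that wraps robocopy for this purpose:

```python
import subprocess
import sys

# Hypothetical UNC paths for the live and standby DEM shares.
LIVE_SHARE = r"\\site1-fs01\DEMProfiles"
STANDBY_SHARE = r"\\site2-fs01\DEMProfiles"

def mirror_share(source, destination):
    """One-way mirror of the live share onto the standby copy.

    /MIR     mirror the directory tree (copy changes, delete orphans)
    /COPYALL copy data, attributes, timestamps, and security information
    /R:2     retry a locked file twice; /W:5 wait 5 seconds between retries
    """
    result = subprocess.run(
        ["robocopy", source, destination, "/MIR", "/COPYALL", "/R:2", "/W:5"]
    )
    return result.returncode

if __name__ == "__main__":
    rc = mirror_share(LIVE_SHARE, STANDBY_SHARE)
    # Robocopy exit codes below 8 indicate success.
    sys.exit(0 if rc < 8 else rc)
```

Run from a scheduled task, this keeps the standby copy passive: users are never pointed at it until a DR event.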
FSLogix provides VHD-based profile redirection technologies, which redirect part or all of a user profile to a remote file share. Profile Containers can store the entire user profile or may be combined with folder redirection to abstract user content data from VHDs to an SMB file share.
The type of data you choose to store in the Profile Containers will influence your DR strategy. Office Containers are used to store cache data for Office 365. Because this data is easily and automatically rebuilt from the cloud if the Office Container is lost, Office Containers are typically considered disposable.
For more information see:
- What is FSLogix? to learn more about Profile and Office Containers.
- Integrating FSLogix Profile Containers with VMware Horizon on how to use FSLogix with Horizon.
Note: FSLogix is one of many third-party solutions that work with VMware Horizon. Although FSLogix is often integrated into Horizon designs, VMware assumes no responsibility to provide support for the use of FSLogix software with VMware products.
When designing your DR service, you must decide whether to replicate FSLogix containers to your DR site. If you are using Profile Containers with folder redirection, you may decide to replicate user content data (folder redirection) but not user configuration data (Profile Container). If you are using Office Containers for disposable cache data only, you may decide not to replicate the VHDs to the recovery site.
Consider the following when designing your DR strategy:
- FSLogix stores profile data in VHD(X) files on remote SMB shares. File- or storage-level replication technologies could be used to replicate containers to the recovery site.
- Cloud Cache may be used in your DR strategy or to create an active-active FSLogix service. See Cloud Cache to create resiliency and availability for additional information.
As with any profile data, you must consider RPO and RTO targets and the practicalities of how frequently you can replicate Profile Containers from one site to another. Sufficient bandwidth between the sites will be needed to replicate the quantity of data that changes and to accommodate the target frequency. Also consider which tasks need to be performed before replication, such as preventing users from changing data during the copy process.
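As a quick feasibility check on those targets, the following Python sketch turns an assumed per-user daily churn, user count, and replication window into the link throughput required; every input is an assumption to be replaced with measured figures from your environment:

```python
# Rough replication feasibility check (all inputs are assumptions).
users = 2000
changed_mb_per_user = 150        # measured daily churn per Profile Container
replication_window_hours = 4     # how long each replication pass may take
link_mbps = 1000                 # usable inter-site bandwidth, megabits/s

total_bits = users * changed_mb_per_user * 8 * 10**6
window_seconds = replication_window_hours * 3600
required_mbps = total_bits / window_seconds / 10**6

print(f"Required: {required_mbps:.0f} Mb/s of {link_mbps} Mb/s available")
if required_mbps > link_mbps:
    print("Link too slow: lengthen the window, reduce churn, or add bandwidth.")
```

With these example numbers, replicating 2,000 containers that each change by 150 MB within a 4-hour window needs roughly 167 Mb/s of sustained throughput.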
User-writable volumes configured with a profile template may be used to persist part or all of a user profile to a VMDK file assigned directly to an end user. User-writable volumes can store the entire user profile or may be combined with folder redirection to abstract user content data from VMDKs to an SMB file share. The type of data you choose to store in the user-writable volume will influence your DR strategy.
When designing your DR service, you will need to decide whether to replicate App Volumes user-writable volumes to your DR site. If you are using writable volumes with folder redirection, you may decide to replicate user content data (folder redirection) but not user configuration data (VMDK). The earlier section in this guide called Applications Captured in App Volumes User-Writable Volumes discussed considerations regarding replication of writable volumes.
VMware Persona Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Persona Management has been deprecated and removed from VMware Horizon 8 (2012) and later.
VMware View Composer persistent disks redirect the Windows profile to a local VMDK. It is also possible to use persistent disks to store Outlook OSTs, user-installed applications, or simply treat a persistent disk as a secondary hard disk for storage. Using persistent disks for anything other than profile redirection is outside the scope of this guide. View Composer and persistent disks have been deprecated and removed from VMware Horizon 8 (2012) and later.
Persona Management and persistent disks are legacy technologies and VMware recommends upgrading to modern alternatives. See Modernizing VDI for a New Horizon for guidance on selecting and migrating to a modern alternative.
In the event of an outage, business continuity policy will enact DR processes such as failover of the service and delivery of resources from your Horizon DR environment. Depending on your deployment, you might need to perform some tasks before your DR environment can accommodate your users:
- DR infrastructure – Scale up the DR infrastructure to cope with the increase in users and demand.
- Data replication – Promote copies of data in the DR location to become the active instances.
- Applications – Ensure Horizon in the DR location has access to replicated or reproduced applications.
- User access – Present the DR Horizon resources to users.
Scale-Up of the Recovery Infrastructure
Depending on the normal running state of your DR Horizon environment, you might have to perform tasks to make it fully functional or to expand its capacity. For example, if you have implemented a “cold site” or “pilot-light” configuration of Horizon on a cloud-based infrastructure, you might need to add capacity by adding hosts and powering on supporting infrastructure components.
If additional capacity is required, you need to understand the process for acquiring capacity and determine the amount of time required. You should also document any additional configuration tasks that might need to be done to make use of this additional capacity.
Another consideration in scaling up the DR Horizon environment is to increase the capacity of Horizon desktop pools or RDSH server farms to handle the increase in users that will be serviced from the DR location. Ideally, any required pools or farms should already exist and be seeded with an initial size so that any provisioning tasks are minimized. Carry out tests to understand the time required to provision additional desktop clones or RDSH server clones.
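For rough planning, the provisioning times you measure can be turned into an estimate of how long a pool expansion will take. A minimal sketch, with every input an assumption to be replaced by your own test results:

```python
# Estimate time to expand a desktop pool (all inputs are assumptions).
desktops_needed = 500        # clones required in a DR event
seeded_desktops = 100        # clones already provisioned in the DR pool
minutes_per_clone = 2.5      # measured average provisioning time
concurrent_operations = 8    # provisioning concurrency observed in vCenter

to_provision = max(0, desktops_needed - seeded_desktops)
total_minutes = to_provision * minutes_per_clone / concurrent_operations
print(f"Provisioning {to_provision} clones takes ~{total_minutes:.0f} minutes")
```

The larger the seeded initial size, the smaller `to_provision` becomes, which is exactly why pre-seeding the DR pools shortens recovery time.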
Failover of Profile Data
During a failover to a recovery site, you should understand how to promote and provide access to the replicated or reproduced profile and user data.
As described earlier, in Replicating Dynamic Environment Manager File Shares, there are two types of file shares.
- Configuration shares can be replicated and be actively accessible to users in the recovery site, so no intervention is required.
- Although profile archive shares can be replicated, the copy in the recovery site is a standby copy. An administrator will need to promote and enable this DR standby copy to allow users access.
To speed up recovery in a DR event, you could provide recovery in stages, with the configuration shares available at the time of failover and the profile archive shares becoming available shortly thereafter.
See Disaster Recovery in the Dynamic Environment Manager Architecture chapter of the VMware Workspace ONE and Horizon Reference Architecture.
If you have chosen to replicate FSLogix VHD(X) files to the recovery site, it is imperative to ensure end users have access to only one site at any given time to ensure profile integrity. The file share or shares containing FSLogix containers at the production site should be active. Under normal operating conditions, the file shares at the recovery site should be passive. If a disaster occurs, you will need to deactivate the production share and promote the passive share to active status.
The following example workflow applies if you are using DFS-R and DFS-N to replicate FSLogix containers and make them available at both sites; a scripted sketch of the referral flip follows the list:
- If possible, verify that DFS-R data replication from the active folder target to the desired folder target is complete.
- Deactivate the DFS-N referral status for the active folder target.
- Enable the DFS-N referral status on the desired folder target.
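Steps 2 and 3 of this workflow can be scripted. The sketch below shells out to the Set-DfsnFolderTarget cmdlet from the Windows DFSN PowerShell module; the namespace and target paths are hypothetical, and in practice you would also verify the DFS-R backlog (step 1) before flipping the referrals:

```python
import subprocess

# Hypothetical DFS namespace folder and its two folder targets.
NAMESPACE_FOLDER = r"\\corp.example.com\Profiles\FSLogix"
PRODUCTION_TARGET = r"\\site1-fs01\FSLogix"
RECOVERY_TARGET = r"\\site2-fs01\FSLogix"

def set_target_state(target, state):
    """Set a DFS-N folder target Online or Offline via PowerShell."""
    command = (
        f'Set-DfsnFolderTarget -Path "{NAMESPACE_FOLDER}" '
        f'-TargetPath "{target}" -State {state}'
    )
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command], check=True
    )

def fail_over_to_recovery():
    # Step 2: stop referring clients to the production target.
    set_target_state(PRODUCTION_TARGET, "Offline")
    # Step 3: start referring clients to the recovery target.
    set_target_state(RECOVERY_TARGET, "Online")

if __name__ == "__main__":
    fail_over_to_recovery()
```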
Failover of Applications
This section discusses the failover of applications that are delivered in App Volumes packages or that are installed in user-writable volumes.
If you have built your App Volumes implementations using a multi-instance model, where App Volumes infrastructure components are duplicated at the production and DR sites, failing over in the case of a disaster is a simple process. The key to success is to ensure that the infrastructure, application packages, writable volumes, and assignments are replicated or recreated in the DR site before an outage occurs.
App Volumes application packages are utilized by end users when accessing Horizon virtual machines. Ensure application packages are available and that all necessary assignments have been created in the App Volumes Manager instance at the DR site. When you fail over to the DR site for Horizon access, users will automatically receive App Volumes application packages that have been assigned.
App Volumes writable volumes are utilized by end users when accessing Horizon virtual machines. When you fail over to the DR site for Horizon access, users will automatically receive App Volumes writable volumes that have been assigned.
Presentation of Recovery Resources to Users
In a DR event, depending on how Horizon resources are presented to users, and how users are consuming multi-cloud resources, some administrative tasks might be required to direct users to their recovery resources.
Using Universal Broker with multi-cloud assignments in a multi-pod environment makes it easier to handle failover in a DR event. Universal Broker monitors the availability of each Horizon pod configured to use the service. When one pod is unavailable, if users are entitled to a multi-pod assignment and are not prohibited from using capacity in other Horizon pods, users will automatically be routed to available capacity on the other pod.
To set up a multi-cloud assignment in the Horizon Control Plane, see High-Level Steps for Setting Up Horizon Cloud Multi-Cloud Assignments (MCA) for Your Horizon Cloud Tenant.
Important: Be careful not to set up your multi-cloud assignments with a Home Site Restriction because if you do, your users will not be automatically redirected to other pods with available capacity during a DR event.
Once the DR event is over and all capacities are back online, users will be automatically redirected to their primary capacity sites (Horizon pods), according to the rules configured in the multi-cloud assignment. Users will need to log out of all currently assigned resources to be redirected on their next login.
See Considerations for Assignments in a Universal Broker Environment When a Pod Goes Offline for more details.
With Cloud Pod Architecture and global entitlements, sessions are normally delivered from the users' defined home site. In a DR event, the same global entitlement will also allow users to access resources from the recovery site when their home site is unavailable.
Depending on the specific configuration of the global assignment, this failover may require administrative changes to the assignment policies:
- Modify the home site or apply a home site override to ensure that users are directed to the recovery site. See Managing Home Sites.
- Ensure that the scope allows Horizon to search the recovery site and allocate desktops or published applications from the Horizon pods there. See Modify Attributes or Policies for a Global Entitlement.
If the production and recovery Horizon environments have been integrated into Workspace ONE Access, users should be directed to use the appropriate shortcut from Workspace ONE Access to access the recovery environment.
- If the production and recovery pods are separate and use neither Universal Broker nor Cloud Pod Architecture, users will have two shortcuts and will need to be notified to use the DR shortcut.
- If Workspace ONE Access is used in combination with Universal Broker or Cloud Pod Architecture, the users will be presented with a single unified shortcut and do not need to be notified to use a different shortcut.
Where Horizon Client is being used without Universal Broker or Cloud Pod Architecture, or if Horizon is not integrated into Workspace ONE Access, the production and recovery resources must be presented separately to users.
Depending on how the Horizon shortcuts are provided to users, users should be directed to either:
- Use Horizon Client to add a new server and enter the URL administrators specify for the recovery site Horizon pod.
- Use desktop shortcuts, web page links, or links in email, as directed by administrators, to access the recovery site Horizon pod.
Other Technical Considerations
When planning for a new deployment for DR purposes, before you get started building a recovery environment, you will need to perform a health check on your existing environment and review your software versions and licenses. The following section covers the main considerations specific to Horizon and Horizon Cloud, as well as those that are common across all environments.
Before building a DR environment, check that your existing Horizon environment is healthy, has been properly deployed, and is sufficiently sized to cope with the number of users you must support. Check components and amend configurations, as necessary, to make sure they follow sizing, security, and other best-practice recommendations.
Review the following items and update any current environment, as necessary.
- Release versions – Review the versions of software used in the environment to make sure that the infrastructure is running supported versions and taking advantage of all the fixes, features, and performance improvements in recent releases of Horizon. See the Version Checks section of this guide for more details.
- Sizing – The environment should be sized correctly to cope with not only current demand but also any increase that is expected during a DR event. To ensure that the current and any future environment is designed and sized correctly, review the Scalability and Availability section of the Horizon Architecture chapter and the Scalability section of the Horizon Cloud on Microsoft Azure Architecture chapter in the VMware Workspace ONE and Horizon Reference Architecture.
- Security – Review the Horizon communication routes used, network ports used, and firewall rules to ensure that only required traffic is allowed.
- Authentication – Review how user authentication is handled and evaluate if this should be enhanced.
- Golden VM images – Ensure that best practices have been followed for creating images to be used for virtual desktops and for Windows RDSH servers used for published applications. If the golden VM image is not properly optimized, the virtual desktop or RDSH server that is cloned from it might consume more resources than required and adversely affect user experience. Review Creating an Optimized Windows Image for a VMware Horizon Virtual Desktop for more details.
The versions of software used should be reviewed to make sure that the infrastructure is running supported versions and taking advantage of all the fixes, features, and performance improvements in recent releases of Horizon. Horizon is updated on a quarterly basis, and fixes, features, and performance benefits are included in the new versions.
Check the interoperability matrix to make sure the Horizon, vSphere, and vCenter versions are supported together.
- Infrastructure – Connection Servers, VMware Unified Access Gateway™ instances, vSphere and vCenter, and App Volumes Managers should be running the latest build or a recent build to benefit from the new features, enhancements, and bug fixes.
- Horizon Agent – The version of Horizon Agents used in the virtual desktops should be reviewed and updated where necessary. Ideally these should match the Horizon Connection Server version. Newer versions of agents will include both new features and performance improvements.
- Horizon Client – The Horizon Client on the users’ endpoint devices should also be updated to make sure they have the latest version. Newer versions of the client will include both new features and performance improvements. By default, the Horizon Client for Windows and Mac is configured to automatically check for updates. Android and iOS devices receive updates through the app store used for installation.
Any Horizon environment needs to be properly licensed. Horizon licensing is available in a subscription model, as either a SaaS subscription or a term subscription. Horizon standard subscription licenses give the option of either on-premises (private data center) deployments or public-cloud deployments. Horizon universal subscription licenses give you the flexibility to deploy and expand on your platform or platforms of choice with a hybrid multi-cloud deployment.
Summary and Additional Resources
In a DR event, Horizon is a powerful desktop and application virtualization solution that can be accessed across multiple on-premises locations and public and private clouds. This guide outlined the various recovery site types, strategies for providing users access from multiple locations, methods for replicating user data across sites, and other technical considerations. With careful planning, a Horizon DR solution can minimize the disruption users experience during an outage at their production site.
Glossary of Terms
Business Continuity – The capability of an organization to continue running its business at acceptable, predefined levels of function or operational capacity following a disruptive incident.
Business Continuity Plan – A predefined set of procedures that guide an organization on how to respond to an incident that disrupts business function or operational capacity.
Disaster Recovery – The process of restoring and maintaining the relevant data, equipment, applications, and other technical resources on which a business depends.
High Availability – A highly available system supports operations that continue with little or no noticeable impact to the user. A high-availability strategy should remove any single points of failure by using multiple and redundant components within a site.
Redundancy – A design practice of using multiple sources, devices, or connections so that no single point of failure will completely stop the flow of information.
Recovery Point Objective (RPO) – The point in time to which a firm must recover data as defined by the organization. In other words, the RPO is what an organization determines is an “acceptable loss” in a disaster situation. The RPO dictates which replication method will be required (such as nightly backups, snapshots, continuous replication). For example, for some organizations the RPO might be the loss of one hour’s worth of data.
Recovery Time Objective (RTO) – The duration of time and service level within which a business process must be restored after a disruption to avoid unacceptable losses. RTO begins when a disaster hits and does not end until all systems are up and running.
Production or Primary Site – In the context of a production and recovery site, the production site contains the original infrastructure and data. It is the site that is typically in use during normal operations.
Recovery or DR Site – A site that provides a secondary instance or replica of your IT environment and infrastructure. This site provides equivalent resources and is activated when the production site becomes unavailable.
Remote Site – A site that provides a secondary instance or replica of your IT environment—without physical desks and office infrastructure—that your organization’s employees can securely access and use remotely, through standard Internet connections from anywhere.
Cold Site – A site that is equipped with appropriate infrastructure components with adequate space that allows for the installation or buildout of a set of systems of services by the key staff required to resume business operations.
Pilot Light – The term pilot light refers to a small flame that is always lit in devices such as gas-powered heaters and can be used to start the devices quickly when required. Relative to disaster recovery, a pilot-light environment contains all the core components of a distinct system or service, that is adequately maintained and regularly updated. The implementation is always on and is built to functional equivalency of the production site. This implementation allows you to restore and scale the system quickly and efficiently.
Multiple Site – A system or service implemented in multiple locations, where the implementation provides one-for-one functionality of the other location in the event of an outage of any given site.
Mission Critical – A computer system or application that is essential to the functioning of your business and its processes.
SDDC Infrastructure – A VMware-based software-defined data center platform used as infrastructure. VMware software-defined architecture can be deployed in your data center as a private cloud, or off-site using secure infrastructure-as-a-service (IaaS) operated by VMware or one of our certified partners. Most companies choose a hybrid combination of on-premises and IaaS platforms. Not all SDDC platforms have been certified for use with VMware Horizon; refer to VMware Horizon 8 Announcement and Pricing and Packaging Updates for details.
To learn more about VMware End User Computing solutions, visit the Digital Workspace Tech Zone, your fastest path to understanding, evaluating, and deploying VMware EUC products.
Authors and Contributors
This guide was written by:
- Graeme Gordon, Senior Staff EUC Architect, EUC Technical Marketing, VMware
- Caroline Arakelian, Senior Technical Marketing Manager, EUC Technical Marketing, VMware
- Chris Halstead, Staff EUC Architect, EUC Technical Marketing, VMware
- Hilko Lantinga, Staff EUC Architect, EUC Technical Marketing, VMware
- Jim Yanik, Senior Manager, EUC Technical Marketing, VMware
- Josh Spencer, Senior Product Line Manager, EUC Technical Marketing, VMware
- Rick Terlep, Senior EUC Architect, EUC Technical Marketing, VMware
To comment on this paper, contact VMware End-User-Computing Technical Marketing at [email protected].
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/5/html/administration_and_configuration_guide/ch13s11
13.11. Experimental Components
- Transaction Bridge
- Sometimes you may need the ability to invoke traditional transaction components, such as EJBs, within the scope of a Web Services transaction. Conversely, some traditional transactional applications may need to invoke transactional web services. The Transaction Bridge (txbridge) provides mechanisms for linking these two types of transactional services together.
- BA Framework
- The XTS API operates at a very low level, requiring the developer to undertake much of the transaction infrastructure work involved in WS-BA. The BA Framework provides high-level annotations that enable JBoss Transaction Service to handle this infrastructure. The developer can then focus more on business logic instead.
http://serverfault.com/questions/262750/tcpprobe-logs-not-making-sense
I am trying to use TCPProbe over my network to study TCP, but the logs it generates for a simple iperf TCP connection don't seem to make any sense. I am running iperf between two nodes in my 1 Gbps network and capturing TCPProbe logs on the client (sender) side. Based on those, the snd_cwnd just keeps increasing to values as big as 8000. I have used TCPProbe in the past, and my understanding is that this value is in segments. But a window of 8000 segments in a network with 0.1 msec RTT means 8000 * 1500 bytes / 0.1 msec = 960 Gbps, which is ridiculous. I am wondering if anyone else has seen such behaviour or has any clue why TCPProbe is reporting such nonsensical values for snd_cwnd?
I've not used TCPProbe but snd_cwnd is almost undoubtedly the congestion window size. This is normally in bytes and indicates the maximum number of unacknowledged bytes that will be sent.
The window size is maintained by the sender and adjusted based on throughput. If segments are not being acknowledged (because they are being dropped or there is too much fragmentation in the path), then the window size will be dropped down (halved, if I remember correctly), and TCP will then go through a slow-start routine which gradually increases the window size again.
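The arithmetic is easy to check under both interpretations of snd_cwnd. Using the numbers from the question, a bytes-based window implies a throughput that is plausible on a 1 Gbps link, while a segments-based reading gives the impossible 960 Gbps figure:

```python
# Implied peak throughput: window_bytes * 8 / rtt
snd_cwnd = 8000
mss_bytes = 1500      # assumed maximum segment size
rtt_s = 0.1e-3        # 0.1 ms RTT, as stated in the question

def gbps(window_bytes):
    return window_bytes * 8 / rtt_s / 1e9

print(f"cwnd in segments: {gbps(snd_cwnd * mss_bytes):.0f} Gb/s")  # ~960 Gb/s
print(f"cwnd in bytes:    {gbps(snd_cwnd):.2f} Gb/s")              # ~0.64 Gb/s
```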
http://www.phoronix.com/forums/showthread.php?78568-Hardware-Expectations-For-Valve-s-Steam-Box&p=317535
Hardware Expectations For Valve's Steam Box
Phoronix: Hardware Expectations For Valve's Steam Box
Based upon my extensive Linux hardware testing of enthusiast and gamer-grade hardware over the past nine years on Phoronix and immense amounts of performance benchmarking, plus having been involved with Steam on Linux, here are some of my thoughts, expectations, and hopes for the hardware comprising Valve's official SteamBox...
Obviously an nvidia gpu is the only sensible choice at this time, but considering most next-gen games will already be better optimized for multi-threading due to the PS4 and Xbox 720 octacore CPUs, and given the price difference, I wouldn't discard an AMD CPU, probably an "octacore" Piledriver.
My thoughts too, probably an AMD APU. Richland, the Trinity reboot, would probably be okay here.
With an Intel CPU & SSD included I don't see how they could sell these things, as the price would be ~500 USD.
AMD Richland, 4 GB of RAM, a 256 GB drive, controller, headset, and a stripped Ubuntu distro is my bet.
I don't think an Optimus-style setup is feasible or even needed; with current GPUs idling as low as they do, a mid-to-high-end card can be kept cool and quiet without taking much space, and adding Optimus adds complexity for very little gain. Maybe there's something I've missed, but how does the brand of the CPU affect OpenGL? The problem with Intel CPUs is you often have to pay through the nose to get what you suggest, while in most current games you don't need that much power.
I do hope AMD starts putting more work into their Linux drivers. So far I've had positive experiences with them on my Llano-based laptop, but I don't use it for that many graphically intense applications, and even to me it's obvious it's not quite as smooth as their Windows drivers currently are.
I think the main driving factor is going to be matching PS4 and Xbox 720 hardware.
My guess for their middle tier:
Graphics/CPU: AMD, something near the PS4/Xbox 720. Despite current driver quality, it would be the cheaper solution.
Disk: Standard 1 TB HDD. SSDs are too expensive, and games are taking 5-20 GB; a 128 GB SSD can hold something like 10 big games, so it won't cut it.
You think they're going with NVidia rather than with AMD because of the better-working drivers, and then you guess they might end up with an Optimus device?
Since AMD already delivers hardware for the big two next-generation consoles, they'll probably be able to provide a far better offer than NVidia. And I really wish Valve decided in favor of AMD, just to kick some distorted heads back into reality:
If something is wrong with your AMD GPU setup, it's the Catalyst which sucks. If something goes wrong with an NVidia GPU, it's just a particular case. Noticing anything?
So you would go for the clearly inferior solution because you don't like the attitude of some people?
Originally Posted by alexThunder
So you didn't notice anything? Ok, let's put it this way:
Originally Posted by Vim_User
I just got the game and am happy to have inferior AMD hardware :P
I think he was referring to the state of AMD's Linux drivers. AMD's hardware is without doubt amongst the best, but yeah, their drivers on Linux aren't great. On Windows they in my experience work just as well as nVidia's drivers.
http://ijieee.org.in/paper_detail.php?paper_id=3344&name=Facial_Expression_Recognition_Using_Local_Facial_Features
International Journal of Industrial Electronics and Electrical Engineering(IJIEEE).
Paper Title - Facial Expression Recognition Using Local Facial Features
Facial expression recognition has received a lot of attention in recent years due to its importance in many multimedia and human-computer interaction applications. One of the critical issues for a successful facial expression recognition system is to develop a discriminative feature descriptor. In this paper, we present a texture descriptor, Local Direction and Transition Pattern, to effectively capture the facial features. The recognition performance of the proposed method is evaluated on the Cohn-Kanade facial expression dataset with a support vector machine classifier. Experimental results show that the proposed method yields better recognition accuracy than other existing methods.
Keywords- Facial Expression Recognition, Local Direction and Transition Pattern, Support Vector Machine
https://collab-help.its.virginia.edu/m/sitetools/l/634168-how-do-i-clear-the-chat-history
How do I clear the chat history?
Go to Chat.
Select Chat from the Tool Menu in your site.
Select the Clear History link for the room you want to clear.
The Clear History link displays under the chat room's Title.
Confirm the deletion.
On the Deleting all messages from chat room page, select the Delete button to confirm the permanent removal of all chat messages from the room.
https://forum.kicad.info/t/op-amp-4-power-supply/15089
I'm about to finish my first project with Eeschema.
The project consists of two RIAA pre-amplifier stages; each stage uses an OPA1612.
I designed a separate supply filter (an RC network) for each power pin. I'd then like to connect each power supply pin to its own filter.
Given that the power supply pins are invisible, what is the right procedure?
Thanks in advance.
OPA1612 is a dual opamp.
In Eeschema it is divided into 3 “Units”.
A). Opamp with suffix “A” and pins 1, 2, 3
B). Opamp with suffix “B” and pins 5, 6, 7
C). Power pins section with pins 4 and 8.
You can edit the properties by hovering over a symbol and press “e”.
Then in the symbol Properties window in the top left corner you can change the “Unit” to A, B, or C.
Alternatively, you can also select which unit you want when you get a symbol from the library directly.
Type "a" in Eeschema, type "OPA1612" in the search box, and then select either Unit A, B, or C.
This does assume @Masca64 uses version 5 libs. In version 4 the op amps really used invisible power pins.
good morning,
thanks for your reply Paul and Rene.
let me see if I understand everything: in my schematic I used 1 chip (OPA1612) for the left channel + 1 chip for the right channel (another OPA1612).
At the beginning, for each channel I used only OPA1612 unit A and unit B: in effect I didn't use part C (with the power pin section) in the schematic; I understand this is the first error: I must also use unit C for each chip, is that correct?
To minimize intermodulation between channels, I preferred to use a separate power supply for each channel.
I mean that the negative and positive rails go through a low-pass RC filter before attaching to the negative and positive pins of each chip.
Which is the procedure to force the power pins of each chip onto a different net, instead of the default Vcc or Vdd?
Must I declare a property in the chip's part editor?
thanks in advance
sorry, i forgot:
I downloaded Kicad yesterday, so I’m using Version (5.0.2)-1
Simply connect the power input pins to the net you want them to be connected to. (unit C of the symbol!) Do not use global labels here unless you really know what you are doing (Power symbols ARE global labels!)
This depends on you having one full IC per channel. If you share the same IC then you only have one supply for that part! (This is not in any way a limitation of kicad. It would be a limitation of your design. The IC simply has one supply for all its sub units combined.)
thanks a lot Rene,
during the morning I was already trying to use unit C too, connecting it to the filter output; indeed it is the right way because the circuit works well… finally I can continue building the PCB…
Thanks again Rene
HAND (Have A Nice Day)
https://www.experts-exchange.com/questions/26900095/Assign-program-to-Print-Screen-key.html
How can I find out which program is using the Print Screen key?
In my current Windows XP, when I press the Print Screen key, the current screen is always printed directly to the default printer.
I want to use the SnagIt program to make screen copies on other printers or on disk.
But the Print Screen key is already in use by another program and I cannot find out which one.
Is there a location somewhere in the registry, or in some other place, where XP stores this information?
http://microformats.org/wiki/index.php?title=rel-examples&oldid=12730
rel attributes which could, potentially, be used in microformats.
This appears to be a potential (perhaps unintentional) attempt to bypass the process. Abstract examples of rel attribute values are, if anything, more akin to existing formats than to examples.
Rather than listing any such rel attribute uses here, if there is a desire to rigorously develop the functionality behind such rel attributes into a microformat, then the process should be followed with that functionality in mind, rather than a particular rel attribute value. Tantek 15:00, 21 Jan 2007 (PST)
https://kite.trade/forum/discussion/1626/reconnection-time
```python
from kiteconnect import WebSocket

# Initialise.
kws = WebSocket("your_api_key", "your_public_token", "logged_in_user_id")

totalTokens = []  # instrument token list (elided in the original post)

# Callback for tick reception.
def on_tick(tick, ws):
    print tick  # Python 2 syntax, as in the original Kite Connect example

# Callback for successful connection.
def on_connect(ws):
    # Subscribe to a list of instrument_tokens (RELIANCE and ACC here).
    ws.subscribe(totalTokens)
    # Set RELIANCE to tick in `full` mode (token list elided in the original).
    ws.set_mode(ws.MODE_FULL, )

def on_disconnect(ws):
    print "Connection is dropped...subscribing again"
    on_connect(ws)

# Assign the callbacks.
kws.on_tick = on_tick
kws.on_connect = on_connect
kws.on_disconnect = on_disconnect

# Infinite loop on the main thread. Nothing after this will run.
# You have to use the pre-defined callbacks to manage subscriptions.
kws.connect()
```
https://robin.sanborn.com/accounts/new
This dataset is a public record and, as more fully described below, there are no restrictions on the use, reproduction, or distribution of this dataset. Notwithstanding the foregoing, the public release of this dataset should not be construed, expressed or implied, as to whether any use constitutes a legally permissible purpose. It is the sole responsibility of the user to determine if the data is usable for their purposes. This dataset is provided “AS IS'' and on an “AS AVAILABLE” basis. The State of Michigan (“State”) makes no warranties, express or implied, regarding the accuracy, adequacy, reliability, timeliness, or completeness of this dataset. The State also does not make any warranties, express or implied, for the continued quality, accuracy, or currency of this dataset after it has been downloaded, nor the quality or accuracy of any analyses or re-uses of this dataset. THE STATE DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS DATASET AND ANY INFORMATION PROVIDED TO YOU, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT OF PROPRIETARY RIGHTS. THE STATE WILL NOT BE LIABLE, REGARDLESS OF THE FORM OF ACTION, WHETHER IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY OR BY STATUTE OR OTHERWISE, FOR ANY CLAIM FOR CONSEQUENTIAL, INCIDENTAL, INDIRECT, OR SPECIAL DAMAGES, INCLUDING WITHOUT LIMITATION LOST PROFITS AND LOST BUSINESS OPPORTUNITIES, RELATED TO THE ACCESS OR USE OF THIS DATASET. IN NO EVENT WILL THE STATE BE LIABLE FOR ANY AMOUNTS THAT MAY RESULT FROM THE ACCESS OR USE OF THIS DATASET, REGARDLESS OF THE FORM OF ACTION, WHETHER IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, OR BY STATUTE OR OTHERWISE. You forever release the State, its departments, subdivisions, officers, and employees from all claims, rights, actions, demands, damages, liabilities, expenses and fees, which arise out of or relate to your access or use of this dataset. You must defend, indemnify and hold the State, its departments, subdivisions, officers, and employees harmless, without limitation, from and against all actions, claims, losses, liabilities, damages, costs, attorney fees, and expenses (including those required to establish the right to indemnification) arising out of or relating to your access or use of this dataset. The State reserves the right to modify or remove this dataset for any reason, without notice, at any time. Nothing in these terms constitutes or is intended to be a limitation upon, or waiver of, any privileges and immunities that apply to the State. These terms are governed by and interpreted under the laws of the State of Michigan without regard to conflict of laws provisions. These terms do not apply to other materials or content, including maps or logos, that may be located on the site or portal containing this dataset and that may be protected by intellectual property rights such copyright, trademark, or patent. Nothing in these terms should be construed, expressed or implied, as impacting any existing rights or licenses in such materials or content, if any.
https://typeset.io/papers/estimating-chaos-and-complex-dynamics-in-an-insect-38zyk4wug6
Estimating chaos and complex dynamics in an insect population
Cites methods from "Estimating chaos and complex dynami..."
...The first uses bootstrapping (Efron and Tibshirani 1993, Dennis et al. 2001)....
"Estimating chaos and complex dynami..." refers background in this paper
...It was therefore not surprising that the recognition of chaotic dynamics in ecological models (May 1974a, May and Oster 1976) was followed immediately by the search for chaos in existing population time-series data (Hassell et al. 1976)....
...In particular, two broad classes of stochastic mechanisms important to populations have been widely discussed: demographic stochasticity and environmental stochasticity (May 1974b, Shaffer 1981)....
...Ecological Monographs Vol. 71, No. 2 of) the largest eigenvalue of Jt evaluated at the equilibrium (the eigenvalue commonly used in stability analysis of a discrete-time system; May 1974b)....
...For example, the familiar one-dimensional, discrete-time logistic model forecasts dynamical changes (bifurcations) from extinction to equilibrium to two-, four-, eight-cycles, etc., to chaos as the birthrate parameter is increased (May 1974a)....
https://www.hughsexey.com/dans-ma-valise-saying-what-i-have-in-my-suitcase-a/
OBJECTIF: I can say what is in my suitcase, and why!
In this video I show all the items I have in my suitcase to take on holiday, and give a reason for each.
Watch the video and write down, in English and French, all 10 objects I have in my suitcase.
Write down as much as you can understand about the reasons for each item, in English.... and French too if you feel brave!
Here is the video script. You are going to write, and record, a similar video, describing what there is in your suitcase, and why!
http://forums.linuxmint.com/viewtopic.php?f=175&t=106392&p=600190
I'm using LM 10 LXDE on an older P4 laptop with a maximum RAM of 512 MB. It tends to run rather jerkily. It says there are no updated video drivers to be downloaded. Are there any tricks, like disabling processes or whatnot, to get this thing to run better? Thanks.
https://www.leapdragon.net/2016/01/19/transparency-and-accountability-in-technology/
The notion of accountability, often misimagined by end users as transparency, was a key item of discussion in my dissertation.
When it comes to technology, questions that users may not even think to ask impact technology's usability in significant ways. What is it doing? How is it doing what it is doing? What are the ground rules and operating parameters? Can these be expressed in simple, intuitive ways?
These things are the keys to predictability and interactivity, and they are questions that are also critical in human-to-human interaction, though we don’t often think about it in these terms. But they come up (as Garfinkel pointed out) when they are violated or when exceptions occur. This is one of the reasons that mental illness is so difficult and problematic for us; we struggle to interact with those that are mentally ill because they violate assumptions about precisely the questions above, rendering us unsure about the effects of our actions within the context of the ongoing interaction.
So I’m here to point out that the Google+ transition (from old to new) and Amazon’s author pages are amongst the most recent examples I’ve found of poorly accountable technologies. It’s not clear why they do what they do, what they’re doing, or what will happen as we interact with them. Only after the fact do we know, and of course by then it’s too late to make a decision about whether or not we’d like to do the things that we did—make the gestures that we made—as we related to one another.
Such accountability is often misframed as “transparency” or “documentation” and users tend to bemoan the lack of these when outcomes are unexpected, but in fact, nobody really wants transparency (i.e. understanding of the actual operations at the machine level). Those things are best left to machine state diagrams of the sort that I used to do as a computer science student all the way back in 1991 (when departments were still teaching in C and Pascal and assembly).
Instead, what people really want to know is what the ground rules of an interaction are and what the outcomes will be, as an interactive totality, of any particular interactive choice that they make. So—not “what is this software or hardware doing”—but rather “what will be the result for this interaction and relationship of any particular action that I might take in response to the system’s actions?”
On this level, these two bits of software fail miserably.
— § —
As a supplemental note, the term “accountability” does not imply “responsibility” but in fact Garfinkel’s discussion of the ability “to provide a sensible and defensible account of” what each party to an interaction is doing. Accountability is essential to interaction as it enables parties both to explain themselves (to others and to themselves) and to come to grips with the very same kinds of explanations provided by the counterparty. Unaccountable activity, particularly in social interaction, tends to lack sensibility—that is to say that people cannot make sense of or integrate the sensations of what has occurred. An “accounting” by both parties and the “accountability” of each party’s actions are thus critical both to individual and to mutual understanding.
One can easily see the ways in which such accountability is at the core of most problems in usability and interactivity in the technology space, as has been pointed out by both Suchman (first) and Dourish (later). As it turns out, this concept is also at the core of most of the problems we’ve had building AI systems, though such a point is beyond the scope of a complain-complain post like this one.
https://learn.microsoft.com/en-us/answers/questions/269034/data-copied-into-windows-clipboard-does-it-get-sav
The clipboard is not stored anywhere on disk; it lives in memory exclusively.
A memory object on the clipboard can be in any data format, called a clipboard format. Each format is identified by an unsigned integer value. For standard (predefined) clipboard formats, this value is a constant defined in Winuser.h; for registered clipboard formats, it is the return value of the RegisterClipboardFormat function.
Reference: About the Clipboard
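To make the point concrete, here is a minimal Python sketch (Windows only, illustrative rather than official sample code) that reads the standard CF_TEXT format straight out of the clipboard's global memory object, using the Win32 functions described in that documentation:

```python
import ctypes

CF_TEXT = 1  # standard clipboard format constant from Winuser.h

user32 = ctypes.windll.user32
kernel32 = ctypes.windll.kernel32

# HANDLEs and pointers are pointer-sized; declare them so 64-bit builds work.
user32.GetClipboardData.restype = ctypes.c_void_p
kernel32.GlobalLock.restype = ctypes.c_void_p
kernel32.GlobalLock.argtypes = [ctypes.c_void_p]
kernel32.GlobalUnlock.argtypes = [ctypes.c_void_p]

def read_clipboard_text():
    """Return the clipboard's CF_TEXT contents, or None if unavailable."""
    if not user32.OpenClipboard(None):
        return None
    try:
        handle = user32.GetClipboardData(CF_TEXT)
        if not handle:
            return None
        pointer = kernel32.GlobalLock(handle)  # pin the global memory object
        try:
            return ctypes.c_char_p(pointer).value.decode("mbcs", "replace")
        finally:
            kernel32.GlobalUnlock(handle)
    finally:
        user32.CloseClipboard()

if __name__ == "__main__":
    print(read_clipboard_text())
```

Because the data lives only in that global memory object, rebooting (or another process calling EmptyClipboard) makes it unrecoverable.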
https://www.softserveinc.com/en-us/resources/billing-data-capacity-cloud-based-monetization
Solution Augments Billing Data Capacity for Leading Cloud-Based Monetization Company
Founded in 2008, today our client powers a wide range of publicly traded and privately held companies, from traditional industries to IoT, cloud apps, and high-tech enterprises. They assure top-line revenue growth, faster time-to-market, visibility into revenue streams, and operational savings.
The client's product is an internet-scale monetization and revenue automation platform designed to manage simple to sophisticated billing and payment transactions. People use this software to monetize and quickly bring to market new products and services. The client set out to develop a project for Representational State Transfer (REST) web services that would enlarge the number of sources from which billing data could be retrieved, build the services on top of the existing software infrastructure, and assure the security of business resources.
The project comprised four architecture layers. The web service application was implemented on the top API layer, which served to define REST. The business logic of the project was implemented on the Spring framework-based service layer. The database layer was responsible for integrating with all data sources such as MySQL, PostgreSQL, and ElasticSearch. And the domain layer was a "cross-layer" module which consisted of business object models used in all architecture layers.
In order to complete the project, SoftServe experts decided to apply a requirements-driven development framework. This approach enabled the Swagger Code Generation application to create an interface for easily developing and consuming an API by effectively mapping all the initial java classes, resources, models, and operations associated with them on Swagger YAML. This approach gave the developers the opportunity to edit generated classes and add further business logic. It was powered by Amazon SWF, a fully-managed state tracker and task coordinator, which helped developers build, run, and scale background jobs that had parallel or sequential steps.
The solution delivered by the team of engineers from SoftServe reflected a modern approach to software development and provided an outstanding solution for our client. Due to the successful deployment of the project, developers managed to significantly improve the performance of the current system. The newly-implemented solution enabled the controlling of the system’s components, storing client data, maintaining the database, and processing data flows from heterogeneous sources on another qualitative level, making the embedded software a primary tool for managing data objects.
https://www.stlukesjolon.org/how-to-have-more-than-one-streamlabs-chatbot-counter/
Is Outgrow Co the Best Chatbot Software?
Outgrow is a powerful chatbot software that allows you to engage with your customers through many different methods. You can design your own chatbot which will answer frequently asked questions. You can also create one to search for detailed help articles. Based on your preferences, you can even set your chatbot to be GDPR compliant. Outgrow can also be used to make your website GDPR-compliant.
Outgrow is one of the top chatbot software options. It can help you increase conversion rates and bounce rates. The tools and analytics it provides allow you to analyse results and increase the number of newsletter subscribers. Outgrow can also be used to create a PDF download of your results. However certain aspects of the downsides might turn you off. Learn how to make Outgrow work for you. In addition, it could be helpful to send follow-up messages to your existing customers.
AlphaChat’s conversational artificial intelligence is a powerful tool that can help businesses. Its AI technology lets businesses automatize messages and speech-enabled apps. AlphaChat’s Natural Language Understanding allows businesses to manage support requests and improve communication between their CS team members and customers. Administrators can also monitor key metrics such as the accuracy of their solutions, solve rates, and average response time.
The platform offers a comprehensive support and video tutorials making it easy to build chatbots for customer service. It’s more than just an automated bot builder. It provides a variety of templates that are suited to different uses and can be modified to reflect your company’s voice. Outgrow is fully adaptable and works on tablet, mobile, and desktop devices. Outgrow is the best choice for medium-sized to large-sized companies.
Salesforce Marketing Cloud
Outgrow Co is a cloud-based solution for marketing automation, that integrates marketing campaigns with customer data and behavior. Outgrow clients can create interactive quizzes, recommendations , and more without the need of the assistance of a developer. Outgrow and Salesforce Marketing Cloud are both marketing automation tools. If you’re looking for a chatbot for your business, Outgrow is the right option.
Chatbots are an excellent way for companies to increase revenues and cut costs. In the field of finance, outbound conversations with customers have proven to yield 360,000 hours of labor savings per year. According to one study, 34% of consumers are prone to ignore chatbots. Chatbots can save companies 360,000 hours of labour, and 1 in 5 will purchase expensive products via them.
The COIN chatbot is a fantastic illustration of the advantages of chatbots in business. In the past few years, this chatbot has saved J.P. Morgan 360,000 hours of human labor and has also helped increase their revenue. More businesses are realizing the benefits of chatbots: almost 34% of companies are using chatbots to reduce the amount of time spent on customer service, and 39% of businesses are using chatbots to make their websites more interactive.
https://wiki.lspace.org/index.php?title=Anima_Unnaturale&oldid=357
Broomfog's Anima Unnaturale is a compendium of all the weird and strange animals known to the Disc. In Small Gods, it is one of the scrolls "rescued" by Brutha just before the destruction by fire of the Great Library of Ephebe. Brutha just memorises the layout of pictures and strange squiggles on the page, for copying out later when he has a few spare moments.
It hath thee legges of a mermade, the hair of a tortoyse, thee teethe of a fowle, and the wings of a snake. Of course I have only my worde for it, thee beast having the breathe of a furnace and the temperament of a rubber balloon in a hurricane.
https://www.dyescape.com/articles/authors
All plugins on Dyescape are coded by me, and I'm also responsible for setting up and maintaining all of the servers. I have big plans when it comes to Dyescape's software and servers. One of my plans is to create our own custom anti-cheat built using artificial intelligence / machine learning. It's practically a digital brain. If you've ever seen Psycho-Pass, you'll know what I'm talking about.
I've been playing Minecraft since the 1.0.0 release. Since roughly version 1.2.5 I have been into server management and running my own. I've been in the business ever since and I love doing it. Some time ago, I created a small survival network called Dyescape. It didn't work out great, as the player count never got high or stable at all. The staff team and I decided to shut everything down and make something truly unique of Dyescape.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323864.76/warc/CC-MAIN-20170629033356-20170629053356-00117.warc.gz
|
CC-MAIN-2017-26
| 826 | 2 |
https://community.ragic.com/t/whats-the-best-way-to-achieve-this-please/3891
|
code
|
I have the following sheets for admins - Sheet A(admin), Sheet B(admin) & Sheet C(admin).
Sheet C(admin) was created by ‘New sheet from subtable’ via Sheet A(admin) and has link & load fields from Sheet B. So far so good and everything works great.
I want to set up duplicates for users who should have access to fewer fields/data for Sheet A(admin) and Sheet C(admin), so I duplicated them both and named them Sheet A and Sheet C. However, on Sheet C, when I click on the link back to Sheet A, it actually links back to Sheet A(admin), which I don't want users to access. How do I change it to link to Sheet A instead?
I did try to reset the links, but for some reason Sheet A lost the connection with its parent and I couldn't find any way to reconnect it without deleting everything and starting again.
What is the best way of approaching this, please, before I have another try? Should I duplicate just Sheet A(admin) to get Sheet A and then recreate a 'New sheet from subtable' to create Sheet C? Or is there a better way?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540548544.83/warc/CC-MAIN-20191213043650-20191213071650-00414.warc.gz
|
CC-MAIN-2019-51
| 1,029 | 5 |
http://www.lib.utexas.edu/longhorn_reviews/city-and-city
|
code
|
By: Miéville, China
So you are reading along in this noirish meta-police procedural, indebted to
Bruno Schulz and Italo Calvino and maybe Raymond Chandler, with its surreal
atmosphere of quantum physics, and suddenly you slip down into it. You are trying to
read the story, but the decontextualized puzzles and jokes are getting in the way.
You try to unsee them, but sometimes you just can't and you lose the thread. You
breach - the streets look familiar, the dialog is the same, but there is something
else going on. Elegant, witty, not as elaborate as "The Name of the Rose", but sly,
like P.I. Taibo.
Reviewer: dennis trombatore
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701148834.81/warc/CC-MAIN-20160205193908-00048-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 633 | 10 |
https://www.answers.com/Q/How_to_prevent_unauthorized_memory_access_in_c
|
code
|
Make sure a password has been set on the computer -- one that is easy for you to memorize but hard for others to find out. A software firewall is a program stored on the computer that protects it from unauthorized incoming and outgoing data. A virus protection program helps stop, or detect and fix, virus problems.
You access memory with a pointer or a reference to the memory. To allocate memory dynamically, use calloc or malloc (C or C++) or new (C++ only).
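A minimal sketch in C (the buffer size and names are arbitrary, for illustration only):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* dynamically allocate space for 10 ints, zero-initialised */
        int *buf = calloc(10, sizeof *buf);
        if (buf == NULL)
            return 1;            /* allocation can fail */
        buf[0] = 42;             /* access the memory through the pointer */
        printf("%d\n", buf[0]);
        free(buf);               /* release the memory */
        buf = NULL;              /* nullify so the pointer cannot dangle */
        return 0;
    }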
A pointer in C++ is the same as a pointer in C -- it is a variable that is used to store a memory address and which allows indirect access to that memory address. When a pointer is not in use, it must be zeroed or nullified by assigning the NULL value, thus preventing indirect access to invalid memory.
Define 'low level memory' first.
1) Using C's inline assembly feature, we can directly access system registers. 2) C also supports high-level language features. 3) C can access memory directly using pointers.
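For instance, with GCC on x86-64 (compiler- and architecture-specific, shown purely as an illustration), the stack pointer register can be read into a C variable:

    #include <stdio.h>

    int main(void)
    {
        unsigned long sp;
        /* GCC extended inline assembly: copy %rsp into sp */
        __asm__ volatile ("mov %%rsp, %0" : "=r" (sp));
        printf("stack pointer: %#lx\n", sp);
        return 0;
    }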
There is no automatic memory management in C++ -- it is an unmanaged language. You use the C++ new operator to allocate memory, and the delete operator to release previously allocated memory.
# Manual memory allocation/deallocation
# (Semi-)direct access to registers
C is a middle-level language: one that still allows you to access your computer's memory directly. Java and C# are completely high-level languages, as they don't allow you to directly access memory, while assembly language is said to be low-level because it allows direct access to memory. You can read more on the C language here: http://thetechnofreaks.com/2011/08/23/the-basics-welcome-to-the-world-of-programming/ Actually, there are no 'middle-level languages': machine code and assembly are low-level, everything else is high-level. And of course you cannot break out of your virtual memory space using C (or any other language). It is called 'protected mode' for a reason.
A dangling pointer is one that points to a memory location after the memory itself has been freed or released back to the system. The memory may still contain valid information, but the system can overwrite the data at any time, so any attempt to access that memory via the dangling pointer could prove disastrous. As soon as memory is released, the pointer is invalid -- because the memory it points to is potentially invalid. To prevent this, always nullify pointers (set them to point at memory address zero) when they are no longer required, immediately after releasing the memory they point to. There are occasions when this is not necessary, such as when releasing a member pointer in a class destructor, but if a pointer is re-used, it must be initialised before being accessed again.
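A short illustration of that discipline (names are arbitrary):

    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return 1;
        *p = 7;
        free(p);        /* p is now dangling: the memory has been released */
        p = NULL;       /* nullify immediately after freeing */
        if (p != NULL)  /* guarded access: this branch is never taken */
            *p = 8;
        return 0;
    }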
Yes, C++ has pointers: variables that store memory addresses, or NULL (zero). If the pointer is non-NULL, it can be dereferenced to give indirect access to the object (or variable) residing at the stored memory address, so long as the object remains in scope.
You do not access MS Access from C; you do it from Windows by using MS Access API calls. MS Access does not run on a computer running Linux, QNX or DOS, etc., but those systems can all be programmed in C.
new and delete are the memory management operators in C++. Just as in C we use the malloc() and calloc() functions to allocate memory and the free() function to release it, in C++ we use new to allocate memory and delete to release the allocated memory.
Java does not expose pointer variables; we cannot directly access the memory location where the data is stored in Java.
State-of-the-art graphics usually pushes the boundaries of CPU and memory capacity as graphics become more and more visually impressive. C and C++ allow for very fast code by giving programmers access to low-level operations (such as pointer arithmetic, memory management, etc.).
Unlike a purely high-level language such as Java, C supports direct access to memory in the way assembly language (a low-level language) does. That is why C is called a mid-level language.
A pointer is simply a variable that stores a memory address. Thus a pointer to an object is simply a variable that stores the memory address of an object. Since pointers are variables, they require memory of their own. Pointers may also be constant, which simply means you cannot change what they point to. Pointers can also be dereferenced to provide indirect access to the memory they point to -- hence they are known as pointers. However, unlike C, pointers are not the same as references. In C++, a reference is simply an alias for a memory address and requires no storage of its own.
Main Memory (RAM).
Static memory allocation occurs at compile time where as dynamic memory allocation occurs at run time.
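The difference in one small example (sizes chosen arbitrarily):

    #include <stdlib.h>

    int main(void)
    {
        int fixed[10];          /* size fixed when the program is compiled */
        int n = 25;             /* value known only at run time */
        int *dyn = malloc(n * sizeof *dyn);   /* size decided at run time */
        if (dyn == NULL)
            return 1;
        fixed[0] = dyn[0] = 0;
        free(dyn);
        return 0;
    }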
You need to find the driver for C that will connect to an Oracle or Access database, and then access that driver through a C program.
In Windows you can use the CreateFileMapping API to create shared memory in one program, and OpenFileMapping to access that memory from another program. For a more generic approach, consider using disk files, pipes or messages.
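A bare-bones sketch of the Windows approach (error handling trimmed; the mapping name Local\MyShared and the 4 KB size are just examples):

    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        /* create 4 KB of named shared memory backed by the page file */
        HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                      PAGE_READWRITE, 0, 4096,
                                      "Local\\MyShared");
        if (h == NULL)
            return 1;
        char *view = (char *) MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
        if (view == NULL)
            return 1;
        /* a second process can OpenFileMapping("Local\\MyShared") and read this */
        strcpy(view, "hello");
        UnmapViewOfFile(view);
        CloseHandle(h);
        return 0;
    }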
There are no access specifiers in C. All functions and data are public.
Contiguous memory allocation in C programming refers to the assigning of consecutive memory blocks to a process. Contiguous memory allocation is one of the oldest and most popular memory allocation schemes in programming.
Yes, in C it is possible to allocate an array in expanded memory at run time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00497.warc.gz
|
CC-MAIN-2021-43
| 5,707 | 23 |
https://cacm.acm.org/news/256943-trouble-at-the-source/abstract
|
code
|
Machine learning (ML) systems, especially deep neural networks, can find subtle patterns in large datasets that give them powerful capabilities in image classification, speech recognition, natural-language processing, and other tasks. Despite this power—or rather because of it—these systems can be led astray by hidden regularities in the datasets used to train them.
Issues occur when the training data contains systematic flaws due to the origin of the data or the biases of those preparing it. Another hazard is "over-fitting," in which a model predicts the limited training data well, but errs when presented with new data, either similar test data or the less-controlled examples encountered in the real world. This discrepancy resembles the well-known statistical issue in which clinical trial data has high "internal validity" on carefully selected subjects, but may have lower "external validity" for real patients.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00762.warc.gz
|
CC-MAIN-2024-10
| 946 | 3 |
https://lowithoutlimits.com/author/laurenowicki/
|
code
|
This video is a leg workout using ONLY Dumbbells!
In this video I show you how to make the BEST Pumpkin Bread that’s Gluten-Free and Dairy-Free!
In this video I share and test out some ways to use FOOD to CLEAN things around your house/apartment/wherever!
The Hip Hinge is super important for moves like deadlifts, good mornings, kettlebell swings and more, and having that mind-muscle connection allows you to really work what you want to without risking an injury.
Hi everyone, in this video I show you how to make your own paleo and vegan crackers in LESS than 30 minutes!
In this video, I take you on a tour of my house plants ONE YEAR LATER!
A “What I Eat In A Day” video as I go out of my apartment and aim to have no added sugar!
In this video, I show you how to make your own Acai Bowl at just a FRACTION of the cost you can buy one for!
In this video I break down what “Carb Cycling” is as well as the reasons as to why people do it, benefits of it, and HOW to do it!
In this video I put the Woman Code 4-Day Reset to the test!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986649035.4/warc/CC-MAIN-20191014025508-20191014052508-00527.warc.gz
|
CC-MAIN-2019-43
| 1,047 | 10 |
https://lists.debian.org/debian-kernel/2012/01/msg00229.html
|
code
|
Bug#654876: "divide error" while receiving from iwlagn 5300, leading to kernel panic
On Mon, 09 Jan 2012 at 00:53:26 +0000, Simon McVittie wrote:
> Via netconsole, I can bring you the attached logs (three separate bug/panic
From the crash dump, I think this may be to do with IGMP. It started
around the time I added a Playstation 3 (which I believe uses multicast for
UPnP media sharing) to my network, although correlation doesn't imply
causation... I haven't been able to reproduce the crash with wired-only
networking, though, which seems strange if it's independent of the device.
The instruction pointer in the crash dump is in igmp_start_timer(), which
performs a modulus operation. If max_delay is 0 I believe that will cause
a divide error.
In igmp_heard_query() there are several cases, depending on the version of
IGMP in use.
If ih->code is 0 (IGMP v1), max_delay is set nonzero.
If it's IGMP v2, max_delay could conceivably be zero if there's an
overflow or something? It's calculated from data in the network packet.
If it's IGMP v3 and v2 queries have been seen, max_delay could, again,
conceivably be zero?
In the v3 case, max_delay is explicitly clamped at 1 after a similar
calculation, suggesting that this should be done in the other cases too.
At the end of the function (in all cases except v3), igmp_mod_timer() is
called with max_delay; perhaps clamping it at 1 there, or even in
igmp_start_timer(), would fix this?
Does this make sense? If so, I can create a simple patch and try it out
for a while.
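For concreteness, the sort of clamp I mean (an untested sketch against my reading of igmp_heard_query(); treat it as illustration only):

	/* before igmp_mod_timer(im, max_delay) is called for the
	 * v1/v2 cases: never let max_delay reach the timer as 0,
	 * or igmp_start_timer() will end up computing "x % 0". */
	if (max_delay == 0)
		max_delay = 1;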
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945037.60/warc/CC-MAIN-20180421051736-20180421071736-00398.warc.gz
|
CC-MAIN-2018-17
| 1,534 | 25 |
http://ux.stackexchange.com/questions/tagged/reports+alignment
|
code
|
Label alignment for report fields
I've seen information that indicates that top aligned labels are more efficient for input forms. Does the same concept apply to labels on reports? Is top aligned better, or is it better to use ...
Jan 13 '12 at 15:41
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00057-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 2,223 | 53 |
https://smartbear.com/blog/test-and-monitor/agile-testing-challenges-web-services-testing-issu/?feed=test-monitor
|
code
|
This is the fourth installment of blogs regarding the Top 5 Agile Testing Challenges.
You can view the prior blogs or download a more detailed white paper, here:
1. Agile Testing Challenges - Finding Defects Early
2. Agile Testing Challenges - Broken Builds
3. Agile Testing Challenges: Inadequate Test Coverage
4. Five Challenges for Agile Testing Teams: Solutions to Improve Agile Testing Results (white paper)
Agile development is a faster, more efficient and cost-effective method of
delivering high-quality software. However, agile presents testing challenges
beyond those of waterfall development. That’s because agile requirements are
more lightweight, and agile builds happen more frequently to sustain rapid
sprints. Agile testing requires a flexible and streamlined approach that
complements the speed of agile.
Inadequate Testing for your Published API
Many testers focus on testing the user interface and miss the opportunity to
perform API testing. If your software has a published API, your testing team
needs a solid strategy for testing it. API testing often is omitted because of
the misperception that it takes programming skills to call the properties and
methods of your API. While programming skill can be helpful for both automated
and API testers, it’s not essential if you have tools that allow you to perform
testing without programming.
Getting Started with API Testing
Similar to automated testing, the best way to get started with API or Web Services testing is
to take baby steps. Don’t try to create tests for every API function. Focus on
the tests that provide the biggest bang for your buck. Here are some guidelines
to help you focus:
- Dedicated Resource: Don’t have your manual testers
develop API tests. Have your automation engineer double as an API tester;
the skill set is similar.
- High Use Functions: Create tests that cover the most
frequently called API functions. The best way to determine the most called
functions is to log the calls for each API function.
- Usability Tests: When developing API tests, be sure to
create negative tests that force the API function to spit out an error.
Because APIs are a black box to the end user, they often are difficult to
debug. Therefore, if a function is called improperly, it’s important that
the API returns a friendly and actionable message that explains what went
wrong and how to fix it.
- Security Tests: Build tests that attempt to call
functions without the proper security rights. Create tests that exercise the
security logic. It can be easy for developers to enforce security
constraints in the user interface but forget to enforce them in the API.
- Stopwatch-level Performance Tests: Time methods (entry and exit points) to analyze which methods take longer to process than expected.
Once you create a base set of API tests, schedule them to run automatically
on each build. Every day, identify any tests that failed to confirm that they’re
legitimate issues and not just an expected change you weren’t aware of. If a
test identifies a real issue, be happy that your efforts are paying off.
API testing can be done by writing code to exercise each function, but if you
want to save time and effort, use a tool. Remember, our mission is to get the
most out of testing efforts with the least amount of work.
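To make that concrete, here is a toy harness in C for a hypothetical library function api_divide() (the function, its error code and the messages are all invented for illustration; a dedicated tool does the equivalent for web-service calls without any code):

    #include <stdio.h>

    /* hypothetical API under test: returns 0 on success, -1 on bad input */
    static int api_divide(int a, int b, int *out)
    {
        if (b == 0)
            return -1;   /* a friendly, actionable failure instead of a crash */
        *out = a / b;
        return 0;
    }

    int main(void)
    {
        int r;
        /* positive test: a normal call succeeds */
        if (api_divide(10, 2, &r) != 0 || r != 5)
            printf("FAIL: happy path\n");
        /* negative test: force the function to report an error */
        if (api_divide(10, 0, &r) != -1)
            printf("FAIL: divide by zero must be rejected, not crash\n");
        printf("done\n");
        return 0;
    }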
When considering API testing tools, take a look at
SmartBear’s soapUI Pro. It’s easy to learn and has scheduling capabilities
so your API tests can run unattended, and you can view the results easily.
Which API Metrics Should You Watch?
Focus on API function coverage, API test run progress, defect
discovery, and defect fix rate. Here are some metrics to consider:
- Function Coverage: Identifies which functions API tests cover. Focus on the functions that are called most often. This metric enables you to determine if your tests cover the functions that matter most.
- Blocked Tests: Identify API tests that are blocked by defects or external issues
(for example, compatibility with the latest version of .NET).
- Coverage within Function: Most API functions contain several properties and methods. This metric identifies which properties and methods your tests cover, to ensure that all functions are fully tested (or at least the ones used most).
- Daily API Test Run Trending: This shows, day-by-day, how many API tests are run,
passed, and failed.
What Can You Do Each Day to Ensure Your API Testing Team Is Working Optimally?
Testing teams should perform these things every day:
- Review API Run Metrics: Review your key metrics. If the overnight API tests found defects, retest them manually to rule out a false positive. Log all real defects for resolution.
- Continue to build on your API Tests: Work on adding more API tests to your arsenal using the guidelines described above.
For tools to support these practices:
Download a free trial of soapUI Pro
Download a free trial of QAComplete
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00150.warc.gz
|
CC-MAIN-2020-34
| 4,863 | 72 |
https://gunstreamer.com/watch/long-range-shotgun-trick-shots-gould-brothers_rDOLWORaKNt13tS.html
|
code
|
Long Range Shotgun Trick Shots | Gould Brothers
Long Range Shotgun Trick Shots: Many of our shotgun trick shots are with hand-thrown clay targets at fairly close distances, but in this video, we step it back to pull off a few of our favorite shotgun trick shots from 80+ yards.
🎉 💥 GBX LIVE 🎉 💥
Videos are great, but live is even better!
Come see us live http://bit.ly/gbxliveschedule
- OR -
Book us for your event and make it one to remember http://bit.ly/BooktheBros
😎 WE'RE SO MUCH COOLER ONLINE 😎 JOIN US!
► Gould Bros. IG http://bit.ly/GouldBrothersInstagram
► Aaron's IG http://bit.ly/AarongIG
► Steve's IG http://bit.ly/StevegIG
► Facebook http://bit.ly/GouldBrothersFacebook
LEARN MORE ABOUT THE PRODUCTS WE USE
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391277.13/warc/CC-MAIN-20200526160400-20200526190400-00274.warc.gz
|
CC-MAIN-2020-24
| 749 | 13 |
https://www.fibreop.ca/support/article/receive-error-when-trying-to-my-changes-to-my-a-la-carte-package/7569
|
code
|
You can only add or remove channels if you are subscribed to an à la carte package.
When you make your selections, the number of channels must exactly match your à la carte package.
- For ALC 15: 15 channels must be selected.
- For ALC 30: 30 channels must be selected.
If you would like to change your subscribed à la carte package, you must contact us at 1-866-FIBREOP (342-7367)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370502513.35/warc/CC-MAIN-20200331150854-20200331180854-00492.warc.gz
|
CC-MAIN-2020-16
| 389 | 5 |
https://www.physics.uoguelph.ca/events/graduate-seminar-series/advice-panel-on-grad-school-applications
|
code
|
In this very special edition of the GSS, come observe an advice panel made up of resident grad students. The focus of discussion is on the search for and application to graduate programs. Panelists will each relate their own experiences, detailing how they got to where they are. They will impart their wisdom on topics such as finding the right supervisor, applying internationally, and which exams and scholarships to look into. In addition, the panel will be wide open to any questions that audience members may have concerning this subject matter. This should be a great session for anyone thinking about where they may be going after a Bachelor’s degree, either in physics or beyond.
Graduate Seminar Series
The seminar series consists of weekly talks designed and delivered by graduate students within the department. The goal of this project is to expose upper-level undergraduates to current physics research. The talks are aimed at the fourth-year level, but all are welcome and encouraged to attend.
Snacks will be provided at 12:30.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517495.99/warc/CC-MAIN-20181023220444-20181024001944-00197.warc.gz
|
CC-MAIN-2018-43
| 1,045 | 4 |
http://www.geek.com/forums/topic/built-in-video-workaround/
|
code
|
2/1/2000 (4:44 am) by admin
We got a couple comments about our System Builder FAQ suggestion that <A HREF="http://www.geek.com/sysup/sysupfaq.htm#q4"> built-in video</A> can't be disabled:
Regarding the question (of built-in video), it is possible to disable the built in video in Win 9x. Most motherboards disable the built in video if you install a new video card, but Windows won't disable it automatically. To get around this, first disable the built in video, in System / Device Manager. Then shut down, install your new card and turn on your PC. Voila, Windows detects the new card, installs the drivers and you're up and running.
You will run into problems if you're re-installing Windows, as Windows will detect both cards and report a conflict. To work around this, install Windows with the built in video, and after it's completely installed, follow the procedure above.
and another one:
If you can't disable the integrated video in the BIOS then you should be able to disable it in Windows 98 through the Device Manager in the display adapter and also by changing video in the BIOS from AGP to PCI.
It's worked every time for me and for everyone I have ever explained it to!
It appears that our answer may be outdated. Certainly these users have had different experiences than we did when we first tried to disable on-board AGP video. Firstly, if video is built into the motherboard, you don't have a spare AGP slot, so you must use a PCI video card. Secondly, I've heard reports of people having problems getting rid of the AGP video completely - even when following instructions like these.
So, what do the readers think? Have you disabled on-board video recently? How did you do it, and how well did it work? What PCI card did you use to replace the built-in AGP video?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121644.94/warc/CC-MAIN-20170423031201-00170-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,782 | 9 |
https://www.noodle.com/learn/details/126222/tedxrutgers-tina-eliassi-rad-network-science
|
code
|
Tina Eliassi-Rad is an Assistant Professor of Computer Science at Rutgers University. Until September 2010, Tina was a Member of Technical Staff and Principal Investigator at Lawrence Livermore National Laboratory. Tina earned her Ph.D. in Computer Sciences (with a minor in Mathematical Statistics) at the University of Wisconsin-Madison in 2001. Broadly speaking, Tina's research interests include data mining, machine learning, and artificial intelligence. Her work has been applied to the World-Wide Web, text corpora, large-scale scientific simulation data, complex networks, and cyber situational awareness. Tina is an action editor for the Data Mining and Knowledge Discovery Journal. In 2010, she received an Outstanding Mentor Award from the US DOE Office of Science and a Directorate Gold Award from Lawrence Livermore National Laboratory for work on cyber situational awareness. About TEDx (x = independently organized event): In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized (subject to certain rules and regulations).
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119642.3/warc/CC-MAIN-20170423031159-00598-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,659 | 4 |
https://techblog.xavient.com/all-that-you-need-to-know-about-automic-application-release-automation-ara/
|
code
|
Automic’s Application Release Automation (ARA) provides a consistent, repeatable and auditable process to automate the packaging and deployment of next-gen applications and update of applications across multiple environments.
ARA also enables consistency and predictability across environments, which significantly reduces manual effort and human error while increasing deployment speed and application reliability. It also helps increase visibility and control via role-based access, peer review of workflows, native approval support, and automatic change tracking.
The existing MOPs (methods of procedure) are automated through workflows that include environment health checks, deployment, validation, notification and rollback if required. Deployments are tracked and recorded, the dashboard provides statistics on application deployments across target environments, and Data Center teams can have specialized views of execution status across environments.
Advantages of Automic’s Application Release Automation
- Easy Drill In and Out: With the introduction of new Automic’s Web Interface, drilling in and out of workflows at any level is efficient, easy and refined. It supports easier development and you can monitor live deployment workflows by drilling down.
- Workflow Development: This is one of the most important elements of ARA. It is easy to use, and more than one team member can work on a workflow at the same time.
- Automatic Rollback: ARA can automatically roll back if an issue crops up during the deployment phase; it is currently the only platform in the market that provides this feature. The run books in Automic that are used to deploy the artifact have corresponding built-in rollback functionality. It also allows you to roll back manually post-deployment.
- Server and Application Configurations: Automic can set application properties, and the set property values can be accessed by the application during workflow execution.
- Workflow Versioning: Users can easily roll back workflow execution as it is version controlled. Users can drill down to the previous executions to verify the logs.
- Web-based Editor: ARA comes with a fully web-based workflow editor and monitor and has a simple design that facilitates easy workflow development.
Automic’s Web Interface
1. It is the responsibility of the System Administrator to set up the login page of Automic ARA.
2. The system administrator will provide users with login details. ARA prevents users from logging into their accounts with two different ids in two tabs of the same browser.
3. After logging in, the users see the following interface (dashboard):
4. Dashboards are a quick way to access objects, tasks and functions. Users can even customize their dashboard if they are authorized to do so by the administrator.
5. The UI of Automic ARA is organized around perspectives. A perspective is a functional area that contains access to the functions a particular user needs; depending upon your user roles, you will have access to one or more perspectives.
(i) The Administrative perspective is where the admin creates and manages users, user groups, agents, connections and more.
(ii) The Process Assembly perspective is where developers and object designers create and configure objects and define their logic by writing scripts in the objects, which are then executed and tested.
(iii) Process Monitoring is where operators and managers keep an eye on the processes to make sure the workload is processed smoothly every day, and troubleshoot if something goes wrong.
(iv) Dashboard perspective gives operators and managers quick access to customized views.
Here is what the system architecture of Automic’s ARA looks like:
Functional Architecture – Deployment Process
The deployment process involves the following steps:
- Create an application along with the various components that are necessary to perform some functionalities from the perspective of Release Automation.
- Then create a package to deploy the application to the server via agents. These agents are nothing but simple programs that run in the background on Windows or Linux servers.
- Design a development workflow with multiple actions to perform some tasks. Actions are combined and linked to each other to perform a single process.
- Create an operating environment that includes endpoints where the application is deployed.
- Add deployment targets and assign them to the respective environment. Deployment targets are the servers on which you will deploy the package.
- Create a login object that will store the login credentials to the servers.
- Design ARA infrastructure elements for process ownership, variable creation, and logical modeling.
- Deploy the application by executing the workflow. When executing the workflow, make sure you assign the deployment package and deployment profile. The deployment profile contains the details of the login object and deployment targets.
The all-inclusive platform provided by ARA enables the development and operations teams to automate the deployment pipeline right from the development stage to the production stage across the environments, which helps in promoting and rejecting versions, enacting automatic rollback of changes whenever necessary, and monitoring environments. All in all, Automic Software Inc. with ARA aims at creating an easily orchestrated and centrally managed deployment pipeline. Thanks to the large set of built-in integrations and plugins, Automic’s deployment automation tool suits all mid-sized organizations across industry verticals.
That is it from us, see you next time. Do let us know your thoughts on ARA in the comments below.
Until next time!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144165.4/warc/CC-MAIN-20200219153707-20200219183707-00507.warc.gz
|
CC-MAIN-2020-10
| 7,181 | 46 |
https://discussions.apple.com/thread/4441020
|
code
|
Would it be safe to use Flash video?
Yes, quite safe.
Safari is the fastest and safest browser for OS 10.5 Leopard.
Security of OS X generally:
Security Configuration for Version 10.5 Leopard:
Suggestions for ‘safe surfing’:
1. In Safari Preferences/Security:
Do not check the box 'Allow websites to ask for location information'
Accept Cookies only from sites you visit
Select 'None' for Database storage
From the Safari menu, select 'Block pop-up windows'
2. Do not allow Google to register your interests:
Removal/prevention of Google cookies:
3. Download and install Ghostery from http://www.ghostery.com/
having read about what it does. It prevents you being tracked by outfits like DoubleClick - and there are many. Ghostery will indicate how many such tracking devices it has blocked: some sites use as many as 15! This Apple site uses one, sometimes two. That will effectively stop unsolicited advertising pop-ups.
4. Do not use a cell phone for any financial transactions. Such phones are simply too easy to lose.
There is no such thing as total internet security, but what I have suggested above will go a long way towards preventing invasive tracking. As regards other aspects of internet security, please read on:
You will find this User Tip on Viruses, Trojan Detection and Removal, as well as general Internet Security and Privacy, useful:
The User Tip (which you are welcome to print out and retain for future reference) seeks to offer guidance on the main security threats and how to avoid them.
More useful information can also be found here:
Klaus1, thanks a lot for that!
I had a look at the links, but the first link redirects to:
and the 3rd one (for Security Configuration for Version 10.5 Leopard) leads me to:
http://www.apple.com/osx/server/ ?? Edit: found it through googling - 260 pages...
I've been mainly using Firefox lately, where I have ( roughly) the security settings you mentioned; plus Ghostery, TrackMeNot and Google Analytics Opt-out Browser Add-on.
Could/should I stay with FF or switch to Safari (with similar settings & add-ons, if possible/available)?
I do quite a bit of internet-banking and online purchase, though.
Sorry about the links! Apple have a habit of 'losing' articles about versions of OS X that they no longer support!
As for Firefox v. Safari, that has to be your call. I find Firefox much slower than Safari, but whatever suits you best.
As not all banks keep up with the latest browsers, you may find that some sites work better with one browser, and another with another (if you follow me).
Keep both browsers on your Mac. With the add-ons you mention you should be pretty secure!
Since I recently, as someone had suggested to improve security, unticked all Java preferences (in the /Applications/Utilities/ folder), I see a 'missing plug-in' message in emails whenever a picture or the like is attached. Should I leave it like that (and look at the content of the attachment through Quick Look before opening), or is it safe to reset these Java preferences (by clicking Restore Defaults)?
You have changed the subject! If you are now running Snow Leopard, and you recently applied the latest security updates:
Mac OS X v10.6: Mail.app won’t open, or "You can't use this version of Mail…" alert after installing Security Update 2012-004:
Fellow user Grant Bennet-Alder offers this solution:
Some users have reported this problem if the Mail Application has been moved out of the top-level /Applications folder, or duplicated in another location.
When the Security Update is done, the old version of Mail is disabled.
The solution has been to:
1) make certain Mail is in the /Applications folder
2) There is no other copy anywhere else.
3) Once steps 1 and 2 have been done, Manually download and re-apply the Security Update (2012-004) by hand.
If the Mail.app has been LOST, it can be re-installed by applying the 10.6.8 version 1.1 combo update. But this update is quite large and it is usually not necessary:
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398454160.51/warc/CC-MAIN-20151124205414-00293-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 3,963 | 42 |
http://excel.bigresource.com/Text-and-Formula-in-Same-Cell--3djRiJPO.html
|
code
|
Text And Formula In Same Cell?
How do I enter both text and a formula into the same cell?
Eg. Cell needs to result in: "Today's sales = $12,500" where $12,500 is the result of a formula.
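One common approach (assuming the $12,500 comes from, say, SUM(B2:B13); the range is only an example) is to concatenate a text string with a formatted number:

="Today's sales = "&TEXT(SUM(B2:B13),"$#,##0")

TEXT() controls the display format here, since plain concatenation would drop the currency formatting.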
Related Forum Messages:
Copy A Formula In A Cell And Then Paste Only The Text Of The Formula
I would like to copy a formula in a cell and then paste only the text of the formula, but I can't figure it out. Basically, I would like to avoid going into the cells and absolute referencing or hitting F2, then copying the text.
When I hit "Ctrl C" to copy the cell, then hit "Alt/E/S/F/Enter" to paste the formula, it is just like a regular copy/paste formula-wise in that the references move.
Convert Cell Reference Text To Cell Formula
I am wanting to convert a cell reference text
to an actual cell reference
Manually I can go through each cell and click F2 + Enter and Excel automatically changes it.
I have tried recording a macro whereby I click through each cell with F2 + Enter but the VBA writes the actual formula "=$A$1" rather than the process. This does not work as the cell reference is variable.
I'm NOT wanting an external cell to convert it for me
because I am wanting to copy the answer to another independent spreadsheet
I'm NOT wanting to paste values
i.e. return the answer from cell $A$1
because I want the cell reference to remain within the cell.
Formula To Get The Cell With The Text
I need an excel formula to find the part of the text in the range of cells and display the values.
Eg: cell's A1 :A25 has text in it, And B1 :B25 has values in it.
I need to find "ABC" from A1:A25 ,IT could be either "ARRA ABC" or
"NON ARRA ABC" but not just and display the values for this frm B1:B25 in C1.
examples for A1:A25 is
A1 ARRA ABC
A2 Non ARRA DEF
A4 Non ARRA GHI
A6 ARRA JKL
I used the following formula. It seems to be working, but only in that cell; if I try to put the same formula in another sheet or another cell, I don't see the values.
My Formula is as follows.
=IF((OR(A1:A25="ARRA ABC",A1:A25="NoN ARRA ABC")),B1:B25,0)
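A SUMPRODUCT version works from any cell or sheet (a sketch; it sums the B values for every row matching either variant):

=SUMPRODUCT(((A1:A25="ARRA ABC")+(A1:A25="NON ARRA ABC"))*B1:B25)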
Formula From Text Cell
On the attached sheet I am trying to extract the number from the cell "under 200.5 pts" so I get just 200.5. Then the cell with "L" in it changes dependent on the number in the total points cell. When I try, I am getting the same answer regardless of whether the total points number is higher or lower than the extracted 200.5.
Insert Text From Once Cell Into A Formula In Another Cell
I built a formula that should work, but it's too long so I need to condense it.
I have three columns, column 1 has names, column 2 has a formula.
I have 15 sheets, each with a name that could appear in column 1.
If the cell in column 2, sheet 1 is Bob, I want it to pull H5 from sheet bob. That works as:
=IF(A5="Bob", 'Bob Data'!H4, "Work in Progress")
But if I build that formula for all the possible names, it's too long. Is there a way to make the formula autofill with the name in cell A5
So: =IF(A5="XXXX", 'XXXX'!H4, "Work in Progress")
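One way to do this (a sketch using the names from the question) is INDIRECT(), which builds the sheet reference from the text in A5:

=IF(A5="","Work in Progress",INDIRECT("'"&A5&" Data'!H4"))

This assumes every sheet follows the "<name> Data" naming pattern, and INDIRECT() only works while the referenced sheet's workbook is open.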
Adding Text In Same Cell As Formula?
I have a list of numbers in column A (i.e.: 1234) and I need them to show up in column B with an "*" asterisk on each side of the number (i.e.: *1234*). So I was using "=a1" in cell B1; is there a way to add the asterisks to the formula as text?
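A concatenation like this (shown for B1) should do it:

="*"&A1&"*"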
Formula Only For Bold Text Cell
i have a worksheet which has a price list for parts, about 2500 rows. in the Column C i have a retail price and in Column D have -5% of the C. i need to add Column E -10% of CERTAIN items, the ones in BOLD Only, of Column D. and change the color of that cell, is there a easy way to do this. i have attached screen shot what i mean.
Formatting Formula And Text In Same Cell
Is it possible to apply formatting to a formula in a cell when you are combining that formula with text? As an example, I want to format the following as a percentage: ="The result is"&" "&(a2/a1)
Currently, it is returning [e.g.] ...result is 0.5, instead of ...result is 50%
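Wrapping the calculation in TEXT() should give the desired display:

="The result is "&TEXT(A2/A1,"0%")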
Substituting Text In A Cell For A Worksheet Name In A Formula
The content of cell "animal!A1" will change according to a simple vlookup table. Let's say the value can be "dog", "cat", or "horse". In cell "animal!A5", I want to duplicate the content either "dog!A5", "cat!A5", or "horse!A5", depending on the current value of "animal!A1".
I've tried to do a simple reference like:
wanting the A1 to actually read either dog, cat, or horse so the reference would refer to the worksheet of the same names. This doesn't work, so I need to know if there is a way to do this.
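INDIRECT() builds the reference from text, so placing this in animal!A5 (a sketch using the names from the question) should work:

=INDIRECT(A1&"!A5")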
Parse Words From Cell Text Formula
"Use a formula to fill in column F (brand name) in the data worksheet. The Brand Name is the Branded Description minus the last word.
NOTE extra mark: If your formula can’t find a space (is error = true) then it takes whatever is in the cell and uses that."
Would I be using the CONCATENATE formula or something similar?
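One possible formula (a sketch for a branded description in A2, Excel 2007 or later for IFERROR; it strips everything after the last space and falls back to the whole cell when no space is found):

=IFERROR(LEFT(A2,FIND("~",SUBSTITUTE(A2," ","~",LEN(A2)-LEN(SUBSTITUTE(A2," ",""))))-1),A2)

The SUBSTITUTE/FIND pair locates the last space by replacing it with a character ("~") assumed not to occur in the text.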
Numbers And Text In Same Cell & Tally In A Formula
Can numbers and text be included in the same cell and still have the number be included in the total in a formula in another cell? Or must a cell only have numeric values for it to be seen/included in a formula's total value.
I'm trying to create a database that totals materials for a construction project. I want to display the number of doors for a house in a row of cells and have the all the doors totaled in the last cell. This I have no trouble doing.
The problem arises when I want to add some text information about the style of each door in the same cell that the number of doors is shown. As soon as text information is added to a cell that has numeric information, that cells numeric information is not included in the final total in the last cell in the row.
I resorted to using comments instead, but, when the are made visible on the spreadsheet, they don't seem to lock to a relative position regarding the cell they're attached to. For instance, if I widen columns or make any significant spatial changes to the spreadsheet, the comments don't move with the changes.
There may be a way to lock comments to stay in a relative position regarding the cell they're attached to. And if that's the only way to make comments for the items in each cell stay with the cell, then I'll have to use that method. But I'd rather not have to use the comments function at all.
I'd much rather be able to have numbers and text be in the same cell, and still have the number value of that cell be included in a formula total at the end of a row of numeric information.
Example: (In this example separate cells that include both numeric values and text are indicated by parenthesis. The final cell that has the formula that totals the numeric information in the separate cells is indicated by brackets)
(30, raised panel doors, unpainted) (10, raised panel doors, white)
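If the count always precedes the first comma, as in the example above, one way to total the cells anyway (a sketch for entries in A1:A10, assuming every cell in the range follows that pattern) is:

=SUMPRODUCT(--LEFT(A1:A10,FIND(",",A1:A10)-1))

This extracts the leading number from each cell as text and coerces it back to a value, so the descriptive text no longer blocks the total.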
Formula Turns Cell To Text Format
I have a workbook with two worksheets. I added a formula to the first worksheet
= COUNTIF(Scorecard!H3,"K"). It works fine when I add another COUNTIF that references another column (baseball fans may realize I'm counting total strikeouts for a batter): =COUNTIF(Scorecard!H3,"K")+COUNTIF(Scorecard!L3,"K") However, when I try to expand this to cover more columns, =COUNTIF(Scorecard!H3,"K")+COUNTIF(Scorecard!L3,"K")+COUNTIF(Scorecard!P3,"K")
Excel automatically changes the format of this cell to "Text" and it shows the formula as text instead of calculating it. What could be wrong here? Excel's documentation is woefully inadequate for cases like this. Is there an undocumented limit on how many times I can add COUNTIFs together?
Formula To Highlight Text Within A Cell Which Also Has Numbers
I have attached a file which shows some cells which start with "p" and then a number and some have the same but with the word " total" in them.
I would like to run a formula in the column next to it which will highlight which cells have that word in order that I can data sort a large file and delete the totals.
I think it will be an IF formula on cells that contain criteria.
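A helper-column formula along these lines (the word and cell are examples) returns TRUE for the rows to delete, which can then be sorted on:

=ISNUMBER(SEARCH("total",A1))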
Find Cell With Text & Insert Formula Below
I want the macro to:
1. search A1:AZ1 to find the cell that has the text "VBA Test" in the cell. There could be other text in the cell as well - this is not an exact match - but these two words are the common text.
2. go to that cell
3. go to one cell below that
4. enter a formula (I've got it from here ....)
Formula That Will Test Text Conditions In A Single Cell
I need a function that will use a column of text values and test these values
to see if one or more of the values exist in a single cell. If it does I need
the function to return true or false.
Ie. cell A1 contains the text "Jim Smith" the B column contains the test
names (column of test values ) ie. B1 is "bill" B2 is "fred" B3 is "jim".
Because Jim is in the cell A1 I would need the function in C1 to return the
value "true". If A1 contained the text "bob smith" then function in C1 would
return the value "false".
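One array-style approach (a sketch using the layout from the question, with the test names in B1:B3):

=SUMPRODUCT(--ISNUMBER(SEARCH($B$1:$B$3,A1)))>0

SEARCH() is case-insensitive, so "Jim Smith" matches the test value "jim".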
Return Formula Value/Text Based On Many Cell Conditions
This is to manage which departments (approxiamately 30) within a business need which compulsary training (approximately 11 courses)
Spreadsheet currently reads list of new employees and I want to be able to have "YES" or "No" values under the different courses
Is there a formula/function that i can use (like the IF Formula) to complete the following information;
EG: =IF(OR(A3=H2, A3=H5 etc... ), "YES", "NO"
Column H lists all departments
Column A lists deaprtments
A3 representing the 1st Department needing training
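Rather than chaining OR() terms for 30 departments, COUNTIF() can test membership in the whole list at once (a sketch assuming the department list sits in H2:H31):

=IF(COUNTIF($H$2:$H$31,A3)>0,"YES","NO")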
Macro To Edit A Cell & Convert Formula To Text
Have a macro that copies a formula from each of 100 workbooks to a new workbook. I want to display these formulas as text and want a macro or someway to display these cells as text. I have tried to record a macro that presses the F2 key, the home key and the apostrophe. This works for the one cell but provides the following macro that does not work for anyother cell.
ActiveCell.FormulaR1C1 = _
"'=VLOOKUP($A$30,'G:Variance Reports FY07[Salary Dist Var Repts_Cur Mth.xls]end of July'!$E$76:$G$200,3)"
Re-establishing Formulas On “text” Input For A Specific “cell” = Original Formula Act
See attached worksheet for reference. Is it possible (while utilizing the same spreadsheet on a weekly basis) to zero a spreadsheet subsequent to its use. Importantly however, all relevant formulas must remain perfectly intact and will re-establish themselves once relevant data is placed inside an individual cell? In this case, as soon as a “Name” (or even a letter) is referenced inside the “Name” column: H10:H19?
In other words, the entire sheet is blank bar the top date and respective headings. Once any text is placed inside cells H10:H19, the formulas from the associated Row re-applies itself to the “Week-Start” dates, “Week-End” dates and references a default “Phone” amount for ‘$10’? The Data Validation formulas I’m sure would remain undamaged? This would prevent ‘text clutter’ (such as dates extending to the bottom with no apparent referencing or connecting information?
Show Cell Value(text) In Comment Box Text, Or Mouse Tool Tip On An Gif Icon
I have a spread sheet were the area is getting very limited. I need to insert a small icon and when the mouse goes over (like it does in a form tool tip) will show the value of a cell (text value) located in another sheet in same workbook, or I was thinking inset a comment next to the icon and link the comments of the comments text to cell with the text value.
I've looked at the properties of these two objects and can't figure it out.
How To Use A Text Formula In A Real Formula
This is going to be hard to explain, but I'll give it a try.
I have a list of formulas written as text in column B.
Each formula corresponds to a type of road in column A.
I would like to create a formula that will choose the right formula and substitute the variable "x" with a specific cell (let's say $Z$1) to finally give me the answer in column C.
Text Search Returns Cell Text Contents Of Different Column In The Same Row
Search a worksheet for a user-defined text string, and have Excel return the contents of a predetermined column in the same row in which the text string was found.
A prepopulated worksheet has the text "gold" entered in cell T278.
1. User searches for "yellow_metal"
2. Excel finds "yellow_metal" in row 278, say in cell A278.
3. Excel then goes to a predetermined column (programmed as part of a macro or VB), say "T", and returns the text contents of the cell in that column, T278 in this example.
4. Excel returns "gold"
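INDEX/MATCH does exactly this without a macro (a sketch using the example above, assuming the search term is an exact cell match in column A):

=INDEX(T:T,MATCH("yellow_metal",A:A,0))

MATCH() finds the row containing the search text and INDEX() returns the value from column T in that row.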
Combine Cell Content With Text File & Save As Text
- I have an Excel file with the data I need
- I have a fixed txt (HTML) template that I need to integrate the Excel information into
- The final result I want to achieve is a saved .txt (HTML) file combining the fixed information (text) and data from the Excel cells.
I need to write VBA code for each of the above (integrating text & cells, saving the results as text).
Write Ranges To Text Files & Save Each As Cell Text
I'm trying to write a macro which could take the text from a single column, row T2 to row T313, and write it to a .txt file, with the .txt file name created from the text in T4 (or I could put the text to name the file in T1 if you think that would be easier).
Then carry on to the next named sheet and produce another .txt file in exactly the same way until all 15 sheets have been completed. It would also be helpful if, prior to starting to write each text file, it could test for any text in cell A2 of the sheet. The first empty A2 cell of a sheet would determine the end of the run, if it was prior to sheet 15 being reached.
Lots Of Text In Cell...stops Wrapping Text
I am preparing a very large spreadsheet of text. Once I reach a certain point (a few paragraphs?), the program stops wrapping the text. All of my text is visible in the box at the top of the spreadsheet when I click on the cell. I double checked to make sure it's set to wrap, which it is. I tried merging two cells, no change.
Pull Varying Length Text From Cell Text
I need to find text within middle of a string.
Character before required text is say AAA
Character after required text is say BBB
Text required can vary in length.
Extract text and place in another column.
All text in a single column, required text not in every line. but
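A MID/FIND combination should work (a sketch using the AAA/BBB markers from the question, assuming both occur exactly once per cell):

=MID(A1,FIND("AAA",A1)+3,FIND("BBB",A1)-FIND("AAA",A1)-3)

Wrapping it in IFERROR(...,"") leaves the result blank on the lines that do not contain the markers.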
Validate Cell For Text Length & Characters In Text
I have a cell (B2) I would like to apply multiple data validations to.
I know I need to use the custom formula option but don't know how to write the formula.
I don't even know if it is possible, but here is what I'm after
I need to make sure the cell is 4 digits long
I need to make sure the cell starts with a zero (Because the cell starts with a zero I have it as a text cell)
I need to make sure the 2nd number is not 0 if A2 begins with 5 (A2 is also a text cell).
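A custom-validation formula along these lines should combine the three rules (a sketch using the cells named in the question):

=AND(LEN(B2)=4,LEFT(B2,1)="0",OR(LEFT(A2,1)<>"5",MID(B2,2,1)<>"0"))

The OR() term encodes "the 2nd character may not be 0 when A2 begins with 5".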
Check If Text String In Cell With Other Text Is In List
I have a sheet in which some of the cells have two strings separated by a linefeed. I have come up with a cumbersome formula which will let me check if either of the two strings is a member of a list stored on another sheet. However, it fails if there is only one string in the cell, presumably as there is no linefeed for the formula to find. How can I modify the formula to cope with this situation?
There are also on occasions, three strings in the cell, but I can't seem to access the middle string with the formula. Simplified spreadsheet attached to show the problem. This must be formula-based, as we have a no VBA policy. If you think there is better way of doing this, please let me know.
Highlight Specific Text Within A Cell Of Other Text
This is the text:
Take 5 PPE Swabs per Area, Both Shifts. Test various equipment - hands, aprons, sleeves, hats, etc.
What I need is for "Take 5 PPE Swabs per Area, Both Shifts." to be bold and highlighted in gray, but none of the other text. Conditional formatting highlights the entire cell, which won't work.
Append Text To Existing Cell Text
I've got a sheet in which I want a drop down box, to ADD the value* to a cell, not overwriting its current value!
*The name of the selected option in the drop down box, the names are located in Map3!A1-n, I set the drop down box to display the related number in a cell next to it.
The cell would contain some text, and by selecting something in the drop down box, it would add the name of that option to the already existing value in the cell.
So if at first the cell's value is
Hi! I 'm Mark,
and you select the following option from the dropdown box
I 'm from Holland!
the cell would end with the value
Hi! I 'm Mark, I'm from Holland!
This would probably work with a macro, already made a start with it but I couldn't get it to ADD the value instead of overwriting it.
Add Text Based On Part Of Another Cell Text
I have been working on this worksheet part of which is attached herewith. I would like excel to automatically enter Updated/Inserted in column B against Individuals' names as per the instructions given in column A. For example: As per instruction in A9, B13:B16 should show Updated. I have tried to use the nested if function, but it does not work as I want it to. Also as I am not used to macros or VBA codes, could this problem be solved with formulas?
Formula To Target Another Cell: Formula/Data In Same Cell
Note: I know the syntax below is not how you would enter formulas into Excel, but I am using it to quickly illustrate what I'm trying to do and need help with. A cell not in column D, E, or F contains a formula of the form: F=IF(D>0, D/E, "blank").
Note: If a cell in column D>0 (eg. D5>0) then the cell to the right of the selected D cell (eg. E5) will also be >0 ; otherwise, both cells will be empty.
This is what I want to do: If cell D?>0 , then F?=D?/E? ; otherwise, F?=empty .
Example 1: If D5>0, then F5=D5/E5 ; otherwise F5=empty .
Example 2: If D7>0, then F7=D7/E7 ; otherwise F7=empty .
Etc. I want this to apply to all rows.
I cannot enter the formula directly into the F? cell because sometimes I will need to enter data into cell F? manually. When I need to enter data manually into F?, the formula is overwritten by the new data, which means that if I ever needed to have data calculated based on the formula F=IF(D>0, D/E, "blank") I would have to re-enter the formula from scratch; obviously this is a nuisance.
If Cell Contains Text Place Text In Another Cell
This might not actually be able to be done, but I'm sure the best chance I have of doing it is by getting help from you all.
What I need to do is look in cell "A1". If that cell contains a number I need to go to cell "B1" and type with the 00 being replaced with what is in cell "A1".
For Example if cell "A1" has the number 67 in it then I need B1 to say .
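The literal text for B1 appears to have been lost from the post, but the usual pattern is SUBSTITUTE against a template; "Part00-X" below is a made-up stand-in for whatever the original template said:

=IF(ISNUMBER(A1),SUBSTITUTE("Part00-X","00",A1),"")

With 67 in A1 this returns Part67-X; swap in the real wording around the 00 placeholder.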
Join Cell Text And 1 Cell Date From Cell With 2 Dates
I have three cells in A2:C2 which require user to input some data.
What I want to achieve is to combine the data from A2:C2 in D2.
C2 is a field which user will input the date. He might key in 21/08/06 or
I have tried using the formulas below in D2 but without success.
=A2&" " &B2&" "&(C2)
=A3&" " &B3&" "&DAY(C3)&"/"&MONTH(C3)&"/"&YEAR(C3) (doesn't work if there are 2 dates).
I have attached a file which shows 3 scenarios if user input 1 date and 2 dates.
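One sketch for D2 that copes with both cases: if C2 holds one real (numeric) date, format it with TEXT; if it holds text, such as two typed dates, pass it through unchanged. The dd/mm/yy format is a guess based on the 21/08/06 example:

=A2&" "&B2&" "&IF(ISNUMBER(C2),TEXT(C2,"dd/mm/yy"),C2)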
Formula And Text In The Same Box
I am currently in the making of a new financial plan template and I am having a problem arranging all of the rows in an orderly manner. I was wondering if I could have a sentence and, at the end of the sentence, a number figure. I cannot use a cell to the right because that way there is a gap and it looks pretty bad.
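One way to get the sentence and the figure into a single cell is to build the whole thing as text; B10 below is a hypothetical cell holding the number:

="Projected first-year revenue: "&TEXT(B10,"#,##0.00")

TEXT keeps the number formatting (thousands separators, decimals), which plain concatenation would drop.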
Sum Formula When There Is Text
A B C
3.10 Leaver 3.10 here I want value Leaver returned
-1,482.75 1,687.50 204.75
-3,120.00 3,000.00 -120.00
-760.00 1,000.00 240.00
-1,495.00 1,625.00 130.00
-1,107.91 1,204.25 96.34
-1,708.99 1,298.75 -410.24
-2,297.28 2,500.00 202.72
-1,150.00 1,250.00 100.00
-2,150.51 2,156.25 5.74
-1,557.31 Starter -1,557.31 Here I want value Starter Returned
-263.97 1,649.75 1,385.78
* text value is only in column B. see attached File
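A sketch of a column C formula that behaves this way, assuming the data starts in row 2 and, as noted, the text only ever appears in column B:

=IF(ISTEXT(B2),B2,A2+B2)

Copied down, it returns the B-column text ("Leaver", "Starter") when present and the sum A+B otherwise.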
Using Text In A Formula
I am using =INDEX(7:7,MATCH(9.99999999999999E+307,7:7)) to return the current price of a product. I would like to be able to have the formula return either a text value (discontinued) or the current price, i.e. column G contains the current price and if it is a discontinued item I could just type in "dis" instead of the price when updating the sheet.
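A hedged alternative that returns the last non-empty value in row 7 whether it is a number or text such as "dis" (the classic LOOKUP trick; restrict 7:7 to something like A7:Z7 if a whole-row reference is slow, and note the formula itself must not live in row 7):

=LOOKUP(2,1/(7:7<>""),7:7)

1/(7:7<>"") yields 1 for filled cells and a #DIV/0! error for empty ones, and looking up 2 (larger than any 1) makes LOOKUP land on the last filled cell.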
Inserting Text Into A Cell Containing Text
I have pulled a report from a website. The website only allows a certain number of characters. For instance it might go john.smith@, dave.bird@, tom.jones@... This has been pulled into an Excel sheet. I want to add the domain at the end of the email address so it would become
But I have a list of 2000 usernames and I don't want to go line by line inserting whatever.com. Is there a way to automatically do this?
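Since each address already ends with "@", a single concatenation filled down does it; with the names in column A and "whatever.com" standing in for the real domain:

=A1&"whatever.com"

Enter that in B1, double-click the fill handle to copy it down all 2000 rows, then copy column B and use Paste Special > Values if you want plain text back.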
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663036.27/warc/CC-MAIN-20140930004103-00196-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 20,982 | 204 |
https://calendar.colorado.edu/event/appm_colloquium_-_yingda_cheng_5767
|
code
|
APPM Colloquium - Michael Stutzer
Michael Stutzer, Leeds School of Business, University of Colorado Boulder
The Kelly Criterion (a.k.a. expected log utility maximization) is a well-known criterion function used to select optimal repeated gambles or long-term portfolio investments. It is often rationalized by its asymptotic properties as the investment horizon grows to infinity. One criticism is that use of this criterion leads to excessively volatile wealth paths that can lead to uncomfortably high, finite-time probabilities of underperformance.
The Statistical Theory of Large Deviations and its key object, the Rate Function, provide a tractable framework for introducing considerations of risk control into the asymptotic rationale. Instead of maximizing the expected log utility of wealth, I maximize the asymptotic decay rate of the probability that wealth will fall short of user-selected targets, contrast this criterion with the Kelly Criterion, and empirically implement the idea to select long-term optimal portfolios of stocks and bonds.
Doing so in a realistic manner requires calculation of the large deviations rate function for time averages generated by Markov Switching (a.k.a. Hidden Markov) processes, which may also be of interest to those in other fields.
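In symbols (an editorial sketch only, with T the horizon, W_T(θ) wealth under portfolio rule θ, and r* a user-selected log-growth target; the notation is assumed, not taken from the talk): the Kelly rule solves

\max_{\theta} \; \mathbb{E}\!\left[ \log W_T(\theta) \right],

while the rate-function criterion instead solves

\max_{\theta} \; \lim_{T \to \infty} \; -\frac{1}{T} \log \Pr\!\left[ \frac{1}{T} \log \frac{W_T(\theta)}{W_0} \le r^{*} \right],

that is, it picks the portfolio whose probability of underperforming the target decays at the fastest asymptotic rate.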
Friday, September 7, 2018 at 3:00pm to 4:00pm
Engineering Center, Room 245
1111 Engineering Drive, Boulder, CO 80309
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00267.warc.gz
|
CC-MAIN-2019-43
| 1,404 | 8 |
https://art-care.gr/en/product/flexible-tabs-without-notch/
|
code
|
Due to their large surface, they seat well on the backing board, helping the artwork stay well stretched.
Suggested to be used together with Corrugated Backing Boards.
Ideal for assemblies with periodic artwork changes, e.g. photo frames.
Flexible Tabs (Pointers), black-coloured, without notch.
1 box contains 5,000 tabs.
Log in to see prices.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511075.63/warc/CC-MAIN-20231003092549-20231003122549-00274.warc.gz
|
CC-MAIN-2023-40
| 368 | 6 |
https://chappuishalder.com/client-cases/marketing-risk-assignment/
|
code
|
Client Needs & Objectives
This experience is illustrative of the changes in mandates that we have seen in recent months.
Our modelling and risk expertise is now used in less regulatory and more operational contexts.
- Context: A 4-month mandate and independent mission between the bank (project sponsor) and its financial partner
- Client: A specialised financing bank (Retail)
Risk analysis mission on customer portfolios financed by the bank as part of its partnership:
- Data study / Available variable (data quality check)
- Clustering and segmentation of data
- Statistical studies (correlation, segmentation tree)
- Identification of the riskiest segments and associated business case
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00110.warc.gz
|
CC-MAIN-2023-14
| 695 | 10 |
https://pubknow.com/2021/03/system-procurement/
|
code
|
We have often been asked to help a public-sector client out of a jam with a multi-million-dollar system procurement effort. Each time, it was too late because their procurement design or evaluation resulted in the selection of a vendor that did not meet their needs.
These agencies constructed their procurement around a typical strategy—identify three or four categories of things that matter to them (for example, solution provider experience and viability, solution fit with requirements, and cost), weight those criteria, and score the vendors. Here is the problem—in tight economic times, the cost is often seen as critical, and it is weighted accordingly. Vendors know this, so they offer a minimally acceptable technical solution with the lowest possible cost. And they fully expect to make up this lost revenue through change orders.
How do you make sure an inferior product does not come out on top?
The following tactics can help you procure the system your agency needs:
Define your requirements well.
- This will allow you to better compare the solutions vendors propose. It is difficult and time-consuming to develop a detailed set of requirements. Because you rarely develop requirements, you might not have the skills in-house to do it well. Working with a third party that specializes in developing requirements is worth the expense.
Unless you are buying a commodity, consider a value-based procurement instead of basing your procurement strictly on cost.
- There are many ways to evaluate value. For example, score the value of a proposed solution by considering the cost per technical point. This will minimize cost-based gaming.
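As an illustration with made-up numbers: if Vendor A bids $1,200,000 and scores 80 technical points while Vendor B bids $900,000 and scores 50 points, A costs $15,000 per point and B costs $18,000 per point, so the pricier bid is actually the better value; a heavily cost-weighted score would have ranked them the other way.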
We’re not suggesting that you ignore total cost.
- Leave yourself room to negotiate the total cost. Pick the vendor with the highest value while maintaining the option to cut the total cost of the solution by reducing scope. Or, if your regulations allow, consider a best and final offer.
Consider publishing your budget.
- Releasing your true budget in the procurement ensures you can score the responses on the best solution and customer service.
Test your procurement strategy.
- Consider all possible scenarios before you release your procurement (for example, a scenario in which a vendor has a low total cost and high technical scores or perhaps a high cost and high technical score), document them, and run an evaluation of the results. Are the results what you expected? Perhaps you should revise your scoring strategy.
The most important lesson here is to HAVE a strategy before you release the procurement of a system. If you don't procure major systems regularly (and who procures multi-million-dollar systems regularly?), get some assistance with your procurement.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00419.warc.gz
|
CC-MAIN-2023-50
| 2,739 | 15 |
https://zasyasolutions.com/portfolio/lune-valet
|
code
|
What makes a company great? That’s simple. It’s the people who work here.
LUNE Valet is a SaaS application for valet solutions, which lets various valet companies enroll and lets users search for them based on their location, easing the valet process.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650409.64/warc/CC-MAIN-20230604225057-20230605015057-00720.warc.gz
|
CC-MAIN-2023-23
| 308 | 4 |
http://sto-forum.perfectworld.com/showthread.php?p=1880480&mode=threaded
|
code
|
I had a lot of the issues you are reporting for account creation... server down / busy... unable to process payment... key already in use, etc.
I did have to run through the setup about 6 times, each time getting "the server is down" or some similar issue. I then did the account setup with Firefox instead of Internet Explorer, and got a different error.
I logged out of the online web page close and refreshed browser and I was good to go.
I hope this helps.
Bottom line: their servers are busy, so just keep trying. It may say that the process wasn't completed, but it very well might have been. Don't worry about multiple hits on your bank account, as you can only have one "subscription" per account. And be sure to refresh everything every now and again.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462313.6/warc/CC-MAIN-20150226074102-00295-ip-10-28-5-156.ec2.internal.warc.gz
|
CC-MAIN-2015-11
| 744 | 5 |
https://www.engadget.com/2012-08-15-windows-8-rtm-whats-new.html
|
code
|
It's been two weeks since Microsoft signed off on Windows 8, and shipped the final code to manufacturers prepping shiny new computers. Today, another round of folks are getting their hands on the code: devs, and IT pros with subscriptions to Microsoft's TechNet program. Of course, you might not be a developer or IT whiz and, if we're being honest, neither are we! Happily for us, though, Microsoft gave us an early peek at the RTM build -- the same software that will ship to consumers October 26th. Granted, Microsoft says it will continue tweaking the built-in apps, with updates coming through the Windows Store. Barring these minor changes, though, what you see here is what you'll get ten weeks from now. Meet us after the break for a summary of what's new.
More customization options for the Start Screen
The last time we took a look at Windows 8, Microsoft had added more color themes for the Start Screen. Now, though, you can add one of 14 "personalization tattoos," patterned backgrounds and borders that line the Start Screen.
As you can see, some options are more subtle than others. (Ed. note: those multicolored birds and dangling flowers are just for show. Okay, guys?)
No more Aero
No surprise here: Microsoft announced all the way back in May that the desktop would no longer have the Aero it's been rocking since Vista. And indeed, the desktop here in RTM has a more flattened look (see the open window in that screenshot up there for an example of what we're talking about). If you're curious about the rationale behind that shift (and have a few minutes for a long read) hit up the more coverage link at the bottom of this post for Steven Sinofsky's detailed explanation.
By now, we've seen most of the apps that will come baked into Windows 8, but there is one late-comer: Bing. When you first launch the application, you'll see a mostly blank screen, with just a search bar and an ever-changing background photo. As you type, Bing will offer suggestions and, if we do say so, the auto-completion feels pretty quick. From there, results will be displayed not in a linear order, but as tiles you can swipe through, from side to side. Incidentally, this is one of the rare instances in Windows 8 when you can scroll almost infinitely through live tiles; you can keep going as long as there are more results to peruse.
Keep in mind that as with many Metro (excuse us -- Windows 8) apps, the level of functionality isn't quite as deep as what you'd get on the desktop. Whereas Bing is normally adept at travel- and flight-related queries, you can only use the built-in Bing app for simple keyword and image searches; you'll need to go to the Travel app instead for things like airfare searches.
Though the People app isn't new, per se, it got a facelift before Microsoft signed off on Windows 8. In addition to scrolling through names in alphabetical order, you can link your Facebook, Twitter and LinkedIn accounts and view your notifications all on one page. You can also check out a "What's New?" page to see what your friends are posting. As ever, linking our various accounts was a painless process that took about a minute, all told. For more screens, be sure to check out the gallery further up the page.
Since we last checked in, Microsoft updated its Windows Store so that you can search for things the same way you would on the Start Screen. Which is to say, you can just open the store and start typing -- a pane will immediately pop up on the right side of the screen, where you can see the list of results start to shrink as you continue typing. It would seem, though, that you can only do this on the Windows Store's main page; if you go into the games section and start typing "Mine" for Minesweeper, you won't see that list of results.
By the by, this is as good a time as any to clarify that Minesweeper is new with RTM, as are Solitaire, Mahjong and Xbox SmartGlass. There are some new third-party apps too, but the ones we just mentioned are the only new ones created by Microsoft. If you're curious, we've screenshots below -- those should tell you all you need to know about how the games are laid out.
Additionally, the Windows Store now supports 54 new markets, and developers have the option of certifying their apps in 24 more languages. Lastly, the Store will at last be open to paid apps, and not just free and trial ones.
As it happens, many of the improvements in this late-stage build are under the hood, including both performance enhancements and some unspecified bug fixes. All told, Microsoft promises that battery life, I/O performance and hibernation speeds should all be improved over Windows 7. As you may know, the company also implemented different compression codecs as a way of speeding up both the download and installation process.
At this point, there's barely anything Microsoft could have done to change your opinion of Windows 8: this is the same user experience we've been testing for months, just with smoother performance and a bit more cohesiveness. Rest assured, though, this isn't the last you've heard from Engadget on this topic: we're curious to see what tweaks Microsoft makes between now and general availability, and we're definitely wondering what PC makers might do to customize the software. Until then, at least, those of you left to run Release Preview can take comfort in the fact that you're not missing too much, and that what you're testing is apparently pretty darn close to the final version.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817106.73/warc/CC-MAIN-20240416191221-20240416221221-00638.warc.gz
|
CC-MAIN-2024-18
| 5,520 | 14 |
https://forum.arduino.cc/t/sound-synchronised-leds/337400
|
code
|
I have been working on this same project for quite some time and have finally had large amounts of success. Are you looking to do live synchronization from an mp3 cable or do you want to use prerecorded mp3s or wavs?
If you want to use an mp3 cable, just get a splitter and run one end to your speakers as normal; the other goes to the Arduino. BUT there is an IC/chip that you need called the MSGEQ7 that will analyze the audio signal and break it into 7 different frequency ranges that the Arduino can read via analog input. The circuit also requires at least 2 digital IO pins for controlling the IC. You can see a complete demo of the MSGEQ7 here: http://nuewire.com/info-archive/msgeq7-by-j-skoba/
I have been working on a system that controls RGB LEDs and can be set to different modes like fade from one color to another, solid color, swap fade (where 2 "channels" swap-fade their colors), and of course music synchronization. I use a series of MOSFET transistors to drive high-powered RGB LED strips. The same concept applies to single-color LED strips as well. By using the transistors, the Arduino is protected from overvoltage and overcurrent, and the LEDs can be driven at 100%.
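To make the MSGEQ7 part concrete, here is a minimal read loop of the kind described above (a sketch only; the pin numbers are assumptions, and the strobe/reset sequencing follows the chip's datasheet):

// Minimal MSGEQ7 reader; adjust the pins to your wiring
const int RESET_PIN  = 7;    // MSGEQ7 reset (assumed pin)
const int STROBE_PIN = 8;    // MSGEQ7 strobe (assumed pin)
const int OUT_PIN    = A0;   // MSGEQ7 analog output
int bands[7];                // levels for the 63 Hz .. 16 kHz bands

void setup() {
  pinMode(RESET_PIN, OUTPUT);
  pinMode(STROBE_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Pulse reset to start a new read cycle
  digitalWrite(RESET_PIN, HIGH);
  digitalWrite(RESET_PIN, LOW);
  for (int i = 0; i < 7; i++) {
    digitalWrite(STROBE_PIN, LOW);   // latch the next frequency band
    delayMicroseconds(40);           // let the analog output settle
    bands[i] = analogRead(OUT_PIN);  // 0-1023 level for this band
    digitalWrite(STROBE_PIN, HIGH);
    Serial.print(bands[i]);
    Serial.print(i < 6 ? ' ' : '\n');
  }
}

Each pass through loop() refreshes all 7 band levels, which you can then map onto LED brightness via the MOSFET-driven outputs.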
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153934.85/warc/CC-MAIN-20210730060435-20210730090435-00073.warc.gz
|
CC-MAIN-2021-31
| 1,198 | 3 |
https://blog.jimsjump.com/tag/attiny85/
|
code
|
I printed danman’s Halloween Crow from Thingiverse at 98% to fit my PRUSA printer, added LED eyes and made a custom base for it.
Base and cover is printed in Hatchbox Wood Filled PLA
Pololu addressable LEDs for eyes
The package download link below has the following:
– 3D STL files to print
– Schematic for the ATTiny85 / LM7805 / Addressable LEDs all powered by a 9V Battery
– Arduino code
– FreeCAD native CAD Files
Note: See commented sections in the INO code file for links and URL information for parts programming etc.
I used a micro rocker switch – you’ll have to modify the CAD file or STL for a switch of your choice. Since there are 1000’s of switches out there I will ignore & delete requests for a specific switch.
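The real code ships in the download above; purely as a flavor of what driving the eyes can look like, here is a minimal sketch assuming WS2812B-type addressable LEDs, a recent version of the Adafruit_NeoPixel library, two pixels, and data on ATtiny85 pin 0 (all assumptions):

#include <Adafruit_NeoPixel.h>

#define DATA_PIN 0   // assumed ATtiny85 pin wired to the LEDs' data line
#define NUM_EYES 2   // one pixel per eye

Adafruit_NeoPixel eyes(NUM_EYES, DATA_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  eyes.begin();
}

void loop() {
  eyes.fill(eyes.Color(255, 0, 0));  // glow red
  eyes.show();
  delay(2000);
  eyes.clear();                      // quick blink off
  eyes.show();
  delay(150);
}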
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00045.warc.gz
|
CC-MAIN-2024-18
| 742 | 10 |
http://blog.gainlo.co/index.php/2016/10/22/design-youtube-part/
|
code
|
One of the most common types of system design interview questions is to design an existing popular system. For example, in the past, we've discussed How to Design Twitter, Design Facebook Chat Function and so on and so forth.
Part of the reason is that the question is usually general enough so that there are a lot of areas to discuss. In addition, if candidates are generally curious enough, they are more likely to explore how existing products are designed.
So this week, we’re going to talk about how to design Youtube. It’s a broad question because building Youtube is like building a skyscraper from scratch and there are just too many things to consider. Therefore, we’ll cover most of the “major” components from interviewer’s perspective, including database model, video/image storage, scalability, recommendation, security and so on.
Facing this question, most people’s minds go blank as the question is just too broad and they don’t know where to start. Just the storage itself is non-trivial as serving videos/images seamlessly to billions of users is extremely complicated.
As suggested in 8 Things You Need to Know Before a System Design Interview, it’s better to start with a high-level overview of the design before digging into all the details. This is true especially for problems like this that has countless things to consider and you’ll never be able to clarify everything.
Basically, we can simplify the system into a couple of major components as follows:
- Storage. How do you design the database schema? What database to use? Videos and images can be a subtopic as they are quite special to store.
- Scalability. When you get millions or even billions of users, how do you scale the storage and the whole system? This can be an extremely complicated problem, but we can at least discuss some high-level ideas.
- Web server. The most common structure is that front ends (both mobile and web) talk to the web server, which handles logics like user authentication, sessions, fetching and updating users’ data, etc.. And then the server connects to multiple backends like video storage, recommendation server and so forth.
- Cache is another important component. We've discussed cache in detail before, but there are still some differences here, e.g. we need caches in multiple layers like the web server, video serving, etc.
- There are a couple of other important components like recommendation system, security system and so on. As you can see, just a single feature can be used as a stand-alone interview question.
Storage and data model
If you are using a relational database like MySQL, designing the data schema can be straightforward. And in reality, Youtube has used MySQL as its main database from the beginning, and it works pretty well.
First and foremost, we need to define the user model, which can be stored in a single table including email, name, registration date, profile information and so on. Another common approach is to keep user data in two tables – one for authentication-related information like email, password, name, registration date, etc. and the other for additional profile information like address, age and so forth.
The second major model is video. A video contains a lot of information including meta data (title, description, size, etc.), video file, comments, view counts, like counts and so on. Apparently, basic video information should be kept in separate tables so that we can first have a video table.
The author-video relation will be another table to map user id to video id. And user-like-video relation can also be a separate table. The idea here is database normalization – organizing the columns and tables to reduce data redundancy and improve data integrity.
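A hedged sketch of what such a normalized MySQL schema might look like; every table and column name below is illustrative, not Youtube's actual schema:

CREATE TABLE users (
    id            BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    email         VARCHAR(255) NOT NULL UNIQUE,
    name          VARCHAR(100) NOT NULL,
    registered_at DATETIME NOT NULL
);

CREATE TABLE videos (
    id          BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    title       VARCHAR(255) NOT NULL,
    description TEXT,
    size_bytes  BIGINT UNSIGNED,
    view_count  BIGINT UNSIGNED NOT NULL DEFAULT 0
);

-- Author-video relation in its own table, per the normalization note above
CREATE TABLE user_videos (
    user_id  BIGINT UNSIGNED NOT NULL,
    video_id BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, video_id),
    FOREIGN KEY (user_id)  REFERENCES users(id),
    FOREIGN KEY (video_id) REFERENCES videos(id)
);

-- User-likes-video relation, likewise separate
CREATE TABLE user_video_likes (
    user_id  BIGINT UNSIGNED NOT NULL,
    video_id BIGINT UNSIGNED NOT NULL,
    PRIMARY KEY (user_id, video_id),
    FOREIGN KEY (user_id)  REFERENCES users(id),
    FOREIGN KEY (video_id) REFERENCES videos(id)
);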
Video and image storage
It’s recommended to store large static files like videos and images separately as it has better performance and is much easier to organize and scale. It’s quite counterintuitive that Youtube has more images than videos to serve. Imagine that each video has thumbnails of different sizes for different screens and the result is having 4X more images than videos. Therefore we should never ignore the image storage.
One of the most common approaches is to use CDN (Content delivery network). In short, CDN is a globally distributed network of proxy servers deployed in multiple data centers. The goal of a CDN is to serve content to end-users with high availability and high performance. It’s a kind of 3rd party network and many companies are storing static files on CDN today.
The biggest benefit of using CDN is that it replicates content in multiple places, so that there's a better chance of content being closer to the user, with fewer hops, and content will run over a more friendly network. In addition, CDN takes care of issues like scalability and you just need to pay for the service.
Popular VS long-tailed videos
If you thought that CDN is the ultimate solution, then you are completely wrong. Given the amount of video Youtube has today (819,417,600 hours), it would be extremely costly to host all of it on CDN, especially since the majority of the videos are long-tailed, i.e. videos that have only 1-20 views a day.
However, one of the most interesting things about the Internet is that usually, it's the long-tailed content that attracts the majority of users. The reason is simple – popular content can be found everywhere and only long-tailed things make the product special.
Coming back to the storage problem. One straightforward approach is to host popular videos in CDN and less popular videos are stored in our own servers by location. This has a couple of advantages:
- Popular videos are viewed by huge audiences in different locations, which is what CDN is good at. It replicates the content in multiple places so that it's more likely to serve the video from a close and friendly network.
- Long-tailed videos are usually consumed by a particular group of people and if you can predict in advance, it’s possible to store those content efficiently.
There are just too many topics we’d like to cover for the question “how to design Youtube”. In our next post, we’ll talk more about scalability, cache, server, security and so on.
By the way, if you want to have more guidance from experienced interviewers, you can check Gainlo that allows you to have mock interview with engineers from Google, Facebook ,etc..
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00649.warc.gz
|
CC-MAIN-2023-06
| 6,457 | 28 |
http://android.bigresource.com/Android-notifications-from-Facebook-6ehWgsMIv.html
|
code
|
when I get notifications from Facebook, it does the usual, notification bar, pull down, take me to the browser deal. Is there a way I can set it to take me to the touch site or the app instead? I prefer the interfaces on those better than the lackluster mobile site.
I just got the Droid 2 a couple days ago, and I was wondering if the Facebook app notifies you of wall posts, comments, likes... etc.? Because mine only does for messages. I hope nobody else has asked this question, but I would really appreciate the feedback.
I have an HTC Desire, and it's pretty much amazing and all. One of the things that bugs me though (coming from the iPhone, which had that ability) is that the Facebook app doesn't show any notifications, and that is something I'd really want. I've tried searching the forum, but couldn't find anything useful. Can it really be true that I can't get any apps which can show me notifications from Facebook?
I haven't been using the Facebook app since the version 1.3.2 battery issue, but I've noticed that recently it's been updated to 1.4.1. So I thought I'd give it another try, and they seem to have fixed the battery issue which is great, but I'm now missing notifications (i.e. when you tap the blue bar at the bottom you get a list of comments people have made either after your comments, or on your photos and things). For example, this morning I had 8 notifications on the app, but when I checked the mobile site there were two that hadn't pushed through to the facebook app. Anyone else get this? I have the feeling I'm missing a really simple setting somewhere.
I don't know if it's me, but I don't seem to be getting instant notifications via text when someone replies to my status or when I post on someone's wall, etc. I used to get them instantly on my Storm. Is there a step I'm missing here?
I have an HTC Hero and I love it, but have a couple questions. First, does anyone know if it is possible for the Facebook app to have notifications, or is it all through email?
Second, I was getting email notifications very quickly until 2 days ago, when they all stopped. As of yesterday I could at least view new emails, though I was not getting a notification, but now I'm not getting the new emails or the notification... what happened?
The Facebook app (the one that comes with the phone) seems to have some problems loading notifications most of the time, and also other unknown errors, like "Cannot retrieve notifications.... [601/parse error... at position 57]" and some others.
I have Facebook for Android set to update notifications every hour and have set it to vibrate and play a sound... but I have yet to get even 1 notification. I have to go in and manually update myself. Anyone else got this problem?
The Facebook for Android app on the HTC Desire is not receiving notifications. When I refresh the notifications page, an error occurs. Everything else can be used except for the notifications. Anyone else have the same problem? Or, if you know the solution, kindly help.
I have it set up to receive SMS notifications from a handful of my friends whenever they update their status, post something on my wall, etc. and it used to work perfectly on my Blackberry. I got the new EVO a couple of weeks ago and am no longer receiving them. Anyone else having this issue and if so, is there a fix?
How do I get notified when I have a new facebook message? I don't seem to be getting any, but when I use the Facebook app, I can see new messages. I'm coming from a BB Curve where I got actual facebook message notifications when I had a new message or someone replied to a thread that I had commented in. Does Android not do that?
I find the Facebook app really poor. I don't seem to be getting notifications when I get a message or someone posts on my wall. I have checked the settings and I have ticked all the options, but nothing.
Am I right in thinking that the phone should alert me to any post even when I am on the home screen (i.e. not in the Facebook app)?
That's pretty much all I want to know. I have no constructive input (save for that the release of the SDK may herald a new era of notifications for facebook :P), but just wanted to complain. That is all.
Why can't I get any notifications from Facebook? I have it set so the notifications refresh every 30 mins, but that doesn't work either... someone commented on my wall and even after 2 hours, I didn't get any notification. Is this just how the Facebook application is? If it is, are there any other (better) Facebook apps for Droid?
So my new Incredible doesn't update like my BlackBerry did. I checked the settings and the notifications are selected, but nothing comes through. When I open the app and go to the home screen and then notifications, I have to force a refresh to see my responses, etc.
I've looked for an answer to this question but couldn't find one anywhere. I have the Facebook app installed on my Droid and have the settings set to send me notifications for all possible things, but the app itself seems to refuse to update on its own or send me any notifications whatsoever. Is there any way to get it to actually refresh on its own and notify me?
I'm thinking these are problems with the apps. Both the Facebook and Gmail apps have notifications checked off to show when new messages are received, but they never put a notification in my phone's notification panel or sound a tone or vibrate, even though they are set to. Anyone else have this problem, and is there a way to fix it?
I'm not receiving either the notification on the upper left of the screen or the ringtone. Notifications and the ringtones are set up properly, as far as I know. The only time I see that there's an update is when the application is actually opened. This unit is set to update at the minimum interval.
Long time CrackBerry user converted to Android. I got my DX on release day, and have been fighting with Facebook since. I set up the "Social Networking" app that came on the X out of the box, and that integrated all my FB friends into my contacts, which sucks, but anyway, then I downloaded the Facebook app for Android. I set up all the settings for notifications, and sounds and all that.
But no matter what, I cannot get a Facebook notification like I did on my BlackBerry. I'll get Yahoo emails saying so-and-so commented on your photo, or whatnot. But nothing else from FB instantly. I have to manually go into my Facebook app, go to notifications, hit the menu button and select refresh. Last night for some reason I got the little F in the top left notification bar saying someone invited me to join an event. But I can't seem to get anything to mimic it again. I've checked HOFO and they suggested I check here.
I got my HTC Droid Incredible back in July, and it worked amazingly for the last couple of months. All my apps have been running smoothly until recently, when Facebook started giving me problems. Now I don't know if this is a server thing, Facebook itself, or a glitch in my phone, but I keep getting error messages.
When I refresh my notifications I get:
Cannot retrieve notifications. Please try again later. [601/Parser error: unexpected '-' at position 64.]
And occasionally I get another error/null type message when I try to refresh my newsfeed.
I've tried rebooting my phone, reinstalling Facebook, letting my phone die and recharging it, blah blah blah. So, if anyone has any suggestions, please let me know! Thanks :]
Anyone else getting an error on Facebook when they check for notifications? I keep getting this long error, and I can't connect a Facebook account, but my friends and newsfeed are all updated. It's just notifications.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826686.8/warc/CC-MAIN-20181215014028-20181215040028-00558.warc.gz
|
CC-MAIN-2018-51
| 7,689 | 28 |
https://wn.com/Roll_Cage
|
code
|
A roll cage is a specially engineered and constructed frame built in (or sometimes around, in which case it is known as an exo cage) the passenger compartment of a vehicle to protect its occupants from being injured in an accident, particularly in the event of a roll-over.
There are many different roll cage designs depending on the application, hence different racing organizations have differing specifications and regulations. They also help to stiffen the chassis, which is desirable in racing applications.
A roll bar is a single bar behind the driver that provides moderate roll-over protection. Due to the lack of a protective top, some modern convertibles utilize a strong windscreen frame acting as a roll bar. Also, a roll hoop may be placed behind both headrests (usually one on older cars), which is essentially a roll bar spanning the width of a passenger's shoulders.
A newer form of rollover protection, pioneered on the Mercedes-Benz R129 in 1989, is deployable roll hoops that are normally hidden within the body of a car. When sensors detect an imminent rollover, the roll hoops quickly extend and lock in place. Cars that have a deployable rollover protection system include the Peugeot 307 CC,Volvo C70, Mercedes-Benz SL 500, and Jaguar XK.
Formally, an (r,g)-graph is defined to be a graph in which each vertex has exactly r neighbors, and in which the shortest cycle has length exactly g. It is known that an (r,g)-graph exists for any combination of r ≥ 2 and g ≥ 3. An (r,g)-cage is an (r,g)-graph with the fewest possible number of vertices, among all (r,g)-graphs.
If a Moore graph exists with degree r and girth g, it must be a cage. Moreover, the bounds on the sizes of Moore graphs generalize to cages: any cage with odd girth g must have at least

1 + r \sum_{i=0}^{(g-3)/2} (r-1)^{i}

vertices, and any cage with even girth g must have at least

2 \sum_{i=0}^{(g-2)/2} (r-1)^{i}

vertices. Any (r,g)-graph with exactly this many vertices is by definition a Moore graph and therefore automatically a cage.
A cage is an enclosure made of mesh, bars or wires, used to confine, contain or protect something or someone. A cage can serve many purposes, including keeping an animal in captivity, capturing, and being used for display of an animal at a zoo.
In history, prisoners were sometimes kept in a cage. They would sometimes be chained up inside in uncomfortable positions to intensify suffering.
Cages have usually been used to capture or trap a certain life form. For this reason, they've been known as a hunting accessory, often used for poaching animals or simply seizing them.
Cages are often used now as a source to confine animals. These provide as a habitat to the animal, and since they've advanced so greatly, they are now specially designed to fit that species of animal. Captive breeds of birds, rodents, reptiles, and even larger animals have also been known to be confined in a cage as a domesticated animal (also known as a pet). Captivity is a common purpose of the cage.
Luke Cage was created by Archie Goodwin and John Romita, Sr. shortly after Blaxploitation films emerged as a popular new genre. He debuted in his own series, Luke Cage, Hero for Hire, which was initially written by Goodwin and pencilled by George Tuska. Cage's adventures were set in a grungier, more crime-dominated New York City than that inhabited by other Marvel superheroes of the time. The series was retitled Luke Cage, Power Man with issue #17.
Flight dynamics is the study of the performance, stability, and control of vehicles flying through the air or in outer space. It is concerned with how forces acting on the vehicle influence its speed and attitude with respect to time.
In fixed-wing aircraft, the changing orientation of the vehicle with respect to the local air flow is represented by two critical parameters, angle of attack ("alpha") and angle of sideslip ("beta"). These angles describe the vector direction of airspeed, important because it is the principal source of modulations in the aerodynamic forces and moments applied to the aircraft.
Spacecraft flight dynamics involve three forces: propulsive (rocket engine), gravitational, and lift and drag (when traveling through the earth's or any other celestial atmosphere). Because aerodynamic forces involved with spacecraft flight are very small, this leaves gravity as the dominant force.
Aircraft and spacecraft share a critical interest in their orientation with respect to the earth horizon and heading, and this is represented by another set of angles, "yaw," "pitch" and "roll" which angles match their colloquial meaning, but also have formal definition as an Euler sequence. These angles are the product of the rotational equations of motion, where orientation responds to torque, just as the velocity of a vehicle responds to forces. For all flight vehicles, these two sets of dynamics, rotational and translational, operate simultaneously and in a coupled fashion to evolve the vehicle's state (orientation and velocity) trajectory.
Euler angles represent a sequence of three elemental rotations, i.e. rotations about the axes of a coordinate system. For instance, a first rotation about z by an angle α, a second rotation about x by an angle β, and a last rotation again about z, by an angle γ. These rotations start from a known standard orientation. In physics, this standard initial orientation is typically represented by a motionless (fixed, global, or world) coordinate system; in linear algebra, by a standard basis.
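For the z-x-z sequence given as the example above, the composed rotation is the matrix product (a standard-textbook sketch; the convention shown is the intrinsic one, with factors applied right to left):

R(\alpha, \beta, \gamma) = R_{z}(\alpha) \, R_{x}(\beta) \, R_{z}(\gamma),

R_{z}(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
R_{x}(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}.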
Any orientation can be achieved by composing three elemental rotations. The elemental rotations can either occur about the axes of the fixed coordinate system (extrinsic rotations) or about the axes of a rotating coordinate system, which is initially aligned with the fixed one, and modifies its orientation after each elemental rotation (intrinsic rotations). The rotating coordinate system may be imagined to be rigidly attached to a rigid body. In this case, it is sometimes called a local coordinate system. Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided in two groups:
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209856.3/warc/CC-MAIN-20180815024253-20180815044253-00681.warc.gz
|
CC-MAIN-2018-34
| 6,201 | 19 |
https://appcontacter.com/contact/org.opencv.javacv.recognition/face-recognition
|
code
|
Contact Face Recognition
Why should I report an Issue with Face Recognition?
- AppContacter will directly email your issue/feedback to the app's customer service once you report an issue, and with lots of issues reported, companies will definitely listen to you.
- Pulling together issues faced by users like you is a good way to draw the attention of the app's developers to your problem using the strength of crowds.
- Importantly, customers can learn from other customers in case the issue is a common problem that has been solved before.
Face Recognition Features and Description
This Face Recognition application detects and recognizes user faces. It has three main modules. The first lets the user train a person by face detection and save the person's name. The second recognizes trained user faces and displays the name of the matched person. The third is a face recognition gallery consisting of all faces trained by face detection and face recognition; the user can delete faces as well. All images are saved on the user's phone, so your images are safe and you can train as many faces as you want.
Troubleshoot and Solve Common Face Recognition Issues
How to Fix Face Recognition Not Working, Crashing, Showing Errors, Being Unresponsive, or Showing a Black Screen/White Screen:
To resolve these issues with Face Recognition, we will start with troubleshooting the service itself and then account issues, then potential problems with your device. Let's get started:
- Check if Face Recognition is down for everyone and not just you: a good way to know if it's not working for everyone is to check the AppContacter Face Recognition user reports. If other users are reporting that Face Recognition is down, you'll need to wait until Face Recognition itself fixes the issue.
- Clear Face Recognition app cache: Clearing cached data will force.....
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650409.64/warc/CC-MAIN-20230604225057-20230605015057-00704.warc.gz
|
CC-MAIN-2023-23
| 1,893 | 12 |
https://community.tp-link.com/us/home/forum/topic/255970
|
code
|
So far (including on the current firmware), the moment the internet connection is lost the DECO router will immediately go offline and the LED turns red, which in turn makes the complete local network go offline.
Surely there must be a way of making this router a bit more robust to mitigate this behavior as we as users can't always control the WAN side of our networks.
In my case I have both the cable modem (in bridge mode), DECO router and a computer connected to a UPS which "saves" me from losing any part of the setup in that aspect during a power outage. The other computers and network connected devices is not on UPS.
But I can't control when my cable internet provider decides to perform maintenance or experiences a power loss somewhere in the core network, which at any time can result in loss of the WAN IP (with the cable modem still powered on, I might add) and makes the entire network unusable until the cable modem receives a new WAN IP to provide to the DECO router and bring it up again.
I would also like to see a more decentralized approach for the configuration of the units so we can also use them offline at least in some manner. A local backup of the configuration plus the ability to restore from it would be most useful.
This and the fact that I can't even access the DECO router at this time is quite frustrating and is by far the biggest shortcoming of this product, at least for me.
My other option is to revert back to using the cable modem as router and use the DECOs as APs mainly to be used for wifi access only which would be a shame.
Any effort into making this more robust is very much appreciated, and feel free to contact me if you need additional clarification of my use case.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00435.warc.gz
|
CC-MAIN-2022-33
| 1,692 | 8 |
https://staragile.com/blog/empirical-process-control
|
code
|
Let me first explain the topic with a simple example. Teachers help students understand the concepts in math and provide them with practice sums. They also teach them to crack the given problems. Students keep practicing and finally appear for an examination. In the exam, if the questions are similar to what was taught, and then the children crack them with the right answers and score good marks. On the other hand, if the question has a twist in it, then not everyone can answer them. However, the model remains the same. It depends on the individual understanding and ability to solve such problems.
How is this example relevant to our topic? Let me explain, in the first case where the questions were direct, allowed the students to follow the already taught method to get answers. In the second scenario, the children must apply the concept and find a way to solve the problem. Thus the former method is a planned one and the latter one requires dynamic action.
In software development, we call the first one defined process control and the second one empirical process control.
When the work is complex, we cannot understand it with ease. We will try to understand it by applying various methods. We will observe results and carry out different experiments until we find a solution.
Every complex domain requires this approach, as a defined algorithm does not offer certain results. Therefore, there is a need to understand the dynamics and identify an algorithm to solve the problem.
Where innovation is expected, you can apply this method and welcome new ideas.
When a naïve team is at work and wants to try implementing different approaches, use this method to find a solution. It will not only help solve complex issues but will motivate the team as well.
Now, let us relate how Scrum and this process go hand in hand. We have seen how this way of process control allows the team to think, innovate, observe and experiment. This answers why the empirical process is used in Scrum.
Yes, it is a dynamic framework where the entire team meets daily to understand progress. During every stage, the customer gets a chance to review the product and provide a suggestion. Then the product owner (PO) adjusts the priority of the backlogs.
Therefore, the Scrum method does not fix a plan up front but prefers, and succeeds with, planning along the way. This means the plan stays dynamic during the entire process.
You can easily understand that the steps involved are planning which is dynamic and your progress to stop and check. Based on the output you again make changes before reaching the process step.
Consider every sprint in Scrum as a black box. The work happens uninterrupted during each sprint. We already know that Scrum is a time-boxed process with time-defined daily standups, planning, reviews, and retrospectives. Therefore, inspection is carried out at each of these meetings to optimize the process by collecting the details to observe, experiment and modify.
Everyone in the scrum team is allowed to observe all the processes. This means the workflow is transparent and hence anyone can suggest changes. Find how this is achieved in every stage of the development.
By keeping it transparent, the team feels empowered and works with an open mind, which does away with differences, so all work towards achieving the end goal.
Visibility alone will not suffice; the team must also be part of the inspection to find and accept the improper variances in the process. Only when that acceptance happens will it pave the way for adopting a new method.
Steps involved in the inspection
When the inspection becomes successful the team will know the areas that require improvement and will start embracing the changes.
They discuss the following details in the daily standup: what was done yesterday, what will be done today, and any impediments in the way.
In short, the entire idea behind the empirical process control is executed as follows.
Likewise, in the retrospective session, the total process is inspected to take corrective actions in the remaining sprints.
Finally, you must know why you should use this process in your implementation.
By itself, scrum is an empirical process as it delivers a shippable product increment which is checked frequently by the team and based on the review the product backlog gets adjusted. The team performance is transparent and is inspected during the retrospective to suggest for improvement.
The role of each individual in a scrum team is vital for the successful implementation of the process. We suggest you attend training for a Scrum Master Certification to practically understand the right way to use the empirical process control in your team and projects.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00221.warc.gz
|
CC-MAIN-2023-23
| 4,961 | 30 |
http://forum.joomla.org/viewtopic.php?p=2134532
|
code
|
I kind of feel like a MF idiot for posting a question to a thread that almost certainly contains the answer already... uh, somewhere in there. It's not that I'm lazy; I just literally get a headache after reading code for too long, because it's not at all comprehensible, and it's a lot like reading math equations, only if all the numbers were replaced with words (no, not like word problems... those actually tell a logical story).
Aaaaanyway, I hope someone can help me here. This certainly seems like the go-to spot for session path and PHP errors. Plus, it is because I recently downloaded the wickedly helpful Jt extension that I was finally able to see what has been plaguing me (the admin) for as long as I've been using Joomla!.
Please, God, I hope someone answers my post, because I've tried asking this question (in so many words), across all kinds of different forums and threads. Nobody has helped me so far, but perhaps that's because I couldn't yet effectively describe the problem.
The "umbrella" problem has always been this: I have never had access to my HTML/PHP content files, since I started building my website. The only way I was ever able to state the problem to anyone before now, was by saying "I don't know where my coded content lives. Where are my articles hiding out, at the root (not at the backend)? How am I supposed to tidy the code of module X, when I have no access to its PHP files?"
Nobody could help me. But now I realize (or I think I realize) that I am being restricted by the DB. And I'm also missing at least one file, which I presume is of some importance--it's the Joomla administrator .htaccess file. I have the other one at the html root, but apparently I need one in the admin directory, too. I have no idea where this file went/ is/ got deleted from, but it's not there.
Here are the other diagnostics, which I thought might represent issues, and in no particular order:
Session Directory = Unwritable
Open basedir = None
Session Save Path = None
Virtual Directory Support = Off
allow_url_include = Off
always_populate_raw_post_data = Off
expose_PHP = Off
safe_mode_gid = Off
allow_call_time_pass_reference = Off
asp_tags = Off
define_syslog_variables = Off
register_globals = Off
register_globals_argv = Off
register_long_arrays = Off
zend.ze1_compatibility_mode = Off
Basically, I copied everything on the screen that was in the "Off" position; that's what I posted above. I guess I sort of assumed that I should pay closer attention to the "Off" functions, rather than the ones that are still operational.
I would just really love to have real control over the content on my website. While it's worked out okay so far, I'm now starting to put out HTML validation errors, which are so frustrating, because I have NO WAY to fix them. Help, help, help! Pallleaze!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005985/warc/CC-MAIN-20130516133005-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 2,815 | 24 |
https://community.myfitnesspal.com/en/discussion/comment/44417487
|
code
|
Fitness trackers/smart watch?
I have had a Fitbit for a couple of years now. First I had an Alta HR, and for the last year and a half or so a Versa. My Versa is starting to not hold its charge, so it will probably need to be replaced soon. I was wondering what others have and use. I have done some research on the Samsung and Garmin trackers but do not know anyone who has one. Everyone I know with a tracker has a Fitbit. The Apple Watch is a no-go for me because I don't have an iPhone. What are your recommendations and why? Please and thank you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511284.37/warc/CC-MAIN-20231003224357-20231004014357-00170.warc.gz
|
CC-MAIN-2023-40
| 546 | 2 |
https://community.esri.com/thread/174251-local-geodatabase-issues
|
code
|
I've modified the Quick Report app to work with a local geodatabase using code from the Local Geodatabase Editing sample inserted into a new page in the app. That part works great, but when I tried to build in a geodatabase sync function that runs when the app initializes, it broke my download piece from the feature service to the local geodatabase.
Unfortunately, I don't seem to be getting any sort of error messages that would point me towards a solution. The data download just stopped working.
Any suggestions or advice would be greatly appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573289.83/warc/CC-MAIN-20190918131429-20190918153429-00007.warc.gz
|
CC-MAIN-2019-39
| 556 | 3 |
https://www.codecoffee.com/point-domain-to-server-setup-dns/
|
code
|
Let's say you've come up with the domain name you want to use for your new WordPress site and have registered it - maybe somewhere like Uniregistry (what I use) or GoDaddy. What's next? How do you "connect" or point your domain to your web hosting account / server and get your new site up and running?
This is where DNS comes in.
DNS stands for "domain name system" or "domain name server". The latter phrase, "domain name server", suits us best in this instance. Basically, what happens is that when someone types your domain into their internet browser (Internet Explorer, Chrome, Safari, etc) to view your site - let's say www.codecoffee.com for example - DNS translates that domain into an IP address. That IP address then leads to the server on which your site is hosted and your website is "served" (basically, displayed) to your readers.
So, thanks to DNS, instead of having to remember different IP addresses like 184.108.40.206 for Gmail, 220.127.116.11 for your favorite news site and 18.104.22.168 for Facebook, for example, you just need to remember the URLs (domains) for those sites, obviously being www.gmail.com, www.bbc.com (depending on your preference!) and www.facebook.com.
Anyway, back to how to set the DNS for your new domain... The screenshots below might differ slightly depending on where you registered your name, but the settings are the same.
In Uniregistry, you want to head to the "NS / DNS Records" screen as seen below and scroll down to the "DNS Records" section.
Click on "New Record" and then add two new records as below. We don't need to get into too much detail here, but basically what the various settings do is add the link between your domain name (e.g. codecoffee.com) and your web hosting server IP (e.g. 22.214.171.124). Make sure the "Type" is "A".
Make sure you save your changes and you're done!
You might need to wait some time for the settings to be propagated (spread) around the internet, maybe up to a few hours. Sometimes the changes happen very quickly though (like 5 minutes). You can use this tool to check. Just enter your domain into the box and click "Search". If your change has been propagated, the IP address of your server will show up next to each flag. You don't need to wait for all the flags to show the correct IP address, just a handful, and the rest should follow quickly.
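If you prefer a terminal to the web tool, the same check can be done with dig (using the stand-in name from the sketch above):

dig +short example.com A

Once propagation has reached your resolver, this prints the server IP you entered in the A record.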
The instructions for GoDaddy are more or less the same as above. Just navigate to the "DNS" screen of your Domain Manager and click "ADD". Then fill in the details exactly like we did above.
126.96.36.199 is the IP address of the server that CodeCoffee is hosted on, so you can see I entered that. Once you save everything, your screen should look similar to that below. Just remember that you can always check the status of your DNS at DNSChecker and the "A" type (technically, it's a "record") is the one we are interested in for the purposes of building our WordPress site.
That's it! You've now successfully setup the DNS for your domain. You can proceed to installing WordOps and setting up WordPress.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00206.warc.gz
|
CC-MAIN-2020-50
| 3,051 | 12 |
https://www.codeproject.com/Tips/5254249/CSBrick-Reuse-Your-Projects-as-Source-Includes-in
|
code
|
.NET has a lot of cool features to help make your apps deploy exactly like you want them to. Unfortunately, static linking - the ability to embed dependent assembly code - isn't one of them.
CSBrick is something of a workaround for this. What it does is gather all the source for a particular project it is pointed at, and then merge and minify it into a "source brick" - a single C# file you can easily include in your projects instead of referencing the equivalent binary class library. Extra whitespace and comments are stripped since this is never meant for human reading - the original files it came from are. Global
#defines are moved to the top of the file, and duplicate
usings are removed. Doc comments are preserved.
This code is part of my Build Pack - a suite of build tools and code to make building build tools easier and better. Build tools can't lug around extra DLLs because that complicates using them as pre-build steps, so it's important to keep all the code in a single executable. This tool is perfect for that.
Building this Mess
The build pack uses itself to build itself, but since I've stripped all the binaries from the zip I gave you, you'll have to jumpstart it. Open it in Visual Studio and build in Release a couple of times until the file locking errors go away. This is VS hiccupping because I use a circular build step, which is okay, but it just causes these messages to come up. After the second build, you should be okay, but flip to your Output pane to be sure the build succeeded, since the Error pane tends to get "stuck" on this. Finally, switch to Debug and you're ready to roll. Any time you need to rebuild, you'll have to rebuild in Release for changes to the build tools themselves to be reflected in the next build. This is because these projects use the release binaries of other projects in the same solution as build steps to build themselves.
Using this Mess
Using it is pretty straightforward. Here's the using screen:
CsBrick merges and minifies C# project source files
csbrick.exe <inputfile> [/output <outputfile>]
[/ifstale] [/exclude [<exclude1> <exclude2> ... <excludeN>]]
[/linewidth <linewidth>] [/definefiles]
<inputfile> The input project file to use.
<outputfile> The output file to use - default stdout.
<ifstale> Do not generate unless <outputfile> is older than <inputfile>
or its associated files.
<exclude> Exclude the specified file(s) from the output.
- defaults to "Properties\AssemblyInfo.cs"
<linewidth> After the width is hit, break the line at the next opportunity
- defaults to 150
<definefiles> Insert #define decls for every file included
<inputfile> must be a .csproj file. It works with Visual Studio 2017 projects but should work with others too. I just haven't tried it with other versions.
You can add it as a pre-build event for your project in Visual Studio. Simply go to your Project|Properties|Build Events, and add the command line for it. I recommend using the macros like
$(ProjectDir). For example, the included Deslang project has this build event:
$(SolutionDir)CodeDomGoKit\CodeDomGoKit.csproj /output $(ProjectDir)CodeDomGoKit.brick.cs
In this case, it takes all the code from the CodeDomGoKit.csproj and packs it into a single file, CodeDomGoKit.brick.cs in its own project directory, which it then includes. This is basically the equivalent of going to References and adding CodeDomGoKit, but without the extra assembly, and that's rather the point.
Don't edit the brick file. Change the original source the brick file was made from. It will be regenerated automatically any time that happens due to the build event.
Coding this Mess
I've coded a much earlier version of this before but this one is far more suitable to my needs. It uses my
ParseContext class which I cover here. It's not actually parsing C#, but rather doing remedial tokenization of C# and throwing away extra tokens like whitespace and (non doc) comments. The only real gotcha is knowing when to throw away whitespace, and also to skip things like the inside of strings.
It's all in Minifier.cs, mainly
MergeMinifyBody(). Most of it is a large switch case doing things like this:
isIdentOrNum = false;            // the next token won't need a separating space
ocol += pc.CaptureBuffer.Length; // advance the output column by the token's length
Any time we set
isIdentOrNum, the next token will have a space put in front of it, otherwise no whitespace will be emitted.
ocol keeps track of our output column so we can break the line once we've exceeded the line width (recommended 150) - this keeps editors from choking on one single long line in the output.
Using Minifier.cs is easy, you just call this method:

void MergeMinify(TextWriter writer, int lineWidth = 0,
    bool defineFiles = false, params string[] sourcePaths)

Minifier.MergeMinify(output, linewidth, definefiles, inputs);

Here, output is your output TextWriter, linewidth is the desired line width (recommended 150), definefiles is true to insert #defines for each file included, like #define FOO_CS for "foo.cs", and inputs is an array of string filenames to process.
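For example, a minimal driver might look like this (just a sketch - the folder and file names are made up, and Minifier is the class from Minifier.cs described above):

using System.IO;

class BrickExample
{
    static void Main()
    {
        // Gather every .cs file under the (hypothetical) project folder...
        string[] inputs = Directory.GetFiles(
            @"..\CodeDomGoKit", "*.cs", SearchOption.AllDirectories);

        // ...and pack them into a single "brick" file.
        using (var output = new StreamWriter("CodeDomGoKit.brick.cs"))
        {
            Minifier.MergeMinify(output, 150, false, inputs);
        }
    }
}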
Finally, the big nasty mess you get back is just all the code from the project (except AssemblyInfo.cs) packed into one file. Use that instead of referencing the project. It bloats your binary size, but keeps you from needing to drag around a DLL. This is why I say it's a workaround for the lack of static linking.
- 16th December, 2019 - Initial submission
Just a shiny lil monster. Casts spells in C++. Mostly harmless.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655143.72/warc/CC-MAIN-20230608204017-20230608234017-00558.warc.gz
|
CC-MAIN-2023-23
| 5,405 | 53 |
https://chitter.xyz/@sc/102153788106159264
|
code
|
i find it odd that the only way you can SPEEN Cappy is if you either use single or dual joycons (connected or not)
and use motion controls???
i mean it's Fun but it kinda suffers from Waggle Syndrome after a bit
also it's hard to get going cause there are two or three other motions that evoke Cappy (homing, and one that makes him go upward)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655934052.75/warc/CC-MAIN-20200711161442-20200711191442-00089.warc.gz
|
CC-MAIN-2020-29
| 434 | 5 |
http://itopsmgr.blogspot.com/2010/08/direct-io-vs-cached-io.html
|
code
|
The simple difference between Direct I/O and Cached I/O:
Direct I/O : reads and writes move data straight between your application and the disk, bypassing the operating system's file cache. Every access pays the full cost of the disk, but latency is predictable and memory isn't spent on caching.
Cached I/O : reads and writes go through the file system cache in RAM, so repeated access to the same data is served from memory instead of the disk - much faster, at the cost of extra memory use.
Keep in mind, all of your data won't be cached all of the time. Only a small amount will be, so you won't get that high performance 100% of the time. If you want that, go with SSD.
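If you want to experiment with the difference yourself, here is a rough, Linux-only Python sketch of a direct read (the file name is an example; O_DIRECT requires block-aligned buffers, which is why a page-aligned mmap is used):

import mmap
import os

# Open for direct I/O: the kernel bypasses the page cache (Linux only).
fd = os.open("testfile", os.O_RDONLY | os.O_DIRECT)

# O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, 4096)
nread = os.readv(fd, [buf])
print(nread, "bytes read without touching the cache")
os.close(fd)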
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743732.41/warc/CC-MAIN-20181117185331-20181117211331-00292.warc.gz
|
CC-MAIN-2018-47
| 263 | 4 |
https://forum.renoise.com/t/new-tool-session-time-tracker/60728
|
code
|
After having a burning desire to track how much time I actually spent on each of my songs, and reading a few requests for such a thing in the forum, I put a time tracker tool together.
It’s a totally automatic time tracker for monitoring time spent on each song. If you have the tool installed, it will just work in the background, without altering your workflow in any way. If you choose to open the tool’s dialog, you’ll see the tool window below update in realtime based on your usage.
Saving the file will save the tracking information for the time you spent. If you open a file, fiddle a bit, and close it without saving, no tracking data will be altered.
The tool classifies time spent based on activity, for insight into which tasks take the most time.
To do this, the tool will create a report file alongside each of your opened XRNS song files:
- songname.xrns - Your Renoise song file
- songname.time-report.txt - A text file containing a human-readable report of how much time was spent on the song. Updates automatically whenever you save the song.
Renoise tool window:
- NEW: Data files now tucked away in tool folder
- Shorter report filename – FOO.tracked_time.report.txt -> FOO.time-report.txt
- Conflict management / data file merging
- Export button to save a time report to the folder of your choice
- Fix for potential Windows path issue
- Improved performance
- Bug fixes
- Automatically works in the background, unobtrusively keeping time records
- Classifies time spent based on activity.
- Automatically carries over old time tracking data when “Save Song As…”
- Stops counting automatically when Renoise loses focus
- Correctly handles computer suspend / lid closed by restarting time on resume
- Writes out textual report on song save
- Can only determine usage and time spent from the moment the tool is installed and data collection begins.
Download here: https://www.renoise.com/tools/session-time-tracker
I’m also finishing up another workflow tool that I’ll probably release in a couple of days, so keep an eye out. (Edit: released!)
Enjoy, and let me know how it goes!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358203.43/warc/CC-MAIN-20210227054852-20210227084852-00382.warc.gz
|
CC-MAIN-2021-10
| 2,118 | 25 |
https://forums.cabling-design.com/cisco/peer-delete-ike-delete-no-error-11822-.htm
|
code
|
I have a strange problem between a Cisco VPN client and a Cisco Concentrator 3020 (vpn3000-4.1.7.H-k9.bin) .
I created a new User Group on the concentrator, similar to an existing and working one. The main difference is in the IP address pool and the split-tunnel network list.
The new profile works fine, except for one thing. Every day, the first time I use it I have to connect twice. The first time, after authentication, I get disconnected. I try again immediately and it works fine. I can then disconnect and reconnect without problems after that... until the next day. (I haven't measured exactly how much time has to pass before I need to connect twice again.)
I have five profiles on this concentrator and I see this behavior only with this one. The problem occurs with different users and different client versions. I am testing with 4.8.00.0440. The problem is not related to the RADIUS server, since the disconnected sessions occur after a successful authentication.
Here are logs from a successful connection and a disconnected session. You can see a "PEER_DELETE-IKE_DELETE_NO_ERROR" appearing.
Anyone have a solution for this ?
Successful connection logs:
36 15:54:37.770 03/22/06 Sev=Info/5 IKE/0x6300002F Received ISAKMP packet: peer = 206.x.x.x
37 15:54:37.770 03/22/06 Sev=Info/4 IKE/0x63000014 RECEIVING
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00497.warc.gz
|
CC-MAIN-2023-50
| 1,302 | 8 |
https://discuss.ray.io/t/evaluate-trained-model-on-long-episodes/2075
|
code
|
I have an RLlib model that I've successfully trained on a custom environment, and now I'm looking to evaluate that model more comprehensively. I tried using a slightly modified version of
rollout.py, but it didn’t allow for parallelization (as discussed here). I then tried using the new parallel evaluation implementation provided by
Trainer._evaluate, and it seemed to work well until I scaled up the episode length. Using long episodes (~50M - 100M environment steps per episode) caused slowly increasing memory use until my machine crashed.
I don’t think my custom environment is the problem, I can run long episodes without the RL agent that have minimal memory use is and the memory use does not appear to grow with the number of environment steps executed.
Looking at Trainer._evaluate, I see that it uses
RolloutWorker.sample to run the episodes, which constructs and returns a sample batch. That batch will scale with the number of steps in an episode, and could be part of the problem. I’m not training the model, and I don’t need to interact with most of the data that would be included in that sample batch. All I really need is the total episode reward, and possibly the number of steps in the episode.
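To make that concrete, something like this hypothetical callback (an untested sketch - I haven't verified the exact signature against my Ray version) would capture what I need without holding onto full sample batches:

from ray.rllib.agents.callbacks import DefaultCallbacks

class EpisodeStatsCallbacks(DefaultCallbacks):
    """Record only the episode reward and length, nothing else."""
    def on_episode_end(self, *, worker, base_env, policies, episode, **kwargs):
        episode.custom_metrics["episode_reward"] = episode.total_reward
        episode.custom_metrics["episode_length"] = episode.length

# ...and then in the trainer config:
# config["callbacks"] = EpisodeStatsCallbacks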
I believe that
SimpleListCollector is involved in building this batch of experience, so it seems like one way of getting the logic that I want would be to sub-class
SampleCollector (as noted here in the docs). However, it seems like changing the behavior provided by
SimpleListCollector might change what data is passed to the model as it is executing the rollout.
Is there a better way to get parallelized evaluation that can also handle long episodes?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00052.warc.gz
|
CC-MAIN-2021-49
| 1,669 | 11 |
http://www.chemusa.com/6800_01.htm
|
code
|
The All-In-One Versatile

The Award Winning ChemBook™ 6800 is designed for those who need an All-In-One Portable Multimedia solution. With the 6800 you can use both the floppy drive and the CD-ROM at the same time without connecting any devices externally.

Encased in the elegant casing is the real workhorse of this system. Under the hood of the 6800 you will find an Intel® Pentium® processor with MMX™ technology at speeds up to 233 MHz and 512KB L2 Cache, as well as a 64-bit PCI video with Zoomed Video capabilities, EDO system memory up to 128MB, and the largest slim-size Hard Disk Drive

For expansion purposes, the 6800 uniquely features 3 PCMCIA slots, and is equipped with the standard input/output ports, along with a Fast Infrared Port for Wireless Communications.

If multimedia capabilities are in your decision to purchase a portable computer, you have made the right choice in the 6800. Standard features for the 6800 include built-in 20X CD-ROM, 16-bit Stereo Sound System with Hardware WaveTable, and RCA TV-Out Port. It is no coincidence that the Laptop Buyer's Guide and HandBook declared "That's a lot of value for

Find out more about the specifications of the ChemBook™ 6800!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00195.warc.gz
|
CC-MAIN-2018-30
| 1,203 | 20 |
http://codingsight.com/tag/performance/
|
code
|
In this article, we will focus on real-time operational analytics and how to apply this approach to an OLTP database. When we look at the traditional analytical model, we can see that OLTP and analytic environments are separate structures. First of all, traditional analytic environments need ETL (Extract, Transform and Load) tasks, because we need to transfer transactional data to the data warehouse. This type of architecture has some disadvantages: cost, complexity and data latency. In order to eliminate these disadvantages, we need a different approach.
In this article, we will discuss how different types of indexes in SQL Server memory-optimized tables affect performance. We will examine examples of how different index types can affect the performance of memory-optimized tables.
To make the topic discussion easier, we will make use of a rather large example. For the purposes of simplicity, this example will feature different replicas of a single table, against which we will run different queries. These replicas will use different indexes, or no indexes at all (except, of course, the primary keys – PKs).
Note that the actual purpose of this article is not to compare performance between disk-based and memory-optimized tables in SQL Server per se. Its purpose is to examine how indexes affect performance in memory-optimized tables. However, in order to have a full picture of the experiments, timings are also provided for the corresponding disk-based table queries, and the speedups are calculated using the most optimal configuration of disk-based tables as baselines.
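For a flavor of what such replicas look like, here is an illustrative definition (the table and index names are made up) of a memory-optimized table with one hash index and one nonclustered range index:

-- Hash index on the PK for point lookups; range index for seeks/scans.
CREATE TABLE dbo.SalesMO
(
    Id         INT NOT NULL
               PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL INDEX IX_CustomerId NONCLUSTERED,
    Amount     DECIMAL(10, 2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);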
There is often a need to create a performance indicator that would show database activity related to the previous period or specific day. In the article titled “Implementing SQL Server Performance Indicator for Queries, Stored Procedures, and Triggers”, we provided an example of implementing this indicator.
In this article, we are going to describe another simple way to track how and how long the query execution takes, as well as how to retrieve execution plans for each time point.
This method is especially useful in the cases when you need to generate daily reports, so you can not only automate the method but also add it to the report with minimum technical details.
In this article, we will explore an example of implementing this common performance indicator where Total Elapsed Time will serve as a metric.
Table indexing strategy is one of the most important performance tuning and optimization keys. In SQL Server, indexes (both clustered and nonclustered) are created using a B-tree structure, in which each page acts as a doubly linked list node, holding information about the previous and the next pages. This doubly linked structure makes it easy to read the rows of the index by scanning or seeking its pages from the beginning to the end - the default method, known as a Forward Scan. Although the forward scan is the default and widely known index scanning method, SQL Server also provides us with the ability to scan the index rows within the B-tree structure from the end to the beginning. This ability is called the Backward Scan. In this article, we will see how this happens and what the pros and cons of the Backward scanning method are.
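As a quick illustration (a sketch with a hypothetical table, assuming an ascending index on Id), a descending sort can be satisfied by scanning that index backward instead of sorting - check the execution plan:

-- With an ascending clustered index on Id, this ORDER BY can be served
-- by a Backward Scan of the index, avoiding a Sort operator.
SELECT TOP (100) Id, OrderDate
FROM dbo.Orders
ORDER BY Id DESC;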
I noticed that very few people understand how indexes work in SQL Server, especially included columns. Nevertheless, indexes are a great way to optimize queries. At first, I also did not get the idea of included columns, but my experiments showed that they are very useful.
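For instance, an index like this illustrative one (table, column, and index names are made up) lets a query read everything it needs from the index leaf level, with no key lookups:

-- The key column drives the seek; the INCLUDE columns ride along at the
-- leaf level, so the query never has to touch the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Total);

SELECT OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;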
In this article, we are going to touch upon the performance of table variables. In SQL Server, we can create variables that operate as complete tables. Perhaps other databases have the same capabilities; however, I have used such variables only in MS SQL Server.
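A minimal sketch of what this looks like (table and column names are illustrative):

-- A table variable behaves like a table scoped to the current batch.
DECLARE @RecentOrders TABLE
(
    Id        INT PRIMARY KEY,
    CreatedAt DATETIME2 NOT NULL
);

INSERT INTO @RecentOrders (Id, CreatedAt)
SELECT Id, CreatedAt
FROM dbo.Orders
WHERE CreatedAt >= DATEADD(DAY, -1, SYSUTCDATETIME());

SELECT COUNT(*) FROM @RecentOrders;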
We continue to analyze what is happening on our MS SQL Server. In this article, we are going to explore how to retrieve information about user performance: who is doing what, and how many resources are consumed.
I think the second part will be interesting for both database administrators and developers who need to understand what is wrong with the requests on the production server that used to work fine on the test server.
I recently encountered a problem - SVN went down on an Ubuntu server. I develop for Windows and do not have much experience with Linux. I googled the error, without success. The error turned out to be the most typical one (the server unexpectedly closed the connection) and did not indicate anything specific. Therefore, it was necessary to go deeper and analyze logs/settings/rights/etc.
Finally, I figured out the mistake and found everything I needed, but I had spent a lot of time. After solving this problem, I thought about how to reduce such uselessly spent hours and decided to write an article that will help people quickly get an understanding of unfamiliar software.
There is an information system that I administer. The system consists of the following components:
1. MS SQL Server database
2. Server application
3. Client applications
These information systems are installed at several sites. The information system is actively used 24 hours a day by 2 to 20 users at once at each site, so you cannot perform routine maintenance all at once. I have to «spread» SQL Server index defragmentation throughout the day, rather than defragmenting all the necessary fragmented indexes at one stroke. This applies to other operations as well.
In this post, I'd like to take a brief look at Query Performance Insight - a SQL Azure tool which will help you identify the most expensive queries in your database.
Query Performance Insight was announced in early October 2015. To understand what it is, let's think about how you usually learn that database performance has degraded. Probably, you receive emails from your clients, or it takes an hour to create a weekly report instead of several minutes, or maybe your application starts throwing exceptions.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891811352.60/warc/CC-MAIN-20180218023321-20180218043321-00078.warc.gz
|
CC-MAIN-2018-09
| 6,082 | 22 |
https://oogleblogs.com/2012/06/08/there-is-totally-no-security-for-mobile-devices/
|
code
|
There is totally no security for mobile devices
Whatever is your OS
If it is iPhone, Android or Windows
Nobody has supplied encryption software
So your SIM card and software
Can be easily hacked to download
Software to monitor your phone
Track everything you do
If you suspect something is wrong with your phone
Set it back to factory default
The only secure device belongs to RIMM
Which has an encrypted network for their email
But their voice is not secure
Everything can otherwise be compromised
Who is the cause of this loophole?
Attack me with technology
And I will uncover all your secrets.
– Contributed by Oogle.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257624.9/warc/CC-MAIN-20190524124534-20190524150534-00277.warc.gz
|
CC-MAIN-2019-22
| 623 | 18 |
https://sourceforge.net/directory/license%3Aother/environment%3Ax11/?sort=popular&page=9
|
code
|
OSI-Approved Open Source (114)
- Affero GNU Public License (1)
- Apache Software License (1)
- Artistic License (3)
- BSD License (10)
- GNU General Public License version 2.0 (76)
- GNU General Public License version 3.0 (11)
- GNU Library or Lesser General Public License version 2.0 (24)
- GNU Library or Lesser General Public License version 3.0 (2)
- MIT License (3)
- Mozilla Public License 1.0 (1)
- Mozilla Public License 1.1 (6)
- NASA Open Source Agreement (1)
- Nokia Open Source License (1)
- Qt Public License (2)
- zlib/libpng License (2)
- Public Domain (5)
- Creative Commons Attribution License (3)
- Linux (232)
Grouping and Descriptive Categories (203)
- 32-bit MS Windows (95/98) (21)
- 32-bit MS Windows (NT/2000/XP) (33)
- 64-bit MS Windows (16)
- All 32-bit MS Windows (80)
- All BSD Platforms (14)
- All POSIX (126)
- Classic 8-bit Operating Systems (1)
- OS Independent (58)
- OS Portable (22)
- Project is OS Distribution-Specific (1)
- Project is an Operating System Distribution (4)
- Windows (175)
- Modern (159)
- Mac (121)
- BSD (114)
- Other Operating Systems (32)
- Android (22)
- Audio & Video
- Business & Enterprise
- Home & Education
- Science & Engineering
- Security & Utilities
- System Administration
TMPCanvas is a set of Delphi's Components and Controls which implement a vector Canvas and some basic GIS tools.
Videod efficiently provides live or recorded video streams to one or multiple local clients for processing. It is released under the CeCILL-C license (see http://www.cecill.info/index.en.html), a French equivalent of the GNU LGPL.
Vim4J is a new fork of the Vim code with a GUI implemented in Java AWT code. The main project goal is to provide not only a standalone Java-based GUI Vim application, but also a Vim component suitable for embedding into a Java-based IDE.
The Virtual Environment Software Sandbox (VESS) is a suite of libraries for developing virtual reality applications in a portable manner with classes for many tracking devices, a simplified "scene graph", a set of "motion models" and audio support. Please note that all support has been moved to GitHub at https://github.com/ucfistirl/vess.
Here To Dominate The Virtual World With Your Help.
WSGUI develops user interface concepts for web services. The WSGUI standard employs GUIDD and similar techniques and works actively together with other software projects to support this standard.
WWIIOLinux, a project to get the MMOG WWII Online to run on Linux with Wine.
Simple Control Center for Debian and its derivatives By Waha project
Editing, Publishing, Audio and Video Productions!
When the Stars Fell is a 2D RPG that is written in C++ and uses the SDL library.
X11 developer's 'workbench' and lightweight toolkit API
XGKS is a level 2C implementation of the ANSI Graphical Kernel System (GKS) for use in a Unix environment with the X Window System. It supports the Fortran language binding and a C language binding based on the 1988 draft.
A simple and pluggable framework for XML data validation using XML Schemas to achieve clean separation of data validation code from business logic code for XML Requests and XML Responses. The framework can be easily plugged in any application (J2EE, w
You don't know XML? Yet you want to utilize its power? This is for you!
Control XMMS from a python script (layout sized for 'familiar' linux on the IPAQ or other handheld.)
XMicroSystems aims to be a BSD-based operating system for PCs and Macintosh systems. It needs to be cost-effective for everyone; more importantly, it must be easy to use.
Framework for building lightweight and easy to deploy SCADA-like distributed systems. Platforms and technologies: MSWIN, X11, Linux, Python, C++,CORBA
Simple, Fast, Advanced... ZintoriOS. Made in Wellington, New Zealand
File manager like explorer. With plugins.
calculator for avi-file streams
A mirror for CSW - Community SoftWare for Solaris.
bit-torrent transmission utility
A Borland Delphi/C++ Builder/Kylix component for freedb access. Provides full support for the most important forms and functions of freedb access.
The documentation for the UnifiedSessionsManager under the license "CCL-3.0-Attribution-NonCommercial-NoDerivs 3.0 Unported". For BASE package see http://sourceforge.net/projects/ctys.
dxflib is an opensource C++ library for reading and writing AutoCAD (R) DXF files. It provides the functionality to read and write many basic entities as well as information about layers and blocks. From the author of QCAD.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00630.warc.gz
|
CC-MAIN-2017-39
| 5,236 | 70 |
https://thesociablegeek.com/windows-phone-7/wp7-minute-episode-iii-camera-integration/
|
code
|
The Windows Phone 7 Minute is a show to discuss the features of the Windows Phone 7. In the show we will talk about things that are important to both Consumers and Developers. From Live Tiles, to Push Notifications, to cut-and-paste, we will talk about the things that are important to you. If you want a particular subject covered, please drop us a line.
Episode III : Camera Integration
In this episode we show you how Windows Phone 7 helps you do things more quickly by integrating common tasks. It also takes it a step farther by letting application developers place their apps where they are needed most.
Check out the video and see what I mean.
UPDATE: If you want to see how this is done in code. Head over to my post on DotNetDoc.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00671.warc.gz
|
CC-MAIN-2022-40
| 741 | 5 |
http://tedwise.com/2010/07/30/coda-notes
|
code
|
“Panic”, an Apple software developer, has released one of the nicest browser extensions I’ve ever seen. Coda Notes makes it very easy to markup a web page with your own comments and highlights. The annotated page can then be emailed to whomever you like. To use Coda Notes you’ll need the very latest version of Safari - 5.0.1.
Once you install it, Coda Notes adds a little leaf button to the Safari button bar. When you click it, a toolbar slides down and the current web page goes into edit mode. You can draw on the page using a pencil tool (in green, red or blue), you can draw using a highlighter (in yellow, purple or blue), you can change the text on the page or you can put sticky notes with text on top of the page.
When you’re done, click the Send button and the whole page rotates to reveal a postcard on the back. You can email the annotated page with comments to whomever you wish.
Here’s a quick view of an annotated page after it was received in email:
Coda Notes is already very slick and very useful, but a few flaws keep it from replacing Skitch for me. The first is that it doesn’t have a tool to draw boxes and arrows yet. The second is that you can only annotate and send the visible portion of the web page. It would be much more useful if you could scroll up and down the entire web page.
One more thing to keep in mind if you make use of the Coda Notes extension, all emails are routed through the Panic servers. Panic has a good reputation and state up front that they don’t keep images, but if you’re at all concerned about security, don’t email your images.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00451.warc.gz
|
CC-MAIN-2020-05
| 1,604 | 6 |
https://www.tr.freelancer.com/projects/mobile-phone/need-math-step-step-solver/
|
code
|
With my scholarship (called EXIST-Gründerstipendium, financed by the German Federal Ministry for Economic Affairs and Energy) I want to develop a mobile application about math for German students. Therefore I need a math step-by-step solver like Photomath or Wolfram Alpha, but only at high school level.
We work with the programming language Swift. So it would be nice if you are familiar with Swift.
Thanks in advance!
35 freelancers are bidding on average €2419 for this job
Hello, how are you? I have rich experience in mathematics and Swift. I am a computer engineering teacher. I can help you. Let me know your requirements in detail. Thank you.
Hi, I have gone through your project details. I am fully experienced with Java and Python, and also Swift. Contact me for further discussion. Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737238.53/warc/CC-MAIN-20200808021257-20200808051257-00183.warc.gz
|
CC-MAIN-2020-34
| 786 | 6 |
https://meta.stackoverflow.com/questions/262408/when-i-answer-product-specific-questions-i-include-links-to-better-resources-a/262411
|
code
|
When I answer product-specific questions, I include links to better resources for those products -- especially but not limited to help sites provided by the product supplier/vendor/producer.
Tonight, TheEvilPenguin (his choice of name, not mine) followed me through several such responses, and wiped those links out, calling them "other site advertisements". I resent the implication.
Further, I inserted whitespace in several questions, which increases readability and comprehensibility, as is well documented by years of graphic design research. This whitespace was also removed, without explanation. I very much disagree with this.
I find no way to send a message to TheEvilPenguin to ask him why he's doing this; only this meta-space is available...
So I ask...
Who watches the watchers?
Additionally, in response to this question's previous wording, I have been informed that StackOverflow is for "questions and answers about programming problems."
While that may be (or have been) the intent, and may once have been the focus, of this site -- I see a much broader range of questions in practice. It may be that many don't notice the product-specific questions, because they're watching for questions on their language of choice -- and that's fine -- but it leaves the product-specific questioners at sea.
Further, if this site really is for "programming problems" then it seems to me that questioners asking about different subjects (including but not limited to configuration and use of various software products) should indeed be pointed elsewhere... and again I'm left wondering why I've been spanked for doing so.
OpenLink staff, including but not limited to Virtuoso developers, are more active on our "home" sites than elsewhere -- and we (not necessarily me) are usually the best source for the product-specific answers sought. To analogize, asking random drivers how to switch your car from 2 to 4 wheel drive might eventually get you the right answer, but if you can speak with the people who made it, you'll usually get much more relevant and accurate guidance.
As to putting explicit instructions into all Answers, I have to wonder whether folks here have ever maintained software docs? Because software changes, these are moving targets, and the more places you post your docs, the more places you have to edit when changes are necessary -- and the more external sites the docs get echoed to, the more likely some will be missed, potentially leading to major issues for the user. For this reason among others, I believe that "the right thing" for the users, of this site and otherwise, is to link to the authoritative docs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817398.21/warc/CC-MAIN-20240419110125-20240419140125-00353.warc.gz
|
CC-MAIN-2024-18
| 2,641 | 11 |
https://francomdesigns.com/about-me
|
code
|
Currently I'm a freelance designer and web developer, who works with a variety of clients and on many diverse projects.
I work to create innovative solutions that inspire, and adopt unforgettable relationships between brands and their clients. With a focus on branding and advertisement, I strive to create usable and polished products through passionate and deliberate design.
As an independent designer I can take on projects of all kinds, which allows me to tap into all of the experience I’ve accrued through the years. I design brochures, menus, business cards, books, annual reports, PowerPoint and responsive websites, applications—anything my clients need.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00727.warc.gz
|
CC-MAIN-2023-14
| 668 | 3 |
https://docs.oracle.com/cd/E19082-01/819-0690/6n33n7fi4/index.html
|
code
|
The use of static executables is limited. See Static Executables. Static executables usually contain platform-specific implementation details that restrict the ability of the executable to be run on an alternative platform, or version of the operating system. Many implementations of Oracle Solaris shared objects depend on dynamic linking facilities, such as dlopen(3C) and dlsym(3C). See Loading Additional Objects. These facilities are not available to static executables.
To create a static executable use the -d n option without the -r option.
$ cc -dn -o prog file1.o file2.o file3.o .....
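As a rough illustration of the dlopen(3C) point above (a sketch, not taken from the documentation), the following program works as a dynamic executable but has no runtime linker to call on when built as a static executable:

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* In a dynamic executable this loads libm at run time; a static
     * executable carries no runtime linker, so the facility is unavailable. */
    void *handle = dlopen("libm.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}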
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887832.51/warc/CC-MAIN-20180119065719-20180119085719-00154.warc.gz
|
CC-MAIN-2018-05
| 595 | 3 |
https://kadelsberger.com/2021/02/facebook-vs-apple/
|
code
|
Why is Apple releasing a privacy-focused update and why is Facebook spending enormous amounts of money to object? It affects the core of both of their business models.
Apple has positioned itself as the privacy champion in the digital space. I do believe this will be a winning bet for the company to make. And much like other seismic shifts Apple makes expect the other tech companies to follow suit after Apple has set the standard. Also, while this may seem like a moral high ground move, it is ultimately a business decision.
With the release of iOS 14.5 Apple is taking the first big shot in the tech world against the way the system of internet advertising has been run. Here is a summary:
-iOS is on somewhere between 40-60% of the smartphone market in the US.
-The way iOS addresses things will affect how others in the space behave.
-iOS14.5 has a privacy update that does not allow apps to track you unless you opt-in.
-Previously apps would track you by default, now they require permission to track from the default.
-Facebook and the internet have almost always worked on an opt-out model, so you were by default being tracked.
What it’s not:
The end of Facebook advertising. You can still target those who are already your customers with permission. This primarily affects third party advertisers like companies that most people have never heard of. Facebook will still have lots of data on its users to use for targeting but third-party sites may have less info to work off of.
What the results of this going to be?
More paid apps, less targeted advertising, more privacy online, more income for Apple via the app store and less income for Facebook and other digital advertisers. Delayed results and less accurate results from ad campaigns.
What to do?
If you are running eCommerce conversions or app conversions, read up more on Facebook’s news releases. If you are running web conversions, be sure to verify your domain. Consider investing in Email Marketing and Google ads as we are unsure of how much damage this will do to Facebook ad campaigns.
For further reading:
Facebook Releases- https://www.facebook.com/business/help/331612538028890?id=428636648170202
Apple Releases- https://developer.apple.com/app-store/user-privacy-and-data-use/
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00162.warc.gz
|
CC-MAIN-2023-06
| 2,263 | 17 |
http://birdersdiary.com/Support/Product-Forums/aft/19723
|
code
|
The World's Premier Listing Software
for Birders and Naturalists!
Celebrating 24 Years Serving Birders
and Naturalists around the World!
I am trying to figure out how to combine two different checklists. For example, I am taking a trip to Australia, but only visiting Northern Territory and Queensland. Is there a way to combine the two checklists into a single one? I tried creating a new checklist (QU+NT) with parents being each state, but it returned "no data found". Maybe I missed a step?
I used to be able to do this with Avisys using Z-list. Hoping there is a way to do it in BD. I am using 4.0.117
Of course. I think you went about it in reverse.
Create a new location as a child of Australia; name it "QU+NT".
Add this new location as a parent to Queensland and Northern Territory locations.
That will do it.
(I'll have further questions once it's confirmed that I'm clear so far.)
Thanks for posting.
No, the latter; this is instead just to create a location filter for the checklist.
Let me know if I can assist further.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247488490.40/warc/CC-MAIN-20190218220415-20190219002415-00614.warc.gz
|
CC-MAIN-2019-09
| 1,033 | 14 |
https://www.premiumdumps.com/nokia/nokia-4a0-c02-dumps
|
code
|
How does an IOM3 improve the queue usage over an IOM2 in an Alcatel-Lucent 7750 SR? (Choose two)
Click the exhibit button below.
Based on the configuration of the network policy (below), what will be the forwarding class associated with an MPLS-encapsulated customer packet that arrives on a dot1Q-encapsulated network port 1/1/4:1 on P1 with the following characteristics:
EXP value = 6
DSCP value = cs1
Dot1p value = 3
Which of the following statements regarding the default scheduler in the Alcatel-Lucent 7750 SR are TRUE? (Choose two)
Which of the following statements regarding scheduling are TRUE? (Choose two)
Click the exhibit button below. A network operator has configured a network-queue policy to map forwarding classes to queues, as shown in the exhibit below. Based on the default scheduling behavior of the Alcatel-Lucent 7750 SR, in which order will packets be serviced?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00136.warc.gz
|
CC-MAIN-2022-21
| 885 | 9 |
https://translate.wordpress.com/projects/wpcom/themes/entrepreneur/vi/default/?filters%5Bstatus%5D=either&filters%5Boriginal_id%5D=207201&sort%5Bby%5D=translation_date_added&sort%5Bhow%5D=asc
|
code
|
Translation of Entrepreneur: Vietnamese Glossary
71 / 86 Strings (82 %)
Note: These translations will only be activated on WordPress.com when 85% of the strings have been translated. Currently only 82% are translated.
Validators: Dat Hoang, Duy, Philip Arthur Moore, and Tony, chief at WooRockets.com. More information.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652235.2/warc/CC-MAIN-20230606045924-20230606075924-00262.warc.gz
|
CC-MAIN-2023-23
| 530 | 9 |
https://community.anaplan.com/t5/Idea-Exchange/Introduce-sequential-lists-Array-or-Linked-list/idi-p/46175
|
code
|
As a model builder, I would like to make use of sequential lists (array or linked list) to avoid circular references. This can be useful in many calculations.
Native functions like PREVIOUS() or NEXT() can currently only be applied to line items that are on the time dimension. Such functions should also work for a sequential list, without causing a circular reference. Theoretically, you don't have a circular reference if the list is properly sequenced with a PREVIOUS and NEXT item.
At the moment, if I require this, I map the original list item to a date, do the calculations on that "date" item, and then map it back to the original list item - e.g. for an allocation algorithm allocating demand based on margin using constrained capacity buckets. The workaround is, however, limited, as you can only use one time dimension per module/line item.
The content in this article has not been evaluated for all Anaplan implementations and may not be recommended for your specific situation.
Please consult your internal administrators prior to applying any of the ideas or steps in this article.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670512.94/warc/CC-MAIN-20191120060344-20191120084344-00468.warc.gz
|
CC-MAIN-2019-47
| 1,091 | 5 |
https://www.eyeonspain.com/blogs/luislopezcortijo.aspx?month=20109
|
code
|
Gibraltar from Spain
07 September 2010
Published at 04:49 Comments (3)
Here, I want to show you an unusual image: a plane preparing to land. Sometimes, planes land over the front of Gibraltar --like in this picture--; but, usually, they land over the back of the Rock --the reason tends to be the bad weather, the wind--.
"A plane flying over Gibraltar", by Luis Lopez-Cortijo
This is the plane that is flying to the Airport of Gibraltar (on the left side of the picture). Here you cannot see any cloud ("Montera") on the top of Gibraltar, perhaps because the wind changed to Poniente. Here, in this area, the wind is able to change several times in a day.
"La montera de Gibraltar", by Luis Lopez-Cortijo
This is another view of Gibraltar --although it happened the same day--. In this picture, you can see the typical "Montera" (= cap) that you can see over Gibraltar when the wind is named "Levante". It does not matter if the weather is cloudy or clear --like this day--, because the "Montera" (a solitary cloud) will be there --well... when the weather is cloudy, that special cloud (Montera) will be mixed with the rest; but... it will be there--. So, I can tell you that when the wind comes from Levante, the planes tend to land over the front of Gibraltar and, when the wind comes from Poniente, the planes tend to land over the back of the Rock.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120844.10/warc/CC-MAIN-20170423031200-00105-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,353 | 8 |
https://meta.stackexchange.com/users/135806/eric-j
|
code
|
VP of Engineering at Strategic Vision
Email: my first name at my last name dot us
Science Fiction author
The Gods We Make
The Gods We Seek
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531335.42/warc/CC-MAIN-20210122175527-20210122205527-00183.warc.gz
|
CC-MAIN-2021-04
| 335 | 6 |
https://www.dk.freelancer.com/projects/php-graphic-design/integrate-design-with-oscommerce/
|
code
|
Want someone experienced to integrate a design into osCommerce for me.
Should be integrated with the latest release:
osCommerce Online Merchant v2.2 Release Candidate 1
A ready design (NOT TEMPLATE)
Experience in similar work
PHP, XHTML and CSS knowledge is required.
You can use STS:
[url removed, login to view],1524
You must have experience with oscommerce and design.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655554.2/warc/CC-MAIN-20191014223147-20191015010647-00449.warc.gz
|
CC-MAIN-2019-43
| 368 | 9 |
https://www.minty95.com/flashing-stock-android-on-a-lineage-phone-using-linux/
|
code
|
Flashing stock Android on a Lineage phone using Linux : I’ve been running a Pixel 3 with Lineage 19 for the last couple of years now. But I needed to re-flash it back to stock Android in order to sell it.
Now if you are already running Lineage, this means you're fairly savvy, so this should be a walk in the park for you.
This is only for Google Pixel Phones
Rather than doing this with ADB commands in a terminal, which I haven't done for yonks, I did it using the Android Flash Tool in Google's Chrome browser. https://flash.android.com/
It's fairly easy, but the program stopped and lost connection every time the phone booted into Fastboot mode, until I sussed out why. Hence this quick post to help you.
In Chrome open https://flash.android.com/
Follow the instructions on screen, allowing ADB access, and enable USB debugging and OEM Unlocking (to be done on your phone).
Plug in the phone and select your device and just follow the instructions. It’s very clear and easy.
Again, I tried this a few times. Every time my Pixel booted into Fastboot mode it stopped, and Chrome lost connection to it.
Here’s how I fixed this :
I'm running Arch Linux, so I just ran sudo pacman -S android-tools (on Ubuntu / Debian, try sudo apt-get install android-tools-adb android-tools-fastboot), as I bet this will correct the same problem.
Once installed, just restart https://flash.android.com/, choosing again your device and build. Start the update procedure. This time it should finish the update without getting stuck.
Flashing stock Android on a Lineage phone using Linux : So, nothing too complicated; it seems that Linux needs the Android tools installed to complete the flash. Of course, if need be, you can uninstall them afterwards by running sudo pacman -Rns android-tools
I’ve been using Linux for the last four or five years now. Far better than Windows. Just sometimes you need to suss things out. Here’s a post that I wrote about using Cron to back up my Files & Folders bit.ly/2UXUDI8 or adding a second Yubikey https://www.minty95.com/yubikey-u2f-2fa-adding-a-second-key/
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00779.warc.gz
|
CC-MAIN-2024-18
| 2,070 | 14 |
https://forum.syncthing.net/t/excessive-ram-usage-during-initial-scan/15135/11
|
code
|
I currently have syncthing 1.5.0 ARM installed on Raspberry Pi 1 b+ with 512M of RAM and I am trying to sync 800k files about 1Tb in size. I have had it running for 5 days and syncthing is already using 1.7Gb+ of RAM. The webui is super slow (takes tens of minutes just to load some basic info) and I can’t see what it is doing. The other clients can’t connect to it because of i/o timeout errors. The CPU usage is less than 10-20%, probably because it is waiting for io and swap.
Is it normal for syncthing to use that much ram during scans or syncing? I had it running for half a year now and at some point it was (slowly) managing to have about 700Gb of files in sync, but things broke in the last 2 months or so (and I have added about 270Gb of data since) and now it doesn’t look like it is working at all even after leaving it alone for weeks. By the looks of things, it had managed to sync 3 smaller folders and is struggling with a large one with 900Gb of data. I have database tuning set to small (and db is about 500Mb in size) and I have changed all folders to random sync order.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141191511.46/warc/CC-MAIN-20201127073750-20201127103750-00071.warc.gz
|
CC-MAIN-2020-50
| 1,096 | 2 |
https://www.hackyourrussian.com/post/learn-russian-alphabet
|
code
|
Updated: Nov 8, 2020
So, you have decided to learn the Russian language. My congratulations!
The first thing you need to do is to familiarize yourself with the Russian alphabet. The Russian alphabet can look weird and daunting the first time you see it. But don't worry! I will walk you through it, and we will learn words that will help you remember the Russian letters and their sounds. You will see that some Russian letters are actually identical to their English equivalents.
It is not that hard to learn, especially since Russian and English both come from the same Indo-European family. There are indeed some letters which look and sound the same as in English. Though, there are also some false friends, like the letters В (equivalent to an English V) and Р (equivalent to an English R). Have I intrigued you? Well, that is the purpose.
Let's first look carefully through the whole list of Russian letters. Try to guess how they might sound. By the way, there are 33 letters in the Russian alphabet.
Ok, how are you so far? Hope you are not discouraged. Let's analyze these letters by groups:
Russian letters that are (almost) the same.
А а - Pronounce it like the English "a" in words like "bar" or "partner".
К к - Pronounce it like the English "k" in "kit" or "kayak".
М м - Pronounce it like the English "m" in "mother".
O o - When it is a stressed vowel, it is pronounced like the "o" in "bog". If un-stressed, it should be pronounced more like the letter "a".
Т т - Pronounce it like the "t" in "table". (Please note that its hand-written and italic form is written this way: "т" - yeah, I know it looks like the English "m".)
Russian letters that look like English letters but sound different.
В в - No, it is not the English "B". It is rather an equivalent to the English letter "v" and pronounced like the "v" in "vacation".
Е е - No, it's not the English "E". It sounds like the "ye" in "yes".
Н н - No, it's not the English "H". It is rather an equivalent to the English letter "N" and pronounced like the "n" in "nobody".
Р р - No, it's not the English "P". A common mistake of people who don't know the Russian alphabet is to read the Russian word "Ресторан" as "pectopah". Have you guessed what this word is? Ресторан = restaurant in English. So, the Russian letter "Р" is rather an equivalent to the English letter "R". It is pronounced like the "r" in "rabbit", but it is rolled (with a Russian accent).
С с - No, it's not the English "C". It is rather an equivalent to the English letter "S". It is pronounced like the "s" in "stay".
У у - No, it's not the English "Y". This letter should be pronounced like the "oo" in "look" or "moon".
Х х - No, it's not the English "X". It is pronounced like the "h" in "hat".
Russian letters that look strange, but have familiar sounds
Б б - Well, this letter looks almost like its English equivalent in its lower case form -"b". It is pronounced like the "b" in "bar".
Г г - This one is equivalent to the English letter "g". It is pronounced like the "g" in "get".
Д д - Equivalent to the English letter "d". It is pronounced like the "d" in "day".
З з - No, it's not a number 3. This letter is equivalent to the English letter "z". It is pronounced like the "z" in "zap".
И и - This letter is sometimes equivalent to the English letter "i", the short "ee" sound. Pronounced like the "i" in "mix". (Please note that the hand-written form of "и" looks a little like the English "u".)
Л л - Equivalent to the English letter "L". It is pronounced like the "l" in "letter".
П п - Equivalent to the English letter "p". Pronounced like the "p" in "parrot".
Ф ф - Equivalent to the English letter "f". Pronounced like the "f" in "father".
Э э - Pronounced like the "e" in "Ted".
Russian letters and sounds that don't exist in English
Ю ю - This letter is pronounced exactly the same as the English pronoun "You".
Я я - It is pronounced like a combination"ya" in "yard".
Ё ё - It is pronounced like "yo" in "your". (please, note that nowadays this letter is often written simply as Е е. Russian people are lazy)
Ж ж - It is pronounced like "s" in "pleasure"
Ц ц - This one is similar to the "ts" sound in "sits" or "its".
Ч ч - It is pronounced like the "ch" in "chair".
Ш ш - It is pronounced like the "sh" in "shout".
Щ щ - It is pronounced like the "sh" in "shit"; you should put your tongue on the roof of your mouth. You may find it difficult to differentiate "ш" and "щ".
Ы ы - It is pronounced like "i" in "sit". (say it with your tongue slightly back in your mouth.)
Й й - This letter is used to form diphthongs. So "oй" is like the "oy" sound in "toy" or "aй" in "sight".
These letters have no sound on their own, but are still considered letters. There have been some attempts throughout history to replace these letters with an apostrophe, but it didn't happen. So, essentially what you need to know is that they basically serve almost the same role as an English apostrophe, but with certain peculiarities.
Ъ ъ - The 'Hard Sign'. It is rarely used in words. It indicates a slight pause between syllables. Examples of words: подъезд, объявление, съёмка (without a slight pause these words would sound completely different and people might not understand what you mean).
Ь ь - The 'Soft Sign'. It makes the previous letter 'soft'. Think of the "f" sound in the word "few".
This is it! I guess you're wondering now how exactly to memorize the Russian letters. Well, there is a very effective way to do this. Instead of learning a bunch of random words that start with each letter, I recommend you watch the video below, where I provide some interesting words that reflect Russian culture and mentality. This way you will have some vibrant associations with each letter and get some insights into the life of Russian people.
Are you ready? Поехали!
Hello! My name is Mila and I am a founder of Hack Your Russian language platform. You can find me here:
Patreon - exclusive materials
Do you want to get a free trial Russian lesson, consultation with a coach and lots of great learning materials? Click here. Don't miss it!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00449.warc.gz
|
CC-MAIN-2021-10
| 6,242 | 49 |
https://www.uni-life.nl/job-board/10129
|
code
|
Ready to lead, disrupt and reinvent the sleep industry? We are Emma - The Sleep Company. Created in 2013, we are now the world's largest D2C sleep brand, available in over 30 countries and recommended by many consumer associations in EMEA, APAC, and the Americas. We're pushing the boundaries of technology to transform the world of sleep, and we want your help to pull it off. We're a highly ambitious, hard-working team that pushes you to produce your best work yet. We focus on how we approach problems, we chase growth, and we set ambitious goals. Want your ideas to have an impact and your career to grow? Then Emma is the right place for you.

What you'll do:
- You will be part of our Product Success Team, driving the product marketing & communications for one of our key categories - mattresses, beds, or accessories.
- You ensure our product content is engaging, crisp, and on point to help drive customer interest in Emma's products, while helping customers easily choose the best product within our portfolio for their needs.
- You apply a customer-centric and pragmatic approach in your projects, and leverage your proactivity to propose and implement innovative approaches that take us to the next level in our product marketing.
- You will learn to work with all the tools in the box: pricing and positioning, UX, new marketing channels, creative assets, etc.
- You will work closely with many customer-facing teams, e.g. Country teams, Performance Marketing, Brand, UX, CRM, Customer Excellence, etc.
- You will grow immensely as a professional: you will take responsibility early on, learn what drives hyper-growth, and rapidly build your entrepreneurial skill set.

Who we're looking for:
- Previous experience in product management, marketing, business development, or sales - but anyone with a knack for the above should apply.
- Proven ability to work cross-team and to manage multiple stakeholder needs and expectations.
- Ability to juggle multiple priorities and effectively deliver in a fast-paced, dynamic environment.
- An international background that allows you to look at opportunities from multiple perspectives.
- A passion for developing brands from a holistic perspective.
- You are entrepreneurial by nature and see opportunities everywhere.
- You're fast, pragmatic, and proactive, with high energy and a can-do attitude.
- You love to work with others, and others love to work with you.
- You are fluent in English.

What we offer:
- A combination of personal and company growth to accelerate your career and help you reach your goals.
- The chance to work on exciting and challenging projects either independently or as part of a dedicated, international team.
- Responsibility and decision-making authority from day one - you'll create an impact with new, innovative ideas and help shape our company DNA.
- The opportunity to work with and learn from experts in diverse fields, and get to know your team members at exciting company events.

Become an Emmie: Emma is transforming the world of sleep - and we want the highest-performing people to help us pull it off. We want you, but only if you're willing to go all in. Only if you're willing to question, disrupt, innovate, and create from the ground up. We proudly celebrate diversity. We are an equal-opportunity employer committed to promoting inclusion in our workplace. We consider all qualified applicants for employment without regard to race, ethnic origin, religion or belief, gender, gender identity or expression, sexual orientation, national origin, disability, or age.
Our aim is to get back to you in a couple of days; however, we are currently receiving a large number of applications, and this might lead to a delay in the process. We will get back to you as soon as possible!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648322.84/warc/CC-MAIN-20230602040003-20230602070003-00701.warc.gz
|
CC-MAIN-2023-23
| 3,723 | 1 |
https://wiki.tds.tieto.com/pages/diffpagesbyversion.action?pageId=26608006&selectedPageVersions=2&selectedPageVersions=1
|
code
|
- The whole environment is running in a public cloud, and access to the internal network is not allowed from there
- Recommended solution: a master/agent setup, where the master runs in the public environment and an agent running on a server located in the private network performs the needed tasks in the internal/private networks. The Jenkins agent opens an active connection from the internal network to the internet-accessible Jenkins master via the recommended JNLP port tcp/9000 and keeps listening for builds/jobs. NO direct or NAT network connection is required from the internet to the internal network. It is a secure and simple solution.
[Gliffy diagram: Jenkins public master and internal agent]
- Jenkins master running in the public environment and listening on JNLP port tcp/9000
- a firewall opening for port tcp/9000 from the source agent IP(s) in the internal network towards the internet in general (destination 0.0.0.0/0); reachability can be verified with the sketch after this list
- servers running in the internal network(s) hosting the Jenkins agent service(s), with agent service auto-start to ensure automatic reconnection to the Jenkins master at any time
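To verify that firewall opening from an agent host, a quick TCP connectivity check is enough. The following is a minimal Python sketch; the master hostname is a placeholder for your own.
import socket

MASTER_HOST = "jenkins-master.example.com"  # placeholder: your public Jenkins master
JNLP_PORT = 9000                            # recommended JNLP port from above

try:
    # A successful plain TCP connection means the firewall path is open.
    with socket.create_connection((MASTER_HOST, JNLP_PORT), timeout=5):
        print("OK: the master's JNLP port is reachable from this host")
except OSError as err:
    print("FAILED: cannot reach the master's JNLP port:", err)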
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145746.24/warc/CC-MAIN-20200223032129-20200223062129-00028.warc.gz
|
CC-MAIN-2020-10
| 1,049 | 6 |
https://coda.io/@micah-cotton/mapbox-pack-guide
|
code
|
The goal of this pack is to mirror various Mapbox APIs as closely as possible so that you can meaningfully integrate geospatial data into your docs and manage Mapbox projects. The formulas provided interact with Mapbox web services across their APIs.
The quickest way to learn how to use this pack is to copy this doc and explore the examples. Before you can interact with the formulas, you'll need to supply your own Mapbox credentials. The pack mirrors the Mapbox API Playgrounds and includes tool-tips with documentation for most parameters.
This design is closer to a full-fledged client, which allows for a lot of functionality and flexibility; as a consequence, some formulas have a LOT of parameters compared to other packs. Here are some tips to flatten the learning curve with this pack and get up and running quickly.
Many formulas can be run with zero or only a few required parameters
Sensible defaults are provided for most parameters
If a formula is not behaving as expected, be sure that you are setting the desired parameters to override the defaults.
The Map() formula is a great example because it will work without any parameters, but it centers on the default location.
Use named parameters whenever possible.
Formulas in this pack take as many as 19 parameters; using named parameters makes it a lot easier to understand what each value is doing.
Take advantage of autocomplete
Small subsets of possible values have autocomplete enabled
Dynamic options like Styles and Tilesets from your studio will autocomplete
Some formulas that require a single coordinate pair also have a search parameter you can use instead of latitude/longitude parameters. This allows you to search from within the formula editor.
A caveat is that Coda autocomplete struggles with input that isn't letters, like numbers. I am hoping this improves, but for now this feature is better suited to looking up cities and points of interest than street addresses.
Use the GetOptions() formula for parameters with a large set of specific values, such as country codes.
Availability is indicated in parameter documentation
Mapbox makes use of public and secret tokens alike so that access can be scoped to the desired resources only. You'll want to set up this pack with a secret token, and you can add scopes as necessary depending on the formulas you are using. The StaticImage and Map formulas cannot use this token because it would be exposed in a URL. There are three ways to address this.
1) The default behavior is to use your account's default public token and requires no action on your part (provided the secret token used to set up the pack has the tokens:read scope enabled).
2) Create a dedicated public token especially for use within Coda docs. The minimum scopes it will need for the Map and StaticImage formulas are styles:tiles and styles:read; recommended for Free and Pro Coda users.
3) Alternatively, you can leverage temporary tokens (see the sketch after this list). These tokens expire after an hour and can be refreshed accordingly. The provided GetToken formula requires your secret token to have the tokens:write scope enabled and creates a token that is valid for 15 to 60 minutes. You can connect a control like the one below to an Automation that generates a new temp token hourly; recommended for Team and Enterprise users with public-facing docs.
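For option 3, the refresh can also be scripted outside of Coda. Below is a minimal Python sketch of creating a short-lived token via the Mapbox Tokens API; the username and secret token are placeholders, the scopes mirror option 2, and the exact payload shape is an assumption to check against Mapbox's Tokens API docs.
import datetime
import requests

MAPBOX_USER = "your-username"  # placeholder account name
SECRET_TOKEN = "sk.XXXX"       # placeholder secret token with tokens:write scope

# Temporary tokens must expire within the hour; request 30 minutes here.
expires = (datetime.datetime.utcnow()
           + datetime.timedelta(minutes=30)).strftime("%Y-%m-%dT%H:%M:%SZ")

resp = requests.post(
    "https://api.mapbox.com/tokens/v2/" + MAPBOX_USER,
    params={"access_token": SECRET_TOKEN},
    json={"expires": expires, "scopes": ["styles:tiles", "styles:read"]},
)
resp.raise_for_status()
print(resp.json()["token"])  # short-lived token that is safe to expose in a URL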
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00557.warc.gz
|
CC-MAIN-2023-50
| 3,301 | 22 |
http://msagent.webs.com/programs
|
code
|
Here you can download some programs from other websites.
MASH (Microsoft Agent Scripting Helper) is a scripting program that can animate Microsoft Agent. Microsoft Agent can perform certain actions, like speaking, thinking, and moving across the screen.
CyberBuddy is a freeware program. The animated character can tell you a joke or the thought of the day, read the news, check the weather, read text, and do many other things. There are also some premium features that are not available in the free version of this program.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323588.51/warc/CC-MAIN-20170628083538-20170628103538-00348.warc.gz
|
CC-MAIN-2017-26
| 498 | 3 |
http://jennyharp.com/section/276782.html
|
code
|
I can remember the day my dad brought home our first computer. From that point forward I began creating digital objects; mostly drawings in Microsoft Paint, Kid Pix, and Flying Colors. Those drawings are now lost to crashed computers and outdated software, while many of my drawings with crayon and marker remain tucked away in my parents' garage. I ask myself - what else has been lost to the intangibility of digital-ness? And can we stop it?
The amount of information and the speed at which it is changing is fascinating and overwhelming. The capacity of our computer systems to process this information far exceeds the limits of our brains, making the systems of processing and organizing seem foreign and abstract. The anxiety caused by this information overload compels me to try and make sense of these systems by slowing things down, by recreating digital actions by hand. I work within a digital universe that I can only attempt to imagine through physical objects.
At times my need to archive this digital world is genuine and results in sincere attempts to create physical records of the software and programs we use. But this cloud full of information, data, systems, and images is so elusive and mysterious that the frustration of creating a genuine archive encourages me to pull from software and systems at will, mashing them up in ways that are both generative and degrading, resulting in quasi-scientific, semi-fictitious images and installations that investigate possible histories and cultures that this invisible world might hold.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.20/warc/CC-MAIN-20150521113210-00004-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 1,550 | 3 |
http://www.tivocommunity.com/community/index.php?threads/move-drive-from-tivo-hd-to-tivo-premiere-4.501892/
|
code
|
I just ordered a new TiVo Premiere 4 from TiVo. From what I understand, the Premiere 4 comes with a 500 GB drive but can only record 75 hours of HD programming. My current TiVo HD has a 500 GB drive (I installed this drive myself to increase recording time from 20 hours to 180) and has room to record about 180 hours of HD programming. Anyone know why the difference? I ask because I just ordered the new Premiere 4 and planned to move the drive from my TiVo HD to the Premiere. Also, would I be able to simply swap the TiVo Premiere 4's drive with the one from my TiVo HD?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00066.warc.gz
|
CC-MAIN-2017-22
| 580 | 1 |
https://www.socscistatistics.com/confidenceinterval/default2.aspx
|
code
|
Single-Sample Confidence Interval Calculator
This simple confidence interval calculator uses a t statistic and sample mean (M) to generate an interval estimate of a population mean (μ).
The formula for estimation is:
μ = M ± t(s_M)
where:
M = sample mean
t = t statistic determined by confidence level
s_M = standard error = √(s²/n)
As you can see, to perform this calculation you need to know your sample mean, the number of items in your sample, and your sample's standard deviation. (If you need to calculate mean and standard deviation from a set of raw scores, you can do so using our descriptive statistics tools.)
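If you prefer to compute the interval in code, here is a minimal sketch using SciPy (the sample statistics are made up for illustration):
import math
from scipy import stats

M, s, n = 42.0, 5.0, 30  # made-up sample mean, standard deviation, and size
confidence = 0.95

s_M = math.sqrt(s**2 / n)                        # standard error
t = stats.t.ppf((1 + confidence) / 2, df=n - 1)  # two-tailed t critical value
print("CI: %.2f to %.2f" % (M - t * s_M, M + t * s_M))
For these numbers the 95% interval is roughly 42 ± 1.87.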
Please enter your data into the fields below, select a confidence level (the calculator defaults to 95%), and then hit Calculate. Your result will appear at the bottom of the page.
Please enter your values above, and then hit the calculate button.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00343.warc.gz
|
CC-MAIN-2024-18
| 864 | 10 |
http://www.ludumdare.com/compo/2011/08/22/black-hole-a-synopsis/
|
code
|
My game for this compo was Black Hole, a game where the objective is to escape both the horde of Drones who cornered you and the black hole they cornered you against.
Fortunately for you, a long-dead alien race left a platform here, which feeds off of the emissions of the black hole and broadcasts energy to all nearby ships. You have moved in close to this platform and the black hole itself, and thus avoided the gigantic Drone Mothership which was chasing you. However, the smaller Drone forces can and did pursue you, so now you must hold out against them to recharge your Jump Engines, allowing you to jump to beyond lightspeed and escape to your home.
The coding process was quite frantic. I had to write the entire game engine, and at one point I ended up rewriting and reintegrating the entire collision engine, as I had failed to notice that my method of detecting overlapping rectangles was not accurate once they were rotated. Fortunately, I'm quite skilled with circle-based collision, and it only took about 20 minutes to switch and integrate.
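For anyone curious why the switch was so quick: two circles overlap exactly when the distance between their centers is at most the sum of their radii, and rotation never matters. A minimal Python sketch of the idea (not the game's actual code):
import math

def circles_collide(x1, y1, r1, x2, y2, r2):
    # Overlap iff center distance <= sum of radii; rotation is irrelevant.
    return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

print(circles_collide(0, 0, 5, 8, 0, 4))  # True: distance 8 <= 5 + 4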
Another thing to note: I was high on Oxycodone and other painkillers the entire time, having gotten my wisdom teeth removed on Thursday. This caused many dumb mistakes, the majority being simple math-related ones. I persevered through it though, and ended up coming out with what I think is a fun game!
Plans for the future: to continue to develop this game. I want to add more enemies, powerups, and new game modes.
Timelapse Part One is here.
Timelapse Part Two is here.
I’ll be writing a Post Mortem at some point… Thanks for the fun times all! Cheers!
Edit: 1814 lines of code, for those of you who care.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163839270/warc/CC-MAIN-20131204133039-00033-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 1,674 | 9 |
http://redsaucetoronto.com/29c9ce29/442038.html
|
code
|
I have a Samsung s8+ with the latest pie update.
About a couple of weeks ago I started having an issue where Pandora would stop playing every few minutes. At first I thought it was random, but it turns out it pauses a few seconds after the 2- and 7-minute marks on the clock. It doesn't matter when I hit play: at xxx:x2:?? and xxx:x7:?? the song will stop playing (I haven't taken the time to figure out the exact second). If I hit play again, it starts right up until the next 2- or 7-minute mark.
I've restarted my phone, cleared cache, reinstalled Pandora - whether playing songs online or offline, the behavior doesn't change. I don't have any issue playing songs through Google Music.
I'm not sure if this is an issue with Pandora or another app on my phone - any thoughts on how to further troubleshoot this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987803441.95/warc/CC-MAIN-20191022053647-20191022081147-00529.warc.gz
|
CC-MAIN-2019-43
| 800 | 4 |
http://wiki.sugarlabs.org/go/Activities/Crikey
|
code
|
The Crikey activity - available at http://activities.sugarlabs.org/en-US/sugar/addon/4493 - is a modified version of Sugar's default Measure activity.
The most significant changes are:
- Activity starts in sensor-reading mode
- Input is slowed down to 1/100th the speed, so you can view changes to a sensor's input over a few seconds instead of a fraction of a second (a sensor reader, not an oscilloscope)
- Input is scaled to fit inside the graph window (see the sketch after this list)
- Input is inverted so that temperature and light sensors' input is more intuitive (higher values on the graph correspond to greater temperature and light)
- You can record and/or set an audio alarm for when the sensor's value reaches the maximum or minimum
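The scaling and inversion described above amount to one small transform. A minimal Python sketch of mapping raw samples into a graph window (not the activity's actual code):
def scale_and_invert(samples, window_height):
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1  # guard against flat input
    # Normalize to 0..1, invert so larger readings plot higher on screen
    # (row 0 is the top of the window), then scale to pixel rows.
    return [int((1 - (s - lo) / span) * (window_height - 1)) for s in samples]

print(scale_and_invert([20, 25, 30], 100))  # [99, 49, 0]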
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00298.warc.gz
|
CC-MAIN-2021-04
| 738 | 8 |
http://consumerist.com/tag/sponsorship/
|
code
|
Imagine giving public transit directions to your urban home in the future. “Oh, yeah, you take the Target Red Line, transfer at Comcast Station to the Apple Gray Line headed Fox Sports Westbound, and finally get off at Taco Bell Station.” Seem crazy? Well, you have to name transit stations something, and both Metra and the Chicago Transit Authority are exploring the idea of selling naming rights to stations. They’re not the first city to do this. [More]
Cash-strapped art museums across the country are turning to an unlikely source for new exhibitions: Banks. According to a story in the New York Times, Bank of America, Chase, and a number of other global entities have put together traveling art exhibits and are offering them to museums across the country.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928423.12/warc/CC-MAIN-20150521113208-00068-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 770 | 2 |
https://slashdot.org/users2.pl?uid=576352&view=userhomepage&startdate=201212m
|
code
|
If I read the article right, they don't know if it is directional or not, or how bad the discharge could get.
I agree with the cautions on trusting an instructor, yet at the same time a student is not a good judge either. If I am learning something for the first time, how am I to know that what I've been taught is good until I have a chance to put it to use?
This is why most universities have an accrediting body. That body audits the school to be sure it is offering a sensible curriculum and that the faculty are qualified to teach the material. I think it is safe to assume that some of these online schools will eventually apply for some form of accreditation (if they haven't already). That process will flush out the kinds of problems identified in this particular online class.
...when fits of creativity run strong, more than one programmer or writer has been known to abandon the desktop for the more spacious floor. - Fred Brooks, Jr.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608642.30/warc/CC-MAIN-20170526051657-20170526071657-00490.warc.gz
|
CC-MAIN-2017-22
| 952 | 4 |
http://m.dlxedu.com/m/askdetail/3/ef8b974ca4386187ad8d668b5b123251.html
|
code
|
I'm working on Ubuntu 10.04 with GCC. I have a binary file with my own magic number. When I read the file, the magic number is not the same. The streams seem to be correct.
Writing magic number :
chfile.open(filename.c_str(), std::fstream::binary | std::fstream::out);
chfile << (unsigned char)0x02 << (unsigned char)0x46 << (unsigned char)0x8A << (unsigned char)0xCE;
// other input
Reading magic number :
chfile.open(filename.c_str(), std::fstream::binary | std::fstream::in);
unsigned char a,b,c,d;
chfile >> a;
chfile >> b;
chfile >> c;
chfile >> d;
printlnn("header must : " << (int)0x02 << ' ' << (int)0x46 << ' ' << (int)0x8A << ' ' << (int)0xCE); // macro for debugging output
printlnn("header read : " << (int)a << ' ' << (int)b << ' ' << (int)c << ' ' << (int)d);
When I use 02 46 8A CE as the magic number, it's alright (as the output says):
header must : 2 70 138 206
header read : 2 70 138 206
but when I use EA 50 0C C5, then the output is:
header must : 234 80 12 197
header read : 234 80 197 1
and the last 1 is a legitimate value for the next input. So why do they differ, and how do I fix this?
In the second case, operator>> is skipping over the character value 12. The stream treats 12 as whitespace (it is 0x0C, the form feed character) and skips it while searching for the next valid character. Try using an unformatted input operation (like istream::read() or istream::get()) instead, e.g.:
chfile.read(reinterpret_cast<char*>(&c), 1); // reads the byte even when it looks like whitespace
More generally, you shouldn't use >> with binary files; those operators are for formatted reading and writing. In particular, they do special handling of whitespace such as 0x0C (i.e. form feed), which makes them unsuitable for binary I/O.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202510.47/warc/CC-MAIN-20190321092320-20190321114320-00230.warc.gz
|
CC-MAIN-2019-13
| 1,508 | 28 |
https://www.genkiyooka.com/2006/03/real-video-quality.html
|
code
|
Checking out the new Real Player, along with the Real Guide. At least RealVideo looks better than Flash. DVD quality? Not!
The new player and guide are looking pretty good. Just don't click on anything! The RealPlayer jumps around on the screen like it's got a live carp up the leg of its pants...
You can't really browse around the guide while listening in the background. It's impossible to tell what will cause the player to switch titles/tracks. And how long has the iTunes music store been live?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00070.warc.gz
|
CC-MAIN-2023-14
| 502 | 5 |