text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
When I install gsap via npm install gsap and import in index.vue via: <script> import {TweenMax} from "gsap"; I can reference TweenLite in a method: methods: { draw() { TweenLite.to("#c1", 1, {opacity:0.5, rotation:45}); But when I try to reference TweenMax, I get an error TweenMax.to("#c1", 1, {opacity:0.5, rotation:45}); Vue warn]: Error in event handler for "click": "TypeError: Cannot read property 'to' of undefined" Any idea why TweenMax is not resolved? It looks like all the modules are available under node_modules/gsap… confusing… Thanks! if I use: import TweenMax from 'gsap' it works… maybe a babel or webpack thing…
https://forum.quasar-framework.org/topic/2241/gsap-tweenmax/1
CC-MAIN-2021-31
en
refinedweb
Coauthors: Jeremy Lewi (Google), Josh Bottum (Arrikto), Elvira Dzhuraeva (Cisco), David Aronchick (Microsoft), Amy Unruh (Google), Animesh Singh (IBM), and Ellis Bigelow (Google). On behalf of the entire community, we are proud to announce Kubeflow 1.0, our first major release. Kubeflow was open sourced at Kubecon USA in December 2017, and during the last two years the Kubeflow Project has grown beyond our wildest expectations. There are now hundreds of contributors from over 30 participating organizations. Kubeflow?s goal is to make it easy for machine learning (ML) engineers and data scientists to leverage cloud assets (public or on-premise) for ML workloads. You can use Kubeflow on any Kubernetes-conformant cluster. With 1.0, we are graduating a core set of stable applications needed to develop, build, train, and deploy models on Kubernetes efficiently. (Read more in Kubeflow?s versioning policies and application requirements for graduation.) Graduating applications include: - Kubeflow?s UI, the central dashboard - Jupyter notebook controller and web app - Tensorflow Operator (TFJob) and PyTorch Operator for distributed training - kfctl for deployment and upgrades - Profile controller and UI for multiuser management Hear more about Kubeflow?s mission and 1.0 release in this interview with Kubeflow founder and core contributor Jeremy Lewi on the Kubernetes Podcast. Develop, Build, Train, and Deploy with Kubeflow Kubeflow?s 1.0 applications that make up our develop, build, train, deploy critical user journey. With Kubeflow 1.0, users can use Jupyter to develop models. They can then use Kubeflow tools like fairing (Kubeflow?s python SDK) to build containers and create Kubernetes resources to train their models. Once they have a model, they can use KFServing to create and deploy a server for inference. Getting Started with ML on Kubernetes Kubernetes is an amazing platform for leveraging infrastructure (whether on public cloud or on-premises), but deploying Kubernetes optimized for ML and integrated with your cloud is no easy task. With 1.0 we are providing a CLI and configuration files so you can deploy Kubeflow with one command: kfctl apply -f kfctl_gcp_iap.v1.0.0.yamlkfctl apply -f kfctl_k8s_istio.v1.0.0.yamlkfctl apply -f kfctl_aws_cognito.v1.0.0.yamlkfctl apply -f kfctl_ibm.v1.0.0.yaml Jupyter on Kubernetes In Kubeflow?s user surveys, data scientists have consistently expressed the importance of Jupyter notebooks. Further, they need the ability to integrate isolated Jupyter notebooks with the efficiencies of Kubernetes on Cloud to train larger models using GPUs and run multiple experiments in parallel. Kubeflow makes it easy to leverage Kubernetes for resource management and put the full power of your datacenter at the fingertips of your data scientist. With Kubeflow, each data scientist or team can be given their own namespace in which to run their workloads. Namespaces provide security and resource isolation. Using Kubernetes resource quotas, platform administrators can easily limit how much resources an individual or team can consume to ensure fair scheduling. After deploying Kubeflow, users can leverage Kubeflow?s central dashboard for launching notebooks: Kubeflow?s UI for managing notebooks: view and connect to existing notebooks or launch a new one. In the Kubeflow UI users can easily launch new notebooks by choosing one of the pre-built docker images for Jupyter or entering the URL of a custom image. Next, users can set how many CPUs and GPUs to attach to their notebook. Notebooks can also include configuration and secrets parameters which simplify access to external repositories and databases. Training faster with distributed training Distributed training is the norm at Google (blog), and one of the most exciting and requested features for deep learning frameworks like TensorFlow and PyTorch. When we started Kubeflow, one of our key motivations was to leverage Kubernetes to simplify distributed training. Kubeflow provides Kubernetes custom resources that make distributed training with TensorFlow and PyTorch simple. All a user needs to do is define a TFJob or PyTorch resource like the one illustrated below. The custom controller takes care of spinning up and managing all of the individual processes and configuring them to talk to one another: Monitoring Model Training With TensorBoard To train high quality models, data scientists need to debug and monitor the training process with tools like Tensorboard. With Kubernetes and Kubeflow, userscan easily deploy TensorBoard on their Kubernetes cluster by creating YAML files like the ones below. When deploying TensorBoard on Kubeflow, users can take advantage of Kubeflow?s AuthN and AuthZ integration to securely access TensorBoard behind Kubeflow?s ingress on public clouds: // On GCP:{KFNAME}.endpoints.${PROJECT}.cloud.goog/mnist/kubeflow-mnist/tensorboard/// On AWS: No need to `kubectl port-forward` to individual pods. Deploying Models KFServing is a custom resource built on top of Knative for deploying and managing ML models. KFServing offers the following capabilities not provided by lower level primitives (e.g. Deployment): - Deploy your model using out-of-the-box model servers (no need to write your own flask app) - Auto-scaling based on load, even for models served on GPUs - Safe, controlled model rollout - Explainability (alpha) - Payload logging (alpha) Below is an example of a KFServing spec showing how a model can be deployed. All a user has to do is provide the URI of their model file using storageUri: Check out the samples to learn how to use the above capabilities. Solutions are More Than Models A model gathering dust in object storage isn?t doing your organization any good. To put ML to work, you typically need to incorporate that model into an application ? whether it?s a web application, mobile app, or part of some backend reporting pipeline. Frameworks like flask and bootstrap make it easy for data scientists to create rich, visually appealing web applications that put their models to work. Below is a screenshot of the UI we built for Kubeflow?s mnist example. With Kubeflow, there is no need for data scientists to learn new concepts or platforms to deploy their applications, or to deal with ingress, networking certificates, etc. They can deploy their application just like TensorBoard; the only thing that changes is the Docker image and flags. If this sounds like just what you are looking for we recommend: 1. Visiting our docs to learn how to deploy Kubeflow on your public or private cloud. 2. Walking through the mnist tutorial to try our core applications yourself. What?s coming in Kubeflow There?s much more to Kubeflow than what we?ve covered in this blog post. In addition to the applications listed here, we have a number of applications under development: - Pipelines (beta) for defining complex ML workflows - Metadata (beta) for tracking datasets, jobs, and models, - Katib (beta) for hyper-parameter tuning - Distributed operators for other frameworks like xgboost In future releases we will be graduating these applications to 1.0. User testimonials All this would be nothing without feedback from and collaboration with our users. Some feedback from people using Kubeflow in production include: ?The Kubeflow 1.0 release is a significant milestone as it positions Kubeflow to be a viable ML Enterprise platform. Kubeflow 1.0 delivers material productivity enhancements for ML researchers.? ? Jeff Fogarty, AVP ML / Cloud Engineer, US Bank ?Kubeflow?s data and model storage allows for smooth integration into CI/CD processes, allowing for a much faster and more agile delivery of machine learning models into applications.? ? Laura Schornack, Shared Services Architect, Chase Commercial Bank ?With the launch of Kubeflow 1.0 we now have a feature complete end-to-end open source machine learning platform, allowing everyone from small teams to large unicorns like Gojek to run ML at scale.? ? Willem Pienaar, Engineering Lead, Data Science Platform, GoJek ?Kubeflow provides a seamless interface to a great set of tools that together manages the complexity of ML workflows and encourages best practices. The Data Science and Machine Learning teams at Volvo Cars are able to iterate and deliver reproducible, production grade services with ease.?? Leonard Aukea, Volvo Cars ?With Kubeflow at the heart of our ML platform, our small company has been able to stack models in production to improve CR, find new customers, and present the right product to the right customer at the right time.? ? Senior Director, One Technologies ?Kubeflow is helping GroupBy in standardizing ML workflows and simplifying very complicated deployments!? ? Mohamed Elsaied, Machine Learning Team Lead, GroupBy Thank You! None of this would have been possible without the tens of organizations and hundreds of individuals that have been developing, testing, and evangelizing Kubeflow. An Open Community We could not have achieved our milestone without an incredibly active community. Please come aboard! - Join the Kubeflow Slack channel - Join the kubeflow-discuss mailing list - Attend a weekly community meeting - If you have questions, run into issues, please leverage the Slack channel and/or submit bugs via Kubeflow on GitHub. Thank you all so much ? onward!
https://911weknow.com/kubeflow-1-0-cloud-native-ml-for-everyone
CC-MAIN-2021-31
en
refinedweb
Get It Done in 5 seconds! Are you bored of doing same stuff again? Feeling your life is just doing the same thing over and over again? Here is the thing, today I am going to introduce a tool to automate your BORING stuff — Python. Python is perhaps the most easiest language to learn. Because of your acquired Python skill, you will be able not only to increase your productivity, but also focus on work which you will be more interested in. Let’s get started! I will use an example, paper trading in Singapore stock market as an illustration on how automation could be done. Paper trading allow you to practice investing or trading using virtual money before you really put real money in it. This is a good way to start as to prove whether your strategy works. This is the agenda which I will be sharing: Part 1 — Input the stock code and amount which you want trade in a text file. Part 2 — How to do Web Scraping on your own, the full journey. Part 3 — Clean and tabulate data. Part 4— Output the result into a csv or excel file. Follow the whole journey and you will notice how simple it is to automate your boring stuff and to update your price IN 5 Seconds. Part 1— Input the stock code and amount which you want trade in a text file. Launch a new text file, enter the stock code and the price you will buy given the particular stock, separated by a comma. Launch a new text file, enter the stock code and the price you will buy given the particular stock, separated by a comma as shown Part 2 — How to do Web Scraping on your own, the full journey This is a snapshot of the SGX website. I am going to illustrate how to scrape all trading information contain in this table. Do open a google chrome, right click on the website and you will be able to see the below snapshot. Click on the inspect button, then click on the network tab (top right corner of the below snapshot as highlighted in purple bracket). Next, click on the row as highlighted in purple box and then choose preview as shown in the highlighted green box, both shown in Snapshot 4 below. So you can see from the Preview, all the data are contained in JSON format. Next, click on the purple box (Headers) in Snapshot 5. What I am doing now, is to inspect what elements I should put in to scrape data from this page. From Snapshot 5 above, you will be able to see Request URL, which is the url you need to put in the request part later. Due to encoding issue, “%2c” in the Request URL will be encoded to “,”. If you are interested in encoding, view this link for more information. Now let’s prepare the required information for you to send a proper request to the server. Part 1 Request Url After changing all the “%2c” to “,”, the request url will turn out to be this link below Part 2 Headers Request header is a component of a network packet sent by a browser or client to the server to request for a specific page or data on the Web server. Referring to the purple box in Snapshot 6 , this is the header part which you should put in when you are scraping the website. {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Origin": "", "Referer": ""} Now let’s put everything together as shown in the gist below. import requests HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Origin": "", "Referer": ""} # Start downloading stocks info from sgx req = requests.get(", headers=HEADERS) Part 3— Clean Data Till now you will have the response in JSON format. we will use Python pandas library to clean the data. First, load in the stock code which you fill in earlier and clean it. with open('selected.txt') as f: selected_sc = f.readlines() selected_sc = [x.replace('\n', '') for x in selected_sc] portfolio = {x.split(',')[0]: float(x.split(',')[1]) for x in selected_sc} Then, load the scraped data into JSON object, then change it to python pandas object. data = json.loads(req.text)['data'] df = pd.DataFrame(data['prices']) Next, rename the column name to be easier to understand. df = df.rename( columns={'b': 'Bid', 'lt': 'Last', 'bv': 'Bid_Volume', 'c': 'Change', 'sv': 'Ask_volume', 'h': 'High', 'l': 'Low', 'o': 'open', 'p': 'Change_percent', 's': 'Ask', 'vl': 'Volume', 'nc': 'Stock_code'}) Finally, filter the interested stock code which you want to invest or trade in and then calculate the price difference. df = df[df['Stock_code'].isin(portfolio.keys())][['Stock_code', 'Last']] df['bought_price'] = df['Stock_code'].map(portfolio) df['percentage_changes'] = (df['Last'] - df['bought_price'])*100 df['percentage_changes'] = df['percentage_changes'].apply( lambda x: '{0:.2f}%'.format(x)) Part 4 — Output the result in a csv or excel file. Save the data to csv file and 🎉WE ARE OFFICIALLY DONE! 🎉 df.to_csv('reseult.csv', index=False) Below is the snapshot of the csv file: Final Thought I am currently working as a Data Scientist, and what I can inform you is that crawling is still very important. Thank you for reading this post. Feel free to leave comments below on topics which you may be interested to know. I will be publishing more posts in future about my experiences and projects. About Author Low. Source: towardsdatascience
https://learningactors.com/get-rid-of-boring-stuff-using-python/
CC-MAIN-2021-31
en
refinedweb
Let’s take a look at the pending changes to both the Java language and the core libraries as we eagerly await Java 9’s release. Below, I have provided some of the most important core language enhancements for JDK 9.0.The objective of this article is to introduce you to the new features of Java SE 9. This includes mostly conceptual introduction of the features. These are the almost finalized features, that have been accepted and officially announced by Oracle. Java 9 is scheduled for release by about the end of July 2017. SHORT TITLE OF THE FEATURE SUB-HEADING / AREA JEP 261: Module System / Module System in JDK 9 JEP 200: The Modular JDK / Module System in JDK 9 JEP 220: Modular Run-Time Images / Module System in JDK 9 JEP 260: Encapsulate Most Internal APIs / Module System in JDK 9 JEP 223: New Version-String Scheme / Changes in JDK 9 Enable or Disable Web Deployment with Installer’s UI / JDK 9 Installer JEP 158: Unified JVM Logging / Tools in JDK 9 JEP 214: Remove GC Combinations Deprecated in JDK 8 / Tools in JDK 9 JEP 222: jshell: The Java Shell (Read-Eval-Print Loop) / Tools in JDK 9 JEP 224: HTML5 Javadoc / Tools in JDK 9 JEP 228: Add More Diagnostic Commands / Tools in JDK 9 JEP 231: Remove Launch-Time JRE Version Selection / Tools in JDK 9 JEP 240: Remove the JVM TI hprof Agent / Tools in JDK 9 JEP 241: Remove the jhat Tool / Tools in JDK 9 JEP 245: Validate JVM Command-Line Flag Arguments / Tools in JDK 9 JEP 247: Compile for Older Platform Versions / Tools in JDK 9 JEP 282: jlink: The Java Linker / Tools in JDK 9 JEP 219: Datagram Transport Layer Security (DTLS) / Security in JDK 9 JEP 244: TLS Application-Layer Protocol Negotiation Extension / Security in JDK 9 JEP 249: OCSP Stapling for TLS / Security in JDK 9 JEP 246: Leverage CPU Instructions for GHASH and RSA / Security in JDK 9 JEP 273: DRBG-Based SecureRandom Implementations / Security in JDK 9 JEP 229: Create PKCS12 Keystores by Default / Security in JDK 9 JEP 287: SHA-3 Hash Algorithms / Security in JDK 9 Deprecate the Java Plug-in / Deployment in JDK 9 Enhanced Java Control Panel / Deployment in JDK 9 JEP 275: Modular Java Application Packaging / Deployment in JDK 9 JEP 289: Deprecate the Applet API / Deployment in JDK 9 JEP 213: Milling Project Coin / Java Language in JDK 9 JEP 221: Simplified Doclet API / Javadoc in JDK 9 JEP 224: HTML5 Javadoc / Javadoc in JDK 9 JEP 225: Javadoc Search / Javadoc in JDK 9 JEP 261: Module System / Javadoc in JDK 9 JEP 165: Compiler Control / VM in JDK 9 JEP 197: Segmented Code Cache / VM in JDK 9 JEP 276: Dynamic Linking of Language-Defined Object Models / VM in JDK 9 JEP 271: Unified GC Logging / JVM Tuning in JDK 9 JEP 248: Make G1 the Default Garbage Collector / JVM Tuning in JDK 9 JEP 102: Process API Updates / Core Libraries in JDK 9 JEP 193: Variable Handles / Core Libraries in JDK 9 JEP 254: Compact Strings / Core Libraries in JDK 9 JEP 264: Platform Logging API and Service / Core Libraries in JDK 9 JEP 266: More Concurrency Updates / Core Libraries in JDK 9 JEP 268: XML Catalogs / Core Libraries in JDK 9 JEP 269: Convenience Factory Methods for Collections / Core Libraries in JDK 9 JEP 274: Enhanced Method Handles / Core Libraries in JDK 9 JEP 277: Enhanced Deprecation / Core Libraries in JDK 9 JEP 285: Spin-Wait Hints / Core Libraries in JDK 9 JEP 290: Filter Incoming Serialization Data / Core Libraries in JDK 9 JEP 259: Stack-Walking API / Core Libraries in JDK 9 JEP 236: Parser API for Nashorn / Nashorn in JDK 9 JEP 292: Implement Selected ECMAScript 6 Features in Nashorn / Nashorn in JDK 9 JEP 251: Multi-Resolution Images / Client Technologies in JDK 9 JEP 256: BeanInfo Annotations / Client Technologies in JDK 9 JEP 262: TIFF Image I/O / Client Technologies in JDK 9 JEP 263: HiDPI Graphics on Windows and Linux / Client Technologies in JDK 9 JEP 272: Platform-Specific Desktop Features / Client Technologies in JDK 9 JEP 283: Enable GTK 3 on Linux / Client Technologies in JDK 9 JEP 267: Unicode 8.0 / Internationalization in JDK 9 JEP 252: CLDR Locale Data Enabled by Default / Internationalization in JDK 9 JEP 226: UTF-8 Properties Files / Internationalization in JDK 9 List of Changes in JDK 9 As a developer, the ones that impact our day-to-day lives mostly fall in the areas of: Java language in JDK 9 Core libraries in JDK 9 Official Oracle Release: July 2017 Even though others are also very important, they can be studied, researched, and mastered if and when the need arises. Before we get into the Java 9 changes, let us touch on one aspect of Java 8 that we need to know more about. Default Interface Methods Whenever there is existing or legacy code that has Interfaces that require the addition of new methods, it causes breakage of existing classes that inherit/implement from this Interface, unless the implementation for each of these added methods is provided in the classes. This does not make for very maintainable code. Even though a good practice, as per SOLID and other OO paradigms, is to provide an interface without any implementation, we need to handle and solve the problem as mentioned above. This is where Default Interface Methods come in. import java.util.List; public interface LegacyPublicInterface { /** * Additional Default Method that Can be Invoked on Each Type of Invoice that Implements the LegacyPublicInterface. * It can also be over-ridden by the Extending Invoice Types. This is an Example Usage and Benefit of the Java/JDK 8 * Default Interface feature. * @param items */ default void checkStatusOnEachItem (List<String>... items) { for(String item : items) { if(item.startsWith("OOS_")) { items.remove(item); } } return; } } From the example above, the LegacyPublicInterface in an existing application is already extended by multiple invoice types (for example, in an inventory system). But as per changing business requirements, re-engineering effort requires that each of the invoices have a method to invalidate or remove an item marked with “OOS”. Given such a problem, prior to Java 8, we would have had to introduce a new method declaration in the interface and then require that each of the implementing classes implement their own logic for handling this. With Default Interfaces, the task becomes very simple (the code is now more maintainable and extensible and requires much less effort to change). With the introduction of this feature of Default Methods, the following are the possibilities: Use the default method(s) without breaking existing functionality (best use). The implementing class can choose to override these default methods. Abstract classes can be provided over the Interfaces to override implementation. So, let’s get further into each of these changes in Java 9, one-by-one. Java Language in JDK 9 JEP 213: Milling Project Coin @SafeVarargs were introduced to suppress such warnings on unchecked or unsafe operations. By using this, developers are signaling to the compiler that they have made sure that there will be no Heap Pollution (such as unsafe forEach operations) caused. Prior to Java 9, @SafeVarargs were allowed on non-overridable methods such as in static methods, final instance methods, and constructors. Note that the annotation will throw an error if it is used in fixed arity methods. In Java 9, @SafeVarargs can be used on private instance methods. Underscore as a Variable Name Is not Legal Anymore Using only the _ (underscore character) as a variable name is not legal anymore.This is because it is marked as a reserved keyword from Java 1.8 (But causing compilation failure only in Java 1.9). This may cause some issues when compiling legacy source code, especially that which had a necessity to denote some specific resource or entity using the _ (underscore). It may have to be rewritten and may have many related ramifications. Private Interface Methods Have Been Introduced. Allow Effectively final variables to be used as Resources in Try-With-Resources Up to Java 8, every variable that had to be used within Try-with-Resources statements needs to be declared within the try statement. Only then can it be used within the try block. This is a limitation for. Allow Diamond with Anonymous Classes If the Inferred Type’s Argument Type is Denotable Up to and Super along with Wildcard types in Generics. These are usually inferred by the compiler. So, as long as the compiler identifies that the Argument type of Inferred type is Denotable — you can use the Diamond Operator in conjunction with Anonymous Inner Classes. Core Libraries in JDK 9. JEP 193: Variable Handles Java’s Concurrent Package (java.util.concurrent.atomic) provide all atomic types for performing atomic operations. Apart from this, Unsafe Operations (sun.misc.unsafe), such as creating objects without calling the constructor used in Java low-level programming need to be hidden from the outside world as per JEP 260: Encapsulate Most Internal APIs. This has led to the creation of a new abstract class type named VarHandle. This will allow a developer to assign different types to the same reference (dynamically typed references). It can also take care of performing atomic operations on the held variable, including compare and swap (set or exchange) operations. It also provides memory fencing operations, to order the in-memory representation of the object, by providing finer grain control. JEP 254: Compact Strings Although this has no external ramification to a developer in terms of syntax or semantics change, it may impact the way we design for memory and performance. The current UTF-16 representation uses 2 Bytes for Storage. Most of the string contain characters that are only Latin-1 in nature. The Latin-1 characters require only 1 Byte for Storage. With Java 9, String storage has been modified to start with an Additional Encoding Flag. This flag indicates whether it contains ISO-8859-1/Latin-1 characters or the UTF-16 characters. As per the official word, it has led to an improved usage of memory and efficient GC, but with some loss in performance at peak loads. JEP 264: Platform Logging API and Service This defines a minimal logging API for the platform and also provides a Service Interface for the consumers of these messages. The implementation for this interface can be provided by logging libraries or application itself to route the messagesappropriately to the application specific logging framework or implementation being used such as [Log4J or SLF4J]. Whenever an implementation is not available, the run-time switches to Default Java Logging Package. (java.logging). This also allows us to detect Bootstrap issues. JEP 266: More Concurrency Updates There is a continually evolving thought process on concurrency. With Java 9, An interoperable Publish-Subscribe framework has been provided. Also, enhancements to the CompletableFuture object has been provided, along with numerous implementation improvements over JDK 8, including Javadoc rewording. An interoperable Publish-Subscribe framework, also known as Reactive Streams, allow two-way communication based on the traffic or volume at each end of the stream, Publisher and Subscriber. The enhancements to the CompletableFuture API include methods that allow a Future to complete with a value or with an exception, after a Timeout period. Also, a Delayed Executor has been provided to allow a task to execute after some delay. JEP 268: XML Catalogs An XML catalog is made up of entries from one or more Catalog entry files. A Catalog entry file is an XML file whose document element is Catalog and whose content follows the XML Catalog DTD defined by OASIS. With Java 9, there is a public API for XML Catalog management. The external references in the catalog needed to be resolved repetitively. But with this new API, an API containing CatalogManager, Catalog, and CatalogResolver can be used to reduce or eliminate these external invocations, when the catalog resource already exists locally. It creates a Local Catalog and will allow JAXP-based processors to resolve to this local catalog. JEP 269: Convenience Factory Methods for Collections This addition makes it convenient for developers to create Immutable Collections out of existing collections,. JEP 274: Enhanced Method Handles A method handle is a typed, directly executable reference to an underlying method, constructor, field, or similar low-level operation, with optional transformations of arguments or return values. These transformations are quite general and include such patterns as conversion, insertion, deletion, and substitution. JEP 277: Enhanced Deprecation With an eye on maintainable and more informative code, developer-defined deprecation now allows us to mark deprecation with additional elements of information like forRemoval and since. The forRemoval allows us to mark that this item may be removed in future versions of Java, and since provides information about when it was first introduced. JEP 285: Spin-Wait Hints For multi-threading applications, this brings in some performance improvements under Busy-Waiting or Spin-Waiting conditions. Usually, Busy-Waiting is done for synchronization of some state of the object between two or more invokers – Waiting for a condition to occur before processing starts or continues. Thread.onSpinWait() has been introduced as a static method in the Thread class and can be optionally called in Busy-Waiting loops. This will allow the JVM to issue processor instructions on some system architectures to improve reaction time in such Spin-Wait loops and also reduce the power consumed by the core thread or hardware thread. This benefits the overall power consumption of a program and possibly allows other cores or hardware threads to execute at faster Speeds within the same power consumption envelope. JEP 290: Filter Incoming Serialization Data This feature is related to the addition of filters at the serialization incoming streams to improve security and robustness. The core mechanism is a filter interface implemented by serialization clients and set on an ObjectInputStream. The filter interface methods are called during the deserialization process to validate the classes being deserialized, the sizes of arrays being created, and metrics describing stream length, stream depth, and number of references as the stream is being decoded. The filter returns a status to accept, reject, or leave the status undecided. JEP 259: Stack-Walking API Prior to Java 9, the way to access Stack Trace was very limited and provided the entire dump or stack information at once. This was inefficient and did not allow any direct way of filtering data. With Java 9, a Lazy StackWalker API has been introduced. This will allow us to fetch data based on filtering conditions and is more efficient. Full Speed Ahead, with Java 9! Ready Reference on DZone You may have noticed, that unlike my other articles, I have not provided code samples against each of the new features. The articles that I refer to here include my series of articles on the history of Java SE releases, right from Java SE 5. I will extend this current article that you are reading with detailed code samples (as these are new features) some time later. I will publish the same under ‘SKP Java/Java EE Gotchas’ series of articles. Until then, I request you to read the above article again and let all the concepts settle in.
https://learningactors.com/java-se-9-whats-new/
CC-MAIN-2021-31
en
refinedweb
This topic describes how a flow in Serverless workflow schedules reserved resource functions or functions with specified versions. Overview In actual production scenarios, functions scheduled by the task flow may frequently change due to changes in service scenarios. Therefore, you must avoid unexpected actions caused by the changes and control the stability of the task flow. In the following scenarios, functions of a fixed version will help in Serverless workflow task steps: - Flow A orchestrates multiple functions f1, f2, and f3. The same task must execute the same version of the functions. For example, when flow A is under execution, function f1 has been executed, but the function is updated at this time. In this case, the latest versions of functions f2 and f3 may be executed in flow A, which may cause unexpected results. Therefore, the version of the functions that the flow executes must be fixed. - A function needs to be rolled back. If you find that the flow failed due to a new change after the function is launched, you must roll back the flow to the previous fixed version. - The function alias is used to call a reserved resource function, reduce the function cold start time, and Best practice for cost optimization. Functions of different Introduction to versions that are deployed in Function Compute can efficiently support continuous integration and release in similar scenarios. The following section uses an example to describe how to use the function alias in the flow to call a reserved resource function. Reserved resource functions depend on the functions of a specified version. You can see this example in scenarios where functions of specified versions are required. Step 1: Create a reserved instance for a function fnf-demoin Function Compute. In this service, create a Python 3 function named provision and release its version and alias to generate a reserved instance. For more information, see Introduction to supplementary examples. Assume that the version of the created function is 1, the alias is online, and a reserved instance is generated. The following code shows the content of the function: import logging def handler(event, context): logger = logging.getLogger() logger.info('Started function test') return {"success": True} Step 2: Create a flow Serverless workflow natively supports the versions and aliases of functions in Function. In the task step of Serverless workflow, enter the default value acs:fc:{region}:{accID}:services/fnf/functions/test in the resourceArn parameter. Based on the function execution rule, the function of the latest version is executed by default. You can release the Version operations or Alias operations and enter acs:fc:{region}:{accID}:services/fnf.{ alias or version}/functions/test in the resourceArn parameter of the task step in the flow to call the function of the specified version. Therefore, define the flow based on the following code: version: v1 type: flow steps: - type: task resourceArn: acs:fc:::services/fnf-demo.online/functions/provision # You can also use the version by defining resourceArn: acs:fc:::services/fnf-demo.1/functions/provision. name: TestFCProvision Step 3: Execute the reserved function and check the execution result in the console or on a CLI - Execute the flow. The following figure shows the execution details before the reserved mode is used. - The following figure shows the execution details after the reserved mode is used. As shown in the figures, after the reserved mode is used, the flow execution time is reduced from 500 ms to 230 ms.
https://www.alibabacloud.com/help/doc-detail/194652.htm
CC-MAIN-2021-31
en
refinedweb
Establishing a Connection The objects available within our connector are accessible from the "cdata.DB2" module. In order to use the module's objects directly, the module must first be imported as below: import cdata.DB2 as mod From there, the connect() method can be called from the connector object to establish a connection using an appropriate connection string, such as the below: mod.connect("Server=10.0.1.2;Port=50000;User=admin;Password=admin;Database=test") Set the following properties to connect to DB2: - Server: Set this to the name of the server running DB2. - Port: Set this to the port the DB2 server is listening on. - Database: Set this to the name of the DB2 database. - User: Set this to the username of a user allowed to access the database. - Password: Set this to the password of a user allowed to access the database.
https://cdn.cdata.com/help/EDF/py/pg_connectionpy.htm
CC-MAIN-2021-31
en
refinedweb
With RxJS being a prominent member of the Angular framework, you're going to run into it one way or another. If you venture into the world of NGRX for state management, you can't write applications without working with Observables. This is supposed to lead to blazing fast reactive applications, with a predictable direction of flow within your application. Data flows down, events bubble up. This, however, is not always the case. As you throw yourself head first into the world of RxJS, operators and hard to understand docs, you can find yourself in a world of performance trouble and memory leaks. In the following examples I'll outline some patterns useful and harmful when working with data in your components. The premise of the examples is simple - a list of data fetched from store, and the ability to highlight an item, and display the count. Disclaimer: the following 3xamples are handwritten in markdown, and may contain syntax errors and may not run directly. They are for illustration purposes only Using .subscribe(...) One of the first patterns I came across when I started this, was the .subscribe() method. It seems harmless to just subscribe to the observable and assign the value to a private or public property: ; constructor(private store: Store<any>) { } ngOnInit() { this.store.select(selectManyItems).subscribe(items => { this.manyItems = items; this.numberOfItems = items.lenght; }); this.store.select(selectedItem).subscribe( item => this.selectedItem = item ) } public select(item) { this.store.dispatch(selectItem(item)); } } This approach may seem fine, but it's a disaster waiting to happen. Since subscriptions like this are not automatically unsubscribed, they will continue to live on, even if MyComponent is disposed and destroyed. If you really have to use .subscribe(), you have to unsubscribe manually! Using .subscribe(...) and takeUntil(...) One way of achieving this would be to keep a list of all subscriptions, and manually unsubscribing those in ngOnDestroy(), but that's also prone for error. It's easy to forget a subscription, and then you're in the same situation as above. We can achieve a proper unsubscription by introducing the takeUntil(...) operator for our subscriptions. ; destroyed$ = new Subject(); constructor(private store: Store<any>) { } ngOnInit() { this.store.select(selectManyItems) .pipe(takeUntil(this.destroyed$)) .subscribe(items => { this.manyItems = items; this.numberOfItems = items.lenght; }); this.store.select(selectedItem) .pipe(takeUntil(this.destroyed$)) .subscribe( item => this.selectedItem = item ); } ngOnDestroy() { this.destroyed$.next(); } public select(item) { this.store.dispatch(selectItem(item)); } } In this example, we're still setting our private and public properties, but by emitting on the destroyed$ subject in ngOnDestroy() we make sure the subscriptions are unsubscribed when our component is disposed. I'm not a big fan of the subscribe() method within my Angular components, as it feels like a smell. I just can't rid the feeling that I'm doing something wrong, and that subscribe() should be a last resort of some kind. Luckily Angular gives us some automagical features that can help us handle the observables in a more predictable manner without unsubscribing ourselves. Using the async pipe The async pipe takes care of alot of the heavy lifting for us, as it takes an Observable as the input, and triggers change whenever the Observable emits. But the real upside with async is that it will automatically unsubscribe when the component is destroyed. @Component( selector: 'my-component', template: ` <div>Number of items: {{ numberOfItems$ | async }}</div> <ul> <li [class.selected]="(selectedItem$ | async) === item" (click)="select(item)" * {{ item.name }} </li> </ul> `, changeDetectionStrategy: ChangeDetectionStrategy.OnPush ) export class MyComponent { manyItems$: Observable<{ [key: string]: SomeObject }>; numberOfItems$: Observable<number>; selectedItem$: Observable<SomeObject>; constructor(private store: Store<any>) { } ngOnInit() { this.manyItems$ = this.store.select(selectManyItems); this.selectedItem$ = this.store.select(selectedItem); this.numberOfItems$ = this.manyItems$.pipe( map(items => items.length) ); } public select(item) { this.store.dispatch(selectItem(item)); } } Now this seems better. But what we gained in protection against memory leaks, we've lost in the readability in the template. The template is soon riddled with async pipes all over the place, and you'll end up writing lots of *ngIf="myItems$ | async as myItems" to cater for complexity. Though this is just fine in small templates, it can grow and become hard to handle. Another caveat with this approach is that you might require combining, zipping, merging your Observables, leading to RxJS spaghetti which is extremely hard to maintain, let alone read. (If you're using NGRX like in the example code, this can also be avoided by properly mastering selectors!) What I've moved towards in my ventures is container components. Container components By using container/presentation components (dumb/smart, or whatever you'd like to call them), we can separate the conserns even more. Leveraging the async pipe once again, we can keep our Observable alone in our container component, letting the child component do what needs to be done. @Component( selector: 'my-container', template: `<child-component (selectItem)="select(item)" [items]="manyItems$ | async"></child-component>` ) export class MyContainerComponent implements OnInit { manyItems$: Observable<{ [key: string]: SomeObject }> selectedItem$: Observable<SomeObject>; constructor(private store: Store<any>) { } ngOnInit() { this.manyItems$ = this.store.select(selectManyItems); this.selectedItem$ = this.store.select(selectedItem); } select(item) { this.store.dispatch(selectItem(item)); } } Our container component now only contains the selectors from our store, and we don't have to care about anything but to pass that on to our child component with the async pipe. Which makes our child component extremely light weight. @Component( selector: 'child-component', template: ` <div>Number of items: {{ numberOfItems }}</div> <ul> <li [class.selected]="isSelected(item)" (click)="selectItem.emit(item)" * {{ item.name }} </li> </ul> `, changeDetectionStrategy: ChangeDetectionStrategy.OnPush ) export class ChildComponent { @Input() manyItems: SomeObject[]; @Input() selectedItem: SomeObject; @Output() selectItem = new EventEmitter<SomeObject>(); public get numberOfItems() { return this.manyItems?.length ?? 0; } public isSelected(item) { this.selectedItem === item; } } Imporant note: Remember to always use ChangeDetection.OnPush! This causes Angular to run changedetection only when the reference values of your Inputs change, or when an Output emits. Otherwise evaluating methods and getters in your template will be a major performancehit! Our child component now have all the same functionality as all the other examples, but the template has better readability, and the component has no dependencies. Testing this component with plain Jasmine specs is now lightning fast, and simple to do, without TestBeds, mocks or other boilerplate testsetup. The added benefit here, is that you now have a ChildComponent that is completely obvlivious as to how it gets the data it's supposed to display, making it reusable and versatile. Another bonus is that you don't have to introduce new observables with maps and filters, in order to do further work with your data: @Component( selector: 'blog-post-list-component', template: ` <div>Number of blogposts: {{ numberOfBlogposts }}</div> <div>Number of published blogposts : {{ numberOfPublishedBlogPosts }}</div> <ul> <li [class.selected]="isSelected(post)" (click)="selectPost.emit(post)" * {{ post.title }} </li> </ul> `, changeDetectionStrategy: ChangeDetectionStrategy.OnPush ) export class BlogPostListComponent { @Input() blogPosts: BlogPost[]; @Input() selectedPost: BlogPost; @Output() selectPost = new EventEmitter<BlogPost>(); public get numberOfBlogPosts() { return this.blogPosts?.length ?? 0; } public get numberOfPublishedBlogPosts() { return (this.blogPosts || []).filter(blogPost => blogPost.published); } public isSelected(post) { this.selectedPost === post; } } Code is readable, and easy to unittest. Closing notes Obviously this is an extremely simplified example, but believe me, as complexity grows, there are much to be gained by doing handling your observables in a consistent and safe manner from the get-go. RxJS is immensely powerful, and it's easy to abuse. With all the different possibilites at your hand, it's just one more operator in my .pipe(...) right? Well, things quickly get out of hand, and all of a sudden you have a mess of operators and hard to follow code. Keep it simple, refactor and decompose, and you'll be much happier when you revisit your code down the line. Discussion (0)
https://dev.to/yngvebn/managing-data-from-rxjs-observables-in-angular-51d
CC-MAIN-2021-31
en
refinedweb
Hello, I am trying to understand parameter groups in Conv2d. I run the following code, import torch import torch.nn as nn m = nn.Conv2d(3,12,3,1,1,groups=3) fake_in = torch.randn(1,3,244,244) print(f"input: {fake_in.size()}") out = m(fake_in) print(m.weight.size()) # torch.Size([12, 1, 3, 3]) print(out.shape) # torch.Size([1, 12, 244, 244]) The weight shape is torch.Size([12, 1, 3, 3]), does this weight includes 3 groups? and each group will generate ([1,4,244,244]) in this example?
https://discuss.pytorch.org/t/how-to-understand-groups-parameter-of-conv2d/127331
CC-MAIN-2021-31
en
refinedweb
From: scleary_at_[hidden] Date: 2000-10-10 13:56:55 > I haven't written anything for this yet, but I was thinking of making a > certain kind of a numeric adapter class. It should support all the > arithmetic operations the base type supports. > > The problem is the modulus operators. They should be defined only if the > base type is integral. I can check this from the base type's > std::numeric_limits implementation. The value should be a compile-time > constant, so I can use it as a template parameter. > > So I could make one version of my adapter class that includes modulus > operations, and one that doesn't. But I want the end user to see only one > adapter class, which would secretly inherit(?) from one of the specialized > adapter classes (based on the numeric-limit compile-time constant). How > would I do it, or do I have to settle for a single adapter class without > modulus operations? > > (I know this technique can work for template functions. For example, the > iterator std::distance function could specialize for different iterator > types by secretly calling another function that takes the iterator's type > tag as an extra argument. I'm asking for a similar technique for template > classes.) I think I know what you're getting at here. If I misunderstand you, let me know. You can partially specialize a template base class: namespace details { template <typename Derived, bool IsIntegral> struct numeric_adapter_integral_base; template <typename Derived> struct numeric_adapter_integral_base<Derived, true> { // Replace with the correct function signature(s) // for any integer-specific operations // (I'm not sure what your adapter does) Derived & operator%=(const Numeric & rhs) { ... // whatever // you can access the derived type as: // static_cast<Derived &>(*this); } }; template <typename Derived> struct numeric_adapter_integral_base<Derived, false> { /* empty: no additional operations for non-integrals */ }; } // namespace details template <typename Numeric> class numeric_adapter: public details::numeric_adapter_integral_base< numeric_adapter<Numeric>, std::numeric_limits<Numeric>::is_integer > { ... // any generic numeric operations }; This example uses the Barton & Nackman technique. Read the Boost documentation for the "operators" library, which deals with several of the same problems at: or my page on the Barton/Nackman technique at: -Steve Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2000/10/5597.php
CC-MAIN-2021-31
en
refinedweb
Despite monastic rules that forbid it, monk Keo Heng has smoked two packs a day for the last 10 years. "When we are stressed or bored, a cigarette is like a good friend that makes our brain clear," said Heng, 30, who lives at Wat Entagnean in Sihanoukville. But like a growing number of smokers, Heng realized that cigarettes gave him nothing and took quite a lot. They were eating away his money and his health - so he decided to quit. While tobacco use in Cambodia is still widespread, the number of quitters is on the rise, according to several surveys published recently. One of them, produced by the Adventist Development and Relief Agency, researched monasteries in Phnom Penh and four other provinces and found that 23 percent of the sangha had smoked cigarettes during the previous year, a sharp drop from the 36 percent they found in the same provinces in 2001. The survey results were presented during the second National Workshop on Buddhism and Tobacco Control, held in Phnom Penh May 10 and 11. Other statistics cited at the conference also showed progress for the anti-tobacco lobby. The percentage of male smokers aged 20 years or over fell from 59 percent in 1999 to 54 percent last year, according to a 2004 nationwide survey conducted by the National Institute of Statistics. The research found that the number of adult women who used tobacco products had also dropped 1 percent since 1999 to 6 percent in 2004. The results are encouraging, but health experts know there is still along way to go in reducing the negative effects of smoking in the Kingdom. About 80 percent of children under 13 years old are exposed to secondhand smoke from at least one regular smoker in the family, said Yel Daravuth, national program coordinator of the Tobacco Free Initiative at the World Health Organization (WHO). Daravuth said many people in rural areas believe that smoking homegrown tobacco will not damage their health as much as commercially produced cigarettes. "It is a big confusion," Daravuth said. As well as a gnawing away at smokers' health, cigarettes also take a bite from their pocketbook. The average monthly expenditure on tobacco products per household is 14,000 riel, or $69 million nationwide in 1999, according to one report from the WHO. That's enough to buy 274,304 tons of high quality rice, or build nearly 28,000 big wodden homes. Lim Thaipheang, director of the National Center for Health Promotion estimates that at least 20 cigarette companies operate in Cambodia and advertise in the media. "I am not in favor of the advertising, but we do not have the law to ban them," Thaipheang said. The law on tobacco control was drafted in 2001 and is now under consideration at the Council of Ministers, said Ung Phyrun, secretary of state at the Ministry of Health. He said when the law on tobacco control is approved, tobacco advertising will be banned, its sales near schools and health facilities prohibited, and the tax on its import increased. The World No Tobacco day will be held on May 31 with the theme 'Health Professionals and Tobacco Control'.
https://www.phnompenhpost.com/national/your-money-or-your-life-smokers-give-both
CC-MAIN-2018-34
en
refinedweb
Building Your First Serverless Composition with IBM Cloud Functions A few days ago I blogged about the new Composer functionality for IBM Cloud Functions and OpenWhisk. This is a incredibly cool release and I’m going to try my best to demonstrate it over the next few weeks. In today’s post I’m going to focus on what the process is like. By that I mean, how do I go from idea to actually using it and testing it. This won’t be terribly different from the docs, but I figure it may still be helpful for folks to get an idea of how I’m using it. (And of course, I expect my usage to change over time.) Note that the code I’ll be using for this post will be trivial to the max because I want to focus more on the process than the actual demo. Alright, with that out of the way, let’s start. The Demo As I said, the demo is pretty trivial, but let’s cover it anyway so we have context for what’s being built. The demo will convert a random Ferangi Rule of Acquisition into pig latin. So the logic is: - Select a random rule - Take the text and convert it to pig latin. Building a Serverless Application - Old Way I’ll start off by describing what the process would be prior to the introduction of Composer. To be clear, “old way” isn’t meant to be disparaging in anyway. OpenWhisk has always let me build really cool shit and Composer just makes it even better. First, I’ll build the “select a random rule” action. Here is the code listing with the embedded very long list of rules removed. (You can see the full source code on GitHub - I’ll share that link at the end.) function main(args) { /* This is a very, very long string. Not a good idea. Source: */ let."] let chosen = rules[Math.floor(Math.random() * rules.length)]; return { rule:chosen }; } I created this as an action called safeToDelete/rule. (As a reminder, I use a package called “safeToDelete” to store actions I build for blog posts and the such that do not need to stay alive.) wsk action create safeToDelete/rule rules.js I then tested to ensure it worked: wsk action invoke safeToDelete/rule -b -r And the result is: { "rule": "72. Never let the competition know, what you're thinking." } Next I created a Pig Latin rule, based on this repo from GitHub user montanaflynn: // source: function piglatin(text) { var words = text.split(/\W+/) var piggish = "" for (var i = 0; i < words.length; i++) { var word = words[i] var firstLetter = word.charAt(0) if (word.length > 2) { piggish = piggish + word.substring(1) + firstLetter + "ay " } else { piggish = piggish + word + " " } } return piggish.toLowerCase().trim(); } function main(args) { let result = piglatin(args.input); return { result:result}; } I then pushed it up: wsk action create safeToDelete/pig pig.js And tested: wsk action invoke safeToDelete/pig -b -r -p input "My name is Ray" With this result: { "result": "my amenay is ayray" } Alrighty. So to make the sequence, I have a problem. The output of the rule action is a variable named rule. The input for pig requires a parameter called input. In order to create a sequence, I’ll need a “joiner” action. Here’s the one I built: function main(args) { //remove 1. let text = args.rule.replace(/[0-9]+\. /,''); return { input:text } } Note that actually this does two things. It maps the input as well as modifying the text to remove the number in front of the rule. I pushed this to OpenWhisk like so: wsk action create safeToDelete/pigrule pigrule.js Alright, so the final step is to create the sequence: wsk action create --sequence safeToDelete/ruleToPig safeToDelete/rule,safeToDelete/pigrule,safeToDelete/pig --web true That’s a long command but not too bad. Typically I’d make a shell/bat script so I could automate updating each individual rule and the sequence all in one quick call. I’ll grab the URL like so: wsk action get safeToDelete/ruleToPig --url Which gives me: To test that, just add .json to the end. You can see that here. And finally, a sample result: { result: "evernay ivegay wayaay orfay reefay hatway ancay be oldsay" } I’ll be honest, that’s plain unreadable, but who cares. Let’s move on. Building a Serverless Application - With Composer Alright, so I’m assuming you’ve followed the install instructions already and can safely run fsh in your terminal. The first thing you’ll run into is that Composer uses slightly different terminology. Instead of sequences, you’ll create an app. To be fair, it isn’t a 100% one to one correlation, but I think for now it’s ok to mentally map the two. Next - you’ll define your app in code, in a file. (You can use the graphical shell too but I don’t.) So to start, I’ll make a new file called - pigruleapp.js. This file will contain the instructions that make up my composition. Here’s what I started with: composer.sequence( 'safeToDelete/rule', 'safeToDelete/pigrule', 'safeToDelete/pig' ); Notice I don’t define composer. I don’t have to as the system will handle that for me. All I do is define my logic. In this case, I’m using the sequence feature of composer and defining what to run. Essentially I’ve defined the exact same sequence I used before. (I’m going to make that better in a moment.) To create the app, I’ll use: fsh app create ruleToPigFsh ./pigruleapp.js If I have to make any edits, I’d use fsp app update instead. Next I’ll test it with: fsh app invoke ruleToPigFsh And - it works as expected: { result: "evernay etlay a emalefay in lothescay loudcay ouryay ensesay of rofitpay" } Alright, but let’s kick it up a notch. First, I can visualize my app like so: fsh app preview pigruleapp.js Which gives me this: You can ignore the “not yet deployed” message on top. Basically the shell is letting you know you are viewing a local file and not a deployed app instance. So yes it is technically deployed. Anyway, what’s not visible in the screen shot is that you can mouse over the blue boxes to get details. So for example, mousing over rule shows me action | safeToDelete\rule. You can also double click an item to see the source code. This is handy in case you forget: The JSON view is simply how Composer converts your code into JSON format. Here’s what it did with my code: { "Entry": "action_0", "States": { "action_0": { "Type": "Task", "Action": "safeToDelete/rule", "Next": "action_1" }, "action_1": { "Type": "Task", "Action": "safeToDelete/pigrule", "Next": "action_2" }, "action_2": { "Type": "Task", "Action": "safeToDelete/pig" } }, "Exit": "action_2" } And the Code view is simply my file. Another change that may confuse you are sessions. Instead of an activation, invoking a Composer app creates a session. So you can use fsh session list to see your recent tests. Or my favorite, grab the last one with: fsh session get --last. I freaking love this view. Do note though that the time for this test (21.5 seconds) was a fluke. There’s still some performance tuning going on so this is absolutely not what you would expect normally. The details here are awesome and so easily readable. Here’s the trace: I love how I can see the timings of every step. I’ll remind you again - the totals here are a fluke, not the norm, but you can really see how handy this would be to identify the “pain points” of your applications. The raw tab looks a lot like an activation report: { "duration": 21525, "name": "ruleToPigFsh", "subject": "[email protected]", "activationId": "0f941893b60a4331941893b60a633167", "publish": false, "annotations": [{ "key": "limits", "value": { "timeout": 60000, "memory": 256, "logs": 10 } }, { "key": "path", "value": "[email protected]_My Space/ruleToPigFsh" } ], "version": "0.0.65", "response": { "result": { "result": "aithfay ovesmay ountainsmay of nventoryiay" }, "success": true, "status": "success" }, "end": 1508339185192, "logs": [ "0f941893b60a4331941893b60a633167", "b44c2a1e9f4842408c2a1e9f48924084", "fbd917d800ab4be69917d800ab6be6b8", "0210c42b372242a090c42b372262a018", "4ccd2c65559e410a8d2c65559e410a40", "8504691d89f04df084691d89f0bdf072", "42f3b1096f9d40edb3b1096f9da0ed53" ], "start": 1508339163667, "namespace": "[email protected]_My Space", "originalActivationId": "42f3b1096f9d40edb3b1096f9da0ed53", "prettyType": "session" } Session Flow is where things get even more interesting. It’s basically the same flow chart as you saw in the preview above, but check out what you get on mouse over: In case it is a bit hard to read, you are seeing the output of the action. So this gives you the ability to trace the flow of data and help debug where things could have gone wrong. Also note you can click the green bubble for a more clear result. For example, if I clicked the green “rule” box I can see the output from the first item in the sequence: It’s a bit hard to demonstrate in still pictures, but the thing is - I can really dig into my invocation and see how things worked. This was all possible before, of course, but was definitely much more manual. I love this. No, really, I love this a lot. Make It Better! Let’s really improve things though by getting rid of that simple “joiner” action. This is the new app I built (called pigruleapp2.js): composer.sequence( 'safeToDelete/rule', args => ({input: args.rule.replace(/[0-9]+\. /,'')}), 'safeToDelete/pig' ); All I’ve done is replace that middle action with an inline function. I then pushed it up like so: fsh app create ruleToPigFsh2 pigruleapp2.js When invoked, it runs the exact same, but now my setup is one action smaller, which is a good thing in my opinion. In case your curious, this is how the preview changes: One last note. The CLI currently does not tell you how to get the URL for your app. That’s been logged as an issue. You can do so with the webbify command: fsh webbify ruleToPigFsh2 Which spits out the URL: And you can click that link to test it yourself. Wrap Up So I hope this made sense, and if not, just leave me a comment below. I’ll remind folks that the fsh CLI does not currently work in WSL (Windows Subsystem for Linux) so if you are Windows, switch to Powershell when using it. You can find the source code used for this demo here:
https://www.raymondcamden.com/2017/10/18/building-your-first-serverless-composition-with-ibm-cloud-functions
CC-MAIN-2018-34
en
refinedweb
This question already has an answer here: I have a set of classes which implement a common interface and are annotated with a business domain attribute. By design, each class is annotated with different parametrization [Foo(Bar=1)] public class EntityA : ICustomInterface [Foo(Bar=2)] public class EntityB : ICustomInterface [Foo(Bar=3)] public class EntityC : ICustomInterface Either From the Spring's IApplicationContext or using plain old reflection, how do I find the class that implements ICustomInterface and is annotated with [Foo(Bar=Y)]? Something like Spring for Java's getBeansWithAnnotation. I don't require Spring.net, because those objects are prototypes. To be clear: if my task does not require using Spring at all I am happy with that
http://www.howtobuildsoftware.com/index.php/how-do/beB/c-reflection-custom-attributes-springnet-get-object-by-attribute-value-duplicate
CC-MAIN-2018-34
en
refinedweb
Question: I'm writing a simple CMS based on Django. Most content management systems rely on having a fixed page, on a fixed URL, using a template that has one or many editable regions. To have an editable region, you require a Page. For the system to work out which page, you require the URL. The problem comes when you're no longer dealing with "pages" (be those FlatPages pages, or something else), but rather instances from another Model. For example if I have a Model of products, I may wish to create a detail page that has multiple editable regions within. I could build those regions into the Model but in my case, there are several Models and is a lot of variance in how much data I want to show. Therefore, I want to build the CMS at template level and specify what a block (an editable region) is based on the instance of "page" or the model it uses. I've had the idea that perhaps I could dump custom template tags on the page like this: {% block unique_object "unique placeholder name" %} And that would find a "block" based on the two arguments passed in. An example: <h1>{{ product_instance.name }}</h1> {% block product_instance "detail: product short description" %} {% block product_instance "detail: product video" %} {% block product_instance "detail: product long description" %} Sounds spiffy, right? Well the problem I'm running into is how do I create a "key" for a zone so I can pull the correct block out? I'll be dealing with a completely unknown object (it could be a "page" object, a URL, a model instance, anything - it could even be a boat </fg>). Other Django micro-applications must do this. You can tag anything with django-tagging, right? I've tried to understand how that works but I'm drawing blanks. So, firstly, am I mad? And assuming I not, and this looks like a relatively sane idea to persue, how should I go about linking an object+string to a block/editable-region? Note: Editing will be done on-the-page so there's no real issue in letting the users edit the zones. I won't have to do any reverse-mumbo-jumbo in the admin. My eventual dream is to allow a third argument to specify what sort of content area this is (text, image, video, etc). If you have any comments on any of this, I'm happy to read them! Solution:1 django-tagging uses Django's contenttypes framework. The docs do a much better job of explaining it than I can, but the simplest description of it would be "generic foreign key that can point to any other model." This may be what you are looking for, but from your description it also sounds like you want to do something very similar to some other existing projects: django-flatblocks ("... acts like django.contrib.flatpages but for parts of a page; like an editable help box you want show alongside the main content.") django-better-chunks ("Think of it as flatpages for small bits of reusable content you might want to insert into your templates and manage from the admin interface.") and so on. If these are similar then they'll make a good starting point for you. Solution:2 You want a way to display some object-specific content on a generic template, given a specific object, correct? In order to support both models and other objects, we need two intermediate models; one to handle strings, and one to handle models. We could do it with one model, but this is less performant. These models will provide the link between content and string/model. from django.db import models from django.contrib.contenttypes.models import ContentType from django.contrib.contenttypes import generic CONTENT_TYPE_CHOICES = ( ("video", "Video"), ("text", "Text"), ("image", "Image"), ) def _get_template(name, type): "Returns a list of templates to load given a name and a type" return ["%s_%s.html" % (type, name), "%s.html" % name, "%s.html" % type] class ModelContentLink(models.Model): key = models.CharField(max_length=255) # Or whatever you find appropriate type = models.CharField(max_length=31, choices= CONTENT_TYPE_CHOICES) content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() object = generic.GenericForeignKey('content_type', 'object_id') def get_template(self): model_name = self.object.__class__.__name__.lower() return _get_template(model_name, self.type) class StringContentLink(models.Model): key = models.CharField(max_length=255) # Or whatever length you find appropriate type = models.CharField(max_length=31, choices= CONTENT_TYPE_CHOICES) content = models.TextField() def get_template(self): return _get_template(self.content, self.type) Now, all we need is a template tag to grab these, and then try to load the templates given by the models' get_template() method. I'm a bit pressed on time so I'll leave it at this and update it in ~1 hour. Let me know if you think this approach seems fine. Solution:3 It's pretty straightforward to use the contenttypes framework to implement the lookup strategy you are describing: class Block(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() object = generic.GenericForeignKey() # not actually used here, but may be handy key = models.CharField(max_length=255) ... other fields ... class Meta: unique_together = ('content_type', 'object_id', 'key') def lookup_block(object, key): return Block.objects.get(content_type=ContentType.objects.get_for_model(object), object_id=object.pk, key=key) @register.simple_tag def block(object, key) block = lookup_block(object, key) ... generate template content using 'block' ... One gotcha to be aware of is that you can't use the object field in the Block.objects.get call, because it's not a real database field. You must use content_type and object_id. I called the model Block, but if you have some cases where more than one unique (object, key) tuple maps to the same block, it may in fact be an intermediate model that itself has a ForeignKey to your actual Block model or to the appropriate model in a helper app like the ones Van Gale has mentioned. Note:If u also have question or solution just comment us below or mail us on [email protected] EmoticonEmoticon
http://www.toontricks.com/2018/06/tutorial-how-to-agnostically-link-any.html
CC-MAIN-2018-34
en
refinedweb
Tutorial: PySpark and revoscalepy interoperability in Machine Learning Server Applies to: Microsoft Learning Server 9.x PySpark is Apache Spark's programmable interface for Python. The revoscalepy module is Machine Learning Server's Python library for predictive analytics at scale. In this tutorial, you learn how to create a logistic regression model using functions from both libraries. - Import packages - Connect to Spark using revoscalepy.rx_spark_connect(), specifying PySpark interop - Use PySpark for basic data manipulation - Use revoscalepy to build a logistic regression model Note The revoscalepy module provides functions for data sources and data manipulation. We are using PySpark in this tutorial to illustrate a basic technique for passing data objects between the two programming contexts. Prerequisites - A Hadoop cluster with Spark 2.0-2.1 with Machine Learning Server for Hadoop - A Python IDE, such as Jupyter Notebooks, Visual Studio for Python, or PyCharm. - Sample data (AirlineSubsetCsv mentioned in the example) downloaded from our sample data web site to your Spark cluster. Note Jupyter Notebook users, update your notebook to include the MMLSPy kernel. Select this kernel in your Jupyter notebook to use the interoperability feature. Import the relevant packages The following commands import the required libraries into the current session. from pyspark import SparkContext from pyspark.sql import SparkSession from revoscalepy import * Connect to Spark Setting interop = ‘pyspark’ indicates that you want interoperability. # with PySpark for this Spark session cc = rx_spark_connect(interop='pyspark', reset=True) # Get the PySpark context sc = rx_get_pyspark_connection(cc) spark = SparkSession(sc) Data acquisition and manipulation The sample data used in this tutorial is airline arrival and departure data, which you can store in a local file path. # Read in the airline data into a data frame airlineDF = spark.read.csv('<data source location like "">') # Get a count on rows airlineDF.count() # Return the first 10 lines to get familiar with the data airlineDF.take(10) # Rename columns for readability airlineTransformed = airlineDF.selectExpr('ARR_DEL15 as ArrDel15', \ 'YEAR as Year', \ 'MONTH as Month', \ 'DAY_OF_MONTH as DayOfMonth', \ 'DAY_OF_WEEK as DayOfWeek', \ 'UNIQUE_CARRIER as Carrier', \ 'ORIGIN_AIRPORT_ID as OriginAirportID', \ 'DEST_AIRPORT_ID as DestAirportID', \ 'FLOOR(CRS_DEP_TIME / 100) as CRSDepTime', \ 'CRS_ARR_TIME as CRSArrTime') # Break up the data set into train and test. We use training data for # all years before 2012 to predict flight delays for Jan 2012 airlineTrainDF = airlineTransformed.filter('Year < 2012') airlineTestDF = airlineTransformed.filter('(Year == 2012) AND (Month == 1)') # Define column info for factors column_info = { 'ArrDel15': { 'type': 'numeric' }, #'CRSDepTime': { 'type': 'integer' }, 'CRSDepTime': { 'type': 'factor', 'levels': ['0', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23'] }, 'CRSArrTime': { 'type': 'integer' }, 'Month': { 'type': 'factor', 'levels': ['1', '2'] }, 'DayOfMonth': { 'type': 'factor', 'levels': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31'] }, 'DayOfWeek': { 'type': 'factor', 'levels': ['1', '2', '3', '4', '5', '6', '7'] #, 'newLevels': ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday'] # ignored }#, #'Carrier': { 'type': 'factor' } } # Define a Spark data frame data source, required for passing to revoscalepy trainDS = RxSparkDataFrame(airlineTrainDF, column_info=column_info) testDS = RxSparkDataFrame(airlineTestDF, column_info=column_info) Create the model A logistic regression model requires a symbolic formula, specifying the dependent and indepdendent variables, and a data set. You can output the results using the print function. # Create the formula formula = "ArrDel15 ~ DayOfMonth + DayOfWeek + CRSDepTime + CRSArrTime" # Run a logistic regression to predict arrival delay logitModel = rx_logit(formula, data = trainDS) # Print the model summary to look at the co-efficients print(logitModel.summary()) Next steps This tutorial provides an introduction to a basic workflow using PySpark for data preparation and revoscalepy functions for logistic regression. For further exploration, review our Python samples.
https://docs.microsoft.com/es-es/machine-learning-server/python/tutorial-revoscalepy-pyspark
CC-MAIN-2018-34
en
refinedweb
Description editFor the evaluation of most commands, a new level is created and the command evaluated at that level. A level maintains the following four notions: - The current namespace - The current variable table - For most commands, the current variable table is the local variable table that exists only for the duration of the command, and is only accessible to commands in scripts evaluated at the current level. For a few commands such as namespace eval the variable table of the current namespace is the current variable table. - The current command - The sequence of previous levels Commands Related to Levels editSome commands manipulate the current level or reference resources at other levels. Here is a brief description of such commands: - info level - Provides information about the current and previous levels. - namespace eval - Creates a new level, associates that level with the indicated namespace, and evaluates a script at that level. - proc - Arranges for a level to be created, arguments to the command to be assigned in the current variable table, and for the body of the procedure to be evaluated at the new level. - set - Modifies the current variable table. At a level associated with a namespace if the specified variable doesn't exist at that level, but does exist at the global level, modifies the global variable table instead. This behaviour is now considered a mistake to be fixed in some future version of Tcl. - uplevel - Arranges for a script to be evaluated at some level, which is resolved relative to the current level. - upvar and namespace upvar - Each of these commands modifies a variable table, which is resolved relative to the current level. - variable - Modifies the variable table of the current level. See Also edit - Many ways to eval - Whenever a script is evaluated, a level is involved. - downlevel - An Eagle-only extension to uplevel.
http://wiki.tcl.tk/41278
CC-MAIN-2017-30
en
refinedweb
Issues Capturing an image using pygame and saving it I had a problem with respect to the capturing of a image from a webcam and saving it. I wrote a small code to do this, but on executing the program i am just getting a black(blank) image of size(640,480). "import pygame import pygame.camera from pygame.locals import * pygame.init() pygame.camera.init() window = pygame.display.set_mode((640,480),0) cam = pygame.camera.Camera(0) cam.start() image = cam.get_image() pygame.image.save(window,'abc.jpg') cam.stop()" It also opens the pygame window but that is also blank, and it goes to the NOT RESPONDING state in a seconds time. Any solutions you could suggest me regarding the above? Thank you Nirav(nrp). I am working on a Image processing task. Is there any method to control the exposure time and frame rate of the camera from the pygame package? I am asking this because when the images are captured in dark situations then the same image is copied to around 30 frames before the next frame is captured and copied for the next 30 frames. Can this be because of the Buffer involved. Is the buffer slow ?? Any Inputs ?? Which OS are you on? I am on Windows 7. Change the line from your code: pygame.image.save(window,'abc.jpg') to: pygame.image.save(image,'abc.jpg') You saved the window screen, and not the camera image.
https://bitbucket.org/pygame/pygame/issues/78/capturing-an-image-using-pygame-and-saving
CC-MAIN-2017-30
en
refinedweb
"Asrarahmed Kadri" <ajkadri at googlemail.com> wrote > Sorry for asking you such a trivial question.!!! But i want to size > up all > the buttons with the same size as the largest one in the interface.. > And > thats why I am asking this question.. Assuming you mean in Tkinter(given yor other posts) it depends... You can specify the size when you create it, and if you use the placer layout manager it should keep that size. But if you use grid or packer managers then the size may change if the window is resized, depending on the options. However you should be able to read the size of the buttons back from the widget at runtime using dictionary syntax: def max(a,b): return (a>b) and a or b for button in mybuttons: max_size = max(max_size, button['width']) Or something very similar... Alan G.
https://mail.python.org/pipermail/tutor/2006-October/050245.html
CC-MAIN-2017-30
en
refinedweb
In the last couple of posts (1, 2) I described what needed to be done when migrating a Python web site running under Apache/mod_wsgi to running inside of a Docker container. This included the steps necessary to have the existing Apache instance proxy requests for the original site through to the appropriate port on the Docker host and deal with any fix ups necessary to ensure that the backend Python web site understood what the public facing URL was. In changing to running the Python web site under Docker, I didn’t cover the issue of how the instance of the Docker container itself would be started up and managed. All I gave was an example command line for manually starting the container. docker run --rm -p 8002:80 blog.example.com The assumption here was that you already had the necessary infrastructure in place to start such Docker containers when the system started, and restart them automatically if for some reason they stopped running. There are various ways one could manage service orchestration under Docker. These all come with their own infrastructure which has to be set up and managed. If instead you are just after something simple to keep the Python web site you migrated into a Docker container running, and also manage it in conjunction with the front end Apache instance, then there is actually a trick one can do using mod_wsgi on the front end Apache instance. Daemon process groups When using mod_wsgi, by default any hosted WSGI application will run in what is called embedded mode. Although this is the default, if you are running on a UNIX system it is highly recommended you do not use embedded mode and instead use what is called daemon mode. The difference is that with embedded mode, the WSGI application runs inside of the Apache child worker processes. These are the same processes which handle any requests received by Apache for serving up static files. Using embedded mode can result in various issues due to the way Apache manages those processes. The best solution is simply not to use embedded mode and use daemon mode instead. For daemon mode, what happens is that a group of one or more separate daemon processes are created by mod_wsgi and the WSGI application is instead run within those. All that the Apache child worker processes do in this case is transparently proxy the requests through to the WSGI application running in those separate daemon processes. Being a separate set of processes, mod_wsgi is able to better control how those processes are managed. In the initial post the example given was using daemon mode, but the aim was to move the WSGI application out of the front end Apache altogether and run it using a Docker container instead. This necessitated the manual configuration to proxy the requests through to that now entirely separate web application instance running under Docker. Now. Running the Docker image In the prior posts, the basic configuration we ended up with for proxying the requests through to the Python web site running under Docker was: # blog.example.com<VirtualHost *:80> ServerName blog.example.comProxyPass / ProxyPassReverse / RequestHeader set X-Forwarded-Port 80 </VirtualHost> This was after we had removed the configuration which had created a mod_wsgi daemon process group and delegated the WSGI application to run in it. We are now going to add back the daemon process group, but we will not set up any WSGI application to run in it. Instead we will setup a Python script to be loaded in the process when it starts using the ‘WSGIImportScript’ directive. # blog.example.com<VirtualHost *:80> ServerName blog.example.com ProxyPass / ProxyPassReverse / RequestHeader set X-Forwarded-Port 80 WSGIDaemonProcess blog.example.com threads=1 WSGIImportScript /some/path/blog.example.com/docker-admin.py \ process-group=blog.example.com application-group=%{GLOBAL} </VirtualHost> In the ‘docker-admin.py’ file we now add: import osos.execl('/usr/local/bin/docker', '(docker:blog.example.com)', 'run', '--rm', '-p', '8002:80', ‘blog.example.com') With this in place, when Apache is started, mod_wsgi will create a daemon process group with a single process. It will then immediately load and execute the ‘docker-admin.py’ script which in turn will execute the ‘docker' program to run up a Docker container using the image created for the backend WSGI application. The resulting process tree would look like: -+= 00001 root /sbin/launchd \-+= 64263 root /usr/sbin/httpd -D FOREGROUND |--- 64265 _www /usr/sbin/httpd -D FOREGROUND \--- 64270 _www (docker:blog.example.com.au) run --rm -p 8002:80 blog.example.com Of note, the ‘docker’ program was left running in foreground mode waiting for the Docker container to exit. Because it is running the Python web application, that will not occur unless explicitly shutdown. If the container exited because the Apache instance run by mod_wsgi-express crashed for some reason, then being a managed daemon process created by mod_wsgi, it will be detected that the ‘docker’ program process had exited and a new mod_wsgi daemon process created to replace it, thereby executing the ‘docker-admin.py’ script again and so restarting the WSGI application running under Docker. Killing the backend WSGI application explicitly by running ‘docker kill’ on the Docker instance will also cause it to exit, but again it will be replaced automatically. The backend WSGI application would only be shutdown completely by shutting down the front end Apache itself. Using this configuration, Apache with mod_wsgi, is therefore effectively being used as a simple process manager to startup and keep alive the backend WSGI application running under Docker. If the Docker instance exits it will be replaced. If Apache is shutdown, then so will the Docker instance. Managing other services Although the example here showed starting up of the WSGI application which was shifted out of the front end Apache, there is no reason that a similar thing couldn’t be done for other services being run under Docker. For example, you could create separate dummy mod_wsgi daemon process groups and corresponding scripts, to start up Redis or even a database. Because the front end Apache is usually already going to be integrated into the operating system startup scripts, we have managed to get management of Docker containers without needing to setup a separate system to create and manage them. If you are only playing or do not have a complicated set of services running under Docker, then this could save a bit of effort and be just as effective. With whatever the service is though, the one thing you may want to look at carefully is how a service is shutdown. The issue here is how Apache signals the shutdown of any managed process and what happens if it doesn’t shutdown promptly. Unfortunately how Apache does this cannot be overridden, so you do have to be mindful of it in case it would cause an issue. Specifically, when Apache is shutdown or a restart triggered, Apache will send the ‘SIGINT’ signal to each managed child process. If that process has not shutdown after one second, it will send the signal again. The same will occur if after a total of two seconds the process hasn't shutdown. Finally, if three seconds elapsed in total, then Apache will send a ‘SIGKILL’ signal. Realistically any service should be tolerant of being killed abruptly, but if you have a service which can take a long time to shutdown and is susceptible to problems if forcibly killed, that could be an issue and this may not be a suitable way of managing them.
http://blog.dscpl.com.au/2015/07/using-apache-to-start-and-manage-docker.html
CC-MAIN-2017-30
en
refinedweb
SQL Server Backup and Restore - Blaze Walton - 2 years ago - Views: Transcription 1 The Red Gate Guide SQL Server Backup and Restore Shawn McGehee ISBN: 2 SQL Server Backup and Restore By Shawn McGehee First published by Simple Talk Publishing April 2012 3 Copyright April 2012 ISBN The right of Shawn McGehee to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act that in which it is published and without a similar condition including this condition being imposed on the subsequent publisher. Technical Review by Eric Wisdahl Cover Image by Andy Martin Edited by Tony Davis Typeset & Designed by Peter Woodhouse & Gower Associates 4 Table of Contents Introduction 12 Software Requirements and Code Examples 18 Chapter 1: Basics of Backup and Restore 19 Components of a SQL Server Database 20 Data files 20 Filegroups 22 Transaction log 24 SQL Server Backup Categories and Types 28 SQL Server database backups 29 SQL Server transaction log backups 32 File backups 36 Recovery Models 39 Simple 41 Full 43 Bulk Logged 44 Restoring Databases 46 Restoring system databases 47 Restoring single pages from backup 48 Summary 49 Chapter 2: Planning, Storage and Documentation 50 Backup Storage 50 Local disk (DAS or SAN) 52 Network device 58 Tape 59 Backup Tools 60 5 Maintenance plan backups 61 Custom backup scripts 62 Third-party tools 63 Backup and Restore Planning 64 Backup requirements 65 Restore requirements 68 An SLA template 69 Example restore requirements and backup schemes 71 Backup scheduling 73 Backup Verification and Test Restores 75 Back up WITH CHECKSUM 76 Verifying restores 77 DBCC CHECKDB 77 Documenting Critical Backup Information 78 Summary 83 Chapter 3: Full Database Backups 84 What is a Full Database Backup? 84 Why Take Full Backups? 85 Full Backups in the Backup and Restore SLA 86 Preparing for Full Backups 87 Choosing the recovery model 88 Database creation 88 Creating and populating the tables 94 Taking Full Backups 96 Native SSMS GUI method 97 Native T-SQL method 106 Native Backup Compression 111 6 Verifying Backups 113 Building a Reusable and Schedulable Backup Script 114 Summary 115 Chapter 4: Restoring From Full Backup 116 Full Restores in the Backup and Restore SLA 116 Possible Issues with Full Database Restores 117 Large data volumes 118 Restoring databases containing sensitive data 118 Too much permission 120 Performing Full Restores 122 Native SSMS GUI full backup restore 122 Native T-SQL full restore 129 Forcing Restore Failures for Fun 133 Considerations When Restoring to a Different Location 136 Restoring System Databases 137 Restoring the msdb database 138 Restoring the master database 140 Summary 143 Chapter 5: Log Backups 144 A Brief Peek Inside a Transaction Log 145 Three uses for transaction log backups 148 Performing database restores 149 Large database migrations 150 Log shipping 151 Log Backups in the Backup and Restore SLA 152 7 Preparing for Log Backups 153 Choosing the recovery model 154 Creating the database 155 Creating and populating tables 157 Taking a base full database backup 159 Taking Log Backups 161 The GUI way: native SSMS log backups 161 T-SQL log backups 166 Forcing Log Backup Failures for Fun 170 Troubleshooting Log Issues 172 Failure to take log backups 173 Other factors preventing log truncation 174 Excessive logging activity 175 Handling the 9002 Transaction Log Full error 176 Log fragmentation 177 Summary 181 Chapter 6: Log Restores 182 Log Restores in the SLA 182 Possible Issues with Log Restores 183 Missing or corrupt log backup 183 Missing or corrupt full backup 184 Minimally logged operations 184 Performing Log Restores 187 GUI-based log restore 188 T-SQL point-in-time restores 194 Possible difficulties with point-in-time restores 198 Forcing Restore Failures for Fun 200 Summary 204 8 Chapter 7: Differential Backup and Restore 205 Differential Backups, Overview 206 Advantages of differential backups 207 Differential backup strategies 208 Possible issues with differential backups 212 Differentials in the backup and restore SLA 215 Preparing for Differential Backups 216 Recovery model 216 Sample database and tables plus initial data load 217 Base backup 218 Taking Differential Backups 218 Native GUI differential backup 219 Native T-SQL differential backup 221 Compressed differential backups 223 Performing Differential Backup Restores 225 Native GUI differential restore 225 Native T-SQL differential restore 227 Restoring compressed differential backups 230 Forcing Failures for Fun 231 Missing the base 231 Running to the wrong base 232 Recovered, already 235 Summary 237 Chapter 8: Database Backup and Restore with SQL Backup Pro _238 Preparing for Backups 238 Full Backups 241 SQL Backup Pro full backup GUI method 241 SQL Backup Pro full backup using T-SQL 253 9 Log Backups 256 Preparing for log backups 256 SQL Backup Pro log backups 258 Differential Backups 261 Building a reusable and schedulable backup script 263 Restoring Database Backups with SQL Backup Pro 267 Preparing for restore 267 SQL Backup Pro GUI restore to the end of a log backup 269 SQL Backup Pro T-SQL complete restore 277 SQL Backup Pro point-in-time restore to standby 279 Restore metrics: native vs. SQL Backup Pro 289 Verifying Backups 291 Backup Optimization 292 Summary 294 Chapter 9: File and Filegroup Backup and Restore 295 Advantages of File Backup and Restore 296 Common Filegroup Architectures 298 File Backup 303 Preparing for file backups 306 SSMS native full file backups 309 Native T-SQL file differential backup 310 SQL Backup Pro file backups 314 File Restore 318 Performing a complete restore (native T-SQL) 321 Restoring to a point in time (native T-SQL) 326 Restoring after loss of a secondary data file 328 Quick recovery using online piecemeal restore 335 10 Common Issues with File Backup and Restore 340 File Backup and Restore SLA 341 Forcing Failures for Fun 343 Summary 346 Chapter 10: Partial Backup and Restore 348 Why Partial Backups? 349 Performing Partial Database Backups 350 Preparing for partial backups 351 Partial database backup using T-SQL 354 Differential partial backup using T-SQL 355 Performing Partial Database Restores 357 Restoring a full partial backup 357 Restoring a differential partial backup 359 Special case partial backup restore 360 SQL Backup Pro Partial Backup and Restore 362 Possible Issues with Partial Backup and Restore 364 Partial Backups and Restores in the SLA 365 Forcing Failures for Fun 366 Summary 367 Appendix A: SQL Backup Pro Installation and Configuration_ 368 SQL Backup Pro GUI Installation 368 SQL Backup Pro Services Installation 370 SQL Backup Pro Configuration 376 File management 376 settings 378 11 About the author Shawn McGehee Shawn. Shawn is also a contributing author on the Apress book, Pro SQL Server Reporting Services Acknowledgements I would like to thank everyone who helped and supported me through the writing of this book. I would especially like to thank my editor, Tony Davis, for sticking with me during what was a long and occasionally daunting process, and helping me make my first single-author book a reality. I also need to give a special thank you to all of my close friends and family for always being there for me during all of life's adventures, good and bad. Shawn McGehee About the technical reviewer Eric Wisdahl Eric is a development DBA working in the e-commerce industry. He spends what little free time he has reading, playing games, or spending time with his wife and dogs. In a past life he has worked as an ETL/BI Specialist in the insurance industry, a pizza boy, patent examiner, Pro-IV code monkey and.net punching bag. xi 12 Introduction My first encounter with SQL Server, at least from an administrative perspective, came while I was still at college, working in a small web development shop. We ran a single SQL Server 6.5 instance, on Windows NT, and it hosted every database for every client that the company serviced. There was no dedicated administration team; just a few developers and the owner. One day, I was watching and learning from a fellow developer while he made code changes to one of our backend administrative functions. Suddenly, the boss stormed into the room and demanded everyone's immediate attention. Whatever vital news he had to impart is lost in the sands of time, but what I do remember is that when the boss departed, my friend returned his attention to the modified query and hit Execute, an action that was followed almost immediately by a string of expletives so loud they could surely have been heard several blocks away. Before being distracted by the boss, he'd written the DELETE portion of a SQL statement, but not the necessary WHERE clause and, upon hitting Execute, he had wiped out all the data in a table. Fortunately, at least, he was working on a test setup, with test data. An hour later we'd replaced all the lost test data, no real harm was done and we were able to laugh about it. As the laughter subsided, I asked him how we would have gotten that data back if it had been a live production database for one of the clients or, come to think of it, what we would do if the whole server went down, with all our client databases on board. He had no real answer, beyond "Luckily it's never happened." There was no disaster recovery plan; probably because there were no database backups that could be restored! It occurred to me that if disaster ever did strike, we would be in a heap of trouble, to the point where I wondered if the company as a whole could even survive such an event. It was a sobering thought. That evening I did some online research on database backups, and the very next day performed a full database backup of every database on our server. A few days later, I had 12 13 Introduction jobs scheduled to back up the databases on a regular basis, to one of the local hard drives on that machine, which I then manually copied to another location. I told the boss what I'd done, and so began my stint as the company's "accidental DBA." Over the coming weeks and months, I researched various database restore strategies, and documented a basic "crash recovery" plan for our databases. Even though I moved on before we needed to use even one of those backup files, I felt a lot better knowing that, with the plan that I'd put in place, I left the company in a situation where they could recover from a server-related disaster, and continue to thrive as a business. This, in essence, is the critical importance of database backup and restore: it can mean the difference between life or death for a business, and for the career of a DBA. The critical importance of database backup and restore The duties and responsibilities of a Database Administrator (DBA) make at the top of every administrative DBA's list of tasks. 13 14 Introduction Such a plan needs to be developed for each and every user database in your care, as well as supporting system databases, and it should be tailored to the specific requirements of each database, based on the type of data being stored (financial, departmental, personal, and so on), the maximum acceptable risk of potential data loss (day? hour? minute?), and the maximum acceptable down-time in the event of a disaster. Each of these factors will help decide the types of backup required, how often they need to be taken, how many days' worth of backup files need to be stored locally, and so on. All of this should be clearly documented so that all parties, both the DBAs and application/ database owners, understand the level of service that is expected for each database, and what's required in the plan to achieve it. At one end of the scale, for a non-frontline, infrequently-modified database, the backup and recovery scheme may be simplicity itself, involving a nightly full database backup, containing a complete copy of all data files, which can be restored if and when necessary. At the opposite end of the scale, a financial database with more or less zero tolerance to data loss will require a complex scheme consisting of regular (daily) full database backups, probably interspersed with differential database backups, capturing all changes since the last full database backup, as well as very regular transaction log backups, capturing the contents added in the database log file, since the last log backup. For very large databases (VLDBs), where it may not be possible to back up the entire database in one go, the backup and restore scheme may become more complex still, involving backup of individual data files, for filegroups, as well as transaction logs. All of these backups will need to be carefully planned and scheduled, the files stored securely, and then restored in the correct sequence, to allow the database to be restored to the exact state in which it existed at any point in time in its history, such as the point just before a disaster occurred. It sounds like a daunting task, and if you are not well prepared and well practiced, it will be. However, with the tools, scripts, and techniques provided in this book, and with the requisite planning and practice, you will be prepared to respond quickly and efficiently to a disaster, whether it's caused by disk failure, malicious damage, database corruption or the accidental deletion of data. This book will walk you step by step through the process of capturing all types of backup, from basic full database backups, to transaction log 14 15 Introduction backups, to file and even partial backups. It will demonstrate how to perform all of the most common types of restore operation, from single backup file restores, to complex point-in-time restores, to recovering a database by restoring just a subset of the files that make up the database. As well as allowing you to recover a database smoothly and efficiently in the face of one of the various the "doomsday scenarios," your well-rounded backup and recovery plan, developed with the help of this book, will also save you time and trouble in a lot of other situations including, but not limited to those below. Refreshing development environments periodically, developers will request that their development environments be refreshed with current production data and objects. Recovering from partial data loss occasionally, a database has data "mysteriously disappear from it." Migrating databases to different servers you will eventually need to move databases permanently to other servers, for a variety of reasons. The techniques in this book can be used for this purpose, and we go over some ways that different backup types can cut down on the down-time which this process may cause. Offloading reporting needs reporting on data is becoming more and more of a high priority in most IT shops. With techniques like log shipping, you can create cheap and quick reporting solutions that can provide only slightly older reporting data than High Availability solutions. I learned a lot of what I know about backup and restore the hard way, digging through innumerable articles on Books Online, and various community sites. I hope my book will serve as a place for the newly-minted and accidental DBA to get a jump on backups and restores. It can be a daunting task to start planning a Backup and Restore SLA from scratch, even in a moderately-sized environment, and I hope this book helps you get a good start. 15 16 Introduction How the book is structured. Broadly, the book breaks down into four sections. Prerequisites everything you need to know and consider before you start performing backup and restore. Chapter 1 describes the data and log files that comprise a database, and all the basic types of backup that are possible for these file, and explains the available database recovery models and what they mean. Chapter 2 takes a detailed look at all of the major aspects of planning a backup and recovery strategy, from choosing and configuring hardware, gathering and documenting the requirements for each database, selecting the appropriate backup tool, scheduling considerations, running backup verification checks, and more. Basic backup and restore how to capture and restore all of the basic backup types, using SSMS and T-SQL. Chapters 3 and 4 cover how to take standard and compressed full database backups, and restore them. 16 17 Introduction Chapters 5 and 6 cover how to take transaction log backups, and then use them in conjunction with a full database backup to restore a database to a particular point in time. They also cover common transaction log problems and how to resolve them. Chapter 7 covers standard and compressed differential database backup and restore. Basic backup and restore with SQL Backup how to capture and restore all basic backup types using Red Gate SQL Backup. Chapter 8 third-party tools such as Red Gate SQL backup aren't free, but they do offer numerous advantages in terms of the ease with which all the basic backups can be captured, automated, and then restored. Many organizations, including my own, rely on such tools for their overall backup and restore strategy. Advanced backup and restore how to capture and restore file and filegroup backups, and partial database backups. Chapter 9 arguably the most advanced chapter in the book, explaining the filegroup architectures that enable file-based backup and restore, and the complex process of capturing the necessary file backups and transaction log backups, and using them in various restore operations. Chapter 10 a brief chapter on partial database backups, suitable for large databases with a sizeable portion of read-only data. Finally, Appendix A provides a quick reference on how to download, install, and configure the SQL Backup tool from Red Gate Software, so that you can work through any examples in the book that use this tool. 17 18 Introduction Who this book is for This book is targeted toward the novice Database Administrator with less than a year of experience, and toward what I call "accidental" or "inheritance" DBAs, who are those who have inherited the duties of a DBA, by luck or chance, without any training or prior experience. If you have some experience, feel free to skip through some of the more basic topics and head to the more advanced sections. If you are one of our newly-minted DBA brothers and sisters, or someone who's had these duties thrust upon you unexpectedly, reading these prerequisite and basic sections will be a very worthwhile task. Software Requirements and Code Examples Throughout this book are scripts demonstrating various ways to take and restore backups, using either native T-SQL or SQL Backup scripts. All the code you need to try out the examples in this book can be obtained from the following URL: Examples in this book were tested on SQL Server 2008 and SQL Server 2008 R2 Standard Edition, with the exception of the online piecemeal restore in Chapter 9, which requires Enterprise Edition. Red Gate SQL Backup v was used in all SQL Backup examples, in Chapters 8 and 9 of this book. 18 19 Chapter 1: Basics of Backup and Restore Before we dive into the mechanisms for taking and restoring backups, we need to start with a good basic understanding of the files that make up a SQL Server database, what they contain, how and when they can be backed up, and the implications this has with regard to potential data loss in the event of a disaster in which a database is damaged, or specific data accidentally lost, and needs to be restored. Specifically, in this chapter, we will cover: components of a SQL Server database primary and secondary files and filegroups, plus log files how SQL Server uses the transaction log and its significance in regard to restore capabilities possible types of SQL Server backup full and differential database backups, transaction log backups and file backups SQL Server database recovery models the available recovery models and what they mean in terms of backups restoring databases the various available types of database restore, plus special restores, such as system database restores. 19 20 Chapter 1: Basics of Backup and Restore Components of a SQL Server Database Ultimately, a relational database is simply a set of files that store data. When we make backups of these files, we capture the objects and data within those files and store them in a backup file. So, put simply, a database backup is just a copy of the database as it existed at the time the backup was taken. Before we dive into the backup files themselves, however, we need to briefly review the files that comprise a SQL Server database. At its simplest, a database is composed of two files, both created automatically upon execution of a CREATE DATABASE command: a data file and a log file. However, in larger, more complex databases, the data files may be broken down into multiple filegroups, each one containing multiple files. Let's discuss each of these components in a little more detail; we won't be delving too deep right now, but we need at least to cover what each file contains and what roles it plays in a day-to-day database backup and recovery strategy. Data files Data files in a SQL Server database refer to the individual data containers that are used to store the system and user-defined data and objects. In other words, they contain the data, tables, views, stored procedures, triggers and everything else that is accessed by you, and your end-users and applications. These files also include most of the system information about your database, including permission information, although not including anything that is stored in the master system database.; if you enjoy confusing your fellow DBAs, you can apply any extensions you wish to these files. 20 21 Chapter 1: Basics of Backup and Restore The primary data file will contain: all system objects and data by default, all user-defined objects and data (assuming that only the MDF file exists in the PRIMARY filegroup) the location of any secondary data files. Many of the databases we'll create in this book will contain just a primary data file, in the PRIMARY filegroup, although in later chapters we'll also create some secondary data files to store user-defined objects and data. Writes to data files occur in a random fashion, as data changes affect random pages stored in the database. As such, there is a potential performance advantage to be had from being able to write simultaneously to multiple data files. Any secondary data files are typically denoted with the NDF extension, and can be created in the PRIMARY filegroup or in separate user-defined filegroups (discussed in more detail in the next section). When multiple data files exist within a single filegroup, SQL Server writes to these files using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file, compared to other files in the filegroup. Collectively, the data files for a given database are the cornerstone of your backup and recovery plan. If you have not backed up your live data files, and the database becomes corrupted or inaccessible in the event of a disaster, you will almost certainly have lost some or all of your data. As a final point, it's important to remember that data files will need to grow, as more data is added to the database. The manner in which this growth is managed is often a point of contention among DBAs. You can either manage the growth of the files manually, adding space as the data grows, or allow SQL Server to auto-grow the files, by a certain value or percentage each time the data file needs more space. Personally, I advocate leaving auto-growth enabled, but on the understanding that files are sized initially to cope with current data and predicted data growth (over a year, say) without undergoing an excessive 21 22 Chapter 1: Basics of Backup and Restore number of auto-growth events. We'll cover this topic more thoroughly in the Database creation section of Chapter 3, but for the rest of the discussion here, we are going to assume that the data and log files are using auto-growth. Filegroups A filegroup is simply a logical collection of one or more data files. Every filegroup can contain one or more data files. When data is inserted into an object that is stored in a given filegroup, SQL Server will distribute that data evenly across all data files in that filegroup. For example, let's consider the PRIMARY filegroup, which in many respects is a "special case." The PRIMARY filegroup will always be created when you create a new database, and it must always hold your primary data file, which will always contain the pages allocated for your system objects, plus "pointers" to any secondary data files. By default, the PRIMARY filegroup is the DEFAULT filegroup for the database and so will also store all user objects and data, distributed evenly between the data files in that filegroup. However, it is possible to store some or all of the user objects and data in a separate filegroup. For example, one commonly cited best practice with regard to filegroup architecture is to store system data separately from user data. In order to follow this practice, we might create a database with both a PRIMARY and a secondary, or user-defined, filegroup, holding one or more secondary data files. All system objects would automatically be stored in the PRIMARY data file. We would then ALTER the database to set the secondary filegroup as the DEFAULT filegroup for that database. Thereafter, any user objects will, by default, be stored in that secondary filegroup, separately from the system objects.. 22 23 Chapter 1: Basics of Backup and Restore CREATE TABLE TableName ( ColumnDefinitionList ) ON [SecondaryFilegroupName] Any data files in the secondary filegroup can, and typically will, be stored on separate physical storage from those in the PRIMARY filegroup. When a BACKUP DATABASE command is issued it will, by default, back up all objects and data in all data files in all filegroups. However, it's possible to specify that only certain filegroups, or specific files within a filegroup are backed up, using file or filegroup backups (covered in more detail later in this chapter, and in full detail in Chapter 9, File and Filegroup Backup and Restore). It's also possible to perform a partial backup (Chapter 10, Partial Backup and Restore), excluding any read-only filegroups. Given these facts, there's a potential for both performance and administrative benefits, from separating your data across filegroups. For example, if we have certain tables that are exclusively read-only then we can, by storing this data in a separate filegroup, exclude this data from the normal backup schedule. After all, performing repeated backups of data that is never going to change is simply a waste of disk space. If we have tables that store data that is very different in nature from the rest of the tables, or that is subject to very different access patterns (e.g. heavily modified), then there can be performance advantages to storing that data on separate physical disks, configured optimally for storing and accessing that particular data. Nevertheless, it's my experience that, in general, RAID (Redundant Array of Inexpensive Disks) technology and SAN (Storage Area Network) devices (covered in Chapter 2) automatically do a much better job of optimizing disk access performance than the DBA can achieve by manual placement of data files. Also, while carefully designed filegroup architecture can add considerable flexibility to your backup and recovery scheme, it will also add administrative burden. There are certainly valid reasons for using secondary files and filegroups, such as separating system and user data, and there are certainly cases where they might be a necessity, for example, 23 24 Chapter 1: Basics of Backup and Restore for databases that are simply too large to back up in a single operation. However, they are not required on every database you manage. Unless you have experience with them, or know definitively that you will gain significant performance with their use, then sticking to a single data file database will work for you most of the time (with the data being automatically striped across physical storage, via RAID). Finally, before we move on, it's important to note that SQL Server transaction log files are never members of a filegroup. Log files are always managed separately from the SQL Server data files. Transaction log A transaction log file contains a historical account of all the actions that have been performed on your database. All databases have a transaction log file, which is created automatically, along with the data files, on creation of the database and is conventionally denoted with the LDF extension. It is possible to have multiple log files per database but only one is required. Unlike data files, where writes occur in a random fashion, SQL Server always writes to the transaction log file sequentially, never in parallel. This means that it will only ever write to one log file at a time, and having more than one file will not boost write-throughput or speed. In fact, having more multiple files could result in performance degradation, if each file is not correctly sized or differs in size and growth settings from the others. Some inexperienced DBAs don't fully appreciate the importance of the transaction log file, both to their backup and recovery plan and to the general day-to-day operation of SQL Server, so it's worth taking a little time out to understand how SQL Server uses the transaction log (and it's a topic we'll revisit in more detail in Chapter 5, Log Backups). Whenever a modification is made to a database object (via Data Definition Language, DDL), or the data it contains (Data Manipulation Language, DML), the details of the change are recorded as a log record in the transaction log. Each log record contains 24 25 Chapter 1: Basics of Backup and Restore details of a specific action within the database (for example, starting a transaction, or inserting a row, or modifying a row, and so on). Every log record will record the identity of the transaction that performed the change, which pages were changed, and the data changes that were made. Certain log records will record additional information. For example, the log record recording the start of a new transaction (the LOP_BEGIN_XACT log record) will contain the time the transaction started, and the LOP_COMMIT_XACT (or LOP_ABORT_XACT) log records will record the time the transaction was committed (or aborted). From the point of view of SQL Server and the DBA looking after it, the transaction log performs the following critical functions: ensures transactional durability and consistency enables, via log backups, point-in-time restore of databases.. As noted previously, the database's log file provides a record of all transactions performed against that database. When a data modification is made, the relevant data pages are read from the data cache, first being retrieved from disk if they are not already in the cache. Data is modified in the data cache, and the log records describing the effects of the transaction are created in the log cache. Any page in the cache that has been modified since being read from disk is called a "dirty" page. When a periodic CHECKPOINT operation occurs, all dirty pages, regardless of whether they relate to committed or uncommitted transactions, are flushed to disk. The WAL protocol dictates that, before a data page is 25 26 Chapter 1: Basics of Backup and Restore modified in non-volatile storage (i.e. on disk), the description of the change must first be "hardened" to stable storage. SQL Server or, more specifically, the buffer manager, makes sure that the change descriptions (log records) are written to the physical transaction log file before the data pages are written to the physical data files. The Lazy Writer Another process that scans the data cache, the Lazy Writer, may also write dirty data pages to disk, outside of a checkpoint, if forced to do so by memory pressures. By always writing changes to the log file first, SQL Server. This process of reconciling the contents of the data and log files occurs during the database recovery process (sometimes called Crash Recovery), which is initiated automatically whenever SQL Server restarts, or as part of the RESTORE command. Say, for example, a database crashes after a certain transaction (T1) is "hardened" to the transaction log file, but before the actual data is written from memory to disk. When the database restarts, a recovery process is initiated, which reconciles the data file and log file. All of the operations that comprise transaction T1, recorded in the log file, will be "rolled forward" (redone) so that they are reflected in the data files. During this same recovery process, any data modifications on disk that originate from incomplete transactions, i.e. those for which neither a COMMIT nor a ROLLBACK have been issued, are "rolled back" (undone), by reading the relevant operations from the log file, and performing the reverse physical operation on the data. More generally, this rollback process occurs if a ROLLBACK command is issued for an explicit transaction, or if an error occurs and XACT_ABORT is turned on, or if the database detects that communication has been broken between the database and the client that instigated the 26 27 Chapter 1: Basics of Backup and Restore transactions., and so guarantees data consistency and integrity during normal day-to-day operation. Log backups and point-in-time restore As we've discussed, each log record contains the details of a specific change that has been made to the database, allowing that change to be performed again as a part of REDO, or undone as a part of UNDO, during crash recovery. Once captured in a log backup file, the log records can be subsequently applied to a full database backup in order to perform a database restore, and so re-create the database as it existed at a previous point in time, for example right before a failure. As such, regular backups of your log files are an essential component of your database backup and restore strategy for any database that requires point-in-time restore. The other very important reason to back up the log is to control its size. Since your log file has a record of all of the changes that have been made against it, it will obviously take up space. The more transactions that have been run against your database, the larger this log file will grow. If growth is left uncontrolled, the log file may even expand to the point where it fills your hard drive and you receive the dreaded "9002 (transaction log full)" error, and the database will become read-only, which we definitely do not want to happen. We'll discuss this in more detail in Chapter 5. 27 28 Chapter 1: Basics of Backup and Restore SQL Server Backup Categories and Types The data files, filegroups, and log files that make up a SQL Server database can, and generally should, be backed up as part of your database backup and recovery strategy. This includes both user and system databases. There are three broad categories of backup that a DBA can perform: database backups, file backups and transaction log backups, and within these categories several different types of backup are available. Database backups copy into a backup file the data and objects in the primary data file and any secondary data files. Full database backup backs up all the data and objects in the data file(s) for a given database. Differential database backup backs up any data and objects in data file(s) for a given database that have changed since the last full backup. Transaction log backups copy into a backup file all the log records inserted into the transaction log LDF file since the last transaction log backup. File backups copy into a backup file the data and objects in a data file or filegroup. Full file backup backs up all the data and objects in the specified data files or filegroup. Differential file backup backs up the data and objects in the specified data files or filegroup that have changed since the last full file backup. Partial backup backs up the complete writable portion of the database, excluding any read-only files/filegroups (unless specifically included). Differential partial backup backs up the data and objects that have changed since the last partial backup. In my experience as a DBA, it is rare for a database to be subject to file backups, and some DBAs never work with a database that requires them, so the majority of this book 28 29 Chapter 1: Basics of Backup and Restore (Chapters 3 to 8) will focus on database backups (full and differential) and transaction log backups. However, we do cover file backups in Chapters 9 and 10. Note that the exact types of backup that can be performed, and to some extent the restore options that are available, depend on the recovery model in which the database is operating (SIMPLE, FULL or BULK_LOGGED). We'll be discussing this topic in more detail shortly, in the Recovery Models section, but for the time being perhaps the most notable point to remember is that it is not possible to perform transaction log backups for a database operating in SIMPLE recovery model, and so log backups play no part of a database RESTORE operation for these databases. Now we'll take a look at each of these types of backup in a little more detail. SQL Server database backups The database backup, which is a backup of your primary data file plus any secondary database files, is the cornerstone of any enterprise's backup and recovery plan. Any database that is not using file backups will require a strategy for performing database backups. Consider, for example, the situation in which a SQL Server database crashes, perhaps due to a hardware failure, and the live data file is no longer accessible. If no backups (copies) of this file exist elsewhere, then you will suffer 100% data loss; the "meltdown" scenario that all DBAs must avoid at all costs. Let's examine the two types of database backup, full and differential. Each of them contains the same basic type of information: the system and user data and objects stored in the database. However, viewed independently, the former contains a more complete picture of the data than the latter. 29 30 Full database backups Chapter 1: Basics of Backup and Restore You can think of the full database backup file as a complete and total archive of your database as it existed when you began the backup. Note though that, despite what the term "full" might suggest, a full backup does not fully back up all database files, only the data files; the transaction log must be backed up separately.. Generally speaking, we can consider that restoring a full database backup will return the database to the state it was in at the time the backup process started. However, it is possible that the effects of a transaction that was in progress when the backup started will still be included in the backup. Before SQL Server begins the actual data backup portion of the backup operation, it reads the Log Sequence Number (LSN; see Chapter 5), then reads all the allocated data extents, then reads the LSN again; as long as the transaction commits before the second LSN read, the change will be reflected in the full backup. Full database backups will most likely be your most commonly used backup type, but may not be the only type of backup you need, depending on your data recovery requirements. For example, let's say that you rely exclusively on full backups, performing one every day at midnight, and the server experiences a fatal crash at 11 p.m. one night. In this case, you would only be able to restore the full database backup taken at midnight the previous day, and so you would have lost 23 hours' worth of data. 30 31 Chapter 1: Basics of Backup and Restore If that size of potential loss is unacceptable, then you'll need to either take more frequent full backups (often not logistically viable, especially for large databases) or take transaction log backups and, optionally, some differential database backups, in order to minimize the risk of data loss. A full database backup serves as the base for any subsequent differential database backup. Copy-only full backups There is a special type of full backup, known as a copy-only full backup, which exists independently of the sequence of backup files required to restore a database, and cannot act as the base for differential database backups. This topic is discussed in more detail in Chapter 3, Full Database Backups. Differential database backups. If you're interested to know how SQL Server knows which data has changed, it works like this: for all of the data extents in the database the server will keep a bitmap page that contains a bit for each separate extent (an extent is simply a collection of consecutive pages stored in your database file; eight of them, to be exact). If the bit is set to 1, it means that the data in one or more of the pages in the extent has been modified since the base backup, and those eight pages will be included in the differential backup. If the bit for a given extent is still 0, the system knows that it doesn't need to include that set of data in the differential backup file. 31 32 Chapter 1: Basics of Backup and Restore Some DBAs avoid taking differential backups where possible, due to the perceived administrative complexity they add to the backup and restore strategy; they prefer instead to rely solely on a mix of full and regular transaction log backups. Personally, however, I find them to be an invaluable component of the backup strategy for many of my databases. Furthermore, for VLDBs, with a very large full backup footprint, differential backups may become a necessity. Even so, it is still important, when using differential backups, to update the base backup file at regular intervals. Otherwise, if the database is large and the data changes frequently, our differential backup files will end up growing to a point in size where they don't give us much value. We will discuss differential backups further in Chapter 7, where we will dive much deeper into best practices for their use as part of a backup and recovery strategy. SQL Server transaction log backups As a DBA, you will in many cases need to take regular backups of the transaction log file for a given database, in addition to performing database backups. This is important both for enabling point-in-time restore of your database, and for controlling the size of the log file. A full understanding of log backups, how they are affected by the database recovery model, and how and when space inside the log is reused, requires some knowledge of the architecture of the transaction log. We won't get to this till Chapter 5, so we'll keep things as simple as possible here, and get into the details later. Essentially, as discussed earlier, a transaction log file stores a series of log records that provide a historical record of the modifications issued against that database. As long as the database is operating in the FULL or BULK LOGGED recovery model then these log records will remain in the live log file, and never be overwritten, until a log backup operation is performed. 32 33 Chapter 1: Basics of Backup and Restore Therefore, the full transaction "history" can be captured into a backup file by backing up the transaction log. These log backups can then be used as part of a database RESTORE operation, in order to roll the database forward to a point in time at, or very close to, when some "disaster" occurred. The log chain For example, consider our previous scenario, where we were simply taking a full database backup once every 24 hours, and so were exposed to up to 24 hours of data loss. It is possible to perform differential backups in between the full backups, to reduce the risk of data loss. However both full and differential backups are I/O intensive processes and are likely to affect the performance of the database, so they should not be run during times when users are accessing the database. If a database holds business-critical data and you would prefer your exposure to data loss to be measured in minutes rather than hours, then you can use a scheme whereby you take a full database backup, followed by a series of frequent transaction log backups, followed by another full backup, and so on. As part of a database restore operation, we can then restore the most recent full backup (plus differentials, if taken), followed by the chain of available log file backups, up to one covering the point in time to which we wish to restore the database. In order to restore a database to a point in time, either to the end of a particular log backup or to a point in time within a particular log backup, there must exist a full, unbroken chain of log records, from the first log backup taken after a full (or differential backup), right up to the point to which you wish to recover. This is known as the log chain. If the log chain is broken (for example, by switching a database to SIMPLE recovery model, then it will only be possible to recover the database to some point in time before the event occurred that broke the chain. The chain can be restarted by returning the database to FULL (or BULK LOGGED) recovery model and taking a full backup (or differential backup, if a full backup was previously taken for that database). See Chapter 5, Log Backups, for further details. 33 34 Chapter 1: Basics of Backup and Restore Tail log backups. Of course, any sort of tail log backup will only be possible if the log file is still accessible and undamaged but, assuming this is the case, it should be possible to restore right up to the time of the last transaction committed and written to the log file, before the failure occurred. Finally, there is a special case where a tail log backup may not succeed, and that is if there are any minimally logged transactions, recorded while a database was operating in BULK_LOGGED recovery model, in the live transaction log, and a data file is unavailable 34 35 Chapter 1: Basics of Backup and Restore as a result of the disaster. A tail log backup using NO_TRUNCATE may "succeed" (although with reported errors) in these circumstances but a subsequent attempt to restore that tail log backup will fail. This is discussed in more detail in the Minimally logged operations section of Chapter 6. Log space reuse (a.k.a. log truncation) When using any recovery model other than SIMPLE, it is vital to take regular log backups, not only for recovery purposes, but also to control the growth of the log file. The reason for this relates to how and when space in the log is made available for reuse; a process known as log truncation. We'll go into deeper detail in Chapter 5 but, briefly, any segment of the log that contains only log records that are no longer required is deemed "inactive," and any inactive segment can be truncated, i.e. the log records in that segment can be overwritten by log records detailing new transactions. These "segments" of the log file are known as virtual log files (VLFs). If a VLF contains even just a single log record that relates to an open (uncommitted) transaction, or that is still required by some other database process (such as replication), or contains log records that are more recent than the log record relating to the oldest open or still required transaction, it is deemed "active." Any active VLF can never be truncated. Any inactive VLF can be truncated, although the point at which this truncation can occur depends on the recovery model of the database. In the SIMPLE recovery model, truncation can take place immediately upon occurrence of a CHECKPOINT operation. Pages in the data cache are flushed to disk, having first "hardened" the changes to the log file. The space in any VLFs that becomes inactive as a result, is made available for reuse. Therefore, the space in inactive portions of the log is continually overwritten with new log records, upon CHECKPOINT; in other words, a complete "log record history" is not maintained. 35 36 Chapter 1: Basics of Backup and Restore In the FULL (or BULK LOGGED) recovery model, once a full backup of the database has been taken, the inactive portion of the log is no longer marked as reusable on CHECKPOINT, so records in the inactive VLFs are retained alongside those in the active VLFs. Thus we maintain a complete, unbroken series of log records, which can be captured in log backups, for use in point-in-time restore operations. Each time a BACKUP LOG operation occurs, it marks any VLFs that are no longer necessary as inactive and hence reusable. This explains why it's vital to back up the log of any database running in the FULL (or BULK LOGGED) recovery model; it's the only operation that makes space in the log available for reuse. In the absence of log backups, the log file will simply continue to grow (and grow) in size, unchecked. File backups In addition to the database backups discussed previously, it's also possible to take file backups. Whereas database backups back up all data files for a given database, with file backups we can back up just a single, specific data file, or a specific group of data files (for example, all the data files in a particular filegroup). For a VLDB that has been "broken down" into multiple filegroups, file backups (see Chapter 9) can decrease the time and disk space needed for the backup strategy and also, in certain circumstances, make disaster recovery much quicker. For example, let's assume that a database's architecture consists of three filegroups: a primary filegroup holding only system data, a secondary filegroup holding recent business data and a third filegroup holding archive data, which has been specifically designated as a READONLY filegroup. If we were to perform database backups, then each full backup file would contain a lot of data that we know will never be updated, which would simply be wasting disk space. Instead, we can take frequent, scheduled file backups of just the system and business data. 36 37 Chapter 1: Basics of Backup and Restore Furthermore, if a database suffers corruption that is limited to a single filegroup, we may be able to restore just the filegroup that was damaged, rather than the entire database. For instance, let's say we placed our read-only filegroup on a separate drive and that drive died. Not only would we save time by only having to restore the read-only filegroup, but also the database could remain online and just that read-only data would be unavailable until after the restore. This latter advantage only holds true for user-defined filegroups; if the primary filegroup goes down, the whole ship goes down as well. Likewise, if the disk holding the file storing recent business data goes down then, again, we may be able to restore just that filegroup; in this case, we would also have to restore any transaction log files taken after the file backup to ensure that the database as a whole could be restored to a consistent state. Finally, if a catastrophe occurs that takes the database completely offline, and we're using SQL Server Enterprise Edition, then we may be able to perform an online restore, restoring the primary data file and bringing the database back online before we've restored the other data files. We'll cover all this in a lot more detail, with examples, in Chapter 9. The downside of file backups is the significant complexity and administrative burden that they can add to the backup strategy. Firstly, it means that a "full backup" will consist of capturing several backup files, rather than just a single one. Secondly, in addition, we will have to take transaction log backups to cover the time between file backups of different file groups. We'll discuss this in fuller detail in Chapter 9 but, briefly, the reason for this is that while the data is stored in separate physical files it will still be relationally connected; changes made to data stored in one file will affect related data in other files, and since the individual file backups are taken at different times, SQL Server needs any subsequent log backup files to ensure that it can restore a self-consistent version of the database. Keeping track of all of the different backup jobs and files can become a daunting task. This is the primary reason why, despite the potential benefits, most people prefer to deal with the longer backup times and larger file sizes that accompany full database backups. 37 38 Chapter 1: Basics of Backup and Restore Full and differential file backups As noted earlier, a full file backup differs from the full database backup in that it doesn't back up the entire database, but just the contents of one or more files or filegroups. Likewise, differential file backups capture all of the data changes made to the relevant files or filegroups, since the last full file backup was taken. In VLDBs, even single files or filegroups can grow large, necessitating the use of differential file backups. The same caveat exists as for differential database backups: the longer the interval between refreshing the base file backup, the larger the differential backups will get. Refresh the base full file backup at least once per week, if taking differential file backups. Partial and differential partial backups Partial backups, and differential partial backups, are a relative innovation, introduced in SQL Server They were specifically designed for use with databases that are comprised of at least one read-only filegroup and their primary use case was for databases operating within the SIMPLE recovery model (although they are valid for any of the available recovery models). By default, a partial backup will create a full backup of the primary filegroup plus any additional read/write filegroups. It will not back up any read-only filegroups, unless explicitly included in the partial backup. A typical use case for partial backups would be for a very large database (VLDB) that contains a significant portion of read-only data. In most cases, these read-only file groups contain files of archived data, which are still needed by the front end application for reference purposes. However, if this data is never going to be modified again, we don't want to waste time, CPU, and disk space backing it up every time we run a full database backup. 38 39 Chapter 1: Basics of Backup and Restore So, a partial full backup is akin to a full database backup, but omits all READONLY filegroups. Likewise, a partial differential backup is akin to a differential database backup, in that it only backs up data that has been modified since the base partial backup and, again, does not explicitly back up the READONLY filegroups within the database. Differential partial backups use the last partial backup as the base for any restore operations, so be sure to keep the base partial on hand. It is recommended to take frequent base partial backups to keep the differential partial backup file size small and manageable. Again, a good rule of thumb is to take a new base partial backup at least once per week, although possibly more frequently than that if the read/write filegroups are frequently modified. Finally, note that we can only perform partial backups via T-SQL. Neither SQL Server Management Studio nor the Maintenance Plan Wizard supports either type of partial backup. Recovery Models A recovery model is a database configuration option, chosen when creating a new database, which determines whether or not you need to (or even can) back up the transaction log, how transaction activity is logged, and whether or not you can perform more granular restore types that are available, such as file and page restores. All SQL Server database backup and restore operations occur within the context of one of three available recovery models for that database. SIMPLE recovery model certain operations can be minimally logged. Log backups are not supported. Point-in-time restore and page restore are not supported. File restore support is limited to secondary data files that are designated as READONLY. 39 40 Chapter 1: Basics of Backup and Restore. Each model has its own set of requirements and caveats, so we need to choose the appropriate one for our needs, as it will dramatically affect the log file growth and level of recoverability. In general operation, a database will be using either the SIMPLE or FULL recovery model. Can we restore just a single table? Since we mentioned the granularity of page and file restores, the next logical question is whether we can restore individual tables. This is not possible with native SQL Server tools; you would have to restore an entire database in order to extract the required table or other object. However, certain third-party tools, including Red Gate's SQL Compare, do support object-level restores of many different object types, from native backups or from Red Gate SQL Backup files. By default, any new database will inherit the recovery model of the model system database. In the majority of SQL Server editions, the model database will operate with the FULL recovery model, and so all new databases will also adopt use of this recovery model. This may be appropriate for the database in question, for example if it must support point-in-time restore. However, if this sort of support is not required, then it may be more appropriate to switch the database to SIMPLE recovery model after creation. This will remove the need to perform log maintenance in order to control the size of the log. Let's take a look at each of the three recovery models in a little more detail. 40 41 Chapter 1: Basics of Backup and Restore Simple Of the three recovery models for a SQL Server database, SIMPLE recovery model databases are the easiest to manage. In the SIMPLE recovery model, we can take full database backups, differential backups and file backups. The one backup we cannot take, however, is the transaction log backup. As discussed earlier, in the Log space reuse section, whenever a CHECKPOINT operation occurs, the space in any inactive portions of the log file belonging to any database operating SIMPLE recovery model, becomes available for reuse. This space can be overwritten by new log records. The log file does not and cannot maintain a complete, unbroken series of log records since the last full (or differential) backup, which would be a requirement for any log backup to be used in a point-in-time restore operation, so a log backup would be essentially worthless and is a disallowed operation. Truncation and the size of the transaction log There is a misconception that truncating the log file means that log records are deleted and the file reduces in size. It does not; truncation of a log file is merely the act of making space available for reuse. This process of making log space available for reuse is known as truncation, and databases using the SIMPLE recovery model are referred to as being in auto-truncate mode. In many respects, use of the SIMPLE recovery model greatly simplifies log file management. The log file is truncated automatically, so we don't have to worry about log file growth unless caused, for example, by some large and/or long-running batch operation. If a huge number of operations are run as a single batch, then the log file can grow in size rapidly, even for databases running in SIMPLE recovery (it's better to run a series of smaller batches). 41 42 Chapter 1: Basics of Backup and Restore We also avoid the administrative burden of scheduling and testing the log backups, and the storage overhead required for all of the log backup files, as well as the CPU and disk I/O burden placed on the server while performing the log backups. The most obvious and significant limitation of working in SIMPLE model, however, is that we lose the ability to perform point-in-time restores. As discussed earlier, if the exposure to potential data loss in a given database needs to be measured in minutes rather than hours, then transaction log backups are essential and the SIMPLE model should be avoided for that database. However, not every database in your environment needs this level of recoverability, and in such cases the SIMPLE model can be a perfectly sensible choice. For example, a Quality Assurance (QA) server is generally subject to a very strict change policy and if any changes are lost for some reason, they can easily be recovered by redeploying the relevant data and objects from a development server to the QA machine. As such, most QA servers can afford to operate in SIMPLE model. Likewise, if a database that gets queried for information millions of time per day, but only receives new information, in a batch, once per night, then it probably makes sense to simply run in SIMPLE model and take a full backup immediately after each batch update. Ultimately, the choice of recovery model is a business decision, based on tolerable levels of data loss, and one that needs to be made on a database-by-database basis. If the business requires full point-in-time recovery for a given database, SIMPLE model is not appropriate. However, neither is it appropriate to use FULL model for every database, and so take transaction log backups, "just in case," as it represents a considerable resource and administrative burden. If, for example, a database is read-heavy and a potential 12-hours' loss of data is considered bearable, then it may make sense to run in SIMPLE model and use midday differential backups to supplement nightly full backups. 42 43 Chapter 1: Basics of Backup and Restore Full In FULL recovery model, all operations are fully logged in the transaction log file. This means all INSERT, UPDATE and DELETE operations, as well as the full details for all rows inserted during a bulk data load or index creation operations. Furthermore, unlike in SIMPLE model, the transaction log file is not auto-truncated during CHECKPOINT operations and so an unbroken series of log records can be captured in log backup files. As such, FULL recovery model supports restoring a database to any point in time within an available log backup and, assuming a tail log backup can be made, right up to the time of the last committed transaction before the failure occurred. If someone accidentally deletes some data at 2:30 p.m., and we have a full backup, plus valid log backups spanning the entire time from the full backup completion until 3:00 p.m., then we can restore the database to the point in time directly before that data was removed. We will be looking at performing point-in-time restores in Chapter 6, Log Restores, where we will focus on transaction log restoration. The other important point to reiterate here is that inactive VLFs are not truncated during a CHECKPOINT. The only action that can cause the log file to be truncated is to perform a backup of that log file; it is only once a log backup is completed that the inactive log records captured in that backup become eligible for truncation. This means that log backups play a vital dual role: firstly in allowing point-in-time recovery, and secondly in controlling the size of the transaction log file. In the FULL model, the log file will hold a full and complete record of the transactions performed against the database since the last time the transaction log was backed up. The more transactions your database is logging, the faster it will fill up. If your log file is not set to auto-grow (see Chapter 3 for further details), then this database will cease to function correctly at the point when no further space is available in the log. If auto-grow is enabled, the log file will grow and grow until either you take a transaction log backup or the disk runs out of space; I would recommend the first of these two options. 43 44 Chapter 1: Basics of Backup and Restore In short, when operating in FULL recovery model, you must be taking transaction log backups to manage the growth of data in the transaction log; a full database backup does not cause the log file to be truncated. Once you take a transaction log backup, space in inactive VLFs will be made available for new transactions (except in rare cases where you specify a copy-only log backup, or use the NO_TRUNCATE option, which will not truncate the log). Bulk Logged The third, and least-frequently used, recovery model is BULK_LOGGED. It operates in a very similar manner to FULL model, except in the extent to which bulk operations are logged, and the implications this can have for point-in-time restores. All standard operations (INSERT, UPDATE, DELETE, and so on) are fully logged, just as they would be in the FULL recovery model, but many bulk operations, such as the following, will be minimally logged: bulk imports using the BCP utility BULK INSERT INSERT SELECT * FROM OPENROWSET(bulk ) SELECT INTO inserting or appending data using WRITETEXT or UPDATETEXT index rebuilds (ALTER INDEX REBUILD). In FULL recovery model, every change is fully logged. For example, if we were to use the BULK INSERT command to load several million records into a database operating in FULL recovery model, each of the INSERTs would be individually and fully logged. This puts a tremendous overhead onto the log file, using CPU and disk I/O to write each of the transaction records into the log file, and would also cause the log file to grow at a tremendous rate, slowing down the bulk load operation and possibly causing disk usage issues that could halt your entire operation. 44 45 Chapter 1: Basics of Backup and Restore In BULK_LOGGED model, SQL Server uses a bitmap image to capture only the extents that have been modified by the minimally logged operations. This keeps the space required to record these operations in the log to a minimum, while still (unlike in SIMPLE model) allowing backup of the log file, and use of those logs to restore the database in case of failure. Note, however, that the size of the log backup files will not be reduced, since SQL Server must copy into the log backup file all the actual extents (i.e. the data) that were modified by the bulk operation, as well as the transaction log records. Tail log backups and minimally logged operations If the data files are unavailable as a result of a database failure, and the tail of the log contains minimally logged operations recorded while the database was operating in BULK_LOGGED recovery model, then it will not be possible to do a tail log backup, as this would require access to the changed data extents in the data file. The main drawback of switching to BULK_LOGGED model to perform bulk operations, and so ease the burden on the transaction log, is that it can affect your ability to perform point-in-time restores. The series of log records is always maintained but, if a log file contains details of minimally logged operations, it is not possible to restore to a specific point in time represented within that log file. It is only possible to restore the database to the point in time represented by the final transaction in that log file, or to a specific point in time in a previous, or subsequent, log file that does not contain any minimally logged transactions. We'll discuss this in a little more detail in Chapter 6, Log Restores. There is a time and place for use of the BULK_LOGGED recovery model. It is not recommended that this model be used for the day-to-day operation of any of your databases. What is recommended is that you switch from FULL recovery model to BULK_LOGGED recovery model only when you are using bulk operations. After you have completed these operations, you can switch back to FULL recovery. You should make the switch in a way that minimizes your exposure to data loss; this means taking an extra log backup immediately before you switch to BULK_LOGGED, and then another one immediately after you switch the database back to FULL recovery. 45 46 Chapter 1: Basics of Backup and Restore Restoring Databases Of course, the ultimate goal of our entire SQL Server backup strategy is to prepare ourselves for the, hopefully rare, cases where we need to respond quickly to an emergency situation, for example restoring a database over one that has been damaged, or creating a second copy of a database (see aspx) in order to retrieve some data that was accidentally lost from that database. In non-emergency scenarios, we may simply want to restore a copy of a database to a development or test server. For a user database operating in FULL recovery model, we have the widest range of restore options available to us. As noted throughout the chapter, we can take transaction log backups and use them, in conjunction with full and differential backups, to restore a database to a specific point within a log file. In fact, the RESTORE LOG command supports several different ways to do this. We can: recover to a specific point in time we can stop the recovery at a specific point in time within a log backup file, recovering the database to the point it was in when the last transaction committed, before the specified STOPAT time recover to a marked transaction if a log backup file contains a marked transaction (defined using BEGIN TRAN TransactionName WITH MARK 'Description ') then we can recover the database to the point that this transaction starts (STOPBE- FOREMARK) or completes (STOPATMARK) recover to a Log Sequence Number stop the recovery at a specific log record, identified by its LSN (see Chapter 6, Log Restores). We'll cover several examples of the first option (which is by far the most common) in this book. In addition, we can perform more "granular" restores. For example, in certain cases, we can recover a database by restoring only a single data file (plus transaction logs), rather than the whole database. We'll cover these options in Chapters 9 and 47 Chapter 1: Basics of Backup and Restore For databases in BULK_LOGGED model, we have similar restore options, except that none of the point-in-time restore options listed previously can be applied to a log file that contains minimally logged transactions. For SIMPLE recovery model databases, our restore options are more limited. In the main, we'll be performing straightforward restores of the full and differential database backup files. In many cases, certainly for a development database, for example, and possibly for other "non-frontline" systems, this will be perfectly adequate, and will greatly simplify, and reduce the time required for, the backup and restore strategies for these databases. Finally, there are a couple of important "special restore" scenarios that we may run into from time to time. Firstly, we may need to restore one of the system databases. Secondly, if only a single data page is damaged, it may be possible to perform a page restore, rather than restoring the whole database. Restoring system databases The majority of the discussion of backing up and restoring databases takes place in the context of protecting an organization's business data. However, on any SQL Server instance there will also be a set of system databases, which SQL Server maintains and that are critical to its operation, and which also need to be backed up on a regular schedule. The full list of these databases can be found in Books Online (. com/en-us/library/ms aspx), but there are three in particular that must be included in your backup and recovery strategy. The master database holds information on all of your other databases, logins and much more. If you lose your master database you are in a bad spot unless you are taking backups and so can restore it. A full backup of this database, which operates in SIMPLE recovery model, should be part of your normal backup procedures for each SQL Server instance, along with all the user databases on that instance. You should also back up 47 48 Chapter 1: Basics of Backup and Restore master after a significant RDBMS update (such as a major Service Pack). If you find yourself in a situation where your master database has to be rebuilt, as in the case where you do not have a backup, you would also be rebuilding the msdb and model databases, unless you had good backups of msdb and model, in which case you could simply restore them. The msdb database contains SQL Agent jobs, schedules and operators as well as historical data regarding the backup and restore operations for databases on that instance. A full backup of this database should be taken whenever the database is updated. That way, if a SQL Server Agent Job is deleted by accident, and no other changes have been made, you can simply restore the msdb database and regain that job information. Finally, the model database is a "template" database for an instance; all user databases created on that instance will inherit configuration settings, such as recovery model, initial file sizes, file growth settings, collation settings and so on, from those stipulated for the model database. By default, this database operates in the FULL recovery model. It should rarely be modified, but will need a full backup whenever it is updated. Personally, I like to back it up on a similar rotation to the other system databases, so that it doesn't get overlooked. We'll walk through examples of how to restore the master and the msdb databases in Chapter 4, Restoring From Full Backup. Restoring single pages from backup There is another restore type that can be performed on SQL Server databases that can save you a huge amount of time and effort. When you see corruption in a database, it doesn't have to be corruption of the entire database file. You might find that only certain segments of data are missing or unusable. In this situation, you can restore single or multiple pages from a database backup. With this method, you only have to take your database down for a short period of time to restore the missing data, which is extremely helpful when dealing with VLDBs. We won't cover an example in this book, but further details can be found at 48 49 Chapter 1: Basics of Backup and Restore Summary In this chapter, we've covered a lot of necessary ground, discussing the files that comprise a SQL Server database, the critical role played by each, why it's essential that they are backed up, the types of backup that can be performed on each, and how this is impacted by the recovery model chosen for the database. We're now ready to start planning, verifying and documenting our whole backup strategy, answering questions such as: Where will the backups be stored? What tools will be used to take the backups? How do I plan and implement an appropriate backup strategy for each database? How do I verify that the backups are "good?" What documentation do I need? To find out the answers to these questions, and more, move on to Chapter 2. 49 50 Chapter 2: Planning, Storage and Documentation Having covered all of the basic backup types, what they mean, and why they are necessary, we're now ready to start planning our overall backup and restore strategy for each of the databases that are in our care. We'll start our discussion at the hardware level, with consideration of the appropriate storage for backup files, as well as the live data and log files, and then move on to discuss the tools available to capture and schedule the backup operations. Then, in the heart of the chapter, we'll describe the process of planning a backup and restore strategy, and developing a Service Level Agreement that sets out this strategy. The SLA is a vital document for setting appropriate expectations with regard to possible data loss and database down-time, as well as the time and cost of implementing the backup strategy, for both the DBA, and the application owners. Finally, we'll consider how best to gather vital details regarding the file architecture and backup details for each database into one place and document. Backup Storage Hopefully, the previous chapter impressed on you the need to take database backups for all user and system databases, and transaction log backups for any user databases that are not operating in SIMPLE recovery mode. One of our basic goals, as a DBA, is to create an environment where these backups are stored safely and securely, and where the required backup operations are going to run as smoothly and quickly as possible. 50 51 Chapter 2: Planning, Storage and Documentation The single biggest factor in ensuring that this can be achieved (alongside such issues as careful backup scheduling, which we'll discuss later in the chapter) is your backup storage architecture. In the examples in this book, we back up our databases to the same disk drive that stores the live data and log files; of course, this is purely a convenience, designed to make the examples easy to practice on your laptop. In reality, we'd never back up to the same local disk; after all if you simply store them on the same drive as the live files, and that drive becomes corrupt, then not only have you lost the live files, but the backup files too! There are three basic options, which we'll discuss in turn: local disk storage, network storage, and tape storage. Each of these media types has its pros and cons so, ultimately, it is a matter of preference which you use and how. In many cases, a mixture of all three may be the best solution for your environment. For example, you might adopt the scheme below. 1. Back up the data and log files to local disk storage either Direct Attached Storage (DAS) or a Storage Area Network (SAN). In either case, the disks should be in a RAID configuration. This will be quicker for the backup to complete, but you want to make sure your backups are being moved immediately to a separate location, so that a server crash won't affect your ability to restore those files. 2. Copy the backup files to a redundant network storage location again, this space should be driven by some sort of robust storage solution such as SAN, or a RAID of local physical drives. This will take a bit longer than the local drive because of network overhead, but you are certain that the backups are in a separate/secure location in case of emergency. 3. Copy the files from the final location to a tape backup library for storage in an offsite facility. I recommend keeping the files on disk for at least three days for daily backups and a full seven days for weekly (or until the next weekly backup has been taken). If you need files older than that, you can retrieve them from the tape library. 51 52 Chapter 2: Planning, Storage and Documentation The reason I offer the option to write the backups to local storage initially, instead of straight to network storage, is that it avoids the bottleneck of pushing data through the network. Generally speaking, it's possible to get faster write speeds, and so faster backups, to a local drive than to a drive mapped from another network device, or through a drive space shared out through a distributed file system (DFS). However, with storage networks becoming ever faster, it is becoming increasingly viable to skip Step 1, and back up the data and log files directly to network storage. Whether you write first to locally attached storage, or straight to a network share, you'll want that disk storage to be as fast and efficient as possible, and this means that we want to write, not to a single disk, but to a RAID unit, provided either as DAS, or by a SAN. We also want, wherever possible, to use dedicated backup storage. For example, if a particular drive on a file server, attached from the SAN, is designated as the destination for our SQL Server backup files, we don't want any other process storing their data in that location, competing with our backups for space and disk I/O. Local disk (DAS or SAN) Next on our list of backup media is the local disk drive. The main benefit of backing up to disk, rather than tape is simply that the former will be much faster (depending on the type and speed of the drive). Of course, any consideration of local disk storage for backup files is just as relevant to the storage of the online data and log files, since it's likely that the initial backup storage will just be a separate area in the same overall SQL Server storage architecture. Generally speaking, SQL Server storage tends to consist of multiple disk drives, each set of disks forming, with a controller, a Redundant Array of Inexpensive Disks (RAID) device, configured appropriately according the files that are being stored. 52 53 Chapter 2: Planning, Storage and Documentation These RAID-configured disks are made available to SQL Server either as part of Directly Attached Storage, where the disks (which could be SATA, SCSI or SAS) are built into the server or housed in external expansion bays that are attached to the server using a RAID controller, or as Storage Area Network in layman's terms, a SAN is a big box of hard drives, available via a dedicated, high performance network, with a controller that instructs individual "volumes of data" known as Logical Unit Numbers (LUNs) to interact with certain computers. These LUNs appear as local drives to the operating system and SQL Server. Generally, the files for many databases will be stored on a single SAN. RAID configuration The RAID technology allows a collection of disks to perform as one. For our data and log files RAID, depending on the exact RAID configuration, can offer some or all of the advantages below. Redundancy if one of the drives happens to go bad, we know that, depending on the RAID configuration, either the data on that drive will have been mirrored to a second drive, or it will be able to be reconstructed, and so will still be accessible, while the damaged drive is replaced. Improved read and write I/O performance reading from and writing to multiple disk spindles in a RAID array can dramatically increase I/O performance, compared to reading and writing from a single (larger) disk. Higher storage capacity by combining multiple smaller disks in a RAID array, we overcome the single-disk capacity limitations (while also improving I/O performance). For our data files we would, broadly speaking, want a configuration optimized for maximum read performance and, for our log file, maximum write performance. For backup files, the simplest backup storage, if you're using DAS, may just be a separate, single physical drive. 53 54 Chapter 2: Planning, Storage and Documentation However, of course, if that drive were to become corrupted, we would lose the backup files on that drive, and there isn't much to be done, beyond sending the drive to a recovery company, which can be both time consuming and expensive, with no guarantee of success. Therefore, for backup files it's just as important to take advantage of the redundancy advantages offered by RAID storage. Let's take just a brief look at the more popular of the available RAID configurations, as each one provides different levels of protection and performance. RAID 0 (striping) This level of RAID is the simplest and provides only performance benefits. A RAID 0 configuration uses multiple disk drives and stripes the data across these disks. Striping is simply a method of distributing data across multiple disks, whereby each block of data is written to the next disk in the stripe set. This also means that I/O requests to read and write data will be distributed across multiple disks, so improving performance. There is, however, a major drawback in a RAID 0 configuration. There is no fault tolerance in a RAID 0 setup. If one of the disks in the array is lost, for some reason, the entire array will break and the data will become unusable. RAID 1 (mirroring) In a RAID 1 configuration we use multiple disks and write the same data to each disk in the array. This is called mirroring. This configuration offers read performance benefits (since the data could be read from multiple disks) but no write benefits, since the write speed is still limited to the speed of a single disk. However, since each disk in the array has a mirror image (containing the exact same data) RAID 1 does provide redundancy and fault tolerance. One drive in the mirror set can be lost without losing data, or that data becoming inaccessible. As long as one of the disks in the mirror stays online, a RAID 1 system will remain in working order but will take a hit in read performance while one of the disks is offline. 54 55 Chapter 2: Planning, Storage and Documentation RAID 5 (striping with parity) RAID 5 disk configurations use block striping to write data across multiple disks, and so offer increased read and write performance, but also store parity data on every disk in the array, which can be used to rebuild data from any failed drive. Let's say, for example, we had a simple RAID 5 setup of three disks and were writing data to it. The first data block would be written to Disk 1, the second to Disk 2, and the parity data on Disk 3. The next data blocks would be written to Disks 1 and 3, with parity data stored on Disk 2. The next data blocks would be written to Disks 2 and 3, with the parity being stored on Disk 1. The cycle would then start over again. This allows us to lose any one of those disks and still be able to recover the data, since the parity data can be used in conjunction with the still active disk to calculate what was stored on the failed drive. In most cases, we would also have a hot spare disk that would immediately take the place of any failed disk, calculate lost data from the other drives using the parity data and recalculate the parity data that was lost with the failure. This parity does come at a small cost, in that the parity has to be calculated for each write to disk. This can give a small hit on the write performance, when compared to a similar RAID 10 array, but offers excellent read performance since data can be read from all drives simultaneously. RAID 10 (striped pairs of mirrors) RAID 10 is a hybrid RAID solution. Simple RAID does not have designations that go above 9, so RAID 10 is actually RAID 1+0. In this configuration, each disk in the array has at least one mirror, for redundancy, and the data is striped across all disks in the array. RAID 10 does not require parity data to be stored; recoverability is achieved from the mirroring of data, not from the calculations made from the striped data. 55 56 Chapter 2: Planning, Storage and Documentation RAID 10 gives us the performance benefits of data striping, allowing us to read and write data faster than single drive applications. RAID 10 also gives us the added security that losing a single drive will not bring our entire disk array down. In fact, with RAID 10, as long as at least one of the mirrored drives from any set is still online, it's possible that more than one disk can be lost while the array remains online with all data accessible. However, loss of both drives from any one mirrored set will result in a hard failure. With RAID 10 we get excellent write performance, since we have redundancy with the need to deal with parity data. However, read performance will generally be lower that a RAID 5 configuration with the same number of disks, since data can be read simultaneously from only half the disks in the array. Choosing the right storage configuration All SQL Server file storage, including storage for database backup files, should be RAIDconfigured, both for redundancy and performance. For the backup files, what we're mainly after is the "safety net" of storage redundancy, and so the simplest RAID configuration for backup file storage would be RAID 1. However, in my experience, it's quite common that backup files simply get stored on the slower disks of a SAN, in whatever configuration is offered (RAID 5, in my case). Various configurations of RAID-level drive failure protection are available from either DAS or SAN storage. So, in cases where a choice exists, which one should we choose? SAN vs. DAS With the increasing capacity and decreasing cost of hard drives, along with the advent of Solid State Drives, it's now possible to build simple but pretty powerful DAS systems. Nevertheless, there is an obvious physical limit to how far we can expand a server by attaching more hard drives. For VLDBs, this can be a problem and SAN-attached storage is still very popular in today's IT infrastructure landscape, even in smaller businesses. 56 57 Chapter 2: Planning, Storage and Documentation For the added cost and complexity of SAN storage, you have access to storage space far in excess of what a traditional DAS system could offer. This space is easily expandable (up to the SAN limit) simply by adding more disk array enclosures (DAEs), and doesn't take up any room in the physical server. Multiple database servers can share a single SAN, and most SANs offer many additional features (multiple RAID configurations, dynamic snapshots, and so on). SAN storage is typically provided over a fiber optic network that is separated from your other network traffic in order to minimize any network performance or latency issues; you don't have to worry about any other type of network activity interfering with your disks. RAID 5 vs. RAID 10 There is some debate over which of these two High Availability RAID configurations is best for use when storing a relational database. The main point of contention concerns the write penalty of recalculating the parity data after a write in a RAID 5 disk array. This was a much larger deal for RAID disks a few years ago than it is in most of today's implementations. The parity recalculation is no longer an inline operation and is done by the controller. This means that, instead of the parity recalculation happening before you can continue with I/O operations, the controller takes care of this work separately and no longer holds up the I/O queue. You do still see some overhead when performing certain types of write, but for the most part this drawback has been obfuscated by improved hardware design. Nevertheless, my general advice, where a choice has to be made, is to go for a RAID 10 configuration for a database that is expected to be subject to "heavy writes" and RAID 5 for read-heavy databases. However, in a lot of cases, the performance gain you will see from choosing one of these RAID configurations over the other will be relatively small. 57 58 Chapter 2: Planning, Storage and Documentation My experience suggests that advances in controller architecture along with increases in disk speed and cache storage have "leveled the playing field." In other words, don't worry overly if your read-heavy database is on RAID 10, or a reasonably write-heavy database is on RAID 5; chances are it will still perform reliably, and well. Network device The last option for storing our backup files is the network device. Having each server backing up to a separate folder on a network drive is a great way to organize all of the backups in one convenient location, which also happens to be easily accessible when dumping those files to tape media for offsite storage. We don't really care what form this network storage takes, as long as it is as stable and fault tolerant as possible, which basically means RAID storage. We can achieve this via specialized Network Attached Storage (NAS), or simply a file server, backed by physical disks or SAN-attached space. However, as discussed earlier, backing up directly to a network storage device, across a highly utilized network, can lead to latency and network outage problems. That's why I generally still recommend to backup to direct storage (DAS or SAN) and then copy the completed backup files to the network storage device. A good solution is to use a scheduled job, schedulable utility or, in some cases, a third-party backup tool to back up the databases to a local drive and then copy the results to a network share. This way, we only have to worry about latency issues when copying the backup file, but at least at this stage we don't put any additional load on the SQL Server service; if a file copy fails, we just restart it. Plus, with utilities such as robocopy, we have the additional safety net of the knowing the copy will automatically restart if any outage occurs. 58 59 Chapter 2: Planning, Storage and Documentation Tape Firstly, I will state that tape backups should be part of any SQL Server backup and recovery plan and, secondly, that I have never in my career backed up a database or log file directly to tape. The scheme to go for is to back up to disk, and then archive to tape. There are several reasons to avoid backing up directly to tape, the primary one being that writing to tape is slow, with the added complication that the tape is likely to be attached via some sort of relatively high-latency network device. This is a big issue when dealing with backup processes, especially for large databases. If we have a network issue after our backup is 90% completed, we have wasted a lot of time and resources that are going to have to be used again. Writing to a modern, single, physical disk, or to a RAID device, will be much faster than writing to tape. Some years ago, tape storage might still have had a cost advantage over disk but, since disk space has become relatively cheap, the cost of LTO-4 tape media storage is about the same as comparable disk storage. Finally, when backing up directly to tape, we'll need to support a very large tape library in order to handle the daily, weekly and monthly backups. Someone is going to have to swap out, label and manage those tapes and that is a duty that most DBAs either do not have time for, or are just not experienced enough to do. Losing, damaging, or overwriting a tape by mistake could cost you your job. Hopefully, I've convinced you not to take SQL Server backups directly to tape, and instead to use some sort of physical disk for initial storage. However, and despite their other shortcomings, tape backups certainly do have a role to play in most SQL Server recovery plans. The major benefit to tape media is portability. Tapes are small, take up relatively little space and so are ideal for offsite storage. Tape backup is the last and best line of defense for you and your data. There will come a time when a restore operation relies on a backup file that is many months old, and you will be glad you have a copy stored on tape somewhere. 59 60 Chapter 2: Planning, Storage and Documentation With tape backups stored offsite, we also have the security of knowing that we can recover that data even in the event of a total loss of your onsite server infrastructure. In locations where the threat of natural disasters is very real and very dangerous, offsite storage is essential (I have direct experience of this, living in Florida). Without it, one hurricane, flood, or tornado can wipe out all the hard work everyone put into backing up your database files. Storing backup file archive on tape, in a secure and structurally reinforced location, can mean the difference between being back online in a matter of hours and not getting back online at all. Most DBA teams let their server administration teams handle the task of copying backups to tape; I know mine does. The server admins will probably already be copying other important system backups to tape, so there is no reason they cannot also point a backup-to-tape process at the disk location of your database and log backup files. Finally, there is the prosaic matter of who handles all the arrangements for the physical offsite storage of the tapes. Some smaller companies can handle this in-house, but I recommend that you let a third-party company that specializes in data archiving handle the long-term secure storage for you. Backup Tools Having discussed the storage options for our data, log, and backup files, the next step is to configure and schedule the SQL Server backup jobs. There are several tools available to do this, and we'll consider the following: maintenance plans the simplest, but also the most limited solution, offering ease of use but limited options, and lacking flexibility custom backup scripts offers full control over how your backup jobs execute, but requires considerable time to build, implement, and maintain 60 61 Chapter 2: Planning, Storage and Documentation third-party backup tools many third-party vendors offer powerful, highly configurable backup tools that offer backup compression and encryption, as well as well-designed interfaces for ease of scheduling and monitoring. All environments are different and the choice you make must be dictated by your specific needs. The goal is to get your databases backed up, so whichever one you decide on and use consistently is going to be the right choice. Maintenance plan backups The Maintenance Plan Wizard and Designer is a built-in SQL Server tool that allows DBAs to automate the task of capturing full and differential database backups, and log backups. It can also be used to define and schedule other essential database maintenance tasks, such as index reorganization, consistency checks, statistics updates, and so on. I list this tool first, not because it is the best way to automate these tasks, but because it is a simple-to-use, built-in option, and because scheduling backup and other maintenance tasks this way sure is better than not scheduling them at all! In fact, however, from a SQL Server backup perspective, maintenance plans are the weakest choice of the three we'll discuss, for the following reasons: backup options are limited file or partial backups are not supported configurability is limited we are offered a strict set of options in configuring the backup task and we cannot make any other modifications to the process, although it's possible (via the designer) to include some pre- and post-backup logic some important tasks are not supported such as proper backup verification (see later in the chapter). 61 62 Chapter 2: Planning, Storage and Documentation Under the covers, maintenance plans are simply SSIS packages that define a number of maintenance tasks, and are scheduled for execution via SQL Server Agent jobs. The Maintenance Plan Wizard and Designer makes it easy to build these packages, while removing a lot of the power and flexibility available when writing such packages directly in SSIS. For maintenance tasks of any reasonable complexity, it is better to use Business Intelligence Design Studio to design the maintenance packages that suit your specific environment and schedule them through SQL Server Agent. It may not be a traditional maintenance plan in the same sense as one that the wizard would have built, but it is a maintenance package none the less. Custom backup scripts Another option is to write a custom maintenance script, and run it in a scheduled job via SQL Server Agent or some other scheduling tool. Traditionally, DBAs have chosen T-SQL scripts or stored procedures for this task, but PowerShell scripting is gaining in popularity due to its versatility (any.net library can be used inside of a PowerShell script). Custom scripting is popular because it offers the ultimate flexibility. Scripts can evolve to add more and more features and functionality. In this book, we'll create custom scripts that, as well as backing up our databases, will verify database status and alert users on failure and success. However, this really only scratches the surface of the tasks we can perform in our customized backup scripts. The downside of all this is that "ultimate flexibility" tends to go hand in hand with increasing complexity and diminishing consistency, and this problem gets exponentially worse as the number of servers to be maintained grows. As the complexity of a script increases, so it becomes more likely that you'll encounter bugs, especially the kind that might not manifest themselves immediately as hard errors. 62 63 Chapter 2: Planning, Storage and Documentation If a script runs on three servers, this is no big deal; just update the code on each server and carry on. What if it must run on 40 servers? Now, every minor improvement to the backup script, or bug fix, will entail a major effort to ensure that this is reflected consistently on all servers. In such cases, we need a way to centralize and distribute the code so that we have consistency throughout the enterprise, and a quick and repeatable way to make updates to each machine, as needed. Many DBA teams maintain on each server a "DBA database" that holds the stored procedures for all sorts of maintenance tasks, such as backups, index maintenance and more. For example, my team maintains a "master" maintenance script, which will create this database on a server, or update the objects within the database if the version on the server is older than what exists in our code repository. Whenever the script is modified, we have a custom.net tool that will run the script on every machine, and automatically upgrade all of the maintenance code. Third-party tools The final option available to the DBA is to create and schedule the backup jobs using a third-party tool. Several major vendors supply backup tools but the one used in my team, and in this book, is Red Gate SQL Backup ( sql-backup/). Details of how to install this tool can be found in Appendix A, and backup examples can be found in Chapter 8. With SQL Backup, we can create a backup job just as easily as we could with the maintenance plan wizard, and with a lot more flexibility. We can create SQL Server Agent jobs that take full, differential, transaction log or file and filegroup backups from a GUI wizard. We can set up a custom schedule for the backups. We can configure numerous options for the backup files, including location, retention, dynamic naming convention, compression, encryption, network resilience, and more. 63 64 Chapter 2: Planning, Storage and Documentation Be aware, though, that a tool that offers flexibility and ease of use can lead down the road of complex backup jobs. Modifying individual steps within such jobs requires T-SQL proficiency or, alternatively, you'll need to simply drop the job and build it again from scratch (of course, a similar argument applies to custom scripts and maintenance plan jobs). Backup and Restore Planning The most important point to remember is that, as DBAs, we do not devise plans to back up databases successfully; we devise plans to restore databases successfully. In other words, if a database owner expects that, in the event of an outage, his database and data will be back online within two hours, with a maximum of one hour's data loss, then we must devise a plan to support these requirements, assuming we believe them to be reasonable. Of course, a major component of this plan will be the details of the types of backup that will be taken, and when, but never forget that the ultimate goal is not just to take backups, but to support specific restore requirements. This backup and restore plan will be agreed and documented in a contract with the database users, called a Service Level Agreement, or SLA, which will establish a certain level of commitment regarding the availability and recoverability of their database and data. Planning an SLA is a delicate process, and as DBAs we have to consider not only our own experiences maintaining various databases and servers, but also the points of view of the application owners, managers, and end-users. This can be a tricky task, since most owners and users typically feel that their platform is the most important in any enterprise and that it should get the most attention to detail, and highest level of service. We have to be careful not to put too much emphasis on any one system, but also to not let anything fall through the cracks. 64 65 Chapter 2: Planning, Storage and Documentation So how do we get started, when devising an appropriate backup and restore plan for a new database? As DBAs, we'd ideally be intimately familiar with the inner and outer workings of every database that is in our care. However, this is not always feasible. Some DBAs administer too many servers to know exactly what is on each one, or even what sort of data they contain. In such cases, a quick 15-minute discussion with the application owner can provide a great deal of insight into the sort of database that we are dealing with. Over the coming sections, we'll take a look at factors that affect each side of the backuprestore coin, and the sorts of questions we need to ask of our database owners and users. Backup requirements The overriding criterion in determining the types of backup we need to take, and how frequently, is the maximum toleration to possible data loss, for a particular database. However, there are a few other factors to consider as well. On what level of server does this database reside? For example, is it a development box, a production server or a QA machine? We may be able to handle losing a week's worth of development changes, but losing a week's work of production data changes could cost someone their job, especially if the database supports a business-critical, front-line application Do we need to back up this database at all? Not all data loss is career-ending and, as a DBA, you will run into plenty of situations where a database doesn't need to be backed up at all. For example, you may have a development system that gets refreshed with production data on a set schedule. If you and your development team are comfortable not taking backups of data that is refreshed every few days anyway, then go for it. Unless there is a good reason to do so (perhaps the 65 66 Chapter 2: Planning, Storage and Documentation data is heavily modified after each refresh) then you don't need to waste resources on taking backups of databases that are just copies of data from another server, which does have backups being taken. How much data loss is acceptable? Assuming this is a system that has limits on its toleration of data loss, then this question will determine the need to take supplemental backups (transaction log, differential) in addition to full database (or file) backups, and the frequency at which they need to be taken. Now, the application owner needs to be reasonable here. If they state that they cannot tolerate any down-time, and cannot lose any data at all, then this implies the need for a very high availability solution for that database, and a very rigorous backup regime, both of which are going to cost a lot of design, implementation and administrative effort, as well as a lot of money. If they offer more reasonable numbers, such as one hour's potential data loss, then this is something that can be supported as part of a normal backup regime, taking hourly transaction log backups. Do we need to take these hourly log backups all day, every day, though? Perhaps, yes, but it really depends on the answer to next question. At what times of the day is this database heavily used? What we're trying to find out here is when, and how often, backups need to be taken. Full backups of large databases should be carried out at times when the database is least used, and supplemental backups need to be fitted in around our full backup schedule. We'll need to start our log backup schedules well in advance of the normal database use schedules, in order to capture the details of all data changes, and end them after the traffic subsides. Alternatively, we may need to run these log file backups all day, which is not a bad idea, since then we will never have large gaps in time between the transaction log backup files. 66 67 Chapter 2: Planning, Storage and Documentation What are the basic characteristics of the database? Here, we're interested in other details that may impact backup logistics. We'll want to find out, for example: How much data is stored the size of the database will impact backup times, backup scheduling (to avoid normal database operations being unduly affected by backups), amount of storage space required, and so on. How quickly data is likely to grow this may impact backup frequency and scheduling, since we'll want to control log file size, as well as support data loss requirements. We will also want to plan for the future data growth to make sure our SQL Server backup space doesn't get eaten up. The nature of the workload. Is it OLTP, with a high number of both reads and writes? Or mainly read-only? Or mixed and, if so, are certain tables exclusively read-only? Planning for backup and restore starts, ideally, right at the very beginning of a database's life, when we are planning to create it, and define the data and log file properties and architecture for the new database. Answers to questions such as these will not only help define our backup requirements, but also the appropriate file architecture (number of filegroups and data files) for the database, initial file sizes and growth characteristics, as well as the required hardware capacity and configuration (e.g. RAID level). Data and log file sizing and growth We'll discuss this in more detail in Chapter 3, but it's worth noting that the initial size, and subsequent auto-growth characteristics, of the data and log files will be inherited from the properties of the model database for that instance, and there is a strong chance that these will not be appropriate for your database. 67 68 Chapter 2: Planning, Storage and Documentation Restore requirements As well as ensuring we have the appropriate backups, we need a plan in place that will allow us to perform "crash recovery." In other words, we need to be sure that we can restore our backups in such a way that we meet the data loss requirements, and complete the operation within an acceptable period of down-time. An acceptable recovery time will vary from database to database depending on a number of factors, including: the size of the database much as we would like to, we cannot magically restore a full backup of a 500 GB database, and have it back online with all data recovered in 15 minutes where the backup files are stored if they are on site, we only need to account for the time needed for the restore and recovery process; if the files are in offsite storage, we'll need to plan extra time to retrieve them first the complexity of the restore process if we just need to restore a full database backup, this is fairly straightforward process; but if we need to perform a complex point-in-time restore involving full, differential, and log files, this is more complex and may require more time or at least we'll need to practice this type of restore more often, to ensure that all members of our team can complete it successfully and within the required time. With regard to backup file locations, an important, related question to ask your database owners is something along the lines of: How quickly, in general, will problems be reported? That may sound like a strange question, but its intent is to find out how long it's necessary to retain database backup files on network storage before deleting them to make room for more backup files (after, of course, transferring these files to tape for offsite storage). 68 69 Chapter 2: Planning, Storage and Documentation The location of the backup file will often affect how quickly a problem is solved. For example, let's say the company policy is to keep backup files on site for three days, then archive them to tape, in offsite storage. If a data loss occurs and the error is caught quickly, the necessary files will be at hand. If, however, it's only spotted five days later, the process of getting files back from the offsite tape backups will push the recovery time out, and this extra time should be clearly accounted for in the SLA. This will save the headache of having to politely explain to an angry manager why a database or missing data is not yet back online. An SLA template Having asked all of these question, and more, it's time to draft the SLA for that database. This document is a formal agreement regarding the backup regime that is appropriate for that database, and also offers a form of insurance to both the owners and the DBA. You do not, as a DBA, want to be in a position of having a database owner demanding to know why you can't perform a log restore to get a database back how it was an hour before it went down, when you know that they told you that only weekly full backups were required for that database, but you have no documented proof. Figure 2-1 offers a SLA template, which will hopefully provide a good starting point for your Backup SLA contract. It might not have everything you need for your environment, but you can download the template from the supplemental material and modify it, or just create your own from scratch. 69 70 Chapter 2: Planning, Storage and Documentation Server Name: Server Category: Application Name: Application Owner: MYCOMPANYDB1 Production / Development / QA / Staging Sales-A-Tron Sal Esman Database Name : Data Loss: Recovery Time: 4 Hours salesdb 2 Hours Full Backups: Daily / Weekly / Monthly Diff Backups: Daily / Weekly Log Backups: Hour Intervals File Backups: Daily / Weekly / Monthly File Differentials: Daily / Weekly Database Name : Data Loss: Recovery Time: 6 Hours salesarchives 4 Hours Full Backups: Daily / Weekly / Monthly Diff Backups: Daily / Weekly Log Backups: Hour Intervals File Backups: Daily / Weekly / Monthly File Differentials: Daily / Weekly Database Name : Data Loss: Recovery Time: 3 Hours resourcedb 2 Hours Full Backups: Daily / Weekly / Monthly Diff Backups: Daily / Weekly Log Backups: Hour Intervals File Backups: Daily / Weekly / Monthly File Differentials: Daily / Weekly Database Administrator: Application Owner: Date of Agreement: Figure 2-1: An example backup and restore SLA. 70 71 Chapter 2: Planning, Storage and Documentation Example restore requirements and backup schemes Based on all the information gathered for the SLA, we can start planning the detailed backup strategy and restore for each database. By way of demonstrating the process, let's walk through a few common scenarios and the recommended backup strategy. Of course, examples are only intended as a jumping-off point for your own SQL Server backup and restore plans. Each server in your infrastructure is different and may require completely different backup schedules and structures. Scenario 1: Development server, VLDB, simple file architecture Here, we have a development machine containing one VLDB. This database is not structurally complex, containing only one data file and one log file. The developers are happy to accept data loss of up to a day. All activity on this database takes place during the day, with a very few transactions happening after business hours. In this case, it might be appropriate to operate the user database in SIMPLE recovery model, and implement a backup scheme such as the one below. 1. Perform full nightly database backups for the system databases. 2. Perform a full weekly database backup for the VLDB, for example on Sunday night. 3. Perform a differential database backup for the VLDB on the nights where you do not take the full database backups. In this example, we would perform these backups on Monday through Saturday night. 71 72 Chapter 2: Planning, Storage and Documentation Scenario 2: Production server, 3 databases, simple file architecture, 2 hours' data loss In the second scenario, we have a production server containing three actively-used databases. The application owner informs us that no more than two hours of data loss can be tolerated, in the event of corruption or any other disaster. None of the databases are complex structurally, each containing just one data file and one log file. With each database operating in FULL recovery model, an appropriate backup scheme might be as below. 1. Perform full nightly database backups for every database (plus the system databases). 2. Perform log backups on the user databases every 2 hours, on a schedule starting after the full backups are complete and ending before the full backup jobs starts. Scenario 3: Production server, 3 databases, complex file architecture, 1 hour's data loss In this final scenario, we have a production database system that contains three databases with complex data structures. Each database comprises multiple data files split into two filegroups, one read-only and one writable. The read-only file group is updated once per week with newly archived records. The writable file groups have an acceptable data loss of 1 hour. Most database activity on this server will take place during the day. With the database operating in FULL recovery model, the backup scheme below might work well. 1. Perform nightly full database backups for all system databases. 2. Perform a weekly full file backup of the read-only filegroups on each user database, after the archived data has been loaded. 72 73 Chapter 2: Planning, Storage and Documentation 3. Perform nightly full file backups of the writable file groups on each user database. 4. Perform hourly log backups for each user database; the log backup schedule should start after the nightly full file backups are complete, and finish one hour before the full file backup processes start again. Backup scheduling It can be a tricky process to organize the backup schedule such that all the backups that are required to support the Backup and Restore SLA fit into the available maintenance windows, don't overlap, and don't cause undue stress on the server. Full database and file backups, especially of large databases, can be CPU- and Disk I/0-intensive processes, and so have the propensity to cause disruption, if they are run at times when the database is operating under its normal business workload. Ideally, we need to schedule these backups to run at times when the database is not being accessed, or at least is operating under greatly reduced load, since we don't want our backups to suffer because they are fighting with other database processes or other loads on the system (and vice versa). This is especially true when using compressed backups, since a lot of the load that would be done on disk is moved to the CPU in the compression phase. Midnight is usually a popular time to run large backup jobs, and if your shop consists of just a few machines, by all means schedule all your full nightly backups to run at this time. However, if you administer 20, 30, or more servers, then you may want to consider staggering your backups throughout the night, to avoid any possible disk or network contention issues. This is especially true when backing up directly to a network storage device. These devices are very robust and can perform a lot of operations per second, but there is a limit to how much traffic any device can handle. By staggering the backup jobs, you can help alleviate any network congestion. 73 74 Chapter 2: Planning, Storage and Documentation The scheduling of differential backups will vary widely, depending on their role in the backup strategy. For a VLDB, we may be taking differential backups every night, except for on the night of the weekly full backup. At other times, we may run differential backups at random times during the day, for example as a way to safeguard data before performing a large modification or update. Transaction log backups are, in general, much less CPU- and I/O-intensive operations and can be safely run during the day, alongside the normal database workload. In fact, there isn't much point having a transactional backup of your database if no one is actually performing any transactions! The scheduling of log backups may be entirely dictated by the agreed SLA; if no more than five minutes of data can be lost, then take log backups every five minutes! If there is some flexibility, then try to schedule consecutive backups close enough so that the log file does not grow too much between backups, but far enough apart that it does not put undue stress on the server and hardware. As a general rule, don't take log backup much more frequently than is necessary to satisfy the SLA. Remember, the more log backups you take, the more chance there is that one will fail and possibly break your log chain. However, what happens when you have two databases on a server that both require log backups to be taken, but at different intervals? For example, Database A requires a 30-minute schedule, and Database B, a 60-minute schedule. You have two choices: 1. create two separate log backup jobs, one for DB_A running every 30 minutes and one for DB_B, every 60 minutes; this means multiple SQL Agent / scheduled jobs and each job brings with it a little more maintenance and management workload 2. take log backups of both databases using a single job that runs every 30 minutes; you'll have fewer jobs to schedule and run, but more log backup files to manage, heightening the risk of a file being lost or corrupted. My advice in this case would be to create one log backup job, taking log backups every 30 minutes; it satisfies the SLA for both databases and is simpler to manage. The slight downside is that the time between log backups for databases other than the first one in 74 75 Chapter 2: Planning, Storage and Documentation the list might be slightly longer than 30 minutes, since the log backup for a given database in the queue can't start till the previous one finishes. However, since the backups are frequent and so the backup times short, any discrepancy is likely to be very small. Backup Verification and Test Restores Whether there are 10 databases in our environment or 1,000, as DBAs, we must ensure that all backups are valid and usable. Without good backups, we will be in a very bad spot when the time comes to bring some data back from the dead. Backup verification is easy to integrate into normal backup routines, so let's discuss a few tips on how to achieve this. The first, and most effective, way to make sure that the backups are ready for use is simply to perform some test restores. This may seem obvious, but there are too many DBAs who simply assume that their backups are good and let them sit on the shelf. We don't need to restore every backup in the system to check its health, but doing random spot checks now and again is an easy way to gain peace of mind regarding future restores. Each week, choose a random database, and restore its last full backup. If that database is subject to differential and log backups as well, choose a point-in-time test restore that uses a full, differential and a few log backup files. Since it's probably unrealistic to perform regular test restores on every single database, there are a few other practices that a DBA can adopt to maximize the likelihood that backup files are free of any corruption and can be smoothly restored. 75 76 Chapter 2: Planning, Storage and Documentation Back up WITH CHECKSUM We can use the WITH CHECKSUM option, as part of a backup operation, to instruct SQL Server to test each page with its corresponding checksum to makes sure that no corruption has happened in the I/O subsystem. BACKUP DATABASE <DatabaseName> TO DISK = '<Backup_location>' WITH CHECKSUM Listing 2-1: Backup WITH CHECKSUM syntax. If it finds a page that fails this test, the backup will fail. If the backup succeeds then the backup is valid or maybe not. In fact, this type of validation has gotten many DBAs into trouble. It does not guarantee that a database backup is corruption free. The CHECKSUM only verifies that we are not backing up a database that was already corrupt in some way; if the corruption occurs in memory or somehow else during the backup operation, then it will not be detected. As a final note, my experience and that of many others, suggests that, depending on the size of the database that is being used, enabling checksums (and other checks such as torn page detection) will bring with it a small CPU overhead and may slow down your backups (often minimally). However, use of the WITH CHECKSUM option during backups is a valuable safeguard and, if you can spare the few extra CPU cycles and the extra time the backups will take, go ahead. These checks are especially valuable when used in conjunction with restore verification. 76 77 Chapter 2: Planning, Storage and Documentation Verifying restores Since we cannot rely entirely on page checksums during backups, we should also be performing some restore verifications to make sure our backups are valid and restorable. As noted earlier, the surest way to do this is by performing test restores. However, a good additional safety net is to use the RESTORE VERIFYONLY command. This command will verify that the structure of a backup file is complete and readable. It attempts to mimic an actual restore operation as closely as possible without actually restoring the data. As such, this operation only verifies the backup header; it does not verify that the data contained in the file is valid and not corrupt. However, for databases where we've performed BACKUP WITH CHECKSUM, we can then re-verify these checksums as part of the restore verification process. RESTORE VERIFYONLY FROM DISK= '<Backup_location>' WITH CHECKSUM Listing 2-2: RESTORE VERIFYONLY WITH CHECKSUM syntax. This will recalculate the checksum on the data pages contained in the backup file and compare it against the checksum values generated during the backup. If they match, it's a good indication that the data wasn't corrupted during the backup process. DBCC CHECKDB One of the best ways to ensure that databases remain free of corruption, so that this corruption does not creep into backup files, making mincemeat of our backup and restore planning, is to run DBCC CHECKDB on a regular basis, to check the logical and physical integrity of all the objects in the specified database, and so catch corruption as early as possible. 77 78 Chapter 2: Planning, Storage and Documentation We will not discuss this topic in detail in this book, but check out the information in Books Online ( aspx) and if you are not already performing these checks regularly, you should research and start a DBCC CHECKDB regimen immediately. Documenting Critical Backup Information Properly documenting your backup and restore plan goes beyond the SLA which we have previously discussed. There is a lot more information that you must know, and document, for each of the databases in your care, and the backup scheme to which they are subject. The following checklist summarizes just some of the items that should be documented. For further coverage, Brad McGehee is currently writing a series of articles on documenting SQL Server databases, covering the information listed below, and much more. See, for example, database-properties-health-check/. Database File Information Physical File Name: MDF Location: NDF Location(s) (add more rows as needed): Filegroup(s) (add more rows as needed): Includes Partitioned Tables/Indexes: Database Size: Has Database File Layout Been Optimized: 78 79 Chapter 2: Planning, Storage and Documentation Log File Information Physical File Name: LDF Location: Log Size: Number of Virtual Log Files: Backup Information Types of Backups Performed (Full, Differential, Log): Last Full Database Backup: Last Differential Database Backup: Last Transaction Log Backup: How Often are Transaction Logs Backed Up: Average Database Full Backup Time: Database Full Backup Size: Average Transaction Log Backup Size: Number of Full Database Backup Copies Retained: Backups Encrypted: Backups Compressed: Backup To Location: Offsite Backup Location: Backup Software/Agent Used: 79 80 Chapter 2: Planning, Storage and Documentation This information can be harvested in a number of different ways, but ideally will be scripted and automated. Listing 2-3 shows two scripts that will capture just some of this information; please feel free to adapt and amend as is suitable for your environment. SELECT d.name, MAX(d.recovery_model), is_password_protected, --Backups Encrypted: --Last Full Database Backup: MAX(CASE WHEN type = 'D' THEN backup_start_date ELSE NULL END) AS [Last Full Database Backup], --Last Transaction Log Backup: MAX(CASE WHEN type = 'L' THEN backup_start_date ELSE NULL END) AS [Last Transaction Log Backup], --Last Differential Log Backup: MAX(CASE WHEN type = 'I' THEN backup_start_date ELSE NULL END) AS [Last Differential Backup], --How Often are Transaction Logs Backed Up: DATEDIFF(Day, MIN(CASE WHEN type = 'L' THEN backup_start_date ELSE 0 END), MAX(CASE WHEN type = 'L' THEN backup_start_date ELSE 0 END)) / NULLIF(SUM(CASE WHEN type = 'I' THEN 1 ELSE 0 END), 0) [Logs BackUp count], --Average backup times: SUM(CASE WHEN type = 'D' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'D' THEN 1 ELSE 0 END), 0) AS [Average Database Full Backup Time], SUM(CASE WHEN type = 'I' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'I' THEN 1 80 81 Chapter 2: Planning, Storage and Documentation ELSE 0 END), 0) AS [Average Differential Backup Time], SUM(CASE WHEN type = 'L' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'L' THEN 1 ELSE 0 END), 0) AS [Average Log Backup Time], SUM(CASE WHEN type = 'F' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'F' THEN 1 ELSE 0 END), 0) AS [Average file/filegroup Backup Time], SUM(CASE WHEN type = 'G' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'G' THEN 1 ELSE 0 END), 0) AS [Average Differential file Backup Time], SUM(CASE WHEN type = 'P' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'P' THEN 1 ELSE 0 END), 0) AS [Average partial Backup Time], SUM(CASE WHEN type = 'Q' THEN DATEDIFF(second, backup_start_date, Backup_finish_date) ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'Q' THEN 1 ELSE 0 END), 0) AS [Average Differential partial Backup Time], MAX(CASE WHEN type = 'D' THEN backup_size ELSE 0 END) AS [Database Full Backup Size], 81 82 Chapter 2: Planning, Storage and Documentation SUM(CASE WHEN type = 'L' THEN backup_size ELSE 0 END) / NULLIF(SUM(CASE WHEN type = 'L' THEN 1 ELSE 0 END), 0) AS [Average Transaction Log Backup Size], --Backup compression?: CASE WHEN SUM(backup_size - compressed_backup_size) <> 0 THEN 'yes' ELSE 'no' END AS [Backups Compressed] FROM master.sys.databases d LEFT OUTER JOIN msdb.dbo.backupset b ON d.name = b.database_name WHERE d.database_id NOT IN ( 2, 3 ) GROUP BY d.name, is_password_protected --HAVING MAX(b.backup_finish_date) <= DATEADD(dd, -7, GETDATE()) ; -- database characteristics SELECT d.name, f.name, LOWER(f.type_Desc), physical_name AS [Physical File Name], [size] / 64 AS [Database Size (Mb)], CASE WHEN growth = 0 THEN 'fixed size' WHEN is_percent_growth = 0 THEN CONVERT(VARCHAR(10), growth / 64) ELSE CONVERT(VARCHAR(10), ( [size] * growth / 100 ) / 64) END AS [Growth (Mb)], CASE WHEN max_size = 0 THEN 'No growth allowed' WHEN max_size = -1 THEN 'unlimited Growth' WHEN max_size = THEN '2 TB' ELSE CONVERT(VARCHAR(10), max_size / 64) + 'Mb' END AS [Max Size], CASE WHEN growth = 0 THEN 'no autogrowth' WHEN is_percent_growth = 0 THEN 'fixed increment' ELSE 'percentage' END AS [Database Autogrowth Setting] FROM master.sys.databases d INNER JOIN master.sys.master_files F ON f.database_id = d.database_id ORDER BY f.name, f.file_id Listing 2-3: Collecting database file and backup file information. 82 83 Chapter 2: Planning, Storage and Documentation Summary With the first two chapters complete, the foundation is laid; we've covered database file architecture, the types of backup that are available, and how this, and the overall backup strategy, is affected by the recovery model of the database. We've also considered hardware storage requirements for our database and backup files, the tools available to capture the backups, and how to develop an appropriate Service Level Agreement for a given database, depending on factors such as toleration for data loss, database size, workload, and so on. We are now ready to move on to the real thing; actually taking and restoring different types of backup. Over the coming chapters, we'll build some sample databases, and demonstrate all the different types of backup and subsequent restore, using both native scripting and a third-party tool (Red Gate SQL Backup). 83 84 Chapter 3: Full Database Backups A full database backup is probably the most common type of backup in the SQL Server world. It is essentially a backup of the data file(s) associated with a database. This chapter will demonstrate how to perform these full database backups, using both GUI-driven and T-SQL techniques. We'll create and populate a sample database, then demonstrate how to capture a full database backup using both the SQL Server Management Studio (SSMS) GUI, and T-SQL scripts. We'll capture and store some metrics (backup time, size of backup file) for both full backup techniques. We'll then take a look at the native backup compression feature, which became part of SQL Server Standard edition in SQL Server 2008, and we'll demonstrate how much potential disk storage space (and backup time) can be saved using this feature. By the end of Chapter 3, we'll be ready to restore a full backup file, and so return our sample database to the exact state that it existed at the time the full database backup process was taken. In Chapter 8, we'll see how to automate the whole process of taking full, as well as differential and transaction log, backups using Red Gate SQL Backup. What is a Full Database Backup? As discussed in Chapter 1, a full database backup (herein referred to simply as a "full backup") is essentially an "archive" of your database as it existed at the point in time of the backup operation. 84 85 Chapter 3: Full Database Backups It's useful to know exactly what is contained in this "archive" and a full backup contains: a copy of the database at the time of backup creation all user objects and data system information pertinent to the database user information permissions information system tables and views enough of the transaction log to be able to bring the database back online in a consistent state, in the event of a failure. Why Take Full Backups? Full backups are the cornerstone of a disaster recovery strategy for any user database, in the event of data corruption, or the loss of a single disk drive, or even a catastrophic hardware failure, where all physical media for a server is lost or damaged. In such cases, the availability of a full backup file, stored securely in a separate location, may be the only route to getting an online business back up and running on a new server, with at least most of its data intact. If there are also differential backups (Chapter 7) and log backups (Chapter 5), then there is a strong chance that we can recover the database to a state very close to that in which it existed shortly before the disaster. If a full backup for a database has never been taken then, by definition, there also won't be any differential or log backups (a full backup is a prerequisite for both), and there is very little chance of recovering the data. In the event of accidental data loss or data corruption, where the database is still operational, then, again, the availability of a full backup (in conjunction with other backup types, if appropriate) means we can restore a secondary copy of the database to a previous 85 86 Chapter 3: Full Database Backups point in time, where the data existed, and then transfer that data back into the live database, using a tool such as SSIS or T-SQL (we cover this in detail in Chapter 6, Log Restores). It is important to stress that these methods, i.e. recovering from potential data loss by restoring backup files, represent the only sure way to recover all, or very nearly all, of the lost data. The alternatives, such as use of specialized log recovery tools, or attempting to recover data from secondary or replica databases, offer relatively slim chances of success. Aside from disaster recovery scenarios, there are also a few day-to-day operations where full backups will be used. Any time that we want to replace an entire database or create a new database containing the entire contents of the backup, we will perform a full backup restore. For example: moving a development project database into production for the first time; we can restore the full backup to the production server to create a brand new database, complete with any data that is required. refreshing a development or quality assurance system with production data for use in testing new processes or process changes on a different server; this is a common occurrence in development infrastructures and regular full backup restores are often performed on an automated schedule. Full Backups in the Backup and Restore SLA For many databases, an agreement regarding the frequency and scheduling of full database backups will form only one component of a wider Backup and Restore SLA, which also covers the need for other backup types (differential, log, and so on). However, for certain databases, the SLA may well specify a need for only full backups. These full backups will, in general, be taken either nightly or weekly. If a database is 86 87 Chapter 3: Full Database Backups subject to a moderate level of data modification, but the flexibility of full point-in-time restore, via log backups, is not required, then the Backup SLA can stipulate simply that nightly full database backups should be taken. The majority of the development and testing databases that I look after receive only a nightly full database backup. In the event of corruption or data loss, I can get the developers and testers back to a good working state by restoring the previous night's full backup. In theory, these databases are exposed to a maximum risk of losing just less than 24 hours of data changes. However, in reality, the risk is much lower since most development happens during a much narrower daytime window. This risk is usually acceptable in development environments, but don't just assume this to be the case; make sure you get sign-off from the project owners. For a database subject only to very infrequent changes, it may be acceptable to take only a weekly full backup. Here the risk of loss is just under seven days, but if the database really is only rarely modified then the overall risk is still quite low. Remember that the whole point of a Backup SLA is to get everyone with a vested interest to "sign off" on acceptable levels of data loss for a given database. Work with the database owners to determine the backup strategy that works best for their databases and for you as the DBA. You don't ever want to be caught in a situation where you assumed a certain level of data loss was acceptable and it turned out you were wrong. Preparing for Full Backups We're going to run through examples of how to take full backups only, using both SSMS and T-SQL scripts. Chapter 4 will show how to restore these backups, and then Chapters 5 and 6 will show how to take and restore log backups, and Chapter 7 will cover differential backup and restore. In Chapter 8, we'll show how to manage full, differential and log backups, using a third-party tool (Red Gate SQL Backup) and demonstrate some of the advantages that such tools offer. 87 88 Chapter 3: Full Database Backups Before we get started taking full backups, however, we need to do a bit of preparatory work, namely choosing an appropriate recovery model for our example database, and then creating that database along with some populated sample tables. Choosing the recovery model For the example database in this chapter, we're going to assume that our Backup SLA expresses a tolerance to potential data loss of 24 hours, as might be appropriate for a typical development database. We can satisfy this requirement using just full database backups, so differential and log backups will not form part of our backup strategy, at this stage. Full database backups can be taken in any one of the three supported recovery models; SIMPLE, FULL or BULK LOGGED (see Chapter 1 for details). Given all this, it makes strategic and administrative sense to operate this database in the SIMPLE recovery model. This will enable us to take the full backups we need, and will also greatly simplify the overall management of this database, since in SIMPLE recovery the transaction log is automatically truncated upon CHECKPOINT (see Chapter 1), and so space in the log is regularly made available for reuse. If we operated the database in FULL recovery, then we would end up taking log backups just to control the size of the log file, even though we don't need those log backups for database recovery purposes. This would generate a needless administrative burden, and waste system resources. Database creation The sample database for this chapter will be about as simple as it's possible to get. It will consist of a single data (mdf) file contained in a single filegroup; there will be no secondary data files or filegroups. This one data file will contain just a handful of tables where we will store a million rows of data. Listing 3-1 shows our fairly simple database creation script. Note that in a production database the data and log files would be placed on separate drives. 88 89 Chapter 3: Full Database Backups CREATE DATABASE [DatabaseForFullBackups] ON PRIMARY ( NAME = N'DatabaseForFullBackups', FILENAME = N'C:\SQLData\DatabaseForFullBackups.mdf', SIZE = KB, FILEGROWTH = KB ) LOG ON ( NAME = N'DatabaseForFullBackups_log', FILENAME = N'C:\SQLData\DatabaseForFullBackups_log.ldf', SIZE = KB, FILEGROWTH = 10240KB ) Listing 3-1: Creating the DatabaseForFullBackups sample database. This is a relatively simple CREATE DATABASE statement, though even it is not quite as minimal as it could be; CREATE DATABASE [DatabaseForFullBackups] would work, since all the arguments are optional in the sense that, if we don't provide explicit values for them, they will take their default values from whatever is specified in the model database. Nevertheless, it's instructive, and usually advisable, to explicitly supply values for at least those parameters shown here. We have named the database DatabaseForFullBackups, which is a clear statement of the purpose of this database. Secondly, via the NAME argument, we assign logical names to the physical files. We are adopting the default naming convention for SQL Server 2008, which is to use the database name for logical name of the data file, and append _log to the database name for the logical name of the log file. File size and growth characteristics The FILENAME argument specifies the path and file name used by the operating system. Again, we are using the default storage path, storing the files in the default data directory, and the default file name convention, which is to simply append.mdf and.ldf to the logical file names. 89 90 Chapter 3: Full Database Backups The optional SIZE and FILEGROWTH arguments are the only cases where we use some non-default settings. The default initial SIZE settings for the data and log files, inherited from the model database properties, are too small (typically, 3 MB and 1 MB respectively) for most databases. Likewise the default FILEGROWTH settings (typically 1 MB increments for the data files and 10% increments for the log file) are also inappropriate. In busy databases, they can lead to fragmentation issues, as the data and log files grow in many small increments. The first problem is physical file fragmentation, which occurs when a file's data is written to non-contiguous sectors of the physical hard disk (SQL Server has no knowledge of this). This physical fragmentation is greatly exacerbated if the data and log files are allowed to grow in lots of small auto-growth increments, and it can have a big impact on the performance of the database, especially for sequential write operations. As a best practice it's wise, when creating a new database, to defragment the disk drive (if necessary) and then create the data and log files pre-sized so that they can accommodate, without further growth in file size, the current data plus estimated data growth over a reasonable period. In a production database, we may want to size the files to accommodate, say, a year's worth of data growth. There are other reasons to avoid allowing your database files to grow in multiple small increments. Each growth event will incur a CPU penalty. This penalty can be mitigated for data files by instant file initialization (enabled by granting the perform volume maintenance tasks right to the SQL Server service account). However, the same optimization does not apply to log files. Furthermore, growing the log file in many small increments can cause log fragmentation, which is essentially the creation of a very large number of small VLFs, which can deteriorate the performance of crash recovery, restores, and log backups (in other words, operations that read the log file). We'll discuss this in more detail in Chapter 5. 90 91 Chapter 3: Full Database Backups In any event, in our case, we're just setting the SIZE and FILEGROWTH settings such that SQL Server doesn't have to grow the files while we pump in our test data. We've used an initial data files size of 500 MB, growing in 100 MB increments, and an initial size for the log file of 100 MB, growing in 10 MB increments. When you're ready, execute the script in Listing 3-1, and the database will be created. Alternatively, if you prefer to create the database via the SSMS GUI, rather than using a script, simply right-click on the Databases node in SSMS, select New Database, and fill out the General tab so it looks like that shown in Figure 3-1. Figure 3-1: Creating a database via SSMS. Setting database properties If, via SSMS, we generate a CREATE script for an existing database, it will contain the expected CREATE DATABASE section, specifying the values for the NAME, FILENAME, SIZE, MAXSIZE and FILEGROWTH arguments. However, this will be followed by a swathe of ALTER DATABASE commands that set various other database options. To see them all, simply browse the various Properties pages for any database. All of these options are, under the covers, assigned default values according to those specified by the model system database; hence the name model, since it is used as a model from which to create all user databases. Listing 3-2 shows a script to set six of the more important options. 91 92 Chapter 3: Full Database Backups ALTER DATABASE [DatabaseForFullBackups] SET COMPATIBILITY_LEVEL = 100 ALTER DATABASE [DatabaseForFullBackups] SET AUTO_SHRINK OFF ALTER DATABASE [DatabaseForFullBackups] SET AUTO_UPDATE_STATISTICS ON ALTER DATABASE [DatabaseForFullBackups] SET READ_WRITE ALTER DATABASE [DatabaseForFullBackups] SET RECOVERY SIMPLE ALTER DATABASE [DatabaseForFullBackups] SET MULTI_USER Listing 3-2: Various options of the ALTER DATABASE command. The meaning of each of these options is as follows: COMPATIBILITY_LEVEL This lets SQL Server know with which version of SQL Server to make the database compatible. In all of our examples, we will be using 100, which signifies SQL Server AUTO_SHRINK This option either turns on or off the feature that will automatically shrink your database files when free space is available. In almost all cases, this should be set to OFF. AUTO_UPDATE_STATISTICS When turned ON, as it should be in most cases, the optimizer will automatically keep statistics updated, in response to data modications. READ_WRITE This is the default option and the one to use if you want users to be able to update the database. We could also set the database to READ_ONLY to prevent any users making updates to the database. RECOVERY SIMPLE This tells SQL Server to set the recovery model of the database to SIMPLE. Other options are FULL (the usual default) and BULK_LOGGED. 92 93 Chapter 3: Full Database Backups MULTI_USER For the database to allow connections for multiple users, we need to set this option. Our other choice is SINGLE_USER, which allows only one connection to the database at a time. The only case where we are changing the usual default value is the command to set the recovery model to SIMPLE; in most cases, the model database will, by default, be operating in the FULL recovery model and so this is the recovery model that will be conferred on all user databases. If the default recovery model for the model database is already set to SIMPLE, for your instance, then you won't need to execute this portion of the ALTER script. If you do need to change the recovery model, just make sure you are in the correct database before running this command, to avoid changing another database's recovery model. Alternatively, simpy pull up the Properties for our newly created database and change the recovery model manually, on the Options page, as shown in Figure 3-2. Figure 3-2: The Options page for a database, in SSMS. 93 94 Chapter 3: Full Database Backups Creating and populating the tables Now that we have a brand new database created on our instance, we need to create a few sample tables. Listing 3-3 shows the script to create two message tables, each with the same simple structure. USE [DatabaseForFullBackups] SET ANSI_NULLS ON SET QUOTED_IDENTIFIER ON] Listing 3-3: Table creation script. MessageTable1 and MessageTable2 are both very simple tables, comprised of only two columns each. The MessageData column will contain a static character string, mainly to fill up data space, and the MessageDate will hold the date and time that the message was inserted. 94 95 Chapter 3: Full Database Backups Now that we have our tables set up, we need to populate them with data. We want to pack in a few hundred thousand rows so that the database will have a substantial size. However, we won't make it so large that it will risk filling up your desktop/laptop drive. Normally, a DBA tasked with pumping several hundred thousand rows of data into a table would reach for the BCP tool and a flat file, which would be the fastest way to achieve this goal. However, since coverage of BCP is out of scope for this chapter, we'll settle for a simpler, but much slower, T-SQL method, as shown in Listing 3-4. USE [DatabaseForFullBackups] dbo.messagetable1 VALUES GETDATE()) Listing 3-4: Populating MessageTable1. The code uses a neat trick that allows us to INSERT the same data into the MessageTable1 table multiple times, without using a looping mechanism. The statement is normally used as a batch separator, but in this case we pass it a parameter defining the number of times to run the code in the batch. So, our database is now at a decent size, somewhere around 500 MB. This is not a large database by any stretch of the imagination, but it is large enough that it will take more than a few seconds to back up. 95 96 Chapter 3: Full Database Backups Generating testing data Getting a decent amount of testing data into a database can be a daunting task, especially when the database becomes much more complex than our example. Red Gate offers a product, SQL Data Generator, which will scan your database and table structure to give you a very robust set of options for automatically generating test data. You can also write custom data generators for even the most specific of projects. See Taking Full Backups We are now set to go and we're going to discuss taking full backups the "GUI way," in SSMS, and by using native T-SQL Backup commands. As you work through the backup examples in this chapter, and throughout the book, for learning purposes, you may occasionally want to start again from scratch, that is, to drop the example database, re-create it, and retake the backups. The best way to do this is to delete the existing backup files for that database, and then drop the database in a way that also clears out the backup history for that database, which is stored in the msdb database. This will prevent SQL Server from referencing any old backup information. Listing 3-5 shows how to do this. EXEC = N'DatabaseName' USE [master] DROP DATABASE [DatabaseName] Listing 3-5: Dropping a database and deleting backup history. 96 97 Chapter 3: Full Database Backups Alternatively, using the SSMS GUI, simply right-click on the database, and select Delete; by default, the option to Delete backup and restore history information for databases will be checked and this will clear out the msdb historical information. Native SSMS GUI method Taking full backups using the SSMS GUI is a fairly straightforward process. I use this technique mainly to perform a one-time backup, perhaps before implementing some heavy data changes on the database. This provides an easy way to revert the database to the state it was in before the change process began, should something go wrong. We're going to store the backup files on local disk, in a dedicated folder. So go ahead now and create a new folder on the root of the C:\ drive of the SQL Server instance, called SQLBackups and then create a subfolder called Chapter3 where we'll store all the full backup files in this chapter. Again, we use the same drive as the one used to store the online data and log files purely as a convenience; in a production scenario, we'd stored the backups on a separate drive! The Backup Database wizard We are now ready to start the full backup process. Open SQL Server Management Studio, connect to your server, expand the Databases node and then right-click on the DatabaseForFullBackups database, and navigate Tasks Backup, as shown in Figure 98 Chapter 3: Full Database Backups Figure 3-3: Back Up Database menu option. This will start the backup wizard and bring up a dialog box titled Back Up Database DatabaseForFullBackups, shown in Figure 3-4, with several configuration options that are available to the T-SQL BACKUP DATABASE command. Don't forget that all we're really doing here is using a graphical interface to build and run a T-SQL command. 98 99 Chapter 3: Full Database Backups Figure 3-4: Back Up Database wizard. The General page comprises three major sections: Source, Backup set and Destination. In the Source section, we specify the database to be backed up and what type of backup to perform. The Backup type drop-down list shows the types of backup that are available to your database. In our example we are only presented with two options, Full and Differential, since our database is in SIMPLE recovery model. 99 100 Chapter 3: Full Database Backups You will also notice a check box with the label Copy-only backup. A copy-only full backup is one that does not affect the normal backup operations of a database and is used when a full backup is needed outside of a normal scheduled backup plan. When a normal full database backup is taken, SQL Server modifies some internal archive points in the database, to indicate that a new base file has been created, for use when restoring subsequent differential backups. Copy-only full backups preserve these internal archive points and so cannot be used as a differential base. We're not concerned with copy-only backups at this point. The Backup component section is where we specify either a database or file/filegroup backup. The latter option is only available for databases with more than one filegroup, so it is deactivated in this case. We will, however, talk more about this option when we get to Chapter 9, on file and filegroup backups. In the Backup set section, there are name and description fields used to identify the backup set, which is simply the set of data that was chosen to be backed up. The information provided here will be used to tag the backup set created, and record its creation in the MSDB backup history tables. There is also an option to set an expiration date on our backup set. When taking SQL Server backups, it is entirely possible to store multiple copies of a database backup in the same file or media. SQL Server will just append the next backup to the end of the backup file. This expiration date lets SQL Server know how long it should keep this backup set in that file before overwriting it. Most DBAs do not use this "multiple backups per file" feature. There are only a few benefits, primarily a smaller number of files to manage, and many more drawbacks: larger backup files and single points of failure, to name only two. For simplicity and manageability, throughout this book, we will only deal with backups that house a single backup per file. The Destination section is where we specify the backup media and, in the case of disk, the location of the file on this disk. The Tape option button will be disabled unless a tape drive device is attached to the server. 100 101 Chapter 3: Full Database Backups As discussed in Chapter 1, even if you still use tape media, as many do, you will almost never back up directly to tape; instead you'll back up to disk and then transfer older backups to tape. When using disk media to store backups, we are offered three buttons to the right of the file listing window, for adding and removing disk destinations, as well as looking at the different backup sets that are already stored in a particular file. The box will be pre-populated with a default file name and destination for the default SQL Server backup folder. We are not going to be using that folder (simply because we'll be storing our backups in separate folders, according to chapter) so go ahead and use the Remove button on that file to take it out of the list. Now, use the only available button, the Add button to bring up the Select Backup Destination window. Make sure you have the File name option selected, click the browse ( ) button to bring up the Locate Database Files window. Locate the SQLBackups\Chapter3 directory that you created earlier, on the machine and then enter a name for the file, DatabaseForFullBackups_Full_Native_1.bak as shown in Figure 3-5. Figure 3-5: Backup file configuration. 101 102 Chapter 3: Full Database Backups Once this has been configured, click OK to finalize the new file configuration and click OK again on the Select Backup Destination dialog box to bring you back to the Back Up Database page. Now that we are done with the General page of this wizard, let's take a look at the Options Page, shown in Figure 3-6. Figure 3-6: Configuration options for backups. 102 103 Chapter 3: Full Database Backups The Overwrite media section is used in cases where a single file stores multiple backups and backup sets. We can set the new backup to append to the existing backup set or to overwrite the specifically named set that already exists in the file. We can also use this section to overwrite an existing backup set and start afresh. We'll use the option of Overwrite all existing backup sets since we are only storing one backup per file. This will make sure that, if we were to run the same command again in the event of an issue, we would wind up with just one backup set in the file. The Reliability section provides various options that can be used to validate the backup, as follows: Verify backup when finished Validates that the backup set is complete, after the backup operation has completed. It will make sure that each backup in the set is readable and ready for use. Perform checksum before writing to media SQL Server performs a checksum operation on the backup data before writing it to the storage media. As discussed in Chapter 2, a checksum is a special function used to make sure that the data being written to the disk/tape matches what was pulled from the database or log file. This option makes sure your backup data is being written correctly, but might also slow down your backup operation. Continue on error Instructs SQL Server to continue with all backup operations even after an error has been raised during the backup operation. The Transaction log section offers two important configuration options for transaction log backups, and will be covered in Chapter 5. The Tape Drive section of the configuration is only applicable if you are writing your backups directly to tape media. We have previously discussed why this is not the best way for backups to be taken in most circumstances, so we will not be using these options (and they aren't available here anyway, since we've already elected to back up to disk). 103 104 Chapter 3: Full Database Backups The final Compression configuration section deals with SQL Server native backup compression, which is an option that we'll ignore for now, but come back to later in the chapter. Having reviewed all of the configuration options, go ahead and click OK at the bottom of the page to begin taking a full database backup of the DatabaseForFullBackups database. You will notice the progress section begin counting up in percentage. Once this reaches 100%, you should receive a dialog box notifying you that your backup has completed. Click OK on this notification and that should close both the dialog box and the Back Up Database wizard. Gathering backup metrics, Part 1 That wasn't so bad! We took a full backup of our database that has one million rows of data in it. On a reasonably laptop or desktop machine, the backup probably would have taken seconds. On a decent server, it will have been much quicker. However, it's useful to have slightly more accurate timings so that we can compare the performance of the various methods of taking full backups. We also need to check out the size of the backup file, so we can see how much storage space it requires and, later, compare this to the space required for compressed backup files. To find out the size of the backup file, simple navigate to the SQLBackups\Chapter2 folder in Windows Explorer and check out the size of the DatabaseForFullBackups_ Full_Native_1.bak file. You should find that it's roughly the same size as the data (mdf) file, i.e. about 500 MB, or half of a gigabyte. This doesn't seem bad now, but given that some databases are over 1 TB in size, you can begin to appreciate the attraction of backup compression. Checking the exact execution time for the backup process is a little trickier. If we don't want to use a stopwatch to measure the start and stop of the full backup, we can use the backupset system table in MSDB to give us a more accurate backup time. Take a look 104 105 Chapter 3: Full Database Backups at Listing 3-6 for an example of how to pull this information from your system. I'm only returning a very small subset of the available columns, so examine the table more closely, to find more information that you might find useful. USE msdb SELECT database_name, DATEDIFF(SS, backup_start_date, backup_finish_date) AS [RunTImeSec], database_creation_date FROM dbo.backupset ORDER BY database_creation_date DESC Listing 3-6: Historical backup runtime query. Figure 3-7 shows some sample output. Figure 3-7: Historical backup runtime results. On my machine the backup takes 49 seconds. That seems fairly good, but we have to consider the size of the test database. It is only 500 MB, which is not a typical size for most production databases. In the next section, we'll pump more data into the database and take another full backup (this time using T-SQL directly), and we'll get to see how the backup execution time varies with database size. 105 106 Chapter 3: Full Database Backups Native T-SQL method Every DBA needs to know how to write a backup script in T-SQL. Scripting is our route to backup automation and, in cases where we don't have access to a GUI, it may be the only option available; we can simply execute our backups scripts via the osql or sqlcmd command line utilities. Here, however, for general readability, we'll execute the scripts via SSMS. Remember that the commands in this section are essentially the same ones that the GUI is generating and executing against the server when we use the Backup wizard. Before we move on to take another full backup of our DatabaseForFullBackups database, this time using T-SQL, we're first going add a bit more data. Let's put a new message into our second table with a different date and time stamp. We are going to use the same method to fill the second table as we did for the first. The only thing that is going to change is the text that we are entering. Take a look at Listing 3-7 and use this to push another million rows into the database. USE [DatabaseForFull 3-7: Populating the MessageTable2 table. 106 107 Chapter 3: Full Database Backups This script should take about a few minutes to fill the secondary table. Once it is populated with another million rows of the same size and structure as the first, our database file should be hovering somewhere around 1 GB in size, most likely just slightly under. Now that we have some more data to work with, let's move on to taking native SQL Server backups using T-SQL only. A simple T-SQL script for full backups Take a look at Listing 3-8, which shows a script that can be used to take a full backup of the newly populated DatabaseForFullBackups database. USE master BACKUP DATABASE [DatabaseForFullBackups] TO DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_2.bak' WITH FORMAT, INIT, NAME = N'DatabaseForFullBackups-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10 Listing 3-8: Native full backup T-SQL script. This script may look like it has some extra parameters, compared to the native GUI backup that we did earlier, but this is the scripted output of that same backup, with only the output file name modified, and a few other very minor tweaks. So what do each of these parameters mean? The meaning of the first line should be fairly obvious; it is instructing SQL Server to perform a full backup of the DatabaseForFullBackups database. This leads into the second line where we have chosen to back up to disk, and given a complete file path to the resulting backup file. The remainder of the parameters are new, so let's go through each one. 107 108 Chapter 3: Full Database Backups. 108 109 Chapter 3: Full Database Backups STATS This option may prove useful to you when performing query-based backups. The STATS parameter defines the time intervals on which SQL Server should update the "backup progress" messages. For example, using stats=10 will cause SQL Server to send a status message to the query output for each 10 percent of the backup completion. As noted, if we wished to overwrite an existing backup set, we'd want to specify the INIT parameter but, beyond that, none of these secondary parameters, including the backup set NAME descriptor, are required. As such, we can actually use a much simplified BACKUP command, as shown in Listing 3-9. BACKUP DATABASE [DatabaseForFullBackups] TO DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_2.bak' Listing 3-9: Slimmed-down native T-SQL backup code. Go ahead and start Management Studio and connect to your test server. Once you have connected, open a new query window and use either Listing 3-8 or 3-9 to perform this backup in SSMS. Once it is done executing, do not close the query, as the query output contains some metrics that we want to record. Gathering backup metrics, Part 2 Now that the backup has completed, let's take a look at the query output window to see if we can gather any information about the procedure. Unlike the native GUI backup, we are presented with a good bit of status data in the messages tab of the query window. The post-backup message window should look as shown in Figure 3-8 (if you ran Listing 3-8, it will also contain ten "percent processed" messages, which are not shown here). 109 110 Chapter 3: Full Database Backups Figure 3-8: Native T-SQL backup script message output. This status output shows how many database pages the backup processed as well as how quickly the backup was completed. On my machine, the backup operation completed in just under 80 seconds. Notice here that that the backup processes all the pages in the data file, plus two pages in the log file; the latter is required because a full backup needs to include enough of the log that the backup can produce a consistent database, upon restore. When we ran our first full backup, we had 500 MB in our database and the backup process took 49 seconds to complete. Why didn't it take twice as long this time, now that we just about doubled the amount of data? The fact is that the central process of writing data to the backup file probably did take roughly twice as long, but there are other "overhead processes" associated with the backup task that take roughly the same amount of time regardless of how much data is being backed up. As such, the time to take backups will not increase linearly with increasing database size. But does the size of the resulting backup file increase linearly? Navigating to our SQL Server backup files directory, we can see clearly that the size is nearly double that of the first backup file (see Figure 3-9). The file size of your native SQL Server backups will grow at nearly the same rate as the database data files grow. 110 111 Chapter 3: Full Database Backups Figure 3-9: Comparing native SQL Server backup file sizes. We will compare these metrics against the file sizes and speeds we get from backing up the same files using Red Gate's SQL Backup in Chapter 8. Native Backup Compression In SQL Server 2008 and earlier versions, backup compression was a feature only available in the Enterprise Edition (or Developer Edition) of SQL Server. However, starting with SQL Server 2008 R2, backup compression has been made available in all editions, so let's take a quick look at what savings it can offer over non-compressed backups, in terms of backup file size and the speed of the backup operation. Generally speaking, third-party backup tools still offer a better compression ratio, better speed and more options (such as compressed and encrypted backups) than native backup compression. However, we'll put that to the test in Chapter 8. All we're going to do is perform a compressed backup of the DatabaseForFull- Backups, using the script shown in Listing USE [master] BACKUP DATABASE [DatabaseForFullBackups] TO DISK = N'C:\SQLBackups\Chapter3\SQLNativeCompressionTest.bak' WITH COMPRESSION, STATS = 10 Listing 3-10: SQL native compression backup test. 111 112 Chapter 3: Full Database Backups The only difference between this and our backup script in Listing 3-9, is the use here of the COMPRESSION keyword, which instructs SQL Server to make sure this database is compressed when written to disk. If you prefer to run the compressed backup using the GUI method, simply locate the Compression section, on the Options page of the Backup Wizard, and change the setting from Use the default server setting to Compress backup. Note that, if desired, we can use the sp_configure stored procedure to make backup compression the default behavior for a SQL Server instance. On completion of the backup operation, the query output window will display output similar to that shown in Figure Figure 3-10: Compressed backup results. If you recall, a non-compressed backup of the same database took close to 80 seconds and resulted in a backup file size of just over 1 GB. Here, we can see that use of compression has reduced the backup time to about 32 seconds, and it results in a backup file size, shown in Figure 3-11, of only 13 KB! Figure 3-11: Compressed backup file size. 112 113 Chapter 3: Full Database Backups These results represent a considerable saving, in both storage space and processing time, over non-compressed backups. If you're wondering whether or not the compression rates should be roughly consistent across all your databases, then the short answer is no. Character data, such as that stored in our DatabaseForFullBackups database compresses very well. However, some databases may contain data that doesn't compress as readily such as FILESTREAM and image data, and so space savings will be less. Verifying Backups Having discussed the basic concepts of backup verification in Chapter 2, Listing 3-11 shows a simple script to perform a checksum during a backup of our DatabaseForFullBackups database, followed by a RESTORE VERIFYONLY, recalculating the checksum. BACKUP DATABASE [DatabaseForFullBackups] TO DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_Checksum.bak' WITH CHECKSUM RESTORE VERIFYONLY FROM DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_Checksum.bak' WITH CHECKSUM Listing 3-11: Backup verification examples. Hopefully you'll get output to the effect that the backup is valid! 113 114 Chapter 3: Full Database Backups Building a Reusable and Schedulable Backup Script Ad hoc database and transaction log backups can be performed via simple T-SQL scripts or the GUI, in SQL Server Management Studio. However, for production systems, the DBA will need a way to automate these backups, verify that the backups are valid, schedule and monitor them, and so on. Some of the options for automating backups are listed below. SSMS Maintenance Plans Wizard and Designer two tools, built into SSMS, which allow you to configure and schedule a range of core database maintenance tasks, including full database backups and transaction log backups. The DBA can also run DBCC integrity checks, schedule jobs to remove old backup files, and so on. An excellent description of these tools, and their limitations, can be found in Brad McGehee's book, Brad's Sure Guide to SQL Server Maintenance Plans. T-SQL scripts you can write custom T-SQL scripts to automate your backup tasks. A well established and respected set of maintenance scripts is provided by Ola Hallengren (). His scripts create a variety of stored procedures, each performing a specific database maintenance task, including backups, and automated using SQL Agent jobs. PowerShell / SMO scripting more powerful and versatile than T-SQL scripting, but with a steeper learning curve for many DBAs, PowerShell can be used to script and automate almost any maintenance task. There are many available books and resources for learning PowerShell. See, for example, powershell/64316/. Third-party backup tools several third-party tools exist that can automate backups, as well as verify and monitor them. Most offer backup compression and encryption as well as additional features to ease backup management, verify backups, and so on. Examples include Red Gate's SQL Backup, and Quest's LiteSpeed. 114 115 Chapter 3: Full Database Backups In my role as a DBA, I use a third-party backup tool, namely SQL Backup, to manage and schedule all of my backups. Chapter 8 will show how to use this tool to build a script that can be used in a SQL Agent job to take scheduled backups of databases. Summary This chapter explained in detail how to capture full database backups using either SSMS Backup Wizard or T-SQL scripts. We are now ready to move on to the restoration piece of the backup and restore jigsaw. Do not to remove any of the backup files we have captured; we are going to use each of these in the next chapter to restore our DatabaseForFullBackups database. 115 116 Chapter 4: Restoring From Full Backup In the previous chapter, we took two full backups of our DatabaseForFullBackups database, one taken when the database was about 500 MB in size and the second when it was around 1 GB. In this chapter, we're going to restore those full backup files, to re-create the databases as they existed at the point in time that the respective backup processes were taken. In both of our examples we will be restoring over existing databases in order to demonstrate some of the issues that may arise in that situation. In Chapter 6, we'll look at some examples that restore a new copy of a database. Full database restores are the cornerstone of our disaster recovery strategy, but will also be required as part of our regular production and development routines, for example, when restoring to development and test instances. However, for large databases, they can be a time- and disk space-consuming task, as well as causing the DBA a few strategic headaches, which we'll discuss as we progress through the chapter. Full Restores in the Backup and Restore SLA For a database that requires only full backups, the restore process is relatively straightforward, requiring only the relevant full backup file, usually the most recent backup. However, as discussed in Chapter 2, we still need to take into consideration the size of the database being restored, and the location of the backup files, when agreeing an appropriate maximum amount of time that the crash recovery process should take. For example, let's say someone accidentally deleted some records from a table and they need to be recovered. However, by the time the application owner has been notified of the issue, and in turn notified you, five days have passed since the unfortunate event. If local backup files are retained for only three days, then recovering the lost data will involve retrieving data from tape, which will add time to the recovery process. The SLA 116 117 Chapter 4: Restoring From Full Backup needs to stipulate file retention for as long as is reasonably necessary for such data loss or data integrity issues to be discovered. Just don't go overboard; there is no need to keep backup files for 2 weeks on local disk, when 3 5 days will do the trick 99.9% of the time. Possible Issues with Full Database Restores When we restore a database from a full backup file, it's worth remembering that this backup file includes all the objects, data and information that were present in the database at that time, including: all user-defined objects, including stored procedures, functions, views tables, triggers, database diagrams and the rest all data contained in each user-defined table system objects that are needed for regular database use and maintenance all data contained in the system tables user permissions for all objects on the database, including not only default and custom database roles, but also extended object permissions for all explicit permissions that have been set up for any user log file information that is needed to get the database back online, including size, location, and some internal information that is required. In other word, the backup file contains everything needed to re-create an exact copy of the database, as it existed when the backup was taken. However, there may be times when we might not want all of this data and information to be present in the restored database. Let's look at a few examples. 117 118 Chapter 4: Restoring From Full Backup Large data volumes As noted previously, full database restores can be a time-consuming process for large databases, and can quickly eat away at disk space, or even fill up a disk completely. In disaster recovery situations where only a small subset of data has been lost, it often feels frustrating to have to go through a long, full restore process in order to extract what might be only a few rows of data. However, when using only native SQL Server tools, there is no real alternative. If you have licenses for third-party backup and/or data comparison tools, it's worth investigating the possibility of performing what is termed object-level restore. In the case of Red Gate tools, the ones with which I am familiar, their backup products (both SQL Backup and Hyperbac), and SQL Data Compare, offer this functionality. With them, you can compare a backup file directly to a live database, and then restore only the missing object and data, rather than the whole database. Furthermore, Red Gate also offers a different kind of tool to accommodate these large database restore situations, namely SQL Virtual Restore. This tool allows you to mount compressed backups as databases without going through the entire restore process. Since I've yet to use this tool in a production scenario, I won't be including any examples in this book. However, to learn more, check out Brad McGehee's article on Simple Talk, at Restoring databases containing sensitive data If we simply go ahead and perform a full database restore of a production database onto one of our development or testing instances, we could inadvertently be breaking a lot of rules. It's possible that the production instance stores sensitive data and we do not want every developer in the company accessing social security numbers and bank account information, which would be encrypted in production, on their development machines! 118 119 Chapter 4: Restoring From Full Backup It would only take one rogue employee to steal a list of all clients and their sensitive information to sell to a competitor or, worse, a black market party. If you work at a financial institution, you may be dealing on a daily basis with account numbers, passwords and financial transaction, as well as sensitive user information such as social security numbers and addresses. Not only will this data be subject to strict security measures in order to keep customers' information safe, it will also be the target of government agencies and their compliance audits. More generally, while the production servers receive the full focus of attempts to deter and foil hackers, security can be a little lacking in non-production environments. This is why development and QA servers are a favorite target of malicious users, and why having complete customer records on such servers can cause big problems, if a compromise occurs. So, what's the solution? Obviously, for development purposes, we need the database schemas in our development and test servers to be initially identical to the schema that exists in production, so it's common practice, in such situations, to copy the schema but not the data. There are several ways to do this. Restore the full database backup, but immediately truncate all tables, purging all sensitive data. You may then need to shrink the development copy of your database; you don't want to have a 100 GB database shell if that space is never going to be needed. Note that, after a database shrink, you should always rebuild your indexes, as they will get fragmented as a result of such an operation. Use a schema comparison tool, to synch only the objects of the production and development databases. Wipe the database of all user tables and use SSIS to perform a database object transfer of all required user objects. This can be set up to transfer objects only and to ignore any data included in the production system. 119 120 Chapter 4: Restoring From Full Backup Of course, in each case, we will still need a complete, or at least partial, set of data in the development database, so we'll need to write some scripts, or use a data generation tool, such as SQL Data Generator, to establish a set of test data that is realistic but doesn't flout regulations for the protection of sensitive data. Too much permission Imagine now a situation where we are ready to push a brand new database into production, to be exposed to the real world. We take a backup of the development database, restore it on the production machine, turn on the services and website, and let our end-users go wild. A few weeks later, a frantic manager bursts through the door, screaming that the database is missing some critical data. It was there yesterday, but is missing this morning! Some detective work reveals that one of the developers accidentally dropped a table, after work hours the night before. How did this happen? At most, the developer should have had read-only access (via the db_datareader role) for the production machine! Upon investigation of the permissions assigned to that user for the production database, it is revealed that the developer is actually a member of the db_owner database role. How did the user get such elevated permissions? Well, the full database backup includes the complete permission set for the database. Each user's permissions are stored in the database and are associated to the login that they use on that server. When we restore the database from development to production, all database internal permissions are restored as well. If the developer login was assigned db_owner on the development machine, then this permission level will exist on production too, assuming the login was also valid for the production SQL Server. Similarly, if the developer login had db_owner in development but only db_datareader in production, then restoring the development database over the existing production database will effectively elevate the developer to db_owner in 120 121 Chapter 4: Restoring From Full Backup production. Even if a user doesn't have a login on the production database server, the restored database still holds the permissions. If that user is eventually given access to the production machine, he or she will automatically have that level of access, even if it wasn't explicitly given by the DBA team. The only case when this may not happen is when the user is using SQL Server authentication and the internal SID, a unique identifying value, doesn't match on the original and target server. If two SQL logins with the same name are created on different machines, the underlying SIDs will be different. So, when we move a database from Server A to Server B, a SQL login that has permission to access Server A will also be moved to Server B, but the underlying SID will be invalid and the database user will be "orphaned." This database user will need to be "de-orphaned" (see below) before the permissions will be valid. This will never happen for matching Active Directory accounts since the SID is always the same across a domain. In order to prevent this from happening in our environments, every time we restore a database from one environment to another we should: audit each and every login never assume that if a user has certain permissions in one environment they need the same in another; fix any internal user mappings for logins that exist on both servers, to ensure no one gets elevated permissions perform orphaned user maintenance remove permissions for any users that do not have a login on the server to which we are moving the database; the sp_change_ users_login stored procedure can help with this process, reporting all orphans, linking a user to its correct login, or creating a new login to which to link: EXEC sp_change_users_login 'Report' EXEC sp_change_users_login 'Auto_Fix', 'user' EXEC sp_change_users_login 'Auto_Fix', 'user', 'login', 'password' 121 122 Chapter 4: Restoring From Full Backup Don't let these issues dissuade you from performing full restores as and when necessary. Diligence is a great trait in a DBA, especially in regard to security. If you apply this diligence, keeping a keen eye out when restoring databases between mismatched environments, or when dealing with highly sensitive data of any kind, then you'll be fine. Performing Full Restores We are now ready to jump in and start restoring databases! This chapter will mimic the structure of Chapter 3, in that we'll first perform a full restore the "GUI way," in SSMS, and then by using native T-SQL RESTORE commands. In Chapter 8, we'll perform full restores using the Red Gate SQL Backup tool. Native SSMS GUI full backup restore Using the SSMS GUI, we're going to restore the first of the two full backups (Database- ForFullBackups_Full_Native_1.bak) that we took in Chapter 3, which was taken when the database contained about 500 MB of data. First, however, we need to decide whether we are going to restore this file over the current "live" version of the Database- ForFullBackups database, or simply create a new database. In this case, we are going to restore over the existing database, which is a common requirement when, for example, providing a weekly refresh of a development database. But wait, you might be thinking, the current version of DatabaseForFullBackups contains about 1 GB of data. If we do this, aren't we going to lose half that data? Indeed we are, but rest assured that all of those precious rows of data are safe in our second full database backup file, and we'll be bringing that data backup to life later in this chapter. So, go ahead and start SSMS, connect to your test instance, and then expand the databases tree menu as shown in Figure 123 Chapter 4: Restoring From Full Backup Figure 4-1: Getting SSMS prepared for restoration. To start the restore process, right-click on the database in question, DatabaseForFull- Backups, and navigate Tasks Restore Database..., as shown in Figure 4-2. This will initiate the Restore wizard. Figure 4-2: Starting the database restore wizard. 123 124 Chapter 4: Restoring From Full Backup The Restore Database window appears, with some options auto-populated. For example, the name of the database we're restoring to is auto-filled to be the same as the source database that was backed up. Perhaps more surprisingly, the backup set to restore is also auto-populated, as shown in Figure 4-3. What's happened is that SQL Server has inspected some of the system tables in the msdb database and located the backups that have already been taken for this database. Depending on how long ago you completed the backups in Chapter 3, the window will be populated with the backup sets taken in that chapter, letting us choose which set to restore. Figure 4-3: The Restore Database screen. 124 125 Chapter 4: Restoring From Full Backup We are not going to be using this pre-populated form, but will instead configure the restore process by hand, so that we restore our first full backup file. In the Source for restore section, choose the From device option and then click the ellipsis button ( ). In the Specify Backup window, make sure that the media type shows File, and click the Add button. In the Locate Backup File window, navigate to the C:\SQLBackups\Chapter3 folder and click on the DatabaseForFullBackups_Full_Native_1.bak backup file. Click OK twice to get back to the Restore Database window. We will now be able to see which backups are contained in the selected backup set. Since we only ever stored one backup per file, we only see one backup. Tick the box under the Restore column to select that backup file as the basis for the restore process, as shown in Figure 4-4. Figure 4-4: General configurations for full native restore. Next, click to the Options page on the left side of the restore configuration window. This will bring us to a whole new section of options to modify and validate (see Figure 4-5). 125 126 Chapter 4: Restoring From Full Backup Figure 4-5: The options page configurations for a full database restore. The top of the screen shows four Restore options as shown below. Overwrite the existing database the generated T-SQL command will include the REPLACE option, instructing SQL Server to overwrite the currently existing database information. Since we are overwriting an existing database, in this example we want to check this box. Note that it is advised to use the REPLACE option with care, due to the potential for overwriting a database with a backup of a different database. See 126 127 Chapter 4: Restoring From Full Backup Preserve the replication settings only for use in a replication-enabled environment. Basically allows you to re-initialize replication, after a restore, without having to reconfigure all the replication settings. Prompt before restoring each backup receive a prompt before each of the backup files is processed. We only have one backup file here, so leave this unchecked. Restrict access to the restored database restricts access to the restored database to only members of the database role db_owner and the two server roles sysadmin and dbcreator. Again, don't select this option here. The next portion of the Options page is the Restore the database files as: section, where we specify the location for the data (mdf) and log (ldf) files for the restored database. This will be auto-populated with the location of the original files, on which the backup was based. We have the option to move them to a new location on our drives but, for now, let's leave them in their original location, although it's wise to double-check that this location is correct for your system. Finally, we have the Recovery state section, where we specify the state in which the database should be left once the current backup file has been restored. If there are no further files to restore, and we wish to return the database to a useable state, we pick the first option, RESTORE WITH RECOVERY. When the restore process is run, the backup file will be restored and then the final step of the restore process, database recovery (see Chapter 1), will be carried out. This is the option to choose here, since we're restoring just a single full database backup. We'll cover the other two options later in the book, so we won't consider them further here. Our restore is configured and ready to go, so click OK and wait for the progress section of the restore window to notify us that the operation has successfully completed. If the restore operating doesn't show any progress, the probable reason is that there is another active connection to the database, which will prevent the restore operation from starting. Stop the restore, close any other connections and try again. A convenience of scripting, as we'll see a little later, is that we can check for, and close, any other connections before we attempt the restore operation. 127 128 How do we know it worked? Chapter 4: Restoring From Full Backup Let's run a few quick queries, shown in Listing 4-1, against our newly restored DatabaseForFullBackups database to verify that the data that we expect to be here is actually here. USE [DatabaseForFullBackups] SELECT MessageData, COUNT(MessageData) AS MessageCount FROM MessageTable1 GROUP BY MessageData SELECT MessageData, COUNT(MessageData) AS MessageCount FROM MessageTable2 GROUP BY MessageData Listing 4-1: Checking our restored data. The first query should return a million rows, each containing the same message. The second query, if everything worked in the way we intended, should return no rows. Collecting restore metrics Having run the full backup, in Chapter 3, we were able to interrogate msdb to gather some metrics on how long the backup process took. Unfortunately, when we run a restore process using SSMS, there is no record of the length of the operation, not even in the SQL Server log files. The only way to capture the time is to script out the RESTORE command and run the T-SQL. I won't show this script here, as we're going to run a T-SQL RESTORE command very shortly, but if you want to see the stats for yourself, right now, you'll need to 128 129 Chapter 4: Restoring From Full Backup re-create the Restore Database pages as we had them configured in Figures , and then click the Script drop-down button, and select Script Action to New Query Window. This will generate the T-SQL RESTORE command that will be the exact equivalent of what would be generated under the covers when running the process through the GUI. When I ran this T-SQL command on my test system, the restore took just under 29 seconds and processed 62,689 pages of data, as shown in Figure 4-6. Figure 4-6: Full native restore output. Native T-SQL full restore We are now going to perform a second full database restore, this time using a T-SQL script, and the second full backup file from Chapter 3 (DatabaseForFullBackups_ Full_Native_2.bak), which was taken after we pushed another 500 MB of data into the database, bringing the total size of the database to just under 1 GB. You may recall from Chapter 3 that doubling the size of the database did increase the backup time, but it was not a linear increase. We'll see if we get similar behavior when performing restores. Once again, we are going to overwrite the existing DatabaseForFullBackups database. This means that we want to kill all connections that are open before we begin our RESTORE operation, which we can do in one of two ways. The first involves going through the list of processes in master.sys.sysprocesses and killing each SPID associated with the database in question. However, this doesn't always do the trick, since it won't kill connections that run on a different database, but access tables in the database we wish to restore. We need a global way to stop any user process that accesses the database in question. 129 130 Chapter 4: Restoring From Full Backup For this reason, the second and most common way is to place the database into OFFLINE mode for a short period. This will drop all connections and terminate any queries currently processing against the database, which can then immediately be switched back to ONLINE mode, for the restore process to begin. Just be sure not to kill any connections that are processing important data. Even in development, we need to let users know before we just go wiping out currently running queries. Here, we'll be employing the second technique and so, in Listing 4-2, you'll see that we set the database to OFFLINE mode and use option, WITH ROLLBACK IMMEDIATE, which instructs SQL Server to roll those processes back immediately, without waiting for them to COMMIT. Alternatively, we could have specified WITH ROLLBACK AFTER XX SECONDS, where XX is the number of seconds SQL Server will wait before it will automatically start rollback procedures. We can then return the database to ONLINE mode, free of connections and ready to start the restore process. USE [master] ALTER DATABASE [DatabaseForFullBackups] SET OFFLINE WITH ROLLBACK IMMEDIATE ALTER DATABASE [DatabaseForFullBackups] SET ONLINE Listing 4-2: Dropping all user connections before a restore. Go ahead and give this a try. Open two query windows in SSMS; in one of them, start the long-running query shown in Listing 4-3 then, in the second window, run Listing 4-2. USE [DatabaseForFullBackups] WAITFOR DELAY '00:10:00' Listing 4-3: A long-running query. 130 131 Chapter 4: Restoring From Full Backup You'll see that the session with the long-running query was terminated, reporting a severe error and advising that any results returned be discarded. In the absence of a third-party tool, which will automatically take care of existing sessions before performing a restore, this is a handy script. You can include it in any backup scripts you use or, perhaps, convert it into a stored procedure which is always a good idea for reusable code. Now that we no longer have to worry about pesky user connections interfering with our restore process, we can go ahead and run the T-SQL RESTORE command in Listing 4-4. USE [master] RESTORE DATABASE [DatabaseForFullBackups] FROM DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_2.bak' WITH FILE = 1, STATS = 25 Listing 4-4: Native SQL Server full backup restore. The RESTORE DATABASE command denotes that we wish to restore a full backup file for the DatabaseForFullBackups database. The next portion of the script configures the name and location of the backup file to be restored. If you chose a different name or location for this file, you'll need to amend this line accordingly. Finally, we specify a number of WITH options. The FILE argument identifies the backup set to be restored, within our backup file. As discussed in Chapter 2, backup files can hold more than one backup set, in which case we need to explicitly identify the number of the backup set within the file. Our policy in this book is "one backup set per file," so we'll always set FILE to a value of 1. The STATS argument is also one we've seen before, and specifies the time intervals at which SQL Server should update the "backup progress" messages. Here, we specify a message at 25% completion intervals. Notice that even though we are overwriting an existing database without starting with a tail log backup, we do not specify the REPLACE option here, since DatabaseForFull- Backups is a SIMPLE recovery model database, so the tail log backup is not possible. SQL 131 132 Chapter 4: Restoring From Full Backup Server will still overwrite any existing database on the server called DatabaseForFull- Backups, using the same logical file names for the data and log files that are recorded within the backup file. In such cases, we don't need to specify any of the file names or paths for the data or log files. Note, though, that this only works if the file structure is the same! The backup file contains the data and log file information, including the location to which to restore the data and log files so, if we are restoring a database to a different machine from the original, and the drive letters, for instance, don't match up, we will need to use the WITH MOVE argument to point SQL Server to a new location for the data and log files. This will also be a necessity if we need to restore the database on the same server with a different name. Of course, SQL Server won't be able to overwrite any data or log files if they are still in use by the original database. We'll cover this topic in more detail later in this chapter, and again in Chapter 6. Go ahead and run the RESTORE command. Once it is done executing, do not close the query session, as the query output contains some metrics that we want to record. We can verify that the restore process worked, at this point, by simply opening a new query window and executing the code from Listing 4-1. This time the first query should return a million rows containing the same message, and the second query should also return a million rows containing the same, slightly different, message. Collecting restore metrics, Part 2 Let's take a look at the Message window, where SQL Server directs non-dataset output, to view some metrics for our native T-SQL restore process, as shown in Figure 4-7. The first full restore of the 500 MB database processed 62,689 pages and took almost 29 seconds, on my test server. This restore, of the 1 GB database, processed roughly double the number of pages (125,201) and took roughly twice as long (almost 67 seconds). So, our database restore timings seem to exhibit more linear behavior than was observed for backup time. 132 133 Chapter 4: Restoring From Full Backup Figure 4-7: Results from the second native full backup restore. Before we move on, you may be wondering whether any special options or commands are necessary if restoring a native SQL Server backup file that is compressed. The answer is "No;" it is exactly the same process as restoring a normal backup file. Forcing Restore Failures for Fun The slight problem with the friendly demos found in most technical books is that the reader is set up for success, and often ends up bewildered when errors start occurring. As such, through this book, we'll be looking at common sources of error when backing up and restoring databases, and how you can expect SQL Server to respond. Hopefully this will better arm you to deal with such unexpected errors, as and when they occur in the real world. Here, we're going to start with a pretty blatant mistake, but nevertheless one that I've seen novices make. The intent of the code in Listing 4-5 is, we will assume, to create a copy of DatabaseForFullBackups as it existed when the referenced backup file was taken, and name the new copy DatabaseForFullBackups2. 133 134 Chapter 4: Restoring From Full Backup RESTORE DATABASE [DatabaseForFullBackups2] FROM DISK = N'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_2.bak' WITH RECOVERY Listing 4-5 A RESTORE command that will fail. Assuming you have not deleted the DatabaseForFullBackups database, attempting to run Listing 4-5 will result in the following error message (truncated for brevity; basically the same messages are repeated for the log file): Msg 1834, Level 16, State 1, Line 2 The file 'C:\SQLData\DatabaseForFullBackups.mdf' cannot be overwritten. It is being used by database 'DatabaseForFullBackups'. Msg 3156, Level 16, State 4, Line 2 File 'DatabaseForFullBackups' cannot be restored to ' C:\SQLData\DatabaseForFullBackups.mdf'. Use WITH MOVE to identify a valid location for the file. The problem we have here, and even the solution, is clearly stated by the error messages. In Listing 4-5, SQL Server attempts to use, for the DatabaseForFullBackups2 database being restored, the same file names and paths for the data and log files as are being used for the existing DatabaseForFullBackups database, which was the source of the backup file. In other words, it's trying to create data and log files for the DatabaseForFullBackups2 database, by overwriting data and log files that are being used by the DatabaseForFullBackups database. We obviously can't do that without causing the DatabaseForFullBackups database to fail. We will have to either drop the first database to free those file names or, more likely, and as the second part of the error massage suggests, identify a valid location for the log and data files for the new, using WITH MOVE, as shown in Listing 135 Chapter 4: Restoring From Full Backup RESTORE DATABASE [DatabaseForFullBackups2] FROM DISK = 'C:\SQLBackups\Chapter3\DatabaseForFullBackups_Full_Native_2.bak' WITH RECOVERY, MOVE 'DatabaseForFullBackups' TO 'C:\SQLData\DatabaseForFullBackups2.mdf', MOVE 'DatabaseForFullBackups_log' TO 'C:\SQLData\DatabaseForFullBackups2_log.ldf' Listing 4-6: A RESTORE command that renames the data and log files for the new database. We had two choices to fix the script; we could either rename the files and keep them in the same directory or keep the file names the same but put them in a different directory. It can get very confusing if we have a database with the same physical file name as another database, so renaming the files to match the database name seems like the best solution. Let's take a look at a somewhat subtler error. For this example, imagine that we wish to replace an existing copy of the DatabaseForFullBackups2 test database with a production backup of DatabaseForFullBackups. At the same time, we wish to move the data and log files for the DatabaseForFullBackups2 test database over to a new drive, with more space. USE master go RESTORE DATABASE [DatabaseForFullBackups2] FROM DISK = 'C:\SQLBackups\DatabaseForFileBackups_Full_Native_1.bak' WITH RECOVERY, REPLACE, MOVE 'DatabaseForFileBackups' TO 'D:\SQLData\DatabaseForFileBackups2.mdf', MOVE 'DatabaseForFileBackups_log' TO 'D:\SQLData\DatabaseForFileBackups2_log.ldf' Listing 4-7: An "error" while restoring over an existing database. 135 136 Chapter 4: Restoring From Full Backup In fact, no error message at all will result from running this code; it will succeed. Nevertheless, a serious mistake has occurred here: we have inadvertently chosen a backup file for the wrong database, DatabaseForFileBackups instead of DatabaseForFull- Backups, and used it to overwrite our existing DatabaseForFullBackups2 database! This highlights the potential issue with misuse of the REPLACE option. We can presume a DBA has used it here because the existing database is being replaced, without performing a tail log backup (see Chapter 6 for more details). However, there are two problems with this, in this case. Firstly, DatabaseForFullBackups2 is a SIMPLE recovery model database and so REPLACE is not required from the point of view of bypassing a tail log backup, since log backups are not possible. Secondly, use of REPLACE has bypassed the normal safety check that SQL Server would perform to ensure the database in the backup matches the database over which we are restoring. If we had run the exact same code as shown in Listing 4-7, but without the REPLACE option, we'd have received the following, very useful error message: Msg 3154, Level 16, State 4, Line 1 The backup set holds a backup of a database other than the existing 'DatabaseForFullBackups2' database. Msg 3013, Level 16, State 1, Line 1 RESTORE DATABASE is terminating abnormally. Note that we don't have any further use for the DatabaseForFullBackups2 database, so once you've completed the example, you can go ahead and delete it. Considerations When Restoring to a Different Location When restoring a database to a different server or even a different instance on the same server, there are quite a few things to consider, both before starting and after completing the restore operation. 136 137 Chapter 4: Restoring From Full Backup Version/edition of SQL Server used in the source and destination You may receive a request to restore a SQL Server 2008 R2 database backup to a SQL Server 2005 server, which is not a possibility. Likewise, it is not possible to restore a backup of a database that is using enterprise-only options (CDC, transparent data encryption, data compression, partitioning) to a SQL Server Standard Edition instance. What SQL Server agent jobs or DTS/DTSX packages might be affected? If you are moving the database permanently to a new server, you need to find which jobs and packages that use this database will be affected and adjust them accordingly. Also, depending on how you configure your database maintenance jobs, you may need to add the new database to the list of databases to be maintained. What orphaned users will need to be fixed? What permissions should be removed? There may be SQL Logins with differing SIDs that we need to fix. There may be SQL logins and Active Directory users that don't need access to the new server. You need to be sure to comb the permissions and security of the new location before signing off the restore as complete. Restoring System Databases As discussed briefly in Chapter 2, there are occasions when we may need to restore one of the system databases, such as master, model or msdb, either due to the loss of one of these databases or, less tragically, the loss of a SQL agent job, for example. Restoring system databases in advance of user databases can also be a time saver. Imagine that we need to migrate an entire SQL Server instance to new hardware. We could restore the master, model and msdb databases and already have our permissions, logins, jobs and a lot of other configuration taken care of in advance. In the case of an emergency, of course, knowledge of how to perform system database restores is essential. 137 138 Chapter 4: Restoring From Full Backup In this section, we'll look at how to perform a restore of both the master and the msdb system databases, so the first thing we need to do is make sure we have valid backups of these databases, as shown in Listing 4-8. USE [master] BACKUP DATABASE [master] TO DISK = N'C:\SQLBackups\Chapter4\master_full.bak' WITH INIT BACKUP DATABASE [msdb] TO DISK = N'C:\SQLBackups\Chapter4\msdb_full.bak' WITH INIT BACKUP DATABASE [model] TO DISK = N'C:\SQLBackups\Chapter4\model_full.bak' WITH INIT Listing 4-8: Taking backups of our system databases. Restoring the msdb database We can restore the msdb or model database without making any special modifications to the SQL Server engine or the way it is running, which makes it a relatively straightforward process (compared to restoring the master database). We will work with the msdb database for the examples in this section. In order to restore the msdb database, SQL Server needs to be able to take an exclusive lock on it, which means that we must be sure to turn off any applications that might be using it; specifically SQL Server Agent. There are several ways to do this, and you can choose the one with which you are most comfortable. For example, we can stop it directly from SSMS, use the NET STOP command in a command prompt, use a command script, or stop it from the services snap-in tool. 138 139 Chapter 4: Restoring From Full Backup We'll choose the latter option, since we can use the services snap-in tool to view and control all of the services running on our test machine. To start up this tool, simply pull up the Run prompt and type in services.msc while connected locally or through RDP to the test SQL Server machine. This will bring up the services snap-in within the Microsoft Management Console (MMC). Scroll down until you locate any services labeled SQL Server Agent (instance); the instance portion will either contain the unique instance name, or contain MSSQLSERVER, if it is the default instance. Highlight the agent service, right-click and select Stop from the control menu to bring the SQL Server Agent to a halt, as shown in Figure 4-8. Figure 4-8: Stopping SQL Server Agent with the services snap-in With the service stopped (the status column should now be blank), the agent is offline and we can proceed with the full database backup restore, as shown in Listing 140 Chapter 4: Restoring From Full Backup USE [master] RESTORE DATABASE [msdb] FROM DISK = N'C:\SQLBackups\Chapter4\msdb_full.bak' Listing 4-9: Restoring the msdb database. With the backup complete, restart the SQL Server Agent service from the services MMC snap-in tool and you'll find that all jobs, schedules, operators, and everything else stored in the msdb database, are all back and ready for use. This is a very simple task, with only the small change being that we need to shut down a service before performing the restore. Don't close the services tool yet, though, as we will need it to restore the master database. Restoring the master database The master database is the control database for a SQL Server instance, and restoring it is a slightly trickier task; we can't just restore master while SQL Server is running in standard configuration. The first thing we need to do is turn the SQL Server engine service off! Go back to the services management tool, find the service named SQL Server (instance) and stop it, as described previously. You may be prompted with warnings that other services will have to be stopped as well; go ahead and let them shut down. Once SQL Server is offline, we need to start it again, but using a special startup parameter. In this case, we want to use the m switch to start SQL Server in single-user mode. This brings SQL Server back online but allows only one user (an administrator) to connect, which is enough to allow the restore of the master database. 140 141 Chapter 4: Restoring From Full Backup To start SQL Server in single-user mode, open a command prompt and browse to the SQL Server installation folder, which contains the sqlservr.exe file. Here are the default locations for both SQL Server 2008 and 2008 R2: <Installation Path>\MSSQL10.MSSQLSERVER\MSSQL\Binn <Installation Path>\MSSQL10_50.MSSQLSERVER\MSSQL\Binn From that location, issue the command sqlservr.exe m. SQL Server will begin the startup process, and you'll see a number of messages to this effect, culminating (hopefully) in a Recovery is complete message, as shown in Figure 4-9. Figure 4-9: Recovery is complete and SQL Server is ready for admin connection. Once SQL Server is ready for a connection, open a second command prompt and connect to your test SQL Server with sqlcmd. Two examples of how to do this are given below, the first when using a trusted connection and the second for a SQL Login authenticated connection. 141 142 Chapter 4: Restoring From Full Backup sqlcmd -SYOURSERVER E sqlcmd SYOURSERVER UloginName Ppassword At the sqlcmd prompt, we'll perform a standard restore to the default location for the master database, as shown in Listing 4-10 (if required, we could have used the MOVE option to change the master database location or physical file). RESTORE DATABASE [master] FROM DISK = 'C:\SQLBackups\Chapter4\master_full.bak' Listing 4-10: Restoring the master database. In the first sqlcmd prompt, you should see a standard restore output message noting the number of pages processed, notification of the success of the operation, and a message stating that SQL Server is being shut down, as shown in Figure Figure 4-10: Output from restore of master database. 142 143 Chapter 4: Restoring From Full Backup Since we just restored the master database, we need the server to start normally to pick up and process all of the internal changes, so we can now start the SQL Server in normal mode to verify that everything is back online and working fine. You have now successfully restored the master database! Summary Full database backups are the cornerstone of a DBA's backup and recovery strategy. However, these backups are only useful if they can be used successfully to restore a database to the required state in the event of data loss, hardware failure, or some other disaster. Hopefully, as a DBA, the need to restore a database to recover from disaster will be a rare event, but when it happens, you need to be 100% sure that it's going to work; your organization, and your career as a DBA, my depend on it. Practice test restores for your critical databases on a regular schedule! Of course, many restore processes won't be as simple as restoring the latest full backup. Log backups will likely be involved, for restoring a database to a specific point in time, and this is where things get more interesting. 143 144 Chapter 5: Log Backups When determining a backup strategy and schedule for a given database, one of the major considerations is the extent to which potential data loss can be tolerated in the event of an errant process, or software or hardware failure. If toleration of data loss is, say, 24 hours, then we need do nothing more than take a nightly full backup. However, if exposure to the risk of data loss is much lower than this for a given database, then it's likely that we'll need to operate that database in FULL recovery model, and supplement those nightly full backups with transaction log backups (and possibly differential database backups see Chapter 7). With a log backup, we capture the details of all the transactions that have been recorded for that database since the last log backup (or since the last full backup,if this is the firstever log backup). In this chapter, we'll demonstrate how to capture these log backups using either the SSMS GUI or T-SQL scripts. However, we'll start by taking a look at how space is allocated and used within a log file; this is of more than academic interest, since it helps a DBA understand and troubleshoot certain common issues relating to the transaction log, such as explosive log growth, or internal log fragmentation. Capturing an unbroken sequence of log backups means that we will then, in Chapter 6, be able restore a full backup, then apply this series of log backups to "roll forward" the database to the state in which it existed at various, successive points in time. This adds a great deal of flexibility to our restore operations. When capturing only full (and differential) database backups, all we can do is restore one of those backups, in its entirety. With log backups, we can restore a database to the state in which it existed when a given log backup completed, or to the state in which it existed at some point represented within that log backup. 144 145 Chapter 5: Log Backups A Brief Peek Inside a Transaction Log A DBA, going about his or her daily chores, ought not to be overly concerned with the internal structure of the transaction log. Nevertheless, some discussion on this topic is very helpful in understanding the appropriate log maintenance techniques, and especially in understanding the possible root cause of problems such as log file fragmentation, or a log file that is continuing to grow and grow in size, despite frequent log backups. However, we will keep this "internals" discussion as brief as possible. As discussed in Chapter 1, a transaction log stores a record of the operations that have been performed on the database with which it is associated. Each log record contains the details of a specific change, relating to object creation/modification (DDL) operations, as well any data modification (DML) operations. When SQL Server undergoes database recovery (for example, upon start up, or during a RESTORE operation), it, will roll back (undo) or roll forward (redo) the actions described in these log records, as necessary, in order to reconcile the data and log files, and return the database to a consistent state. Transaction log files are sequential files; in other words SQL Server writes to the transaction log sequentially (unlike data files, which tend to be written in a random fashion, as data is modified in random data pages). Each log record inserted into the log file is stamped with a Log 5-1 depicts a transaction log composed of eight VLFs, and marks the active portion of the log, known as the active log. 145 146 Chapter 5: Log Backups Figure 5-1: A transaction log with 8 VLFs. The concept of the active log is an important one. A VLF can either be "active," if it contains any part of what is termed the active log, or "inactive," if it doesn't. Any log record relating to an open transaction is required for possible rollback and so must be part of the active log. In addition, there are various other activities in the database, including replication, mirroring and CDC (Change Data Capture) that use the transaction log and need transaction log records to remain in the log until they have been processed. These records will also be part of the active log. The log record with the MinLSN, shown in Figure 5-1, is defined as the "oldest log record that is required for a successful database-wide rollback or by another activity or operation in the database." This record marks the start of the active log and is sometimes referred to as the "head" of the log. Any more recent log record, regardless of whether it is still open or required, is also part of the active log; this is an important point as it explains why it's a misconception to think of the active portion of the log as containing only records relating to uncommitted transactions. The log record with the highest LSN (i.e. the most recent record added) marks the end of the active log. 146 147 Chapter 5: Log Backups Therefore, we can see that a log record is no longer part of the active log only when each of the following three conditions below is met. 1. It relates to a transaction that is committed and so is no longer required for rollback. 2. It is no longer required by any other database process, including a transaction log backup when using FULL or BULK LOGGED recovery models. 3. It is older (i.e. has a lower LSN) than the MinLSN record. Any VLF that contains any part of the active log is considered active and can never be truncated. For example, VLF3, in Figure 5-1, is an active VLF, even though most of the log records it contains are not part of the active log; it cannot be truncated until the head of the logs moves forward into VLF4. The operations that will cause the head of the log to move forward vary depending on the recovery model of the database. For databases in the SIMPLE recovery model, the head of the log can move forward upon CHECKPOINT, when pages are flushed from cache to disk, after first being written to the transaction log. As a result of this operation, many log records would now satisfy the first requirement listed above, for no longer being part of the active log. We can imagine that if, as a result, the MinLSN record in Figure 5-1, and all subsequent records in VLF3, satisfied both the first and second criteria, then the head would move forward and VLF3 could now be truncated. Therefore, generally, space inside the log is made available for reuse at regular intervals. Truncation does not reduce the size of the log file It's worth reiterating that truncation does not affect the physical size of the log; it will still take up the same physical space on the drive. Truncation is merely the act of marking VLFs in the log file as available for reuse, in the recording of subsequent transactions. 147 148 Chapter 5: Log Backups For databases using FULL or BULK LOGGED recovery, the head can only move forward as a result of a log backup. Any log record that has not been previously backed up is considered to be still "required" by a log backup operation, and so will never satisfy the second requirement above, and will remain part of the active log. If we imagine that the MinLSN record in Figure 5-1 is the first record added to the log after the previous log backup, then the head will remain in that position till the next log backup, at which point it can move forward (assuming the first requirement is also satisfied). I've stressed this many times, but I'll say it once more for good measure: this is the other reason, in addition to enabling point-in-time restore, why it's so important to back up the log for any database operating in FULL (or BULK_LOGGED) recovery; if you don't, the head of the log is essentially "pinned," space will not be reused, and the log will simply grow and grow in size. The final question to consider is what happens when the active log reaches the end of VLF8. Simplistically, it is easiest to think of space in the log file as being reused in a circular fashion. Once the logical end of the log reaches the end of a VLF, SQL Server will start to reuse the next sequential VLF that is inactive, or the next, so far unused, VLF. In Figure 5-1, this could be VLF8, followed by VLFs 1 and 2, and so. Three uses for transaction log backups The primary reason to take log backups is in order to be able to restore them; in other words, during a RESTORE operation, we can restore a full (plus differential) backup, followed by a complete chain of log backup files. As we restore the series of log backups files, we will essentially "replay" the operations described within in order to re-create the database as it existed at a previous point in time. 148 149 Chapter 5: Log Backups However, the log backups, and subsequent restores, can also be very useful in reducing the time required for database migrations, and for offloading reporting from the Production environment, via log shipping. Performing database restores By performing a series of log backup operations, we can progressively capture the contents of the live log file in a series of log backup files. Once captured in log backup files, the series of log records within these files can, assuming the chain of log records is unbroken, be subsequently applied to full (and differential) database backups as part of a database restore operation. We can restore to the end of one of these backup files, or even to some point in time in the middle of the file, to re-create the database as it existed at a previous point in time, for example, right before a failure. When operating in SIMPLE recovery model, we can only take full and differential backups i.e. we can only back up the data files and not the transaction log. Let's say, for example, we rely solely on full database backups, taken every day at 2 a.m. If a database failure occurs at 1 a.m. one night, all we can do is restore the database as it existed at 2 a.m. the previous day and have lost 23 hours' worth of data. We may be able to reduce the risk of data loss by taking differential backups, in between the nightly full backups. However, both full and differential backups are resource-intensive operations and if you need your risk of data loss to be measured in minutes rather than hours, the only viable option is to operate the database in FULL recovery model and take transaction log backups alongside any full and differential backups. If we operate the database in FULL recovery, and take transaction log backups, say, every 30 minutes then, in the event of our 2 a.m. disaster, we can restore the previous 2 a.m. full backup, followed by a series of 48 log backup files, in order to restore the database as it existed at 1.30 a.m., losing 30 minutes' worth of data. If the live transaction log were still available we could also perform a tail log backup and restore the database to a time directly before the disaster occurred. 149 150 Chapter 5: Log Backups The frequency with which log backups are taken will depend on the tolerable exposure to data loss, as expressed in your Backup and Restore SLA (discussed shortly). Large database migrations It is occasionally necessary to move a database from one server to another, and you may only have a short window of time in which to complete the migration. For example, let's say a server has become overloaded, or the server hardware has reached the end of its life, and a fresh system needs to be put in place. Just before we migrate the database to the new server, we need to disconnect all users (so that no data is committed after your final backup is taken), and our SLA dictates a maximum of an hour down-time, so we've got only one hour to get them all connected again on the new server! Given such constraints, we won't have the time within that window to take a full backup of the database, transfer it across our network, and then restore it on the target server. Fortunately, however, we can take advantage of the small file footprint of the transaction log backup in order to reduce the time required to perform the task. For example, prior to the migration window, we can transfer, to the target server, a full database backup from the night before. We can restore that file to the new server using the WITH NORECOVERY option (discussed in Chapter 6), to put the database into a restoring state and allow transaction log backups to be applied to the database at a later time. After this, we can take small transaction log backups of the migrating database over the period of time until the migration is scheduled. These log backup files are copied over to the target server and applied to our restoring target database (stipulating NORECOVERY as each log backup is applied, to keep the database in a restoring state, so more log backups can be accepted). 150 151 Chapter 5: Log Backups At the point the migration window opens, we can disconnect all users from the original database, take a final log backup, transfer that final file to the target server, and apply it to the restoring database, specifying WITH RECOVERY so that the new database is recovered, and comes online in the same state it was in when you disconnected users from the original. We still need to bear in mind potential complicating factors related to moving databases to different locations, as discussed in Chapter 4. Orphaned users, elevated permissions and connectivity issues would still need to be addressed after the final log was applied to the new database location. Log shipping Almost every DBA has to make provision for business reporting. Often, the reports produced have to be as close to real time as possible, i.e. they must reflect as closely as possible the data as it currently exists in the production databases. However, running reports on a production machine is never a best practice, and the use of High Availability solutions (real-time replication, CDC solutions, log reading solutions, and so on) to get that data to a reporting instance can be expensive and time consuming. Log shipping is an easy and cheap way to get near real-time data to a reporting server. The essence of log shipping is to restore a full database to the reporting server using the WITH STANDBY option, then regularly ship log backup files from the production to the reporting server and apply them to the standby database to update its contents. The STANDBY option will keep the database in a state where more log files can be applied, but will put the database in a read-only state, so that it always reflects the data in the source database at the point when the last applied log backup was taken. 151 152 Chapter 5: Log Backups This means that the reporting database will generally lag behind the production database by minutes or more. This sort of lag is usually not a big problem and, in many cases, log shipping is an easy way to satisfy, not only the production users, but the reporting users as well. Practical log shipping It is out of scope to get into the full details of log shipping here, but the following article offers a practical guide to the process: pop-rivetts-sql-server-faq-no.4-pop-does-log-shipping/. Log Backups in the Backup and Restore SLA As discussed in detail in Chapter 2, determining whether or not log backups are required for a database and, if so, how frequently, will require some conversations with the database owners. The first rule is not to include log backups in the SLA for a given database, unless they are really required. Log backups bring with them considerable extra administrative overhead, with more files to manage, more space required to store them, more jobs to schedule, and so on. While we need to make sure that these operations are performed for every database that needs them, we definitely don't want to take them on all databases, regardless of need. If it's a development database, we most likely won't need to take transaction log backups, and the database can be operated in SIMPLE recovery model. If it's a production database, then talk to the project manager and developers; ask them questions about the change load on the data; find out how often data is inserted or modified, how much data is modified, at what times of the day these modifications take place, and the nature of the data load/modification processes (well defined or ad hoc). 152 153 Chapter 5: Log Backups If it's a database that's rarely, if ever, modified by end-users, but is subject to daily, welldefined data loads, then it's also unlikely that we'll need to perform log backups, so we can operate the database in SIMPLE recovery model. We can take a full database backup after each data load, or simply take a nightly full backup and then, if necessary, restore it, then replay any data load processes that occurred subsequently. If a database is modified frequently by ad hoc end-user processes, and toleration of data loss is low, then it's very likely that transaction log backups will be required. Again, talk with the project team and find out the acceptable level of data loss. You will find that, in most cases, taking log backups once per hour will be sufficient, meaning that the database could lose up to 60 minutes of transactional data. For some databases, an exposure to the risk of data loss of more than 30 or 15 minutes might be unacceptable. The only difference here is that we will have to take, store, and manage many more log backups, and more backup files means more chance of something going wrong; losing a file or having a backup file become corrupted. Refer back to the Backup scheduling section of Chapter 2 for considerations when attempting to schedule all the required backup jobs for a given SQL Server instance. Whichever route is the best for you, the most important thing is that you are taking transaction log backups for databases that require them, and only for those that required them. Preparing for Log Backups In this chapter, we'll run through examples of how to take log backups using both SSMS and T-SQL scripts. In Chapter 8, we'll show how to manage your log and other backups, using a third-party tool (Red Gate SQL Backup) and demonstrate some of the advantages that such tools offer. 153 154 Chapter 5: Log Backups Before we get started taking log backups, however, we need to do a bit of prep work, namely choosing an appropriate recovery model for our example database, and then creating that database along with some populated sample tables. Choosing the recovery model We're going to assume that the Service Level Agreement for our example database expresses a tolerance to potential data loss of no more than 60 minutes. This immediately dictates a need to take log backups at this interval (or shorter), in order that we can restore a database to a state no more than 60 minutes prior to the occurrence of data being lost, or the database going offline. This rules out SIMPLE as a potential recovery model for our database since, as discussed, in this model a database operates in "auto-truncate" mode and any inactive VLFs are made available for reuse whenever a database CHECKPOINT operation occurs. With the inactive VLFs being continuously overwritten, we cannot capture a continuous chain of log records into our log backups, and so can't use these log backups as part of a database RESTORE operation. In fact, it isn't even possible to take log backups for a database that is operating in SIMPLE recovery. This leaves us with a choice of two recovery models: FULL or BULK LOGGED. All databases where log backups are required should be operating in the FULL recovery model, and that's the model we're going to use. However, a database operating in FULL recovery may be switched temporarily to the BULK LOGGED model in order to maximize performance, and minimize the growth of the transaction log, during bulk operations, such as bulk data loads or certain index maintenance operations. When a database is operating in BULK LOGGED model, such operations are only minimally logged, and so require less space in the log file. This can save a DBA the headache of having log files growing out of control, and can save a good deal of time when bulk loading data into your database. 154 155 Chapter 5: Log Backups However, use of BULK LOGGED has implications that make it unsuitable for long-term use in a database where point-in-time restore is required, since it is not possible to restore a database to a point in time within a log file that contains minimally logged operations. We'll discuss this in more detail in the next chapter, along with the best approach to minimizing risk when a database does need to be temporarily switched to BULK LOGGED model. For now, however, we're going to choose the FULL recovery model for our database. Creating the database Now that we know that FULL recovery is the model to use, let's go ahead and get our new database created so we can begin taking some log backups. We are going to stick with the naming convention established in previous chapters, and call this new database DatabaseForLogBackups. We can either create the database via the SSMS GUI or use the script shown in Listing 5-1. USE [master] CREATE DATABASE [DatabaseForLogBackups] ON PRIMARY ( NAME = N'DatabaseForLogBackups', FILENAME = N'C:\SQLData\DatabaseForLogBackups.mdf', SIZE = KB, FILEGROWTH = 51200KB ) LOG ON ( NAME = N'DatabaseForLogBackups_log', FILENAME = N'C:\SQLData\DatabaseForLogBackups_log.ldf', SIZE = 51200KB, FILEGROWTH = 51200KB ) ALTER DATABASE [DatabaseForLogBackups] SET RECOVERY FULL Listing 5-1: Creating our new DatabaseForLogBackups database. 155 156 Chapter 5: Log Backups This script will create for us a new DatabaseForLogBackups database, with the data and log files for this database stored in the C:\SQLData directory. Note that, if we didn't specify the FILENAME option, then the files would be auto-named and placed in the default directory for that version of SQL Server (for example, in SQL Server 2008, this is \Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA). We have assigned some appropriate values for the initial sizes of these files and their file growth characteristics. As discussed in Chapter 3, it's generally not appropriate to accept the default file size and growth settings for a database, and we'll take a deeper look at the specific problem that can arise with a log file that grows frequently in small increments, later in this chapter. For each database, we should size the data and log files as appropriate for their current data requirements, plus predicted growth over a set period. In the case of our simple example database, I know how exactly much data I plan to load into our tables (you'll find out shortly!), so I have chosen an initial data file size that makes sense for that purpose, 500 MB. We don't want the file to grow too much but if it does, we want it to grow in reasonably sized chunks, so I've chosen a growth step of 50 MB. Each time the data file needs to grow, it will grow by 50 MB, which provides enough space for extra data, but not so much that we will have a crazy amount of free space after each growth. For the log file, I've chosen an initial size of 50 MB, and I am allowing it to grow by an additional 50 MB whenever it needs more room to store transactions. Immediately after creating the database, we run an ALTER DATABASE command to ensure that our database is set up to use our chosen recovery mode, namely FULL. This is very important, especially if the model database on the SQL Server instance is set to a different recovery model, since all users' databases will inherit that setting. Now that we have a new database, set to use the FULL recovery model, we can go ahead and start creating and populating the tables we need for our log backup tests. 156 157 Chapter 5: Log Backups Creating and populating tables We are going to use several simple tables that we will populate with a small initial data load. Subsequently, we'll take a full database backup and then perform another data load. This will make it possible to track our progress as we restore our log backups, in the next chapter. USE [DatabaseForLogBackups] SET ANSI_NULLS ON SET QUOTED_IDENTIFIER ON CREATE TABLE [dbo].[messagetable1] ( [Message] [nvarchar](100) NOT NULL, [DateTimeStamp] [datetime2] NOT NULL ) ON [PRIMARY] CREATE TABLE [dbo].[messagetable2] ( [Message] [nvarchar](100) NOT NULL, [DateTimeStamp] [datetime2] NOT NULL ) ON [PRIMARY] CREATE TABLE [dbo].[messagetable3] ( [Message] [nvarchar](100) NOT NULL, [DateTimeStamp] [datetime2] NOT NULL ) ON [PRIMARY] Listing 5-2: Creating the tables. 157 158 Chapter 5: Log Backups Listing 5-2 creates three simple message tables, each of which will store simple text messages and a time stamp so that we can see exactly when each message was inserted into the table. Having created our three tables, we can now pump a bit of data into them. We'll use the same technique as in Chapter 3, i.e. a series of INSERT commands, each with the X batch separator, to insert ten rows into each of the three tables, as shown in Listing 5-3. USE [DatabaseForLogBackups] INSERT INTO dbo.messagetable1 VALUES ('This is the initial data for MessageTable1', GETDATE()) 10 INSERT INTO dbo.messagetable2 VALUES ('This is the initial data for MessageTable2', GETDATE()) 10 INSERT INTO dbo.messagetable3 VALUES ('This is the initial data for MessageTable3', GETDATE()) 10 Listing 5-3: Initial population of tables. We'll be performing a subsequent data load shortly, but for now we have a good base of data from which to work, and we have a very important step to perform before we can even think about taking log backups. Even though we set the recovery model to FULL, we won't be able to take log backups (in other words, the database is still effectively in autotruncate mode) until we've first performed a full backup. Before we do that, however, let's take a quick look at the current size of our log file, and its space utilization, using the DBCC SQLPERF (LOGSPACE); command. This will return results for all databases on the instance, but for the DatabaseForLogBackups we should see results similar to those shown in Figure 159 Chapter 5: Log Backups Backup Stage Log Size Space Used Before full backup 50 MB 0.65 % Figure 5-2: DBCC SQLPERF (LOGSPACE) output before full backup. This shows us that we have a 50 MB log file and that it is only using a little more than one half of one percent of that space. Taking a base full database backup Before we can even take log backups for our DatabaseForLogBackups database, we need to take a base full backup of the data file. We've covered all the details of taking full backups in Chapter 3, so simply run Listing 5-4 to take a full backup of the database, placing the backups in a central folder location, in this case C:\SQLBackups\Chapter5 so they are readily available for restores. You've probably noticed that, for our simple example, the backup files are being stored on the same drive as the data and log files. In the real world, you'd store the data, log, and backup files on three separate disk drives. USE [master] BACKUP DATABASE [DatabaseForLogBackups] TO DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Full.bak' WITH NAME = N'DatabaseForLogBackups-Full Database Backup', STATS = 10, INIT Listing 5-4: Taking a native full backup. 159 160 Chapter 5: Log Backups So, at this stage, we've captured a full backup of our new database, containing three tables, each with ten rows of data. We're ready to start taking log backups now, but let's run the DBCC SQLPERF(LOGSPACE) command again, and see what happened to our log space. Backup Stage Log Size Space Used Before full backup 50 MB 0.65 % After full backup 50 MB 0.73% Figure 5-3: DBCC SQLPERF (LOGSPACE) output after full backup. What's actually happened here isn't immediately apparent from these figures, so it needs a little explanation. We've discussed earlier how, for a FULL recovery model database, only a log backup can free up log space for reuse. This is true, but the point to remember is that such a database is actually operating in auto-truncate mode until the first full backup of that database completes. The log is truncated as a result of this first-ever full backup and, from that point on, the database is truly in FULL recovery, and a full backup will never cause log truncation. So, hidden in our figures is the fact that the log was truncated as a result of our first full backup, and the any space taken up by the rows we added was made available for reuse. Some space in the log would have been required to record the fact that a full backup took place, but overall the space used shows very little change. Later in the chapter, when taking log backups with T-SQL, we'll track what happens to these log space statistics as we load a large amount of data into our tables, and then take a subsequent log backup. 160 161 Chapter 5: Log Backups Taking Log Backups We're going to discuss taking log backups the "GUI way," in SSMS, and by using native T-SQL Backup commands. However, before taking that elusive first log backup, let's quickly insert ten new rows of data into each of our three tables, as shown in Listing 5-5. Having done so, we'll have ten rows of data for each table that is captured in a full database backup, and another ten rows for each table that is not in the full database backup, but where the details of the modifications are recorded in the live transaction log file. USE [DatabaseForLogBackups] INSERT INTO MessageTable1 VALUES ('Second set of data for MessageTable1', GETDATE()) 10 INSERT INTO MessageTable2 VALUES ('Second set of data for MessageTable2', GETDATE()) 10 INSERT INTO MessageTable3 VALUES ('Second set of data for MessageTable3', GETDATE()) 10 Listing 5-5: A second data load. The GUI way: native SSMS log backups Open SSMS, connect to your test server, locate the DatabaseForLogBackups database, right-click on it and select the Tasks Back Up option. Select the Back Up menu item to bring up the General page of the Back Up Database window, with which you should be familiar from Chapter 3, when we performed full database backups. The first set of configurable options is shown in Figure 162 Chapter 5: Log Backups Figure 5-4: The Source section configuration for log backup. Notice that we've selected the DatabaseForLogBackups database and, this time, we've changed the Backup type option to Transaction Log, since we want to take log backups. Once again, we leave the Copy-only backup option unchecked. COPY_ONLY backups of the log file When this option is used, when taking a transaction log backup, the log archiving point is not affected and does not affect the rest of our sequential log backup files. The transactions contained within the log file are backed up, but the log file is not truncated. This means that the special COPY_ONLY log backup can be used independently of conventional, scheduled log backups, and would not be needed when performing a restore that included the time span where this log backup was taken. The Backup component portion of the configuration is used to specify full database backup versus file or filegroup backup; we're backing up the transaction log here, not the data files, so these options are disabled. The second set of configurable options for our log backup, shown in Figure 5-5, is titled Backup set and deals with descriptions and expiration dates. The name field is auto-filled and the description will be left blank. The backup set expiration date can also be left with 162 163 Chapter 5: Log Backups the default values. As discussed in Chapter 3, since we are only storing one log backup set per file, we do not need to worry about making sure backup sets expire at a given time. Storing multiple log backups would reduce the number of backup files that we need to manage, but it would cause that single file to grow considerably larger with each backup stored in it. We would also run the risk of losing more than just one backup if the file were to become corrupted or was lost. Figure 5-5: The Backup set section configuration for log backups. The final section of the configuration options is titled Destination, where we specify where to store the log backup file and what it will be called. If there is a file already selected for use, click the Remove button because we want to choose a fresh file and location. Now, click the Add button to bring up the backup destination selection window. Click the browse ( ) button and navigate to our chosen backup file location (C:\SQLBackups\ Chapter5) and enter the file name DatabaseForLogBackups_Native_Log_1.trn at the bottom, as shown in Figure 164 Chapter 5: Log Backups Figure 5-6: Selecting the path and filename for your log backup file. Note that, while a full database backup file is identified conventionally by the.bak extension, most DBAs identify log backup files with the.trn extension. You can use whatever extension you like, but this is the standard extension for native log backups and it makes things much less confusing if everyone sticks to a familiar convention. When you're done, click the OK buttons on both the Locate Database Files and Select Backup Destination windows to bring you back to the main backup configuration window. Once here, select the Options menu on the upper left-hand side of the window to bring up the second page of backup options. We are going to focus here on just the Transaction log section of this Options page, shown in Figure 5-7, as all the other options on this page were covered in detail in Chapter 165 Chapter 5: Log Backups Figure 5-7: Configuration options for transaction log backup. The Transaction log section offers two important configuration option configurations. For all routine, daily log backups, the default option of Truncate the transaction log is the one you want; on completion of the log backup the log file will be truncated, if possible (i.e. space inside the file will be made available for reuse). The second option, Back up the tail of the log, is used exclusively in response to a disaster scenario; you've lost data, or the database has become corrupt in some way, and you need to restore it to a previous point in time. Your last action before attempting a RESTORE operation should be to back up the tail of the log, i.e. capture a backup of the remaining contents (those records added since the last log backup) of the live transaction log file, assuming it is still available. This will put the database into a restoring state, and assumes that the next action you wish to perform is a RESTORE. This is a vital option in database recovery, and we'll demonstrate it in Chapter 6, but it won't be a regular maintenance task that is performed. Having reviewed all of the configuration options, go ahead and click OK, and the log backup will begin. A progress meter will appear in the lower left-hand side of the window but since we don't have many transactions to back up, the operation will probably complete very quickly and you'll see a notification that your backup has completed and was successful. If, instead, you receive an error message, you'll need to check your configuration settings and to attempt to find what went wrong. We have successfully taken a transaction log backup! As discussed in Chapter 3, various metrics are available regarding these log backups, such as the time it took to complete, and the size of the log backup file. In my test, the backup time was negligible in this case (measured in milliseconds). However, for busy databases, handling hundreds or thousands of transactions per second, the speed of these log backups can be a very 165 166 Chapter 5: Log Backups important consideration and, as was demonstrated for full backups in Chapter 3, use of backup compression can be beneficial. In Chapter 8, we'll compare the backup speeds and compression ratios obtained for native log backups, versus log backups with native compression, versus log backups with SQL Backup compression. Right now, though, we'll just look at the log backup size. Browse to the C:\SQLBackups\ Chapter5\ folder, or wherever it is that you stored the log backup file and simply note the size of the file. In my case, it was 85 KB. T-SQL log backups Now that we have taken a log backup using the GUI method, we'll insert some new rows into our DatabaseForLogBackups table and then take a second log backup, this time using a T-SQL script. Rerun DBCC SQLPERF(LOGSPACE)before we start. Since we've only added a small amount of data, and have performed a log backup, the live log file is still 50 MB in size, and less than 1% full. Listing 5-6 shows the script to insert our new data; this time we're inserting considerably more rows into each of the three tables (1,000 into the first table, 10,000 into the second and 100,000 into the third) and so our subsequent transaction log backup should also be much larger, as a result. Notice that we print the date and time after inserting the rows into MessageTable2, and then use the WAITFOR DELAY command before proceeding with the final INSERT command; both are to help us when we come to perform some point-in-time restores, in Chapter 167 Chapter 5: Log Backups USE [DatabaseForLogBackups] INSERT INTO dbo.messagetable1 VALUES ( 'Third set of data for MessageTable1', GETDATE() ) 1000 INSERT INTO dbo.messagetable2 VALUES ( 'Third set of data for MessageTable2', GETDATE() ) Note this date down as we'll need it in the next chapter PRINT GETDATE() WAITFOR DELAY '00:02:00' INSERT INTO dbo.messagetable3 VALUES ( 'Third set of data for MessageTable3', GETDATE() ) Listing 5-6: Third data insert for DatabaseForLogBackups (with 2-minute delay). Go ahead and run this script in SSMS (it will take several minutes). Before we perform a T-SQL log backup, let's check once again on the size of the log file and space usage for our DatabaseForLogBackups database. Backup Stage Log Size Space Used Initial stats 50 MB 0.8 % After third data data load 100 MB 55.4% Figure 5-8: DBCC SQLPERF (LOGSPACE) output after big data load. 167 168 Chapter 5: Log Backups We now see a more significant portion of our log being used. The log actually needed to grow above its initial size of 50 MB in order to accommodate this data load, so it jumped in size by 50 MB, which is the growth rate we set when we created the database. 55% of that space is currently in use. When we used the SSMS GUI to build our first transaction log backup, a T-SQL BACKUP LOG command was constructed and executed under the covers. Listing 5-7 simply creates directly in T-SQL a simplified version of the backup command that the GUI would have generated. USE [master] BACKUP LOG [DatabaseForLogBackups] TO DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_2.trn' WITH NAME = N'DatabaseForLogBackups-Transaction Log Backup', STATS = 10 Listing 5-7: Native T-SQL transaction log backup code. The major difference between this script, and the one we saw for a full database backup in Chapter 3, is that the BACKUP DATABASE command is replaced by the BACKUP LOG command, since we want SQL Server to back up the transaction log instead of the data files. We use the TO DISK parameter to specify the backup location and name of the file. We specify our central SQLBackups folder and, once again, apply the convention of using.trn to identify our log backup. The WITH NAME option allows us to give the backup set an appropriate set name. This is informational only, especially since we are only taking one backup per file/set. Finally, we see the familiar STATS option, which dictates how often the console messages will be updated during the operation. With the value set to 10, a console message should display when the backup is (roughly) 10%, 20%, 30% complete, and so on, as shown in Figure 5-9, along with the rest of the statistics from the successful backup. 168 169 Chapter 5: Log Backups Figure 5-9: Successful transaction log backup message. Notice that this log backup took about 3.5 seconds to complete; still not long, but much longer than our first backup took. Also, Figure 5-10 shows that the size of our second log backup file weighs in at 56 MB. This is even larger than our full backup, which is to be expected since we just pumped in 3,700 times more records into the database than were there during the full backup. The log backup file size is also broadly consistent with what we would have predicted, given the stats we got from DBCC SQLPERF (LOGSPACE), which indicated a 100 MB log which was about 55% full. Figure 5-10: Second log backup size check. 169 170 Chapter 5: Log Backups Let's check the log utilization one last time with the DBCC SQLPERF (LOGSPACE) command. Backup Stage Log Size Space Used Initial stats 50 MB 0.8 % After third data data load After T-SQL log backup 100 MB 55.4% 100 MB 5.55% Figure 5-11: DBCC SQLPERF (LOGSPACE) output after big data load. This is exactly the behavior we expect; as a result of the backup operation, the log has been truncated, and space in inactive VLFs made available for reuse, so only about 5% of the space is now in use. As discussed earlier, truncation does not affect the physical size of the log, so it remains at 100 MB. Forcing Log Backup Failures for Fun Having practiced these log backups a few times, it's rare for anything to go wrong with the actual backup process itself. However, it's still possible to stumble into the odd gotcha. Go ahead and run Listing 5-8 in an SSMS query window; try to work out what the error is going to be before you run it. 170 171 Chapter 5: Log Backups CREATE DATABASE [ForceFailure] -- Using the NO_WAIT option so no rollbacks -- of current transactions are attempted ALTER DATABASE [ForceFailure] SET RECOVERY SIMPLE WITH NO_WAIT BACKUP DATABASE [ForceFailure] TO DISK = 'C:\SQLBackups\Chapter5\ForceFailure_Full.bak' BACKUP LOG [ForceFailure] TO DISK = 'C:\SQLBackups\Chapter5\ForceFailure_Log.trn' --Tidy up once we're done as we no longer need this database DROP DATABASE ForceFailure Listing 5-8: Forcing a backup failure. You should see the message output shown in Figure Figure 5-12: Running a log backup on a SIMPLE model database. We can see that the database was successfully created, the BACKUP DATABASE operation was fine, but then we hit a snag with the log backup operation: Msg 4208, Level 16, State 1, Line 1 The statement BACKUP LOG is not allowed while the recovery model is SIMPLE. Use BACKUP DATABASE or change the recovery model using ALTER DATABASE. 171 172 Chapter 5: Log Backups The problem is very clear: we are trying to perform a log backup on a database that is operating in the SIMPLE recovery model, which is not allowed. The exact course of action, on seeing an error like this, depends on the reason why you were trying to perform the log backup. If you simply did not realize that log backups were not possible in this case, then lesson learned. If log backups are required in the SLA for this database, then the fact that the database is in SIMPLE recovery is a serious problem. First, you should switch it to FULL recovery model immediately, and take another full database backup, to restart the log chain. Second, you should find out when and why the database got switched to SIMPLE, and report what the implications are for point-in-time recovery over that period. An interesting case where a DBA might see this error is upon spotting that a log file for a certain database is growing very large, and assuming that the cause is the lack of a log backup. Upon running the BACKUP LOG command, the DBA is surprised to see the database is in fact in SIMPLE recovery. So, why would the log file be growing so large? Isn't log space supposed to be reused after each CHECKPOINT, in this case? We'll discuss possible reasons why you still might see log growth problems, even for databases in SIMPLE recovery, in the next section. Troubleshooting Log Issues The most common problem that DBAs experience with regard to log management is rapid, uncontrolled (or even uncontrollable) growth in the size of the log. In the worst case, a transaction log can grow until there is no further available space on its drive and so it can grow no more. At this point, you'll encounter Error 9002, the "transaction log full" error, and the database will become read-only. If you are experiencing uncontrolled growth of the transaction log, it is due, either to a very high rate of log activity, or to factors that are preventing space in the log file from being reused, or both. 172 173 Chapter 5: Log Backups As a log file grows and grows, a related problem is that the log can become "internally fragmented," if the growth occurs in lots of small increments. This can affect the performance of any operation that needs to read the transaction log (such as database restores). Failure to take log backups Still the most common cause of a rapidly growing log, sadly, is simple mismanagement; in other words failure to take log backups for a database operating in FULL recovery. Since log backups are the only operation that truncates the log, if they aren't performed regularly then your log file may grow rapidly and eat up most or all of the space on your drive. I rest more peacefully at night in the knowledge that, having read this chapter, this is a trap that won't catch you out. If you're asked to troubleshoot a "runaway" transaction log for a FULL recovery model database, then a useful first step is to interrogate the value of the log_reuse_wait_ desc column in sys.databases, as shown in Listing 5-9. USE master SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDatabase' Listing 5-9: Finding possible causes of log growth. If the value returned for the log_reuse_wait_desc column is LOG BACKUP, then the reason for the log growth is the lack of a log backup. If this database requires log backups, start taking them, at a frequency that will satisfy the terms of the SLA, and control the growth of the log from here in. If the database doesn't require point-in-time recovery, switch the database to SIMPLE recovery. 173 174 Chapter 5: Log Backups In either case, if the log has grown unacceptably large (or even full) in the meantime, refer to the forthcoming section on Handling the 9002 Transaction Log Full error. Other factors preventing log truncation There are a few factors, in addition to a lack of log backups, which can "pin" the head of the log, and so prevent space in the file from being reused. Some of these factors can cause the size of the log to grow rapidly, or prevent the log file from being truncated, even when operating in SIMPLE recovery model. The concept of the active log, and the criteria for log truncation, as discussed earlier in the chapter, also explains why a "rogue" long-running transaction, or a transaction that never gets committed, can cause very rapid log growth, regardless of the recovery model in use. If we have a single, very long-running transaction, then any log records that relate to transactions that started and committed after this transaction must still be part of the active log. This can prevent large areas of the log from being truncated, either by a log backup or by a CHECKPOINT, even though the vast majority of the log records are no longer required. Such problems manifest in a value of ACTIVE TRANSACTION for the log_reuse_wait_ desc column. The course of action in the short term is to discover which transaction is causing the problem and find out if it's possible to kill it. In the longer term, you need to work out how to tune very long-running transactions or break them down into smaller units, and to find out what application issues are causing transactions to remain uncommitted. DBCC OPENTRAN A tool that is useful in tracking down rogue transactions is DBCC OPENTRAN(DatabaseName). It will report the oldest active transaction for that database, along with its start time, and the identity of the session that started the transaction. 174 175 Chapter 5: Log Backups There are several other issues that can be revealed through the log_reuse_wait_ desc column, mainly relating to various processes, such as replication, which require log records to remain in the log until they have been processed. We haven't got room to cover them here, but Gail Shaw offers a detailed description of these issues in her article, Why is my transaction log full? at Transaction+Log/72488/. Excessive logging activity If no problems are revealed by the log_reuse_wait_desc column then the log growth may simply be caused by a very high rate of logging activity. For a database using the FULL recovery model, all operations are fully logged, including bulk data imports, index rebuilds, and so on, all of which will write heavily to the log file, causing it to grow rapidly in size. It is out of scope for this book to delve into the full details of bulk operations but, essentially, you need to find a way to either minimize the logging activity or, if that's not possible, then simply plan for it accordingly, by choosing appropriate initial size and growth settings for the log, as well as an appropriate log backup strategy. As noted in Chapter 1, certain bulk operations can be minimally logged, by temporarily switching the database from FULL to BULK_LOGGED recovery, in order to perform the operation, and then back again. Assuming your SLA will permit this, it is worth considering, given that any bulk logged operation will immediately prevent point-in-time recovery to a point within any log file that contains records relating to the minimally logged operations. We'll cover this option in a little more detail in Chapter 176 Chapter 5: Log Backups Handling the 9002 Transaction Log Full error As noted earlier, in the worst case of uncontrolled log growth you may find yourself facing down the dreaded "9002 Transaction Log Full" error. There is no more space within the log to write new records, there is no further space on the disk to allow the log file to grow, and so the database becomes read-only until the issue is resolved. Obviously the most pressing concern in such cases is to get SQL Server functioning normally again, either by making space in the log file, or finding extra disk space. If the root cause of the log growth turns out to be no log backups (or insufficiently frequent ones), then perform one immediately. An even quicker way to make space in the log, assuming you can get permission to do it, is to temporarily switch the database to SIMPLE recovery to force a log truncation, then switch it back to FULL and perform a full backup. The next step is to investigate all possible causes of the log growth, as discussed in the previous sections, and implement measures to prevent its recurrence. Having done so, however, you may still be left with a log file that has ballooned to an unacceptably large size. As a one-off operation, in such circumstances, it's acceptable to use DBCC SHRINKFILE to reclaim space in the log and reduce its physical size. The recommended way to do this is to disconnect all users from the database (or wait for a time when there is very low activity), take a log backup or just force a CHECKPOINT if the database is in SIMPLE recovery, and then perform the DBCC SHRINKFILE operation, as follows: DBCC SHRINKFILE(<logical name of log file>,truncateonly);. This will shrink the log to its smallest possible size. You can then resize the log appropriately, via an ALTER DATABASE command. This technique, when performed in this manner as a one-off operation, should resize the log and remove any fragmentation that occurred during its previous growth. 176 177 Chapter 5: Log Backups Finally, before moving on, it's worth noting that there is quite a bit of bad advice out there regarding how to respond to this transaction log issue. The most frequent offenders are suggestions to force log truncation using BACKUP LOG WITH TRUNCATE_ONLY (deprecated in SQL Server 2005) or its even more disturbing counterpart, BACKUP LOG TO DISK='NUL', which takes a log backup and discards the contents, without SQL Server having any knowledge that the log chain is broken. Don't use these techniques. The only correct way to force log truncation is to temporarily switch the database to SIMPLE recovery. Likewise, you should never schedule regular DBCC SHRINKFILE tasks as a means of controlling the size of the log as it can cause terrible log fragmentation, as discussed in the next section. Log fragmentation A fragmented log file can dramatically slow down any operation that needs to read the log file. For example, it can cause slow startup times (since SQL Server reads the log during the database recovery process), slow RESTORE operations, and more. Log size and growth should be planned and managed to avoid excessive numbers of growth events, which can lead to this fragmentation. A log is fragmented if it contains a very high number of VLFs. In general, SQL Server decides the optimum size and number of VLFs to allocate. However, a transaction log that auto-grows frequently in small increments, will suffer from log fragmentation. To see this in action, let's simply re-create our previous ForceFailure database, withal its configuration settings set at whatever the model database dictates, and then run the DBCC LogInfo command, which is an undocumented and unsupported command (at least there is very little written about it by Microsoft) but which will allow us to interrogate the VLF architecture. 177 178 Chapter 5: Log Backups CREATE DATABASE [ForceFailure] ALTER DATABASE [ForceFailure] SET RECOVERY FULL WITH NO_WAIT DBCC Loginfo; Listing 5-10: Running DBCC Loginfo on the ForceFailure database. The results are shown in Figure The DBCC LogInfo command returns one row per VLF and, among other things, indicates the Status of that VLF. A Status value of 2 indicates a VLF is active and cannot be truncated; a Status value of 0 indicates an inactive VLF. Figure 5-13: Five VLFs for our empty ForceFailure database. Five rows are returned, indicating five VLFs (two of which are currently active). We are not going to delve any deeper here into the meaning of any of the other columns returned. Now let's insert a large number of rows (one million) into a VLFTest table, in the ForceFailure database, using a script reproduced by kind permission of Jeff Moden (), and then rerun the DBCC LogInfo command, as shown in Listing 179 Chapter 5: Log Backups USE ForceFailure ; IF OBJECT_ID('dbo.VLFTest', 'U') IS NOT NULL DROP TABLE dbo.vlftest ; --===== AUTHOR: Jeff Moden --===== Create and populate 1,000,000 row test table. -- "SomeID" has range of 1 to unique numbers -- "SomeInt" has range of 1 to non-unique numbers -- "SomeLetters2";"AA"-"ZZ" non-unique 2-char strings -- "SomeMoney"; to non-unique numbers -- "SomeDate" ; >=01/01/2000 and <01/01/2010 non-unique -- "SomeHex12"; 12 random hex characters (ie, 0-9,A-F) SELECT TOP SomeID = IDENTITY( INT,1,1 ), SomeInt = ABS(CHECKSUM(NEWID())) % , SomeLetters2 = CHAR(ABS(CHECKSUM(NEWID())) % ) + CHAR(ABS(CHECKSUM(NEWID())) % ), SomeMoney = CAST(ABS(CHECKSUM(NEWID())) % / AS MONEY), SomeDate = CAST(RAND(CHECKSUM(NEWID())) * AS DATETIME), SomeHex12 = RIGHT(NEWID(), 12) INTO dbo.vlftest FROM sys.all_columns ac1 CROSS JOIN sys.all_columns ac2 ; DBCC Loginfo; --Tidy up once you're done as we no longer need this database DROP DATABASE ForceFailure Listing 5-11: Inserting one million rows and interrogating DBCC Loginfo. This time, the DBCC LogInfo command returns 131 rows, indicating 131 VLFs, as shown in Figure 180 Chapter 5: Log Backups Figure 5-14: 131 VLFs for our ForceFailure database, with one million rows. The growth properties inherited from the model database dictate a small initial size for the log files, then growth in relatively small increments. These properties are inappropriate for a database subject to this sort of activity and lead to the creation of a large number of VLFs. By comparison, try re-creating ForceFailure, but this time with some sensible initial size and growth settings (such as those shown in Listing 5-1). In my test, this resulted in an initial 4 VLFs, expanding to 8 VLFs after inserting a million rows. The "right" number of VLFs for a given database depends on a number of factors, including, of course, the size of the database. Clearly, it is not appropriate to start with a very small log and grow in small increments, as this leads to fragmentation. However, it might also cause problems to go to the opposite end of the scale and start with a huge (tens of GB) log, as then SQL Server would create very few VLFs, and this could affect log space reuse. Further advice on how to achieve a reasonable number of VLFs can be found in Paul Randal's TechNet article Top Tips for Effective Database Maintenance, at If you do diagnose a very fragmented log, you can remedy the situation using DBCC SHRINKFILE, as described in the previous section. Again, never use this as a general means of controlling log size; instead, ensure that your log is sized, and grows, appropriately. 180 181 Chapter 5: Log Backups Summary This chapter explained in detail how to capture log backups using either SSMS Backup Wizard or T-SQL scripts. We also explored, and discussed, how to avoid certain log-related issues such as explosive log growth and log fragmentation. Do not to remove any of the backup files we have captured; we are going to use each of these in the next chapter to perform various types of restore operation on our DatabaseForLogBackups database. 181 182 Chapter 6: Log Restores Whereas log backups will form a routine part of our daily maintenance tasks on a given database, log restores, at least in response to an emergency, will hopefully be a much rarer occurrence. However, whenever they need to be performed, it's absolutely vital the job is done properly. In this chapter, we're going to restore the full backup of our DatabaseForLogBackups database from Chapter 5, and then apply our series of log backups in order to return our databases to various previous states. We'll demonstrate how to perform a complete transaction log restore, and how to restore to a point in time within a log file. Log Restores in the SLA With an appropriate backup schedule in place, we know that with the right collection of files and enough time we can get a database back online and, with a point-in-time restore, get it back to a state fairly close to the one it was in before whatever unfortunate event befell it. However, as discussed in Chapter 2, the SLA needs to clearly stipulate an agreed maximum time for restoring a given database, which takes sensible account of the size of the database, where the necessary files are located, and the potential complexity of the restore. 182 183 Chapter 6: Log Restores Possible Issues with Log Restores There are several factors that could possibly affect our ability to perform a point-in-time log restore successfully or at least cause some disruption to the process: an incomplete series of log backup files a missing full backup database file minimally logged transactions in a log file. Let's take a look at each in turn. Missing or corrupt log backup A restore operation to a particular point will require an unbroken log backup chain; that is, we need to be able to apply each and every backup file, in an unbroken sequence, up to the one that contains the log records covering the time to which we wish to restore the database. If we don't have a complete sequence of log files describing a complete LSN chain right up to the point we wish to restore, then we will only be able to restore to the end of the last file before the sequence was broken. The most common cause of a broken log file sequence is a missing or corrupted backup file. However, another possibility is that someone manually forced a truncation of the log without SQL Server's knowledge (see the Handling the 9002 Transaction Log Full error section in Chapter 5). This could prove disastrous, especially in the case of a failure very late in a day. For example, if we are taking log backups each hour, but somehow lose the transaction log backup file that was taken at 1 p.m., a failure that happens at 8 p.m. will cost us eight full hours of data loss. 183 184 Chapter 6: Log Restores Missing or corrupt full backup If we find that the latest full backup file, on which we planned to base a RESTORE operation, is missing or corrupt, then there is still hope that we can perform our point-intime restore. Full backups do not break the log chain, and each log backup contains all the log records since the last log backup, so we can restore the previous good full backup and then restore the full chain of log files extending from this full backup up to the desired point of recovery. However, this is not a reason to skip full backups, or to adopt a cavalier attitude towards full backup failures. When performing a restore operation, the greatest chance of success comes when that operation involves the smallest number of files possible. The more files we have to restore, the greater the chance of failure. Minimally logged operations In Chapter 5, we discussed briefly the fact that it is not possible to restore a database to a point in time within a log file that contains minimally logged operations, recorded while the database was operating in the BULK_LOGGED recovery model. In order to visualize this, Figure 6-1 depicts an identical backup timeline for two databases, each of which we wish to restore to the same point in time (represented by the arrow). The green bar represents a full database backup and the yellow bars represent a series of log backups. The only difference between the two databases is that the first is operating in FULL recovery model, and the second in BULK LOGGED. 184 185 Chapter 6: Log Restores Figure 6-1: Database backup timeline. During the time span of the fifth log backup of the day, a BULK INSERT command was performed on each database, in order to load a set of data. This bulk data load completed without a hitch but, in an unrelated incident, within the time span of the fifth log backup, a user ran a "rogue" transaction and crucial data was lost. The project manager informs the DBA team and requests that the database is restored to a point just where the transaction that resulted in data loss started. (we'll see how to do this later in the chapter). In the BULK LOGGED database, we have a problem. We can restore to any point in time within the first four log backups, but not to any point in time within the fifth log backup, which contains the minimally logged operations. For that log file, we are in an "all or nothing" situation; we must apply none of the operations in this log file, so stopping the restore at the end of the fourth file, or we must apply all of them, proceeding to restore to any point in time within the sixth log backup. 185 186 Chapter 6: Log Restores In other words, we can restore the full database backup, again without recovery, and apply the first four log backups to the database. Unfortunately, we will not have the option to restore to any point in time within the fifth log., enter database recovery, and report the loss of any data changes that were made after this time. Hopefully, this will never happen to you and, unless your SLA adopts a completely "zero tolerance" attitude towards any risk of data loss, it is not a reason to avoid BULK_LOGGED recovery model altogether. There are valid reasons using this recovery model in order to reduce the load on the transaction log, and if we follow best practices, we should not find ourselves in this type of situation. USE [master] BACKUP LOG [SampleDB] TO DISK = '\\path\example\filename.trn' ALTER DATABASE [SampleDB] SET RECOVERY BULK_LOGGED WITH NO_WAIT -- Perform minimally logged transactions here -- Stop minimally logged transactions here ALTER DATABASE [SampleDB] SET RECOVERY FULL WITH NO_WAIT BACKUP LOG [SampleDB] TO DISK = '\\path\example\filename.trn' Listing 6-1: A template for temporarily switching a database to BULK_LOGGED recovery model. 186 187 Chapter 6: Log Restores If we do need to perform maintenance operations that can be minimally logged and we wish to switch to BULK_LOGGED model, the recommended practice is to take a log backup immediately before switching to BULK_LOGGED, and immediately after switching the database back to FULL recovery, as demonstrated in Listing 6-1. This will, as far as possible, isolate the minimally logged transactions in a single log backup file. Performing Log Restores Being prepared to restore your database and log backups is one of the DBAs most important jobs. All DBAs are likely, eventually, to find themselves in a situation where a crash recovery is required, and it can be a scary situation. However, the well-prepared DBA will know exactly where the required backup files are stored, will have been performing backup verification and some random test restores, and can exude a calm assurance that this database will be back online as quickly as possible. As we did for log backups, we're going to discuss how to perform log restores the "GUI way," in SSMS, and then by using native T-SQL Backup commands. We're going to be restoring to various states the DatabaseForLogBackups database from Chapter 5, so before we start, let's review, pictorially, our backup scheme for that database, and what data is contained within each backup file. With our first example, using the GUI, we'll restore the DatabaseForLogBackups database to the state in which it existed at the point after the first log backup was taken; in other words, we'll restore right to the end of the transaction log DatabaseForLog- Backups_Native_Log_1.trn, at which point we should have 20 rows in each of our three message tables. 187 188 Chapter 6: Log Restores Figure 6-2: Backup scheme for DatabaseForLogBackups. In our second example, using a T-SQL script, we'll restore to a specific point in time within the second log backup, just before the last 100,000 records were inserted into MessageTable3. This will leave the first two tables with their final row counts (1,020 and 10,020, respectively), but the third with just 20 rows. As discussed in Chapter 1, there are several different ways to restore to a particular point inside a log backup; in this case, we'll demonstrate the most common means, which is to use the STOPAT parameter in the RESTORE LOG command to restore the database to a state that reflects all transactions that were committed at the specified time. GUI-based log restore We will begin by using the native SQL Server backup files that we took in Chapter 5 to restore our database to the state that it was in after the first transaction log backup was taken. We are going to restore the entire transaction log backup in this example. Go ahead and get SSMS started and connected to your test SQL Server. 188 189 Chapter 6: Log Restores Right-click on DatabaseForLogBackups in the object explorer and select Tasks Restore Database to begin the restore process and you'll see the Restore Database screen, which we examined in detail in Chapter 4. This time, rather than restore directly over an existing database, we'll restore to a new database, basically a copy of the existing DatabaseForLogBackups but as it existed at the end of the first log backup. So, in the To database: section, enter a new database name, such as DatabaseForLogBackups_RestoreCopy. In the Source for restore section of the screen, you should see that the required backup files are auto-populated (SQL Server can interrogate the msdb database for the backup history). This will only be the case if all the backup files are in their original location (C:\ SQLBackups\Chapter5, if you followed through the examples). Therefore, as configured in Figure 6-3, our new copy database would be restored to the end of the second log backup, in a single restore operation. Alternatively, by simply deselecting the second log file, we could restore the database to the end of the first log file. Figure 6-3: Initial Restore Database screen for DatabaseForLogBackups. 189 190 Chapter 6: Log Restores However, if the backup files have been moved to a new location, we'd need to manually locate each of the required files for the restore process, and perform the restore process in several steps (one operation to restore the full backup and another the log file). Since it is not uncommon that the required backups won't still be in their original local folders, and since performing the restore in steps better illustrates the process, we'll ignore this useful auto-population feature for the backup files, and perform the restore manually. Click on From device: and choose the browse button to the right of that option, navigate to C:\SQLBackups\Chapter5 and choose the full backup file. Having done so, the relevant section of the Restore Database page should look as shown in Figure 6-4. Figure 6-4: Restore the full backup file. Next, click through to the Options page. We know that restoring the full backup is only the first step in our restore process, so once the full backup is restored we need the new DatabaseForLogBackups_RestoreCopy database to remain in a restoring state, ready to accept further log files. Therefore, we want to override the default restore state (RESTORE WITH RECOVERY) and choose instead RESTORE WITH NORECOVERY, as shown in Figure 191 Chapter 6: Log Restores Figure 6-5: Restore the full backup file while leaving the database in a restoring state. Note that SQL Server has automatically renamed the data and log files for the new database so as to avoid clashing with the existing DatabaseForLogBackups database, on which the restore is based. Having done this, we're ready to go. First, however, you might want to select the Script menu option, from the top of the General Page, and take a quick look at the script that has been generated under the covers. I won't show it here, as we'll get to these details in the next example, but you'll notice use of the MOVE parameter, to rename the data and log files, and the NORECOVERY parameter, to leave the database in a restoring state. Once the restore is complete, you should see the new DatabaseForLogBackups_RestoreCopy database in your object explorer, but with a green arrow on the database icon, and the word "Restoring " after its name. We're now ready to perform the second step, and restore our first transaction log. Rightclick on the new DatabaseForLogBackups_RestoreCopy database and select Tasks 191 192 Chapter 6: Log Restores Restore Transaction Log. In the Restore source section, we can click on From previous backups of database:, and then select DatabaseForLogBackups database. SQL Server will then retrieve the available log backups for us to select. Alternatively, we can manually select the required log backup, which is the route we'll choose here, so click on From file or tape: and select the first log backup file from its folder location (C:\ SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_1.trn). The screen should now look as shown in Figure 6-6. Figure 6-6: Preparing to restore the log file. 192 193 Chapter 6: Log Restores Now switch to the Options page and you should see something similar to Figure 6-7. Figure 6-7: Configuring the data and log files for the target database. This time we want to complete the restore and bring the database back online, so we can leave the default option, RESTORE WITH RECOVERY selected. Once you're ready, click OK and the log backup restore should complete and the DatabaseForLogBackups_ RestoreCopy database should be online and usable. As healthy, paranoid DBAs, our final job is to confirm that we have the right rows in each of our tables, which is easily accomplished using the simple script shown in Listing 6-2. USE DatabaseForLogBackups_RestoreCopy SELECT COUNT(*) FROM dbo.messagetable1 SELECT COUNT(*) FROM dbo.messagetable2 SELECT COUNT(*) FROM dbo.messagetable3 Listing 6-2: Checking our row counts. 193 194 Chapter 6: Log Restores If everything ran correctly, there should be 20 rows of data in each table. We can also run a query to return the columns from each table, to make sure that the data matches what we originally inserted. As is hopefully clear, GUI-based restores, when all the required backup files are still available in their original local disk folders, can be quick, convenient, and easy. However, if they are not, for example due to the backup files being moved, after initial backup, from local disk to network space, or after a backup has been brought back from long-term storage on tape media, then these GUI restores can be quite clunky, and the process is best accomplished by script. T-SQL point-in-time restores In this section, we will be performing a point-in-time restore. This operation will use all three of the backup files (one full, two log) to restore to a point in time somewhere before we finalized the last set of INSERT statements that were run against the DatabaseFor- LogBackups database. When doing a point-in-time restore, we are merely restoring completely all of the log backup files after the full database backup, except the very last log file, where we'll stop at a certain point and, during database recovery, SQL Server will only roll forward the transactions up to that specified point. In order to restore to the right point, in this example you will need to refer back to the timestamp value you saw in the output of Listing 5-6, the third data load for the DatabaseForLogBackups database. Hopefully, you will rarely be called upon to perform these types of restore but when you are, it's vital that you're well drilled, having practiced your restore routines many times, and confident of success, based on your backup validation techniques and random test restores. 194 195 Chapter 6: Log Restores GUI-based point-in-time restore It's entirely possible to perform point-in-time restores via the GUI. On the Restore Database page, we simply need to configure a specific date and time, in the To a point in time: option, rather than accept the default setting of most recent possible. We don't provide a full worked example in the GUI, but feel free to play around with configuring and executing these types of restore in the GUI environment. Don't worry; point-in-time restores are not as complicated as they may sound! To prove it, let's jump right into the script in Listing 6-3. The overall intent of this script is to restore the DatabaseForLogBackup full backup file over the top of the DatabaseForLogBackup_RestoreCopy database, created in the previous GUI restore, apply the entire contents of the first log backup, and then the contents of the second log backup, up to the point just before we inserted 100,000 rows into MessageTable3. USE [master] --STEP 1: Restore the full backup. Leave database in restoring state --STEP 2: Completely restore 1st log backup. Leave database in restoring state RESTORE LOG [DatabaseForLogBackups_RestoreCopy] FROM DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log.trn' WITH FILE = 1, NORECOVERY, STATS = 196 Chapter 6: Log Restores --STEP 3: P-I-T restore of 2nd log backup. Recover the database RESTORE LOG [DatabaseForLogBackups_RestoreCopy] FROM DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_2.trn' WITH FILE = 1, NOUNLOAD, STATS = 10, STOPAT = N'January 30, :34 PM', -- configure your time here RECOVERY Listing 6-3: T-SQL script for a point-in-time restore of DatabaseForLogbackups. The first section of the script restores the full backup file to the restore copy database. We use the MOVE parameter for each file to indicate that, rather than use the data and log files for DatabaseForLogBackups as the target for the restore, we should use those for a different database, in this case DatabaseForLogBackup_RestoreCopy. The NORECOVERY parameter indicates that we wish to leave the target database, Database- ForLogBackup_RestoreCopy, in a restoring state, ready to accept further log backup files. Finally, we use the REPLACE parameter, since we are overwriting the data and log files that are currently being used by the DatabaseForLogBackup_RestoreCopy database. The second step of the restore script applies the first transaction log backup to the database. This is a much shorter command, mainly due to the fact that we do not have to specify a MOVE parameter for each data file, since we already specified the target data and log files for the restore, and those files will have already been placed in the correct location before this RESTORE LOG command executes. Notice that we again use the NORECOVERY parameter in order to leave the database in a non-usable state so we can move on to the next log restore and apply more transactional data. The second and final LOG RESTORE command is where you'll spot the brand new STOPAT parameter. We supply our specific timestamp value to this parameter in order to instruct SQL Server to stop applying log records at that point. The supplied timestamp value is important since we are instructing SQL Server to restore the database to the state it was in at the point of the last committed transaction at that specific time. We need to use the date time that was output when we ran the script in Chapter 5 (Listing 5-6). In my case, the time portion of the output was 3.33 p.m. 196 197 Chapter 6: Log Restores You'll notice that in Listing 6-3 I added one minute to this time, the reason being that the time output does not include seconds, and the transactions we want to include could have committed at, for example, 2:33:45. By adding a minute to the output and rounding up to 2:34:00, we will capture all the rows we want, but not the larger set of rows that inserted next, after the delay. Note, of course, that the exact format of the timestamp, and its actual value, will be different for you! This time, we specify the RECOVERY parameter, so that when we execute the command the database will enter recovery mode, and the database will be restored to the point of the last committed transaction at the specified timestamp. When you run Listing 6-3 as a whole, you should see output similar to that (3.369 MB/sec). 100 percent processed. Processed 0 pages for database 'DatabaseForLogBackups_RestoreCopy', file 'DatabaseForLogBackups' on file 1. Processed 9 pages for database 'DatabaseForLogBackups_RestoreCopy', file 'DatabaseForLogBackups_log' on file 1. RESTORE LOG successfully processed 9 pages in seconds (9.556 MB/sec). 10 percent processed. 20 percent processed. 31 percent processed. 40 percent processed. 50 percent processed. 61 percent processed. 71 percent processed. 80 percent processed. 90 percent processed. 100 percent processed. 197 198 Chapter 6: Log Restores Processed 0 pages for database 'DatabaseForLogBackups_RestoreCopy', file 'DatabaseForLogBackups' on file 1. Processed 7033 pages for database 'DatabaseForLogBackups_RestoreCopy', file 'DatabaseForLogBackups_log' on file 1. RESTORE LOG successfully processed 7033 pages in seconds ( MB/sec). Figure 6-8: Output from the successful point-in-time restore operation. We see the typical percentage completion messages as well as the total restore operation metrics after each file is completed. What we might like to see in the message output, but cannot, is some indication that we performed a point-in-time restore with the STOPAT parameter. There is no obvious way to tell if we successfully did that other than to double-check our database to see if we did indeed only get part of the data changes that are stored in our second log backup file. All we have to do is rerun Listing 6-2 and this time, if everything went as planned, we should have 1,020 rows in MessageTable1, 10,020 rows in MessageTable2, but only 20 rows in MessageTable3, since we stopped the restore just before the final 100,000 rows were added to that table. Possible difficulties with point-in-time restores When restoring a database in order to retrieve lost data after, for example, a rogue transaction erroneously deleted it, getting the exact time right can be the difference between stopping the restore just before the data was removed, which is obviously what we want, or going too far and restoring to the point after the data was removed. The less sure you are of the exact time to which you need to restore, the trickier the process can become. One option, which will be demonstrated in Chapter 8, is to perform a STANDBY restore, which leaves the target database in a restoring state, but keeps it read-only accessible. In this way you can roll forward, query to see if the lost data is still there, roll forward a bit further, and so on. 198 199 Chapter 6: Log Restores Aside from third-party log readers (very few of which offer support beyond SQL Server 2005), there are a couple of undocumented and unsupported functions that can be used to interrogate the contents of log files (fn_dblog) and log backups (fn_dump_dblog). So for example, we can look at the contents of our second log backup files as shown in Listing 6-4. SELECT * FROM fn_dump_dblog(default, DEFAULT, DEFAULT, DEFAULT, 'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_2.trn',); Listing 6-4: Exploring log backups with fn_dump_dblog. It's not pretty and it's not supported (so use with caution); it accepts a whole host of parameters, the only one we've defined being the path to the log backup, and it returns a vast array of information that we're not going to begin to get into here but, it does return the Begin Time for each of the transactions contained in the file, and it may give you some help in working out where you need to stop. An alternative technique to point-in-time restores using STOPAT, is to try to work out the LSN value associated with, for example, the start of the rogue transaction that deleted your data. We're not going to walk through an LSN-based restore here, but a good explanation of some of the practicalities involved can be found here: com/2010/07/25/alternative-to-restoring-to-a-point-in-time/. 199 200 Chapter 6: Log Restores Forcing Restore Failures for Fun After successfully performing two restore operations, let's get some first-hand experience with failure. Understanding how SQL Server responds to certain common mistakes is a great first step to being prepared when tragedy actually strikes. Take a look at the script in Listing 6-5, a restore of a full backup over the top of an existing database, and try to predict what's going to go wrong, and why., RECOVERY Listing 6-5: Spot the deliberate mistake, Part 1. Do you spot the mistake? It's quite subtle, so if you don't manage to, simply run the script and examine the error message: Msg 3159, Level 16, State 1, Line 1 The tail of the log for the database "DatabaseForLogBackups_RestoreCopy". The script in Listing 6-5 is identical to what we saw in Step 1 of Listing 6-3, except that here we are restoring over an existing database, and receive an error which is pretty descriptive. The problem is that we are about to overwrite the existing log file for the 200 201 Chapter 6: Log Restores DatabaseForLogBackups_RestoreCopy database, which is a FULL recovery model database, and we have not backed up the tail of the log, so we would lose any transactions that were not previously captured in a backup. This is a very useful warning message to get in cases where we needed to perform crash recovery and had, in fact, forgotten to do a tail log backup. In such cases, we could start the restore process with the tail log backup, as shown in Listing 6-6, and then proceed. USE master BACKUP LOG DatabaseForLogBackups_RestoreCopy TO DISK = 'D:\SQLBackups\Chapter5\DatabaseForLogBackups_RestoreCopy_log_tail.trn' WITH NORECOVERY Listing 6-6: Perform a tail log backup. In cases where we're certain that our restore operation does not require a tail log backup, we can use WITH REPLACE or WITH STOPAT. In this case, the error can be removed, without backing up the tail of the log, by adding the WITH REPLACE clause to Listing 6-5. Let's take a look at a second example failure. Examine the script in Listing 6-7 and see if you can spot the problem. --STEP 1: Restore the log backup, REPLACE 201 202 Chapter 6: Log Restores --Step 2: Restore to the end of the first log backup RESTORE LOG [DatabaseForLogBackups_RestoreCopy] FROM DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_1.trn' WITH FILE = 1, RECOVERY, STATS = 10 Listing 6-7: Spot the deliberate mistake, Part 2. Look over each of the commands carefully and then execute this script; you should see results similar to those (1.521 MB/sec). Msg 3117, Level 16, State 1, Line 2 The log or differential backup cannot be restored because no files are ready to rollforward. Msg 3013, Level 16, State 1, Line 2 RESTORE LOG is terminating abnormally. Figure 6-9: Output from the (partially) failed restore. We can see that this time the full database backup restore, over the top of the existing database, was successful (note that we remembered to use REPLACE). It processed all of the data in what looks to be the correct amount of time. Since that operation completed, it must be the log restore that caused the error. Let's look at the error messages in the second part of the output, which will be in red. The first error we see in the output is the statement "The log or differential backup cannot be restored because no files are ready to rollforward." What does that mean? If you didn't catch the mistake in the script, it was that we left out an important parameter in the full database restore operation. Take a look again, and you will see that we don't have the NORECOVERY option in the command. Therefore, the first restore command finalized the 202 203 Chapter 6: Log Restores restore and placed the database in a recovered state, ready for user access (with only ten rows in each table!); no log backup files can then be applied as part of the current restore operation. Always specify NORECOVERY if you need to continue further with a log backup restore operation. Of course, there are many other possible errors that can arise if you're not fully paying attention during the restore process, and we can't cover them all. However, as one final example, take a look at Listing 6-8 and see if you can spot the problem., REPLACE RESTORE LOG [DatabaseForLogBackups] FROM DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_2.trn' WITH FILE = 1, NORECOVERY, STATS = 10 RESTORE LOG [DatabaseForLogBackups] FROM DISK = N'C:\SQLBackups\Chapter5\DatabaseForLogBackups_Native_Log_1.trn' WITH FILE = 1, RECOVERY, STATS = 10 Listing 6-8: Forcing one more fun failure. Did you catch the problem before you ran the script? If not, take a look at your output, examine the error message you get when the first log file restore is attempted. I'm sure you'll be able to figure out what's wrong in short order! 203 204 Chapter 6: Log Restores Summary After reading and working through this chapter, you should also be fairly comfortable with the basics of log restore operations, and in particular with point-in-time restores. The key to successful restores is to be well organized and well drilled. You should know exactly where the required backup files are stored; you should have confidence the operation will succeed, based on your backup validations, and regular, random "spotcheck" restores. You may be under pressure to retrieve critical lost data, or bring a stricken database back online, as quickly as possible, but it's vital not to rush or panic. Proceed carefully and methodically, and your chances of success are high. 204 205 Chapter 7: Differential Backup and Restore A differential database backup in SQL Server is simply a copy of every page that has had a change made to it since the last full backup was taken. We capture, in a differential backup file, the changed data pages then, during a restore process, apply that differential backup to the last full backup, also known as the base backup, in order to roll the database forward to the state in which it existed at the point the differential backup was taken. In my experience, opinion is somewhat split within the DBA community as to the value of differential database backups. Some DBAs seem to regard this third type of backup as an unnecessary complication and prefer, where possible, to restrict their backup strategy to two types of backup: full and log. Others, however, find them to be a necessity, and a useful way of reducing the time required to take backups, space required to store local backup files, and the number of log files that may need to be applied during a point-intime restore. My opinion is that differential backups, used correctly and in the right situations, are a key recovery option in your environment. The basic structure of this chapter will be familiar, so we'll be moving at a quicker pace; after discussing why and when differential backups can form a useful part of your backup and restore strategy, and some pros and cons in their use, we'll walk through examples of taking and then restoring native differential backups, using the SSMS GUI and T-SQL, gathering metrics as we go. Finally, we'll take a look a few common errors that can arise when taking and restoring differential backups. 205 206 Chapter 7: Differential Backup and Restore Differential Backups, Overview As noted in the introduction, a differential database backup is a copy of every page that has changed since the last full backup. The last part of that sentence is worth stressing. It is not possible to take a differential database backup without first taking a full database backup, known as the base backup. Whenever a new full backup is taken, SQL Server will clear any markers in its internal tracking mechanism, which are stored to determine which data pages changed since the last full backup, and so would need to be included in any differential backup. Therefore, each new full backup becomes the base backup for any subsequent differential backups. If we lose a full backup, or it becomes corrupted, any differential backups that rely on that base will be unusable. This is the significant difference in the relationship between a differential backup and its associated full backup, and the relationship between log backups and their associated full backup. While we always restore a full backup before any log backups, the log backups are not inextricably linked to any single full backup; if a full backup file goes missing, we can still go back to a previous full backup and restore the full chain of log files, since every log backup contains all the log that was entered since the last log backup. A full backup resets the log chain, so that we have all the needed information to begin applying subsequent log backups, but doesn't break the chain. A differential backup always contains all changes since the last full backup, and so is tied to one specific full backup. Because the data changes in any differential backup are cumulative since the base backup, if we take one full backup followed by two differential backups then, during a restore process where we wish to return the database to a state as close as possible to that it was in at the time of the disaster, we only need to restore the full backup and the single, most recent differential backup (plus any subsequent transaction log backups). 206 207 Chapter 7: Differential Backup and Restore Advantages of differential backups Perhaps the first question to answer is why we would want to capture only the changed data in a backup; why not just take another full backup? The answer is simple: backup time and backup file space utilization. A full database backup process, for a large database, will be a time- and resource-consuming process, and it is usually not feasible to run this process during normal operational hours, due to the detrimental impact it could have on the performance of end-user and business processes. In most cases, a differential database backup will contain much less data, and require much less processing time than a full database backup. For a large database that is not subject to a huge number of changes, a differential backup can execute in 10% of the time it would take for a full backup. For a real-world example, let's consider one of my moderately-sized production databases. The processing time for a full database backup is usually about 55 minutes. However, the average differential backup takes only about 4 minutes to complete. This is great, since I still have a complete set of backups for recovery, but the processing time is greatly reduced. Remember that the larger a database, the more CPU will be consumed by the backup operation, the longer the backup operation will take, and the greater the risk will be of some failure (e.g. network) occurring during that operation. The other saving that we get with a differential backup, over a full backup, is the space saving. We are only storing the data pages that have been modified since the last full backup. This is typically a small fraction of the total size, and that will be reflected in the size of the differential backup file on disk. As such, a backup strategy that consists of, say, one daily full backup plus a differential backup, is going to consume less disk space than an equivalent strategy consisting of two full backups. 207 208 Chapter 7: Differential Backup and Restore Will differential backups always be smaller than full backups? For the most part, if you refresh your base backup on a regular schedule, you will find that a differential backup should be smaller in size than a full database backup. However, there are situations where a differential backup could become even larger than its corresponding base backup, for example if the base backup is not refreshed for a long period and during that time a large amount of data has been changed or added. We'll discuss this further shortly, in the Possible issues with differential backups section. Let's assume that the same moderately-sized production database, mentioned above, has a backup strategy consisting of a weekly full backup, hourly transaction log backups and daily differential backups. I need to retain on disk, locally, the backups required to restore the database to any point in the last three days, so that means storing locally the last three differential backups, the last three days' transaction log backups, plus the base full backup. The full backup size is about 22 GB, the log backups are, on average, about 3 GB each, and 3 days' worth of differential backups takes up another 3 GB, giving a total of about 28 GB. If I simply took full backups and log backups, I'd need almost 70 GB of space at any time for one database. Deciding exactly the right backup strategy for a database is a complex process. We want to strive as far as possible for simplicity, short backup times, and smaller disk space requirements, but at the same time we should never allow such goals to compromise the overall quality and reliability of our backup regime. Differential backup strategies In what situations can the addition of differential backups really benefit the overall database recovery scheme? Are there any other cases where they might be useful? 208 209 Reduced recovery times Chapter 7: Differential Backup and Restore The number one reason that most DBAs and organizations take differential backups is as a way of reducing the number of log files that would need to be restored in the case of an emergency thus, potentially, simplifying any restore process. For example, say we had a backup strategy consisting of a nightly full backup, at 2 a.m., and then hourly log backups. If a disaster occurred at 6.15 p.m., we'd need to restore 17 backup files (1 full backup and 16 log backups), plus the tail log backup, as shown in Figure 7-1. Figure 7-1: A long chain of transaction log backups. This is a somewhat dangerous situation since, as we have discussed, the more backup files we have to take, store, and manage the greater the chance of one of those files being unusable. This can occur for reasons from disk corruption to backup failure. Also, if any of these transaction log backup files is not usable, we cannot restore past that point in the database's history. If, instead, our strategy included an additional differential backup at midday each day, then we'd only need to restore eight files: the full backup, the differential backup, and six transaction log backups (11 16), plus a tail log backup, as shown in Figure 7-2. We would also be safe in the event of a corrupted differential backup, because we would still have all of the log backups since the full backup was taken. 209 210 Chapter 7: Differential Backup and Restore Figure 7-2: Using a differential backup can shorten the number of backup files to restore. In any situation that requires a quick turnaround time for restoration, a differential backup is our friend. The more files there are to process, the more time it will also take to set up the restore scripts, and the more files we have to work with, the more complex will be the restore operation, and so (potentially) the longer the database will be down. In this particular situation, the savings might not be too dramatic, but for mission-critical systems, transaction logs can be taken every 15 minutes. If we're able to take one or two differential backups during the day, it can cut down dramatically the number of files involved in any restore process. Consider, also, the case of a VLDB where full backups take over 15 hours and so nightly full backups cannot be supported. The agreed restore strategy must support a maximum data loss of one hour, so management has decided that weekly full backups, taken on Sunday, will be supplemented with transaction log backups taken every hour during the day. Everything is running fine until one Friday evening the disk subsystem goes out on that machine and renders the database lost and unrecoverable. We are now going to have to restore the large full backup from the previous Sunday, plus well over 100 transaction log files. This is a tedious and long process. Fortunately, we now know of a better way to get this done, saving a lot of time and without sacrificing too much extra disk space: we take differential database backups each night except Sunday as a supplemental backup to the weekly full one. Now, we'd only need to restore the full, differential and around 20 log backups. 210 211 Database migrations Chapter 7: Differential Backup and Restore In Chapters 3 and 5, we discussed the role of full and log backups in database migrations in various scenarios. Differential restores give us another great way to perform this common task and can save a lot of time when the final database move takes place. Imagine, in this example, that we are moving a large database, operating in SIMPLE recovery model, from Server A to Server B, using full backups. We obviously don't want to lose any data during the transition, so we kill any connections to the database and place it into single-user mode before we take the backup. We then start our backup, knowing full well that no new changes can be made to our database or data. After completion, we take the source database offline and begin the restore the database to Server B. The whole process takes 12 hours, during which time our database is down and whatever front-end application it is that uses that database is also offline. No one is happy about this length of down-time. What could we do to speed the process up a bit? A better approach would be to incorporate differential database backups. 16 hours before the planned migration (allowing a few hours' leeway on the 12 hours needed to perform the full backup and restore), we take a full database backup, not kicking out any users and so not worrying about any subsequent changes made to the database. We restore to Server B the full database backup, using the NORECOVERY option, to leave the database in a state where we can apply more backup files. Once the time has come to migrate the source database in its final state, we kill any connections, place it in single-user mode and perform a differential backup. We have also been very careful not to allow any other full backups to happen via scheduled jobs or other DBA activity. This is important, so that we don't alter the base backup for our newly created differential backup. Taking the differential backup is a fast process (10 15 minutes), and the resulting backup file is small, since it only holds data changes made in the last 12 to 16 hours. Once the backup has completed, we take the source database offline and immediately start the 211 212 Chapter 7: Differential Backup and Restore differential restore on the target server, which also takes only minutes to complete, and we are back online and running. We have successfully completed the migration, and the down-time has decreased from a miserable 12 hours to a scant 30 minutes. There is a bit more preparation work to be done using this method, but the results are the same and the uptime of the application doesn't need to take a significant hit. Possible issues with differential backups In most cases in my experience, the potential issues or downsides regarding differential backups are just minor inconveniences, rather than deal breakers, and usually do not outweigh the savings in disk space usage and backup time, especially for larger databases. Invalid base backup The biggest risk with differential backups is that we come to perform a restore, and find ourselves in a situation where our differential backup file doesn't match up to the base backup file we have ready (we'll see this in action later). This happens when a full backup is taken of which we are unaware. Full database backups are taken for many reasons outside of your nightly backup jobs. If, for example, someone takes a full backup in order to restore it to another system, and then deletes that file after they are done, our next differential is not going to be of much use, since it is using that deleted file as its base. The way to avoid this situation is to ensure that any database backups that are taken for purposes other than disaster recovery are copy-only backups. In terms of the data it contains and the manner in which it is restored, a copy-only backup is just like a normal full backup. However, the big difference is that, unlike a regular full backup, with a copy-only backup, SQL Server's internal mechanism for tracking data pages changed since the last full backup is left untouched and so the core backup strategy is unaffected. 212 213 Chapter 7: Differential Backup and Restore However, if this sort of disaster does strike, all we can do is look through the system tables in MSDB to identify the new base full backup file and hope that it will still be on disk or tape and so can be retrieved for use in your restore operation. If it is not, we are out of luck in terms of restoring the differential file. We would need to switch to using the last full backup and subsequent transaction log backups, assuming that you were taking log backups of that database. Otherwise, our only course of action would be to use the last available full database backup. Bottom line: make sure that a) only those people who need to take backups have the permissions granted to do so, and b) your DBA team and certain administrative users know how and when to use a copy-only backup operation. Missing or corrupt base backup Without a base full backup, there is no way to recover any subsequent differential backup files. If we need a base backup that has been archived to offsite tape storage, then there may be a significant delay in starting the restore process. We can prevent this from happening by making sure the mechanism of backup file cleanup leaves those base files on disk until a new one is created. I keep weekly base backups in a specially-named folder that isn't touched for seven days, so that I know the files will be available in that time span. If a base backup simply goes missing from your local file system, this is an issue that needs careful investigation. Only the DBA team and the Server Administration team should have access to the backup files and they should know much better than to delete files without good cause or reason. Finally, there is no getting around the fact that every restore operation that involves a differential backup will involve a minimum of two files (the base full and the differential) rather than one. Let's say, just for the sake of illustration, that your SLA dictates a maximum of 12 hours' data loss, so you take backups at midnight and midday. If we only take full backups, then we'll only ever need to restore one full backup file; if our midday 213 214 Chapter 7: Differential Backup and Restore backup is a differential, then we'll always need to restore the midnight full, followed by the midday differential. In the event of corruption or loss of a single backup file, the maximum exposure to data loss, in either case, is 24 hours. This really is a small risk to take for the great rewards of having differential backups in your rotation. I have taken many thousands of database backups and restored thousands of files as well. I have only run into corrupt files a few times and they were never caused by SQL Server's backup routines. To help alleviate this concern, we should do two things. Firstly, always make sure backups are being written to some sort of robust storage solution. We discussed this in Chapter 2, but I can't stress enough how important it is to have backup files stored on a redundant SAN or NAS system. These types of systems can cut the risk of physical disk corruptions down to almost nothing. Secondly, as discussed in Chapter 4, we should also perform spot-check restores of files. I like to perform at least one randomly-chosen test restore per week. This gives me even more confidence that my backup files are in good shape without having to perform CHECKSUM tests on each and every backup file. Infrequent base refresh and/or frequently updated databases The next possible issue regards the management and timing of the full base backup. If the base backup isn't refreshed regularly, typically on a weekly basis, the differentials will start to take longer to process. If a database is subject to very frequent modification, then differential backups get bigger and more resource-intensive and the benefits of differential over full, in terms of storage space and backup time reduction, will become more marginal. Ultimately, if we aren't refreshing the base backup regularly then the benefits obtained from differentials may not justify the complicating factor of an additional backup type to manage. 214 215 Chapter 7: Differential Backup and Restore Differentials in the backup and restore SLA So what does all this mean with regard to how differential backups fit into our overall Backup SLA? My personal recommendation is to use differential backups as part of your backup regimen on a SQL Server as often as possible. In the previous sections, we've discussed their reduced file size and faster backup times, compared to taking an additional full backup and the benefits that can bring, especially for large databases. We've also seen how they can be used to reduce the number of transaction log files that need to be processed during point-in-time restores. For some VLDBs, differential backups may be a necessity. I manage several databases that take 12 or more hours to have a full backup and, in these cases, nightly full backups would place undue strain not only on the SQL Server itself but the underlying file system. I would be wasting CPU, disk I/O and precious disk space every night. Performing a weekly full database backup and nightly differential backups has very little negative impact on our recoverability, compared to nightly full backups, and any concerns in this regard shouldn't stop you from implementing them. Nevertheless, some database owners and application managers feel skittish about relying on differential backups in the case of a recovery situation, and you may get some push back. They may feel that you are less protected by not taking a full backup each night and only recording the changes that have been made. This is normal and you can easily calm their nerves by explaining exactly how this type of backup works, and how any risks can be mitigated. 215 216 Chapter 7: Differential Backup and Restore Preparing for Differential Backups Before we get started taking differential backups, we need to do a bit of preparatory work, namely choosing an appropriate recovery model for our database, creating that database along with some populated sample tables, and then taking the base full backup for that database. I could have used the sample database from Chapter 3, plus the latest full backup from that chapter as the base, but I decided to make it easier for people who, for whatever reason, skipped straight to this chapter. Since we've been through this process several times now, the scripts will be presented more or less without commentary; refer back to Chapters 3 and 5 if you need further explanation of any of the techniques. Recovery model There are no recovery model limitations for differential backups; with the exception of the master database, we can take a differential backup of any database in any recovery model. The only type of backup that is valid for the master database is a full backup. For our example database in this chapter, we're going to assume that, in our Backup SLA, we have a maximum tolerance of 24 hours' potential data loss. However, unlike in Chapter 3, this time, we'll satisfy this requirement using a weekly full backup and nightly differential backups. Since we don't need to perform log backups, it makes sense, for ease of log management, to operate the database in SIMPLE recovery model. 216 217 Chapter 7: Differential Backup and Restore Sample database and tables plus initial data load Listing 7-1 shows the script to create the sample DatabaseForDiffBackups database, plus a very simple table (MessageTable), and then load it with an initial 100,000 rows of data. When creating the database, we place the data and log file in a specific, non-default folder, C:\SQLData\, and set appropriate initial size and growth characteristics for these files. We also set the database recovery model to SIMPLE. Feel free to modify the data and log file paths if you need to place them elsewhere on your server. USE [master] CREATE DATABASE [DatabaseForDiffBackups] ON PRIMARY ( NAME = N'DatabaseForDiffBackups', FILENAME = N'C:\SQLData\DatabaseForDiffBackups.mdf', SIZE = KB, FILEGROWTH = 10240KB ) LOG ON ( NAME = N'DatabaseForDiffBackups_log', FILENAME = N'C:\SQLData\DatabaseForDiffBackups_log.ldf', SIZE = 51200KB, FILEGROWTH = 10240KB ) ALTER DATABASE [DatabaseForDiffBackups] SET RECOVERY SIMPLE USE [DatabaseForDiffBackups] CREATE TABLE [dbo].[messagetable] ( [Message] [nvarchar](100) NOT NULL ) ON [PRIMARY] INSERT INTO MessageTable VALUES ( 'Initial Data Load: This is the first set of data we are populating the table with' ) Listing 7-1: DatabaseForDiffBackups database, MessageTable table, initial data load (100,000 rows). 217 218 Chapter 7: Differential Backup and Restore Base backup As discussed earlier, just as we can't take log backups without first taking a full backup of a database, so we also can't take a differential backup without taking a full base backup. Any differential backup is useless without a base. Listing 7-2 performs the base full backup for our DatabaseForDiffBackups database and stores it in the C:\SQLBackups\Chapter7 folder. Again, feel free to modify this path as appropriate f or your system. USE [master] BACKUP DATABASE [DatabaseForDiffBackups] TO DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Full_Native.bak' WITH NAME = N'DatabaseForDiffBackups-Full Database Backup', STATS = 10 Listing 7-2: Base full backup for DatabaseForDiffBackups. We are not going to worry about the execution time for this full backup, so once the backup has completed successfully, you can close this script without worrying about the output. However, we will take a look at the backup size though, which should come out to just over 20 MB. We can take a look at how our differential backup file sizes compare, as we pump more data into the database. Taking Differential Backups As per our usual scheme, we're going to demonstrate how to take differential backups the "GUI way," in SSMS, and by using native T-SQL Backup commands. In Chapter 8, you'll see how to perform differential backups, as part of a complete and scheduled backup routine using the Red Gate SQL Backup tool. 218 219 Chapter 7: Differential Backup and Restore Before taking the first differential backup, we'll INSERT 10,000 more rows into MessageTable, as shown in Listing 7-3. This is typical of the type of load that we would typically find in a differential backup. USE [DatabaseForDiffBackups] INSERT INTO MessageTable VALUES ( 'Second Data Load: This is the second set of data we are populating the table with' ) Listing 7-3: Second data load (10,000 rows) for MessageTable. Native GUI differential backup We are now ready to take our first differential database backup, via the SSMS GUI backup configuration tool; this should all be very familiar to you by now if you worked through previous chapters, so we're going to move fast! In SSMS, right-click on the DatabaseForDiffBackups database, and navigate Tasks Backup to reach the familiar Backup Database configuration screen. In the Source section, double-check that the correct database is selected (a good DBA always doublechecks) and then change the Backup type to Differential. Leave the Backup set section as-is, and move on to the Destination section. Remove the default backup file (which would append our differential backup to the base full backup file) and add a new destination file at C:\SQLBackups\Chapter7\, called DatabaseForDiffBackups_Diff_Native_1.bak. The screen will look as shown in Figure 220 Chapter 7: Differential Backup and Restore Figure 7-3: Native GUI differential backup configuration. There's nothing to change on the Options page, so we're done. Click OK and the differential backup will be performed. Figure 7-4 summarizes the storage requirements and execution time metrics for our first differential backup. The execution time was obtained by scripting out the equivalent BACKUP command and running the T-SQL, as described in Chapter 3. The alternative method, querying the backupset table in msdb (see Listing 3-5), does not provide sufficient granularity for a backup that ran in under a second. 220 221 Chapter 7: Differential Backup and Restore Differential Backup Name Number of Rows Execution Time Storage Required DatabaseForDiffBackups_Diff_ Native_1.bak Seconds 3.16 MB Figure 7-4: Differential backup statistics, Part 1. Native T-SQL differential backup We are now going to take our second native differential backup using T-SQL code. First, however, let's INSERT a substantial new data load of 100,000 rows into MessageTable, as shown in Listing 7-4. USE [DatabaseForDiffBackups] INSERT INTO MessageTable VALUES ( 'Third Data Load: This is the third set of data we are populating the table with' ) Listing 7-4: Third data load (100,000 rows) for MessageTable. We're now ready to perform our second differential backup, and Listing 7-5 shows the code to do it. If you compare this script to the one for our base full backup (Listing 7-2), you'll see they are almost identical in structure, with the exception of the WITH DIFFER- ENTIAL argument that we use in Listing 7-5 to let SQL Server know that we are not going to be taking a full backup, but instead a differential backup of the changes made since the last full backup was taken. Double-check that the path to the backup file is correct, and then execute the script. 221 222 Chapter 7: Differential Backup and Restore USE [master] BACKUP DATABASE [DatabaseForDiffBackups] TO DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Diff_Native_2.bak' WITH DIFFERENTIAL, NAME = N'DatabaseForDiffBackups-Differential Database Backup', STATS = 10 Listing 7-5: Native T-SQL differential backup for DatabaseForDiffBackups. Having run the command, you should see results similar to those shown in Figure 7-5. Figure 7-5: Native differential T-SQL backup script message output. Figure 7-6 summarizes the storage requirements and execution time metrics for our second differential backup, compared to the first differential backup. 222 223 Chapter 7: Differential Backup and Restore Differential Backup Name Number of Rows Execution Time (S) Storage Required (MB) DatabaseForDiffBackups_Diff_ Native_1.bak DatabaseForDiffBackups_Diff_ Native_2.bak Figure 7-6: Differential backup statistics, Part 2. So, compared to the first differential backup, the second one contains 11 times more rows, took about five times longer to execute, and takes up over six times more space. This all seems to make sense; don't forget that backup times and sizes don't grow linearly based on the number of records, since every backup includes in it, besides data, headers, backup information, database information, and other structures. Compressed differential backups By way of comparison, I reran the second differential backup timings, but this time using backup compression. Just as for full backups, the only change we need to make to our backup script is to add the COMPRESSION keyword, which will ensure the backup is compressed regardless of the server's default setting. Remember, this example will only work if you are using SQL Server 2008 R2 or above, or the Enterprise edition of SQL Server Note that, if you want to follow along, you'll need to restore full and first differential backups over the top of the existing DatabaseForDiffBackups database, and then rerun the third data load (alternatively, you could start again from scratch). 223 224 Chapter 7: Differential Backup and Restore USE [master] BACKUP DATABASE [DatabaseForDiffBackups] TO DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Diff_Native_Compressed.bak' WITH NAME = N'DatabaseForDiffBackups-Diff Database Backup', STATS = 10, COMPRESSION Listing 7-6: Compressed differential backup. In my test, the compressed backup offered very little advantage in terms of execution time, but substantial savings in disk space, as shown in Figure 7-7. Differential Backup Name Number of Rows Execution Time (S) Storage Required (MB) DatabaseForDiffBackups_Diff_Native_1. bak DatabaseForDiffBackups_Diff_Native_2. bak DatabaseForDiffBackups_Diff_Native_ Compressed.bak Figure 7-7: Differential backup statistics, Part 225 Chapter 7: Differential Backup and Restore Performing Differential Backup Restores We are now ready to perform a RESTORE on each of our differential backup files, once using the SSMS GUI, and once using a T-SQL script. Native GUI differential restore We are going to restore our first differential backup file (_Diff_Native_1.bak) for the DatabaseForDiffBackups database, over the top of the existing database. If you have already removed this database, don't worry; as long as you've still got the backup files, it will not affect the example in this section. In SSMS, right-click on the DatabaseForDiffBackups database, and navigate Tasks Restore Database to reach the General page of the Restore Database configuration screen, shown in Figure 7-8. Figure 7-8: Starting the restore process for DatabaseForDiffBackups. 225 226 Chapter 7: Differential Backup and Restore Notice that the base full backup, and the second differential backups have been autoselected as the backups to restore. Remember that, to return the database to the most recent state possible, we only have to restore the base, plus the most recent differential; we do not have to restore both differentials. However, in this example we do want to restore the first rather than the second differential backup, so deselect the second differential in the list in favor of the first. Now, click over to the Options screen; in the Restore options section, select Overwrite the existing database. That done, you're ready to go; click OK, and the base full backup will be restored, followed by the first differential backup. Let's query our newly restored database in order to confirm that it worked as expected; if so, we should see a count of 100,000 rows with a message of Initial Data Load, and 10,000 rows with a message of Second Data Load, as confirmed by the output of Listing 7-7 (the message text is cropped for formatting purposes). USE [DatabaseForDiffBackups] SELECT Message, COUNT(Message) AS Row_Count FROM MessageTable GROUP BY Message Message Row_Count Initial Data Load: This is the first set of data Second Data Load: This is the second set of data (2 row(s) affected) Listing 7-7: Query to confirm that the differential restore worked as expected. 226 227 Chapter 7: Differential Backup and Restore Native T-SQL differential restore We're now going to perform a second differential restore, this time using T-SQL scripts. While we could perform this operation in a single script, we're going to split it out into two scripts so that we can see, more clearly than we could during the GUI restore, the exact steps of the process. Once again, our restore will overwrite the existing DatabaseForDiffBackups database. The first step will restore the base backup, containing 100,000 rows, and leave the database in a "restoring" state. The second step will restore the second differential backup file (_Diff_Native_2.bak), containing all the rows changed or added since the base backup (10,000 rows from the second data load and another 100,000 from the third), and then recover the database, returning it to a usable state. Without further ado, let's restore the base backup file, using the script in Listing 7-8. USE [master] RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Full_Native.bak' WITH NORECOVERY, STATS = 10 Listing 7-8: Base full database backup restore. Notice the WITH NORECOVERY argument, meaning that we wish to leave the database in a restoring state, ready to receive further backup files. Also note that we did not specify REPLACE since it is not needed here, since DatabaseForDiffBackups is a SIMPLE recovery model database. Having executed the script, refresh your object explorer window and you should see that the DatabaseForDiffBackups database now displays a Restoring message, as shown in Figure 228 Chapter 7: Differential Backup and Restore Figure 7-9: The DatabaseForDiffBackups database in restoring mode. We're now ready to run the second RESTORE command, shown in Listing 7-9, to get back all the data in the second differential backup (i.e. all data that has changed since the base full backup was taken) and bring our database back online. USE [master] RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Diff_Native_2.bak' WITH STATS = 10 Listing 7-9: Native T-SQL differential restore. We don't explicitly state the WITH RECOVERY option here, since RECOVERY is the default action. By leaving it out of our differential restore we let SQL Server know to recover the database for regular use. 228 229 Chapter 7: Differential Backup and Restore Once this command has successfully completed, our database will be returned to the state it was in at the time we started the second differential backup. As noted earlier, this is actually no reason, other than a desire here to show each step, to run Listings 7-8 and 7-9 separately. We can simply combine them and complete the restores process in a single script. Give it a try, and you should see a Messages output screen similar to that shown in Figure Figure 7-10: Native differential restore results. Remember that the differential backup file contained slightly more data than the base full backup, so it makes sense that a few more pages are processed in the second RESTORE. The faster backup time for the differential backup compared to the base full backup, even though the former had more data to process, can be explained by the higher overhead attached to a full restore; it must prepare the data and log files and create the space for the restore to take place, whereas the subsequent differential restore only needs to make sure there is room to restore, and start restoring data. 229 230 Chapter 7: Differential Backup and Restore Everything looks good, but let's put on our "paranoid DBA" hat and double-check. Rerunning Listing 7-7 should result in the output shown in Figure Figure 7-11: Data verification results from native T-SQL differential restore. Restoring compressed differential backups Listing 7-10 shows the script to perform the same overall restore process as the previous section, but using the compressed differential backup file. USE [master] RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK='C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Full_Native.bak' WITH NORECOVERY RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK='C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Diff_Native_Compressed.bak' WITH RECOVERY Listing 7-10: Restoring a compressed native differential backup. 230 231 Chapter 7: Differential Backup and Restore As you can see, it's no different than a normal differential restore. You don't have to include any special options to let SQL Server know the backup is compressed. That information is stored in the backup file headers, and SQL Server will know how to handle the compressed data without any special instructions from you in the script! You will undoubtedly see that the restore took just a bit longer that the uncompressed backup would take to restore. This is simply because more work is going on to decompress the data and it, and that extra step, cost CPU time, just as an encrypted backup file would also take slightly longer to restore. Forcing Failures for Fun Having scored several notable victories with our differential backup and restore processes, it's time to taste the bitter pill of defeat. Think of it as character building; it will toughen you up for when similar errors occur in the real world! We'll start with a possible error that could occur during the backup part of the process, and then look at a couple of possible errors that could plague your restore process. Missing the base Go ahead and run the script in Listing It creates a brand new database, then attempts to take a differential backup of that database. USE [master] CREATE DATABASE [ForcingFailures] ON PRIMARY ( NAME = N'ForcingFailures', FILENAME = N'C:\SQLDATA\ForcingFailures.mdf', SIZE = 5120KB, FILEGROWTH = 1024KB ) LOG ON ( NAME = N'ForcingFailures_log', FILENAME = N'C:\SQLDATA\ForcingFailures_log.ldf', SIZE = 1024KB, FILEGROWTH = 10%) 231 232 Chapter 7: Differential Backup and Restore BACKUP DATABASE [ForcingFailures] TO DISK = N'C:\SQLBackups\Chapter7\ForcingFailures_Diff.bak' WITH DIFFERENTIAL, STATS = 10 Listing 7-11: A doomed differential backup. If you hadn't already guessed why this won't work, the error message below will leave you in no doubt. Msg 3035, Level 16, State 1, Line 2 Cannot perform a differential backup for database "ForcingFailures", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option. Msg 3013, Level 16, State 1, Line 2 BACKUP DATABASE is terminating abnormally. We can't take a differential backup of this database without first taking a full database backup as the base from which to track subsequent changes! Running to the wrong base Let's perform that missing full base backup of our ForcingFailures database, as shown in Listing USE [master] BACKUP DATABASE [ForcingFailures] TO DISK = N'C:\SQLBackups\Chapter7\ForcingFailures_Full.bak' WITH STATS = 10 Listing 7-12: Base full backup for ForcingFailures database. 232 233 Chapter 7: Differential Backup and Restore We are now fully prepared for some subsequent differential backups. However, unbeknown to us, someone sneaks in and performs a second full backup of the database, in order to restore it to a development server. USE [master] BACKUP DATABASE [ForcingFailures] TO DISK = N'C:\SQLBackups\ForcingFailures_DEV_Full.bak' WITH STATS = 10 Listing 7-13: A rogue full backup of the ForcingFailures database. Back on our production system, we perform a differential backup. USE [master] BACKUP DATABASE [ForcingFailures] TO DISK = N'C:\SQLBackups\Chapter7\ForcingFailures_Diff.bak' WITH DIFFERENTIAL, STATS = 10 Listing 7-14: A differential backup of the ForcingFailures database. Some time later, we need to perform a restore process, over the top of the existing (FULL recovery model) database, so prepare and run the appropriate script, only to get a nasty surprise. USE [master] RESTORE DATABASE [ForcingFailures] FROM DISK = N'C:\SQLBackups\Chapter7\ForcingFailures_Full.bak' WITH NORECOVERY, REPLACE, STATS = 234 Chapter 7: Differential Backup and Restore RESTORE DATABASE [ForcingFailures] FROM DISK = N'C:\SQLBackups\Chapter7\ForcingFailures_Diff.bak' WITH STATS = 10 Processed 176 pages for database 'ForcingFailures', file 'ForcingFailures' on file 1. Processed 1 pages for database 'ForcingFailures', file 'ForcingFailures_log' on file 1. RESTORE DATABASE successfully processed 177 pages in seconds ( MB/sec). Msg 3136, Level 16, State 1, Line 2 This differential backup cannot be restored because the database has not been restored to the correct earlier state. Msg 3013, Level 16, State 1, Line 2 RESTORE DATABASE is terminating abnormally. Listing 7-15: A failed differential restore of the ForcingFailures database. Due to the "rogue" second full backup, our differential backup does not match our base full backup. As a result, the differential restore operation fails and the database is left in a restoring state. This whole mess could have been averted if that non-scheduled full backup had been taken as a copy-only backup, since this would have prevented SQL Server assigning it as the new base backup for any subsequent differentials. However, what can we do at this point? Well, the first step is to examine the backup history in the msdb database to see if we can track down the rogue backup, as shown in Listing USE [MSDB] SELECT bs.type, bmf.physical_device_name, bs.backup_start_date, bs.user_name FROM dbo.backupset bs INNER JOIN dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id WHERE bs.database_name = 'ForcingFailures' ORDER BY bs.backup_start_date ASC Listing 7-16: Finding our rogue backup. 234 235 Chapter 7: Differential Backup and Restore This query will tell us the type of backup taken (D = full database, I = differential database, somewhat confusingly), the name and location of the backup file, when the backup was started, and who took it. We can check to see if that file still exsists in the designated directory and use it to restore our differential backup (we can also roundly castigate whoever was responsible, and give them a comprehensive tutorial on use of copy-only backups). If the non-scheduled full backup file is no longer in that location and you are unable to track it down, then there is not a lot you can do at this point, unless you are also taking transaction log backups for the database. If not, you'll simply have to recover the database as it exists, as shown in Listing 7-17, and deal with the data loss. USE [master] RESTORE DATABASE [ForcingFailures] WITH RECOVERY Listing 7-17: Bringing our database back online. Recovered, already For our final example, we'll give our beleaguered ForcingFailures database a rest and attempt a differential restore on DatabaseForDiffBackups, as shown in Listing See if you can figure out what is going to happen before executing the command. USE [master] RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Full_Native.bak' WITH REPLACE, STATS = 10 RESTORE DATABASE [DatabaseForDiffBackups] FROM DISK = N'C:\SQLBackups\Chapter7\DatabaseForDiffBackups_Diff_Native_2.bak' WITH STATS = 10 Listing 7-18: Spot the mistake in the differential restore script. 235 236 Chapter 7: Differential Backup and Restore This script should look very familiar to you but there is one small omission from this version which will prove to be very important. Whether or not you spotted the error, go ahead and execute it, and you should see output similar to that shown in Figure Figure 7-12: No files ready to roll forward. The first RESTORE completes successfully, but the second one fails with the error message "The log or differential backup cannot be restored because no files are ready to rollforward." The problem is that we forgot to include the NORECOVERY argument in the first RESTORE statement. Therefore, the full backup was restored and database recovery process proceeded as normal, to return the database to an online and usable state. At this point, the database is not in a state where it can accept further backups. If you see this type of error when performing a restore that takes more than one backup file, differential, or log, you now know that there is a possibility that a previous RESTORE statement on the database didn't include the NORECOVERY argument that would allow for more backup files to be processed. 236 237 Chapter 7: Differential Backup and Restore Summary We have discussed when and why you should be performing differential backups. Differential backups, used properly, can help a DBA make database restores a much simpler and faster process than they would be with other backup types. In my opinion, differential backups should form an integral part of the daily backup arsenal. If you would like more practice with these types of backups, please feel free to modify the database creation, data population and backup scripts provided throughout this chapter. Perhaps you can try a differential restore of a database and move the physical data and log files to different locations. Finally, we explored some of the errors that can afflict differential backup and restore; if you know, up front, the sort of errors you might see, you'll be better armed to deal with them when they pop up in the real world. We couldn't cover every possible situation, of course, but knowing how to read and react to error messages will save you time and headaches in the future. 237 238 Chapter 8: Database Backup and Restore with SQL Backup Pro This chapter will demonstrate how to perform each of the types of database backup we've discussed in previous chapters (namely full and differential database backups and log backups), using Red Gate's SQL Backup Pro tool. At this stage, knowledge of the basic characteristics of each type of backup is assumed, based on prior chapters, in order to focus on the details of capturing the backups using either the SQL Backup GUI or SQL Backup scripts. One of the advantages of using such a tool is the relative ease with which backups can be automated a vital task for any DBA. We'll discuss a basic SQL Backup script that will allow you to take full, log, or differential backups of all selected databases on a SQL Server instance, store those backups in a central location, and receive notifications of any failure in the backup operation. Preparing for Backups As usual, we need to create our sample database, tables, and data. Listing 8-1 shows the script to create a DatabaseForSQLBackups database. USE master go CREATE DATABASE [DatabaseForSQLBackups] ON PRIMARY ( NAME = N'DatabaseForSQLBackups', FILENAME = N'C:\SQLData\DatabaseForSQLBackups.mdf', SIZE = KB, FILEGROWTH = KB ) LOG ON ( NAME = N'DatabaseForSQLBackups_log' 238 239 Chapter 8: Database Backup and Restore with SQL Backup Pro, FILENAME = N'C:\SQLData\DatabaseForSQLBackups_log.ldf', SIZE = KB, FILEGROWTH = 10240KB ) ALTER DATABASE [DatabaseForSQLBackups] SET RECOVERY SIMPLE Listing 8-1: Creating DatabaseForSQLBackups. Notice that we set the database, initially at least, to SIMPLE recovery model. Later we'll want to perform both differential database backups and log backups, so our main, operational model for the database will be FULL. However, as we'll soon be performing two successive data loads of one million rows each, and we don't want to run into the problem of bloating the log file (as described in the Troubleshooting Log Issues section of Chapter 5), we're going to start off in SIMPLE model, where the log will be auto-truncated, and only switch to FULL once these initial "bulk loads" have been completed. Listing 8-2 shows the script to create our two, familiar, sample tables, and then load MessageTable1 with 1 million rows. USE [DatabaseForSQLBackups]] USE [DatabaseForSQLBackups] 239 240 Chapter 8: Database Backup and Restore with SQL Backup Pro MessageTable1 VALUES GETDATE() ) Listing 8-2: Creating sample tables and initial million row data load for MessageTable1. Take a look in the C:\SQLData folder, and you should see that our data and log files are still at their initial sizes; the data file is approximately 500 MB in size (and is pretty much full), and the log file is still 100 MB in size. Therefore we have a total database size of about 600 MB. It's worth noting that, even if we'd set the database in FULL recovery model, the observed behavior would have been the same, up to this point. Don't forget that a database is only fully engaged in FULL recovery model after the first full backup, so the log may still have been truncated during the initial data load. If we'd performed a full backup before the data load, the log would not have been truncated and would now be double that of our data file, just over 1 GB. Of course, this means that the log file would have undergone many auto-growth events, since we only set it to an initial size of 100 MB and to grow in 10 MB steps. 240 241 Chapter 8: Database Backup and Restore with SQL Backup Pro Full Backups In this section, we'll walk through the process of taking a full backup of the example DatabaseForSQLBackups database, using both the SQL Backup GUI, and SQL Backup T-SQL scripts. In order to follow through with the examples, you'll need to have installed the Red Gate SQL Backup Pro GUI on your client machine, registered your test SQL Server instances, and installed the server-side components of SQL Backup. If you've not yet completed any part of this, please refer to Appendix A, for installation and configuration instructions. SQL Backup Pro full backup GUI method Open the SQL Backup management GUI, expand the server listing for your test server, in the left pane, right-click the DatabaseForSQLBackups database and select Back Up from the menu to enter Step 1 (of 5) in the Back Up wizard. The left pane lets you select the server to back up (which will be the test server housing our database) and the right pane allows you to select a backup template that you may have saved on previous runs through the backup process (at Step 5). Since this is our first backup, we don't yet have any templates, so we'll discuss this feature later. Step 2 is where we specify the database(s) to back up (DatabaseForSQLBackups will be preselected) and the type of backup to be performed (by default, Full). In this case, since we do want to perform a full backup, and only of the DatabaseForSQLBackups database, we can leave this screen exactly as it is (see Figure 8-1). 241 242 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-1: SQL Backup Configuration, Step 2. A few additional features to note on this screen are as follows: we can take backups of more than one database at a time; a useful feature when you need one-time backups of multiple databases on a system if we wish to backup most, but not all, databases on a server, we can select the databases we don't wish to backup, and select Exclude these from the top drop-down the Filter list check box allows you to screen out any databases that are not available for the current type of backup; for example, this would ensure that you don't attempt to take a log backup of a database that is running under the SIMPLE recovery model. Step 3 is where we will configure settings to be used during the backup. For this first run through, we're going to focus on just the central Backup location portion of this screen, for the moment, and the only two changes we are going to make are to the backup file 242 243 Chapter 8: Database Backup and Restore with SQL Backup Pro location and name. Adhering closely to the convention used throughout, we'll place the backup file in the C:\SQLBackups\Chapter8\ folder and call it DatabaseForSQLBackups_ Full_1.sqb. Notice the.sqb extension that denotes this as a SQL Backup-generated backup file. Once done, the screen will look as shown in Figure 8-2. Figure 8-2: SQL Backup Step 3 configuration. 243 244 Chapter 8: Database Backup and Restore with SQL Backup Pro There are two options offered below the path and name settings in the Backup location section that allow us to "clean up" our backup files, depending on preferences. Overwrite existing backup files Overwrite any backup files in that location that share the same name. Delete existing backup files Remove any files that are more than x days old, or remove all but the latest x files. We also have the option of cleaning these files up before we start a new backup. This is helpful if the database backup files are quite large and wouldn't fit on disk if room was not cleared beforehand. However, be sure that any backups targeted for removal have first been safely copied to another location. The top drop-down box of this screen offers the options to split or mirror a backup file (which we will cover at the end of this chapter). The Network copy location section allows us to copy the finished backup to a second network location, after it has completed. This is a good practice, and you also get the same options of cleaning up the files on your network storage. What you choose here doesn't have to match what you chose for your initial backup location; for example, you can store just one day of backups on a local machine, but three days on your network location. Backup compression and encryption Step 4 is where we configure some of the most useful features of SQL Backup, including backup compression. We'll compare how SQL Backup compression performs against the native uncompressed and native compressed backups that we investigated in previous chapters. For the time being, let's focus on the Backup processing portion of this screen, where we configure backup compression and backup encryption. 244 245 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-3: SQL Backup Step 4 configuration. Backup compression is enabled by default, and there are four levels of compression, offering progressively higher compression and slower backup speeds. Although the compressed data requires lower levels of disk I/O to write to the backup file, the overriding factor is the increased CPU time required in compressing the data in the first place. As you can probably guess, picking the compression level is a balancing act; the better the compression, the more disk space will be saved, but the longer the backups run, the more likely are issues with the backup operation. Ultimately, the choice of compression level should be guided by the size of the database and the nature of the data (i.e. its compressibility). For instance, binary data does not compress well, so don't spend CPU time attempting to compress a database full of binary images. In other cases, lower levels of compression may yield higher compression ratios. We can't tell until we test the database, which is where the Compression Analyzer comes in. Go ahead and click on the Compression Analyzer button and start a test against the DatabaseForSQLBackups database. Your figures will probably vary a bit, but you should see results similar to those displayed in Figure 246 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-4: SQL Backup Compression Analyzer. For our database, Levels 1 and 4 offer the best compression ratio, and since a backup size of about 4.5 MB (for database of 550 MB) is pretty good, we'll pick Level 1 compression, which should also provide the fastest backup times. Should I use the compression analyzer for every database? Using the analyzer for each database in your infrastructure is probably not your best bet. We saw very fast results when testing this database, because is it very small compared to most production databases. The larger the database you are testing, the longer this test will take to run. This tool is recommended for databases that you are having compression issues with, perhaps on a database where you are not getting the compression ratio that you believe you should be. 246 247 Chapter 8: Database Backup and Restore with SQL Backup Pro The next question we need to consider carefully is this: do we want to encrypt all of that data that we are writing to disk? Some companies operate under strict rules and regulations, such as HIPAA and SOX, which require database backups to be encrypted. Some organizations just like the added security of encrypted backups, in helping prevent a malicious user getting access to those files. If encryption is required, simply tick the Encrypt backup box, select the level of encryption and provide a secure password. SQL Backup will take care of the rest, but at a cost. Encryption will also add CPU and I/O overhead to your backups and restores. Each of these operations now must go through the "compress encrypt store on disk" process to back up, as well as the "retrieve from disk decrypt decompress" routines, adding an extra step to the both backup and restore processes. Store your encryption password in a safe and secure location! Not having the password to those backup files will stop you from ever being able to restore those files again, which is just as bad as not having backups at all! Our database doesn't contain any sensitive data and we are not going to use encryption on this example. Backup optimization and verification We're going to briefly review what options are available in the Optimization and On completion sections, and some of the considerations in deciding the correct values, but for our demo we'll leave all of them either disabled or at their default settings. 247 248 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-5a: Optimization and On completion options. Depending on the system, some good performance benefits can be had by allowing more threads to be used in the backup process. When using a high-end CPU, or many multi-core CPUs, we can split hefty backup operations across multiple threads, each of which can run in parallel on a separate core. This can dramatically reduce the backup processing time. The Maximum transfer size and Maximum data block options can be important parameters in relation to backup performance. The transfer size option dictates the maximum size of each block of memory that SQL Backup will write to the backup file, on disk. The default value is going to be 1,024 KB (i.e. 1 MB), but if SQL Server is experiencing memory pressure, it may be wise to lower this value so that SQL Backup can write to smaller memory blocks, and therefore isn't fighting so hard, with other applications, for memory. If this proves necessary, we can add a DWORD registry key to define a new default value on that specific machine. All values in these keys need to be in multiples of 65,536 (64 KB), up to a maximum of 1,048,576 (1,024 KB, the default). 248 249 Chapter 8: Database Backup and Restore with SQL Backup Pro HKEY_LOCAL_MACHINE\SOFTWARE\Red Gate\SQL Backup\ BackupSettingsGlobal\<instance name>\maxtransfersize (32-bit) HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Red Gate\SQL Backup\ BackupSettingsGlobal\<instance name>\maxtransfersize (64-bit) The maximum data block option (by default 2 MB) determines the size of the actual data blocks on disk, when storing backup data. For optimal performance, we want this value to match or fit evenly in the block size of the media to which the files are being written; if the data block sizes overlap the media block boundaries, it may result in performance degradation. Generally speaking, SQL Server will automatically select the correct block size based on the media. However, if necessary, we can create a registry entry to overwrite the default value: HKEY_LOCAL_MACHINE\SOFTWARE\Red Gate\SQL Backup\ BackupSettingsGlobal\<instance name>\maxdatablock (32-bit) HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Red Gate\SQL Backup\ BackupSettingsGlobal\<instance name>\maxdatablock (64-bit) The network resilience options determine how long to wait before retrying a failed backup operation, and how many times to retry; when the retry count has been exhausted, a failure message will be issued. We want to be able to retry a failed backup a few times, but we don't want to retry so many times that a problem in the network or disk subsystem is masked for too long; in other words, rather than extend the backup operation as it retries again and again, it's better to retry only a few times and then fail, therefore highlighting the network issue. This is especially true when performing full backup operations on large databases, where we'll probably want to knock that retry count down from a default of 10 to 2 3 times. Transaction log backups, on the other hand, are typically a short process, so retrying 10 times is not an unreasonable number in this case. 249 250 Chapter 8: Database Backup and Restore with SQL Backup Pro The On completion section gives us the option to verify our backup files after completion and send an once the operations are complete. The verification process is similar to the CHECKSUM operation in native SQL Server backups (see Chapter 2). SQL Backup will make sure that all data blocks have been written correctly and that you have a valid backup file. Note that, at the time of this book going to print, Red Gate released SQL Backup Pro Version 7, which expands the backup verification capabilities to allow standard backup verification (BACKUP WITH CHECKSUM) and restore verification (RESTORE VERIFYONLY), as shown in Figure 8-5b. We'll revisit the topic of backup verification with SQL Backup later in the chapter. Fig 8-5b: New Backup verification options in SQL Backup 7 The notification can be set to alert on any error, warning, or just when the backup has completed in any state including success. 250 251 Chapter 8: Database Backup and Restore with SQL Backup Pro Running the backup Step 5 of the wizard is simply a summary of the options we selected for our SQL Backup operation. Check this page carefully, making sure each of the selected options looks as it should. At the bottom is a check box that, when enabled, will allow us to save this backup operation configuration as a template, in which case we could then load it on Step 1 of the wizard and then skip straight to Step 5. We are not going to save this example as a template. The Script tab of this step allows us to view the T-SQL code that has been generated to perform this backup. Click on the Finish button and the backup will be executed on your server, and a dialog box will report the progress of the operation. Once it completes, the new window will (hopefully) display two green check marks, along with some status details in the lower portion of the window, as seen in Figure 8-6. Database size : MB Compressed data size: MB Compression rate : 99.40%-6: Status details from a successful SQL Backup GUI backup operation. Backup metrics: SQL Backup Pro vs. native vs. native compressed Figure 8-6 reports an initial database size of 600 MB, a backup file size of 3.6 MB and backup time of about 3 seconds. Remember, these metrics are ballpark figures and won't match exactly the ones you get on your system. Don't necessarily trust SQL Backup on the backup size; double-check it in the backup location folder (in my case it was 3.7 MB; pretty close!). 251 252 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-7 compares these metrics to those obtained for identical operations using native backup, compressed native backup, as well as SQL Backup using a higher compression level (3). Backup Operation Backup File Size in MB (on disk) Backup Time (seconds) % Difference compared to Native Full (+ = bigger / faster) Size Speed Native Full Native Full with Compression SQL Backup Full Compression Level SQL Backup Full Compression Level Figure 8-7: Comparing full backup operations for DatabaseForSQLBackups (0.5 GB). SQL Backup (Compression Level 1) produces a backup file that requires less than 1% of the space required for the native full backup file. To put this into perspective, in the space that we could store 3 days' worth of full native backups using native SQL Server, we could store nearly 400 SQL Backup files. It also increases the backup speed by about 84%. How does that work? Basically, every backup operation reads the data from the disk and writes to a backup-formatted file, on disk. By far the slowest part of the operation is writing the data to disk. When SQL Backup (or native compression) performs its task, it is reading all of the data, passing it through a compression tool and writing much smaller segments of data, so less time is used on the slowest operation. In this test, SQL Backup with Compression Level 1 outperformed the native compressed backup. However, for SQL Backup with Compression Level 3, their performance was almost identical. It's interesting to note that while Level 3 compression should, in theory, have resulted in a smaller file and a longer backup time, compared to Level 1, we in fact saw a larger file and a longer backup time! This highlights the importance of selecting the compression level carefully. 252 253 Chapter 8: Database Backup and Restore with SQL Backup Pro SQL Backup Pro full backup using T-SQL We're now going to perform a second full backup our DatabaseForSQLBackups database, but this time using a SQL Backup T-SQL script, which will look similar to the one we saw on the Script tab of Step 5 of the GUI wizard. Before we do so, let's populate MessageTable2 with a million rows of data of the same size and structure as the data in MessageTable1, so that our database file should be hovering somewhere around 1 GB in size. USE [DatabaseForSQL 8-3: Populating the MessageTable2 table. Take a look in the SQLData folder, and you'll now see that the data file is now about 1GB in size and the log file is still around 100 MB. Now that we have some more data to work with, Listing 8-4 shows the SQL Backup T-SQL script to perform a second full backup of our DatabaseForSQLBackups database. EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_2.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, THREADCOUNT = 2, COMPRESSION = 1"' Listing 8-4: Second full backup of DatabaseForSQLBackups using SQL Backup T-SQL. 253 254 Chapter 8: Database Backup and Restore with SQL Backup Pro The first thing to notice is that the backup is executed via an extended stored procedure called sqlbackup, which resides in the master database and utilizes some compiled DLLs that have been installed on the server. We pass to this stored procedure a set of parameters as a single string, which provides the configuration settings that we wish to use for the backup operation. Some of the names of these settings are slightly different from what we saw for native backups but, nevertheless, the script should look fairly familiar. We see the usual BACKUP DATABASE command to signify that we are about to backup the DatabaseForSQLBackups database. We also see the TO DISK portion and the path to where the backup file will be stored. The latter portion of the script sets values for some of the optimization settings that we saw on Step 4 of the GUI wizard. These are not necessary for taking the backup, but it's useful to know what they do. DISKRETRYINTERVAL One of the network resiliency options; amount of time in seconds SQL Backup will wait before retrying the backup operation, in the case of a failure. DISKRETRYCOUNT Another network resiliency option; number of times a backup will be attempted in the event of a failure. Bear in mind that the more times we retry, and the longer the retry interval, the more extended will be the backup operation. THREADCOUNT Using multiple processors and multiple threads can offer a huge performance boost when taking backups. The only setting that could make much of a difference is the threadcount parameter, since on powerful multi-processor machines it can spread the load of backup compression over multiple processors. Go ahead and run the script now and the output should look similar to that shown in Figure 255 Chapter 8: Database Backup and Restore with SQL Backup Pro Backing up DatabaseForSQLBackups (full database) to: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_2.sqb Database size : GB Compressed data size: MB Compression rate : 99.39%-8: SQL Backup T-SQL script results. We are now presented with two result sets. The first result set, shown in Figure 8-8, provides our detailed backup metrics, including database size, and the size and compression rate of the resulting backup file. The second result set (not shown) gives us the exit code from SQL Backup, an error code from SQL Server and a list of files used in the command. We can see from the first result set that the new backup file size is just under 7 MB, and if we take a look in our directory we can confirm this. Compared to our first Red Gate backup, we can see that this is a bit less than double the original file size, but we will take a closer look at the numbers in just a second. After these metrics, we see the number of pages processed. Referring back to Chapter 3 confirms that this is the same number of pages as for the equivalent native backup, which is as expected. We also see that the backup took just 6 seconds to complete which, again, is roughly double the figure for the first full backup. Figure 8-9 compares these metrics to those obtained for native backups, and compressed native backups. 255 256 Chapter 8: Database Backup and Restore with SQL Backup Pro Backup Operation Backup File Size in MB (on disk) Backup Time (seconds) % Difference compared to Native Full (+ = bigger / faster) Size Speed Native Full Native Full with Compression SQL Backup Full Compression Level SQL Backup Full Compression Level Figure 8-9: Comparing full backup operations for DatabaseForSQLBackups (1 GB). All the results are broadly consistent with those we achieved for the first full backup. It confirms that for this data, SQL Backup Compression Level 1 is the best-performing backup, both in terms of backup time and backup file size. Log Backups In this section, we'll walk through the process of taking log backups of the example DatabaseForSQLBackups database, again using either the SQL Backup GUI, or a SQL Backup T-SQL script. Preparing for log backups At this point, the database is operating in SIMPLE recovery model, and we cannot take log backups. In order to start taking log backups, and prevent the log file from being autotruncated from this point on, we need to switch the recovery model of the database to FULL, and then take another full backup, as shown in Listing 257 Chapter 8: Database Backup and Restore with SQL Backup Pro USE master ALTER DATABASE [DatabaseForSQLBackups] SET RECOVERY FULL EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, THREADCOUNT = 2, COMPRESSION = 1"' Listing 8-5: Switching DatabaseForSQLBackups to FULL recovery and taking a full backup. The database is now operating in FULL recovery and this backup file, DatabaseFor SQLBackups_Full_BASE.sqb, will be the one to restore, prior to restoring any subsequent log backups. Finally, let's perform a third, much smaller, data load, adding ten new rows to each message table, as shown in Listing 8-6. USE [DatabaseForSQLBackups] INSERT INTO dbo.messagetable1 VALUES ( '1st set of short messages for MessageTable1', GETDATE() ) this is just to help with our point-in-time restore (Chapter 8) PRINT GETDATE() 257 258 Chapter 8: Database Backup and Restore with SQL Backup Pro --Dec :33PM INSERT INTO dbo.messagetable2 VALUES ( '1st set of short messages for MessageTable2', GETDATE() ) Listing 8-6: Loading new messages into our message tables. SQL Backup Pro log backups In this section, rather than perform two separate backups, one via the GUI and one via a SQL Backup script, we're just going to do one transaction log backup and let you decide whether you want to perform it via the GUI or the script. If you prefer the GUI approach, open up SQL Backup and start another a backup operation for the DatabaseForSQLBackups database. On Step 2 of the Wizard, select Transaction Log as the Backup type. You'll see that the option Remove inactive entries from transaction log is auto-checked. This means that the transaction log can be truncated upon backup, making the space in the log available for future entries. This is the behavior we want here, so leave it checked. However, if we were to uncheck it, we could take a log backup that leaves the transactions in the file (similar to a NO_TRUNCATE native backup or a copy-only log backup in that it won't affect future backups). The Databases to back up section may list several databases that are grayed out; these will be the ones that are ineligible for transaction log backups, usually because they are operating in SIMPLE recovery model, rather than FULL or BULK LOGGED. Checking the Filter list button at the bottom will limit the list to only the eligible databases. Make sure that only the DatabaseForSQLBackups database is selected, then click Next. 258 259 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-10: Selecting the type of backup and the target database. On Step 3, we set the name and location for our log backup file. Again, adhering to the convention used throughout the book, we'll place the backup file in the C:\SQLBackups\ Chapter8\ folder and call it DatabaseForSQLBackups_Log_1.sqb. Step 4 of the wizard is where we will configure the compression, optimization and resiliency options of our transaction log backup. The Compression Analyzer only tests full backups and all transaction logs are pretty much the same in terms of compressibility. We'll choose Compression Level 1, again, but since our transaction log backup will, in this case, process only a small amount of data, we could select maximum compression (Level 4) without affecting the processing time significantly. We're not going to change any of the remaining options on this screen, and we have discussed them all already, so go ahead and click on Next. If everything looks as expected on the Summary screen, click on Finish to start the backup. If all goes well, within a few seconds the appearance of two green checkmarks will signal that all pieces of the operation have been completed, and some backup metrics will be displayed. If you prefer the script-based approach, Listing 8-7 shows the SQL Backup T-SQL script that does directly what our SQL Backup GUI did under the covers. 259 260 Chapter 8: Database Backup and Restore with SQL Backup Pro USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_1.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 1, THREADCOUNT = 2"' Listing 8-7: A transction log backup, using SQL Backup T-SQL. Whichever way you decide to execute the log backup, you should see backup metrics similar to those shown in Figure Backing up DatabaseForSQLBackups (transaction log) to: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_1.sqb Backup data size : MB Compressed data size: MB Compression rate : 86.13% Processed 6261 pages for database 'DatabaseForSQLBackups', file 'DatabaseForSQLBackups_log' on file 1. BACKUP LOG successfully processed 6261 pages in seconds ( MB/sec). SQL Backup process ended. Figure 8-11: Backup metrics for transaction log backup on DatabaseForSQLBackups. The backup metrics report a compressed backup size of 7 MB, which we can verify by checking the actual size of the file in the C:\SQLBackups\Chapter8\ folder, and a processing time of about 0.7 seconds. Once again, Figure 8-12 compares these metrics to those obtained for native backups, compressed native backups and for SQL Backup with a higher compression level. 260 261 Chapter 8: Database Backup and Restore with SQL Backup Pro Backup Operation Backup File Size in MB (on disk) Backup Time (seconds) % Difference compared to Native (+ = bigger / faster) Size Speed Native Log Native Log with Compression SQL Backup Log Compression Level SQL Backup Log Compression Level Figure 8-12: Comparing log backup operations for DatabaseForSQLBackups. In all cases, there is roughly a 90% saving in disk space for compressed backups, over the native log backup. In terms of backup performance, native log backups, native compressed log backups, and SQL Backup Compression Level 1 all run in sub-second times, so it's hard to draw too many conclusions except to say that for smaller log files the time savings are less significant than for full backups, as would be expected. SQL Backup Compression Level 3 does offer the smallest backup file footprint, but the trade-off is backup performance that is significantly slower than for native log backups. Differential Backups Finally, let's take a very quick look at how to perform a differential database backup using SQL Backup. For full details on what differential backups are, and when they can be useful, please refer back to Chapter 7. First, simply adapt and rerun Listing 8-6 to insert a load of 100,000 rows into each of the message tables (also, adapt the message text accordingly). 261 262 Chapter 8: Database Backup and Restore with SQL Backup Pro Then, if you prefer the GUI approach, jump-start SQL Backup and work through in the exactly the same way as described for the full backup. The only differences will be: at Step 2, choose Differential as the backup type at Step 3, call the backup file DatabaseForSQLBackups_Diff_1.sqb and locate it in the C:\SQLBackups\Chapter8 folder at Step 4, choose Compression Level 1. If you prefer to run a script, the equivalent SQL Backup script is shown in Listing 8-8. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Diff_1.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 1, THREADCOUNT = 2, DIFFERENTIAL"' Listing 8-8: SQL Backup differential T-SQL backup code. Again, there is little new here; the command is more or less identical to the one for full backups, with the addition of the DIFFERENTIAL keyword to the WITH clause which instructs SQL Server to only backup any data changed since the last full backup was taken. Backing up DatabaseForSQLBackups (differential database) to: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Diff_1.sqb Backup data size : MB Compressed data size: MB Compression rate : 99.04% Processed pages for database 'DatabaseForSQLBackups', file 'DatabaseForSQLBackups' on file 1. Processed 2 pages for database 'DatabaseForSQLBackups', file 'DatabaseForSQLBackups_log' on file 1. BACKUP DATABASE WITH DIFFERENTIAL successfully processed pages in seconds ( MB/sec). SQL Backup process ended. Figure 8-13: S QL Backup differential metrics (Compression Level 1). 262 263 Chapter 8: Database Backup and Restore with SQL Backup Pro Let's do a final metrics comparison for a range of differential backups. Backup Operation Backup File Size in MB (on disk) Backup Time (seconds) % Difference compared to Native (+ = bigger / faster) Size Speed Native Log Native Log with Compression SQL Backup Log Compression Level SQL Backup Log Compression Level Figure 8-14: Comparing differential backup operations for DatabaseForSQLBackups. Once again, the space and time savings from compressed backup are readily apparent, with SQL Backup Compression Level 1 emerging as the most efficient on both counts, in these tests. Building a reusable and schedulable backup script One of the objectives of this book is to provide the reader with a jumping-off point for their SQL Server backup strategy. We'll discuss how to create a SQL Backup script that can be used in a SQL Agent job to take scheduled backups of databases, and allow the DBA to: take a backup of selected databases on a SQL Server instance, including relevant system databases 263 264 Chapter 8: Database Backup and Restore with SQL Backup Pro configure the type of backup required (full, differential, or log) store the backup files using the default naming convention set up in Red Gate SQL Backup Pro capture a report of any error or warning codes during the backup operation. Take a look at the script in Listing 8-9, and then we'll walk through all the major sections. USE [master] NVARCHAR(4) -- Conifgure Options Here = N'\\NetworkServer\ShareName\' + + '\' = = N'DatabaseForDiffBackups_SB' = N'DIFF' -- Do Not Modify Below = WHEN N'FULL' THEN N'-SQL "BACKUP DATABASES [' + N'] TO DISK = ''' + N'<AUTO>.sqb'' WITH MAILTO_ONERRORONLY = ''' + N''', DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' WHEN N'LOG' THEN N'-SQL "BACKUP LOGS [' + N'] TO DISK = ''' + N'<AUTO>.sqb'' WITH MAILTO_ONERRORONLY = ''' 264 265 Chapter 8: Database Backup and Restore with SQL Backup Pro + N''', DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' WHEN N'DIFF' THEN N'-SQL "BACKUP DATABASES [' + N'] TO DISK = ''' + N'<AUTO>.sqb'' WITH MAILTO_ONERRORONLY = ''' + N''', DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2, DIFFERENTIAL"' END OUTPUT -- If our backup operations return any errors or warnings execute below IF >= 500 > 0 ) BEGIN -- Raise an error to fail your backup job RAISERROR(N'Backup operation error', 10,1) END Listing 8-9: Reusable database backup code. The script starts by declaring the required variables, and the sets the values of the four confugurable variables, as The backup path for our database backup files. This should be pointed at the centralized storage location. Also notice that the servername variable is used as a subdirectory. It is common practice to separate backup files by server, so this will use each server name as a subdirectory to store that set of backup We always want to know when backups fail. In some environments, manually checking all database backup operations is just not feasible. Having alerts sent to the DBA team on failure is a good measure to have in place. Be sure to 265 266 Chapter 8: Database Backup and Restore with SQL Backup Pro test the setting on each server occasionally, to guarantee that any failure alerts are getting through. Details of how to configure the settings are in Appendix A comma delimited text list that contains the names of any databases that we want to back up. SQL Backup allows you to back up any number of databases at one time with a single command. When backing up every database on a server, we can simply omit this parameter from the script and, in the BACKUP command, replace [' + '] with Used to determine what type of database backup will be taken. In this script there are three choices for this variable to take; full, log, and differential. In the next section of the script, we build the BACKUP commands, using the variables for which we have just configured values. We store the BACKUP command in variable, until we are ready to execute it. Notice that a simple CASE statement is used to determine the type of backup operation to be performed, according to the value stored We don't need to modify anything in this section of the script, unless making changes to the other settings being used in the BACKUP command, such as the compression level. Of course, we could also turn those into configurable parameters. The next section of the script is where we execute our BACKUP command, storing the ExitCode and ErrorCode output parameters in our defined variables, where: ExitCode is the output value from the SQL Backup extended stored procedure. Any number above 0 indicates some sort of issue with the backup execution. ExitCode >= 500 indicates a serious problem with at least one of the backup files and will need to investigate further. ExitCode < 500 is just a warning code. The backup operation itself may have run successfully, but there was some issue that was not critical enough to cause the entire operation to fail. 266 267 Chapter 8: Database Backup and Restore with SQL Backup Pro ErrorCode is the SQL Server return value. A value above 0 is returned only when SQL Server itself runs into an issue. Having an error code returned from SQL Server almost always guarantees a critical error for the entire operation. We test the value of each of these codes and, if a serious problem has occurred, we raise an error to SQL Server so that, if this were run in a SQL Server Agent job, it would guarantee to fail and alert someone, if the job were configured to do so. We do have it set up to send from SQL Backup on a failure, but also having the SQL Agent job alert on failure is a nice safeguard to have in place. What we do in this section is totally customizable and dictated by our needs. Restoring Database Backups with SQL Backup Pro Having performed our range of full, log and differential backups, using SQL Backup, it's time to demonstrate several useful restore examples, namely: restoring to the end of a given transaction log a complete restore, including restores of the tail log backup a restore to a particular point in time within a transaction log file. Preparing for restore In order to prepare for our restore operations, we're going to add a few new rows to MessageTable1, perform a log backup, and then add a few new rows to MessageTable2, as shown in Listing 268 Chapter 8: Database Backup and Restore with SQL Backup Pro USE [DatabaseForSQLBackups] INSERT INTO dbo.messagetable1 VALUES ( 'What is the meaning of life, the Universe and everything?', GETDATE() ) 21 USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_2.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 1, INIT, THREADCOUNT = 2"' USE [DatabaseForSQLBackups] INSERT INTO dbo.messagetable2 VALUES ('What is the meaning of life, the Universe and everything?', GETDATE() ) 21 Listing 8-10: Add 21 rows, take a log backup, add 21 rows. So, to recap, we have 2 million rows captured in the base full backup. We switched the database from SIMPLE to FULL recovery model, added 100,000 rows to our tables, then did a log backup, so the TLog1 backup captures the details of inserting those 100,000 rows. We then added another 200,000 rows and took a differential backup. Differentials capture all the data added since the last full backup, so 300,000 rows in this case. We then added 21 rows and took a second log backup, so the TLog2 backup will capture details of 200,021 inserted rows (i.e. all the changes since the last log backup). Finally, we added another 21 rows which are not currently captured in any backup. The backup scheme seems quite hard to swallow when written out like that, so hopefully Figure 8-15 will make it easier to digest. 268 269 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-15: Current backup scheme. SQL Backup Pro GUI restore to the end of a log backup First, let's see how easy it is to perform a one-off restore via SQL Backup, using the GUI, especially if all the required files are still stored locally, in their original location. We'll then look at the process again, step by step, using scripts. In this first example, we're going to restore the database to the state in which it existed at the time we took the second transaction log backup (DatabaseForSQLBackups_Log_2.sqb). Start up the SQL Backup Restore wizard and, at Step 1, select the required transaction log backup, as shown in Figure In this case, all the required backup files are still stored locally, so they are available from the backup history. As such, we only have to select the last file to be restored and, when we click Next, SQL Backup will calculate which other files it needs to restore first, and will load them automatically. 269 270 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-16: Restore the latest transaction log file. However, if the restore is taking place several days or weeks after the backup, then the files will likely have been moved to a new location, and we'll need to manually locate each of the required files for the restore process before SQL Backup will let us proceed. To do so, select Browse for backup files to restore from the top drop-down box, and then click the Add Files button to locate each of the required files, in turn. We can select multiple files in the same directory by holding down the Ctrl button on the keyboard. We can also add a network location into this menu by using the Add Server button, or by pasting in a full network path in the file name box. Whether SQL Backup locates the files for us, or we do it manually, we should end up with a screen that looks similar to that shown in Figure 271 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-17: Identifying the required files for the restore process. In this example, we need our base full backup and our differential backup files. Note that the availability of the differential backup means we can bypass our first transaction log backup (DatabaseForSQLBackups_Log_1.sqb). However, if for some reason the differential backup was unavailable, then we could still complete the restore process using the full backup followed by both the log files. We're going to overwrite the existing DatabaseForSQLBackups database and leave the data and log files in their original C:\SQLData directory. Note that the handy File locations drop-down is an innovation in SQL Backup 6.5; if using an older version, you'll need to manually fill in the paths using the ellipsis ( ) buttons. 271 272 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-18: Overwrite the existing database. Click Next and, on the following screen, SQL Backup warns us that we've not performed a tail log backup and gives us the option to do so, before proceeding. We do not need to restore the tail of the log as part of this restore process as we're deliberately only restoring to the end of our second log backup file. However, remember that the details of our INSERTs into MessageTable2, in Listing 8-10, are not currently backed up, and we don't want to lose details of these transactions, so we're going to go ahead and perform this tail log backup. Accepting this option catapults us into a log backup operation and we must designate the name and location for the tail log backup, and follow through the process, as was described earlier in the chapter. 272 273 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-19: Backing up the tail of the log. Once complete, we should receive a message saying that the backup of the tail of the transaction log was successful and to click Next to proceed with the restore. Having done so, we re-enter Step 3 of our original restore process, offering a number of database restore options. Figure 8-20: Database restore options. 273 274 Chapter 8: Database Backup and Restore with SQL Backup Pro The first section of the screen defines the Recovery completion state, and we're going to stick with the first option, Operational (RESTORE WITH RECOVERY). This will leave our database in a normal, usable state once the restore process is completed. The other two options allow us to leave the database in a restoring state, expecting more backup files, or to restore a database in Standby mode. We'll cover each of these later in the chapter. The Transaction log section of the screen is used when performing a restore to a specific point in time within a transaction log backup file, and will also be covered later. The final section allows us to configure a few special operations, after the restore is complete. The first option will test the database for orphaned users. An orphaned user is created when a database login has permissions set internally to a database, but that user doesn't have a matching login, either as a SQL login or an Active Directory login. Orphaned users often occur when moving a database between environments or servers, and are especially problematic when moving databases between operationally different platforms, such as between development and production, as we discussed in Chapter 3. Be sure to take care of these orphaned users after each restore, by either matching the user with a correct SQL Server login or by removing that user's permission from the database. The final option is used to send an to a person or a group of people, when the operation has completed and can be configured such that a mail is sent regardless of outcome, or when an error or warning occurs, or on error only. This is a valuable feature for any DBA. Just as we wrote notification into our automated backup scripts, so we also need to know if a restore operation fails for some reason, or reports a warning. This may be grayed out unless the mail server options have been correctly configured. Refer to Appendix A if you want to try out this feature here. Another nice use case for these notifications is when performing time-sensitive restores on VLDBs. We may not want to monitor the restore manually as it may run long into the night. Instead, we can use this feature so that the DBA, and the departments that need the database immediately, get a notification when the restore operation has completed. 274 275 Chapter 8: Database Backup and Restore with SQL Backup Pro Click Next to reach the, by now familiar, Summary screen. Skip to the Script tab to take a sneak preview of the script that SQL Backup has generated for this operation. You'll see that it's a three-step restore process, restoring first the base full backup, then the differential backup, and finally the second log backup (you'll see a similar script again in the next section and we'll go over the full details there). EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE.sqb'' WITH NORECOVERY, REPLACE"' EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Diff_1.sqb'' WITH NORECOVERY"' EXECUTE master..sqlbackup '-SQL "RESTORE LOG [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_2.sqb'' WITH RECOVERY, ORPHAN_CHECK"' Listing 8-11: The SQL Backup script generated by the SQL Backup Wizard. As a side note, I'm a little surprised to see the REPLACE option in the auto-generated script; it's not necessary as we did perform a tail log backup. If everything is as it should be, click Finish and the restore process will start. All steps should show green check marks to let us know that everything finished successfully and some metrics for the restore process should be displayed, a truncated view of which is given in Figure <snip> Restoring DatabaseForSQLBackups (database) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE). SQL Backup process ended. <snip> 275 276 Chapter 8: Database Backup and Restore with SQL Backup Pro Restoring DatabaseForSQLBackups (database) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Diff_1.sqb). SQL Backup process ended. <snip> Restoring DatabaseForSQLBackups (transaction logs) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_2.sqb Processed 0 pages for database 'DatabaseForSQLBackups', file 'DatabaseForSQLBackups' on file 1. Processed pages for database 'DatabaseForSQLBackups', file 'DatabaseForSQLBackups_log' on file 1. RESTORE LOG successfully processed pages in seconds ( MB/sec). No orphaned users detected. SQL Backup process ended. <snip> Figure 8-21: Metrics for SQL Backup restore to end of the second log backup. We won't dwell on the metrics here as we'll save that for a later section, where we compare the SQL Backup restore performance with native restores. Being pessimistic DBAs, we won't believe the protestations of success from the output of the restore process, until we see with our own eyes that the data is as it should be. USE DatabaseForSQLBackups SELECT MessageData, COUNT(MessageData) FROM dbo.messagetable1 GROUP BY MessageData SELECT MessageData, COUNT(MessageData) FROM dbo.messagetable2 GROUP BY MessageData Listing 8-12: Verifying our data. 276 277 Chapter 8: Database Backup and Restore with SQL Backup Pro The result confirms that all the data is there from our full, differential, and second log backups, but that the 21 rows we inserted into MessageTable2 are currently missing. Figure 8-22: Results of data verification. Never fear; since we had the foresight to take a tail log backup, we can get those missing 21 rows back. SQL Backup T-SQL complete restore We're now going to walk through the restore process again, but this time we'll use a SQL Backup script, as shown in Listing 8-13, and we'll perform a complete restore, including the tail log backup. USE master go --step 1: Restore the base full backup EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE.sqb'' WITH NORECOVERY, DISCONNECT_EXISTING, REPLACE"' 277 278 Chapter 8: Database Backup and Restore with SQL Backup Pro --step 2: Restore the diff backup EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Diff_1.sqb'' WITH NORECOVERY"' --step 3: Restore the second log backup EXECUTE master..sqlbackup '-SQL "RESTORE LOG [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_2.sqb'' WITH NORECOVERY"' --step 4: Restore the tail log backup and recover the database EXECUTE master..sqlbackup '-SQL "RESTORE LOG [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_Tail.sqb'' WITH RECOVERY, ORPHAN_CHECK"' Listing 8-13: A complete restore operation with SQL Backup T-SQL. There shouldn't be too much here that is new, but let's go over some of the WITH clause options. DISCONNECT_EXISTING Used in Step 1, this kills any current connections to the database. Without this option, we would need to use functionality similar to that which we built into our native restore script in Chapter 3 (see Listing 4-2). REPLACE This is required here, since we are now working with a new, freshlyrestored copy of the database and we aren't performing a tail log backup as the first step of this restore operation. SQL Server will use the logical file names and paths that are stored in the backup file. Remember that this only works if the paths in the backup file exist on the server to which you are restoring. NORECOVERY Used in Steps 1 3, this tells SQL Server to leave the database in a restoring state and to expect more backup files to be applied. ORPHAN_CHECK Used in Step 4, this is the orphaned user check on the database, after the restore has completed, as described in the previous section. RECOVERY Used in Step 4, this instructs SQL Server to recover the database to a normal usable state when the restore is complete. 278 279 Chapter 8: Database Backup and Restore with SQL Backup Pro Execute the script, then, while it is running, we can take a quick look at the SQL Backup monitoring stored procedure, sqbstatus, a feature that lets us monitor any SQL Backup restore operation, while it is in progress. Quickly open a second tab in SSMS and execute Listing EXEC master..sqbstatus Listing 8-14: Red Gate SQL Backup Pro monitoring stored procedure. The stored procedure returns four columns: the name of the database being restored; the identity of the user running the restore; how many bytes of data have been processed; and the number of compressed bytes that have been produced in the backup file. It can be useful to check this output during a long-running restore the first time you perform it, to gauge compression rates, or to get an estimate of completion time for restores and backups on older versions of SQL Server, where Dynamic Management Views are not available to tell you that information. Once the restore completes, you'll see restore metrics similar to those shown in Figure 8-21, but with an additional section for the tail log restore. If you rerun Listing 8-12 to verify your data, you should find that the "missing" 21 rows in MessageTable2 are back! SQL Backup point-in-time restore to standby In our final restore example, we're going to restore a standby copy of the DatabaseFor- SQLBackups database to a specific point in time in order to attempt to retrieve some accidentally deleted data. Standby servers are commonly used as a High Availability solution; we have a secondary, or standby, server that can be brought online quickly in the event of a failure of the primary server. We can restore a database to the standby server, and then successively 279 280 Chapter 8: Database Backup and Restore with SQL Backup Pro ship over and apply transaction logs, using the WITH STANDBY option, to roll forward the standby database and keep it closely in sync with the primary. In between log restores, the standby database remains accessible but in a read-only state. This makes it a good choice for near real-time reporting solutions where some degree of time lag in the reporting data is acceptable. However, this option is occasionally useful when in the unfortunate position of needing to roll forward through a set of transaction logs to locate exactly where a data mishap occurred. It's a laborious process (roll forward a bit, query the standby database, roll forward a bit further, query again, and so on) but, in the absence of any other means to restore a particular object or set of data, such as a tool that supports object-level restore (more on this a little later) it could be a necessity. In order to simplify our point-in-time restore, let's run another full backup, as shown in Listing EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE2.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, THREADCOUNT = 2, COMPRESSION = 1"' Listing 8-15: A new base full backup of DatabaseForSQLBackups. We'll then add some more data to each of our message tables, before simulating a disaster, in the form of someone accidentally dropping MessageTable2. USE [DatabaseForSQLBackups] INSERT INTO dbo.messagetable1 VALUES ( 'MessageTable1, I think the answer might be 41. No, wait...', GETDATE() ) 281 Chapter 8: Database Backup and Restore with SQL Backup Pro /* Find date of final INSERT from previous statement This is to help us with the RESTORE process USE DatabaseForSQLBackups SELECT TOP(1) MessageData,MessageDate FROM dbo.messagetable1 WHERE MessageData LIKE 'MessageTable1%' ORDER BY MessageDate DESC -- Output: :41: */ INSERT INTO dbo.messagetable2 VALUES ( 'MessageTable2, the true answer is 42!', GETDATE() ) final insert time: :42: Disaster strikes! DROP TABLE dbo.messagetable2 Listing 8-16: Disaster strikes MessageTable2. In this simple example, we have the luxury of knowing exactly when each event occurred. However, imagine this is a busy production database, and we only find out about the accidental table loss many hours later. Listing 8-17 simulates one of our regular, scheduled log backups, which runs after the data loss has occurred. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 1, THREADCOUNT = 2"' Listing 8-17: A post-disaster log backup. 281 282 Chapter 8: Database Backup and Restore with SQL Backup Pro Restore to standby using the SQL Backup Pro GUI Having contrived the data loss, we can now start the restore process. Right-click on DatabaseForSQLBackups, pick the latest transaction log backup (DatabaseForSQL- Backups_Log_3.sqb) and click Next. Again, since the files are still in their original location, SQL Backup will locate any other files it needs from further back down the chain, in this case, just the latest full backup. If you've moved the full backup file, locate it manually, as described before. Figure 8-23: Identifying the backup files for our PIT restore. Our intention here, as discussed, is to restore a copy of the DatabaseForSQLBackups database in Standby mode. This will give us read access to the standby copy as we attempt to roll forward to just before the point where we lost Messagetable2. So, this time, we'll restore to a new database, called DatabaseForSQLBackups_Standby, as shown in Figure 283 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-24: Restoring to a new, standby copy of the DatabaseForSQLBackups database. At Step 3, we're going to choose a new option for the completion state of our restored database, which is Read-only (RESTORE WITH STANDBY). In doing so, we must create an undo file for the standby database. As we subsequently apply transaction log backups to our standby database, to roll forward in time, SQL Server needs to be able to roll back the effects of any transactions that were uncommitted at the point in time to which we are restoring. However, the effects of these uncommitted transactions must be preserved. As we roll further forward in time, SQL Server may need to reapply the effects of a transaction it previously rolled back. If SQL Server doesn't keep a record of that activity, we wouldn't be able to keep our database relationally sound. All of this information regarding the rolled back transactions is managed through the undo file. We'll place the undo file in our usual SQLBackups directory. In the central portion of the screen, we have the option to restore the transaction log to a specific point in time; we're going to roll forward in stages, first to a point as close as we can after 10:41:36.540, which should be the time we completed the batch of 41 INSERTs into MessageTable1. Again, remember that in a real restore scenario, you will probably not know which statements were run when. 283 284 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-25: Restoring WITH STANDBY to a specific point in a transaction log. Click Next to reach the Summary screen, where we can also take a quick preview of the script that's been generated (we'll discuss this in more detail shortly). Click Finish to execute the restore operation and it should complete quickly and successfully, with the usual metrics output. Refresh the SSMS Object Explorer to reveal a database called DatabaseForSQLBackups_Standby, which is designated as being in a Standby/ Read-Only state. We can query it to see if we restored to the point we intended. USE DatabaseForSQLBackups_Standby SELECT MessageData, COUNT(MessageData) FROM dbo.messagetable1 GROUP BY MessageData 284 285 Chapter 8: Database Backup and Restore with SQL Backup Pro SELECT MessageData, COUNT(MessageData) FROM dbo.messagetable2 GROUP BY MessageData Listing 8-18: Querying the standby database. As we hoped, we've got both tables back, and we've restored the 41 rows in MessageTable1, but not the 42 in MessageTable2. Figure 8-26: Verifying our data, Part 1. To get these 42 rows back, so the table is back to the state it was when dropped, we'll need to roll forward a little further, but stop just before the DROP TABLE command was issued. Start another restore operation on DatabaseForSQLBackups, and proceed as before to Step 2. This time, we want to overwrite the current DatabaseForSQLBackups_ Standby database, so select it from the drop-down box. 285 286 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-27: Overwriting the current standby database. At Step 3, we'll specify another standby restore, using the same undo file, and this time we'll roll forward to just after we completed the load of 42 rows into Messagetable2, but just before that table got dropped (i.e. as close as we can to 10:42:45.897). 286 287 Chapter 8: Database Backup and Restore with SQL Backup Pro Figure 8-28: Rolling further forward. Once again, the operation should complete successfully, with metrics similar to those shown in Figure Restoring DatabaseForSQLBackups_Standby (database) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE2.sqb Processed pages for database 'DatabaseForSQLBackups_Standby', file 'DatabaseForSQLBackups' on file 1. Processed 3 pages for database 'DatabaseForSQLBackups_Standby', file 'DatabaseForSQLBackups_ log' on file 1. RESTORE DATABASE successfully processed pages in seconds ( MB/sec). SQL Backup process ended. Restoring DatabaseForSQLBackups_Standby (transaction logs) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_3.sqb Processed 0 pages for database 'DatabaseForSQLBackups_Standby', file 'DatabaseForSQLBackups' on file 1. Processed 244 pages for database 'DatabaseForSQLBackups_Standby', file 'DatabaseForSQLBackups_ log' on file 1. RESTORE LOG successfully processed 244 pages in seconds ( MB/sec). No orphaned users detected. SQL Backup process ended. Figure 8-29: Output metrics for the PIT restore. 287 288 Chapter 8: Database Backup and Restore with SQL Backup Pro Rerun our data verification query, from Listing 8-18 and you should see that we now have the 42 rows restored to MessageTable2. Restore to standby using a SQL Backup script Before we move on, it's worth taking a more detailed look at the SQL Backup T-SQL script that we'd use to perform the same point-in-time restore operation. EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups_Standby] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE2.sqb'' WITH NORECOVERY, MOVE ''DatabaseForSQLBackups'' TO ''C:\SQLData\DatabaseForSQLBackups_Standby.mdf'', MOVE ''DatabaseForSQLBackups_log'' TO ''C:\SQLData\DatabaseForSQLBackups_Standby_log.ldf''"' EXECUTE master..sqlbackup '-SQL "RESTORE LOG [DatabaseForSQLBackups_Standby] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Log_3.sqb'' WITH STANDBY = ''C:\SQLBackups\Undo_DatabaseForSQLBackups_Standby.dat'', STOPAT = '' T10:42:46'', ORPHAN_CHECK"' Listing 8-19: Restore to standby SQL Backup script. The first command restores the base backup file to a new database, using the MOVE argument to copy the existing data and log files to the newly designated files. We specify NORECOVERY so that the database remains in a restoring state, to receive further backup files. The second command applies the log backup file to this new database. Notice the use of the WITH STANDBY clause, which indicates the restored state of the new database, and associates it with the correct undo file. Also, we use the STOPAT clause, with which you should be familiar from Chapter 4, to specify the exact point in time to which we wish to roll forward. Any transactions that were uncommitted at the time will be rolled back during the restore. 288 289 Chapter 8: Database Backup and Restore with SQL Backup Pro This is the first of our restore operations that didn't end with the RECOVERY keyword. The STANDBY is one of three ways (RECOVERY, NORECOVERY, STANDBY) to finalize a restore, and one of the two ways to finalize a restore and leave the data in an accessible state. It's important to know which finalization technique to use in which situations, and to remember they don't all do the same thing. Alternatives to restore with standby In a real-world restore, our next step, which we have not yet tackled here, would be to transfer the lost table back into the production database (DatabaseForSQLBackups). This, however, is not always an easy thing to do. If we do have to bring back data this way, we may run into referential integrity issues with data in related tables. If the data in other tables contains references to the lost table, but the database doesn't have the proper constraints in place, then we could have a bit of a mess to clean up when we import that data back into the production database. Also, one of the problems with this restore to standby approach is that you might also find yourself in a position where you have a VLDB that would require a great deal of time and space to restore just to get back one table, as in our example. In this type of situation, if your VLDB wasn't designed with multiple data files and filegroups, you might be able turn to an object-level restore solution. These tools will restore, from a database backup, just a single table or other object without having to restore the entire database. Restore metrics: native vs. SQL Backup Pro In order to get some comparison between the performances of native restores versus restores from compressed backups, via SQL Backup, we'll first simply gather metrics for a SQL backup restore of our latest base backup of DatabaseForSQBackups (_Full_ BASE2.sqb), shown in Listing 290 Chapter 8: Database Backup and Restore with SQL Backup Pro USE master go --Restore the base full backup EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE2.sqb'' WITH RECOVERY, DISCONNECT_EXISTING, REPLACE"' Restoring DatabaseForSQLBackups (database) from: C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_BASE2). Listing 8-20: Metrics for SQL restore of full database backup. Then, we'll take a native, compressed full backup of the newly restored database, and then a native restore from that backup. USE [master] BACKUP DATABASE DatabaseForSQLBackups TO DISK = N'C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_Native.bak' WITH COMPRESSION, INIT, NAME = N'DatabaseForSQLBackups-Full Database Backup' RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = N'C:\SQLBackups\Chapter8\DatabaseForSQLBackups_Full_Native.bak' WITH FILE = 1, STATS = 25, REPLACE 25 percent processed. 50 percent processed. 75 percent processed. 100 percent processed.). Listing 8-21: Code and metrics for native restore of full database backup. 290 291 Chapter 8: Database Backup and Restore with SQL Backup Pro As a final test, we can rerun Listing 8-21, but performing a native, non-compressed backup, and then restoring from that. In my tests, the restore times for native compressed and SQL Backup compressed backups were roughly comparable, with the native compressed restores performing slightly faster. Native non-compressed restores were somewhat slower, running in around 21 seconds in my tests. Verifying Backups As discussed in Chapter 2, the only truly reliable way of ensuring that your various backup files really can be used to restore a database is to perform regular test restores. However, there are a few other things you can do to minimize the risk that, for some reason, one of your backups will be unusable. SQL Backup backups, like native backups can, to some extent, be checked for validity using both BACKUP...WITH CHECKSUM and RESTORE VERIFYONLY. If both options are configured for the backup process (see Figure 8-5b) then SQL Backup will verify that the backup is complete and readable and then recalculate the checksum on the data pages contained in the backup file and compare it against the checksum values generated during the backup. Listing 8-22 shows the script. EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''D:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL.sqb'' WITH CHECKSUM, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, THREADCOUNT = 2, VERIFY"' Listing 8-22: BACKUP WITH CHECKSUM and RESTORE VERIFYONLY with SQL Backup. Alternatively, we can run either of the validity checks separately. As discussed in Chapter 2, BACKUP WITH CHECKSUM verifies only that each page of data written to the backup file is error free in relation to how it was read from disk. It does not validate that the backup data is valid, only that what was being written, was written correctly. It can cause a lot of overhead and slow down backup operations significantly, so evaluate its use carefully, based on available CPU capacity. 291 292 Chapter 8: Database Backup and Restore with SQL Backup Pro Nevertheless, these validity checks do provide some degree of reassurance, without the need to perform full test restores. Remember that we also need to be performing DBCC CHECKDB routines on our databases at least weekly, to make sure they are in good health and that our backups will be restorable. There are two ways to do this: we can run DBCC CHECKDB before the backup, as a T-SQL statement, in front of the extended stored procedure that calls SQL Backup Pro or, with version 7 of the tool, we can also enable the integrity check via the Schedule Restore Jobs wizard, as shown in Figure Figure 8-30: Configuring DBCC CHECKDB options as part of a restore job. Backup Optimization We discussed many ways to optimize backup storage and scheduling back in Chapter 2, so here, we'll focus just on a few optimization features that are supported by SQL Backup. The first is the ability to back up to multiple backup files on multiple devices. This is one of the best ways to increase throughput in backup operations, since we can write to several backup files simultaneously. This only applies when each disk is physically separate hardware; if we have one physical disk that is partitioned into two or more logical drives, we will not see a performance increase since the backup data can only be written to one of those logical drives at a time. 292 293 Chapter 8: Database Backup and Restore with SQL Backup Pro Listing 8-23 shows the SQL Backup command to back up a database to multiple backup files, on separate disks. The listing will also show how to restore from these multiple files, which requires the addition of extra file locations. EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_1.sqb'', DISK = ''D\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_2.sqb'', WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3"' EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForSQLBackups] FROM DISK = ''C:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_1.sqb'', DISK = ''D:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_2.sqb'', WITH RECOVERY"' Listing 8-23: Backing up a database to multiple files with SQL Backup. This is useful not only for backup throughput/performance; it is also a useful way to cut down on the size of a single backup, for transfer to other systems. Nothing is more infuriating than trying to copy a huge file to another system only to see it fail after 90% of the copy is complete. With this technique, we can break down that large file and copy the pieces separately. Note that backup to multiple files is also supported in native T-SQL and for native backups, as shown in Listing BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = 'C:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_Native_1.bak', DISK = 'C:\SQLBackups\Chapter8\DatabaseForSQLBackups_FULL_Native_2.bak' Listing 8-24: Native backup to multiple files. Having covered how to split backups to multiple locations, let's now see how to back up a single file, but have it stored in multiple locations, as shown in Listing This is useful when we want to back up to a separate location, such as a network share, when taking the original backup. This is only an integrated option when using the SQL Backup tool. 293 294 Chapter 8: Database Backup and Restore with SQL Backup Pro EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForSQLBackups] TO DISK = ''C:\SQLBackups\Chatper8\DatabaseForSQLBackups_FULL.sqb'' WITH COMPRESSION = 3, COPYTO = ''\\NETWORKMACHINE\SHARENAME\'', THREADCOUNT = 2"' Listing 8-25: Back up a database with a copy to a separate location. This will cause the backup process to run a bit longer, since it has to back up the database as well as copy it to another location. Just remember that network latency can be a major time factor in the completion of the backups, when using this option. Ultimately, to get the most performance out of a backup, we will need to tune each backup routine to match the specific environment. This requires testing the disk subsystem, the throughput of SQL Server, and making adjustments in the backup process to get the best backup performance. There is a useful listing on the Red Gate support site that offers some tips on how to do this using just a few extra parameters in the SQL Backup stored procedure: SBU_OptimizingBackup. Summary All of the same principles that we have discussed when using native SQL Server backup procedures apply when using Red Gate SQL Backup Pro. We want to follow the same best practices, and we can implement the same type of backup strategies if we are using SQL Backup in our environment. Red Gate SQL Backup Pro is not a requirement for our backup strategies, but it is a great tool that can save substantial amounts of time and disk space. Always remember to use the right tool for the right job. 294 295 Chapter 9: File and Filegroup Backup and Restore So far in this book, all of the full and differential backups we've taken have been database backups; in other words our full and differential backups have been capturing the contents of all data files in all filegroups. However, it's also possible to perform these types of backups on individual files or filegroups. Likewise, it's also possible in some cases, and assuming you have a complete set of accompanying transaction log backups, to restore just a subset of a database, such as an individual file or filegroup, rather than the entire database. I'll state upfront that, in my experience, it is relatively rare for a DBA to have too many databases that are subject to either file backups or filed-based restores. Even databases straying into the hundreds of gigabytes range, which may take a few hours to back up, will be managed, generally, using the usual mix of full database backups, supplemented by differential database backups (and, of course, log backups). Likewise, these databases will be restored, as necessary, using the normal database restore techniques we've discussed in previous chapters. However, it's when we start managing databases that run into the terabyte range that we start getting into trouble using these standard database backup and restore techniques, since it may no longer be possible to either back up the database in the required time, or restore the whole database within the limits of acceptable down-time. In such cases, filebased backup and restore becomes a necessity, and in this chapter, we'll discuss: potential advantages of file-based backup and restore common file architectures to support file-based backup and restore performing full and differential file backups using native SSMS and T-SQL, as well as SQL Backup 295 296 Chapter 9: File and Filegroup Backup and Restore performing several different types of file-based restore, namely: complete restores restore right to the point of failure point-in-time restores restore to a specific point in a transaction log backup using the STOPAT parameter restoring just a single data file recovering the database as a whole, by restoring just a single "failed" secondary data file online piecemeal restore, with partial database recovery bringing a database back online quickly after a failure, by restoring just the primary data file, followed later by the other data files. This operation requires SQL Server Enterprise (or Developer) Edition. Advantages of File Backup and Restore In order to exploit the potential benefits of file-based backup and restore, we need to have a database where the filegroup architecture is such that data is split down intelligently across multiple filegroups. We'll discuss some ways to achieve this in the next section but, assuming for now that this is the case, the benefits below can be gained. Easier VLDB backup administration For very large databases (i.e. in the terabyte range) it is often not possible to run a nightly database backup, simply due to time constraints and the fact that such backups have a higher risk of failure because of the very long processing times. In such cases, file backups become a necessity. Restoring a subset of the data files Regardless of whether we take database or file backups, it's possible in some cases to recover a database by restoring only a subset of that database, say, a single filegroup (though it's also possible to restore just a single page), rather than the whole database. Online piecemeal restores In the Enterprise Edition of SQL Server 2005, and later, we can make a database "partially available" by restoring only the PRIMARY filegroup and bringing the database back online, then later restoring other secondary filegroups. 296 297 Chapter 9: File and Filegroup Backup and Restore Improved disk I/O performance Achieved by separating different files and filegroups onto separate disk drives. This assumes that the SAN, DAS, or local storage is set up to take advantage of the files being on separate physical spindles, or SSDs, as opposed to separate logical disks. Piecemeal restores can be a massive advantage for any database, and may be a necessity for VLDBs, where the time taken for a full database restore would fall outside the down-time stipulated in the SLA. In terms of disk I/O performance, it's possible to gain performance advantages by creating multiple data files within a filegroup, and placing each file on a separate drive and, in some case, by separating specific tables and indexes into a filegroup, again on a dedicated drive. It's even possible to partition a single object across multiple filegroups (a topic we won't delve into further in this book, but see library/ms aspx for a general introduction). In general, I would caution against going overboard with the idea of trying to optimize disk I/O by manual placement of files and filegroups on different disk spindles, unless it is a proven necessity from a performance or storage perspective. It's a complex process that requires a lot of prior planning and ongoing maintenance, as data grows. Instead, I think there is much to be said for keeping file architecture as simple as is appropriate for a given database, and then letting your SAN or direct-attached RAID array take care of the disk I/O optimization. If a specific database requires it, then by all means work with the SAN administrators to optimize file and disk placement, but don't feel the need to do this on every database in your environment. As always, it is best to test and see what overhead this would place on the maintenance and administration of the server as opposed to the potential benefits which it provides. With that in mind, let's take a closer, albeit still relatively brief, look at possible filegroup architectures. 297 298 Chapter 9: File and Filegroup Backup and Restore Common Filegroup Architectures A filegroup is simply a logical collection of one or more database files. So far in this book, our database creation statements have been very straightforward, and similar in layout to the one shown in Listing 9-1. CREATE DATABASE [FileBackupsTest] ON PRIMARY ( NAME = N'FileBackupsTest', FILENAME = N'C:\SQLData\FileBackupsTest.mdf' ) LOG ON ( NAME = N'FileBackupsTest_log', FILENAME = N'C:\SQLData\FileBackupsTest_log.ldf' ) Listing 9-1: A simple database file architecture. In other words, a single data file in the PRIMARY filegroup, plus a log file (remember that log files are entirely separate from data files; log files are never members of a filegroup). However, as discussed in Chapter 1, it's common to create more than one data file per filegroup, as shown in Listing 9-2. CREATE DATABASE [FileBackupsTest] ON PRIMARY ( NAME = N'FileBackupsTest', FILENAME = N'C:\SQLData\FileBackupsTest.mdf'), ( NAME = N'FileBackupsTest2', FILENAME = N'D:\SQLData\FileBackupsTest2.ndf') LOG ON ( NAME = N'FileBackupsTest_log', FILENAME = N'E:\SQLData\FileBackupsTest_log.ldf' ) Listing 9-2: Two data files in the PRIMARY filegroup. 298 299 Chapter 9: File and Filegroup Backup and Restore Now we have two data files in the PRIMARY filegroup, plus the log file. SQL Server will utilize all the data files in a given database on a "proportionate fill" basis, making sure that each data file is used equally, in a round-robin fashion. We can also back up each of those files separately, if we wish. We can place each data file on a separate spindle to increase disk I/O performance. However, we have no control over exactly which data gets placed where, so we may end up with most of the data that is very regularly updated written to one file and most of the data that is rarely touched in the second. We'd have one disk working to peak capacity while the other sat largely idle, and we wouldn't achieve the desired performance benefit. The next step is to exert some control over exactly what data gets stored where, and this means creating some secondary filegroups, and dictating which objects store their data where. Take a look at Listing 9-3; in it we create the usual PRIMARY filegroup, holding our mdf file, but also a user-defined filegroup called SECONDARY, in which we create three secondary data files. CREATE DATABASE [FileBackupsTest] ON PRIMARY ( NAME = N'FileBackupsTest', FILENAME = N'E:\SQLData\FileBackupsTest.mdf', SIZE = 51200KB, FILEGROWTH = 10240KB ), FILEGROUP [Secondary] ( NAME = N'FileBackupsTestUserData1', FILENAME = N'G:\SQLData\FileBackupsTestUserData1.ndf', SIZE = KB, FILEGROWTH = KB ), ( NAME = N'FileBackupsTestUserData2', FILENAME = N'H:\SQLData\FileBackupsTestUserData2.ndf', SIZE = KB, FILEGROWTH = KB ), ( NAME = N'FileBackupsTestUserData3', FILENAME = N'I:\SQLData\FileBackupsTestUserData3.ndf', SIZE = KB, FILEGROWTH = KB ) LOG ON ( NAME = N'FileBackupsTest_log', FILENAME = N'F:\SQLData\FileBackupsTest_log.ldf', SIZE = KB, FILEGROWTH = KB ) USE [FileBackupsTest] 299 300 Chapter 9: File and Filegroup Backup and Restore IF NOT EXISTS ( SELECT name FROM sys.filegroups WHERE is_default = 1 AND name = N'Secondary' ) ALTER DATABASE [FileBackupsTest] MODIFY FILEGROUP [Secondary] DEFAULT Listing 9-3: A template for creating a multi-filegroup database. Crucially, we can now dictate, to a greater or less degree, what data gets put in which filegroup. In this example, immediately after creating the database, we have stipulated the SECONDARY filegroup, rather than the PRIMARY filegroup, as the default filegroup for this database. This means that our PRIMARY filegroup will hold only our system objects and data (plus pointers to the secondary data files). By default, any user objects and data will now be inserted into one of the data files in the SECONDARY filegroup, unless this was overridden by specifying a different target filegroup when the object was created. Again, the fact that we have multiple data files means that we can back each file up separately, if the entire database can't be backed up in the allotted nightly window. There are many different ways in which filegroups can be used to dictate, at the file level, where certain objects and data are stored. In this example, we've simply decided that the PRIMARY is for system data, and SECONDARY is for user data, but we can take this further. We might decide to store system data in PRIMARY, plus any other data necessary for the functioning of a customer-facing sales website. The actual sales and order data might be in a separate, dedicated filegroup. This architecture might be especially beneficial when running an Enterprise Edition SQL Server, where we can perform online piecemeal restores. In this case, we could restore the PRIMARY first, and get the database back online and the website back up. Meanwhile we can get to work restoring the other, secondary filegroups. We might also split tables logically into different filegroups, for example: separating rarely-used archive data from current data (e.g. current year's sales data in one filegroup, archive data in others) 300 301 Chapter 9: File and Filegroup Backup and Restore separating out read-only data from read-write data separating out critical reporting data. Another scheme that you may encounter is use of filegroups to separate the non-clustered indexes from the indexed data, although this seems to be a declining practice in cases where online index maintenance is available, with Enterprise Editions of SQL Server, and due to SAN disk systems becoming faster. Remember that the clustered index data is always in the same filegroup as the base table. We can also target specific filegroups at specific types of storage, putting the most frequently used and/or most critical data on faster media, while avoiding eating up highspeed disk space with data that is rarely used. For example, we might use SSDs for critical report data, a slower SAN-attached drive for archive data, and so on. All of these schemes may or may not represent valid uses of filegroups in your environment, but almost all of them will add complexity to your architecture, and to your backup and restore process, assuming you employ file-based backup and restore. As discussed earlier, I only recommend you go down this route if the need is proven, for example for VLDBs where the need is dictated by backup and or restore requirements. For databases of a manageable size, we can continue to use database backups and so gain the benefit of using multiple files/filegroups without the backup and restore complexity. Of course, one possibility is that a database, originally designed with very simple file architecture, grows to the point that it is no longer manageable in this configuration. What is to be done then? Changing the file architecture for a database requires very careful planning, both with regard to immediate changes to the file structure and how this will evolve to accommodate future growth. For the initial redesign, we'll need to consider questions such as the number of filegroups required, how the data is going to be separated out across those filegroups, how many data files are required, where they are going to be stored on disk, and so on. Having done this, we're then faced with the task of planning how to move all the data, and how much time is available each night to get the job done. 301 302 Chapter 9: File and Filegroup Backup and Restore These are all questions we need to take seriously and plan carefully, with the help of our most experienced DBAs; getting this right the first time will save some huge headaches later. Let's consider a simple example, where we need to re-architect the file structure for a database which currently stores all data in a single data file in PRIMARY. We have decided to create an additional secondary filegroup, named UserDataFilegroup, which contains three physical data files, each of which will be backed up during the nightly backup window. This secondary filegroup will become the default filegroup for the database, and the plan is that from now on only system objects and data will be stored in the PRIMARY data file. How are we going to get the data stored in the primary file into this new filegroup? It depends on the table index design, but ideally each table in the database will have a clustered index, in which case the easiest way to move the data is to re-create the clustered index while moving the data currently in the leaf level of that index over to the new filegroup. The code would look something like that shown in Listing 9-4. In Enterprise editions of SQL Server, we can set the ONLINE parameter to ON, so that the index will be moved but still be available. When using Standard edition go ahead and switch this to OFF. CREATE CLUSTERED INDEX [IndexName] ON [dbo].[tablename] ([ColumnName] ASC) WITH (DROP_EXISTING = ON, ONLINE = ON) ON [UserDataFileGroup] Listing 9-4: Rebuilding an index in a new filegroup. If the database doesn't have any clustered indexes, then this was a poor design choice; it should! We can create one for each table, on the most appropriate column or columns, using code similar to Listing 9-4 (omitting the DROP_EXISTING clause, though it won't hurt to include it). Once the clustered index is built, the table will be moved, along with the new filegroup. 302 303 Chapter 9: File and Filegroup Backup and Restore If this new index is not actually required, we can go ahead and drop it, as shown in Listing 9-5, but ideally we'd work hard to create a useful index instead that we want to keep. DROP INDEX [IndexName] ON [dbo].[tablename] WITH ( ONLINE = ON ) Listing 9-5: Dropping the newly created index. Keep in mind that these processes will move the data and clustered indexes over to the new filegroup, but not the non-clustered, or other, indexes. We will still need to move these over manually. Many scripts can be found online that will interrogate the system tables, find all of the non-clustered indexes and move them. Remember, also, that the process of moving indexes and data to a different physical file or set of files can be long, and disk I/O intensive. Plan out time each night, over a certain period, to get everything moved with as little impact to production as possible. This is also not a task to be taken lightly, and it should be planned out with the senior database administration team. File Backup When a database creeps up in size towards the high hundreds of gigabytes, or into the terabyte realm, then database backups start to become problematic. A full database backup of a database of this size could take over half of a day, or even longer, and still be running long into the business day, putting undue strain on the disk and CPU and causing performance issues for end-users. Also, most DBAs have experienced the anguish of seeing such a backup fail at about 80% completion, knowing that starting it over will eat up another 12 hours. 303 304 Chapter 9: File and Filegroup Backup and Restore Hopefully, as discussed in the previous sections, this database has been architected such that the data is spread across multiple data files, in several filegroups, so that we can still back up the whole database bit by bit, by taking a series of file backups, scheduled on separate days. While this is the most common reason for file backups, there are other valid reasons too, as we have discussed; for example if one filegroup is read-only, or modified very rarely, while another holds big tables, subject to frequent modifications, then the latter may be on a different and more frequent backup schedule. A file backup is simply a backup of a single data file, subset of data files or an entire filegroup. Each of the file backups contains only the data from the files or filegroups that we have chosen to be included in that particular backup file. The combination of all of the file backups, along with all log backups taken over the same period of time, is the equivalent of a full database backup. Depending on the size of the database, the number of files, and the backup schedule, this can constitute quite a large number of backups. We can capture both full file backups, capturing the entire contents of the designated file or filegroup, and differential file backups, capturing only the pages in that file or filegroup that have been modified since the last full file backup (there are also partial and partial differential file backups, but we'll get to those in Chapter 10). Is there a difference between file and filegroup backups? The short answer is no. When we take a filegroup backup we are simply specifying that the backup file should contain all of the data files in that filegroup. It is no different than if we took a file backup and explicitly referenced each data file in that group. They are the exact same backup and have no differences. This is why you may hear the term file backup used instead of filegroup backup. We will use the term file backup for the rest of this chapter. Of course, the effectiveness of file backups depends on these large databases being designed so that there is, as best as can be achieved, a distribution of data across the data files and filegroups such that the file backups are manageable and can complete in the required time frame. For example, if we have a database of 900 GB, split across three 304 305 Chapter 9: File and Filegroup Backup and Restore file groups, then ideally each filegroup will be a more manageable portion of the total size. Depending on which tables are stored where, this ideal data distribution may not be possible, but if one of those groups is 800 GB, then we might as well just take full database backups. For reasons that we'll discuss in more detail in relation to file restores, it's essential, when adopting a backup strategy based on file backups, to also take transaction log backups. It's not possible to perform file-based restores unless SQL Server has access to the full set of accompanying transaction logs. The file backups can and will be taken at different times, and SQL Server needs the subsequent transaction log backups in order to guarantee its ability to roll forward each individual file backup to the required point, and so restore the database, as a whole, to a consistent state. File backups and read-only filegroups The only time we don't have to apply a subsequent transaction log backup when restoring a file backup, is when SQL Server knows for a fact that the data file could not have been modified since the backup was taken, because the backup was of a filegroup explicitly designated as READ_ONLY). We'll cover this in more detail in Chapter 10, Partial Backup and Restore. So, for example, if we take a weekly full file backup of a particular file or filegroup, then in the event of a failure of that file, we'd potentially need to restore the file backup plus a week's worth of log files, to get our database back online in a consistent state. As such, it often makes sense to supplement occasional full file backups with more frequent differential file backups. In the same way as differential database backups, these differential file backups can dramatically reduce the number of log files that need processing in a recovery situation. In coming sections, we'll first demonstrate how to take file backups using both the SSMS GUI and native T-SQL scripts. We'll take full file backups via the SSMS GUI, and then differential file backups using T-SQL scripts. We'll then demonstrate how to perform the same actions using the Red Gate SQL Backup tool. 305 306 Chapter 9: File and Filegroup Backup and Restore Note that it is recommended, where possible, to take a full database backup and start the log backups, before taking the first file backup (see: en-us/library/ms aspx). We'll discuss this in more detail shortly, but note that, in order to focus purely on the logistics of file backups, we don't follow this advice in our examples. Preparing for file backups Before we get started taking file backups, we need to do the usual preparatory work, namely choosing an appropriate recovery model for our database, and then creating that database along with some populated sample tables. Since we've been through this process many times now, I'll only comment on those parts of the scripts that are substantially different from what has gone before. Please refer back to Chapters 3 and 5 if you need further explanation of any other aspects of these scripts. Recovery model Since we've established the need to take log backups, we will need to operate the database in FULL recovery model. We can also take log backups in the BULK_LOGGED model but, as discussed in Chapter 1, this model is only suitable for short-term use during bulk operations. For the long-term operation of databases requiring file backups, we should be using the FULL recovery model. Sample database and tables plus initial data load Listing 9-6 shows the script to create a database with both a PRIMARY and a SECONDARY filegroup, and one data file in each filegroup. Again, note that I've used the same drive for each filegroup and the log file, purely as a convenience for this demo; in reality, they would be on three different drives, as demonstrated previously in Listing 307 Chapter 9: File and Filegroup Backup and Restore USE [master] CREATE DATABASE [DatabaseForFileBackups] ON PRIMARY ( NAME = N'DatabaseForFileBackups', FILENAME = N'C:\SQLData\DatabaseForFileBackups.mdf', SIZE = 10240KB, FILEGROWTH = 10240KB ), FILEGROUP [SECONDARY] ( NAME = N'DatabaseForFileBackups_Data2', FILENAME = N'C:\SQLData\DatabaseForFileBackups_Data2.ndf', SIZE = 10240KB, FILEGROWTH = 10240KB ) LOG ON ( NAME = N'DatabaseForFileBackups_log', FILENAME = N'C:\SQLData\DatabaseForFileBackups_log.ldf', SIZE = 10240KB, FILEGROWTH = 10240KB ) ALTER DATABASE [DatabaseForFileBackups] SET RECOVERY FULL Listing 9-6: Multiple data file database creation script. The big difference between this database creation script, and any that have gone before, is that we're creating two data files: a primary (mdf) data file called DatabaseForFile- Backups, in the PRIMARY filegroup, and a secondary (ndf) data file called Database- ForFileBackups_Data2 in a user-defined filegroup called SECONDARY. This name is OK here, since we will be storing generic data in the second filegroup, but if the filegroup was designed to store a particular type of data then it should be named appropriately to reflect that. For example, if creating a secondary filegroup that will group together files used to store configuration information for an application, we could name it CONFIGURATION. Listing 9-7 creates two sample tables in our DatabaseForFileBackups database, with Table_DF1 stored in the PRIMARY filegroup and Table_DF2 stored in the SECONDARY filegroup. We then load a single initial row into each table. 307 308 Chapter 9: File and Filegroup Backup and Restore USE [DatabaseForFileBackups] CREATE TABLE dbo.table_df1 ( Message NVARCHAR(50) NOT NULL ) ON [PRIMARY] CREATE TABLE dbo.table_df2 ( Message NVARCHAR(50) NOT NULL ) ON [SECONDARY] INSERT INTO Table_DF1 VALUES ( 'This is the initial data load for the table' ) INSERT INTO Table_DF2 VALUES ( 'This is the initial data load for the table' ) Listing 9-7: Table creation script and initial data load for file backup configuration. Notice that we specify the filegroup for each table as part of the table creation statement, via the ON keyword. SQL Server will create a table on whichever of the available filegroups is marked as the default group. Unless specified otherwise, the default group will be the PRIMARY filegroup. Therefore, the ON PRIMARY clause, for the first table, is optional, but the ON SECONDARY clause is required. In previous chapters, we've used substantial data loads in order to capture meaningful metrics for backup time and file size. Here, we'll not be gathering these metrics, but rather focusing on the complexities of the backup (and restore) processes, so we're keeping row counts very low. 308 309 Chapter 9: File and Filegroup Backup and Restore SSMS native full file backups We're going to perform full file backups of each of our data files for the Database- ForFileBackups database, via the SSMS GUI. So, open up SSMS, connect to your test server, and bring up the Back Up Database configuration screen. In the Backup component section of the screen, select the Files and filegroups radio button and it will instantly bring up a Select Files and Filegroups window. We want to back up all the files (in this case only one file) in the PRIMARY filegroup, so tick the PRIMARY box, so the screen looks as shown in Figure 9-1, and click OK. Figure 9-1: Selecting the files to include in our file backup operation. 309 310 Chapter 9: File and Filegroup Backup and Restore Following the convention used throughout the book, we're going to store the backup files in C:\SQLBackups\Chapter9, so go ahead and create that subfolder on your database server, and then, in the Backup wizard, Remove the default backup destination, click Add, locate the Chapter9 folder and call the backup file DatabaseForFileBackups_ FG1_Full.bak. Once back on the main configuration page, double-check that everything on the screen looks as expected and if so, we have no further work to do, so we can click OK to start the file backup operation. It should complete in the blink of an eye, and our first file/filegroup backup is complete! We aren't done yet, however. Repeat the whole file backup process exactly as described previously but, this time, pick the SECONDARY filegroup in the Select Files and Filegroups window and, when setting the backup file destination, call the backup file Database- ForFileBackups_FG2_Full.bak. Having done this, check the Chapter9 folder and you should find your two backup files, ready for use later in the chapter. We've completed our file backups, but we're still not quite done here. In order to be able to restore a database from its component file backups, we need to be able to apply transaction log backups so that SQL Server can confirm that it is restoring the database to a consistent state. So, we are going to take one quick log backup file. Go back into the Back Up Database screen a third time, select Transaction Log as the Backup type, and set the backup file destination as C:\SQLBackups\Chapter9\DatabaseForFileBackups_ TLOG.trn. Native T-SQL file differential backup We're now going to perform differential file backups of each of our data files for the DatabaseForFileBackups database, using T-SQL scripts. First, let's load another row of sample data into each of our tables, as shown in Listing 311 Chapter 9: File and Filegroup Backup and Restore USE [DatabaseForFileBackups] INSERT INTO Table_DF1 VALUES ( 'This is the second data load for the table' ) INSERT INTO Table_DF2 VALUES ( 'This is the second data load for the table' ) Listing 9-8: Second data load for DatabaseForFileBackups. Without further ado, the script to perform a differential file backup of our primary data file is shown in Listing 9-9. USE [master] BACKUP DATABASE [DatabaseForFileBackups] FILE = N'DatabaseForFileBackups' TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG1_Diff.bak' WITH DIFFERENTIAL, STATS = 10 Listing 9-9: Differential file backup of the primary data file for DatabaseForFileBackups. The only new part of this script is the use of the FILE argument to specify which of the data files to include in the backup. In this case, we've referenced by name our primary data file, which lives in the PRIMARY filegroup. We've also used the DIFFERENTIAL argument to specify a differential backup, as described in Chapter 7. Go ahead and run the script now and you should see output similar that shown in Figure 312 Chapter 9: File and Filegroup Backup and Restore Figure 9-2: File differential backup command results. What we're interested in here are the files that were processed during the execution of this command. You can see that we only get pages processed on the primary data file and the log file. This is exactly what we were expecting, since this is a differential file backup, capturing the changed data in the primary data file, and not a differential database backup. If we had performed a differential (or full) database backup on a database that has multiple data files, then all those files will be processed as part of the BACKUP command and we'd capture all data changed in all of the data files. So, in this case, it would have processed both the data files and the log file. Let's now perform a differential file backup of our secondary data file, as shown in Listing USE [master] BACKUP DATABASE [DatabaseForFileBackups] FILEGROUP = N'SECONDARY' TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG2_Diff.bak' WITH DIFFERENTIAL, STATS = 10 Listing 9-10: Differential filegroup backup for DatabaseForFileBackups. 312 313 Chapter 9: File and Filegroup Backup and Restore This time, the script demonstrates the use of the FILEGROUP argument to take a backup of the SECONDARY filegroup as a whole. Of course, in this case there is only a single data file in this filegroup and so the outcome of this command will be exactly the same as if we had specified FILE = N'DatabaseForFileBackups_Data2' instead. However, if the SECONDARY filegroup had contained more than one data file, then all of these files would have been subject to a differential backup. If you look at the message output, after running the command, you'll see that only the N'DatabaseForFileBackups_Data2 data file and the log file are processed. Now we have a complete set of differential file backups that we will use to restore the database a little later. However, we are not quite done. Since we took the differential backups at different times, there is a possible issue with the consistency of the database, so we still need to take another transaction log backup. In Listing 9-11, we first add one more row each to the two tables and capture a date output (we'll need this later in a point-in-time restore demo) and then take another log backup. USE [master] INSERT INTO DatabaseForFileBackups.dbo.Table_DF1 VALUES ( 'Point-in-time data load for Table_DF1' ) INSERT INTO DatabaseForFileBackups.dbo.Table_DF2 VALUES ( 'Point-in-time data load for Table_DF2' ) SELECT GETDATE() -- note the date value. We will need it later. BACKUP LOG [DatabaseForFileBackups] TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG2.trn' Listing 9-11: Taking our second native transaction log backup. 313 314 Chapter 9: File and Filegroup Backup and Restore SQL Backup file backups Let's take a quick look at how to capture both types of file backup, full and differential, via the Red Gate SQL Backup tool. I assume basic familiarity with the tool, based on the coverage provided in the previous chapter, and so focus only on aspects of the process that are different from what has gone before, with database backups. We're going to take a look at file backups using SQL Backup scripts only, and not via the SQL Backup GUI. There are a couple of reasons for this: the version of SQL Backup (6.4) that was used in this book did not support differential file backups via the GUI assuming that, if you worked through Chapter 8, you are now comfortable using the basic SQL Backup functionality, so you don't need to see both methods. To get started, we'll need a new sample database on which to work. We can simply adapt the database creation script in Listing 9-6 to create a new, identically-structured database, called DatabaseForFileBackup_SB and then use Listing 9-7 to create the tables and insert the initial row. Alternatively, the script to create the database and tables, and insert the initial data, is provided ready-made (DatabaseForFileBackup_SB.sql) in code download for this book, at ShawnMcGehee/SQLServerBackupAndRestore_Code.zip. SQL Backup full file backups Having created the sample database and tables, and loaded some initial data, let's jump straight in and take a look at the script to perform full file backups of both the PRIMARY and SECONDARY data files, as shown in Listing 315 Chapter 9: File and Filegroup Backup and Restore USE [master] --SQL Backup Full file backup of PRIMARY filegroup EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForFileBackups_SB] FILEGROUP = ''PRIMARY'' TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' --SQL Backup Full file backup of secondary data file EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForFileBackups_SB] FILE = ''DatabaseForFileBackups_SB_Data2'' TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Data2.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' Listing 9-12: SQL Backup full file backup of primary and secondary data files. Most of the details of this script, with regard to the SQL Backup parameters that control the compression and resiliency options, have been covered in detail in Chapter 8, so I won't repeat them here. We can see that we are using the FILEGROUP parameter here to perform the backup against all files in our PRIMARY filegroup. Since this filegroup includes just our single primary data file, we could just as well have specified the file explicitly, which is the approach we take when backing up the secondary data file. Having completed the full file backups, we are going to need to take a quick log backup of this database, just as we did with the native backups, in order to ensure we can restore the database to a consistent state, from the component file backups. Go ahead and run Listing 9-13 in a new query window to get a log backup of our DatabaseForFile- Backups_SB test database. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForFileBackups_SB] TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG.sqb''"' Listing 9-13: Taking our log backup via SQL Backup script. 315 316 Chapter 9: File and Filegroup Backup and Restore SQL Backup differential file backups The SQL Backup commands for differential file backups are very similar to those for the full file backups, so we'll not dwell long here. First, we need to insert a new row into each of our sample tables, as shown in Listing USE [DatabaseForFileBackups_SB] INSERT INTO Table_DF1 VALUES ( 'This is the second data load for the table' ) INSERT INTO Table_DF2 VALUES ( 'This is the second data load for the table' ) Listing 9-14: Second data load for DatabaseForFileBackups_SB. Next, Listing 9-15 shows the script to perform the differential file backups for both the primary and secondary data files. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForFileBackups_SB] FILEGROUP = ''PRIMARY'' TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Diff.sqb'' WITH DIFFERENTIAL, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForFileBackups_SB] FILE = ''DatabaseForFileBackups_SB_Data2'' TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Data2_Diff.sqb'' WITH DIFFERENTIAL, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' Listing 9-15: SQL Backup differential file backup of primary and secondary data files. 316 317 Chapter 9: File and Filegroup Backup and Restore The only significant difference in this script compared to the one for the full file backups, apart from the different backup file names, is use of the DIFFERENTIAL argument to denote that the backups should only take into account the changes made to each file since the last full file backup was taken. Take a look at the output for this script, shown in truncated form in Figure 9-3; the first of the two SQL Backup operations processes the primary data file (DatabaseForFile- Backups), and the transaction log, and the second processes the secondary data file (DatabaseForFileBackups_Data2) plus the transaction log. Backing up DatabaseForFileBackups_SB (files/filegroups differential) to: C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Diff.sqb Backup data size : MB Compressed data size: KB Compression rate : 98.33% Processed 56 pages for database 'DatabaseForFileBackups_SB', file 'DatabaseForFileBackups_SB' on file 1. Processed 2 pages for database 'DatabaseForFileBackups_SB', file 'DatabaseForFileBackups_SB_log' on file 1. BACKUP DATABASE...FILE=<name> WITH DIFFERENTIAL successfully processed 58 pages in seconds ( MB/sec). SQL Backup process ended. Figure 9-3: SQL Backup differential file backup results. Having completed the differential file backups, we do need to take one more backup and I think you can guess what it is. Listing 9-16 takes our final transaction log backup of the chapter. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForFileBackups_SB] TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG2.sqb''"' Listing 9-16: Second SQL Backup transaction log backup. 317 318 Chapter 9: File and Filegroup Backup and Restore File Restore In all previous chapters, when we performed a restore operation, we restored the database as a whole, including all the data in all the files and filegroups, from the full database backup, plus any subsequent differential database backups. If we then wished to roll forward the database, we could do so by applying the full chain of transaction log backups. However, it is also possible to restore a database from a set individual file backups; the big difference is that that we can't restore a database just from the latest set of full (plus differential) file backups. We must also apply the full set of accompanying transaction log backups, up to and including the log backup taken after the final file backup in the set. This is the only way SQL Server can guarantee that it can restore the database to a consistent state. Consider, for example, a simple case of a database comprising three data files, each in a separate filegroup and where FG1_1, FG2_1, FG3_1 are full files backups of each separate filegroup, as shown in Figure 9-4. Figure 9-4: A series of full file and transaction log backups. 318 319 Chapter 9: File and Filegroup Backup and Restore Notice that the three file backups are taken at different times. In order to restore this database, using backups shown, we have to restore the FG1_1, FG2_1 and FG3_1 file backups, and then the chain of log backups 1 5. Generally speaking, we need the chain of log files starting directly after the oldest full file backup in the set, and finishing with the one taken directly after the most recent full file backup. Note that even if we are absolutely certain that in Log5 no further transactions were recorded against any of the three filegroups, SQL Server will not trust us on this and requires this log backup file to be processed in order to guarantee that any changes recorded in Log5 that were made to any of the data files, up to the point the FG3_1 backup completed, are represented in the restore, and so the database has transactional consistency. We can also perform point-in-time restores, to a point within the log file taken after all of the current set of file backups; in Figure 9-4, this would be to some point in time within the Log5 backup. If we wished to restore to a point in time within, say, Log4, we'd need to restore the backup for filegroup 3 taken before the one shown in Figure 9-4 (let's call it FG3_0), followed by FG1_1 and FG2_1, and then the chain of logs, starting with the one taken straight after FG3_0 and ending with Log4. This also explains why Microsoft recommends taking an initial full database backup and starting the log backup chain before taking the first full file backup. If we imagine that FG1_1, FG2_1 and FG3_1 file backups were the first-ever full file backups for this database, and that they were taken on Monday, Wednesday and Friday, then we'd have no restore capability in that first week, till the FG3_1 and Log5 backups were completed. It's possible, in some circumstances, to restore a database by restoring only a single file backup (plus required log backups), rather than the whole set of files that comprise the database. This sort of restore is possible as long as you've got a database composed of several data files or filegroups, regardless of whether you're taking database or file backups; as long as you've also got the required set of log backups, it's possible to restore a single file from a database backup. 319 320 Chapter 9: File and Filegroup Backup and Restore The ability to recover a database by restoring only a subset of the database files can be very beneficial. For example, if a single data file for a VLDB goes offline for some reason, we have the ability to restore from file backup just the damaged file, rather than restoring the entire database. With a combination of the file backup, plus the necessary transaction log backups, we can get that missing data file back to the state it was in as close as possible to the time of failure, and much quicker than might be possible if we needed to restore the whole database from scratch! With Enterprise Edition SQL Server, as discussed earlier, we also have the ability to perform online piecemeal restores, where again we start by restoring just a subset of the data files, in this case the primary filegroup, and then immediately bringing the database online having recovered only this subset of the data. As you've probably gathered, restoring a database from file backups, while potentially very beneficial in reducing down-time, can be quite complex and can involve managing and processing a large number of backup files. The easiest way to get a grasp of how the various types of file restore work is by example. Therefore, over the following sections, we'll walk though some examples of how to perform, with file backups, the same restore processes that we've seem previously in the book, namely a complete restore and a pointin-time restore. We'll then take a look at an example each of recovering from a "single data file failure," as well as online piecemeal restore. We're not going to attempt to run through each type of restore in four different ways (SSMS, T-SQL, SQL Backup GUI, SQL Backup T-SQL), as this would simply get tedious. We'll focus on scripted restores using either native T-SQL or SQL Backup T-SQL, and leave the equivalent restores, via GUI methods, as an exercise for the reader. It's worth noting, though, that whereas for database backups the SQL Backup GUI will automatically detect all required backup files (assuming they are still in their original locations), it will not do so for file backups; each required backup file will need to be located manually. 320 321 Chapter 9: File and Filegroup Backup and Restore Performing a complete restore (native T-SQL) We're going to take a look at an example of performing a complete restore of our DatabaseForFileBackups database. Before we start, let's insert a third data load, as shown in Listing 9-17, just so we have one row in each of the tables in the database that isn't yet captured in any of our backup files. USE [DatabaseForFileBackups] INSERT INTO Table_DF1 VALUES ( 'This is the third data load for the table' ) INSERT INTO Table_DF2 VALUES ( 'This is the third data load for the table' ) Listing 9-17: Third data load for DatabaseForFileBackups. Figure 9-5 depicts the current backups we have in place. We have the first data load captured in full file backups, the second data load captured in the differential file backups, and a third data load that is not in any current backup file, but we'll need to capture it in a tail log backup in order to restore the database to its current state. In a case where we were unable to take a final tail log backup we'd only be able to roll forward to the end of the TLOG2 backup. In this example, we are going to take one last backup, just to get our complete database back intact. 321 322 Chapter 9: File and Filegroup Backup and Restore Figure 9-5: Required backups for our complete restore of DatabaseForFileBackups. The first step is to capture that tail log backup, and prepare for the restore process, as shown in Listing USE master --backup the tail BACKUP LOG [DatabaseForFileBackups] TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG_TAIL.trn' WITH NORECOVERY Listing 9-18: Tail log backup. Notice the use of the NORECOVERY option in a backup; this lets SQL Server know that we want to back up the transactions in the log file and immediately place the database into a restoring state. This way, no further transactions can slip past us into the log while we are preparing the RESTORE command. We're now ready to start the restore process. The first step is to restore the two full file backups. We're going to restore over the top of the existing database, as shown in Listing 323 Chapter 9: File and Filegroup Backup and Restore Listing 9-19: Restoring the full file backups. Processed 184 pages for database 'DatabaseForFileBackups', file 'DatabaseForFileBackups' on file 1. Processed 6 pages for database 'DatabaseForFileBackups', file 'DatabaseForFileBackups_log' on file 1. The roll forward start point is now at log sequence number (LSN) Additional roll forward past LSN is required to complete the restore sequence. This RESTORE statement successfully performed some actions, but the database could not be brought online because one or more RESTORE steps are needed. Previous messages indicate reasons why recovery cannot occur at this point. RESTORE DATABASE... FILE=<name> successfully processed 190 pages in seconds ( MB/sec). Processed 16 pages for database 'DatabaseForFileBackups', file 'DatabaseForFileBackups_Data2' on file 1. Processed 2 pages for database 'DatabaseForFileBackups', file 'DatabaseForFileBackups_log' on file 1. RESTORE DATABASE... FILE=<name> successfully processed 18 pages in seconds (1.274 MB/sec). Figure 9-6: Output message from restoring the full file backups. Notice that we didn't specify the state to which to return the database after the first RESTORE command. By default this would attempt to bring the database back online, with recovery, but in this case SQL Server knows that there are more files to process, so it 323 324 Chapter 9: File and Filegroup Backup and Restore keeps the database in a restoring state. The first half of the message output from running this command, shown in Figure 9-6, tells us that the roll forward start point is at a specific LSN number but that an additional roll forward is required and so more files will have to be restored to bring the database back online. The second part of the message simply reports that the restore of the backup for the secondary data file was successful. Since we specified that the database should be left in a restoring state after the second restore command, SQL Server doesn't try to recover the database to a usable state (and is unable to do so). If you check your Object Explorer in SSMS, you'll see that Database- ForFileBackups is still in a restoring state. After the full file backups, we took a transaction log backup (_TLOG), but since we're rolling forward past the subsequent differential file backups, where any data changes will be captured for each filegroup, we don't need to restore the first transaction log, on this occasion. So, let's go ahead and restore the two differential file backups, as shown in Listing USE master Listing 9-20: Restore the differential file backups. 324 325 Chapter 9: File and Filegroup Backup and Restore The next step is to restore the second transaction log backup (_TLOG2), as shown in Listing When it comes to restoring the transaction log backup files, we need to specify NORECOVERY on all of them except the last. The last group of log backup files we are restoring (represented by only a single log backup in this example!) may be processing data for all of the data files and, if we do not specify NORECOVERY, we can end up putting the database in a usable state for the user, but unable to apply the last of the log backup files. USE master RESTORE DATABASE [DatabaseForFileBackups] FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG2.trn' WITH NORECOVERY Listing 9-21: Restore the second log backup. Finally, we need to apply the tail log backup, where we know our third data load is captured, and recover the database. RESTORE DATABASE [DatabaseForFileBackups] FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG_TAIL.trn' WITH RECOVERY Listing 9-22: Restore the tail log backup and recover the database. A simple query of the restored database will confirm that we've restored the database, with all the rows intact. USE [DatabaseForFileBackups] SELECT * FROM Table_DF1 SELECT * FROM Table_DF2 325 326 Chapter 9: File and Filegroup Backup and Restore Message This is the initial data load for the table This is the second data load for the table This is the point-in-time data load for the table This is the third data load for the table (4 row(s) affected) Message This is the initial data load for the table This is the second data load for the table This is the point-in-time data load for the table This is the third data load for the table (4 row(s) affected) Listing 9-23: Verifying the restored data. Restoring to a point in time (native T-SQL) Listing 9-24 shows the script to restore our DatabaseForFileBackups to a point in time either just before, or just after, we inserted the two rows in Listing 326 327 Chapter 9: File and Filegroup Backup and Restore RESTORE DATABASE [DatabaseForFileBackups] FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG2.trn' WITH RECOVERY, STOPAT = ' T13:18:00' -- enter your time here Listing 9-24: Filegroup restore to a point in time. Notice that we include the REPLACE keyword in the first restore, since we are trying to replace the database and in this case aren't starting with a tail log backup, and there may be transactions in the log that haven't been backed up yet. We then restore the second full file backup and the two differential file backups, leaving the database in a restoring state each time. Finally, we restore the second transaction log backup, using the STOPAT parameter to indicate the time to which we wish to restore the database. In my example, I set the time for the STOPAT parameter to be about 30 seconds before the two INSERTs were executed, and Listing 9-25 confirms that only the first two data loads are present in the restored database. 327 328 Chapter 9: File and Filegroup Backup and Restore USE DatabaseForFileBackups SELECT * FROM dbo.table_df1 SELECT * FROM dbo.table_df2 Message This is the initial data load for the table This is the second data load for the table (2 row(s) affected) Message This is the initial data load for the table This is the second data load for the table (2 row(s) affected) Listing 9-25: Checking on the point-in-time restore data. Great! We can see that our data is exactly what we expected. We have restored the data to the point in time exactly where we wanted. In the real world, you would be restoring to a point in time before a disaster struck or when data was somehow removed or corrupted. Restoring after loss of a secondary data file One of the major potential benefits of file-based restore, especially for VLDBs, is the ability to restore just a single data file, rather than the whole database, in the event of a disaster. Let's imagine that we have a VLDB residing on one of our most robust servers. Again, this VLDB comprises three data files in separate filegroups that contain completely different database objects and data, with each file located on one of three separate physical hard drives in our SAN attached drives. 328 329 Chapter 9: File and Filegroup Backup and Restore Everything is running smoothly, and we get great performance for this database until our SAN suddenly suffers a catastrophic loss on one of its disk enclosures and we lose the drive holding one of the secondary data files. The database goes offline. We quickly get a new disk attached to our server, but the secondary data file is lost and we are not going to be able to get it back. As this point, all of the tables and data in that data file will be lost but, luckily, we have been performing regular full file, differential file, and transaction log backups of this database, and if we can capture a final tail-of-thelog backup, we can get this database back online using only the backup files for the lost secondary data file, plus the necessary log files, as shown in Figure 9-7. Figure 9-7: Example backups required to recover a secondary data file. In order to get the lost data file back online, with all data up to the point of the disk crash, we would: perform a tail log backup this requires a special form of log backup, which does not truncate the transaction log restore the full file backup for the lost secondary data file restore the differential file backup for the secondary data file, at which point SQL Server would expect to be able to apply log backups 3 and 4 plus the tail log backup recover the database. 329 330 Chapter 9: File and Filegroup Backup and Restore Once you do this, your database will be back online and recovered up to the point of the failure! This is a huge time saver since restoring all of the files could take quite a long time. Having to restore only the data file that was affected by the crash will cut your recovery time down significantly. However, there are a few caveats attached to this technique, which we'll discuss as we walk through a demo. Specifically, we're going to use SQL Backup scripts (these can easily be adapted into native T-SQL scripts) to show how we might restore just a single damaged, or otherwise unusable, secondary data file for our DatabaseForFile- Backups_SB database, without having to restore the primary data file. Note that if the primary data file went down, we'd have to restore the whole database, rather than just the primary data file. However, with Enterprise Edition SQL Server, it would be possible to get the database back up and running by restoring only the primary data file, followed subsequently by the other data files. We'll discuss that in more detail in the next section. This example is going to use our DatabaseForFileBackups_SB database to restore from a single disk / single file failure, using SQL Backup T-SQL scripts. A script demonstrating the same process using native T-SQL is available with the code download for this book. If you recall, for this database we have Table_DF1, stored in the primary data file (DatabaseForFileBackups_SB.mdf) in the PRIMARY filegroup, and Table_DF2, stored in the secondary data file (DatabaseForFileBackups_SB_Data2.ndf) in the SECONDARY filegroup. Our first data load (one row into each table) was captured in full file backups for each filegroup. We then captured a first transaction log backup. Our second data load (an additional row into each table) was captured in differential file backups for each filegroup. Finally, we took a second transaction log backup. Let's perform a third data load, inserting one new row into Table_DF2. 330 331 Chapter 9: File and Filegroup Backup and Restore USE [DatabaseForFileBackups_SB] INSERT INTO Table_DF2 VALUES ( 'This is the third data load for Table_DF2' ) Listing 9-26: Third data load for DatabaseForFileBackups_SB. Now we're going to simulate a problem with our secondary data file which takes it (and our database) offline; in Listing 9-27 we take the database offline. Having done so, navigate to the C:\SQLData\Chapter9 folder and delete the secondary data file! -- Take DatabaseForFileBackups_SB offline USE master ALTER DATABASE [DatabaseForFileBackups_SB] SET OFFLINE; /*Now delete DatabaseForFileBackups_SB_Data2.ndf!*/ Listing 9-27: Take DatabaseForFileBackups_SB offline and delete secondary data file! Scary stuff! Next, let's attempt to bring our database back online. USE master ALTER DATABASE [DatabaseForFileBackups_SB] SET ONLINE; Msg 5120, Level 16, State 5, Line 1 Unable to open the physical file "C:\SQLData\DatabaseForFileBackups_SB_Data2.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". Msg 945, Level 14, State 2, Line 1 Database 'DatabaseForFileBackups_SB' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. Msg 5069, Level 16, State 1, Line 1 ALTER DATABASE statement failed. Listing 9-28: Database cannot come online due to missing secondary data file. 331 332 Chapter 9: File and Filegroup Backup and Restore As you can see, this is unsuccessful, as SQL Server can't open the secondary data file. Although unsuccessful, this attempt to bring the database online is still necessary, as we need the database to attempt to come online, so the log file can be read. We urgently need to get the database back online, in the state it was in when our secondary file failed. Fortunately, the data and log files are on separate drives from the secondary, so these are still available. We know that there is data in the log file that isn't captured in any log backup, so our first task is to back up the log. Unfortunately, a normal log backup operation, such as shown in Listing 9-29 will not succeed. -- a normal tail log backup won't work USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForFileBackups_SB] TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG_TAIL.sqb'' WITH NORECOVERY"' Listing 9-29: A standard tail log backup fails. In my tests, this script just hangs and has to be cancelled, which is unfortunate. The equivalent script in native T-SQL results in an error message to the effect that: "Database 'DatabaseForFileBackups_SB' cannot be opened due to inaccessible files or insufficient memory or disk space." SQL Server cannot back up the log this way because the log file is not available and part of the log backup process, even a tail log backup, needs to write some log info into the database header. Since it cannot, we need to use a different form of tail backup to get around this problem. 332 333 Chapter 9: File and Filegroup Backup and Restore What we need to do instead, is a special form of tail log backup that uses the NO_ TRUNCATE option, so that SQL Server can back up the log without access to the data files. In this case the log will not be truncated upon backup, and all log records will remain in the live transaction log. Essentially, this is a special type of log backup and isn't going to remain useful to us after this process is over. When we do get the database back online and completely usable, we want to be able to take a backup of the log file in its original state and not break the log chain. In other words, once the database is back online, we can take another log backup (TLOG3, say) and the log chain will be TLOG2 followed by TLOG3 (not TLOG2, TLOG_TAIL, TLOG3). I would, however, suggest attempting to take some full file backups immediately after a failure, if not a full database backup, if that is at all possible. USE [master] EXECUTE master..sqlbackup '-SQL "BACKUP LOG [DatabaseForFileBackups_SB] TO DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG_TAIL.sqb'' WITH NO_TRUNCATE"' Listing 9-30: Performing a tail log backup with NO_TRUNCATE for emergency single file recovery. Note that if we cannot take a transaction log backup before starting this process, we cannot get our database back online without restoring all of our backup files. This process will only work if we lose a single file from a drive that does not also house the transaction log. Now that we have our tail log backup done, we can move on to recovering our lost secondary data file. The entire process is shown in Listing You will notice that there are no backups in this set of RESTORE commands that reference our primary data file. This data would have been left untouched and doesn't need to be restored. Having to restore only the lost file will save us a great deal of time. 333 334 Chapter 9: File and Filegroup Backup and Restore USE [master] -- restore the full file backup for the secondary data file EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForFileBackups_SB] FILE = ''DatabaseForFileBackups_SB_Data2'' FROM DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Data2.sqb'' WITH NORECOVERY"' -- restore the differential file backup for the secondary data file EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForFileBackups_SB] FILE = ''DatabaseForFileBackups_SB_Data2'' FROM DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_Data2_Diff.sqb'' WITH NORECOVERY"' -- restore the subsequent transaction log backup EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForFileBackups_SB] FROM DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG2.sqb'' WITH NORECOVERY"' -- restore the tail log backup and recover the database EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForFileBackups_SB] FROM DISK = ''C:\SQLBackups\Chapter9\DatabaseForFileBackups_SB_TLOG_TAIL.sqb'' WITH RECOVERY"' Listing 9-31: Single file disaster SQL Backup restoration script. What we have done here is restore the full file, the differential file and all of the transaction log file backups that were taken after the differential file backup, including the tail log backup. This has brought our database back online and right up to the point where the tail log backup was taken. Now you may be afraid that you didn't get to it in time and some data may be lost, but don't worry. Any transactions that would have affected the missing data file would not have succeeded after the disaster, even if the database stayed online, and any that didn't use that missing data file would have been picked up by the tail log backup. 334 335 Chapter 9: File and Filegroup Backup and Restore Quick recovery using online piecemeal restore When using SQL Server Enterprise Edition, we have access to a great feature that can save hours of recovery time in the event of a catastrophic database loss. With this edition, SQL Server allows us to perform an online restore. Using the Partial option, we can restore the primary data file of our database and bring it back online without any other data files needing to be restored. In this way, we can bring the database online and in a usable state very quickly, and then apply the rest of the backups at a later time, while still allowing users to access a subset of the data. The tables and data stored in the secondary data files will not be accessible until they are restored. The database will stay online throughout the secondary data file restores, though. This way, we can restore the most important and most used data first, and the least used, archive data for instance, later, once we have all of the other fires put out. Let's take a look at an example; again, this requires an Enterprise (or Developer) Edition of SQL Server. Listing 9-32 creates a new database, with two tables, and inserts a row of data into each of the tables. Note, of course, that in anything other than a simple test example, the primary and secondary files would be on separate disks. USE [master] CREATE DATABASE [DatabaseForPartialRestore] ON PRIMARY ( NAME = N'DatabaseForPartialRestore', FILENAME = N'C:\SQLData\DatabaseForPartialRestore.mdf', SIZE = 5120KB, FILEGROWTH = 5120KB ), FILEGROUP [Secondary] ( NAME = N'DatabaseForPartialRestoreData2', FILENAME = N'C:\SQLData\DatabaseForPartialRestoreData2.ndf', SIZE = 5120KB, FILEGROWTH = 5120KB ) LOG ON ( NAME = N'DatabaseForPartialRestore_log', FILENAME = N'C:\SQLData\DatabaseForPartialRestore_log.ldf', SIZE = 5120KB, FILEGROWTH = 5120KB ) 335 336 Chapter 9: File and Filegroup Backup and Restore USE [DatabaseForPartialRestore] IF NOT EXISTS ( SELECT name FROM sys.filegroups WHERE is_default = 1 AND name = N'Secondary' ) ALTER DATABASE [DatabaseForPartialRestore] MODIFY FILEGROUP [Secondary] DEFAULT USE [master] ALTER DATABASE [DatabaseForPartialRestore] SET RECOVERY FULL WITH NO_WAIT USE [DatabaseForPartialRestore] CREATE TABLE [dbo].[message_primary] ( [Message] [varchar](50) NOT NULL ) ON [Primary] CREATE TABLE [dbo].[message_secondary] ( [Message] [varchar](50) NOT NULL ) ON [Secondary] INSERT INTO Message_Primary VALUES ( 'This is data for the primary filegroup' ) INSERT INTO Message_Secondary VALUES ( 'This is data for the secondary filegroup' ) Listing 9-32: Creating our database and tables for testing. This script is pretty long, but nothing here should be new to you. Our new database contains both a primary and secondary filegroup, and we establish the secondary filegroup as the DEFAULT, so this is where user objects and data will be stored, unless we specify otherwise. We then switch the database to FULL recovery model just to be sure 336 337 Chapter 9: File and Filegroup Backup and Restore that we can take log backups of the database; always validate which recovery model is in use, rather than just relying on the default being the right one. Finally, we create one table in each filegroup, and insert a single row into each table. Listing 9-33 simulates a series of file and log backups on our database; we can imagine that the file backups for each filegroup are taken on successive nights, and that the log backup after each file backup represents the series of log files that would be taken during the working day. Note that, in order to keep focused, we don't start proceedings with a full database backup, as would generally be recommended. USE [master] BACKUP DATABASE [DatabaseForPartialRestore] FILEGROUP = N'PRIMARY' TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_FG_Primary.bak' WITH INIT BACKUP LOG [DatabaseForPartialRestore] TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_LOG_1.trn' WITH NOINIT BACKUP DATABASE [DatabaseForPartialRestore] FILEGROUP = N'SECONDARY' TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_FG_Secondary.bak' WITH INIT BACKUP LOG [DatabaseForPartialRestore] TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_LOG_2.trn' WITH NOINIT Listing 9-33: Taking our filegroup and log backups. 337 338 Chapter 9: File and Filegroup Backup and Restore With these backups complete, we have all the tools we need to perform a piecemeal restore! Remember, though, that we don't need to be taking file backups in order to perform a partial/piecemeal restore. If the database is still small enough, we can still take full database backups and then restore just a certain filegroup from that backup file, in the manner demonstrated next. Listing 9-34 restores just our primary filegroup, plus subsequent log backups, and then beings the database online without the secondary filegroup! USE [master] RESTORE DATABASE [DatabaseForPartialRestore] FILEGROUP = 'PRIMARY' FROM DISK = 'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_FG_Primary.bak' WITH PARTIAL, NORECOVERY, REPLACE RESTORE LOG [DatabaseForPartialRestore] FROM DISK = 'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_LOG_1.trn' WITH NORECOVERY RESTORE LOG [DatabaseForPartialRestore] FROM DISK = 'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_LOG_2.trn' WITH RECOVERY Listing 9-34: Restoring our primary filegroup via an online piecemeal restore. Notice the use of the PARTIAL keyword to let SQL Server know that we will want to only partially restore the database, i.e. restore only the primary filegroup. We could also restore further filegroups here, but the only one necessary is the primary filegroup. Note the use of the REPLACE keyword, since we are not taking a tail log backup. Even though we recover the database upon restoring the final transaction log, we can still, in this case, restore the other data files later. The query in Listing 9-35 attempts to access data in both the primary and secondary filegroups. 338 339 Chapter 9: File and Filegroup Backup and Restore USE [DatabaseForPartialRestore] SELECT Message FROM Message_Primary SELECT Message FROM Message_Secondary Listing 9-35: Querying the restored database. The first query should return data, but the second one will fail with the following error: Msg 8653, Level 16, State 1, Line 1 The query processor is unable to produce a plan for the table or view 'Message_Secondary' because the table resides in a filegroup which is not online. This is exactly the behavior we expect, since the secondary filegroup is still offline. In a well-designed database we would, at this point, be able to access all of the most critical data, leaving just the least-used data segmented into the filegroups that will be restored later. The real bonus is that we can subsequently restore the other filegroups, while the database is up and functioning! Nevertheless, the online restore is going to be an I/O-intensive process and we would want to affect the users as little as possible, while giving as much of the SQL Server horsepower as we could to the restore. That means that it's best to wait till a time when database access is sparse before restoring the subsequent filegroups, as shown in Listing USE [master] RESTORE DATABASE [DatabaseForPartialRestore] FROM DISK = 'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_FG_SECONDARY.bak' WITH RECOVERY 339 340 Chapter 9: File and Filegroup Backup and Restore RESTORE LOG [DatabaseForPartialRestore] FROM DISK = 'C:\SQLBackups\Chapter9\DatabaseForPartialRestore_LOG_2.trn' WITH RECOVERY Listing 9-36: Restoring the secondary filegroup. We restore the secondary full file backup, followed by all subsequent log backups, so that SQL Server can bring the other filegroup back online while guaranteeing relational integrity. Notice that in this case each restore is using WITH RECOVERY; in an online piecemeal restore, with the Enterprise edition, each restore leaves the database online and accessible to the end-user. The first set of restores, in Listing 9-34, used NORECOVERY, but that was just to get us to the point where the primary filegroup was online and available. All subsequent restore steps use RECOVERY. Rerun Listing 9-35, and all of the data should be fully accessible! Common Issues with File Backup and Restore What's true for any backup scheme we've covered throughout the book, involving the management of numerous backup files of different types, is doubly true for file backups: the biggest danger to the DBA is generally the loss or corruption of one of those files. Once again, the best strategy is to minimize the risk via careful planning and documentation of the backup strategy, careful storage and management of the backup files and regular, random backup verification via test restores (as described in Chapter 2). If a full file backup goes missing, or is corrupt, we'll need to start the restore from the previous good full file backup, in which case we had better have a complete chain of transaction log files in order to roll past the missing full file backup (any differentials that rely on the missing full one will be useless). 340 341 Chapter 9: File and Filegroup Backup and Restore A similar argument applies to missing differential file backups; we'll have to simply rely on the full file backup and the chain of log files. If a log file is lost, the situation is more serious. Transaction log files are the glue that holds together the other two file backup types, and they should be carefully monitored to make sure they are valid and that they are being stored locally as well as on long-term storage. Losing a transaction log backup can be a disaster if we do not have a set of full file and file differential backups that cover the time frame of the missing log backup. If a log file is unavailable or corrupt, and we really need it to complete a restore operation, we are in bad shape. In this situation, we will not be able to restore past that point in time and will have to find a way to deal with the data loss. This is why managing files carefully, and keeping a tape backup offsite is so important for all of our backup files. File Backup and Restore SLA File backup and restore will most likely be a very small part of most recoverability strategies. For most databases, the increased management responsibility of file backups will outweigh the benefits gained by adopting such a backup strategy. As a DBA, you may want to "push back" on a suggestion of this type of backup unless you judge the need to be there. If the database is too large to perform regular (e.g. nightly) full database backups and the acceptable down-time is too short to allow a full database restore, then file backups would be needed, but they should not be the norm for most DBAs. If file backups are a necessity for a given database, then the implications of this, for backup scheduling and database recovery times, need to be made clear in the SLA. Of course, as we've demonstrated, down time could in some circumstances, such as where online piecemeal restore is possible, be much shorter. As with all backup and restore SLAs, once agreed, we need to be sure that we can implement a backup and recovery strategy that will comply with the maximum tolerance to data loss and that will bring a database back online within agreed times in the event of a disaster, such as database failure or corruption. 341 342 Chapter 9: File and Filegroup Backup and Restore Considerations for the SLA agreement, for any databases requiring file backups include those shown below. Scheduling of full file backups I recommend full file backup at least once per week, although I've known cases where we had to push beyond this as it wasn't possible to get a full backup of each of the data files in that period. Scheduling of differential file backups I recommend scheduling differential backups on any day where a full file backup is not being performed. As discussed in Chapter 7, this can dramatically decrease the number of log files to be processed for a restore operation. Scheduling of transaction log backups These should be taken daily, at an interval chosen by yourself and the project manager whose group uses the database. I would suggest taking log backups of a VLDB using file backups at an interval of no more than 1 hour. Of course, if the business requires a more finely-tuned window of recoverability, you will need to shorten that schedule down to 30 or even 15 minutes, as required. Even if the window of data loss is more than 1 hour, I would still suggest taking log backups hourly. For any database, it's important to ensure that all backups are completing successfully, and that all the backup files are stored securely and in their proper location, whether on a local disk or on long-term tape storage. However, in my experience, the DBA needs to be exceptionally vigilant in this regard for databases using file backups. There are many more files involved, and a missing log file can prevent you from being able to restore a database to a consistent state. With a missing log file and in the absence of a full file or differential file backup that covers the same time period, we'd have no choice but to restore to a point in time before the missing log file was taken. These log backups are also your "backup of a backup" in case a differential or full file backup goes missing. I would suggest keeping locally at least the last two full file backups, subsequent differential file backup and all log backups spanning the entire time frame of these file backups, even after they have been written to tape and taken to offsite storage. 342 343 Chapter 9: File and Filegroup Backup and Restore It may seem like a lot of files to keep handy, but since this is one of the most file intensive types of restores, it is better to be more safe than sorry. Waiting for a file from tape storage can cost the business money and time that they don't want to lose. Restore times for any database using file backups can vary greatly, depending on the situation; sometimes we'll need to restore several, large full file backups, plus any differential backups, plus all the necessary log backups. Other times, we will only need to restore a single file or filegroup backup plus the covering transaction log backups. This should be reflected in the SLA, and the business owners and end-users should be prepared for this varying window of recovery time, in the event that a restore is required. If the estimated recovery time is outside acceptable limits for complete restore scenarios, then if the SQL Server database in question is Enterprise Edition, y consider supporting the online piecemeal restore process discussed earlier. If not, then the business owner will need to weigh up the cost of upgrading to Enterprise Edition licenses, against the cost of extended down-time in the event of a disaster. As always, the people who use the database will drive the decisions made and reflected in the SLA. They will know the fine details of how the database is used and what processes are run against the data. Use their knowledge of these details when agreeing on appropriate data loss and recovery time parameters, and on the strategy to achieve them. Forcing Failures for Fun After a number of graceful swan dives throughout this chapter, get ready for a few bellyflops! Listing 9-37 is an innocent-looking script that attempts to take a full file backup of the secondary data file of the DatabaseForFileBackups database. We've performed several such backups successfully before, so try to figure out what the problem is before you execute it. 343 344 Chapter 9: File and Filegroup Backup and Restore USE [master] BACKUP DATABASE [DatabaseForFileBackups] FILE = N'SECONDARY' TO DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG1_Full2.bak' WITH STATS = 10 Listing 9-37: File/filegroup confusion. The error is quite subtle so, if you can't spot it, go ahead and execute the script and take a look at the error message, shown in Figure 9-8. Figure 9-8: The file "SECONDARY" is not part of the database The most revealing part of the error message here states that The file "SECONDARY" is not part of database "DatabaseForFileBackups. However, we know that the SECONDARY filegroup is indeed part of this database. The error we made was with our use of the FILE parameter; SECONDARY is the name of the filegroup, not the secondary data file. We can either change the parameter to FILEGROUP (since we only have one file in this filegroup), or we can use the FILE parameter and reference the name of the secondary data file explicitly (FILE= N'DatabaseForFileBackups_Data2'). Let's now move on to a bit of file restore-based havoc. Consider the script shown in Listing 9-38, the intent of which appears to be to restore our DatabaseForFileBackups database to the state in which it existed when we took the second transaction log backup file. 344 345 Chapter 9: File and Filegroup Backup and Restore USE [Master] RESTORE DATABASE [DatabaseForFileBackups] FILE = N'DatabaseForFileBackups' FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG1_Full.bak' WITH NORECOVERY, REPLACE RESTORE DATABASE [DatabaseForFileBackups] FILE = N'DatabaseForFileBackups_Data2' FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG2_Full.bak' WITH NORECOVERY RESTORE DATABASE [DatabaseForFileBackups] FILE = N'DatabaseForFileBackups' FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_FG1_Diff.bak' WITH NORECOVERY RESTORE DATABASE [DatabaseForFileBackups] FROM DISK = N'C:\SQLBackups\Chapter9\DatabaseForFileBackups_TLOG2.trn' WITH RECOVERY Listing 9-38: File restore failure. The error in the script is less subtle, and I'm hoping you worked out what the problem is before seeing the error message in Figure 9-8. Listing 9-39: The backup set is too recent 345 346 Chapter 9: File and Filegroup Backup and Restore We can see that the first three RESTORE commands executed successfully but the fourth failed, with a message stating that the LSN contained in the backup was too recent to apply. Whenever you see this sort of message, it means that a file is missing from your restore script. In this case, we forgot to apply the differential file backup for the secondary data file; SQL Server detects the gap in the LSN chain and aborts the RESTORE command, leaving the database in a restoring state. The course of action depends on the exact situation. If the differential backup file is available and you simply forgot to include it, then restore this differential backup, followed by TLOG2, to recover the database. If the differential file backup really is missing or corrupted, then you'll need to process all transaction log backups taken after the full file backup was created. In our simple example this just means TLOG and TLOG2, but in a real-world scenario this could be quite a lot of log backups. Again, hopefully this hammers home the point that it is a good idea to have more than one set of files on hand, or available from offsite storage, which could be used to bring your database back online in the event of a disaster. You never want to be in the situation where you have to lose more data than is necessary, or are not be able to restore at all. Summary In my experience, the need for file backup and restore has tended to be relatively rare, among the databases that I manage. The flipside to that is that the databases that do need them tend to be VLDBs supporting high visibility projects, and all DBAs need to make sure that they are well-versed in taking, as well as restoring databases from, the variety of file backups. File backup and restore adds considerable complexity to our disaster recovery strategy, in terms of both the number and the type of backup file that must be managed. To gain full benefit from file backup and restores, the DBA needs to give considerable thought to the file and filegroup architecture for that database, and plan the backup and restore process 346 347 Chapter 9: File and Filegroup Backup and Restore accordingly. There are an almost infinite number of possible file and filegroup architectures, and each would require a subtly different backup strategy. You'll need to create some test databases, with multiple files and filegroups, work through them, and then document your approach. You can design some test databases to use any number of data files, and then create jobs to take full file, differential file, and transaction log backups that would mimic what you would use in a production environment. Then set yourself the task of responding to the various possible disaster scenarios, and bring your database back online to a certain day, or even a point in time that is represented in one of the transaction log backup files. 347 348 Chapter 10: Partial Backup and Restore Partial the backup footprint for large databases that contain a high proportion of read-only data. After all, why would we back up the same data each night when we know that it cannot have been changed, since it was in a read-only state? We wouldn't, or shouldn't, be backing up that segment of the data on a regular schedule. In this chapter, we will discuss: why and where partial backups may be applicable in a backup and restore scheme how to perform partial and differential partial backups how to restore a database that adopts a partial backup strategy potential problems with partial backups and how to avoid them. 348 349 Chapter 10: Partial Backup and Restore Why Partial Backups? Partial backups allow us to back up, on a regular schedule, only the read-write objects and data in our databases. Any data and objects stored in read-only filegroups will not, by default, be captured in a partial backup. From what we've discussed in Chapter 9, it should be clear that we can achieve the same effect by capturing separate file backups of each of the read-write filegroups. However, suppose we have a database that does not require support for point-in-time restores; transaction log backups are required when performing file-based backups, regardless, so this can represent an unnecessarily complex backup and restore process.. So, in what situations might we want to adopt partial backups on a database in our infrastructure? Let's say we have a SQL Server-backed application for a community center and its public classes. The database holds student information, class information, grades, payment data and other data about the courses. At the end of each quarter, one set of courses completes, and a new set begins. Once all of the information for a current quarter's courses is entered into the system, the data is only lightly manipulated throughout the course lifetime. The instructor may update grades and attendance a few times per week, but the database is not highly active during the day. Once the current set of courses completes, at quarter's end, the information is archived, and kept for historical and auditing purposes, but will not be subject to any further changes. This sort of database is a good candidate for partial backups. The "live" course data will be stored in a read-write filegroup. Every three months, this data can be appended to a 349 350 Chapter 10: Partial Backup and Restore set of archive tables for future reporting and auditing, stored in a read-only filegroup (this filegroup would be switched temporarily to read-write in order to run the archive process). We can perform this archiving in a traditional data-appending manner by moving all of the data from the live tables to the archive tables, or we could streamline this process via the use of partitioning functions. Once the archive process is complete and the new course data has been imported, we can take a full backup of the whole database and store it in a safe location. From then on, we can adopt a schedule of, say, weekly partial backups interspersed with daily differential partial backups. This way we are not wasting any space or time backing up the read-only data. Also, it may well be acceptable to operate this database in SIMPLE recovery model, since we know that once the initial course data is loaded, changes to the live course data are infrequent, so an exposure of one day to potential data loss may be tolerable. In our example, taking a full backup only once every three months may seem a little too infrequent. Instead, we might consider performing a monthly full backup, to provide a little extra insurance, and simplify the restore process. Performing Partial Database Backups In most DBAs' workplaces, partial backups will be the least used of all the backup types so, rather than walk through two separate examples, one for native SQL Server backup commands and one for SQL Backup, we'll only work through an example of partial backup and restore using native T-SQL commands, since partial backups are not supported in either SSMS or in the Maintenance Plan wizard. The equivalent SQL Backup commands will be presented, but outside the context of a full worked example. 350 351 Chapter 10: Partial Backup and Restore Preparing for partial backups Listing 10-1 shows the script to create a DatabaseForPartialBackups database with multiple data files. The primary data file will hold our read-write data, and a secondary data file, in a filegroup called Archive, will hold our read-only data. Having created the database, we immediately alter it to use the SIMPLE recovery model. USE [master] CREATE DATABASE [DatabaseForPartialBackups] ON PRIMARY ( NAME = N'DatabaseForPartialBackups', FILENAME = N'C:\SQLData\DatabaseForPartialBackups.mdf', SIZE = 10240KB, FILEGROWTH = 10240KB ), FILEGROUP [Archive] ( NAME = N'DatabaseForPartialBackups_ReadOnly', FILENAME = N'C:\SQLData\DatabaseForPartialBackups_ReadOnly.ndf', SIZE = 10240KB, FILEGROWTH = 10240KB ) LOG ON ( NAME = N'DatabaseForPartialBackups_log', FILENAME = N'C:\SQLData\DatabaseForPartialBackups_log.ldf', SIZE = 10240KB, FILEGROWTH = 10240KB ) ALTER DATABASE [DatabaseForPartialBackups] SET RECOVERY SIMPLE Listing 10-1: Creating the DatabaseForPartialBackups test database. The script is fairly straightforward and there is nothing here that we haven't discussed in previous scripts for multiple data file databases. The Archive filegroup will eventually be set to read-only, but first we are going to need to create some tables in this filegroup and populate one of them with data, as shown in Listing 352 Chapter 10: Partial Backup and Restore USE [DatabaseForPartialBackups] CREATE TABLE dbo.maindata ( ID INT NOT NULL IDENTITY(1, 1), Message NVARCHAR(50) NOT NULL ) ON [PRIMARY] CREATE TABLE dbo.archivedata ( ID INT NOT NULL, Message NVARCHAR(50) NOT NULL ) ON [Archive] INSERT INTO dbo.maindata VALUES ( 'Data for initial database load: Data 1' ) INSERT INTO dbo.maindata VALUES ( 'Data for initial database load: Data 2' ) INSERT INTO dbo.maindata VALUES ( 'Data for initial database load: Data 3' ) Listing 10-2: Creating the MainData and ArchiveData tables and populating the MainData table. The final preparatory step for our example is to simulate an archiving process, copying data from the MainData table into the ArchiveData table, setting the Archive filegroup as read-only, and then deleting the archived data from MainData, and inserting the next set of "live" data. Before running Listing 10-3, make sure there are no other query windows connected to the DatabaseForPartialBackups database. If there are, the conversion of the secondary file group to READONLY will fail, as we need to have exclusive access on the database before we can change filegroup states. 352 353 Chapter 10: Partial Backup and Restore USE [DatabaseForPartialBackups] INSERT INTO dbo.archivedata SELECT ID, Message FROM MainData ALTER DATABASE [DatabaseForPartialBackups] MODIFY FILEGROUP [Archive] READONLY DELETE FROM dbo.maindata INSERT INTO dbo.maindata VALUES ( 'Data for second database load: Data 4' ) INSERT INTO dbo.maindata VALUES ( 'Data for second database load: Data 5' ) INSERT INTO dbo.maindata VALUES ( 'Data for second database load: Data 6' ) Listing 10-3: Data archiving and secondary data load. Finally, before we take our first partial backup, we want to capture one backup copy of the whole database, including the read-only data, as the basis for any subsequent restore operations. We can take a partial database backup before taking a full one, but we do want to make sure we have a solid restore point for the database, before starting our partial backup routines. Therefore, Listing 10-4 takes a full database backup of our DatabaseForPartialBackups database. Having done so, it also inserts some more data into MainData, so that we have fresh data to capture in our subsequent partial backup. 353 354 Chapter 10: Partial Backup and Restore USE [master] BACKUP DATABASE DatabaseForPartialBackups TO DISK = N'C:\SQLBackups\Chapter10\DatabaseForPartialBackups_FULL.bak' INSERT INTO DatabaseForPartialBackups.dbo.MainData VALUES ( 'Data for third database load: Data 7' ) INSERT INTO DatabaseForPartialBackups.dbo.MainData VALUES ( 'Data for third database load: Data 8' ) INSERT INTO DatabaseForPartialBackups.dbo.MainData VALUES ( 'Data for third database load: Data 9' ) Listing 10-4: Full database backup of DatabaseForPartialBackups, plus third data load. The output from the full database backup is shown in Figure Notice that, as expected, it processes both of our data files, plus the log file.. BACKUP DATABASE successfully processed 194 pages in seconds ( MB/sec). Figure 10-1: Output from the full database backup. Partial database backup using T-SQL We are now ready to perform our first partial database backup, which will capture the data inserted in our third data load, as shown in Listing 355 Chapter 10: Partial Backup and Restore BACKUP DATABASE DatabaseForPartialBackups READ_WRITE_FILEGROUPS TO DISK = N'C:\SQLBackups\Chapter10\DatabaseForPartialBackups_PARTIAL_Full.bak' Listing 10-5: A partial backup of DatabaseForPartialBackups. The only difference between this backup command and the full database backup command shown in Listing 10-4 is the addition of the READ_WRITE_FILEGROUPS option. This option lets SQL Server know that the command is a partial backup and to only process the read-write filegroups contained in the database. The output should be similar to that shown in Figure Notice that this time only the primary data file and the log file are processed. This is exactly what we expected to see: since we are not processing any of the read-only data, we shouldn't see that data file being accessed in the second backup command. Processed 176 pages for database 'DatabaseForPartialBackups', file 'DatabaseForPartialBackups' on file 1. Processed 2 pages for database 'DatabaseForPartialBackups', file 'DatabaseForPartialBackups_log' on file 1. BACKUP DATABASE...FILE=<name> successfully processed 178 pages in seconds ( MB/sec). Figure 10-2: Partial backup results. Differential partial backup using T-SQL Just as we can have differential database backups, which refer to a base full database backup, so we can take differential partial database backups that refer to a base partial database backup, and will capture only the data that changed in the read-write data files, since the base partial backup was taken. 355 356 Chapter 10: Partial Backup and Restore Before we run a differential partial backup, we need some fresh data to process. USE [DatabaseForPartialBackups] INSERT INTO MainData VALUES ( 'Data for fourth database load: Data 10' ) INSERT INTO MainData VALUES ( 'Data for fourth database load: Data 11' ) INSERT INTO MainData VALUES ( 'Data for fourth database load: Data 12' ) Listing 10-6: Fourth data load, in preparation for partial differential backup. Listing 10-7 shows the script to run our partial differential backup. The one significant difference is the inclusion of the WITH DIFFERENTIAL option, which converts the command from a full partial to a differential partial backup. USE [master] BACKUP DATABASE [DatabaseForPartialBackups] READ_WRITE_FILEGROUPS TO DISK = N'C:\SQLBackups\Chapter10\DatabaseForPartialBackups_PARTIAL_Diff.bak' WITH DIFFERENTIAL Listing 10-7: Performing the partial differential backup. Once this command is complete, go ahead and check the output of the command in the messages tab to make sure only the proper data files were processed. We are done taking partial backups for now and can move on to our restore examples. 356 357 Chapter 10: Partial Backup and Restore Performing Partial Database Restores We will be performing two restore examples: one restoring the DatabaseForPartial- Backups database to the state in which it existed after the third data load, using the full database and full partial backup files, and one restoring the database to its state after the fourth data load, using the full database, full partial, and differential partial backup files. Restoring a full partial backup Listing 10-8 show the two simple steps in our restore process. The first step restores our full database backup file, which will restore the data and objects in both our read-write and read-only filegroups. In this step, we include the NORECOVERY option, so that SQL Server leaves the database in a state where we can apply more files. This is important so that we don't wind up with a database that is online and usable before we apply the partial backup. The second step restores our full partial backup file. This will overwrite the read-write files in the existing database with the data from the backup file, which will contain all the data we inserted into the primary data file, up to and including the third data load. We specify that this restore operation be completed with RECOVERY. RECOVERY Listing 10-8: Restoring the partial full database backup. 357 358 Chapter 10: Partial Backup and Restore The output from running this script is shown in Figure We should see all files being processed in the first command, and only the read-write and transaction log file being modified in the second command.. RESTORE DATABASE successfully processed 194 pages in seconds (1.986 MB/sec). Processed 176 pages for database 'DatabaseForPartialBackups', file 'DatabaseForPartialBackups' on file 1. Processed 2 pages for database 'DatabaseForPartialBackups', file 'DatabaseForPartialBackups_log' on file 1. RESTORE DATABASE... FILE=<name> successfully processed 178 pages in seconds ( MB/sec). Figure 10-3: Partial database backup restore output. Everything looks good, and exactly as expected, but let's put on our Paranoid DBA hat once more and check that the restored database contains the right data. USE [DatabaseForPartialBackups] SELECT ID, Message FROM dbo.maindata SELECT ID, Message FROM dbo.archivedata Listing 10-9: Checking out our newly restored data. 358 359 Chapter 10: Partial Backup and Restore Hopefully, we'll see three rows of data in the ArchiveData table and six rows of data in the read-write table, MainData, as confirmed in Figure Figure 10-4: Results of the data check on our newly restored database. Restoring a differential partial backup Our restore operation this time is very similar, except that we'll need to process all three of our backup files, to get the database back to its state after the final data load. You may be wondering why it's necessary to process the full partial backup in this case, rather than just the full backup followed by the differential partial backup. In fact, the full database backup cannot serve as the base for the differential partial backup; only a full partial backup can serve as the base for a differential partial backup, just as only a full database backup can serve as a base for a differential database backup. Each differential partial backup holds all the changes since the base partial backup so, if we had a series of differential partial backups, we would only need to restore the latest one in the series. 359 360 Chapter 10: Partial Backup and Restore Listing shows the script; we restore the full database and full partial backups, leaving the database in a restoring state, then apply the differential partial backup and recover the database. NORECOVERY RESTORE DATABASE [DatabaseForPartialBackups] FROM DISK = N'C:\SQLBackups\Chapter10\DatabaseForPartialBackups_PARTIAL_Diff.bak' WITH RECOVERY Listing 10-10: Restoring the partial differential backup file. Once again, check the output from the script to make sure everything looks as it should, and then rerun Listing 10-9 to verify that there are now three more rows in the MainData table, for a total of nine rows, and still only three rows in the ArchiveData table. Special case partial backup restore Here, we'll take a quick look at a special type of restore operation that we might term a "partial online piecemeal restore," which will bring the database online by restoring only the read-only filegroup in the database (requires Enterprise edition), as shown in Listing This type of restore can be done with both full partial backups as well as full file 360 361 Chapter 10: Partial Backup and Restore backups, provided the full file backup contains the primary filegroup, with the database system information. This is useful if we need to recover a specific table that exists in the read-write filegroups, or we want to view the contents of the backup without restoring the entire database. -- restore the read-write filegroups RESTORE DATABASE [DatabaseForPartialBackups] FROM DISK = N'C:\SQLBackups\Chapter10\DatabaseForPartialBackups_PARTIAL_Full.bak' WITH RECOVERY, PARTIAL Listing 10-11: Performing a partial online restore. Now, we should have a database that is online and ready to use, but with only the read-write filegroup accessible, which we can verify with a few simple queries, shown in Listing USE [DatabaseForPartialBackups] SELECT ID, Message FROM MainData SELECT ID, Message FROM ArchiveData Listing 10-12: Selecting data from our partially restored database. The script attempts to query both tables and the output is shown in Figure 362 Chapter 10: Partial Backup and Restore Figure 10-5: Unable to pull information from the archive table. We can see that we did pull six rows from the MainData table, but when we attempted to pull data from the ArchiveData table, we received an error, because that filegroup was not part of the file we used in our restore operation. We can see the table exists and even see its structure, if so inclined, since all of that information is stored in the system data, which was restored with the primary filegroup. SQL Backup Partial Backup and Restore In this section, without restarting the whole example from scratch, we will take a look at the equivalent full partial and differential partial backup commands in SQL Backup, as shown in Listing USE master -- full partial backup with SQL Backup EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForPartialBackups] READ_WRITE_FILEGROUPS TO DISK = ''C:\SQLBackups\Chapter10\DatabaseForPartialBackups_Partial_Full.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' 362 363 Chapter 10: Partial Backup and Restore -- differential partial backup with SQL Backup EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [DatabaseForPartialBackups] READ_WRITE_FILEGROUPS TO DISK = ''C:\SQLBackups\Chapter10\DatabaseForPartialBackups_Partial_Diff.sqb'' WITH DIFFERENTIAL, DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 3, THREADCOUNT = 2"' Listing 10-13: A SQL Backup script for full partial and differential partial backups. The commands are very similar to the native commands, and nearly identical to the SQL Backup commands we have used in previous chapters. The only addition is the same new option we saw in the native commands earlier in this chapter, namely READ_WRITE_FILEGROUPS. Listing shows the equivalent restore commands for partial backups; again, they are very similar to what we have seen before in other restore scripts. We restore the last full database backup leaving the database ready to process more files. This will restore all of the read-only data, and leave the database in a restoring state, ready to apply the partial backup data. We then apply the full partial and differential partial backups, and recover the database. -- full database backup restore EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForPartialBackups] FROM DISK = ''C:\SQLBackups\Chapter10\DatabaseForPartialBackups_FULL.sqb'' WITH NORECOVERY"' -- full partial backup restore EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForPartialBackups] FROM DISK = ''C:\SQLBackups\Chapter10\DatabaseForPartialBackups_Partial_Full.sqb'' WITH NORECOVERY"' -- differential partial backup restore EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DatabaseForPartialBackups] FROM DISK = ''C:\SQLBackups\Chapter10\DatabaseForPartialBackups_Partial_Diff.sqb'' WITH RECOVERY"' Listing 10-14: A SQL Backup script for full and differential partial restores. 363 364 Chapter 10: Partial Backup and Restore Possible Issues with Partial Backup and Restore The biggest issue when restoring a database using partial backups is the storage and management of the read-only data. This data is captured in a full database backup, directly after the data import, and then not backed up again. As such, it is easy for its existence to slip from a DBA's mind. However, the fact that this portion of the database is backed up less frequently, does not mean it is less important; it is still an integral piece of the recovery strategy, so manage each file in your backup strategy carefully and make sure the full database backup doesn't get lost in the archival jungle. When performing one of the full database backups on a database where the backup strategy includes subsequent partial backups, it's a good idea to perform a checksum on that full database backup (see Chapter 2 for details). This can, and most likely will, slow down the backup speed but if you can take the speed hit, it's nice to have the reassurance that this full backup is valid, since they will be captured infrequently. Of course, performing a test restore with the newly created full backup file is even better! If you do find that your full database backup has not been stored properly or that the file is somehow corrupted, what can you do? Well, not a whole lot. Hopefully you have multiple copies stored in different locations for just this type of situation. Keeping a copy on long-term storage, as well as locally on a robust disk, are other good ideas when dealing with data that is not backed up on a regular basis. Be diligent when you are managing your backup files. Remember that your data is your job! 364 365 Chapter 10: Partial Backup and Restore Partial Backups and Restores in the SLA Like file backup and restore, partial backup and restore will most likely form a very small part of your overall recoverability strategy; partial backups are a very useful and time saving tool in certain instances, but they will not likely be the "go-to" backup type in most situations, largely because most databases simply don't contain a large proportion of read-only data. However, if a strategy involving partial backups seems a good fit for a database then, in the SLA for that database, you'll need to consider such issues as: frequency of refreshing the full database backup frequency of full partial and differential partial backups are transaction log backups still required? As noted earlier, partial backups are designed with SIMPLE recovery model databases in mind. When applied in this way, it means that we can only restore to the last backup that was taken, most likely a full partial or a differential partial backup, and so we do stand to lose some data modifications in the case of, say, a midday failure. That is something you have to weigh against your database restore needs to decide if this type of backup will work for you. Like every other type of backup, weigh up the pros and cons to see which type, or combination of types, is right for your database. Just because this type of backup was designed mainly for SIMPLE recovery model databases, it doesn't mean that we can only use it in such cases. For a FULL recovery model database that has a much smaller window of data loss acceptability, such as an hour, but does contain a large section of read-only data, this backup type can still work to our advantage. We would, however, need to take transaction log backups in addition to the partial full and partial differentials. This will add a little more complexity to the backup and restore processes in the event of emergency, but will enable point-intime restore. 365 366 Chapter 10: Partial Backup and Restore Forcing Failures for Fun Much as I hate to end the chapter, and the book, on a note of failure, that is sometimes the lot of the DBA; deal with failures as and when they occur, learn what to look out for, and enforce measures to ensure the problem does not happen again. With that in mind, let's walk through a doomed partial backup scheme. Take a look at the script in Listing and, one last time, try to work out what the problem is before running it._Diff.bak' WITH RECOVERY Listing 10-15: Forcing a restore error with partial database backup restore. Do you spot the mistake? Figure 10-6 shows the resulting SQL Server error messages. Figure 10-6: Forced failure query results 366 367 Chapter 10: Partial Backup and Restore The first error we get in the execution of our script is the error "File DatabaseForPartial- Backups is not in the correct state to have this differential backup applied to it." This is telling us that the database is not prepared to process our second restore command, using the differential partial backup file. The reason is that we have forgotten to process our partial full backup file. Since the partial full file, not the full database file, acts as the base for the partial differential, we can't process the partial differential without it. This is why our database is not in the correct state to process that differential backup file. Summary You should now be familiar with how to perform both partial full and partial differential backups and be comfortable restoring this type of backup file. With this, I invite you to sit back for a moment and reflect on the fact that you now know how to perform all of the major, necessary types of SQL Server backup and restore operation. Congratulations! The book is over, but the journey is not complete. Backing up databases and performing restores should be something all DBAs do on a very regular basis. This skill is paramount to the DBA and we should keep working on it until the subject matter becomes second nature. Nevertheless, when and if disaster strikes a database, in whatever form, I hope this book, and the carefully documented and tested restore strategy that it helps you generate, will allow you to get that database back online with an acceptable level of data loss, and minimal down-time. Good luck! Shawn McGehee 367 368 Appendix A: SQL Backup Pro Installation and Configuration This appendix serves as a Quick Reference on how to download, install and configure the SQL Backup tool from Red Gate Software, so that you can work through any examples in the book that use this tool. You can download a fully-functional trial version of the software from Red Gate's website, at By navigating to the Support pages for this tool, you can find further and fuller information on how to install, configure and use this tool, including step-by-step tutorials and troubleshooting guides. See, for example, Content/SQL_Backup/help/6.5/SBU_Gettingstarted, and the links therein. With the software package downloaded, there are basically two steps to the installation process, first installing the SQL Backup GUI, and then the SQL Backup services. SQL Backup Pro GUI Installation This section will demonstrate how to install the SQL backup GUI tool, which will enable you to register servers for use, and then execute and manage backups across your servers (having first installed the SQL Backup services on each of these SQL Server machines). The GUI tool will normally be installed on the client workstation, but can also be installed on a SQL Server. Open the SQL Backup zip file, downloaded from the Red Gate website, and navigate to the folder called SQLDBABundle. Several files are available for execution, as shown in Figure A-1. The one you want here, in order to install the SQL Backup GUI is 368 369 Appendix A: SQL Backup Pro Installation and Configuration SQLDBABundle.exe, so double-click the file, or copy the file to an uncompressed folder, in order to begin the installation. Figure A-1: Finding the installation file. There is the option to install several DBA tools from Red Gate, but here you just want to install SQL Backup (current version at time of writing was SQL Backup 6.5), so select that tool, as shown in Figure A-2, and click the Next button to continue. Figure A-2: Installing SQL Backup 370 Appendix A: SQL Backup Pro Installation and Configuration On the next page, you must accept the license agreement. This is a standard EULA and you can proceed with the normal routine of just selecting the "I accept " check box and clicking Next to continue. If you wish to read through the legalese more thoroughly, print a copy for some bedtime reading. On the next screen, select the folder where the SQL Backup GUI and service installers will be stored. Accept the default, or configure a different location, if required, and then click Install. The installation process should only take a few seconds and, once completed, you should see a success message and you can close the installer program. That's all there is to it, and you are now ready to install the SQL Backup services on your SQL Server machine. SQL Backup Pro Services Installation You can now use the SQL Backup GUI tool to connect to a SQL Server and install the SQL Backup services on that machine. Open the SQL Backup GUI and, once past the flash page offering some import/registration tools and demo screens (feel free to peruse these at a later time), you'll arrive at the main SQL Backup screen. Underneath the top menu items (currently grayed out), is a tab that contains a server group named after the current time zone on the machine on which the GUI installed. In the bottom third of the screen are several tables where, ultimately, will be displayed the backup history for a selected server, current processes, log copy queues and job information. Right-click on the server group and select Add SQL Server, as shown in Figure A 371 Appendix A: SQL Backup Pro Installation and Configuration Figure A-3: Starting the server registration process. Once the Add SQL Server screen opens, you can register a new server. Several pieces of information will be required, as shown below. SQL Server: This is the name of the SQL Server to register. Authentication type: Choose to use your Windows account to register or a SQL Server login. User name: SQL Login user name, if using SQL Server authentication. Password: SQL Login password, if using SQL Server authentication. Remember password: Check this option to save the password for a SQL Server login. Native Backup and restore history: Time period over which to import details of native SQL Server backup and restore activity into your local cache. 371 372 Appendix A: SQL Backup Pro Installation and Configuration Install or upgrade server components: Leaving this option checked will automatically start the installation or upgrade of a server's SQL Backup services. Having have filled in the General tab information, you should see a window similar to that shown in Figure A-4. Figure A-4: SQL Server registration information. There is a second tab, Options, which allows you to modify the defaults for the options below. Location: You can choose a different location so the server will be placed in the tab for that location. Group: Select the server group in which to place the server being registered. Alias: An alias to use for display purposes, instead of the server name. 372 373 Appendix A: SQL Backup Pro Installation and Configuration Network protocol: Select the SQL Server communication protocol to use with this server. Network packet size: Change the default packet size for SQL Server communications in SQL Backup GUI. Connection time-out: Change the length of time that the GUI will attempt communication with the server before failing. Execution time-out: Change the length of time that SQL Backup will wait for a command to start before stopping it. Accept all the defaults for the Options page, and go ahead and click Connect. Once the GUI connects to the server, you will need to fill out two more pages of information about the service that is being installed. On the first page, select the account under which the SQL Backup service will run; it will need to be one that has the proper permissions to perform SQL Server backups, and execute the extended stored procedures that it will install in the master database, and it will need to have the sysadmin server role, for the GUI interface use. You can use a built-in system account or a service account from an Active Directory pool of users (recommended, for management across a domain). For this example, use the Local System account. Remember, too, that in SQL Server 2008 and later, the BUILTIN\Administrators group is no longer, by default, a sysadmin on the server, so you will need to add whichever account you are using to the SQL Server to make sure you have the correct permissions set up. Figure A-5 shows an example of the security setup for a typical domain account for the SQL Backup service on a SQL Server. 373 374 Appendix A: SQL Backup Pro Installation and Configuration Figure A-5: Security setup for new server installation. 374 375 Appendix A: SQL Backup Pro Installation and Configuration On the next page, you will see the SQL Server authentication credentials. You can use a different account from the service account, but it is not necessary if the permissions are set up correctly. Stick with the default user it has selected and click Finish to start the installation. Once that begins, the installation files will be copied over and you should, hopefully, see a series of successful installation messages, as shown in Figure A-6. Figure A-6: Successful SQL backup service installation. You have now successfully installed the SQL Backup service on a SQL Server. In the next section, you'll need to configure that service. 375 376 Appendix A: SQL Backup Pro Installation and Configuration SQL Backup Pro Configuration Your newly-registered server should be visible in the default group (the icon to the right of the server name indicates that this is a trial version of the software). Right-click the server name to bring up a list of options, as shown in Figure A-7, some of which we will be using throughout this book. Figure A-7: Getting to the SQL Backup options configuration. Go ahead and select Options to bring up the Server Options window. File management The File Management tab allows you to configure backup file names automatically, preserve historical data and clean up MSDB history on a schedule. 376 377 Appendix A: SQL Backup Pro Installation and Configuration The first option, Backup folder, sets the default location for all backup files generated through the GUI. This should point to the normal backup repository for this server so that any backups that are taken, especially if one is taken off the regular backup schedule, will end up in the right location. The next option, File name format, sets the format of the auto-generated name option that is available with SQL Backup, using either the GUI or T-SQL code. Of course, it is up to you how to configure this setting, but my recommendation is this: <DATABASE>_<DATETIME yyyymmdd_hhnnss>_<type> Using this format, any DBA can very quickly identify the database, what time the backup was taken, and what type of backup it was, and it allows the files to be sorted in alphabetical and chronological order. If you store in a shared folder multiple backups from multiple instances that exist on the same server, which I would not recommend, you can also add the <INSTANCE> tag for differentiation. The next option, Log file folder, tells SQL Backup where on the server to store the log files from SQL Backup operations. Also in this section of the screen, you can configure log backup management options, specifying for how long backups should be retained in the local folder. I would recommend keeping at least 90 days' worth of files, but you could keep them indefinitely if you require a much longer historical view. The final section of the screen is Server backup history, which will clean up the backup history stored in the msdb database; you may be surprised by how much historical data will accumulate over time. The default is to remove this history every 90 days, which I think is a little low. I would keep at least 180 days of history, but that is a choice you will have to make based on your needs and regulations. If the SQL Backup server components are installed on an older machine which has never had its msdb database cleaned out, then the first time SQL Backup runs the msdb "spring clean," it can take quite a long time and cause some blocking on the server. There are no indexes on the backup and restore history tables in msdb, so it's a good idea to add some. 377 378 Appendix A: SQL Backup Pro Installation and Configuration Once you are done configuring these options, the window should look similar, but probably slightly different, to that shown in Figure A-8. Figure A-8: File management configuration. settings The Settings tab will allow you to configure the settings for the s that SQL Backup can send to you, or to a team of people, when a failure occurs in, or warning is raised for, one of your backup or restore operations. It's a very important setting to get configured correctly, and tested. There are just five fields that you need to fill out. SMTP Host: This is the SMTP server that will receive any mail SQL Backup needs to send out. 378 SQL Server Backup and Restore The Red Gate Guide SQL Server Backup and Restore Shawn McGehee ISBN: 978-1-906434-74-8 SQL Server Backup and Restore By Shawn McGehee First published by Simple Talk Publishing April 2012 Copyright AprilMore information Database Maintenance Essentials Database Maintenance Essentials Brad M McGehee Director of DBA Education Red Gate Software What We Are Going to Learn Today 1. Managing MDF Files 2. Managing LDF Files 3. Managing Indexes 4. MaintainingMore information theMore information Nimble Storage Best Practices for Microsoft SQL Server BEST PRACTICES GUIDE: Nimble Storage Best Practices for Microsoft SQL Server Summary Microsoft SQL Server databases provide the data storage back end for mission-critical applications. Therefore, it sMore information databaseMore information Backups and Maintenance Backups and Maintenance Backups and Maintenance Objectives Learn how to create a backup strategy to suit your needs. Learn how to back up a database. Learn how to restore from a backup. Use the DatabaseMore informationMore information:More information Backup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases Backup and Recovery What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases CONTENTS Introduction 3 Terminology and concepts 3 Database files that make up a database 3 Client-sideMore information Backup and Restore Strategies for SQL Server Database. Nexio Motion. 10-February-2015. Revsion: DRAFT Backup and Restore Strategies for SQL Server Database Nexio Motion v4 10-February-2015 Revsion: DRAFT v4 Publication Information 2015 Imagine Communications Corp. Proprietary and Confidential. ImagineMore information Nexio Motion 4 Database Backup and Restore Nexio Motion 4 Database Backup and Restore 26-March-2014 Revision: A Publication Information 2014 Imagine Communications Corp. Proprietary and Confidential. Imagine Communications considers this documentMore information DBA 101: Best Practices All DBAs Should Follow The World s Largest Community of SQL Server Professionals DBA 101: Best Practices All DBAs Should Follow Brad M. McGehee Microsoft SQL Server MVP Director of DBA Education Red Gate Software information Bulletproof your Database Backup and Recovery Strategy Whitepaper Bulletproof your Database Backup and Recovery Strategy By Shawn McGehee and Tony Davis The most critical task for all DBAs is to have a Backup and Recovery strategy that ensures, every day,More information MOC 20462C: Administering Microsoft SQL Server Databases MOC 20462C: Administering Microsoft SQL Server Databases Course Overview This course provides students with the knowledge and skills to administer Microsoft SQL Server databases. Course Introduction CourseMore information SQL Server Database Administrator s Guide SQL Server Database Administrator s Guide Copyright 2011 Sophos Limited. All rights reserved. No part of this publication may be reproduced, stored in retrieval system, or transmitted, in any form or byMore information:More informationMore information Getting to Know the SQL Server Management Studio HOUR 3 Getting to Know the SQL Server Management Studio The Microsoft SQL Server Management Studio Express is the new interface that Microsoft has provided for management of your SQL Server database. ItMore information Module 07. Log Shipping Module 07 Log Shipping Agenda Log Shipping Overview SQL Server Log Shipping Log Shipping Failover 2 Agenda Log Shipping Overview SQL Server Log Shipping Log Shipping Failover 3 Log Shipping Overview DefinitionMore information aMore information Best Practices Every SQL Server DBA Must Know The World s Largest Community of SQL Server Professionals Best Practices Every SQL Server DBA Must Know Brad M McGehee SQL Server MVP Director of DBA Education Red Gate Software MyMore informationMore information tairways handbook SQL Server Transaction Log Management tairways handbook SQL Server Transaction Log Management By Tony Davis and Gail Shaw SQL Server Transaction Log Management By Tony Davis and Gail Shaw First published by Simple Talk Publishing October 2012More information.More information Database Administrator Certificate Capstone Project Evaluation Checklist Database Administrator Certificate Capstone Project Evaluation Checklist The following checklist will be used by the Capstone Project instructor to evaluate your project. Satisfactory completion of theMore information EMC APPSYNC AND MICROSOFT SQL SERVER A DETAILED REVIEW EMC APPSYNC AND MICROSOFT SQL SERVER A DETAILED REVIEW ABSTRACT This white paper discusses how EMC AppSync integrates with Microsoft SQL Server to provide a solution for continuous availability of criticalMore information This article Includes: Log shipping has been a mechanism for maintaining a warm standby server for years. Though SQL Server supported log shipping with SQL Server 2000 as a part of DB Maintenance Plan, it has become a built-inMore information Administering Microsoft SQL Server Databases Course 20462C: Administering Microsoft SQL Server Databases Page 1 of 7 Administering Microsoft SQL Server Databases Course 20462C: 4 days; Instructor-Led Introduction This four-day instructor-led courseMore information Course Outline: This five-day instructor-led course provides students with the knowledge and skills to maintain a Microsoft SQL Server 2014 database. The course focuses on teaching individuals how to use SQL Server 2014More information Backup and Recovery in MS SQL Server. Andrea Imrichová Backup and Recovery in MS SQL Server Andrea Imrichová Types of Backups copy-only backup database backup differential backup full backup log backup file backup partial backup Copy-Only Backups without affectingMore information SQL Server 2008 Designing, Optimizing, and Maintaining a Database Session 1 SQL Server 2008 Designing, Optimizing, and Maintaining a Database Course The SQL Server 2008 Designing, Optimizing, and Maintaining a Database course will help you prepare for 70-450 exam from Microsoft.More information 20462- Administering Microsoft SQL Server Databases Course Outline 20462- Administering Microsoft SQL Server Databases Duration: 5 days (30 hours) Target Audience: The primary audience for this course is individuals who administer and maintain SQL ServerMore information Chapter 14: Recovery System Chapter 14: Recovery System Chapter 14: Recovery System Failure Classification Storage Structure Recovery and Atomicity Log-Based Recovery Remote Backup Systems Failure Classification Transaction failureMore information Outline. Failure Types Outline Database Management and Tuning Johann Gamper Free University of Bozen-Bolzano Faculty of Computer Science IDSE Unit 11 1 2 Conclusion Acknowledgements: The slides are provided by Nikolaus AugstenMore information Handling a Full SQL Server Transaction Log Handling a Full SQL Server Transaction Log T he transaction log for a SQL Server database is critical to the operation of the database and the ability to minimize data loss in the event of a disaster.More information ADMINISTERING MICROSOFT SQL SERVER DATABASES Education and Support for SharePoint, Office 365 and Azure COURSE OUTLINE ADMINISTERING MICROSOFT SQL SERVER DATABASES Microsoft Course Code 20462 About this course This five-dayMore information Destiny system backups white paper Destiny system backups white paper Establishing a backup and restore plan for Destiny Overview It is important to establish a backup and restore plan for your Destiny installation. The plan must be validatedMore information DBAMore information Database Maintenance Guide Database Maintenance Guide Medtech Evolution - Document Version 5 Last Modified on: February 26th 2015 (February 2015) This documentation contains important information for all Medtech Evolution usersMore information Administering Microsoft SQL Server Databases Course 20462C: Administering Microsoft SQL Server Databases Module 1: Introduction to SQL Server 2014 Database Administration This module introduces the Microsoft SQL Server 2014 platform. It describesMore information Administering Microsoft SQL Server Databases Course 20462C: Administering Microsoft SQL Server Databases Module 1: Introduction to SQL Server 2014 Database Administration This module introduces the Microsoft SQL Server 2014 platform. It describesMore information Administering Microsoft SQL Server Databases 20462C; 5 days Lincoln Land Community College Capital City Training Center 130 West Mason Springfield, IL 62702 217-782-7436 Administering Microsoft SQL Server Databases 20462C; 5 days Course DescriptionMore information BrightStor ARCserve Backup for Windows BrightStor ARCserve Backup for Windows Agent for Microsoft SQL Server r11.5 D01173-2E This documentation and related computer software program (hereinafter referred to as the "Documentation") is for theMore information Securing Your Microsoft SQL Server Databases in an Enterprise Environment Securing Your Microsoft SQL Server Databases in an Enterprise Environment Contents Introduction...1 Taking Steps Now to Secure Your Data...2 Step 1: Back Up Everything...2 Step 2: Simplify as Much asMore information SQL Backup and Restore using CDP CDP SQL Backup and Restore using CDP Table of Contents Table of Contents... 1 Introduction... 2 Supported Platforms... 2 SQL Server Connection... 2 Figure 1: CDP Interface with the SQL Server... 3 SQLMore information Chapter 15: Recovery System Chapter 15: Recovery System Failure Classification Storage Structure Recovery and Atomicity Log-Based Recovery Shadow Paging Recovery With Concurrent Transactions Buffer Management Failure with Loss ofMore information Course: 20462 Administering Microsoft SQL Server Databases Overview Course length: 5 Days Microsoft SATV Eligible Course: 20462 Administering Microsoft SQL Server Databases Overview About this Course This five-day instructor-led course provides students with the knowledgeMore information Cyber Security: Guidelines for Backing Up Information. A Non-Technical Guide Cyber Security: Guidelines for Backing Up Information A Non-Technical Guide Essential for Executives, Business Managers Administrative & Operations Managers This appendix is a supplement to the Cyber Security:More information SQL Server 2012 Database Administration With AlwaysOn & Clustering Techniques SQL Server 2012 Database Administration With AlwaysOn & Clustering Techniques Module: 1 Module: 2 Module: 3 Module: 4 Module: 5 Module: 6 Module: 7 Architecture &Internals of SQL Server Engine Installing,More information Perforce Backup Strategy & Disaster Recovery at National Instruments Perforce Backup Strategy & Disaster Recovery at National Instruments Steven Lysohir National Instruments Perforce User Conference April 2005-1 - Contents 1. Introduction 2. Development Environment 3. ArchitectureMore informationMore information SQL Server Maintenance Plans SQL Server Maintenance Plans BID2WIN Software, Inc. September 2010 Abstract This document contains information related to SQL Server 2005 and SQL Server 2008 and is a compilation of research from variousMore information 20462 Administering Microsoft SQL Server Databases 20462 Administering Microsoft SQL Server Databases Audience Profile The primary audience for this course is individuals who administer and maintain SQL Server databases. These individuals perform databaseMore information SQL Server 2005 Backing Up & Restoring Databases Institute of Informatics, Silesian University of Technology, Gliwice, Poland SQL Server 2005 Backing Up & Restoring Databases Dariusz Mrozek, PhD Course name: SQL Server DBMS Part 1: Backing Up OverviewMore information Enterprise PDM - Backup and Restore DS SOLIDWORKS CORPORATION Enterprise PDM - Backup and Restore Field Services - Best Practices [Enterprise PDM 2010] [September 2010] [Revision 2] September 2010 Page 1 Contents Brief Overview:... 4 NotesMore information 20462C: Administering Microsoft SQL Server Databases 20462C: Administering Microsoft SQL Server Databases Course Details Course Code: Duration: Notes: 20462C 5 days This course syllabus should be used to determine whether the course is appropriate for theMore information Oracle Database 10g: Backup and Recovery 1-2 Oracle Database 10g: Backup and Recovery 1-2 Oracle Database 10g: Backup and Recovery 1-3 What Is Backup and Recovery? The phrase backup and recovery refers to the strategies and techniques that are employedMore information Making Database Backups in Microsoft Business Solutions Navision Making Database Backups in Microsoft Business Solutions Navision MAKING DATABASE BACKUPS IN MICROSOFT BUSINESS SOLUTIONS NAVISION DISCLAIMER This material is for informational purposes only. MicrosoftMore information Working with SQL Server Agent Jobs Chapter 14 Working with SQL Server Agent Jobs Microsoft SQL Server features a powerful and flexible job-scheduling engine called SQL Server Agent. This chapter explains how you can use SQL Server AgentMore information Mind Q Systems Private Limited MS SQL Server 2012 Database Administration With AlwaysOn & Clustering Techniques Module 1: SQL Server Architecture Introduction to SQL Server 2012 Overview on RDBMS and Beyond Relational Big picture ofMore information Chapter 13 File and Database Systems Chapter 13 File and Database Systems Outline 13.1 Introduction 13.2 Data Hierarchy 13.3 Files 13.4 File Systems 13.4.1 Directories 13.4. Metadata 13.4. Mounting 13.5 File Organization 13.6 File AllocationMore information Chapter 13 File and Database Systems Chapter 13 File and Database Systems Outline 13.1 Introduction 13.2 Data Hierarchy 13.3 Files 13.4 File Systems 13.4.1 Directories 13.4. Metadata 13.4. Mounting 13.5 File Organization 13.6 File AllocationMore information Using Continuous Operations Mode for Proper Backups Using Continuous Operations Mode for Proper Backups A White Paper From Goldstar Software Inc. For more information, see our web site at Using Continuous Operations Mode for Proper Backups Last Updated:More information 20462 - Administering Microsoft SQL Server Databases 20462 - Administering Microsoft SQL Server Databases Duration: 5 Days Course Price: $2,975 Software Assurance Eligible Course Description Note: This course is designed for customers who are interestedMore information Maximum Availability Architecture. Oracle Best Practices For High Availability Preventing, Detecting, and Repairing Block Corruption: Oracle Database 11g Oracle Maximum Availability Architecture White Paper May 2012 Maximum Availability Architecture Oracle Best Practices For HighMore information Redundancy Options. Presented By: Chris Williams Redundancy Options Presented By: Chris Williams Table of Contents Redundancy Overview... 3 Redundancy Benefits... 3 Introduction to Backup and Restore Strategies... 3 Recovery Models... 4 Cold Backup...More information Restore and Recovery Tasks Objectives After completing this lesson, you should be able to: Describe the causes of file loss and determine the appropriate action Describe major recovery operations Back Administering and Managing Log Shipping 26_0672329565_ch20.qxd 9/7/07 8:37 AM Page 721 CHAPTER 20 Administering and Managing Log Shipping Log shipping is one of four SQL Server 2005 high-availability alternatives. Other SQL Server 2005 high-availabilityMore information Support Document: Microsoft SQL Server - LiveVault 7.6X Contents Preparing to create a Microsoft SQL backup policy... 2 Adjusting the SQL max worker threads option... 2 Preparing for Log truncation... 3 Best Practices... 3 Microsoft SQL Server 2005, 2008, orMore information 50238: Introduction to SQL Server 2008 Administration 50238: Introduction to SQL Server 2008 Administration 5 days Course Description This five-day instructor-led course provides students with the knowledge and skills to administer SQL Server 2008. The course VirtualCenter Database Maintenance VirtualCenter 2.0.x and Microsoft SQL Server Technical Note VirtualCenter Database Maintenance VirtualCenter 2.0.x and Microsoft SQL Server This document discusses ways to maintain the VirtualCenter database for increased performance and manageability.More information SQL LiteSpeed 3.0 Best Practices SQL LiteSpeed 3.0 Best Practices Optimize SQL LiteSpeed to gain value time and resources for critical SQL Server Backups September 9, 2003 Written by: Jeremy Kadlec Edgewood Solutions information Chapter 16: Recovery System Chapter 16: Recovery System Failure Classification Failure Classification Transaction failure : Logical errors: transaction cannot complete due to some internal error condition System errors: the databaseMore information Microsoft SQL Server Guide. Best Practices and Backup Procedures Microsoft SQL Server Guide Best Practices and Backup Procedures Constellation HomeBuilder Systems Inc. This document is copyrighted and all rights are reserved. This document may not, in whole or in part,More information Contingency Planning and Disaster Recovery Contingency Planning and Disaster Recovery Best Practices Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge Date: October 2014 2014 Perceptive Software. All rights reserved PerceptiveMore information Local Government Cyber Security: Local Government Cyber Security: Guidelines for Backing Up Information A Non-Technical Guide Essential for Elected Officials Administrative Officials Business Managers Multi-State Information Sharing andMore information theMore information Course 20462C: Administering Microsoft SQL Server Databases Course 20462C: Administering Microsoft SQL Server Databases Duration: 35 hours About this Course The course focuses on teaching individuals how to use SQL Server 2014 product features and tools relatedMore information 10775A Administering Microsoft SQL Server 2012 Databases 10775A Administering Microsoft SQL Server 2012 Databases Five days, instructor-led About this Course This five-day instructor-led course provides students with the knowledge and skills to maintain a MicrosoftMore information Chapter 8 Service Management Microsoft SQL Server 2000 Chapter 8 Service Management SQL Server 2000 Operations Guide Abstract This chapter briefly presents the issues facing the database administrator (DBA) in creating a service levelMore information Backup and Recovery by using SANWatch - Snapshot Backup and Recovery by using SANWatch - Snapshot 2007 Infortrend Technology, Inc. All rights Reserved. Table of Contents Introduction...3 Snapshot Functionality...3 Eliminates the backup window...3 RetrievesMore information SQL Server Transaction Log from A to Z Media Partners SQL Server Transaction Log from A to Z Paweł Potasiński Product Manager Data Insights [email protected] Why About Transaction Log (Again)? informationMore information Integrating SQL LiteSpeed in Your Existing Backup Infrastructure Integrating SQL LiteSpeed in Your Existing Backup Infrastructure March 11, 2003 Written by: Jeremy Kadlec Edgewood Solutions 888.788.2444 2 Introduction Needless to say, backupsMore information [email protected] Administering Microsoft SQL Server Databases Administering Microsoft SQL Server Databases This five-day instructor-led course provides students with the knowledge and skills to maintain a Microsoft SQL Server 2014 database. The course focuses onMore information siteMore information SQL SERVER Anti-Forensics. Cesar Cerrudo SQL SERVER Anti-Forensics Cesar Cerrudo Introduction Sophisticated attacks requires leaving as few evidence as possible Anti-Forensics techniques help to make forensics investigations difficult Anti-ForensicsMore information SQL Server Storage: The Terabyte Level. Brent Ozar, Microsoft Certified Master, MVP Consultant & Trainer, SQLskills.com SQL Server Storage: The Terabyte Level Brent Ozar, Microsoft Certified Master, MVP Consultant & Trainer, SQLskills.com BrentOzar.com/go/san Race Facts 333 miles 375 boats invited 33 DNFs Typical TerabyteMore information More information Administering Microsoft SQL Server Databases CÔNG TY CỔ PHẦN TRƯỜNG CNTT TÂN ĐỨC TAN DUC INFORMATION TECHNOLOGY SCHOOL JSC LEARN MORE WITH LESS! Course 20462 Administering Microsoft SQL Server Databases Length: 5 Days Audience: IT Professionals Level:More information sql server best practice sql server best practice 1 MB file growth SQL Server comes with a standard configuration which autogrows data files in databases in 1 MB increments. By incrementing in such small chunks, you risk endingMore informationMore informationMore information theMore information ! Volatile storage: ! Nonvolatile storage: Chapter 17: Recovery System Failure Classification! Failure Classification! Storage Structure! Recovery and Atomicity! Log-Based Recovery! Shadow Paging! Recovery With Concurrent Transactions! Buffer Management!More information 10775 Administering Microsoft SQL Server Databases 10775 Administering Microsoft SQL Server Databases Course Number: 10775 Category: Microsoft SQL Server 2012 Duration: 5 days Certification: Exam 70-462 Administering Microsoft SQL Server 2012 DatabasesMore information
http://docplayer.net/1206021-Sql-server-backup-and-restore.html
CC-MAIN-2017-30
en
refinedweb
std::variant The class template std::variant represents a type-safe union. An instance of std::variant at any given time either holds a value of one of its alternative types, or it holds.] Example #include <variant> #include <string> #include <cassert> using namespace std::literals;&) {} std::variant<std::string> x("abc"); // converting constructors work when unambiguous x = "def"; // converting assignment also works when unambiguous std::variant<std::string, bool> y("abc"); // holds *bool*, not std::string assert(std::holds_alternative<bool>(y)); // succeeds y = "xyz"s; assert(std::holds_alternative<std::string>(y)); //succeeds }
http://en.cppreference.com/w/cpp/utility/variant
CC-MAIN-2017-30
en
refinedweb
Jan Harkes <[email protected]> writes:> On Sun, Jan 18, 2009 at 08:34:53AM +0100, Oleg Nesterov wrote:>> Needs an ack from maintaner, I do not know where coda_in_hdr->pgid is used.>> It is used to uniquely identify a process and any of it children during> conflict resolution.>> When a conflict is detected, all accesses to the inconsistent object are> blocked. A special resolver process is forked off by the cache manager> and this is run in a new process group and only accesses from processes> in this group are allowed. The resolver process (or any of it's children)> compare the conflicting replicas, and ideally resolve the inconsistency> after which normal accesses are unblocked.>> So yes this should not a per namespace thing, but also not a process> specific pid, the resolver forks off different helper processes> depending on the type of files that are involved in the conflict, i.e.> mbox files require different merge strategy compared to opendocument> files.>> I'm not sure what you are trying to do.We currently have two pid data types in the kernel.pid_t and struct pid *.pid_t's are the tokens we pass to user space to talk about a process, a process group or a session.struct pid pointers are used internally to the kernel, arereference counted, are not susceptible to pid wrap around,and are generally faster to use for sending signals or othertasks that require looking up a process.With the introduction of the pid namespaces the difference betweenpid_t's and struct pid has become even more important. Becausebased on the pid namespace you are in a given struct pid will havea different pid_t value. So internally we are moving as muchas possible to using struct pid pointers.Oleg is in the process of cleaning up some of the transition codeand we just need to convert the last couple of pieces of codeso we can do that.In the case of coda I'm assuming it is the user space daemon thatdecides if the access is from the resolver process group or not?That the user space filesystem code does the blocking based on whichprocess group you are in.In that case it looks like what needs to happen is that alloc_upcallneeds to know which pid namespace your user space daemon is in.Probably grab the pid namespace at either mount or connect time (isthere a difference).Then since I believe the values in the upcall go straight to the userspace daemon we should do roughly:inp->in.pid = task_pid_nr_ns(&fs_daemon_pidns, current);inp->in.pgid = task_pgrp_nr_ns(&fs_daemon_pidns, current);Does that make sense?Eric
https://lkml.org/lkml/2009/1/21/20
CC-MAIN-2017-30
en
refinedweb
Indexing and Selecting Data ¶ As explained in the Building composite objects and Dimensioned Containers guides, HoloViews allows you to build up hierarchical containers that express the natural relationships between your data items, in whatever multidimensional space best characterizes your tutorial, we show how to specify such selections, using five different (but related) operations that can act on an element e : These operations are all concerned with selecting some subset of your data values, without combining across data values (e.g. averaging) or otherwise transforming your actual data. In the Tabular guide if you have not done so already. import numpy as np import holoviews as hv hv.extension('bokeh', 'matplotlib')
http://holoviews.org/user_guide/Indexing_and_Selecting_Data.html
CC-MAIN-2017-30
en
refinedweb
GETCPU(2) Linux Programmer's Manual GETCPU(2) getcpu - determine CPU and NUMA node on which the calling thread is running #include <linux/getcpu.h> int getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache); Note: There is no glibc wrapper for this system call; see NOTES. The getcpu() system call identifies the processor and node on which the calling thread or process is currently running and writes them into the integers pointed to by the cpu and node arguments. The processor is a unique small integer identifying a CPU. The node is a unique small identifier identifying a NUMA node. When either cpu or node is NULL nothing is written to the respective pointer. The third argument to this system call is nowadays unused, and should be specified as NULL unless portability to Linux 2.6.23 or earlier is required (see NOTES). The information placed in cpu is guaranteed to be current only at the time of the call: unless the CPU affinity has been fixed using sched_setaffinity(2), the kernel might change the CPU at any time. (Normally this does not happen because the scheduler tries to minimize movements between CPUs to keep caches hot, but it is possible.) The caller must allow for the possibility that the information returned in cpu and node is no longer current by the time the call returns. On success, 0 is returned. On error, -1 is returned, and errno is set appropriately. EFAULT Arguments point outside the calling process's address space. getcpu() was added in kernel 2.6.19 for x86_64 and i386. getcpu() is Linux-specific. Linux makes a best effort to make this call as fast as possible. The intention of getcpu() is to allow programs to make optimizations with per-CPU data or for NUMA optimization. Glibc does not provide a wrapper for this system call; call it using syscall(2); or use sched_getcpu(3) instead. The tcache argument is unused since Linux 2.6.24. In earlier kernels, if this argument was non-NULL, then it specified a pointer to a caller-allocated buffer in thread-local storage that was used to provide a caching mechanism for getcpu(). Use of the cache could speed getcpu() calls, at the cost that there was a very small chance that the returned information would be out of date. The caching mechanism was considered to cause problems when migrating threads between CPUs, and so the argument is now ignored. mbind(2), sched_setaffinity(2), set_mempolicy(2), sched_getcpu(3), cpuset(7), vdso(7) This page is part of release 4.12 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. Linux 2015-12-28 GETCPU(2) Pages that refer to this page: get_mempolicy(2), mbind(2), sched_setaffinity(2), set_mempolicy(2), syscalls(2), sched_getcpu(3), cpuset(7)
http://man7.org/linux/man-pages/man2/getcpu.2.html
CC-MAIN-2017-30
en
refinedweb
Leave Comments, Critiques, and Suggestions Here Logging is an essential function of most software applications. Historically, logging APIs have been designed in-house by development teams, usually providing nothing more than basic output to a file. Some go a step further and provide simple compile-time configuration options. Such simple implementations do not allow the flexibility that modern software demands. Compile-time configuration of a logging system restricts a particular build to the same output. The most common configurations options are for release mode output and debug mode output. The release build is shipped to the customer and typically only logs enough information to indicate that the program is functioning properly, or when errors occur. Debug mode output is typically more verbose and is used to diagnose and troubleshoot problems. When a customer has a problem in the field, requiring them to download and run a different build of the software is inconvenient. Furthermore, there is no way to control how much debug information is output, or which sections of the program should be ignored. More compile-time configuration options could be provided to do so, but each configuration requires yet another build of the software. Tango's logging API is based in large part on the popular Java logging API, Log4J. The API gives developers a great amount of flexibility by providing the means to configure the logging system at run-time. They can configure how much information is logged, which sections of the application are logged, where the log output is sent, and even the format of the output. The API is designed from the ground up with performance in mind so that even when logging extensively the impact on memory and performance is as small as possible. This chapter of the manual examines Tango's logging API in detail. There are five major components of the API: Levels, Loggers, Appenders, The Hierarchy, and Layouts. We will look at each component in turn. Everything in Tango's logging API revolves around Loggers. All log output goes through Loggers and all configuration is done through Loggers. But in order to fully understand how to utilize Loggers effectively, it is important to understand the other features of the API. As such, our treatment of Loggers in this section will be brief. Here, we will only examine how to create a Logger instance. In each successive section, we will discuss the Logger interface as it relates to each API feature. Loggers are not instantiated directly. In order to get a reference to a Logger object, the user must go through another class: tango.util.log.Log. The Log class is not instantiable. It provides a collection of static methods with which the user interacts. For now, we will only consider the following method: static Logger lookup (char[] name); Each Logger used in an application should have a unique name. The first time a name is passed to the above method, a new Logger instance will be created and a reference to it returned. Each consecutive time the same name is passed to the method, the same reference will be returned, i.e. one name is associated with exactly one reference. Logger logOne = Log.lookup("One"); // a new Logger instance Logger logTwo = Log.lookup("Two"); // a new Logger instance Logger logThree = Log.lookup("One"); // the same instance as logOne Note that lookup is case-sensitive, so that "One", "oNe", and "ONE" all refer to unique Logger instances. Once a reference to a Logger instance is obtained, the user can write to the log or configure it. Writing to a log requires understanding Levels, while configuration requires understanding Hierarchies. To make use of the Log and Logger classes: import tango.util.log.Log; Now we will examine the Logger interface in the context of Levels, Appenders, and The Hierarchy. Even the most primitive logging systems usually provide compile time options to control the amount of information logged to disk. The reason is that disk IO can be expensive. Writing too much information to a log can adversely affect a program's performance. Things can get even worse when logging to targets other than disk, such as a network connection. Tango's logging API allows users to control, at run time, how much information is written to a Logger via the use of Levels. The Level is an enum that is accessed via the Logger class. No imports other than tango.util.log.Logger are required. A specific level is accessed via the following syntax: Logger.Level.LevelName Tango defines six log levels. Each Level is assigned a priority. Higher priority Levels take precedence over lower priority levels. When a Level is assigned to a Logger, any output designated with a lower priority Level will be ignored. Following are descriptions of the six Levels. The descriptions, as given, are only guidelines. Developers are free to assign any meaning they see fit to each Level. The Levels are, from lowest to highest priority: Trace This Level is intended to be used for debug output. Because it has the lowest priority, enabling this Level on a Logger effectively causes the Logger to log all output. This Level is intended to be used for debug output. Because it has the lowest priority, enabling this Level on a Logger effectively causes the Logger to log all output. Info This Level is intended to be used for informational purposes. For example, this level might be assigned to output which provides performance metrics or other statistical data. This Level is intended to be used for informational purposes. For example, this level might be assigned to output which provides performance metrics or other statistical data. Warn. Error Some errors are recoverable and not severe enough to cause program termination. It is for logging those such errors that this Level is intended. Some errors are recoverable and not severe enough to cause program termination. It is for logging those such errors that this Level is intended. Fatal This Level is intended for logging errors which cause program termination. This Level is intended for logging errors which cause program termination. None. A Logger's Level is configured via the following method: Logger setLevel (Logger.Level level = Logger.Level.Trace); There are two items of note about this method: it returns a reference to the Logger instance to allow for method chaining; calling this method with no arguments defaults to the Trace Level. Also, there is another version of this method which takes an additional boolean argument. We will consider it after the discussion on Hierarchies. To determine which Level is currently set, the following Logger method is used: Logger.Level getLevel(); Finally, the following convenience method can be used to determine if a specific Level is set on a Logger: bool isEnabled(Logger.Level level); This method will return false if the Logger's Level is currently set to one of higher priority than Level. It will return true if the Logger's Level is currently set to level or to one of lower priority. myLogger.setLevel(Logger.Level.Error); // only Error and Fatal output Logger.Level level = myLogger.getLevel(); // returns Level.Error bool ret = myLogger.isEnabled(Logger.Level.Trace); // returns false ret = myLogger.isEnabled(Logger.Level.Info); // returns false ret = myLogger.isEnabled(Logger.Level.Warn); // returns false ret = myLogger.isEnabled(Logger.Level.Error); // returns true ret = myLogger.isEnabled(Logger.Level.Fatal); // returns true The None level is a special case. Because it is a higher priority than any other Level, the following method call will always return true. bool ret = myLogger.isEnabled(Logger.Level.None); As such, the above call cannot be used to determine if logging has been turned off for a particular Logger instance. In order to do so, the following call should be used instead: bool ret = myLogger.isEnabled(Logger.Level.Fatal); If this call returns false, then logging is completely disabled for myLogger. In the following example, assume that myLogger is a valid Logger reference: There are two ways to assign a Level to log output. The first is to do so explicitly via the following Logger method: Logger append(Logger.Level level, char[] msg); This method returns the Logger instance for method chaining. If level is of equal or higher priority than the Logger's current Level, the message will be logged. Otherwise, it will be ignored: myLogger.setLevel(Logger.Level.Info).append(Logger.Level.Info, "hello"); // "hello" will be logged myLogger.append(Logger.Level.Trace, "trace me"); // output ignored There are also several convenience methods that assign a Level to the output automatically. Each method's name corresponds to a particular log level: Logger trace(char[] msg); Logger info(char[] msg); Logger warn(char[] msg); Logger error(char[] msg); Logger fatal(char[] msg); When software is released to the wild, it is common to send all log output to disk. In the case of on-site tech support, the technician can pull up the logs to assist in troubleshooting. In the case of remote tech support, users can send the log files to the developers as email attachments, or perhaps through a tool provided by the developers. But what if a developer wants to view the output of a log in real time? During the development process, it is inconvenient to open the log files after each run of the program. It would be much better to send log output to the system console. It's can also be useful to be able to see real time log messages remotely, perhaps via telnet or a custom client. Such techniques may even be a requirement for some server projects. The ability to configure log output targets at run-time would make it possible to switch from disk output to console or network output on demand. Tango's Appenders are designed to allow just that. Each Logger instance needs to know where to send its output. But restricting a Logger to one output target would go against the goal of flexibility. As such, Tango allows each Logger to have multiple Appenders attached. Attaching an appender to a Logger is done with the following method: Logger add(Appender appender); To remove all appenders from a Logger: Logger clearAppenders(); Both methods return a reference to the Logger for method chaining. Note that there is currently no way to remove a single Appender. When output is sent to a Logger with no Appender attached, there are no errors or thrown exceptions. The call will silently fail. For most users, it is not important to understand the Appender interface. Tango ships with several stock Appender implementations that will cover most common logging needs: AppendConsole, AppendFile, AppendFiles, AppendSocket, AppendMail, and AppendNull. It should be noted that all of the stock Appender constructors accept a Layout as an optional argument, which we will ignore in this section. See the section on Layouts for information on how to use them. This Appender, the simplest of the lot, directs log output to the operating system's stderr. It resides in the module tango.util.log.AppendConsole? and has one constructor: this (Layout layout = null); Usage: // send output of myLogger to stderr myLogger.add(new AppendConsole()); This Appender sends all output to a file. It is found in the module tango.util.log.AppendFile?. It has two constructors, one of which accepts a tango.io.FilePath? instance and another which accepts a path string: this (FilePath fp, Layout layout = null); this (char[] fp, Layout layout = null); // send output of myLogger to a log file named "mylog.log" // in the current working directory AppendFile fileAppender = new AppendFile("mylog.log") myLogger.add(fileAppender); ... // when finished with the file, it must be closed fileAppender.close(); The file is opened in AppendFile's constructor and remains open until close is called. It should be noted that if the file already exists, it will not be overwritten -- all output will be appended to the file. This Appender logs to a group of files in turn. It is found in the module tango.util.log.AppendFiles?. Rather than writing to a single file, this Appender sends output to a group of files, rotating between each one when the the active file reaches a maximum size. Like the AppendFile, the AppendFiles accepts either a FilePath or a string in its constructors. The user also must specify the number of files in the group and the maximum size of each file. this (FilePath p, int count, ulong maxSize, Layout layout = null); this (char[] fp, int count, ulong maxSize, Layout layout = null); In both cases, count must between 1 and 9 (inclusive). There is no limit imposed on maximum size. // send output of myLogger to a group of 5 files // named mylog0.log, mylog1.log, mylog2.log, mylog3.log // and mylog4.log in the current working directory. Each // log will have a maximum size of 1 MB (1024 * 1024 bytes) myLogger.add(new AppendFiles("mylog.log", 5, 1024*1024)); In this example, the AppendFiles will begin sending output to mylog0.log. When the size of that file exceeds 1 MB, output will be shifted to mylog1.log. This process will continue. When mylog4.log, the last in the group, becomes full, output will shift back to mylog0.log to repeat the process. When a file becomes the output target again, it is truncated so that any existing log output it contains is lost. This Appender allows remote logging by sending output to a socket. This class can be accessed by importing tango.util.log.AppendSocket?. It has one constructor, which accepts a tango.net.InternetAddress? as an argument. this (InternetAddress address, Layout layout = null); // send output of myLogger to localhost, port 17000 AppendSocket sockAppender = new AppendSocket("127.0.0.1", 17000); myLogger.add(sockAppender); ... // when finished with the AppendSocket, the network connection // must be closed sockAppender.close(); Assuming that a server is listening on 127.0.0.1:1700, the above example will open and maintain a TCP connection with it and send myLoggers output to it. The connection is opened in SocketAppenders constructor and maintained until close is called. This Appender sends each log output as email. This class is defined in tango.util.log.AppendMail?. It has one constructor, which accepts a tango.net.InternetAddress? as an argument along with the relevant email fields: from, to, and subject. this (InternetAddress server, char[] from, char[] to, char[] subj, Layout layout = null); myLogger.add(new AppendMail(mySMTPAddress, "MyProgram", "[email protected]", "Log Entry")); This Appender need not be explicitly closed, as no connection is maintained. Each time output is appended to the Logger, via append or one of the Level-specific methods, a new email is constructed and sent for that message. As such, this Appender should be used sparingly and in specific circumstances. The tango.util.log.AppendNull? does nothing. It is useful as a template for creating custom Appenders. It could also be used for benchmarking. Unlike Levels and Appenders, hierarchies are not configurable. provide a convenient means of grouping Loggers together. This allows new Logger instances to automatically be configured upon creation. It also allows for Appenders to be configured for several Loggers at once. Conceptually, each Logger instantiated can be viewed as a node in a tree. When a new Logger instance is created, it is inserted into the appropriate position in the tree. The Logger at the node above the new instance becomes its parent. The new instance inherits the Level and Appender configuration of its parent. This is a powerful feature that allows entire groups of Loggers to be configured at once. Internally, Tango's logging API maintains a special Logger instance, the root logger. The only way to get a reference to the root logger is by the following: Logger root = Log.root; Note that calling Log.lookup("root") or Log.lookup("Root") will both return references to different Loggers, not the root Logger. Also, while it is possible to do all of an application's logging through the root Logger, it is recommended that this not be done. The purpose of the root Logger is to establish a minimum configuration. Since the root Logger is, either directly or indirectly, the parent of every Logger created, it is the best place to establish the default configuration. By default, no configuration is set up. The following compilable example demonstrates: import tango.util.log.Log; void main() { Log.lookup("test").info("Hello Tango!"); } Compiling and running this example results in no output at all. While it would be possible to configure the Logger named "test", it is better to configure the root Logger. Configuring "test" would mean that any Loggers that are not children of "test" would not receive a default configuration. The following example prints "Info test - Hello Tango!" to stderr: import tango.util.log.Log; import tango.util.log.AppendConsole; void main() { Log.root.add(new AppendConsole()); Log.lookup("test").info("Hello Tango!"); } The Logger hierarchy is constructed from Logger names and, as such, is completely transparent to the user. But the user does have control over where a particular Logger instance fits in the hierarchy. Consider the following: Log.lookup("one"); Log.lookup("two"); Log.lookup("three"); In this case, three Loggers are created. Each Logger will be direct children of the root Logger and inherit its configuration. The next example demonstrates how to extend the hierarchy: Log.lookup("one"); Log.lookup("one.two"); Log.lookup("one.two.three"); This time, only the first Logger, "one", is a direct child of the root Logger. The second, second Logger, "one.two", is a child of "one", while "one.two.three" is a child of "two". So to establish a hierarchy, names must be separated by '.'. Conveniently, since D's package and module system follows the same convention, it is easy to associate Loggers with specific modules or classes. However, since rootLogger-lookup must be done at runtime, initialization of static loggers must happen in a static constructors of the class or module. module mypackage.mymodule; Logger logger; static this() { logger = Log.lookup("mypackage.mymodule"); } class MyClass { Logger logger; static this() { logger = Log.lookup("mypackage.mymodule.MyClass"); } } To be perfectly clear, the first time any dot-separated name is passed to lookup, only one Logger instance is created: Log.lookup("mypackage.mymodule"); Here, if no instance of "mypackage" has been previously created, then it is not created now. In that case, the Logger named "mypackage" will only be a conceptual Logger in the hierarchy and not a real instance. The new instance will inherit the configuration of the root Logger in that case. However, as the following example demonstrates, this is all hidden from the user: Log.lookup("mypackage.mymodule").info("Foo"); Log.lookup("mypackage").info("Bar"); In this case, even though the Logger "mypackage.mymodule" was created first, the Logger "mypackage" will still be inserted into the hierarchy in the appropriate place -- as a direct child of the root Logger and the parent of "mypackage.mymodule". The user never need be concerned whether or not an instance returned by lookup is new or not, or whether a child is created before the parent. Just be aware that an instance is created only when a Logger by that name does not already exist and that only one instance will be created at a time, no matter the number of dots in the name. The third and final configuration option for Tango's logging API controls the format of log output. In the section on Appenders, it was noted that all Appenders take an optional Layout parameter. Appenders interact with the Layout interface to construct the final output text. Most users will never need to touch the Layout class, though understanding the different stock Layouts is useful. Those implementing custom Appenders will need to be familiar with the Layout class interface. Only when implementing a custom Layout is more detail needed. First, we will examine the stock Layouts and then discuss what is required to implement a custom Layout. Tango ships with 5 stock Layout implementations: SpartanLayout, SimpleLayout, SimpleTimerLayout, DateLayout, and !Log4Layout. This is a very basic Layout which prints the name of the Logger followed by the output text. It is found in the tango.util.log.Layout module. Example Output: myLogger - Hello Tango! This Layout prefixes the output with the Level name and Logger name. It is found in the tango.util.log.Layout module. It is also the default Layout created by the base Appender class when no Layout is given to an Appender constructor. Info myLogger - Hello Tango! This Layout prefixes the output with the number of milliseconds since the application started, the Level name, and the Logger name. It is declared in tango.util.log.Layout. 1 Info myLogger - Hello Tango! This Layout prefixes output with ISO-8601 date information, followed by the Level name and the Logger name. It is found in tango.util.log.DateLayout?. 2007-01-01 01:00:28,001 Info myLogger - Hello Tango! This layout prints XML output which conforms to the Log4J specifications. It is found in tango.util.log.Log4Layout. <log4j:event <log4j:message><![CDATA[Log4]]></log4j:message> <log4j:properties> <log4j:data <log4j:data </log4j:properties> </log4j:event> The abstract Layout class defines four methods. When implementing custom Appenders and Layouts (explained in the next section) it is important to understand the purpose of these methods: abstract char[] header (Event event); char[] footer (Event event); char[] content (Event event); final char[] ultoa (char[] s, ulong l); The header, footer, and content methods should be overridden by custom Layout subclasses to provide the proper formatting. Because the header method is abstract, it must be overridden by subclasses. The default footer implementation returns an empty string, while the default content implementation returns the raw output string originally passed to a Logger. Appenders call the header, footer and content methods in order to append the formatted output to the final output buffer. The ultoa method is a convenience method which converts a millisecond time value to a string. When the stock Appenders and Layouts don't meet a project's logging requirements, developers can implement their own. This section explains what is needed to do so. All Appenders must derive from the abstract tango.util.log.Appender. It declares three abstract methods which must be overridden: abstract Mask getMask (); abstract char[] getName (); abstract void append (Event event); The Appender class also maintains a reference to its Layout. This reference can be set and fetched by subclasses via the following two methods: void setLayout(Layout layout); Layout getLayout(); Because Logger output propagates up the Hierarchy to parent Loggers, it is possible that the output could pass to multiple appenders with the same target. Masks are used internally to ensure that any given output is sent to a particular Appender exactly once. This means that each Mask must uniquely identify an Appender. To create a Mask, an Appender must call its register method with a unique string. protected Mask register (char[] tag); What constitutes a unique string depends on the nature of the Appender. For example, since there is only one stderr on a user's system, the AppendConsole uses its class name as the argument to register. This ensures that even if multiple instances of AppendConsole exist, output will only be printed to stderr once even when it is sent to each instance. SocketAppenders create masks from the internet address given in the constructor, MailAppenders from a concatenation of the to and subject fields. Whatever the string used to create the mask, it should uniquely identify the output target such that no output is sent to the same target twice. The getName method is expected to return the name of the Appender's class, hence all implementations should generally be of this form: char[] getName () { return this.classinfo.name; } It's possible, of course, to return a hardcoded string, such as "MyClass?". But using the above approach ensures that the result of calling getName will always be correct, even when a class is refactored and the name changed. Externally, output is appended to a Logger in the form of strings. Internally, the output is converted to a tango.util.log.Event instance. Events contain a lot of useful information about an output string which is intended to be used by the Layout assigned to an Appender. As such, the Appender can ignore the data an event object contains and pass it on to the formatting methods of the Layout object. When the append method is called, the Appender should pass the Event parameter to each of the following Layout methods in turn: header, content, and footer. The string returned from each method should be appended to a final output string and, finally, the buffer sent to the output target. The following example shows an Appender that sends output to stdout. This could, perhaps, be implemented to extend AppendConsole, but it is implemented as a separate class for demonstration purposes. // public import of tango.util.Event and tango.util.Layout import tango.util.log.Appender; import tango.io.Console; // for Tango's stdout interface - Cout class StdoutAppender : Appender { private Mask mask; // stores the value returned by the register method this(Layout layout = null) { // get a unique mask for this Appender. Because there is only one stdout, // it makes sense in this case to pass the class name to the register // method. This will ensure that even if multiple instances of this // class are found in the Hierarchy, any given output will only be // sent to stderr once mask = register(getName()); // the default Appender constructor sets the Layout instance // to a new SimpleLayout, so the Layout passed to this constructor // should be given to setLayout to change the default. setLayout(layout); } Mask getMask() { return mask; } char[] getName() { return this.classinfo.name; } void append(Event event) { // fetch a reference to the layout Layout layout = getLayout(); // append the formatted output to the current stdout line Cout.append(layout.header); Cout.append(layout.content); Cout.append(layout.footer); // calling Cout.newline will flush the output Cout.newline; } } Inspection of Tango's source will reveal that this class is identical to the AppendConsole, with the exception that Cout is used rather than Cerr. Anyone wanting to implement a custom Appender would be well served to examine the source of the stock appenders to see how they are implemented. Implementing a custom Layout is no more difficult than implementing a custom Appender. Before discussing the required steps, a brief examination of the tango.util.log.Event object would be useful. When a string is passed to a Logger through one of its append methods, it goes on a journey through Appenders, Layouts and the Hierarchy. Before that journey begins, the string is assigned to an Event object pulled out of a free list. The following methods are available to obtain information from an Event: char[] getName Returns the name of the Logger to which the output was originally given. Returns the name of the Logger to which the output was originally given. Logger.Level getLevel() Returns the Level assigned to the output. Returns the Level assigned to the output. char[] getLevelName() Convenience method that returns a string representaion of the output Level. Convenience method that returns a string representaion of the output Level. ulong getTime() Returns the time, in milliseconds, that the output was produced relative to the start of program execution. Returns the time, in milliseconds, that the output was produced relative to the start of program execution. ulong getEpochTime() Returns the time, in milliseconds, that the output was produced relative to January 1, 1970. Returns the time, in milliseconds, that the output was produced relative to January 1, 1970. char[] toUtf8(): Returns the original output string. Returns the original output string. Event objects also maintain a special scratch buffer that Layouts can use to construct formatted strings. This is a convenient means of allowing string manipulation without allocating more memory. Text can be appended to the scratch buffer via the append method and the content of the scratch buffer can be obtained via the getContent method: Event append(char[] x); char[] getContent() Another convenience method returns the number of milliseconds since the application was started. This is a static method: static ulong getRuntime () Since the Layout's header method is abstract, all custom Layouts must implement it. If the format of the original output string is to be altered, the content method must be implemented. If text is to be appended to the end of the original output string, the footer method should be implemented. The following example, CapLayout, prepends output with the Level name, converts the original output to upper case, and appends the output with the Logger name. It makes use of the Event object's scratch buffer to construct the formatted strings. import tango.util.log.Layout; import tango.text.Ascii; class CapLayout { char[] header(Event event) { event.append(event.getLevelName()).append(" - "); return event.getContent(); } char[] content(Event event) { return toUpper(event.toUtf8()); } char[] footer(Event event) { event.append(" - ").append(event.getName()); return event.content(); } } Click here for a dependency graph of the logging package. It shows import relationships between modules, and has hotspots to the relevant module documentation. See Dependency Graphs for more information.
http://dsource.org/projects/tango/wiki/ChapterLogging
CC-MAIN-2017-30
en
refinedweb
.persistence.btreeimpl.btreestorage;20 21 import java.io.*;22 23 import org.netbeans.mdr.persistence.*;24 25 /**26 * This represents a page fetched from the cache.27 */28 public class CachedPage extends IntrusiveList.Member {29 30 /** Description of which page this is */31 PageID key;32 /** How many times this page has been pinned and not unpinned */33 private int pinCount;34 /** true if this page has been modified */35 boolean isDirty;36 /** true if the log file must be flushed before this page can be written */37 boolean heldForLog;38 39 /** the contents of the page */40 public byte contents[];41 42 /* cache we were created by */43 private FileCache owner;44 45 /** create a page of the specified size 46 * @param size the page size for the cache47 */48 CachedPage(int size) {49 key = null;50 pinCount = 0;51 isDirty = false;52 heldForLog = false;53 owner = null;54 contents = new byte[size];55 }56 57 public FileCache getOwner() {58 return owner;59 }60 61 /** Make this page writable. If it was not writable previously,62 * this causes it to be logged. This must be called before the page63 * is modified. If the cache is not currently in a transaction,64 * this implicitly begins one.65 * @exception StorageException I/O error logging the page66 */67 public synchronized void setWritable() throws StorageException{68 owner.setWritable(this);69 }70 71 /** reinitialize thie object to point to a different file page 72 * @param id the file and page number this will become73 */74 void reInit(FileCache owner, PageID id) {75 this.owner = owner;76 key = id;77 pinCount = 0;78 isDirty = false;79 heldForLog = false;80 }81 82 public int pin(FileCache owner) {83 assert pinCount == 0 || this.owner == owner;84 this.owner = owner;85 return pinCount++;86 }87 88 public int getPinCount() {89 return pinCount;90 }91 92 public int innerUnpin() {93 pinCount--;94 return pinCount;95 }96 97 /** client calls this when it is done with the page */98 public void unpin() throws StorageException {99 owner.unpin(this);100 }101 102 /** format debugging info */103 public String toString() {104 StringBuffer debug = new StringBuffer ("" + key);105 if (pinCount > 0)106 debug.append(" pin count: " + pinCount);107 if (isDirty)108 debug.append(" dirty");109 if (heldForLog)110 debug.append(" held");111 debug.append("\n");112 113 int j = 0;114 for (int i = 0; i < contents.length; i++, j++) {115 if (j >= 16) {116 debug.append("\n");117 j = 0;118 }119 int data = (int)(contents[i]) & 0xFF;120 debug.append(Integer.toHexString(data));121 debug.append(" ");122 }123 debug.append("\n");124 125 return debug.toString();126 }127 128 129 }130 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/netbeans/mdr/persistence/btreeimpl/btreestorage/CachedPage.java.htm
CC-MAIN-2017-30
en
refinedweb
Module: Essential Tools Module Group: Generic Does not inherit #include <rw/gvector.h> declare(RWGVector,val) implement(RWGVector,val) RWGVector(val) a; // A Vector of val's. Class RWGVector(val) represents a group of ordered elements, accessible by an index. Duplicates are allowed. This class is implemented as an array. Objects of type RWGVector(val) are declared with macros defined in the standard C++ header file <generic.h>. NOTE -- RWGVector is deprecated. Please use RWTValVector or RWTPtrVector. Note that it is a value-based collection: items are copied in and out of the collection. The class val must have: a default constructor; well-defined copy semantics (val::val(const val&) or equivalent); well-defined assignment semantics (val::operator=(const val&) or equivalent). For each type of RWGVector, you must include one (and only one) call to the macro implement, somewhere in your code. None #include <rw/gvector.h> #include <rw/rwdate.h> #include <rw/rstream.h> declare(RWGVector, RWDate) /* Declare a vector of dates */ implement(RWGVector, RWDate) /* Implement a vector of dates */ int main() { RWGVector(RWDate) oneWeek(7); for (int i=1; i<7; i++) oneWeek(i) = oneWeek(0) + i; for (i=0; i<7; i++) std::cout << oneWeek(i) << std::endl; return 0; } Program output: 04/12/93 04/13/93 04/14/93 04/15/93 04/16/93 04/17/93 04/18/93 RWGVector(val)(); Construct an empty vector. RWGVector(val)(size_t n); Construct a vector with length n. The initial values of the elements can (and probably will) be garbage. RWGVector(val)(size_t n, val v); Construct a vector with length n. Each element is assigned the value v. RWGVector(val)(RWGVector(val)& s); Copy constructor. The entire vector is copied, including all embedded values. RWGVector(val)& operator=(RWGVector(val)& s); Assignment operator. The entire vector is copied. RWGVector(val)& operator=(val v); Sets all elements of self to the value v. val operator()(size_t i) const; val& operator()(size_t i); Return the ith element in the vector. The index i must be between zero and the length of the vector less one. No bounds checking is performed. The second variant can be used as an lvalue. val operator[](size_t i) const; val& operator[](size_t i); Return the ith element in the vector. The index i must be between zero and the length of the vector less one. Bounds checking is performed. const val* data() const; Returns a pointer to the raw data of self. Should be used with care. size_t length() const; Returns the length of the vector. void reshape(size_t n); Resize the vector. If the vector shrinks, it will be truncated. If the vector grows, then the value of the additional elements will be undefined. Rogue Wave and SourcePro are registered trademarks of Quovadx, Inc. in the United States and other countries. All other trademarks are the property of their respective owners. Contact Rogue Wave about documentation or support issues.
http://www.xvt.com/sites/default/files/docs/Pwr%2B%2B_Reference/rw/docs/html/toolsref/rwgvector.html
CC-MAIN-2017-51
en
refinedweb
Databases can grow to be very complex, spanning hundreds of tables over multiple logically independent schemata. Handling such a database in a single Domain Model could be a troublesome and not very pleasant task. Such scenario would be more easily manageable and cleaner if you could use separate models for the various logical parts of your database. Telerik Data Access allows you to do just that – you can have multiple Domain Models targeting the same database via a single connection string. In this blog post we will explain to you how Telerik Data Access handles such situation and how you need to configure your models for it. Each Telerik Data Access model has its own metadata which describes it. This metadata is cached once the model is initialized so that it ca be reused by the Telerik Data Access Runtime when the model is further instantiated. The key which is used to identify the cached metadata is based on the connection string used by the model. This means that two models targeting different parts of a single database via the same connection string will have the same cache key for their metadata. When an instance of the first model is created, its metadata will be cached. When the second model is instantiated the already cached metadata will be used. This could pose a problem as the metadata of the first model does not contain any information about the second model. To avoid such issue Telerik Data Access will merge the metadata of the two models, if they are configured correctly, allowing you to use multiple models with the same connection string. Doing so is more simple than you might expect. Lets see how. In order to merge the metadata of models using the same connection string, Telerik Data Access has only one requirement - the models should be located in the same namespace. The models can still be in different projects, but their context classes (the ones deriving from OpenAccessContext) should share the same namespace. The persistent classes themselves on the other hand can be in different namespaces. For example if you have two Domain Models in your solution – one for the HumanResources schema of AdventureWorks and one for the Production schema. You would need to set the namespaces of both models to match. This can be done from the properties window of the Visual Designer – just click on the designer surface and press F4 on the keyboard or use the context menu to open it. As you can see configuring your models so that their metadata can be merged is so simple that you may be already using this feature without knowing. The Q2 2014 release is nearing. Are you interested in what will Telerik Data Access bring to the table for it? Check out our blog next week when we will talk about one of our new, highly requested features. Kristian Nikolov is Support Officer.
https://www.telerik.com/blogs/telerik-data-access---when-one-model-is-just-not-enough
CC-MAIN-2017-51
en
refinedweb
Jifty::Manual::Style - Jifty coding style guide When in doubt, default to whatever Damian Conway's Perl Best Practices says. When documenting a private method, or providing documentation which is not useful to the user of the module (and is presumably useful to the developer), wrap it in =begin/end private. This way it does not show up in perldoc where a user would see it and yet is still available and well formatted (that is, not just a lump comment) when looking at the code. =begin private =head2 import_extra Called by L<Test::More>'s C<import> code when L<Jifty::Test> is first C<use>'d, it calls L</setup>, and asks Test::More to export its symbols to the namespace that C<use>'d this one. =end private sub import_extra { ... } Files created by tests should be declared as such using Jifty::Test->test_file() so they are cleaned up on a successful test run. Shell::Command has a number of functions which work like common shell file commands such as touch, cp and mv. They are battle tested and cross-platform. Use them instead of coding your own. For example, instead of this: open my $file, ">foo"; close $file; Do this: use Shell::Command; touch $file; To check if a string equals another string case insensitively, do this lc $foo eq lc $bar; lc $foo eq 'bar'; not this: $foo =~ /^\Q$bar\E/i; $foo =~ /^bar$/i;
http://search.cpan.org/~jesse/Jifty/lib/Jifty/Manual/Style.pod
CC-MAIN-2017-51
en
refinedweb
Helios::Service - base class for services in the Helios job processing system Helios::Service is the base class for all services intended to be run by the Helios parallel job processing system. It handles the underlying TheSchwartz job queue system and provides additional methods to handle configuration, job argument parsing, logging, and other functions. A Helios::Service subclass must implement only one method: the run() method. The run() method will be passed a Helios::Job object representing the job to performed. The run() method should mark the job as completed successfully, failed, or permanently failed (by calling completedJob(), failedJob(), or failedJobPermanent(), respectively) before it ends. The following 3 methods are used by the underlying TheSchwartz job queuing system to determine what work is to be performed and, if a job fails, how it should be retried. YOU DO NOT NEED TO TOUCH THESE METHODS TO CREATE HELIOS SERVICES. These methods manage interaction between Helios and TheSchwartz. You only need to be concerned with these methods if you are attempting to extend core Helios functionality. Controls how many times a job will be retried. Controls how long (in secs) before a failed job will be retried. These two methods should return the number of times a job can be retried if it fails and the minimum interval between those retries, respectively. If you don't define them in your subclass, they default to zero, and your job(s) will not be retried if they fail. The work() method is the method called by the underlying TheSchwartz::Worker (which in turn is called by the helios.pl service daemon) to perform the work of a job. Effectively, work() sets up the worker process for the Helios job, and then calls the service subclass's run() method to run it. The work() method is passed a job object from the underlying TheSchwartz job queue system. The service class is instantiated, and the the job is recast into a Helios::Job object. The service's configuration parameters are read from the system and made available as a hashref via the getConfig() method. The job's arguments are parsed from XML into a Perl hashref, and made available via the job object's getArgs() method. Then the service object's run() method is called, and is passed the Helios::Job object. Once the run() method has completed the job and returned, work() determines whether the worker process should exit or stay running. If OVERDRIVE mode is enabled and the service hasn't been HALTed or told to HOLD, the worker process will stay running, and work() will be called to setup and run another job. If the service is not in OVERDRIVE mode, the worker process will exit. Given a metajob, the metarun() method runs the job, returning 0 if the metajob was successful and nonzero otherwise. This is the default metarun() for Helios. In the default Helios system, metajobs consist of multiple simple jobs. These jobs are defined in the metajob's argument XML at job submission time. The metarun() method will burst the metajob apart into its constituent jobs, which are then run by another service. Metajobs' primary use in the base Helios system is to speed the job submission process of large job batches. One metajob containing a batch of thousands of jobs can be submitted and burst apart by the system much faster than thousands of individual jobs can be submitted. In addition, the faster jobs enter the job queue, the faster Helios workers can be launched to handle them. If you have thousands (or tens of thousands, or more) of jobs to run, especially if you are running your service in OVERDRIVE mode, you should use metajobs to greatly increase system throughput. These accessors will be needed by subclasses of Helios::Service. get/setConfig() get/setHostname() get/setIniFile() get/setJob() get/setJobType() errstr() debug() Most of these are handled behind the scenes simply by calling the prep() method. After calling prep(), calling getConfig() will return a hashref of all the configuration parameters relevant to this service class on this host. If debug mode is enabled (the HELIOS_DEBUG env var is set to 1), debug() will return a true value, otherwise, it will be false. Some of the Helios::Service methods will honor this value and log extra debugging messages either to the console or the Helios log (helios_log_tb table). You can also use it within your own service classes to enable/disable debugging messages or behaviors. The new() method doesn't really do much except create an object of the appropriate class. (It can overridden, of course.) It does set the job type for the object (available via the getJobType() method). When writing normal Helios services, the methods listed in this section will have already been dealt with before your run() method is called. If you are extending Helios itself or instantiating a Helios service outside of Helios (for example, to retrieve a service's config params), you may be interested in some of these, primarily the prep() method.. The getConfigFromIni() method opens the helios.ini file, grabs global params and, and is the preferred method. The getConfigFromDb() method connects to the Helios database, retrieves. There's an important subtle difference between getConfigFromIni() and getConfigFromDb(): getConfigFromIni() erases any previously set parameters from the class's internal {config} hash, while getConfigFromDb() merely updates it. This is due to the way helios.pl uses the methods: the INI file is only read once, while the database is repeatedly checked for configuration updates. For individual service classes, the best thing to do is just call the prep() method; it will take care of things for the most part. Queries the collective database for the funcid of the service class and returns it to the calling routine. The service name used in the query is the value returned from the getJobType() accessor method. This method is most commonly used by helios.pl to get the funcid associated with a particular service class, so it can scan the job table for waiting jobs. If their are jobs for the service waiting, helios.pl may launch new worker processes to perform these jobs. Scans the job queue for jobs that are ready to run. Returns the number of jobs waiting. Only meant for use with the helios.pl service daemon. Creates a Data::ObjectDriver object connected to the Helios database and returns it to the calling routine. Normally called by getDriver() if an D::OD object has not already been initialized. The initDriver() method calls setDriver() to cache the D::OD object for use by other methods. This will greatly reduce the number of open connections to the Helios database. Determine whether or not to exit if OVERDRIVE mode is enabled. The config params will be checked for HOLD, HALT, or OVERDRIVE values. If HALT is defined or HOLD == 1 this method will return a true value, indicating the worker process should exit(). This method is used by helios.pl and Helios::Service->work(). Normal Helios services do not need to use this method directly. The methods in this section are available for use by Helios services. They allow your service to interact with the Helios environment. Method to connect to a database in a "safe" way. If the connection parameters are not specified, a connection to the Helios collective database will be returned. If a connection to the given database already exists, dbConnect() will return a database handle to the existing connection rather than create a new connection. The dbConnect() method uses the DBI->connect_cached() method to reuse database connections and thus reduce open connections to your database (often important when you potentially have hundreds of active worker processes working in a Helios collective). It "tags" the connections it creates with the current PID to prevent reusing a connection that was established by a parent process. That, combined with helios.pl clearing connections after the fork() to create a worker process, should allow for safe database connection/disconnection in a forking environment. Given a message to log, an optional priority level, and an optional Helios::Job object, logMsg() will record the message in the logging systems that have been configured. The internal Helios logging system is the only system enabled by default. In addition to the log message, there are two optional parameters: The current Helios::Job object being processed. If specified, the jobid will be logged in the database along with the message. The priority level of the message as defined by Helios::LogEntry::Levels. These are really integers, but if you import Helios::LogEntry::Levels (with the :all tag) into your namespace, your logMsg() calls will be much more readable. There are 8 log priority levels, corresponding (for historical reasons) to the log priorities defined by Sys::Syslog: name priority LOG_EMERG 0 LOG_ALERT 1 LOG_CRIT 2 LOG_ERR 3 LOG_WARNING 4 LOG_NOTICE 5 LOG_INFO 6 LOG_DEBUG 7 LOG_DEBUG, LOG_INFO, LOG_NOTICE, LOG_WARNING, and LOG_ERR are the most common used by Helios itself; LOG_INFO is the default. The host, process id, and service class are automatically recorded with your log message. If you supplied either a Helios::Job object or a priority level, these will also be recorded with your log message. This method returns a true value if successful and throws a Helios::Error::LoggingError if errors occur. Several parameters are available to configure Helios logging. Though these options can be set either in helios.ini or in the Ctrl Panel, it is strongly recommended these options only be set in helios.ini. Changing logging configurations on-the-fly could potentially cause a Helios service (and possibly your whole collective) to become unstable! The following options can be set in either a [global] section or in an application section of your helios.ini file. loggers=HeliosX::Logger::Syslog,HeliosX::Logger::Log4perl A comma delimited list of interface classes to external logging systems. Each of these classes should implement (or otherwise extend) the Helios::Logger class. Each class will have its own configuration parameters to set; consult the documentation for the interface class you're trying to configure. internal_logger=on|off Whether to enable the internal Helios logging system as well as the loggers specified with the 'loggers=' line above. The default is on. If set to off, the only logging your service will do will be to the external logging systems. log_priority_threshold=1|2|3|4|5|6 You can specify a logging threshold to better control the logging of your service on-the-fly. Unlike the above parameters, log_priority_threshold can be safely specified in your Helios Ctrl Panel. Specifying a 'log_priority_threshold' config parameter in your helios.ini or Ctrl Panel will cause log messages of a lower priority (higher numeric value) to be discarded. For example, a line in your helios.ini like: log_priority_threshold=6 will cause any log messages of priority 7 (LOG_DEBUG) to be discarded. This configuration option is supported by the internal Helios logger (Helios::Logger::Internal). Other Helios::Logger systems may or may not support it; check the documentation of the logging module you plan to use. If anything goes wrong with calling the configured loggers' logMsg() methods, this method will attempt to catch the error and log it to the Helios::Logger::Internal internal logger. It will then rethrow the error as a Helios::Error::LoggingError exception. The initConfig() method is called to initialize the configuration parsing class. This method is normally called by the prep() method before a service's run() method is called; most Helios application developers do not need to worry about this method. The normal Helios config parsing class is Helios::Config. This can be changed by specifying another config class with the ConfigClass() method in your service. This method will throw a Helios::Error::ConfigError if anything goes wrong with config class initialization. The initLoggers() method is called to initialize all of the configured Helios::Logger classes. This method is normally called by the prep() method before a service's run() method is called. This method sets up the Helios::Logger subclass's configuration by calling setConfig(), setHostname(), setJobType(), and setDriver(). It then calls the logger's init() method to finish the initialization phase of the logging class. This method will throw a Helios::Error::Logging error if anything goes wrong with the initialization of a logger class. It will also attempt to fall back to the Helios::Logger::Internal logger to attempt to log the initialization error.. These methods control how many times a job should be retried if it fails and how long the system should wait before a retry is attempted. If you don't defined these, jobs will not be retried if they fail. Defines which job class to instantiate the job as. The default is Helios::Job, which should be fine for most purposes. If necessary, however, you can create a subclass of Helios::Job and set your JobClass() method to return that subclass's name. The service's work() method will instantiate the job as an instance of the class you specified rather than the base Helios::Job. NOTE: Please remember that "jobs" in Helios are most often only used to convey arguments to services, and usually only contain enough logic to properly parse those arguments and mark jobs as completed. It should be rare to need to extend the Helios::Job object. OTOH, if you are attempting to extend Helios itself to provide new abilities and not just writing a normal Helios application, you can use JobClass() to use your extended job class rather than the default. Defines which configuration class to use to parse your service's configuration. The default is Helios::Config, which should work fine for most applications. If necessary, you can create a subclass of Helios::Config and set your ConfigClass() method to return that subclass's name. The service's prep() method will initialize your custom config class and use it to parse your service's configuration information. See the Helios::Config documentation for more information about creating custom config classes. Helios, helios.pl, Helios::Job, Helios::Error, Helios::Config, TheSchwartz Andrew Johnson, <lajandy at cpan dot org> Portions of this software, where noted, are Copyright (C) 2009 by Andrew Johnson. Portions of this software, where noted, are Copyright (C) 2011-2012 by Andrew Johnson. Portions of this software, where noted, are Copyright (C) 2012.
http://search.cpan.org/~lajandy/Helios-2.61/lib/Helios/Service.pm
CC-MAIN-2017-51
en
refinedweb
Most .NET developers discover with their first Xamarin.Forms solution that these apps are much more complicated than what they’ve experienced under the other XAML platforms. The solutions contain multiple projects, involve a variety of NuGet packages, and require interacting with native Xamarin.iOS and Xamarin.Android projects. It can be intimidating to new users and time-consuming for experts. For our 2017 v3 release, we’ve added some new project templates for all supported Xamarin platforms to streamline the process and make development easier. What's in a Template? We're offering three varieties of templates for our Xamarin projects to support Xamarin.Forms, Xamarin.iOS, and Xamarin.Android. To use the new templates, all you need to do is install 2017 v3 from our website, and the new templates will become available when you create a new project in Visual Studio 2015 and 2017. The new templates will streamline the number of steps a developer needs to go through to create a project using ComponentOne Studio for Xamarin controls. First, these projects come with the NuGet packages preloaded for all our controls. This means you won’t need to take any special steps to add our controls to your project. Secondly, these templates also include the initialization necessary for our custom renderers in each platform project. This means that you no longer need to worry about manually initializing these controls for each platform because we’ve already taken care of it. All you need to do is add a license key to your project via our website or the GrapeCity License Manager Add-in. In Xamarin.Forms, the key can be added to your App.xaml.cs file: { public partial class App : Application { public App() { InitializeComponent(); C1.Xamarin.Forms.Core.LicenseManager.Key = "Add your key here"; MainPage = new App15.MainPage(); } protected override void OnStart() { // Handle when your app starts } In Xamarin.Android, it's located in your MainActivity class: public class MainActivity : Activity { protected override void OnCreate(Bundle bundle) { base.OnCreate(bundle); C1.Android.Core.LicenseManager.Key = "Add your key here."; // Set our view from the "main" layout resource // SetContentView (Resource.Layout.Main); } } And in Xamarin.iOS, it's in your AppDelegate class: public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions) { C1.iOS.Core.LicenseManager.Key = "Add your key here."; // create a new window instance based on the screen size Window = new UIWindow(UIScreen.MainScreen.Bounds); // If you have defined a root view controller, set it here: // Widnow.RootViewController = myViewController; // make the window visible Window.MakeKeyAndVisible(); return true; } After that, you'll be ready to use the ComponentOne Studio for Xamarin controls. Future Plans We’ll be making some future updates to the projects as Xamarin makes changes to their platform. Eventually, we intend to also support Xamarin.Forms applications that use a .NET Standard Library instead of a PCL, since this is the future of cross-platform code. Also, as more platforms are added to Xamarin.Forms, we’ll examine and update the templates as needed. We’re open to user feedback on any features you would like to see added to the templates in the future, so if you have any suggestions, feel free to leave a comment here or in our forums! Download ComponentOne Studio for Xamarin.
https://www.grapecity.com/en/blogs/project-templates-componentone-studio-xamarin
CC-MAIN-2017-51
en
refinedweb
Python alternatives for PHP functions import re string = "multiple line string with \n" parts = string.split("\n") parts = re.split("\n+", string) (PHP 4, PHP 5) split — Split string into array by regular expression Splits a string into array by regular expression. Case sensitive. The input string. If limit is set, the returned array will contain a maximum of limit elements with the last element containing the whole rest of string .. Example #1 split() example To split off the first four fields from a line from /etc/passwd: <?phplist(().
http://www.php2python.com/wiki/function.split/
CC-MAIN-2017-51
en
refinedweb
I’ve gone back to doing some work on Gates of Dawn, my terse Python library for creating PureData patches. The idea here is I want to make synthesizers in PD but I don’t particularly like drawing boxes and connectors using the clunky UI. Here’s a brief overview from the Readme file. Showing some simple code examples and introducing the key ideas. Idea #1 : Gates of Dawn uses function composition as the way to wire together the units in Pure Data. In PureData traditionally, you’d do something like wire the number coming out of a slider control to the input of an oscillator and then take that to the dac audio output. Here’s how to express that using Gates of Dawn. dac_ ( sin_ ( slider("pitch",0,1000) ) ) To create the slider you call the slider function (giving a label and a range). Then you pass the result of calling that as an argument to the sin_ function (which creates an osc~ object in PD). The output of that function is passed as an argument to the dac~. (Note we are now trying to use the convention that functions that represent signal objects (ie. those that have a ~ in their name in PD) will have a _ suffix. This is not ideal but it’s the best we can do in Python.) In Gates of Dawn programs are structured in terms of a bunch of functions which represent either individual PD objects or sub-assemblies of PD objects. You pass as arguments to the functions those things that are upstream and coming in the inlet, and the return value of the function is suitable to be passed to downstream objects. Things obviously get a bit more complicated than that, but before we go there, let’s just grasp the basic outline of a Gates of Dawn program. Idea #2 : Files are managed inside with patch() context blocks. Here’s the complete program. from god import * with patch("hello.pd") as f : dac_ ( sin_ ( slider("pitch",0,1000) ) ) We use Python’s "with" construction in conjunction with our patch() function to wrap the objects that are going into a file. This is a complete program which defines the simple slider -> oscillator -> dac patch and saves it into a file called "hello.pd" Try the example above by putting it into a file called hello.py in the examples directory and running it with Python. Then try opening the hello.pd file in PureData. If you want to create multiple pd files from the same Python program you just have to wrap each in a separate with patch() block. Idea #3 : Variables are reusable points in the object graph. This should be no surprise, given the way function composition works. But we can rewrite that simple patch definition like this : s = sin_ ( slider("pitch",0,1000) ) dac_(s) The variable s here is storing the output of the sin_ function (ie. the outlet from the oscillator). The next line injects it into the dac. This isn’t useful here, but we’ll need it when we want to send the same signal into two different objects later on. NB: Don’t try to reuse variables between one patch block and another. There’s a weird implementation behind the scenes and it won’t preserve connections between files. (Not that you’d expect it to.) Idea #4 : Each call of a Gates of Dawn function makes a new object. with patch("hello.pd") as f : s = sin_ ( slider("pitch",0,1000) ) s2 = sin_ ( slider("pitch2",0,1000) ) dac_(s,s2) Here, because we call the functions twice, we create two different oscillator objects, each with its own slider control object. Note that dac_ is an unusual function for Gates of Dawn, in that it takes any number of signal arguments and combines them. Idea #5 : You can use Python functions to make cheap reusable sub-assemblies. Here’s where Gates of Dawn gets interesting. We can use Python functions for small bits of re-usable code. For example here is simple function that takes a signal input and puts it through a volume control with its own slider. def vol(sig,id=1) : return mult_(sig,num(slider("vol_%s" % id,0,1))) Its first argument is any signal. The second is optional and used to make a unique label for the control. We can combine it with our current two oscillator example like this : def vol(sig,id=1) : return mult_(sig,num(slider("vol_%s" % id,0,1))) with patch("hello.pd") as f : s = vol (sin_ ( slider("pitch",0,1000) ), "1") s2 = vol (sin_ ( slider("pitch2",0,1000) ), "2") dac_(s,s2) Notice that we’ve defined the vol function once, but we’ve called it twice, once for each of our oscillators. So we get two copies of this equipment in our patch. Of course, we can use Python to clean up and eliminate the redundancy here. def vol(sig,id=1) : return mult_(sig,num(slider("vol_%s" % id,0,1))) def vol_osc(id) : return vol( sin_( slider("pitch_%s"%id,0,1000) ), id) with patch("hello.pd") as f : dac_(vol_osc("1"),vol_osc("2")) Idea #6 : UI is automatically layed-out (but it’s work in progress). You’ll notice, when looking at the resulting pd files, that they’re ugly but usable. Gates of Dawn basically thinks that there are two kinds of PD objects. Those you want to interact with and those you don’t. All the objects you don’t want to see are piled up in a column on the left. All the controls you need to interact with are layed-out automatically in the rest of the screen so you can use them. This is still very much work in progress. The ideal for Gates of Dawn is that you should be able to generate everything you want, just from the Python script, without having to tweak the PD files by hand later. But we’re some way from that at this point. At the moment, if you need to make a simple and usable PD patch rapidly, Gates of Dawn will do it. But it’s not going to give you a UI you’ll want to use long-term. Idea #7 : You still want to use PD’s Abstractions Although Python provides convenient local reuse, you’ll still want to use PD’s own Abstraction mechanism in the large. Here’s an example of using it to make four of our simple oscillators defined previously : with patch("hello.pd") as f : outlet_ ( vol_osc("$0") ) guiCanvas() with patch("hello_main.pd") as f : dac_( abstraction("hello",800,50), abstraction("hello",800,50), abstraction("hello",800,50), abstraction("hello",800,50) ) In this example we have two "with patch()" blocks. The first defines a "hello.pd" file containing a single vol_osc() call. (This is the same vol_osc function we defined earlier.) Note some of the important extras here : * outlet_() is the function to create a PD outlet object. When the hello.pd patch is imported into our main patch, this is how it will connect to everything else. * You must add a call to guiCanvas() inside any file you intend to use as a PD Abstraction. It sets up the graph-on-parent property in the patch so that the UI controls will appear in any container that uses it. * Note that we pass $0 to the vol_osc function. $0 is a special variable that is expanded by PD into a different random number for every instance of a patch that’s being included inside a container. PD doesn’t have namespaces so any name you use in an Abstraction is repeated in every copy. This can be problematic. For example a delay may use a named buffer as storage. If you import the same delay Abstraction twice, both instances of the delay will end up trying to use the same buffer, effectively merging the two delayed streams into one. Adding the $0 to the beginning of the name of the buffer will get around this problem as each instance of the delay will get a unique name. In our simple example we don’t need to use $0. But I’ve added it as the label for our vol_osc to make the principle clear. The second "with patch()" block defines a containing patch called hello_main.pd. It simply imports the hello.pd Abstraction 4 times and passes the four outputs into the dac_. Note that right now, layout for abstractions is still flaky. So you’ll see that the four Abstractions are overlapping. You’ll want to go into edit mode and separate them before you try running this example. Once you do that, though, things should work as expected.
http://sdi.thoughtstorms.info/?tag=pure-data
CC-MAIN-2017-51
en
refinedweb
So I'm trying to make my junit test into an executable so I can execute it with a .bat file. Does anybody know how to do this? The way I've been trying to use is turning it into an executable .jar file. If there is a better way let me know. How do I generate a main method for this script so I can export it as an executable .jar? import com.thoughtworks.selenium.*; import org.junit.After; import org.junit.Before; import org.junit.Test; import static org.junit.Assert.*; import java.util.regex.Pattern; public class Hello extends SeleneseTestCase { private Selenium selenium; @Before public void setUp() throws Exception { selenium = new DefaultSelenium("localhost", 4444, "*firefox C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe", ---------------------------- Please feel free to add me on skyep - ScytheMarketing I'm willing to make a donation to anybody willing to help me.
https://www.blackhatworld.com/seo/need-somebody-savy-with-selenium-java.736501/
CC-MAIN-2017-51
en
refinedweb
Detecting Origin of CMake Invocation For some projects, you may want to have a capability to identify the instance that invokes CMake: CLion or another instance (console, for example). Such a feature could be useful when your project should have different options or functionality depending on CMake invocation origin. To control and identify that, CLion suggests a special variable CLION_IDE which you can use in CMakeLists.txt file. This variable is TRUE when CMake is invoked from CLion IDE. Example if ($ENV{CLION_IDE}) <Some code is here> endif () See Also How tos: Last modified: 19 July 2017
https://www.jetbrains.com/help/clion/2017.1/detecting-origin-of-cmake-invocation.html
CC-MAIN-2018-47
en
refinedweb
output and in making figures, but it has also versatile enough for the computationally intensive task of running the model itself. Having said that, I need to be honest about where I see the R programing language standing now and into the future (via tirade). Included in this will be a combination of strengths, weaknesses and features that will either make or break R in the coming years. The Good One area where R has excelled at is the interactive nature of the language. While Python and Matlab both are interactive, I’ve only ever seen explicit use of interactive and non-interactive modes used in R. For an R script, the program behaves different based on whether or not it is run interactively. While trivial, this simply idea can help make functions, class, and packages more user friendly and versatile by making the UI easier to piece together. But I must admit that the golden egg of r, so to speak, has to be the package management system. With a simple one line command, any package available from a repository (e.g. CRAN) can be quickly downloaded and installed without a second thought, and this is precisely what a package management system should be. Over the years Python has improve on this front with pip to add similar functionality, but R is the only language I’ve seen where the package management system is integral to the design of the language. For comparison, Matlab is the antithesis of a well executed package management system. Not only are many “official” packages, or Toolboxes, not free, the ones that are all have to be downloaded and installed from websites and forums. Frankly, it is a mess and a real pain in the ass. The Bad Every programming language has some conception of a Namespace, so let’s take a look at how R manages its own. A Namespace is a generic term for how a programming language manages and structures the various names of the variables, functions and other constructs within the program. For example, consider the following Python program. def g(): x = 10 print(x) x = 3 g() print(x) Scoping refers to where in a Namespace, or perhaps more correctly, which Namespace, an variable is defined. So above, there is a variable inside as well as a variable outside the function. Are they the same and will the script print and then or and then ? In fact, since Python has simple and intuitive scoping rules, it is easy to figure out where in the Namespace a variable or function resides. This script does output and then since the inside the function is not the same as the outside the function. The R language unfortunately uses a different set of scoping rules–and while it may just be my own inability to shift from the set used in Python, Java, C, and others–the Namespace in R is all messed up. Not only is it hard to figure out how a variable is scoped, there is no easy way to ensure a variable is in a particular Namespace. With a rather opaque system, it makes high level programming challenging since program behaviour is reliant on how the Namespace is set up and maintained. I would really like to see this changed (perhaps in an offshoot of R). The Ugly Multithreading is not only a valuable commodity these days of high powered, multi-core machines, it is virtual prerequisite for scientific computations where most problems naturally scale across processes. Since R is based off of S and both emerged when multi-core machines were relegated to costly servers, neither have any ability to run on more than one thread. In fact, so much of the language is incompatible with the concept that studies looking at how to adapt R to a multithread design have come up with nill (except perhaps not using R). Since R is good at interfacing with other languages, there are workarounds if certain algorithms are bottlenecking the pipeline since they could be written in C, Fortran, Java, Python etc without too much difficulty. Just like Python, R makes for a good ‘glue’ language to piece together the various blocks (and then analyze...
https://www.r-bloggers.com/the-future-of-r-2/
CC-MAIN-2018-47
en
refinedweb
high efficient types of iron ore crusher from china iron ore crushing plant in china. iron ore crusher plant description iron ore crusher plant contains vibrating zenith could be the world s primary rock and mineralknow more crusher supplier from china for iron ore iron ore manufacturers. iron ore manufacturer/factory, china iron ore manufacturering supplier list, find qualified china brand coal iron ore crusher / crushing ironknow more china iron ore crusher iron ore plant china stone pulverizing machinery. ore crusher plant china, iron ore crushing and screening. metal ore; non metallic ore; stone crushing lineknow more iron ore crushing primary jaw crusher china iron ore crushing primary jaw crusher, find details about china jaw crusher, primary jaw crusher from iron ore crushing primary jaw crusher hengxing heavy equipment co., ltd.know more china best manufacturer mining iron ore cone crusher for sale shanghai manufacturer track mounted crushing. contact us shanghai zme company iron ore china top 1 mobile stone production crusher line for sale. china top 1 trackknow more china iron ore cone crusher suppliers, iron ore cone import china iron ore cone crusher from various high quality chinese iron ore cone crusher suppliers & manufacturers on globalsources.com.know more china iron ore crusher mobile iron crusher maker in china laos iron ore processing plant. laos became one important southeast asian market of cme since 2005,know more china iron ore crusher china cost effective iron ore crusher for sale. china cost effective iron ore crusher for sale, find details about china iron ore crusher, crushing machine from costknow more china iron ore crusher china cost effective iron ore crusher for sale. china cost effective iron ore crusher for sale, find details about china iron ore crusher, crushing machine from costknow more iron ore crusher control iron ore processing pilot plant for iron ore pelletizing; roll crusher factories; south jinqiao area, pudong, shanghai, china: 0086 21 58386256 0086 21know more china iron ore crushers in india industry news. jaw crusher for iron ore in india, buy from . china india is very rich in natural resources, such as coal, iron ore, bauxite, manganese oreknow more china iron ore crusher in the world, there are abundant iron ore, then iron ore crushing equipment is very important in iron ore mining industry, was founded in 1997, it has become aknow more china supplier iron ore used jaw crusher price iron mining jaw crusher, iron mining jaw crusher iron mining jaw crusher, energy saving manganese ore/basalt/iron mine jaw crusher from china supplier .. iron oreknow more used iron ore cone crusher price india used iron ore mobile crusher in india grinding mill china. used iron ore jaw crusher manufacturer india. cone crusher a iron ore india; used iron ore iron ore crusherknow more duoling iron ore impact crusher supplier china crusher products grinding mill china. iron ore fine crusher screening plant,china crusher duoling is china crushing plant manufacturer, offer jaw crusher,know more mobile iron ore crusher plant, mobile crusher plant made iron ore jaw crusher in china iron ore crusher,mobile iron ore crusher,iron ore jaw crusher made in china.jaw crusher manufacturer simple biggest stone crushing plantknow more china a high efficiency iron ore mobile crusher plant price low price ce iron ore mobile crusher price, low low price ce iron ore mobile crusher price, wholesale various high quality low price ce iron ore mobile crusher priceknow more china iron ore crushers china iron ore crushers. as a leading global manufacturer of crushing, grinding and mining equipments, we offer advanced, reasonable solutions for anyel sizeknow more jaw crusher for iron ore crushing from shanghai china to jaw crusher for iron ore jaw crusher for iron ore stone crusher machine price. jaw crusher electric maize . china mashed potato ore crushing plant . shanghai lipuknow more mobile iron ore crusher china underground mobile rock ore crusher jodha.co.in. 201453 about iron ore mobile crusher plant scale mining. small gold ore crusher,china mobile gold from the mine open pit and fromknow more high safety attractive style cone crusher for iron ore china gyratory crusher for sale crusherasia crusher for metal sale in usa iron ore crusher for sale usa jaw crusher, jaw crusher for sale, china primary stone crusherknow more
https://regencyinngatesvilletx.com/10/22/china-iron-ore-crusher/
CC-MAIN-2018-47
en
refinedweb
First is the Java agent that makes the connection and performs the query in DB2. import lotus.domino.*; import java.sql.*; import java.util.*; import javax.swing.JOptionPane; public class DB2Lookup extends AgentBase { private static String dbuser = "userid"; private static String dbpwd = "pswd"; private static String _url = "jdbc:db2://fully.qualified.server:db2portid/dbname"; private static String _drv = "com.ibm.db2.jcc.DB2Driver"; private java.sql.ResultSet rs; private java.sql.Statement stmt; private java.sql.Connection con; public String[] NotesMain(String tableID, String custNum, String custName) { Vector vec = new Vector(); try { Class.forName(_drv); Properties p = new Properties(); p.put("user", dbuser); p.put("password", dbpwd); con = DriverManager.getConnection(_url, p); stmt = con.createStatement(); rs = stmt.executeQuery("SELECT DISTINCT NAME,CUSTNUM FROM SCHEMA." + tableID + " WHERE CUSTNUM='" + custNum + "' AND UPPER(NAME) LIKE('%" + custName.toUpperCase() + "%') ORDER BY NAME"); while (rs.next()) { String sN = rs.getString("NAME"); String sO = rs.getString("OBLIGOR"); vec.add(sN + "" + sO); } } catch (SQLException se) { // Inform user of any SQL errors System.out.println("SQL Exception: " + se.getMessage()); se.printStackTrace(System.out); } catch (Exception e) { e.printStackTrace(); } finally { try { rs.close(); stmt.close(); con.close(); } catch (SQLException se) { System.out.println("SQL Exception: " + se.getMessage()); se.printStackTrace(System.out); } } if (vec.isEmpty()) { vec.add("NOTHING"); } String forms[] = (String []) vec.toArray( new String[vec.size()]); return forms; } } The agent takes parameters passed to it, executes the DB2 query, processes the result set and creates a string array to return to the caller. The biggest stumbling block I hit was handling queries that returned no records. Until I hit on adding the "NOTHING" entry if the vector came back empty, I kept crashing my Notes client. So beware! IBM has a JDBC Type 4 driver available for free re-distribution as part of your application. You can find it here (IBM registration required, and this is for DB2 UDB, Cloudscape and Derby. UPDATE: The link points to the version is for Linux/Unix/Windows; not sure if there are drivers for other OSes). The db2jcc.jar and db2jcc_license_cu.jar archives need to be attached to your Java agent (use the Edit Project button to do this). Here is the LotusScript agent that calls the Java agent. ' Declarations Option Public Option Declare Use "DB2Lookup" Uselsx "*javacon" Sub Initialize Dim docThis As NotesDocument Dim myClass As JavaClass Dim myObject As JavaObject Dim mySession As JavaSession Dim ws As NotesUIWorkspace Dim IDs As Variant Dim stat As Variant Set ws = New NotesUIWorkspace Set docThis = ws.CurrentDocument.Document If docThis.CustName(0) = "" Or docThis.CustNum(0) = "" Or docThis.TableID(0) = "" Then Messagebox "You must have entries in the Customer Name, Customer Number and Table fields before using this function",16,"Error" Exit Sub End If Set mySession = New JavaSession() Set myClass = mySession.GetClass("DB2Lookup") Set myObject = myClass.CreateObject IDs = myObject.NotesMain(docThis.TableID(0),docThis.CustNum(0),docThis.Borrower(0)) If IDs(0) = "NOTHING" Then Messagebox "No customers found meeting your criteria; please revise and try again", 64,"Customer Lookup" Exit Sub End If If Ubound(IDs) > 0 Then stat= ws.Prompt(PROMPT_OKCANCELLIST,"Select Customer","Select the Customerfrom this list:","",IDs) If stat = "" Then Exit Sub Else stat = IDs(0) End If With docThis .CustName = Trim(Strleft(stat,"")) .CustNum = Trim(Strright(stat,"")) End With Call ws.CurrentDocument.Reload End Sub provided by Julian Robichaux at nsftools.com. The agent makes sure there are entries in all the fields passed for the JDBC call. If no records are found, the user is informed. If one record is found, the fields are updated automatically. If more than one record is found, a dialog box is presented from which the user can choose an entry. Who says Notes/Domino is a closed environment? Now go forth and JDBC! Technorati: Show-n-Tell Thursday Categories: Show-n-Tell Thursday_ 7 comments: Probably a simple question...I am getting a licensing error :. From my initial Googling, it appears that the problem is that the db2jcc_license_cu.jar has not been included. I have included this in my Java Project (Script library). Any ideas? Nick, I'm not sure if there is a simple answer. A couple of things to check though: - The link to the IBM site that I included is for the driver for Linux/Unix/Windows. Is that the OS your DB2 server is on? - I'm not sure how it happened because I didn't do it manually, but the db2jcc and license jars are in my CLASSPATH. Beyond that I'm not sure what to say. Let me know if either of these things help and we can try to go from there. Don I'm testing from a windows XP machine, and connecting to DB2 on an AS400. I actually didn't put all of the error message in my last post, which was giving me the answer!:. An appropriate license file db2jcc_license_*.jar for this target platform must be installed to the application classpath. Connectivity to QAS databases is enabled by any of the following license files: { db2jcc_license_cisuz.jar } Pretty sure this is telling me that I need to add db2jcc_license_cisuz.jar to my project, rather than the db2jcc_license_cu.jar At : under the Important section : For DB2 UDB for iSeries® and z/OS servers (provided with DB2 Connect and DB2 Enterprise Server Edition): db2jcc_license_cisuz.jar I have used the type 2 driver before, but in the FAQ on the page above it mentions that if DB2 on same machine, stick with Type 2, if on diff' machine they have basically the same performance. General Ramble... I do a lot of integration with our ERP system on an AS400, using LC LSX and a bunch of hand written. I wrote some test java code (type 2 driver), which was being called on a Win 2003 server, but this didn't do anything for me performance wise, so I am in the process of switching over to stored procedures...but...a week or so ago, I bumped into this post on the enterprise forum where Charles has done way more than me in attempting to reduce dat a retreival times. I was interested to see if the Type 4 driver could reduce my retreival time, but it looks like the FAQ mentioned above is telling me know. ...General Ramble over. I should still try and get hold of the db2jcc_license_cu.jar though. But thanks, great post Don. Glad it seems like a simple solution. I'm not really sure of the performance of each of the driver types. I haven't had a need to go any deeper than what is in my post, and that seems to work for us at the moment. I agree that a stored procedure would be more efficient if there were reasonably complex operations to be performed on the data - assuming you could do it in a procedure - because there is less traffic. But every situation is different. Thanks for reading! How can I get db2jcc_license_cisuz.jar file? WOuld you mind sending me a copy of that file to baijingyu at hotmail.com ? Thanks. I haven't been able to find it. I found this at the IBM site:. You may be able to locate it through a search engine. Sorry but that is the best I can do. Don The link to the download page for the JDBC driver is no longer valid, it returns "2004-09-20 10:09:21.003415R download was not found in the database". It happens.
http://dmcnally.blogspot.com/2006/06/sntt-jdbc-in-notes.html
CC-MAIN-2018-47
en
refinedweb
I think I have an answer for my own question. It seems to work but it seems like a hack to me. I worry about hacks because they have a habit of failing at the wrong time. I’d be thankful for some opinions on this “solution” and what possible problems might arise. See update #2 below for details. Original Question Objective: I want to layer three canvasses on top of each other within one “box” of a bootstrap grid so that I can draw each canvas independently. Problem: The first canvas (called backgroundCanvas below) fits inside the grid box but the layered canvases (middleCanvas and topCanvas) do not. When I draw X’s on the canvasses in various corners and in the center, it can be seen that the red Xs (written to backgroundCanvas) fit within the bootstrap box (which has a black border) but the black and white Xs (written to middleCanvas and topCanvas respectively) do not match the box at the bottom right corner nor do their centers align with the background’s center. I have highlighted these problem areas with green ellipses in the image linked below. Screen Capture Showing Issue Mouse clicks on the canvas target the topCanvas and output from them can be seen in the developer Console log. These confirm that clicks on topCanvas outside of the grid box are being detected. The HTML uses position:relative for the first canvas (the one that fits) and position:absolute for the other canvasses (that don’t fit). This is how I layered the canvasses in the actual application before I started to port it to Meteor and Bootstrap. Changing the other canvasses to also use position:relative does fit them within the grid but they are not layered – they are sequential. Environment: This is a Meteor application on Windows 10 using Chrome. Reproduction: I pared the code down to the bare minimum needed to generate and demonstrate the problem. The code consists of main.html, main.css, and main.js which are inserted below. To reproduce this I tried to create a jsFiddle but was not sure how to do that for a Meteor app. Some of the Internet wisdom suggested it was not yet possible but, if I’m wrong, please let me know. To reproduce, the following steps should suffice: - Create an app: meteor create canvas_size - Add Bootstrap: meteor add twbs:bootstrap - Edit Code: replace the content of the files generated in the client directory with the code shown below or at this DropBox folder:. - Run meteor: meteor The Code: main.html <head> <title>Canvas Size</title> </head> <body> <h1>Welcome to My Troubles</h1> {{> theGrid}} </body> <template name="theGrid"> <div class="container"> <div class="row"> <div class="col-md-6" style="border: 1px solid black; "> {{> sheet}} </div> <!-- /col-md-6 --> </div> <!-- /row --> </div> </template> <template name="sheet"> <div id="sheetView" > <!-- Overlap three canvasses. --> {{>sheetBackground}} {{>sheetMiddle}} {{>sheetTop}} </div> </template> <template name="sheetBackground"> <canvas id="backgroundCanvas" class="playingArea" style="z-index: 0; position:relative; left: 0px; top: 0px; " width="400" height="600" > HTML 5 Canvas not supported by your browser. </canvas> </template> <template name="sheetMiddle"> <canvas id="middleCanvas" class="playingArea" style="z-index: 1; position: absolute; left: 0px; top: 0px; " width="400" height="600"> HTML 5 Canvas not supported by your browser. </canvas> </template> <template name="sheetTop"> <canvas id="topCanvas" class="playingArea" style="z-index: 2; position: absolute; left: 0px; top: 0px; " width="400" height="600"> HTML 5 Canvas not supported by your browser. </canvas> </template> main.css: #backgroundCanvas { background-color: lightgrey; } .playingArea, #sheetView { width: 100%; height: auto; } main.js: import { Meteor } from 'meteor/meteor'; import { Template } from 'meteor/templating'; import './main.html'; // Render the backgroundCanvas when system signals onRendered. // This only gets called when the thing is first rendered. Not when resized. Template.sheetBackground.onRendered( function() { let context = $( '#backgroundCanvas' )[0].getContext( "2d" ); drawX( context, 399, 1, 30, "red", 9 ); drawX( context, 200, 300, 30, "red", 15 ); drawX( context, 1, 599, 30, "red", 9 ); } ); Template.sheetMiddle.onRendered( function() { let context = $( '#middleCanvas' )[0].getContext( "2d" ); drawX( context, 1, 1, 30, "black", 9 ); drawX( context, 200, 300, 30, "black", 9 ); drawX( context, 399, 599, 30, "black", 9 ); } ); Template.sheetTop.onRendered( function() { let context = $( '#topCanvas' )[0].getContext( "2d" ); drawX( context, 1, 1, 24, "white", 3 ); drawX( context, 200, 300, 24, "white", 3 ); drawX( context, 399, 599, 24, "white", 3 ); } ); Template.sheetTop.events( { 'mousedown': function( event ) { let canvasPos = windowPointToCanvasPoint( event.clientX, event.clientY, $( '#topCanvas' )[0] ); console.log( "sheet.js: mousedown: " + "clientXY=<" + event.clientX + "," + event.clientY + "> " + "canvasXY=<" + Math.floor(canvasPos.x) + "," + Math.floor(canvasPos.y) + "> " ); }, } ); export function drawLine( context, startX, startY, endX, endY ) { context.beginPath(); context.moveTo( startX, startY ); context.lineTo( endX, endY ); context.stroke(); context.closePath(); } function drawX( context, centerX, centerY, size, colour, thickness ) { context.save(); context.strokeStyle = colour; context.lineWidth = thickness; // Not correct. Total line length will actually be Math.hypot( size, size ); var lineLen = size / 2; drawLine( context, centerX - lineLen, centerY - lineLen, centerX + lineLen, centerY + lineLen ); drawLine( context, centerX - lineLen, centerY + lineLen, centerX + lineLen, centerY - lineLen ); context.restore(); } function windowPointToCanvasPoint( windowX, windowY, canvas ) { var boundRect = canvas.getBoundingClientRect(); var canvasX = ( windowX - boundRect.left ) * ( canvas.width / boundRect.width ); var canvasY = ( windowY - boundRect.top ) * ( canvas.height / boundRect.height ); return { "x": canvasX, "y": canvasY }; } Update #1: I read here () that, unless contained in another element with relative positioning, absolute positioning positions elements relative to the document’s body. So I modified some of the HTML above to add a relative positioning to the div that contains all the canvasses and change the first canvas to also be absolute positioning. <template name="sheet"> <div id="sheetView" style="position: relative; "> <p>dummy text to force bootstrap row height to be more visible<> Now, at least, the canvasses are aligned with each other and, as the column width shrinks due to resizing, the canvasses shrink to fit the column. Problem: The height of the bootstrap row does not adjust to the canvasses. It is (perhaps) 1px high. To make that more obvious, I added some dummy text (see HTML above) which, at least, forces the row to be 1 paragraph in height – now you can see the border which is NOT surrounding the canvas. If I add a second column to the row, when I resize down to a small screen size, the second column overlaps the canvasses instead of going below. See and . Question: Can I force the row height in some manner or is there a better solution than what I am attempting here? The code: I’ve put the code with the new HTML into this Dropbox folder:. I did not change the .js or .css for this update. Update #2: I can control the height of the Bootstrap row by inserting a paragraph into the same row/column box as the overlapping canvasses. The canvasses are still positioned absolutely but the paragraph height is controlled to be the same as the canvas height. A snippet of the modified code is here. Note the HTML paragraph near the top and the corresponding CSS changes. main.html <template name="sheet"> <div id="sheetView" style="position: relative; "> <p id="hackyHeightPara">Dummy text allowing CSS to force bootstrap row height.<> main.css #hackyHeightPara { height: 525px; /* Set to control row height */ } .playingArea, #sheetView { width: auto; /* Swapped from width:100% and height:auto */ height: 100%; } Thoughts?
http://w3cgeek.com/layered-canvasses-extend-outside-bootstrap-grid.html
CC-MAIN-2018-47
en
refinedweb
NTN 6211-2ZN Retailer|6211-2ZN bearing in India 2z Bearing - Find 2z Bearing. Search for New & Used Autos, Parts & Accessories. Browse by Make, Model & Year.Contact us Find Top Products on eBay - Seriously, We have EVERYTHING Over 70% New & Buy It Now; THIS is the new eBay. Find Great Deals now!Contact us NTN 6211-Z bearing in Germany NTN SD3048G bearing Ball Bearing 6211 2RS1.Chrome steel material2.NTN bearings3.Original product home appliance Deep Groove Ball Bearing 6211N 6211 Z 6211 2Z TWB | SKF 6211 2ZN bearing in India SKF 6211 2ZN bearing in India are widely used in industrial drive, agriculture, compressors, motors and generators, NTN 6211-2ZN. SKF Bearings. Contact us.Contact us import ntn 6211-2zn bearing | Product import ntn 6211-2zn bearing High Quality And Low Price. import ntn 6211-2zn bearing are widely used in industrial drive, agriculture, compressors, motors andContact us Deep groove ball bearings In addition to the information provided on this page, consider what is provided under Deep groove ball bearings. For information on selecting the appropriate bearingContact us Welcome to NTN Bearing NTN is one of the world's leading manufacturers of bearing products to OEMs, distributors, and end users.Contact us NTN 6011-2ZN bearing original in Botswana | GRAND Bearing name:NTN 6011-2ZN bearing We guarantee to provide you with the best NTN 6211-2ZN Bearings,At the same time to provide you with the NTN 6211-2ZN typesContact us 【ntn 6211-2zn bearing plant】-Mongolia Bearing ntn 6211 2zn bearing is one of the best products we sell, our company is also one of the best ntn 6211-2zn bearing plant. Expect us to cooperate. Our ntn 6211-2znContact us 6211 2Z SKF Deep Groove Bearing - 55x100x21mm Product Description. This 6211 2Z SKF bearing is a Shielded Deep Groove Radial Ball Bearing with a standard radial internal clearance The bearing's dimensions areContact us NTN 6211-2Z bearing NTN 6211-2Z bearing using a wide range, NTN 6211-2Z bearings are mainly used in construction machinery, machine tools, automobiles, metallurgy, mining,Contact us NTN 51220 Bearings /NTN-bearings/NTN_51220_39859.html NTN 51220 bearings founded in Japan.The largest merits of our NTN 51220 bearing is best quality,competitive price and fast shipping.MeanwhileNTN 51220 bearing is veryContact us bearing 6211-2Z,6211-2Z bearings,NTN 6211-2Z,Deep Groove Ball Description of bearing 6211-2Z,6211-2Z bearings,NTN 6211-2Z, Deep Groove Ball Bearings 6211-2Z. Deep groove ball bearings are the most representative and widelyContact us 【ntn 6211-2z bearing assembly】-Mauritius Bearing ntn 6211 2z bearing is one of the best products we sell, our company is also one of the best ntn 6211-2z bearing assembly. Expect us to cooperate. Our ntn 6211-2zContact us SKF 6211-2Z bearings We, ERIC BEARING Co.,Ltd, as one of the largest exporters and distributors of SKF 6211-2Z bearing in China, FAG bearings, NSK bearings, NTN bearings,Contact us 6211-2ZN SKF bearings Eric bearing limited company mainly supply high precision, high speed, low friction bearing 6211-2ZN SKF. In the past 12 years, 6211-2ZN SKF is widely used inContact us NTN 6211 2ZN bearing in South Africa | Product NTN 6211 2ZN bearing in South Africa High Quality And Low Price. NTN 6211 2ZN bearing in South Africa are widely used in industrial drive, agriculture, compressorsContact us SKF 6211-2Z bearings SKF 6211-2Z bearings are rust free, corrosion resistant, highly durable bearings you can get easily at unbeatable prices. Deals in all kind of high tensile, premiumContact us Shop Bearings & Drive Line. - DB Electrical® - Official Site Buy High Quality Electrical Parts. Free Standard Shipping & 1 Year Warranty!Contact us INA 6211.2Z Bearing – Ball Roller Bearings Supplier Bearinga.com Product Description Brand: INA Bearing Category: Deep Groove Ball Bearings Model: 6211.2Z d: 55 mm D: 100 mm B: 21 mm Cr: 43000 N C0r: 29000 N Grease RPM: 8000 1/minContact us - KOYO M-451 Factory|M-451 bearing in Ghana - RHP 7238B/DB Standard Sizes|7238B/DB bearing in Rwanda - NSK HJ2208 Seals|HJ2208 bearing in Ontario Ottawa - INA (SCHAEFFLER) TC4052 Limiting Speed|(SCHAEFFLER) TC4052 bearing in Guatemala - KOYO NRB TRA-3446 Manufacturers|NRB TRA-3446 bearing in Central Africa - KOYO 7202C Cross Reference|7202C bearing in Hong Kong
http://welcomehomewesley.org/?id=5404&bearing-type=NTN-6211-2ZN-Bearing
CC-MAIN-2018-47
en
refinedweb
Agile development has created a culture of newly weds, programmers coupled in pairs oblivious to the fate that awaits them. As with all forms of coupling, the short-term benefits are outweighed by the long-term consequences. The optimism of a new relationship spelled out in code never lives up to the story, no matter how it is prioritised. There has been much talk and many studies about how effective pair programming is, but clearly all those involved are looking for some kind of meaningful justification that makes sense of their predicament. Apparently pairing improves code quality and is enjoyable, but I doubt that: how can you really have fun and program well when you keep having to remove your headphones to listen to someone else questioning your mastery of code? Good pairing is supposed to involve alternately navigating and driving. From what I can tell, this means navigating the quirks of another’s style and conventions while driving home your own beliefs about how to organise things properly. It is a contest in which there will be a winner and a loser. So much for team spirit! I suspect that financial debt – which is like technical debt but with money – is a contributory factor. PCs, however, are not that expensive. Surely companies can spare enough money to supply each programmer with their own PC or, at the very least, a keyboard they can call their own? The point of a PC is that it’s personal – the clue is in the name. Sharing a computer is like sharing a toothbrush, only more salacious. For example, the practice of promiscuous pairing is often promoted. You swing from one partner to the next willingly, openly and frequently. Such loose coupling demonstrates a lack of commitment and sends out the wrong moral message. If you’re going to have to pair, you should do it properly, all the way from ‘I do’ to ‘Done’. It is likely that there will be an eventual be a separation of concerns, but that at least avoids the risk of communicating state-transition diagrams and infecting your C++ code with explicit use of the standard library namespace. One thing that might be said in favour of pairing is picking up new skills. For example, I have learnt to use a Dvorak keyboard and a succession of editors with obscure key bindings and shortcuts. Being able to present new and existing partners with an unfamiliar and hostile environment puts them off their guard and sends out a clear signal about the roles in the relationship. I also find pairing can be effective with newbies. They can either sit and watch for a few hours or they can drive while you correct them from the back seat. These benefits, however, are few and far between. The day-to-day reality is more cynical: the constant nagging, the compromises you make, the excuses you have to make up, the methods you use, the arguments, the rows, the columns... and you sometimes have to put up with your partner snoring after you’ve offered an extended and enlightening explanation of some minor coding nuance they seemed apparently unaware of! So, don’t impair to code, decouple.
https://accu.org/index.php/journals/1983
CC-MAIN-2018-47
en
refinedweb
Thoughts on Software Development TDD: Testing delegation Thoughts on Software Development TDD: Testing delegation Join the DZone community and get the full member experience.Join For Free I recently came across an interesting blog post by Rod Hilton on unit testing and it reminded me of a couple of conversations Phil, Raph and I were having about the best way to test classes which delegate some responsibility to another class. An example that we ran into recently was where we wrote some code which required one controller to delegate to another. public class ControllerOne extends Controller { public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws Exception { } } public class ControllerTwo extends Controller { private final ControllerOne controllerOne; public ControllerTwo(ControllerOne controllerOne) { this.controllerOne = controllerOne; } public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws Exception { .... return controllerOne.handleRequest(...); } } @Test public void theTest() { ControllerOne controller One = mock(ControllerOne.class); ControllerTwo controllerTwo = new ControllerTwo(controllerOne); controllerTwo.handleRequest(...) verify(controllerOne).handleRequest(...); } When we discussed this Raph and Phil both pointed out that we didn't care that specifically about the implementation of how the request was handled. What we care about is that the result we get after the request is handled is as expected. We therefore changed our test to be more like this: @Test public void theTest() { ControllerOne controller One = mock(ControllerOne.class); ModelAndView myModelAndView = new ModelAndView(); when(controllerOne.handleRequest(...).thenReturn(myModelAndView); ControllerTwo controllerTwo = new ControllerTwo(controllerOne); ModelAndView actualModelAndView = controllerTwo.handleRequest(...) assertThat(actualModelAndView, equalTo(myModelAndView)); } I've been finding more and more recently that when it comes to writing tests which do some sort of delegation the 'stub + assert' approach seems to work out better than just verifying. You lose the fine grained test that verifying mocks provides but we can still pretty much tell indirectly whether the dependency was called because if it wasn't then it's unlikely (but of course still possible) that we would have received the correct 'ModelAndView' in our assertion. My current approach is that I'd probably only mock and verify an interaction if the dependency is a service which makes a network call or similarly expensive call where the interaction is as important as the result obtained. For example we probably wouldn't want to make that call multiple times and with verification we're able to ensure that doesn't happen. I find as I've used mocking frameworks more I feel like I'm drifting from a mockist style of testing to one getting closer to the classicist approach. I wonder if that's quite a normal progression. Published at DZone with permission of Mark Needham , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/thoughts-software-development-news
CC-MAIN-2018-47
en
refinedweb
Gecode::TraceTraits< Float::FloatView > Class Template Reference Trace traits for float views. More... #include <traits.hpp> Detailed Description template<> class Gecode::TraceTraits< Float::FloatView > Trace traits for float/float/trace/traits.hpp
https://www.gecode.org/doc-latest/reference/classGecode_1_1TraceTraits_3_01Float_1_1FloatView_01_4.html
CC-MAIN-2018-47
en
refinedweb
Serialization Interface¶ The Signing Interface only signs strings. To sign other types, the Serializer class provides a dumps/ loads interface similar to Python’s json module, which serializes the object to a string then signs that. Use dumps() to serialize and sign the data: from itsdangerous.serializer import Serializer s = Serializer("secret-key") s.dumps([1, 2, 3, 4]) b'[1, 2, 3, 4].r7R9RhGgDPvvWl3iNzLuIIfELmo' Use loads() to verify the signature and deserialize the data. s.loads('[1, 2, 3, 4].r7R9RhGgDPvvWl3iNzLuIIfELmo') [1, 2, 3, 4] By default, data is serialized to JSON. If simplejson is installed, it is preferred over the built-in json module. This internal serializer can be changed by subclassing. To record and validate the age of the signature, see Signing With Timestamps. To serialize to a format that is safe to use in URLs, see URL Safe Serialization. The Salt¶: from itsdangerous.url_safe import URLSafeSerializer s1 = URLSafeSerializer("secret-key", salt="activate") s1.dumps(42) 'NDI.MHQqszw6Wc81wOBQszCrEE_RlzY' s2 = URLSafeSerializer("secret-key", salt="upgrade") s2.dumps(42) 'NDI.c0MpsD6gzpilOAeUPra3NShPXsE' The second serializer can’t load data dumped with the first because the salts differ: s2.loads(s1.dumps(42)) Traceback (most recent call last): ... itsdangerous.exc.BadSignature: Signature "MHQqszw6Wc81wOBQszCrEE_RlzY" does not match Only the serializer with the same salt can load the data: s2.loads(s2.dumps(42)) 42 Responding to Failure¶ Exceptions have helpful attributes which allow you to inspect the payload if the signature check failed. This has to be done with extra care because at that point you know that someone tampered with your data but it might be useful for debugging purposes. from itsdangerous.serializer import Serializer from itsdangerous.exc import BadSignature, BadData s = URLSafeSerializer("secret-key") decoded_payload = None try: decoded_payload = s.loads(data) # This payload is decoded and safe except BadSignature as e: if e.payload is not None: try: decoded_payload = s.load_payload(e loads_unsafe(): sig_okay, payload = s.loads_unsafe(data) The first item in the returned tuple is a boolean that indicates if the signature was correct. API¶ - class itsdangerous.serializer. Serializer(secret_key, salt=b'itsdangerous', serializer=None, serializer_kwargs=None, signer=None, signer_kwargs=None, fallback_signers=None)¶ This class provides a serialization interface on top of the signer. It provides a similar API to json/pickle and other modules but is structured differently. You do not need to subclass this class in order to switch out or customize the Signer. You can instead pass a different class to the constructor as well as keyword arguments as a dict that should be forwarded. s = Serializer(signer_kwargs={'key_derivation': 'hmac'}) You may want to upgrade the signing parameters without invalidating existing signatures that are in use. Fallback signatures can be given that will be tried if unsigning with the current signer fails. Fallback signers can be defined by providing a list of fallback_signers. Each item can be one of the following: a signer class (which is instantiated with signer_kwargs, salt, and secret_key), a tuple (signer_class, signer_kwargs), or a dict of signer_kwargs. For example, this is a serializer that signs using SHA-512, but will unsign using either SHA-512 or SHA1: s = Serializer( signer_kwargs={"digest_method": hashlib.sha512}, fallback_signers=[{"digest_method": hashlib.sha1}] ) Changed in version 0.14:: The signerand signer_kwargsparameters were added to the constructor. Changed in version 1.1.0:: Added support for fallback_signersand configured a default SHA-512 fallback. This fallback is for users who used the yanked 1.0.0 release which defaulted to SHA-512. default_fallback_signers= [{'digest_method': <built-in function openssl_sha512>}]¶ The default fallback signers. default_serializer= <module 'json' from '/usr/lib/python3.5/json/__init__.py'>¶ If a serializer module or class is not passed to the constructor this one is picked up. This currently defaults to json. default_signer¶ alias of itsdangerous.signer.Signer dump(obj, f, salt=None)¶ Like dumps()but dumps into a file. The file handle has to be compatible with what the internal serializer expects. dump_payload(obj)¶ Dumps the encoded object. The return value is always bytes. If the internal serializer returns text, the value will be encoded as UTF-8. dumps(obj, salt=None)¶ Returns a signed string serialized with the internal serializer. The return value can be either a byte or unicode string depending on the format of the internal serializer. iter_unsigners(salt=None)¶ Iterates over all signers to be tried for unsigning. Starts with the configured signer, then constructs each signer specified in fallback_signers. load_payload(payload, serializer=None)¶ Loads the encoded object. This function raises BadPayloadif the payload is not valid. The serializerparameter can be used to override the serializer stored on the class. The encoded payloadshould always be bytes. load_unsafe(f, *args, **kwargs)¶ Like loads_unsafe()but loads from a file. New in version 0.15. loads(s, salt=None)¶ Reverse of dumps(). Raises BadSignatureif the signature validation fails. loads_unsafe(s,.
http://itsdangerous.palletsprojects.com/en/1.1.x/serializer/
CC-MAIN-2018-47
en
refinedweb
Hi> > questions > ------------ > The library for dragging the AJAX debug dialog has the following > notice: "distributed under commons creative license 2.0" - this is > potentially problematic since it's a family of licenses some of which > are open source compatible, some of which are not. what are the > details? I have asked this on our mailinglist. A priliminary search uncovered some nasty licensing issue. Thanks for catching this. I've created a high priority issue to resolve it [1] > niclas already noted the missing headers from threadtest (major due to > number and copyright) These will be altered and fixed in a new attempt. Thanks Niclas! > DISCLAIMER but is missing LICENSE and NOTICE from: > * wicket-*-javadoc.jar > * wicket-*-sources.jar > > (all artifacts MUST have LICENSE and NOTICE) I will work with the maven gurus to ask how to fix this. >.). That said, I'll add the header. >. > i'd recommend creating release notes (but i hope that these are > missing since this is only an audit release). Normally we have those as well, containing links to our documentation etc. Are the release notes part of what is voted on? > BTW i see the current namespace is. do > you plan to change this upon (sometime)? Yep, however, we think we should only change this after graduation as we are still not officially an Apache project :). Thanks for looking into this and providing us with some solid feedback. I expect to have a new release available somewhere next week with all problems solved and questions answered. Martijn [1] --
http://mail-archives.eu.apache.org/mod_mbox/incubator-general/200704.mbox/%[email protected]%3E
CC-MAIN-2020-40
en
refinedweb
#include "common/noncopyable.h" Go to the source code of this file. Internal interface to the QuickTime audio decoder. Note that you need to use this macro from the Common namespace. This is because C++ requires initial explicit specialization to be placed in the same namespace as the template. Definition at line 101 of file singleton.h.
https://doxygen.residualvm.org/d4/d0b/singleton_8h.html
CC-MAIN-2020-40
en
refinedweb
The Turtle module in Python allows you to draw with code. Woot! Of course you’ll need to have Python installed on your local machine. You’ll likely want to work with a code editor, of course, most of which support Python graphics. I use VSCode. Step 1: Create a file with the extension .py on the end. Open it in your editor. Okay, that is two steps. (>.>). Now you’ll want to make sure you include the turtle module with and close that out with import turtle at the end. turtle.done() Next, right under make a new instance of a turtle like so (the second turtle uses a capital T): import turtle kitten = turtle.Turtle() Then you can go ahead and tell it to move! Draw a line forward 100 pixels: kitten.forward(100) Turn left 45 degrees: kitten.left(45) Turn right 90 degrees: kitten.right(90) Seems pretty logical, right? Here is how we can make a square: Make the turtle and line a color: kitten.color("cyan") Or you could even use an RGB or hex value: kitten.color("#ffffff") If you make it cyan it will look like this at the end of the animation: If you wrap your code with a and kitten.begin_fill() , the square will turn blue at the end of the animation. kitten.end_fill() You can make the border cyan and the fill pink like so: kitten.color("cyan", "pink") It’s cute! Hooray! We can draw a square! We just made it through our first day of Python Preschool. Woot!
https://kittenkamala.com/python-turtle-basics-1/
CC-MAIN-2020-40
en
refinedweb
06-23-2011 12:43 AM hello there, SDK is new to me. my colleague and I added ACT.Framework reference to our program. We tried to add ACT.Framework reference just to test how these dll work. We were trying to load a form but we ran the code we encountered the error, "The type or namespace name 'ACT' could not be found (are you missing a using directive or an assembly reference?)". What does it mean? are we missing any libraries? We already have the files from ACT_Premium_2011\ACTWG\GlobalAssemblyCache of the ACT installer. What else are we missing? Do we still need to download something? Is there a sort of installer for us to be able to code plug-ins for ACT? Please advise, thanks in advance 06-23-2011 01:17 AM Hi, I'm most probably not understanding properly here so apologies! If you've got a ref to Act.Framework also add a ref to Act.Shared.Collections in your project. You should be able to simply start using the framework by including a using statement at the top of your cs file (if your doing it in C#)? e.g. using Act.Framework; using Act.Framework.Database; using Act.Framework.Contacts; etc 06-23-2011 06:54 AM Based on the error message I believe Vivek is correct, your simply missing the using statements for the assemblies you referenced. 06-29-2011 06:18 PM Hello, Thanks for responding to my post. I'm sorry for not being so clear. I have hear my sample codes for your reference: using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using Act.Framework; namespace WindowsFormsApplication1 { static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { ActFramework ACTFM = new ActFramework(); ACTFM.LogOn("N:\\TSA\\Zilin\\ACTTest\\TSATest.pad", "tan zilin", ""); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } } I use Visual Studio 4.0, dotnet framework 4.0, for ACT! 2011 SDK. I have installed the program to a 32-bit computer. When I build the solution, I get the error "The type or namespace 'Framework' does not exist in the namespace 'Act' (are you missing an assembly reference?)" Please advise how to fix the issue. Thanks, Raquel 06-29-2011 11:43 PM Hi Raquel, ACT! is a .NET 3.5 product. You'll need to convert your project into .NET 3.5 Framework project. Also ensure that if you are coding on a 64bit machine that you compile for a x86 CPU not "any". HTH 06-30-2011 09:14 AM Hi Vivek, Thanks for your response. When you say, "You'll need to convert your project into .NET 3.5 Framework Project." Does this mean that if I have different versions of .NET (say 2.0, 3.0, 3.5 and 4.0) in my computer, I need to uninstall everything but not the .NET 3.5? Also, what do you mean by "Also ensure that if you are coding on a 64-bit machine that you compile for x86 CPU not "any""? Sorry am not really an IT expert expecially when it comes to hardware-related items. Thanks in advance. Raquel 06-30-2011 09:30 AM No, you don't need to remove anything. You can simply right click on your project file, should be right under the solution. In the properties windows on the application tab you can select the target framework. As for the processor that's being targeted, at the top of visual studio, next to the debug/release drop down is another drop down where you can select your target processor. On 64bit systems you can choose x86 or x64, you want to select the former. 06-30-2011 12:13 PM Just to build off of what was previously stated in case others run into this; those options aren't shown by default in VB Express 2010. However, x86 is the default target for this edition (not sure about C#/C++ editions). You can show them by following the below steps (grabbed from Here) Tools > Settings > Expert Settings then, Tools > Options > Project and Solutions > General > Show advanced build configuration-Checked. Finally, Build > Configuration Manager. HTH 07-03-2011 09:14 AM Hi Matthew and Knif, I have selected the .NET 3.5 Framework Project and x86 processor. The form has loaded when I click Start Debug. . However, when I try to Build Solution, I encountered the following: Warning 1 The primary reference "Microsoft.CSharp", which is a framework assembly, could not be resolved in the currently targeted framework. ".NET Framework,Version=v3.5". To resolve this problem, either remove the reference "Microsoft.CSharp" or retarget your application to a framework version which contains "Microsoft.CSharp". WindowsFormsApplication1 Warning 2 The referenced component 'Microsoft.CSharp' could not be found. What does it mean? Should I remove Microsft.CSharp? I’ve searched from my computer such file, but I didn’t find any. Thanks, Raquel 07-05-2011 05:46 AM Raquel, Based on what you said, you have a bad reference in your project. You'll need to remove it. This page shows how you can add/remove references. I will note that is for Visual Studio 2010 (but the process is basically the same back to VS2003). If you keep having problems, after removing the reference, I'm going to suggest that you start a new project and set it up to the proper framework target (.NET 3.5, x86 CPU) before you add any of your code. Make sure it builds properly before you start, presumably, copying and pasting code. It just sounds like the configuration of the project got messed up somehow when you switched the target framework.
https://community.act.com/t5/Act-Developer-s-Forum/ACT-Framework-Error/m-p/138960
CC-MAIN-2020-40
en
refinedweb
Perl::Critic::Policy::CodeLayout::RequireTrailingCommaAtNewline − comma at end of list at newline This policy is part of the "Perl::Critic::Pulp" addon. It asks you to put a comma at the end of a list etc when it ends with a newline, @array = ($one, $two # bad ); @array = ($one, $two, # ok ); This makes no difference to how the code runs, so the policy is under the "cosmetic" theme (see " POLICY THEMES " in Perl::Critic). The idea is to make it easier when editing the code -- you don’t have to remember to add a comma to a preceding item when extending or cutting and pasting lines to re-arrange. If the closing bracket is on the same line as the last element then no comma is required. It can be be used if desired, but is not required. $hashref = { abc => 123, def => 456 }; # ok Parens around an expression are not a list, so nothing is demanded in for instace $foo = ( 1 + 2 # ok ); But a single element paren expression like this is treated as a list when it’s in an array assignment or a function or method call. @foo = ( 1 + 2 # bad ); @foo = ( 1 + 2, # ok ); Return Statement A "return" statement with a single value is considered an expression so a trailing comma is not required. return ($x + $y # ok ); Whether such code is a single-value expression, or a list of only one value, depends on how the function is specified. There’s nothing much in the text (nor even at runtime) which would say for sure. It’s handy to included parens around a single-value expression to make it clear some big arithmetic is all part of the return, especially if you can’t remember precedence levels very well. And in such an expression a newline before the final ")" can help keep a comment together with a term for a cut and paste, or not lose a paren if commenting the last line etc. So for now the policy is to be lenient. Would an option be good though? Is a "return" statement an expression or a list? return (1 + 2 + 3 # should this be ok, or not? ); Strictly speaking it would depend whether the intention in subr is a list return or a single value, where there’s no way to distinguish. Perhaps it should be allowed if there’s just one expression. Disabling As always if you don’t care about this you can disable "RequireTrailingCommaAtNewline" from .perlcriticrc in the usual way (see " CONFIGURATION " in Perl::Critic), [−CodeLayout::RequireTrailingCommaAtNewline] Other Ways to Do It This policy is a variation of "CodeLayout::RequireTrailingCommas". That policy doesn’t apply to function calls or hashref constructors, and you may find its requirement for a trailing comma in even one-line lists like "@x=(1,2,)" too much. <>.
http://man.m.sourcentral.org/f17/3+Perl::Critic::Policy::CodeLayout::RequireTrailingCommaAtNewline
CC-MAIN-2020-40
en
refinedweb
Create Beautiful Bar Chart Java Tutorial using Netbeans IDE This tutorial is all about How to Create Beautiful Bar Chart Java Tutorial using Netbeans IDE.In this tutorial you will learn how to create your own professional quality bar charts in your applications.This tutorial uses jfreechart-1.0.19.jar version Package library and Netbeans IDE. Please follow all the steps below to complete this tutorial. Create Beautiful Bar Chart Java Tutorial using Netbeans IDE steps Download the JFreeChart library in this website If the download is done, add it to your project by right clicking the Libraries folder located in your project and then Add JAR/Folder and browse where your downloaded package located and inside lib folder add all the libraries listed. The next step is to Create your project by clicking file at the top of your project and then click new project, you can name your project whatever you want. And then drag a button in your form. After that you need to import these packages import java.awt.Color; import javax.swing.UIManager; import org.jfree.chart.ChartFactory; import org.jfree.chart.ChartFrame; import org.jfree.chart.JFreeChart; import org.jfree.chart.plot.CategoryPlot; import org.jfree.chart.plot.PlotOrientation; import org.jfree.data.category.DefaultCategoryDataset; above your class and then double click your button and copy paste the code below:[java] DefaultCategoryDataset dataset = new DefaultCategoryDataset(); dataset.setValue(80, “Marks”, “Value 1”); dataset.setValue(70, “Marks”, “Value 2”); dataset.setValue(75, “Marks”, “Value 3”); dataset.setValue(85, “Marks”, “Value 4”); dataset.setValue(90, “Marks”, “Value 5”); JFreeChart chart = ChartFactory.createBarChart(“Student’s Score”, “Student’s Name”,”Marks”, dataset, PlotOrientation.VERTICAL,false,true,false); CategoryPlot p = chart.getCategoryPlot(); p.setRangeGridlinePaint(Color.black); ChartFrame frame = new ChartFrame(“Bar Chart Report”,chart); frame.setVisible(true); frame.setSize(650,550);[.) Create a Login Form in Java 2.) Create MySQL Connection in Java
https://itsourcecode.com/tutorials/java-tutorial/create-beautiful-bar-chart-java-tutorial/
CC-MAIN-2020-40
en
refinedweb
How to know if a render pass is enabled On 13/03/2013 at 10:23, xxxxxxxx wrote: Hi I am able to get the list of passes via GetFirstMultipass(). And I can request the description name, but I was not able to get the "checked" status of the pass. Any hints? Thanks On 13/03/2013 at 11:11, xxxxxxxx wrote: Is this what you want? import c4d def main() : rd = doc.GetActiveRenderData() #gets the render settings rd[c4d.RDATA_MULTIPASS_ENABLE]= True #Enables the mp option rd.Message(c4d.MSG_UPDATE) #Tell c4d you changed it state = rd[c4d.RDATA_MULTIPASS_ENABLE] #Gets the mp on/off state print state c4d.EventAdd() if __name__=='__main__': main() -ScottA On 13/03/2013 at 12:01, xxxxxxxx wrote: the videopost state is implemented with the BaseList2d bits. edit something like this : state = myVideoPostNode.GetBit(c4d.BIT_VPDISABLED) On 13/03/2013 at 13:28, xxxxxxxx wrote: Thanks, I will try the getBit state.
https://plugincafe.maxon.net/topic/7025/7928_how-to-know-if-a-render-pass-is-enabled
CC-MAIN-2020-40
en
refinedweb
CHANGES¶ Add Python3 wallaby unit tests Update master for stable/victoria 2.3.0¶ Fix wsgi SSL tests for wsgi module under python 3 Reactivate wsgi test related to socket option under python 3 Fix wsgi/SSL/ipv6 tests for wsgi module under python 3 Fix some SSL tests for wsgi module under python 3 Raise minimum version of eventlet to 0.25.2 Fix pygments style Stop to use the __future__ module 2.2.0¶ Drop six usages Fix hacking min version to 3.0.1 Switch to newer openstackdocstheme and reno versions Remove the unused coding style modules Remove translation sections from setup.cfg Align tests with monkey patch original current_thread _active Remove monotonic usage Align contributing doc with oslo’s policy Monkey patch original current_thread _active Bump default tox env from py37 to py38 Add py38 package metadata Use unittest.mock instead of third party mock Add release notes links to doc index Add Python3 victoria unit tests Update master for stable/ussuri Cleanup py27 support 2.1.0¶ Update eventlet Update the minversion parameter remove outdated header reword releasenote for py27 support dropping 1.41.1¶ Add ‘is_available’ function tox: Keeping going with docs Switch to official Ussuri jobs Extend test cert validity to 2049 Update the constraints url 1.40.1¶ Polish usage.rst restart: don’t stop process on sighup when mutating Move doc related modules to doc/requirements.txt Add Python 3 Train unit tests 1.39.0¶ Cap Bandit below 1.6.0 and update Sphinx requirement Add workers’ type check before launching the services Replace git.openstack.org URLs with opendev.org URLs OpenDev Migration Patch Dropping the py35 testing Update master for stable/stein 1.38.0¶ Update oslo.service to require yappi 1.0 or newer add python 3.7 unit test job Update hacking version 1.36.0¶ Profile Oslo Service processes Use eventletutils Event class Avoid eventlet_backdoor listing on same port 1.35.0¶ Use template for lower-constraints Deprecate the ThreadGroup.cancel() API Document the threadgroup module Actually test child SIGHUP signal Restore correct signal handling in Python3 Add stop_on_exception to TG timers Add better timer APIs to ThreadGroup Update mailinglist from dev to discuss Use SleepFixture in looping call test suite 1.32.1¶ Fix stop of loopingcall Use eventlet Event for loopingcall events Clean up .gitignore references to personal tools Always build universal wheels 1.32.0¶ Ensure connection is active in graceful shutdown tests Stop asserting on Eventlet internals Skips signal handling on Windows add lib-forward-testing-python3 test job add python 3.6 unit test job import zuul job settings from project-config Update reno for stable/rocky 1.31.3¶ Remove unnecessary pyNN testenv Convert oslo.service to using stestr Add release notes link to README Fix oslo.service ProcessLauncher fails to call stop fix tox python3 overrides Add test dependency on requests Remove moxstubout 1.31.2¶ [ThreadGroup] Don’t remove timer when stop timers Make lower-constraints job voting tox.ini: Use python3.5 in py35 environment Python 3: Fix eventlet wakeup after signal Python 3: Fix non-deterministic test Remove stale pip-missing-reqs tox test Trivial: Update pypi url to new url add lower-constraints job move doc8 test to pep8 job set default python to python3 1.30.0¶ Imported Translations from Zanata Imported Translations from Zanata Update links in README Imported Translations from Zanata Updated from global requirements Update reno for stable/queens Updated from global requirements Updated from global requirements Updated from global requirements 1.29.0¶ Maintain shared memory after fork in Python >=3.7 Updated from global requirements Revert “Permit aborting loopingcall while sleeping” 1.28.0¶ Remove -U from pip install Avoid tox_install.sh for constraints support Updated from global requirements Remove setting of version/release from releasenotes Updated from global requirements 1.27.0¶ Updated from global requirements change periodic_task to catch all exceptions including BaseException Fix bandit scan and make it voting Imported Translations from Zanata 1.26.0¶ Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements Imported Translations from Zanata Updated from global requirements Updated from global requirements Update reno for stable/pike Updated from global requirements 1.24.1¶ rearrange existing documentation to fit the new standard layout switch from oslosphinx to openstackdocstheme 1.24.0¶ Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements Permit aborting loopingcall while sleeping Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements 1.21.0¶ Remove log translations Use Sphinx 1.5 warning-is-error Fix some reST field lists in docstrings Updated from global requirements 1.20.0¶ Updated from global requirements [Fix gate]Update test requirement Updated from global requirements Updated from global requirements Fix race condition with fast threads pbr.version.VersionInfo needs package name (oslo.xyz and not oslo_xyz) Remove duplicated register_opts call Update reno for stable/ocata Remove references to Python 3.4 1.19.0¶ Add FixedIntervalWithTimeoutLoopingCall Add Constraints support Show team and repo badges on README 1.18.0¶ Updated from global requirements Updated from global requirements Updated from global requirements Imported Translations from Zanata Update .coveragerc after the removal of respective directory Delete python bytecode file 1.17.0¶ Changed the home-page link Updated from global requirements Replace ‘MagicMock’ with ‘Mock’ Enable release notes translation Updated from global requirements Updated from global requirements Updated from global requirements 1.13.0¶ Updated from global requirements Updated from global requirements Updated from global requirements Add reno for release notes management Updated from global requirements 1.12.0¶ Imported Translations from Zanata Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements Updated from global requirements 1.9.0¶ ‘_’ function Fix Heartbeats stop when time is changed Updated from global requirements 1.7.0¶ Updated from global requirements Correct some help text Fix typo in help text wsgi: decrease the default number of greenthreads in pool Updated from global requirements 1.6.0¶ Updated from global requirements Allow the backdoor to serve from a local unix domain socket Updated from global requirements 1.4.0¶() 1.3.0¶ 1.2.0¶ Updated from global requirements Fix a race condition in signal handlers Enable py3 mock.patch of RuntimeError Delete python bytecode before every test run Trival: Remove ‘MANIFEST.in’ 1.1.0¶ Avoid warning when time taken is close to zero Update the _i18n.py file and fix the domain value Add Bandit to tox for security static analysis Code refactoring of ThreadGroup::stop_timers() 1.0.0¶ Updated from global requirements 0.13.0¶ Default value of graceful_shutdown_timeout is set to 60sec Updated from global requirements Logger name argument was added into wsgi.Server constructor Avoid the dual-naming confusion Forbid launching services with 0 or negative number of workers 0.12.0¶ Document graceful_shutdown_timeout config option Remove py26 env from test list Added config option graceful_shutdown_timeout Updated from global requirements Add docstring for LoopingCallBase._start() Updated from global requirements 0.11.0¶ Updated from global requirements Add doc8 to py27 tox env and fix raised issues Document termination of children on SIGHUP Updated from global requirements Updated from global requirements 0.10.0¶ Avoid removing entries for timers that didn’t stop Cleanup thread on thread done callback Move ‘history’ -> release notes section Add unit tests for sslutils Expand README and clean up intro to sphinx docs Add shields.io version/downloads links/badges into README.rst add auto-generated docs for config options Move backoff looping call from IPA to oslo.service Change ignore-errors to ignore_errors Fix the home-page value in setup.cfg WSGI module was corrected Updated from global requirements ThreadGroup’s stop didn’t recognise the current thread correctly doing monkey_patch for unittest 0.9.0¶ Handling corner cases in dynamic looping call Change DEBUG log in loopingcall to TRACE level log Updated from global requirements 0.7.0¶ Updated from global requirements Update “Signal handling” section of usage docs Use oslo_utils reflection to get ‘f’ callable name Updated from global requirements Prefix the ‘safe_wrapper’ function to be ‘_safe_wrapper’ Setup translations Check that sighup is supported before accessing signal.SIGHUP Use contextlib.closing instead of try … finally: sock.close Avoid using the global lockutils semaphore collection Updated from global requirements 0.6.0¶ Added newline at end of file Added class SignalHandler Updated from global requirements Activate pep8 check that _ is imported Denote what happens when no exceptions are passed in Allow LoopingCall to continue on exception in callee 0.5.0¶ Updated from global requirements Updated from global requirements Updated from global requirements Add oslo_debug_helper to tox.ini Add usage documentation for oslo_service.service module 0.4.0¶ Updated from global requirements save docstring, name etc using six.wraps Move backdoor-related tests from test_service Add mock to test_requirements Remove usage of mox in test_eventlet_backdoor 0.3.0¶ 0.2.0¶ 0.1.0¶ Test for instantaneous shutdown fixed Graceful shutdown WSGI/RPC server Use monotonic.monotonic and stopwatches instead of time.time Updated from global requirements Eventlet service fixed Add documentation for the service module Improve test coverage for loopingcall module Add oslo.service documentation Remove usage of global CONF Make logging option values configurable Introduce abstract base class for services Add entrypoints for option discovery Updated from global requirements Move the option definitions into a private file Fix unit tests Fix pep8 exported from oslo-incubator by graduate.sh Clean up logging to conform to guidelines Port service to Python 3 Test for shutting down eventlet server on signal service child process normal SIGTERM exit Revert “Revert “Revert “Optimization of waiting subprocesses in ProcessLauncher””” Revert “Revert “Optimization of waiting subprocesses in ProcessLauncher”” Revert “Optimization of waiting subprocesses in ProcessLauncher” ProcessLauncher: reload config file in parent process on SIGHUP Add check to test__signal_handlers_set Store ProcessLauncher signal handlers on class level Remove unused validate_ssl_version Update tests for optional sslv3 Fixed ssl.PROTOCOL_SSLv3 not supported by Python 2.7.9 Optimization of waiting subprocesses in ProcessLauncher Switch from oslo.config to oslo_config Change oslo.config to oslo_config Remove oslo.log code and clean up versionutils API Replace mox by mox3 Allow overriding name for periodic tasks Separate add_periodic_task from the metaclass __init__ Upgrade to hacking 0.10 Remove unnecessary import of eventlet Added graceful argument on Service.stop method Remove extra white space in log message Prefer delayed %r formatting over explicit repr use ServiceRestartTest: make it more resilient threadgroup: don’t log GreenletExit add list_opts to all modules with configuration options Remove code that moved to oslo.i18n Remove graduated test and fixtures libraries rpc, notifier: remove deprecated modules Let oslotest manage the six.move setting for mox Remove usage of readlines() Allow test_service to run in isolation Changes calcuation of variable delay Use timestamp in loopingcall Remove unnecessary setUp function Log the function name of looping call pep8: fixed multiple violations Make periodic tasks run on regular spacing interval Use moxstubout and mockpatch from oslotest Implement stop method in ProcessLauncher Fix parenthesis typo misunderstanding in periodic_task Fix docstring indentation in systemd Remove redundant default=None for config options Make unspecified periodic spaced tasks run on default interval Make stop_timers() method public Remove deprecated LoopingCall Fixed several typos Add graceful stop function to ThreadGroup.stop Use oslotest instead of common test module Remove duplicated “caught” message Move notification point to a better place Remove rendundant parentheses of cfg help strings Adds test condition in test_periodic Fixed spelling error - occured to occurred Add missing _LI for LOG.info in service module notify calling process we are ready to serve Reap child processes gracefully if greenlet thread gets killed Improve help strings for sslutils module Remove unnecessary usage of noqa Removes use of timeutils.set_time_override Update oslo log messages with translation domains Refactor unnecessary arithmetic ops in periodic_task Refactor if logic in periodic_task Use timestamp in periodic tasks Add basic Python 3 tests Clear time override in test_periodic Don’t share periodic_task instance data in a class attr Revert “service: replace eventlet event by threading” Simplify launch method Simple typo correction Cleanup unused log related code Utilizes assertIsNone and assertIsNotNone Fix filter() usage due to python 3 compability Use hacking import_exceptions for gettextutils._ threadgroup: use threading rather than greenthread disable SIGHUP restart behavior in foreground service: replace eventlet event by threading Allow configurable ProcessLauncher liveness check Make wait & stop methods work on all threads Typos fix in db and periodic_task module Remove vim header os._exit in _start_child may cause unexpected exception Adjust import order according to PEP8 imports rule Add a link method to Thread Use multiprocessing.Event to ensure services have started Apply six for metaclass Removed calls to locals() Move comment in service.py to correct location Fixes issue with SUGHUP in services on Windows Replace using tests.utils part2 Bump hacking to 0.7.0 Replace using tests.utils with openstack.common.test Refactors boolean returns Add service restart function in oslo-incubator Fix stylistic problems with help text Enable H302 hacking check Convert kombu SSL version string into integer Allow launchers to be stopped multiple times Ignore any exceptions from rpc.cleanup() Add graceful service shutdown support to Launcher Improve usability when backdoor_port is nonzero Enable hacking H404 test Enable hacking H402 test Enable hacking H401 test Fixes import order nits Add DynamicLoopCall timers to ThreadGroups Pass backdoor_port to services being launched Improve python3 compatibility Use print_function __future__ import Improve Python 3.x compatibility Import nova’s looping call Copy recent changes in periodic tasks from nova Fix IBM copyright strings Removes unused imports in the tests module update OpenStack, LLC to OpenStack Foundation Add function for listing native threads to eventlet backdoor Use oslo-config-2013.1b3 Support for SSL in wsgi.Service Replace direct use of testtools BaseTestCase Use testtools as test base class ThreadGroup remove unused name parameters Implement importutils.try_import Fix test cases in tests.unit.test_service Don’t rely on os.wait() blocking Use Service thread group for WSGI request handling Make project pyflakes clean Replace try: import with extras.try_import raise_on_error parameter shouldn’t be passed to task function Account for tasks duration in LoopingCall delay updating sphinx documentation Enable eventlet_backdoor to return port Use the ThreadGroup for the Launcher Change RPC cleanup ordering threadgroup : greethread.cancel() should be kill() Use spawn_n when not capturing return value Make ThreadGroup derived from object to make mocking possible Don’t log exceptions for GreenletExit and thread_done Log CONF from ProcessLauncher.wait, like ServiceLauncher Import order clean-up Added a missing `cfg` import in service.py Log config on startup Integrate eventlet backdoor Add the rpc service and delete manager Use pep8 v1.3.3 Add threadgroup to manage timers and greenthreads Add basic periodic task infrastructure Add multiprocess service launcher Add signal handling to service launcher Basic service launching infrastructure Move manager.py and service.py into common Copy eventlet_backdoor into common from nova Copy LoopingCall from nova for service.py initial commit Initial skeleton project
https://docs.openstack.org/oslo.service/latest/user/history.html
CC-MAIN-2020-40
en
refinedweb
We are about to switch to a new forum software. Until then we have removed the registration on this forum. Can't we use processing math functions when creating a library in java for processing? I added the core.jar but get multiple errors when I use functions like cos() or int(). Also, the HelloLibrary.java file created has a package template.library; package name. Do we rename it entirely or just the "template" part? i.e. package testLibrary; or package testLibrary.library; Thanks in advance, Alex Answers Either use import static processing.core.PApplet.*;: cos(radians(90)); Or w/ import processing.core.PApplet;, prefix them w/ PApplet: PApplet.cos(PApplet.radians(90))); Keep in mind we can directly access members from Java's class Math as well: Math.cos(Math.PI); Just choose a lowercase name for package: package org.alex_pr.name_of_my_library;or something like this. ~:> That works nicely! Thanks a lot! I also want to use the methods in processing without prefixing the class. Is there any way to do that? I tried import static myLibrarybut this displays an error "only a type can be imported, library resolves to a package." import staticworks only for the staticmembers of a classor interface. And we can't use it if it name-conflicts w/ something else in our code. So do I have to prefix the methods in processing? It would be really useful to simple type the method name (like we do for processing functions line(), translate() etc.) A sketch program (.pde) is a subclass of PApplet. That's why we don't need to prefix its members w/ this.if we don't want to. You can explore this link: although I believe the PApplet offers you what you need. A quote from the website above: Kf This is a bit off topic but is it possible to create a public variable inside a class and link it with a method (of the same class and the same type as the variable) so that each time that variable is used, it gets its value from the linked function? In JS it is, but not in Java. ~O) That is also possible in Python. But the Java way is to write getters and setters for almost everything -- that way if you ever decide to mediate variable access with a function as you describe your access method doesn't change, because every variable is already mediated by a .get() type of function.
https://forum.processing.org/two/discussion/22309/use-processing-math-functions-in-a-processing-library
CC-MAIN-2020-40
en
refinedweb
This post is part of an ongoing series that I am writing with @jesseliberty .The original post is on his site and the spanish version on mine. In this series, we will explore a number of topics on Advanced Xamarin.Forms that have arisen in our work for clients, or at my job as Principal Mobile Developer for IFS Core. Before we can start, however, we need to build an API that we can program against. Ideally, this API will support all CRUD operations and will reside on Azure. Building the API Begin in Visual Studio, creating an API Core application, which we’ll call BookStore. Change the name of the values controller to BookController. Create a new project named Bookstore.Dal (Data access layer), within which you will create two folders: Domain and Repository. Install the Nuget package LiteDB a NoSql database. In the BookRepository folder, create a file BookRepository.cs. In that file you will initialize LiteDB and create a collection to hold our books. public class BookRepository { private readonly LiteDatabase _db; public BookRepository() { _db = new LiteDatabase("bookstore.db"); Books.EnsureIndex(x => x.BookId); } If bookstore.db does not exist, it will be created. Similarly if the index doesn’t exist, it too will be created. Go up to the API project and right-click on Dependencies to add a reference to our Bookstore.dal project. Open the BookstoreController file. Add using statements for Bookstore.Dal.Domain and Bookstore.Dal.Repository. Create an instance of the BookRepository in the controller, and initialize it. public class BooksController : ControllerBase { private BookRepository _bookRepository; public BooksController() { _bookRepository = new BookRepository(); } Creating the Book Object We will need a simple book object (for now) to send and retrieve from our repository and database. Start by creating the book object (feel free to add more fields) public class Book { public Book() { BookId = Guid.NewGuid().ToString(); } public string BookId { get; set; } public string ISBN { get; set; } public string Title { get; set; } public List<Author> Authors { get; set; } public double ListPrice { get; set; } } Notice that the Book has a List of Author objects. Let’s create that class now, again keeping it very simple, public class Author { public string AuthorId { get; set; } public string Name { get; set; } public string Notes { get; set; } } The Repository Let’s return to the repository. Earlier we initialized the database. Now all we need is to create the methods. We begin by creating the connection of our local List of Book to the db table: public LiteCollection<Book> Books => _db.GetCollection<Book>(); Next comes our basic methods: get, insert, update, delete: public IEnumerable<Book> GetBooks() { return Books.FindAll(); } public void Insert(Book book) { Books.Insert(book); } public void Update(Book book) { Books.Update(book); } public void Delete(Book book) { Books.Delete(book.BookId); } Connecting to the API We are ready to create the CRUD methods in the API and connect them to the methods in the repo. [HttpGet] public ActionResult<IEnumerable<Book>> Get() { var books = _bookRepository.GetBooks(); return new ActionResult<IEnumerable<Book>>(books); } This will return the retrieved data to the calling method. Note that this will retrieve all the books. Following along with using the HTTP syntax, let’s create the Post method: [HttpPost] public void Post([FromBody] Book book) { _bookRepository.Insert(book); } Testing To check that we can now add and retrieve data, we’ll create a Postman collection. Bring up the Postman Desktop App (the Chrome extension has been deprecated), and see or create a Bookstore collection. In that collection will be your post and get methods. To keep this simple let’s add a book using JSON. Click on Headers and set authorization to applicatoin/json. Then click on body and enter the Json statement: {"Title":"Ulyses"} Press send to send this to our local database. To ensure it got there, switch Postman to Get, Enter the url and press send. You should get back your book object. [ { "bookId": "d4f8fa63-1418-4e06-8c64-e8408c365b13", "isbn": null, "title": "Ulyses", "authors": null, "listPrice": 0 } ] The next step is to move this to Azure. That is easier than it sounds. First, go to the Azure site.Select the Start Free button. Create an account with your email address and after you confirm your email address you are ready to use the account form in Visual Studio. Our next step is to publish the API. Select the publish option and select the following steps: When you are creating the App Service to be used in the hosting plan, you can select the free tier. After publishing is complete, you will have access to your API in the endpoint you just created. In our example, the endpoint is You can test your new endpoint in Postman and then use it in your project. By Jesse Liberty (Massachusetts US) and Rodrigo Juarez (Mendoza, Argentina) About Jesse Liberty Jesse Liberty is the Principal Mobile Developer with IFS Core. He has three decades of experience writing and delivering software projects. He is the author of 2 dozen books and a couple dozen Pluralsight & LinkedIn Learning courses, and has been. Posted on by: Rodrigo Juarez Just a programmer using Microsoft tools to create awesome apps! Discussion
https://dev.to/codingcoach/advanced-xamarin-forms-part-1-the-api-3k6p
CC-MAIN-2020-40
en
refinedweb
CreatePartnerEventSource Called by an SaaS partner to create a partner event source. This operation is not used by AWS customers. Each partner event source can be used by one AWS account to create a matching partner event bus in that AWS account. A SaaS partner must create one partner event source for each AWS account that wants to receive those event types. A partner event source creates events based on resources within the SaaS partner's service or application. An AWS account that creates a partner event bus that matches the partner event source can use that event bus to receive events from the partner, and then process them using AWS Events rules and targets. Partner event source names follow this format: partner_name/event_namespace/event_name partner_name is determined during partner registration and identifies the partner to AWS customers. event_namespace is determined by the partner and is a way for the partner to categorize their events. event_name is determined by the partner, and should uniquely identify an event-generating resource within the partner system. The combination of event_namespace and event_name should help AWS customers decide whether to create an event bus to receive these events. Request Syntax { "Account": " string", "Name": " string" } Request Parameters For information about the parameters that are common to all actions, see Common Parameters. The request accepts the following data in JSON format. - Account The AWS account ID that is permitted to create a matching partner event bus for this partner event source. Type: String Length Constraints: Fixed length of 12. Pattern: \d{12} Required: Yes - Name The name of the partner event source. This name must be unique and must be in the format partner_name/event_namespace/event_name. The AWS account that wants to use this partner event source must create a partner event bus with a name that matches the name of the partner event source. Type: String Length Constraints: Minimum length of 1. Maximum length of 256. Pattern: aws\.partner(/[\.\-_A-Za-z0-9]+){2,} Required: Yes Response Syntax { "EventSourceArn": "string" } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - EventSourceArn The ARN of the partner event source. See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreatePartnerEventSource.html
CC-MAIN-2020-40
en
refinedweb
In this tutorial you are going to learn how to create unit tests for DAOs. As a prerequisite, you fundamental knowledge of DAOs is expected. When it comes to testing DAO components, we really have 2 approaches. One is to use the mocking framework Mockito and the other is to create a couple of classes that work together. In this tutorial, we will be using Mockito. EmployeeDAO public class EmployeeDAO implements Dao<Employee> { // will act as a "mini-database" private List<Employee> employees = new ArrayList<>(); private SessionFactory sessFactory; // Constructor; } // Overriding the Dao interface methods @Override public Employee get(long id) { return employees.get((int) id)); } @Override public List<Employee> getAll() { return employees; } @Override public void save(Employee emp) { employees.add(emp); } @Override public void update(Employee employee, String[] params) { // Check for validity if (params[0].length() != 0|| params[1].length() != 0) { // Initialize the employee employee.setName(params[0]); employee.setEmail(params[1]); // Add the Initialized employee to the list of employees (a.k.a. DB) employees.add(employee); } } @Override public void delete(Employee employee) { employees.remove(employee); } } If you are wondering how the Employee class looks like, here it is: Employee.java public class Employee { //members private String name; private String email; // constructor Employee(String n, String e) { name = n; email = e; } // setter methods public void setName(String n) { name = n; } public void setEmail(String e) { email = e; } // getter methods public String getName() { return name; } public String getEmail() { return email; } } The employee class is just a standard constructor/getter/setter methods. Now it is time to create the test class for the DAO class. EmployeeDAOTest @ExtendWith(SpringExtension.class) @Tag("DAO") public class EmployeeDAOTest { @MockBean private SessionFactory sessionFactory; @MockBean private Session session; private EmployeeDAO employeeDAO; @BeforeEach public void prepare() throws Exception { Mockito.when(sessionFactory.getCurrentSession()).thenReturn(session); employeeDAO = new EmployeeDAO(sessionFactory); } @Test public void should_returnNull_ifNonExistent() { Query q = Mockito.mock(Query.class); Mockito.when(session.getNamedQuery("get")).thenReturn(q); Mockito.when(q.getResultList()).thenReturn(new ArrayList()); List l = employeeDAO.getAll(); assertAll("Employees", () -> assertNotEquals(l, null), () -> assertEquals(l.size(), 0)); } } Breakdown There are a couple of things to break down in the preceding class. First, note that we are using the @MockBean annotation which simply put adds mock objects to the application context. Meaning, this will replace any existing bean of the same type. In case there haven’t been any existing beans, a new one will be created. Then we use the @BeforeEach annotation which will get executed before all the unit tests run. Hence the name of the method, prepare, we are preparing the “environment” for the unit tests. Within the prepare method, we have a couple of things. Since SessionFactory is a functional interface, it can be used as the assignment for a lambda expression. We use the Mockito.when method in the prepare() method. It is used for mocking methods which given an exception during a call. So the line Mockito.when(sessionFactory.getCurrentSession()).thenReturn(session); really is saying “get me the current session and if there are no exception, return me the session”. And after that, we simply assign the DAO instance to a brand new one: employeeDAO = new EmployeeDAO(sessionFactory); After that, we have our very test purpose method, called should_returnNull_ifNonExistent() and does just what the name of it says: returns a null, or an empty ArrayList if there isn’t a list to return. In our EmployeeDAO implementation though, we will never run the risk of getting null as we add three Employee entries in the List as soon as we create an instance of EmployeeDAO():; } Note the @Test annotation of the method. This specifies that this method is for testing purposes. We get the “get” method overriden by us in the EmployeeDAO class and if there are not exceptions, thenReturn the q which is of type Query. Then we get simply return a new empty array list. After that, we use the getAll() method that should give us back 3 entries at least and then we use the assertAll() method which combines assertNotEquals and assertEquals. The lines: assertAll("Employees", () -> assertNotEquals(l, null), () -> assertEquals(l.size(), 0)); are really saying check whether l (the list that contains the entries that have been returned from the getAll() method) is not empty and check if the size of the list is 0. If it is, return true, assertEquals would evaluate to true.
https://javatutorial.net/how-to-unit-test-dao-components
CC-MAIN-2020-40
en
refinedweb
This topic describes how to create a namespace. Prerequisites You have created a Kubernetes cluster. For more information, see Create an ACK cluster. Background information In a Kubernetes cluster, you can use namespaces to create multiple virtual spaces. When a large number of users share a cluster, multiple namespaces can be used to effectively divide different work spaces and assign cluster resources to different tasks. Furthermore, you can use resource quotas to assign resources to each namespace. Procedure - Log on to the Container Service console. - In the left-side navigation pane under Kubernetes, choose . - Select the target cluster, and then click Create in the upper-right corner. - In the displayed dialog box, set a namespace. - Name: Enter a name for the namespace name. The name must be 1 to 63 characters in length and can contain numbers, letters, and hyphens (-). It must start and end with a letter or number. In this example, test is used as the name. - Tags: Add one or multiple tags to the namespace to identify the characteristics of the namespace. For example, you can set a tag to identify that this namespace is used for the test environment. You can enter a variable name and a variable value, and then click Add on the right to add a tag to the namespace. - Click OK. - The namespace named test is displayed in the namespace list.
https://www.alibabacloud.com/help/tc/doc-detail/89265.htm
CC-MAIN-2020-40
en
refinedweb
I'm not sure if the title is right, but I'm migrating a system made of FuelPHP to the server, but the following error has occurred and I can't get it to work properly. Fatal error: Access to undeclared static property: Controller_Auth :: $this in ... (abbreviated) The Controller_Auth class inherits from Basecontroller. It seems that an error has occurred in the following description part in Basecontroller. if (method_exists (static :: $this, 'before_controller')) { static :: before_controller (); } I have investigated a lot, but is it NG to use "::" to call a method that doesn't declare static? There was certainly no method that declared static in Controller_Auth. If i don't know how to rewrite, and if you know how to rewrite, please let me know. - Answer # 1 Related articles - php - how to convert collection class to builder class with laravel - access modifiers in php class inheritance - i want to call a class in php - Common methods for calling shell commands in Python (4 types) - php - class not found when using namespace in exception handling - php - another class with the same mechanism it doesn't work well when used together - php - syntax error occurs when calling with require_once - in c#, i want to give parents the methods and properties of the class that the generic contains - db access using thread class in php - i want to add a guard to the state transition of class in php - php - about class access after instantiation - centralization and calling of code using functionphp in wordpress - php - how to write when calling a property from an instance - php - i want to know how to add a special class to only one page in wordpress - java - i want to put two methods in the graphics class, can anyone please tell me how - php - class definition related occurrence of unexpected t_variable, expecting t_function - php - calling and displaying data from sql does not work - php - in the definition of the class, notice: undefined variable: ingredients is displayed static :: $thisrecognizes the PHP 5.3 series andhalfwayin PHP 5.4 to 5.5. An error occurs in PHP 5.4.22, PHP 5.5.6 or higher (including PHP 5.6 of course) with the change of # 65911 (3v4l). And method_exists's argument is just $this. Looks like (3v4l).
https://www.tutorialfor.com/questions-100493.htm
CC-MAIN-2020-40
en
refinedweb
SP Populate JavaBean With Data Fetched From the Database JiaPei Jen Ranch Hand Posts: 1309 posted 16 years ago I have a bean. There are three field variables in the bean; username, userrole, and category. "username" is supplied by me. "userrole" and "category" in the bean gets populated by fetching data from the database. I am positive that data are present in the database. However, I got "blanks" when I later on tried to display the properties (userrole and category) of that bean. I would appreciate if anybody could help identifying the problem. The relevant codes are show below: package org.apache.artimus.logon; import org.apache.artimus.logon.dao.*; public class EditorBean { private String username; private String userrole; private String category; static EditorDAO ed = new MySQLEditorDAO(); public EditorBean() {} public EditorBean( String username, String userrole, String category ) { setUsername( username ); setUserrole( userrole ); setCategory( category ); } public String getUsername() { return username; } public void setUsername( String username ) { this.username = username; } public String getUserrole() { return userrole; } public void setUserrole( String userrole ) { this.userrole = userrole; } public String getcategory() { return category; } public void setCategory( String category ) { this.category = category; } public static EditorBean findEditorData( String username ) { return ed.findEditor( username ); } } My data access code is show below: package org.apache.artimus.logon.dao; import java.io.IOException; import java.sql.Connection; import java.sql.ResultSet; import java.sql.Statement; import java.sql.SQLException; import org.apache.artimus.logon.EditorBean; import org.apache.artimus.logon.exceptions.EditorDAOSysException; import org.apache.artimus.ConnectionPool.DBConnection; public class MySQLEditorDAO implements EditorDAO { // Here the return type is EditorBean public EditorBean findEditor( String username ) throws EditorDAOSysException { Connection conn = null; Statement stmt = null; ResultSet rs = null; try { conn = DBConnection.getDBConnection(); stmt = conn.createStatement(); String query = "SELECT user_role, journal_category FROM members WHERE user_name = '" + username + "'"; rs = stmt.executeQuery( query ); if (rs.next()) { return new EditorBean( username, rs.getString( "user_role" ), rs.getString( "journal_category" ) ); } else { System.out.println( "invalid user name" ); return null; } } catch (SQLException se) { throw new EditorDAOSysException("SQLException: " + se.getMessage()); } finally { if ( conn != null ) { try { rs.close(); rs = null; stmt.close(); stmt = null; conn.close(); } catch( SQLException sqlEx ) { System.out.println( "Problem occurs while closing " + sqlEx ); } conn = null; } } } } [ November 28, 2003: Message edited by: JiaPei Jen ] Andres Gonzalez Ranch Hand Posts: 1561 posted 16 years ago I don't see anything wrong with your code (quick look though). What I recommend is to print to the log the values you get from the DB if (rs.next()) { //how about printing it here? return new EditorBean( username, rs.getString( "user_role" ), rs.getString( "journal_category" ) ); } doing that you ensure that you're getting something from the DB, and not empty or null. hope it helps I'm not going to be a Rock Star. I'm going to be a LEGEND! --Freddie Mercury JiaPei Jen Ranch Hand Posts: 1309 posted 16 years ago The data table is created by me. I am positive that every field in the table has a value for every single record. In my JSP , I am able to display the value of "username". It means that the session object can be found. The JSP does not display anything (it is blank) for "userrole" and "category". Gazi Peer Greenhorn Posts: 1 posted 16 years ago Check to see whether the fieldnames of your jsp page are one and the same as in the properties of the java bean. Craig Jackson Ranch Hand Posts: 405 posted 16 years ago I agree. I would check and verify that your jsp/property attributeS are in synch with your get/set methods of your bean. For example, I noticed that that the following methods are different. Because the get c[ atagory is not the same as the set C ategory: public String getcategory() { return category; } public void setCategory( String category ) { this.category = category; } Check that with your jsp page. Craig Who among you feels worthy enough to be my best friend? Test 1 is to read this tiny ad: Building a Better World in your Backyard by Paul Wheaton and Shawn Klassen-Koop reply Bookmark Topic Watch Topic New Topic Boost this thread! Similar Threads I Have Problem To Separate Business Logic From Front Controller Struggling With Passing Data Between Data Access Class, Business Bean, and Controller jsf and database connection problem Illegal Start of the "try" Block?! Reference a Non-static interface in a static Method More...
https://coderanch.com/t/283891/java/Populate-JavaBean-Data-Fetched-Database
CC-MAIN-2020-40
en
refinedweb
[ ] Container networking¶ OpenStack-Ansible deploys Linux containers (LXC) and uses Linux bridging between the container and the host interfaces to ensure that all traffic from containers flows over multiple host interfaces. This appendix describes how the interfaces are connected and how traffic flows. For more information about how the OpenStack Networking service (neutron) uses the interfaces for instance traffic, please see the OpenStack Networking Guide. For details on the configuration of networking for your environment, please have a look at openstack_user_config settings reference. Physical host interfaces¶ In a typical production environment, physical network interfaces are combined in bonded pairs for better redundancy and throughput. Avoid using two ports on the same multiport network card for the same bonded interface, because a network card failure affects both of the physical network interfaces used by the bond. Linux bridges¶ The combination of containers and flexible deployment options requires implementation of advanced Linux networking features, such as bridges and namespaces. Bridges provide layer 2 connectivity (similar to switches) among physical, logical, and virtual network interfaces within a host. After a bridge is created, the network interfaces are virtually plugged in to it. OpenStack-Ansible uses bridges to connect physical and logical network interfaces on the host to virtual network interfaces within containers. Namespaces provide logically separate layer 3 environments (similar to routers) within a host. Namespaces use virtual interfaces to connect with other namespaces, including the host namespace. These interfaces, often called vethpairs, are virtually plugged in between namespaces similar to patch cables connecting physical devices such as switches and routers. Each container has a namespace that connects to the host namespace with one or more vethpairs. Unless specified, the system generates random names for vethpairs. The following image demonstrates how the container network interfaces are connected to the host’s bridges and physical network interfaces: Network diagrams¶ Hosts with services running in containers¶ The following diagram shows how all of the interfaces and bridges interconnect to provide network connectivity to the OpenStack deployment: The interface lxcbr0 provides connectivity for the containers to the outside world, thanks to dnsmasq (dhcp/dns) + NAT. Примечание If you require additional network configuration for your container interfaces (like changing the routes on eth1 for routes on the management network), please adapt your openstack_user_config.yml file. See openstack_user_config settings reference for more details. Services running «on metal» (deploying directly on the physical hosts)¶ OpenStack-Ansible deploys the Compute service on the physical host rather than in a container. The following diagram shows how to use bridges for network connectivity: Neutron traffic¶ The following diagram shows how the Networking service (neutron) agents work with the br-vlan and br-vxlan bridges. Neutron is configured to use a DHCP agent, an L3 agent, and a Linux Bridge agent within a networking-agents container. The diagram shows how DHCP agents provide information (IP addresses and DNS servers) to the instances, and how routing works on the image. The following diagram shows how virtual machines connect to the br-vlan and br-vxlan bridges and send traffic to the network outside the host: When Neutron agents are deployed «on metal» on a network node or collapsed infra/network node, the Neutron Agents container and respective virtual interfaces are no longer implemented. In addition, use of the host_bind_override override when defining provider networks allows Neutron to interface directly with a physical interface or bond instead of the br-vlan bridge. The following diagram reflects the differences in the virtual network layout. The absence of br-vlan in-path of instance traffic is also reflected on compute nodes, as shown in the following diagram.
https://docs.openstack.org/openstack-ansible/latest/ru/reference/architecture/container-networking.html
CC-MAIN-2020-40
en
refinedweb
I want to integrate Firebase analytic with the Xamarin.Form application so I wanted to know Firebase analytics support for Xamarin.forms? How can I integrate with it? Answers Yes, it does, but you need to handle the logic in each project. So basically: 1. Add Firebase Analytics nuget package to each platform project (you don't need it in the Forms project) 2. Follow the instructions for each platform for initializing Firebase. iOS AppDelegate - FinishedLaunching Firebase.Analytics.App.Configure(); Android MainActivity - OnCreate firebaseAnalytics = FirebaseAnalytics.GetInstance(this); @RaymondKelly Then how can I manage the click events happen through the xamarin.forms? You will need to create a dependancy service in each platform. In my case I made a class called Analytics. IOS example: [assembly: Xamarin.Forms.Dependency(typeof(App.iOS.Analytics))] namespace App.iOS { public class Analytics : IAnalytics {.... I then have a function for each type of event I want to log. E.g. 'public void ConnectToServer(Server server) { @RaymondKelly Thanks for your reply hope this will work for me Hi, is there any way to use old Google Analytics Api, becasue I don't see this option anymore on web panel - there is only Firebase. When I add to Xamarin.iOS project the nuget package with Firebase I get those errors: Hi, can you tell me because I'm getting this strange error? Error CS0234: The type or namespace name 'App' does not exist in the namespace 'Firebase.Analytics' (are you missing an assembly reference?) (CS0234) It refers to: Firebase.Analytics.App.Configure(); Here you can see Firebase resources on my project: What do you think is going wrong? Thank you in advance! Well... I solved after 3 minutes from last post. I'm sorry! I don't know why but something changed. Actually "Firebase.Analytics." doesn't longer contain the "App" class. It's now in "Firebase.Core"! I downloaded the sample project from GitHub to try to compile and I saw it. Taking a look to the documentation I see that (now) it's written: I'm quite sure it was in Analytics in the past because I was able to compile the code some months ago. @Tedebus. Your right, "Firebase.Core" working for me. Thank you! I tried to install Firebase.Analytic on IOS project and I got the below error. Do anyone experience the same issue? What would be the solution? Could not install package 'Xamarin.Firebase.Analytics 42.1021.1'. You are trying to install this package into a project that targets 'Xamarin.iOS,Version=v1.0', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author. KahFaiLok. You are trying to install the android version in iOS. Search for Xamarin iOS firebase Analytics. App.Configuremethod was moved from Firebase.Analyticsnamespace to Firebase.Corenamespace in the mayor release from v3.x to v4.x. Please, import Firebase.Coreto your libraries to use this method. Hi, I am trying to add Firebase to my Xamarin.Forms project. I added the Nuget pacjages to iOS and Droid in my iOS AppDelegate: Firebase.Core.App.Configure(); then I created a class FirebaseAnalytics.cs: but I getting this error: Error CS1519: Invalid token '(' in class, struct, or interface member declaration (CS1519) (ZayedAlKhair.iOS) for this line: Firebase.Analytics.Analytics.LogEvent(EventNamesConstants.Login, parameters); Kindly help.. Thanks You have to remove the braces from the token manually. In my case I do like this token.Trim('<').Trim('>').Replace(" ", ""); Hello @WojciechKulik , I'm experiencing almost the same problem actually. Did you find a solution for that? Hello, check this answer from stackoverflow: This is how i'm using firebase so far without problems. Does it work also for UWP somehow? @Wojciech_Kulik, it works! Thanks! I know this post has been around for a while, but I wanted to first thank you all for the useful answers. I don't mind bumping this since this was the only useful result in google. To add real value though, I had to track this down for documentation. The change that the Firebase.Analytics to Firebase.Core move has this in git's log
https://forums.xamarin.com/discussion/comment/356851
CC-MAIN-2020-40
en
refinedweb
For as long as I’ve been working on Sitecore there has been this really annoying issue where setting the link manager to include server url and running under https will cause urls to be generated with the port number included. e.g. which naturally you don’t actually want. Aside: This issue was finally fixed in Sitecore 9.1 To overcome this there are a few methods you can take. Method 1 – Set the Scheme and Port on you site defenition This is possibly the smallest change you can make as it’s just 2 settings in a config file. Setting the external port on site node to 80 (yes 80) tricks the link manager code into not appending the port number as it does it for everything other than port 80. <configuration xmlns: <sitecore> <sites xdt: <site name="website"> <patch:attribute</patch:attribute> <patch:attribute/sitecore/content/MySite</patch:attribute> <patch:attributehttps</patch:attribute> <patch:attribute80</patch:attribute> </site> </sites> </sitecore> </configuration> What I don’t like about this method though, is your setting something to be wrong to get something else to come out right. It’s all a bit wrong. Method 2 – Write your own link provider The second method which I have generally done is to write your own provider which strips the port number off the generated URL. For this you will need: 1. A patch file to add the provider: <configuration xmlns: <sitecore> <linkManager defaultProvider="sitecore"> <patch:attribute <providers> <add name="CustomLinkProvider" type="MySite.Services.CustomLinkProvider, MySite" languageEmbedding="never" lowercaseUrls="true" useDisplayName="true" alwaysIncludeServerUrl="true" /> </providers> </linkManager> <mediaLibrary> <mediaProvider> <patch:attribute MySite.Services.NoSslPortMediaProvider, MySite </patch:attribute> </mediaProvider> </mediaLibrary> </sitecore> </configuration> 2. A helper method that removes the ssl port namespace MySite { /// <summary> /// Link Helper is used to remove SSL Port /// </summary> public static class LinkHelper { /// <summary> /// This method removes the 443 port number from url /// </summary> /// <param name="url">The url string being evaluated</param> /// <returns>An updated URL minus 443 port number</returns> public static string RemoveSslPort(string url) { if (string.IsNullOrWhiteSpace(url)) { return url; } if (url.Contains(":443")) { url = url.Replace(":443", string.Empty); } return url; } } } 3. The custom link provider which first gets the item url the regular way and then strips the ssl port using Sitecore.Data.Items; using Sitecore.Links; namespace MySite { /// <summary>Provide links for resources.</summary> public class CustomLinkProvider : LinkProvider { public override string GetItemUrl(Item item, UrlOptions options) { // Some code which manipulates and exams the item... return LinkHelper.RemoveSslPort(base.GetItemUrl(item, options)); } } } 4. The same provider for media using Sitecore.Data.Items; using Sitecore.Resources.Media; namespace MySite { /// <summary> /// This method removes SSL port number from Media Item URLs /// </summary> public class NoSslPortMediaProvider : MediaProvider { /// <summary> /// Overrides Url mechanism for Media Items /// </summary> /// <param name="item">Sitecore Media Item</param> /// <param name="options">Sitecore Media Url Options object</param> /// <returns>Updated Media Item URL minus 443 port</returns> public override string GetMediaUrl(MediaItem item, MediaUrlOptions options) { var mediaUrl = base.GetMediaUrl(item, options); return LinkHelper.RemoveSslPort(mediaUrl); } } } What I don’t like about this method is it’s messy in the opposite way. The port number is still being added, and we’re just adding code to try and fix it after. Credit to Sabo413 for the code in this example Method 3 – Official Sitecore Patch Given that it’s Sitecore’s bug, it does actually make sense that they fix it. After all people are paying a license fee for support! This simplifies your solution down to 1 extra patch file and a dll. What’s better is as it’s Sitecores code they have the responsibility of fixing it, if it ever breaks something, and you have less custom code in your repo. You can get the fix here for Sitecore version 8.1 – 9.0. So this may leave you wondering how did Sitecore fix it? Well having a look inside the dll reveals they wen’t for method 2.
https://himynameistim.com/2019/07/02/removing-port-443-from-urls-generated-by-sitecore/
CC-MAIN-2020-40
en
refinedweb
Syncfusion has recently added the Query Builder UI component to our React UI components library. It’s a great tool to help you construct and display complex filtering queries. This blog will provide an overview of the React Query Builder UI component, and shows you its basic usage and features, step by step. React Query builder UI component The Query Builder UI combined with Grid performs advanced searching and displays the results in an organized manner. It is fully customizable to include many input components like slider, drop-down list, multi-select, and check boxes for value selection. Here, we can see step by step procedure to get started with the Query Builder UI component in the React platform: - First, we need to use the create-react-app CLI. If you don’t have the CLI, then, install it globally using following command. - Create a React application and download its dependencies using the following command. - Now, install the packages for including Query Builder UI component into your application, using the following command. - With this we have completed the environment related configurations. Next, to include the Query Builder UI component in the application, import the same from ej2-react-querybuilder package in App.tsx. import {QueryBuilderComponent} from '@syncfusion/ej2-react-querybuilder'; - Syncfusion React UI components support a set of built-in themes, and here we will use the Material theme for our query builder. To add the Material theme in your application, you need to import material.css into App.css. @import ""; - With this, we have successfully completed the configurations related to the Query Builder UI component. Now we should initialize our first query builder in tsx as shown in the following code example. class App extends React.Component { public render() { return (<div > <QueryBuilderComponent /> </div>); } } - Finally, run the following command to see the output in a browser. Query Builder UI Data binding The Query Builder UI component uses DataManager, which supports both RESTful JSON data service binding and local JavaScript object array binding. The dataSource property of the query builder can be assigned with a JavaScript object array collection. All the columns defined in the dataSource will be auto-populated using the data source schema. If all the columns defined are not required, then, we can manually define the fields using the columns property. Now, define the employee table and map it to the dataSource of our query builder. const hardwareData: object[] = [ {"TaskID": 1, "Name": "Lenovo Yoga", "Category": "Laptop", "SerialNo": "CB27932009", "InvoiceNo": "INV-2878", "Status": "Assigned" }, {"TaskID": 2, "Name": "Acer Aspire", "Category": "Others", "SerialNo": "CB35728290", "InvoiceNo": "INV-3456", "Status": "In-repair" }, …. ] class App extends React.Component { public render() { return (<div ><QueryBuilderComponent dataSource={ hardwareData } /></div>); } } After binding the data, the query builder will be rendered based on the data. Data binding in Query Builder UI Defining columns The Query Builder UI component has flexible APIs to define the column label, type, format, and operators: label—Field values are considered labels by default, and we can further customize them using this property. type—Defines the data type of the column. format—Customizes the format of date and number column types. operators—Customizes the default operators for the column. The following code example imports the ColumnsModel from the ej2-react-querybuilder package in App.tsx and customizes the label and type. import {ColumnsModel, QueryBuilderComponent} from '@syncfusion/ej2-react-querybuilder'; class App extends React.Component { public columnData: ColumnsModel[] = [ {field: 'TaskID', label: 'TaskID', type: 'number', operators: [{ key: 'equal', value: 'equal' }, { key: 'greaterthan', value: 'greaterthan' }, { key: 'lessthan', value: 'lessthan' }] }, { field: 'Name', label: 'Name', type: 'string' }, { field: 'Category', label: 'Category', type: 'string' }, { field: 'SerialNo', label: 'SerialNo', type: 'string' }, { field: 'InvoiceNo', label: 'InvoiceNo', type: 'string' }, { field: 'Status', label: 'Status', type: 'string' } ]; public render() { return (<div ><QueryBuilderComponent dataSource={employeeData} columns={this.columnData} /></div>); } } Defining columns in Query Builder UI Binding the filter query to Grid In this segment, we are going to add a grid component and populate it with the search result from the query builder. For this, we need to install the Syncfusion React Grid and Syncfusion React Button packages. Now, bind the data and configure the columns of the grid. Initialize DataManager with hardwareData, which is defined as the dataSource of Query Builder UI component. Define the query to get the required data from the hardware data and assign it to the Grid component’s query property. import { DataManager, Predicate, Query } from '@syncfusion/ej2-data'; import { ButtonComponent } from '@syncfusion/ej2-react-buttons'; import { ColumnDirective, ColumnsDirective, GridComponent, Inject, Page } from '@syncfusion/ej2-react-grids'; public datamanager: DataManager = new DataManager(hardwareData); public query: Query = new Query().select(['TaskID', 'Name', 'Category', 'SerialNo', 'InvoiceNo', 'Status']); <GridComponent allowPaging={true} dataSource={this.datamanager} <ColumnDirective field='Name' headerText='Name' width='140' /> <ColumnDirective field='Category' headerText='Category' width='140' textAlign='Right' /> <ColumnDirective field='SerialNo' headerText='Serial No' width='130' /> <ColumnDirective field='InvoiceNo' headerText='Invoice No' width='120' /> <ColumnDirective field='Status' headerText='Status' width='120' /> </ColumnsDirective> <Inject services={[Page]} /> </GridComponent> Finally, synchronize the filter changes in our query builder to populate the filtered data into our grid. For this, we need to detect the query changes and update the grid query to refresh the filtered data by handling the button click event. <QueryBuilderComponent width='100%' dataSource={hardwareData} columns={this.columnData} ref={(scope) => { this.qryBldrObj = scope; }} /> <ButtonComponent onClick={this.updateRule()} >Filter Grid</ButtonComponent> public updateRule = ()=>() => { const predicate: Predicate = this.qryBldrObj.getPredicate({ condition: this.qryBldrObj.rule.condition, rules: this.qryBldrObj.rule.rules }); if (isNullOrUndefined(predicate)) { this.gridObj.query = new Query().select(['TaskID', 'Name', 'Category', 'SerialNo', 'InvoiceNo', 'Status']); } else { this.gridObj.query = new Query().select(['TaskID', 'Name', 'Category', 'SerialNo', 'InvoiceNo', 'Status']).where(predicate); } this.gridObj.refresh(); } Now our query builder will look like the below screenshot, with a grid component. You can now create your own complex filter queries and click the filter grid button to see the results. Binding query builder filter result to grid Summary The Query Builder UI component is designed to be highly customizable, to include various input components like slider, multi-select, checkbox, and drop-down lists. This way, input can be received in different formats, to create complex queries. To try our Query Builder UI component, please download our free trial. You can also check its source from GitHub. For the better use of the component you can check our online sample browser and documentation. If you have any questions or need any clarification, then, please let us know in the comments section below. You can also contact us through our support forum, Direct-Trac or Feedback portal. We are always happy to assist you! If you like this blog post, then we think you’ll also like the following free ebooks,
https://www.syncfusion.com/blogs/post/new-react-query-builder-ui-component.aspx
CC-MAIN-2019-35
en
refinedweb
Currency Displayed in Posted on October 21, 2015 binaries from the repository, there will be two folders inside the "bin" folder: "net" and "netcf". "net" is for the .Net Framework, and "netcf" is for the .Net Compact Framework. The CUWIN is a .Net Compact Framework device, so to use NModbus in a CUWIN project, simply add a reference to the "Modbus.dll" file in the "netcf" folder. Add a using declaration to the source file to contain the Modbus Master code. using Modbus.Device; Initialize a new ModbusSerialMaster: // Open COM1 with a baud rate of 115200 _serialPort = new SerialPort("COM1", 115200); _serialPort.Open(); // Create a new Modbus RTU Master using _serialPort as the communication channel _modbusMaster = ModbusSerialMaster.CreateRtu(_serialPort); _modbusMaster.Transport.ReadTimeout = 500; _modbusMaster.Transport.WriteTimeout = 500; _modbusMaster.Transport.Retries = 0; Then call any of the given methods to query a Modbus slave: bool[] ReadCoils(byte slaveAddress, ushort startAddress, ushort numberOfPoints); ushort[] ReadHoldingRegisters(byte slaveAddress, ushort startAddress, ushort numberOfPoints); ushort[] ReadInputRegisters(byte slaveAddress, ushort startAddress, ushort numberOfPoints); bool[] ReadInputs(byte slaveAddress, ushort startAddress, ushort numberOfPoints); ushort[] ReadWriteMultipleRegisters(byte slaveAddress, ushort startReadAddress, ushort numberOfPointsToRead, ushort startWriteAddress, ushort[] writeData); void WriteMultipleCoils(byte slaveAddress, ushort startAddress, bool[] data); void WriteMultipleRegisters(byte slaveAddress, ushort startAddress, ushort[] data); void WriteSingleCoil(byte slaveAddress, ushort coilAddress, bool value); void WriteSingleRegister(byte slaveAddress, ushort registerAddress, ushort value); The following sample project demonstrates creating a Modbus Master using the CUWIN CWV that monitors and controls a Modbus slave using a CUBLOC CB280 and the CUBLOC Study Board 2. The CB280's digital output on pin P0 is connected to the Study Board's LED0 on the, and an analog input is connected to the Study Board's knob potentiometer, AD1. The Analog Input "Read" button will query the CB280 for the value read from its analog input and display it on the form. Turning the knob pot AD1 on the Study Board will change the value read from the analog input. The Digital Output "Read" button will query the CB280 for the state of the CB280's digital output on pin P0. The Digital Output "On" and "Off" buttons will turn P0 on or off respectively thus turning the Study Board's LED0 on or off. All prices are in USD
http://comfiletech.com/blog/creating-a-cuwin-modbus-rtu-master-with-nmodbus/
CC-MAIN-2019-35
en
refinedweb
47281/ignore-the-nan-and-the-linear-regression-on-remaining-values Is there a way to ignore the NaN and do the linear regression on remaining values? val=([0,2,1,'NaN',6],[4,4,7,6,7],[9,7,8,9,10]) time=[0,1,2,3,4] slope_1 = stats.linregress(time,values[1]) # This works slope_0 = stats.linregress(time,values[0]) # This doesn't work Yes, you can do this using statsmodels: import statsmodels.api as sm from numpy import NaN x = [0, 2, NaN, 4, 5, 6, 7, 8] y = [1, 3, 4, 5, 6, 7, 8, 9] model = sm.OLS(y, x, missing='drop') results = model.fit() In [2]: results.params Out[2]: array([ 1.16494845]) Use the following code: from scipy import stats slope, ...READ MORE Hi @Dipti, you could try something like ...READ MORE You can read excel files df = pd.read_excel(...) You ...READ MORE Rolling regression is the analysis of changing ...READ MORE Hey @Tanmay, try something like this: >>> from ...READ MORE Hey @Vivek, Try something like this: >>> from ...READ MORE Supervised learning is an aspect of machine learning ...READ MORE In reinforcement learning, the output depends on ...READ MORE Hey @Ruth, you can use this model ...READ MORE LassoLars is a lasso model implemented using ...READ MORE OR
https://www.edureka.co/community/47281/ignore-the-nan-and-the-linear-regression-on-remaining-values?show=47284
CC-MAIN-2019-35
en
refinedweb
Basic Java Basic Java 1. 1. 2. 3. 4. 5. 6. 7. 8. Table of Contents Introduction to Java ...3 The language package ................29 The Utilities package .................36 The I/O Package .48 Applet Programming ...............68 Multithreading .109 Networking in Java ..120 Java Database Connectivity 131 Basic Java Java was designed to be a powerful language but simple. To support the development of large software, the concept of package is used. The major difference was the removal of the direct use of pointers. Java automatically handles referencing and dereferencing of language objects. Other difference includes the removal of support for data structures like struct, union. Its in-built classes provide this. Also, the concepts of operator overloading and multiple-inheritance in the form of classes have been removed. OBJECT-ORIENTED NATURE The notion of object in Java is implemented by its class construct. In fact, it is not possible to write a Java program that does something meaningful without using the class construct. Java language comes with very powerful set of pre-defined classes with a hierarchy level. DISTRIBUTED NATURE Java provides the network capabilities by a pre-defined package java.net. This package has many classes that simplify the network communication. Accessing to any remote object is also possible in Java via the java.rmi package. ARCHITECTURALLY NEUTRAL The Java Compiler does not produce the machine language instructions that make up the executable Java Program. The Java Compiler DOES NOT generate a .exe file. Instead the compiler produces an intermediate code called as 'byte code'. Java byte code is an architecturally neutral representation of the program, that is, it is independent of any processor type or machine architecture. These byte codes are read by the Java interpreter and the same is executed using an internal model of an abstract machine. The Java Interpreter and the implementation of this abstract machine are called the JAVA VIRTUAL MACHINE. SECURE LANGUAGE Before any Java program is interpreted, the Java runtime system performs a byte-code verification to ensure that the program is not violating the system integrity. Also, Basic Java 3. programs loaded from the net are loaded in a separate name space than the local classes. This prevents any other program to affect the system classes. MULTITHREADED LANGUAGE Java supports multitasking in the form of multithreading within itself. First Java Application HelloWorld Application public class HelloWorld { public static void main(String args[]) { System.out.println("Hello World!!"); } } Create the file Save this into a file called HelloWorld.java using any text editor. It is very important to call the file HelloWorld.java, because the compiler expects the file name to match the class identifier. Compile the code Type Prompt> javac HelloWorld.java at a command prompt. The javac program creates a file called HelloWorld.class from the HelloWorld.java file. Inside this file (HelloWorld.class) is text known as bytecodes which can be run by the Java interpreter. Run the program Now that you have compiled the program, you can run it by typing at the command prompt: Promtpt> java HelloWorld The input to the interpreter is nothing but the name of the class that has the main method. After you do this, the computer should print to the screen Hello World!! Understanding HelloWorld Declaring a class The first task when creating any Java program is to create a class. Look at the first line of the HelloWorld application: Basic Java 4. public class HelloWorld { This declares a class called HelloWorld. To create any class, simply write a line that looks like: public class ClassName Here, ClassName is the name of the program you are writing. In addition, ClassName must correspond to the file name. Next, notice the little curly brace ({) that is located after the class declaration. If you look at the end of the class, there is also a closing brace (}). The braces tell the compiler where your class will begin and end. Any code between those two braces is considered to be in the HelloWorld class. public static void main(String args[]){ This line declares what is known as the main method. Methods are essentially mini-programs. Each method performs some of the tasks of a complete program. The main method is the most important one with respect to applications, because it is the place that all Java applications start. For instance, when you run java HelloWorld, the Java interpreter starts at the first line of the main method. Writing to the Screen The text Hello World!! appears on the screen through System.out.println("Hello World!!"); You can replace any of the text within the quotation marks ("") with any text that you would like. The System.out line is run because, when the application starts up, the interpreter looks at the first line of code (namely the printout) and executes it. If you place any other code there, it runs that code instead. The System.out.println serves approximately the same purpose as the writeln in Pascal. In C, the function is printf, and in C++, cout. println Versus print There is one minor variation on println which is also readily used: print("Hello World!!"). The difference between println and print is that print does not add a carriage return at the end of the line, so any subsequent printouts are on the same line. Access Specifiers : The first option for a method is the access specifier. Access specifiers are used to restrict access to the method. Regardless of what the access specifier is, though, the method is accessible from any other method in the same class. public The public modifier is the most relaxed modifier possible for a method. By specifying a method as public it becomes accessible to all classes regardless of their lineage or their package. In other words, a public method is not restricted in any way. 5. The second possible access modifier is protected. Protected methods can be accessed by any class within the current package, but are inaccessible to any class outside the package. default The next access modifier that can be applied to a class is that of default. Default methods are accessible only to the current class and any classes that extend from it. If you fail to specify an access modifier, the method is considered default. private private is the highest degree of protection that can be applied to a method. A private method is only accessible by those methods in the same class. Even classes that extend from the current class do not have access to a private class. Method Modifiers Method modifiers enable you to set properties for the method, such as where it will be visible and how subclasses of the current class will interact with it. static Placing the static modifier in front of a method or variable declaration makes it common to all object references of that class. While non-static methods can also operate with static variables, static methods can only deal with static variables and static methods. abstract Abstract methods are simply methods that are declared, but are not implemented in the current class. The responsibility of defining the body of the method is left to subclasses of the current class. final By placing the keyword final in front of the method declaration, you prevent any subclasses of the current class from overriding the given method. This ability enhances the degree of insulation of your classes, you can ensure that the functionality defined in this method will never be altered in any way. Note: Neither static methods nor class constructors can be declared to be abstract. Furthermore, you should not make abstract methods final, because doing so prevents you from overriding the method. native Native methods are methods that you want to use, but do not want to write in Java. Native methods are most commonly written in C++, and can provide several benefits such as faster execution time. Like abstract methods, they are declared simply by placing the modifier native in front of the method declaration and by substituting a semicolon for the method body. synchronized By placing the keyword synchronized in front of a method declaration, you can prevent data corruption that may result when two methods attempt to access the same piece of data at the same time. While this may not be a concern for simple Basic Java 6. programs, once you begin to use threads in your programs, this may become a serious problem. Modified HelloWorld In the above HelloWorld program, the print method was called inside the same class. The following example creates a separate PrintWorld object that has a print method and any other class can invoke this method to print the necessary result. class PrintWorld { String data_member; public PrintWorld(String line) { data_member = new String(line); } public void printMe() { System.out.println(data_member); } } public class ObjectWorld { public static void main(String args[]) { PrintWorld p_world = new PrintWorld("Hello World"); p_world.printMe(); } } In the above program, PrintWorld p_world = new PrintWorld("Hello World"); is used to construct the class PrintWorld. Quite simply, the line tells the compiler to allocate memory for an instance of the class and points variable to the new section of memory. In the process of doing this, the compiler also calls the class's constructor method and passes the appropriate parameters to it p_world is the object to the class PrintWorld. This class has a data member, data_member and a method printMe(). In the construction phase of the class, the argument of the constructor is assigned to the data member. And later when the printMe() method is called, this data member value is retrieved and printed. Getting information from the user with System.in System.out has a convenient partner called System.in. While System.out is used to print information to the screen, System.in is used to get information into the program. Requesting input from the user public class ReadHello { public static void main (String args[] { int inChar =0; System.out.println("Enter a Character:"); try { inChar = System.in.read(); System.out.println("You entered " + inChar); } catch (IOException e) { System.out.println("Error reading from user"); } } } You've probably already noticed that there is a lot more to this code than there was to the last one. Lets first compile the program. Enter a Character: A You entered 65 The code we are most interested in is the line, which reads: inChar = System.in.read(); System.in.read() is a method that takes a look at the character that the user enters. It then performs what is known as a return on the value. A value that is returned by a method is then able to be used in an expression. In the case of ReadHello, a variable called inChar is set to the value which is returned by the System.in.read() method. In the next line, the value of the inChar variable is added to the System.out string. By adding the variable into the string, you can see the results of your work. It's not actually necessary to use a variable. If you prefer, you can print it out directly in the second System.out line, by changing it to System.out.println("You entered "+ System.in.read()); Now, notice that the program displays a number instead of a character for what you entered. This is because the read() method of System.in returns an integer, not an actual character. The number corresponds to what is known as the ASCII character set. Converting integer to character To convert the number that is returned from System.in into a character, you need to do what is known as a cast. Casting effectively converts a given data type to another one. Basic Java 8. --inChar =(char) System.in.read(); --Notice the characters before System.in.read().The (char) causes the integer to be changed into a character. The Rest of the Extra Codetry, catch In this code, there is a sequence there called a try-catch block. In some programming languages, when a problem occurs during execution, there is no way for you as a programmer to catch it and deal with the problem. In some languages, it's a bit complicated. In Java, most problems cause what are known as Exceptions. When a method states that it will throw an exception, it is your responsibility to only try to perform that method, and if it throws the exception, you need to catch it. See the line of code right after the catch phase. If there is an error while reading, an exception called an IOException is thrown. When that happens, the code in the catch block is called. JAVA LANGUAGE FUNDAMENTALS KEYWORDS The following is a list of the 56 keywords you can use in Java. abstract boolean Break Byte case cast Catch Char class const Continue Default do double Else Extends final finally Float for future generic Goto if implements import Inner instanceof int interface Long native new null Operator outer package private Protected public rest return Short static super switch Synchronized this throw throws Transient try var void Volatile while EXTENDING OBJECTS THROUGH INHERITANCE Inheritance is a feature of OOP programming that enables us inherit all the common features of a parent class onto a child class, it's not necessary to reinvent the object every time. When new classes inherit the properties of another class, they are referred to as child classes or subclasses. The class from which they are derived is then called a parent or super class. A Simple Inheritance Program Basic Java 9. { System.out.println("Base Class Constructor Called"); } } /* DerivedClass extends or inherits the property of the BaseClass */ class DerivedClass extends BaseClass { public DerivedClass() { System.out.println("Derived Class Constructed"); } } public class Inheritance { public static void main(String args[]) { BaseClass base = new BaseClass(); System.out.println("------------"); DerivedClass derived = new DerivedClass(); } } The output is: Base Class Constructor Called -----------Base Class Constructor Called Derived class Constructed By looking at the output, you can find that, when the child class is constructed, the parent class constructor is invoked first. INTERFACES Interfaces are Java's substitute for C++'s feature of multiple inheritance, the practice of allowing a class to have several super classes. While it is often desirable to have a class inherit several sets of properties, for several reasons the creators of Java decided not to allow multiple inheritance. Java classes, however, can implement several interfaces, thereby enabling you to create classes that build upon other objects without the problems created by multiple inheritance. The syntax for creating an interface is extremely similar to that for creating a class. However, there are a few exceptions. The most significant difference is that none of the methods in your interface may have a body. An Interface Example Basic Java 10 . } public class Shoe implements Product { public int getPrice(int id) { if (id == 1) return(5); else return(10); } } public class Store { public static void main(String argv[]) { Shoe some = new Shoe(); int x = Some.getPrice(3); System.out.println(the price : +x); } } The Declaration Interface declarations have the syntax public interface NameofInterface Public Interfaces By default, interfaces may be implemented by all classes in the same package. But if you make your interface public, you allow classes and objects outside of the given package to implement it as well. The rules for an interface name are identical to those for classes. Extending Other Interfaces In keeping with the OOP practice of inheritance, Java interfaces may also extend other interfaces as a means of building larger interfaces upon previously developed code. e.g public interface NameOfInterface extends AnotherInterface Interfaces cannot extend classes. There are a number of reasons for this, but probably the easiest to understand is that any class, which the interface would be extending would have its method bodies defined. This violates the "prime directive" of interfaces. The Interface Body Basic Java 11 . The main purposes of interfaces are to declare abstract methods that will be defined in other classes. As a result, if you are dealing with a class that implements an interface, you can be assured that these methods will be defined in the class. While this process is not overly complicated, there is one important difference that should be noticed. An interface method consists of only a declaration. Methods in Interface Method declarations in interfaces have the following syntax: public return_value nameofmethod (parameters); Note that unlike normal method declarations in classes, declarations in interfaces are immediately followed by a semicolon. All methods in interfaces are public by default, regardless of the presence or absence of the public modifier. This is in contrast to class methods which default to friendly. It's actually illegal to use any of the other standard method modifiers (including native, static, synchronized, final, private, protected, or private protected) when declaring a method in an interface. Variables in Interfaces Although interfaces are generally employed to provide abstract implementation of methods, you may also define variables within them. Because you cannot place any code within the bodies of the methods, all variables declared in an interface must be global to the class. Furthermore, regardless of the modifiers used when declaring the field, all fields declared in an interface are always public, final, and static. While all fields will be created as public, final, and static, you do not need to explicitly state this in the field declaration. All fields default to public, static and final regardless of the presence of these modifiers. It is, however, a good practice to explicitly define all fields in interfaces as public, final, and static to remind yourself (and other programmers) of this fact. Implementing an interface. In order to fulfill the requirements of implementing the Product interface, the class must override the getPrice(int) method. Overriding Methods Declaring a method in an interface is a good practice. However, the method cannot be used until a class implements the interface and overrides the given method. ENCAPSULATION Another benefit of enclosing data and methods in classes is the OOP characteristic of encapsulationthe ability to isolate and insulate information effectively from the rest of your program. POLYMORPHISM Basic Java 12 . Finally, the allure of the OOP approach to creating self-sustaining modules is further enhanced by the fact that children of a given class are still considered to be of the same "type" as the parent. This feature, called polymorphism, enables you to perform the same operation on different types of classes as long as they share a common trait. While the behavior of each class might be different, you know that the class will be able to perform the same operation as its parent because it is of the same family tree Example of Function Overload class Sample { public Sample() { System.out.println("Sample Constructor Called"); } public void overloadMe() { System.out.println("Overload Method Invoked"); } public void overloadMe(String str) { System.out.println(str); } } public class Overload { public static void main(String args[]) { Sample samp = new Sample(); System.out.println("-------------"); samp.overloadMe(); System.out.println("-------------"); samp.overloadMe("Hi! I am not the old one"); } } Output: Sample Constructor Called ------------Overload Method Invoked ------------Hi! I am not the old one Here, though the method overloadMe is the same, it throws different ouput based on its invocation. This is termed as method overloading. JAVA DATA TYPES Java has Two Types of Data Types In Java, there are really two different categories in which data types have been divided: Primitive types Reference types Reference types enclose things such as arrays, classes, and interfaces. Java has eight primitive types, each with its own purpose and use: Basic Java 13 . Type Description boolean - These have values of either true or false. byte - 8-bit 2s-compliment integer with values between -128 to 127 short - 16-bit 2s-compliment integer with values between -2^15 and 2^15-1 (-32,768 to 32,767) char 16-bit Unicode characters. For alpha-numerics, these are the same as ASCII with the high byte set to 0. The numerical values are unsigned 16-bit values are between 0 and 65535. int 32-bit 2s-compliment integer with values between -231 and 231-1 (-2,147,483,648 to 2,147,483,647) long 64-bit 2s-compliment integer with values between -263 and 263-1 (9223372036854775808 to 9223372036854775807) float 32-bit single precision floating point numbers using the IEEE 754-1985 standard (+/about 1039) double 64-bit double precision floating point numbers using the IEEE 754-1985 standard. (+/- about 10317) VARIABLES You can create any variable in Java in the same way as was just shown: State the data type that you will be using State the name the variable will be called Assign the variable a value As with every other line of code in Java, terminate the line with a semicolon example: int number = 0; boolean value = false; IdentifiersThe Naming of a Variable There are several rules that must be obeyed when creating an identifier: The first character of an identifier must be a letter. After that, all subsequent characters can be letters or numerals. The underscore (_) and the dollar sign ($) may be used as any character in an identifier, including the first one. Identifiers are case-sensitive and language-sensitive. Examples of Legal Identifiers HelloWorld Basic Java 14 . Operators are used to change the value of a particular object. They are described here in several related categories. UNARY LOGICAL OPERATORS Description Operator ++ -- Bitwise complement ~ Example for Increment and Decrement Operators class IncDec { public static void main int x = 8, y = 13; System.out.println(x = System.out.println(y = System.out.println(++x System.out.println(y++ System.out.println(x = System.out.println(y = } } Output x=8 y = 13 ++x = 9 y++ = 13 x=9 y = 14 Example for negation operator class Negation { public static void main (String args[]) { int x = 8; System.out.println(x = + x); int y = -x; System.out.println(y = + y); Basic Java 15 . } } class BitwiseComplement public static void main int x = 8; System.out.println(x = int y = ~x; System.out.println(y = } } ARITHMETIC OPERATORS Addition Subtraction Multiplication Division Modulus Bitwise AND Bitwise OR Bitwise XOR Left Shift Right Shift The left-shift, right-shift, and zero-fill-right-shift operators (<<, >>, and >>>) shift the individual bits of an integer by a specified integer amount. The following are some examples of how these operators are used: x << 3; y >> 7; z >>> 2; Example for Shift Operators Basic Java 17 . x >>> 1 = 3 Relational Operators The last group of integer operators is the relational operators, which all operate on integers but return a type boolean. Description Operator Less Than Greater Than Less Than Or Equal To Equal To Not Equal To ASSIGNMENT OPERATORS The simplest assignment operator is the standard assignment operator. This operator is often known as the gets operator, because the value on the left gets the value on the right. = assignment operator The arithmetic assignment operators provide a shortcut for assigning a value. When the previous value of a variable is a factor in determining the value that you want to assign, the arithmetic assignment operators are often more efficient: Description Operator = += -= /= %= &= |= ^= Multiplication *= BOOLEAN OPERATORS Boolean operators act on Boolean types and return a Boolean result. The Boolean operators are listed Description Operator & | ^ && Basic Java 18 . ! == ?: Not Equal To != CONDITIONAL OPERATOR It takes the following form: expression1 ? expression2 : expression3 In this syntax, expression1 must produce a Boolean value. If this value is true, then expression2 is evaluated, and its result is the value of the conditional. If expression1 is false, then expression3 is evaluated, and its result is the value of the conditional. Example for Conditional Operator class Conditional { public static void main (String args[]) { int x = 0; boolean isEven = false; System.out.println(x = + x); x = isEven ? 4 : 7; System.out.println(x = + x); } } The results of the Conditional program follow: x=0 x=7 CONTROL FLOW Control flow is the heart of any program. Control flow is the ability to adjust (control) the way that a program progresses (flows). By adjusting the direction that a computer takes, the programs that you build become dynamic. Without control flow, programs would not be able to do anything more than several sequential operations. IF STATEMENTS Programmers use iteration statements to control sequences of statements that are repeated according to runtime conditions. Basic Java 19 . Java supports five types of iteration statements: while do for continue break WHILE STATEMENTS switch (expression){ case V1: statement1; break; case V2: statement2; break; default: statementD; } BREAK STATEMENTS The sub-statement blocks of loops and switch statements can be broken out of by using the break statement. RETURN STATEMENTS A return statement passes control to the caller of the method, constructor, or static initializer containing the return statement. If the return statement is in a method that is not declared void, it may have a parameter of the same type as the method. ARRAYS An array is simply a way to have several items in a row. If you have data that can be easily indexed, arrays are the perfect means to represent them. Basic Java 20 . int IQ[] = {123,109,156,142,131}; The next line shows an example of accessing the IQ of the third individual: int ThirdPerson = IQ[3]; Arrays in Java are somewhat tricky. This is mostly because, unlike most other languages, there are really three steps to filling out an array, rather than one: There are two ways to do this: place a pair of brackets after the variable type, or place brackets after the identifier name. int MyIntArray[]; int[] MyIntArray; Examples of declaring arrays. long Primes[] = new long[1000000]; // declare an array and assign // some memory to hold it. long[] EvenPrimes = new long[1]; // Either way, it's an array. EvenPrimes[0] = 2; // populate the array. There are several additional points about arrays you need to know: Indexing of arrays starts with 0. In other words, the first element of an array is MyArray[0], not MyArray[1]. A traditional comment is a C-style comment that begins with a slash-star (/*) and ends with a star-slash (*/). C++ Style Comments The second style of comment begins with a slash-slash (//) and ends when the current source code line ends. These comments are especially useful for describing the intended meaning of the current line of code. javadoc Comments The final style of comment in Java is a special case of the first. It has the properties mentioned previously, but the contents of the comment may be used in automatically generated documentation by the javadoc tool. javadoc comments are opened with /**, and they are closed with */. By using these comments in an appropriate manner, you will be able to use JavaDoc to automatically create documentation pages 21 . There are several different types of literal. In fact, there are five major types of literal, in the Java language: Boolean Character Floating-point Integer String Integer Literal Example int j=0; long GrainOfSandOnTheBeachNum=1L; short Mask1=0x007f; String FirstName = "Ernest"; char TibetanNine = '\u1049' boolean UniverseWillExpandForever = true; ESCAPE CHARACTERS Escape Literal Meaning '\b' \u0008 backspace '\t' \u0009 horizontal tab '\n' \u000a linefeed '\f' \u000c form feed '\r' \u000d carriage return '\"' \u0022 double quote '\'' \u0027 single quote '\\' \u005c backslash Don't use the \u format to express an end-of-line character. Use the \n or \r characters instead. ERROR-HANDLING CLASSES Runtime error handling is a very important facility in any programming environment. Java provides the following classes for dealing with runtime errors: Throwable Exception Error The Throwable class provides low-level error-handling capabilities such as an execution stack list. The Exception class is derived from Throwable and provides the base level of functionality for all the exception classes defined in the Java system. The Exception class is used for handling normal errors. The Error class is also derived from Throwable, but it is used for handling abnormal errors that arent expected to occur. Very few Java programs worry with the Error class; most use the Exception class to handle runtime errors. public class ExceptionHandling { public static void main(String args[]) { int values[] = {5,6,3,5,2}; int index = 6; try { int get = values[index]; System.out.println("The value in the requested index is " +get); } catch(Exception err) { System.out.println("Requested Index Not found"); } finally { System.out.println("--------End---------"); } } } In the above example, the array size is 5, but we are trying to access the 6 element. As this is a runtime error, an exception is caught and the catch block is executed. Use of finally clause th Suppose there is some action that you absolutely must do, no matter what happens. Usually, this is to free some external resource after acquiring it, to close a file after opening it, or something similar. In exception handling, the finally block is executed no matter whether an exception is thrown or not. Output: Requested Index Not found --------End--------THE THROWABLE CLASS. A Throwable class contains a snapshot of the execution stack of its thread at the time it was created. It can also contain a message string that gives more information about the error. THE EXCEPTION CLASS The class Exception and its subclasses are a form of Throwable that indicates conditions that a reasonable application might want to catch. EXAMPLE FOR MULTIPLE EXCEPTION HANDLING Basic Java 23 . int values[] = {5,6,2,3,5}; int index; char input = (char)-1; String data = ""; System.out.println("Enter an index value"); try { do { input = (char)System.in.read(); data = data + input; }while(input!='\n'); } catch(Exception err) { System.out.println("Unable to obtain system input"); } try { index = Integer.parseInt(data.trim()); System.out.println("The value in the requested index : "+values[index]); } catch(NumberFormatException err) { System.out.println("Invalid Index"); } catch(ArrayIndexOutOfBoundsException err) { System.out.println("Requested Index Not Found"); } finally { System.out.println("--------End---------"); } } } In the above program, there is a pre-defined array of length 5. The user input is got as a String. It is then parsed into an int data type. The value in the array for this index is given as ouput. Here, the exceptions may be thrown While getting an input While trying to convert the input to an int data type While trying to access that index in the array The exception classes for the last two exceptions are NumberFormatException and ArrayIndexOutOfBoundsException respectively. So the try block encapsulating the parsing of the input and searching of the index has two catch blocks to handle these exceptions in their own different way. If input is not an integer, then output is Invalid Index --------End--------- Basic Java 24 . If input is an integer, but index is out of range in the array, then output is Requested Index Not Found --------End--------Note that in both the cases, the finally block is executed. Packages: Java provides a mechanism for partitioning the classname into more manageable chunks. This mechanism is the package. The package is both a naming and a visibility comttrol mechanism. Classes can be defined inside a package that are not accessible by code outside the package. Class members can also be defined that are only exposed to other members of the same package. This is achieved with the help of package and access protection. Access Protection: Java provides many levels of protection to allow fine-grained control over the visibility of the variables and methods within classes, subclasses and packages. Packages add another dimension to access control. Classes and packages are both means of encapsulating and containing the namespace and scope of variables and methods. Packages act as containers for classes and other subordinate packages. Classes act as containers for data and code. The class is Javas smallest unit of abstraction. Java addresses four categories of visibility for class members. 1. Sub classes in the same package 2. Non Subclasses in the same package 3. Subclasses in different packages 4. Classes are neither in the same package nor subclasses The three access specifiers, private, public and protected provide a variety of ways to produce tha many levels of access required by these categories. public - Anything declared public can be accessed from anywhere private Anything declared private cannot be seen outsideof its class. default When a member doesnot have an access specification, it is visible to subclasses as well as other classes in the same package. protected If an element has to be seen outside the current package but only to classes that subclass your class directly. Defining Package: Creating a package in Java is quite easy. This is achieved by simply including a package command as the first statement in a Java Source file. Any classes that are declared with in that file belong to the specified package. The package statement defines a namepsace in which classes are stored. If you omit the package statement, the classes are put into the default package that has no name. Basic Java 25 . Syntax for package statement: package mypackage; Use package keyword as the first line in the file. E.g. package com.first The classes under this package are in com/first namespace. The classes under this package must be stored inside the com/first folder Use import keyword for using the classes in different package. Example for Package mechanism: package com.first.one; public class BaseClass { int x=6; // default access private int x_pri=2; // private access protected int x_pro=3; //protected access public int x_pub=4; //public access public BaseClass() { System.out.println("Inside Constructor of Base Class"); } public void display() { System.out.println("Value of x(default) is "+x); System.out.println("Value of x(private) is "+x_pri); System.out.println("Value of x(protected) is "+x_pro); System.out.println("Value of x(public) is "+x_pub); } } package com.first.one; class Derived extends BaseClass { Derived() { System.out.println("Inside Derived Class Constrcutor\n"); System.out.println("Value of x(default) is "+x); // Not available to derived class also because it is private (Class only) // System.out.println("Value of x(private) is "+x_pri); System.out.println("Value of x(protected) is "+x_pro); Basic Java 26 . System.out.println("Value of x(public) is "+x_pub); } public static void main(String arg[]) { Derived deri=new Derived(); } } package com.first.one; public class TestBaseClass { public TestBaseClass() { System.out.println("Inside TestBaseClass constructor"); BaseClass bc1=new BaseClass(); System.out.println("Value of x(default) is "+bc1.x); // Not accessible because private - access is for Class only // System.out.println("Value of x(private) is "+bc1.x_pri); System.out.println("Value of x(protected) is "+bc1.x_pro); System.out.println("Value of x(public) is "+bc1.x_pub); } public static void main(String arg[]) { BaseClass bc=new BaseClass(); bc.display(); System.out.println("\n****************TestBaseClass************** *\n"); TestBaseClass test=new TestBaseClass(); } } package com.first.two; import com.first.one.BaseClass; class BaseClassNew extends BaseClass { BaseClassNew() { System.out.println("Constrcutor of Base class in another package"); //Not accessible because it is default - for package only //System.out.println("Value of x(default) is"+x); // Not accessible becuase it is private - for Class only //System.out.println("Value of x(private) is "+x_pri); Basic Java 27 . System.out.println("Value of x(protected) is "+x_pro); System.out.println("Value of x(public) is "+x_pub); } public static void main(String arg[]) { BaseClassNew bcn=new BaseClassNew(); } } package com.first.two; import com.first.one.*; public class SomeClass { SomeClass() { System.out.println("Inside Constructor of SomeClass"); BaseClass bc=new BaseClass(); // Only for package //System.out.println("Value of x(default) is "+bc.x); // Only for Class //System.out.println("Value of x(private) is "+bc.x_pri); // Only for Class, subClass & package //System.out.println("Value of x(protected) is "+bc.x_pro); System.out.println("Value of x(public) is "+bc.x_pub); } public static void main(String arg[]) { SomeClass sc=new SomeClass(); } } Basic Java 28 . Basic Java 29 . Basic Java 30 . THE MATH CLASS The Math class serves as a grouping of mathematical functions and constants. It is interesting to note that all the variables and methods in Math are static, and the Math class itself is final. This means you cant derive new classes from Math. Additionally, you cant instantiate the Math class. Its best to think of the Math class as just a conglomeration of methods and constants for performing mathematical computations. The Math class includes the E and PI constants, methods for determining the absolute value of a number, methods for calculating trigonometric functions, and minimum and maximum methods, among others. EXAMPLE FOR MATH CLASS public class MathExample { public static void main(String args[]) { char temp = (char)-1; String input = ""; Double data = null; System.out.println("Enter any number"); /** Gets the user input**/ try { do { temp = (char)System.in.read(); input = input + temp; }while(temp != '\n'); data = new Double(input); } catch(Exception err) { System.out.println("Exception ..."); System.exit(0); } double d_data = data.doubleValue(); System.out.println("Printing Math values......"); System.out.println("Sin : " + (Math.sin(d_data))); System.out.println("Cos : " + (Math.cos(d_data))); System.out.println("Tan : " + (Math.tan(d_data))); System.out.println("asin : " + (Math.asin(d_data))); System.out.println("acos : " + (Math.acos(d_data))); System.out.println("atan : " + (Math.atan(d_data))); System.out.println("Abs : " + (Math.abs(d_data))); System.out.println("Exp : " + (Math.exp(d_data))); System.out.println("Log : " + (Math.log(d_data))); System.out.println("Sqrt : " + (Math.sqrt(d_data))); System.out.println("Ceil : " + (Math.ceil(d_data))); System.out.println("Floor : " + (Math.floor(d_data))); System.out.println("rint : " + (Math.rint(d_data))); Basic Java 31 . System.out.println("round : " + (Math.round(d_data))); System.out.println("Random Number : " + (Math.random())); } } STRING CLASSES For various reasons (mostly security related), Java implements text strings as classes, rather than forcing the programmer to use character arrays. The two Java classes that represent strings are String and StringBuffer. The String class is useful for working with constant strings that cant change in value or length. The StringBuffer class is used to work with strings of varying value and length. THE STRING CLASS The String class represents character strings. All string literal in Java programs, such as "abc", are implemented as instances of this class. Strings are constant; their values cannot be changed after they are created. String buffers support mutable strings. Because String objects are immutable they can be shared.); The class String includes methods for examining individual characters of the sequence, for comparing strings, for searching strings, for extracting substrings, and for creating a copy of a string with all characters translated to uppercase or to lowercase.. EXAMPLE FOR STRING CLASS public class StringExample { public static void main(String args[]) { String str = new String("Java World"); int length = str.length(); System.out.println("Length of data : "+length); System.out.println("Extracting character..."); for(int index=0;index<length;index++) { char temp = str.charAt(index); System.out.println(temp); Basic Java 32 . } System.out.println("Substring from 3rd position : " +(str.substring(3))); System.out.println("Substring from 3rd to 5th position : " +(str.substring(3,6))); System.out.println("Index of Wor : " + (str.indexOf("Wor"))); System.out.println("Converting to Upper Case : " + (str.toUpperCase())); System.out.println("Replacing 'a' with '*' : " + (str.replace('a','*'))); System.out.println("--------End-------"); } } THE STRINGBUFFER CLASS A string buffer implements a mutable sequence of characters. String buffers are safe for use by multiple threads. The methods are synchronized where necessary so that all the operations on any particular instance behave as if they occur in some serial order. String buffers are used by the compiler to implement the binary string concatenation operator +. For example, the code: x = "a" + 4 + "c" is compiled to the equivalent of: x = new StringBuffer().append("a").append(4).append("c").toString()". Every string buffer has a capacity. As long as the length of the character sequence contained in the string buffer does not exceed the capacity, it is not necessary to allocate a new internal buffer array. If the internal buffer overflows, it is automatically made larger. EXAMPLE FOR STRINGBUFFER public class SBExample { public static void main(String args[]) { String s = new String("Hello"); /** Constructors Basic Java 33 . 1. Empty Constructor will create with initial capacity of 16 characters. 2. Constructor with specified characters as the initial capacity 3. Constructor with specified string as the initial value */ StringBuffer sb1 = new StringBuffer(); StringBuffer sb2 = new StringBuffer(40); StringBuffer sb3 = new StringBuffer(s); //Appending a boolean value sb1.append(true); System.out.println("Value of StringBuffer = " + sb1); //Appending a character sb1.append('c'); System.out.println("Value of StringBuffer = " + sb1); //Appending a character array char c[] = {'H','e','l','l','o'}; sb1.append(c); System.out.println("Value of StringBuffer = " + sb1); sb1.append(c,2,3); System.out.println("Value of StringBuffer = " + sb1); double d = 12.141354; sb1.append(d); System.out.println("Value of StringBuffer = " + sb1); float f = (float)15.1; sb1.append(f); System.out.println("Value of StringBuffer = " + sb1); int i = 1; sb1.append(i); System.out.println("Value of StringBuffer = " + sb1); long l = 1000000; sb1.append(l); System.out.println("Value of StringBuffer = " + sb1); sb1.append(s); System.out.println("Value of StringBuffer = " + sb1); System.out.println("Capacity = " + sb2.capacity()); System.out.println("Character at 5th position = " + sb1.charAt(5)); sb1.getChars(0,4,c,0); System.out.println("Chars extracted from Sb1 = " + c); //Insert the boolean value at the 5th position sb1.insert(5,true); //Insert the character value at the 9th position sb1.insert(9,'M'); Basic Java 34 . System.out.println("Length of the string buffer = " + sb1.length()); sb1.reverse(); System.out.println("Reverse of the String Buffer = " + sb1); sb1.setCharAt(5, 'Y'); System.out.println("Value of String Buffer = " + sb1); } } THE SYSTEM AND RUNTIME CLASSES The System and Runtime classes provide a means for your programs to access system and runtime environment resources. Like the Math class, the System class is final and is entirely composed of static variables and methods. The System class basically provides a systemindependent programming interface to system resources. Examples of system resources include the standard input and output streams, System.in and System.out, which typically model the keyboard and monitor. The Runtime class provides direct access to the runtime environment. An example of a run-time routine is the freeMemory method, which returns the amount of free system memory available. EXAMPLE FOR RUNTIME public class RuntimeExample { public static void main(String args[]) { try { Runtime run = Runtime.getRuntime(); run.exec("notepad.exe"); } catch(Exception err) { System.out.println("Exception " +err.getMessage()); } } } THREAD CLASSES Java is a multithreaded environment and provides various classes for managing and working with threads. Following are the classes and interfaces used in conjunction with multithreaded programs: Thread ThreadDeath ThreadGroup Runnable The Thread class is used to create a thread of execution in a program. The ThreadDeath class is used to clean up after a thread has finished execution. As its name implies, the ThreadGroup class is useful for organizing a group of threads. Basic Java 35 . Finally, the Runnable interface provides an alternate means of creating a thread without subclassing the Thread class. CLASS CLASSES Java provides two classes for working with classes: Class and ClassLoader. The Class class provides runtime information for a class, such as the name, type, and parent superclass. Class is useful for querying a class for runtime information, such as the class name. The ClassLoader class provides a means to load classes into the runtime environment. ClassLoader is useful for loading classes from a file or for loading distributed classes across a network connection. Example to print the class name of an object: void printClassName(Object obj) { System.out.println("The class of " + obj + " is " + obj.getClass().getName()); } Basic Java 36 . CHAPTER- 3: THE UTILITIES PACKAGE java.util The Java utilities, package, which is also known as java.util, provides various classes that perform different utility functions. The utilities package includes a class for working with dates, a set of data structure classes, a class for generating random numbers, and a string tokenizer class, among others. The most important classes contained in the utilities package follow: The Date Class Data Structure Classes The Random Class The StringTokenizer Class The Properties Class The Observer Interface The Enumeration Interface THE DATE CLASS The Date class represents a calendar date and time in a system-independent fashion. The Date class provides methods for retrieving the current date and time as well as computing days of the week and month. EXAMPLE FOR DATE CLASS import java.util.Date; public class DateExample { public static void main(String args[]) { String days[] = {"Sun","Mon","Tue","Wed","Thur","Fri","Sat"}; Date sys_date = new Date(); Date date = new Date(101,8,3); System.out.println("System Date : " + (sys_date.toString())); System.out.println("Specified Date : " + (date.toString())); int day = date.getDay();System.out.println("The day for the specified date : " + days[day]);System.out.println("Does the specified date precede the system date ? ");System.out.println(date.before(sys_date)); }} Basic Java 37 . Subclasses of Calendar interpret a Date according to the rules of a specific calendar system. The JDK GregorianCalendar object whose time fields have been initialized with the current date and time: THE RANDOM CLASS Many programs, especially programs that model the real world, require some degree of randomness. Java provides randomness by way of the Random class. The Random class implements a random-number generator by Basic Java 38 . providing a stream of pseudo-random numbers. A slot machine program is a good example of one that would make use of the Random class. EXAMPLE FOR RANDOM CLASS import java.util.*; public class RandomExample { public static void main(String args[]) { Random random = new Random(); System.out.println("Random number(int):" + (random.nextInt())); System.out.println("Random number(float): " + (random.nextFloat())); System.out.println("Random number(double): " +(random.nextDouble())); System.out.println("Random number(gaussian): " +(random.nextGaussian())); Date date = new Date(); Random seed_random = new Random(date.getTime()); System.out.println("Random number with seed(int): " +(seed_random.nextInt())); System.out.println("Random number with seed(float): " +(seed_random.nextFloat())); System.out.println("Random number with seed(double): " + (seed_random.nextDouble())); System.out.println("Random number with seed(gaussian) : " +(seed_random.nextGaussian())); } } Basic Java 39 . : If the flag is false, delimiter characters serve to separate tokens. A token is a maximal sequence of consecutive characters that are not delimiters. If the flag is true, delimiter characters are considered to be tokens. A token is either one delimiter character, or a maximal sequence of consecutive characters that are not delimiters. The following is one example of the use of the tokenizer. The code: StringTokenizer st = new StringTokenizer("this is a test"); while (st.hasMoreTokens()) { println(st.nextToken()); } prints the following output: this is a test Basic Java 40 . while(tokenizer.hasMoreTokens()) { System.out.println(tokenizer.nextToken()); } } } Basic Java 41 . Integer pos = new Integer(day); System.out.println("Day : " + (hash.get(pos).toString())); } } THE STACK CLASS The Stack class represents a last-in-first-out (LIFO) stack of objects. EXAMPLE FOR STACK CLASS import java.util.*; public class StackExample { public static void main(String args[]) { Stack stack = new Stack(); Date date = new Date(); StringTokenizer tokenizer = new StringTokenizer(date.toString()); System.out.println("tokens : "+tokenizer.countTokens()); while (tokenizer.hasMoreTokens()) { stack.push(tokenizer.nextToken()); } Object obj = stack.peek(); System.out.println("First element in stack - by peek : " + (obj.toString())); System.out.println("Pop out the elements in stack "); while(!stack.empty()) { obj = stack.pop(); System.out.println(obj.toString()); } } } THE VECTOR CLASS. Basic Java 42 . EXAMPLE FOR VECTOR CLASS import java.util.*; public class VectorExample { public static void main(String args[]) { Vector store = new Vector(); String input = ""; char temp = (char)-1; System.out.println("Enter a string "); try { do { temp = (char)System.in.read(); input = input+temp; }while(temp != '\n'); input = input.trim(); }catch(Exception err){} StringTokenizer tokenizer = new StringTokenizer(input); while(tokenizer.hasMoreTokens()) { store.addElement(tokenizer.nextToken()); } System.out.println("Size of the Vector : "+store.size()); System.out.println("Capacity of the Vector : "+store.capacity()); System.out.println("First Element : "+store.firstElement()); System.out.println("Last Element : "+store.lastElement()); Enumeration enum = store.elements(); while(enum.hasMoreElements()) { System.out.println(enum.nextElement().toString()); } store.trimToSize(); System.out.println("Capacity of the vector after trimming : " + (store.capacity())); } } Basic Java 43 . structures and algorithms. Because the various implementations of each interface are interchangeable, programs can be easily tuned by switching implementations. Provides interoperability between unrelated APIs by establishing a common interfaces. Legacy Implementations - The collection classes from earlier releases, Vector and implementations. Convenience Implementations - High-performance "mini-implementations" of the collection interfaces. Abstract Implementations - Partial implementations of the collection interfaces to Basic Java 44 . sorting a list. Infrastructure - Interfaces that provide essential support for the collection interfaces. Array Utilities - Utility functions for arrays of primitives and reference objects. Not, strictly speaking, a part of the Collections Framework, this functionality is being added to the Java platform at the same time and relies on some of the same infrastructure. Collection Interfaces There are six collection interfaces. The most basic interface is Collection . Three interfaces extend Collection : Set , List , and SortedSet . The other two collection interfaces, Map and SortedMap , do not extend Collection , as they represent mappings rather than true collections. However, these interfaces contain collection-view operations, which allow them to be manipulated as collections. tion Collec The Collection interface is the root of the collection hierarchy. A ns implement. Collection is used to pass collections around and Set List and . This interface is the least common denominator that all collectio manipulate them when maximum generality is desired. A Set is a collection that cannot contain duplicate elements. As might expect, this interface models the mathematical set abstraction. It is used to represent sets like the cards comprising a poker hand, the Basic Java 45 . courses making up a student's schedule, or the processes running on List in the List each element is inserted. The eir integer index (position). If you've List. ction interfaces (SortedSet and SortedMap) are merely sorted version Object le interface provides automatic natural order on classes that implement it, while SortedSet for things like word lists and Sorted its mappings in ascending key rowing a runtime exception (UnsupportedOperationException) if they are attempted. Imp their documentation which optional operations they support. Several terms are introduced to aid in this specification: modification operations (such as add, to as unmodifiable. Collections that are not Lists that guarantee that their size remains constant even though the are referred to as fixed-size. Lists that are not fixed-riable-size. a machine. A List is an ordered collection (sometimes called a sequence). Lists can contain duplicate elements. The user of a List generally has precise control over where user can access elements by th, you're already familiar with the general flavor of used VectorMap A Map is an object that maps keys to values. Maps cannot contain duplicate keys: Each key can map to at most one value. If you've used Hashtable, you're already familiar with the general flavor of Map. The last two core colle s of Set and Map Ordering There are two ways to order objects: The Comparab the Comparator interface gives the programmer complete control over object ordering. Note that these are not core collection interfaces, but underlying infrastructure. The last two core collection interfaces: A SortedSet is a Set that maintains its elements in ascending order. Several additional operations are provided to take advantage of the ordering. The SortedSet interface is used membership rolls. Map A SortedMap is a Map that maintains order. It is the Map analogue of SortedSet. The SortedMap interface is used for apps like dictionaries and telephone directories. All of the modification methods in the collection interfaces are labeled optional. Someimplementations may not perform one or more of these operations, th lementations must specify in Collections that do not support any remove and clear) are referred unmodifiable are referred to modifiable. Collections that additionally guarantee that no change in the Collection object will ever be visible are referred to as immutable. Collections that are not immutable are referred to as mutable. elements may change size are referred to as va Basic Java 46 . Some implementations may restrict what elements (or in the case of Maps , keys and values) may be stored. Possible restrictions include requiring elements to: Be of a particular type. Be non-null. Obey som edic e. Atte lement tha n imple entation's restrictions results in a runtime excep ClassCastException lArgum ntException or gh table below: Implementations e ar mpting to ad d an e tion, t violates a m e typically a , an Illega bitrary pr at a NullPointerException. Attempting to remove or test for the presence of an elementthat violates an implementation's restrictions may result in an exception, thousome "restricted collections" may permit this usage. Collection Implementations Class that implement the collection interfaces typically have names of the form <Implementation-style><Interface>. The general-purpose implementations are summarized in the Hash TableResizable ArrayBalanced Tree Linked List Set HashSet TreeSet List ArrayList LinkedListInterfaces Map HashMap TreeMap The general-purpose implementations support all of the optional operations in the collection interfaces, and have no restrictions on the elements they may contain. The AbstractCollectionAbstractSetAbstractListAbstractSequentialList AbstractMap , , , and classes provide skeletal implementations of the core collection interfaces, to minimize the effort required to implement them. The API documentation for these classes describes precisely how each method is implemented so the De The ma an API that was reasonably small, both in size, and not , rather he same time, the new API had to be powerful eption to indicate that they do not support a specified optional operation. Of course, collection implementers must clearly document which optional operations are supported by an implementation. To keep the number of methods in each core interface small, an interface contains a method only if either: implementer knows which methods should be overridden, given the performance of the "basic operations" of a specific implementation. sign Goals in design goal was to produce , more importantly, in "conceptual weight." It was critical that the new functionality seem alien to current Java programmers; it had to augment current facilitiesthan replacing them. At t enough to provide all the advantages described above. To keep the number of core interfaces small, the interfaces do not attempt to capture such subtle distinctions as mutability, modifiability, resizability. Instead, certain calls in the core interfaces are optional, allowing implementations to throw an UnsupportedOperationExc Basic Java 47 . 1. It is a truly fundamental operation: a basic opcould be reasonably defined, erations in terms of which others includes methods to allow i rrays, arrays to be viewed as collections, and maps to be w 2. There is a compelling performance reason why an important implementation would want to override it. It was critical that all reasonable representations of collections interoperate well. This included arrays, which cannot be made to implement the Collection interface directly ithout changing the language. Thus, the framework wcollect ons to be dumped into a vieed as collections. Basic Java classes for inputting streams of data, outputting , and tokenizing streams of data. The most s a ms. Java ile. The File class iles that takes into account system-dependent features. for reading and ing data to and from a file; it is only useful for querying and modifying the ibutes of a file. In actuality, you can think of the File class data as representing a ame, and the class methods as representing operating system commands that t on filenames. a variety of methods for reading and writing stances of this class represent the name of a file or directory on the host file ame, which can either be an absolute pathname directory. The pathname must follow tion that deals with most of the machin fashion Note th me or path is used it is assumed that the host's file ing . rt java.io.*; t java.util.Date; The Java I/O package, also known as java.io, provides classes witreading and writing data to and from different input and output devices, includin I/O package include he streams of data, working with filesant classes contained in the I/O package follows: import Input Stream Classes Output Stream Classes File Classes The StreamTokenizer Class FILE CLASSES re the most widely used method of data storage in computer syste Filesup ports files with two different classes: File and RandomAccessFprovides an abstraction for f The File class keeps up with information about a file including the location where it is stored and how it can be accessed. The File class has no methods writ attr filen ac The RandomAccessFile class provides d ata to and from a file. RandomAccessFile contains many different methods for reading and writing different types of information, namely the data type wrappers. HE FILE CLASS T In system. A file is specified by a pathn or a pathname relative to the current working the naming conventions of the host platform. The Fi le class is intended to provide an abstrac e dependent complexities of files and pathnames in a machine-independent . at whenever a filena conventions are used nam Example for File im poimpor public class FileExample { public static void main(String args[]) { Basic Java 49 . if(args.length == 0) xample ing filename = args[0]; File file = new File(filename); ists()); date = new Date(file.lastModified()); Last Modified : "+date.toString()); System.out.println("Absolute Path : ile.getAbsolutePath()); System.out.println("Length of file : "+file.length()); n("Parent : "+file.getParent()); } TH CLASS Ins rt both reading and writing to a random access file. An app in the file at which the next read or write occurs. Thi rity by offering methods that allow specified mode acc te to files. Example for RandomAccessFile mp rt java.io.*; System.exit(0); File srcfile = new File(sourcefile); File destfile = new File(destinfile); RandomAccessFile srcrdm = new RandomAccessFile(srcfile,"r"); RandomAccessFile dstrdm =new RandomAccessFile(destfile,"rw"); { System.out.println("Usage : java FileE<filename>"); System.exit(0); } Str try { System.out.println("File Exists : "+file.ex Date System.out.println(" "+f System.out.printl} c atch(Exception err) { System.out.println("Exception file"); } } E RANDOMACCESSFILE tances of this class suppo lication can modify the position f secu s class provides a sense o esses of read-only or read-wri o i public class RandomAccessExample { public static void main(String args[]) { if(args.length != 2) { System.out.println("Usage : "); System.out.println("java RandomAccessExample <sourcefile> <destfile>"); } String sourcefile = args[0]; String destinfile = args[1]; String data = ""; ry t{ in accessing Basic Java 50 . System.out.println("The size of the file "+srcrdm.length()); System.out.println("The file pointer is at "+srcrdm.getFilePointer()); data = srcrdm.readLine(); intln("File successfully copied"); ystem.out.println("Open the destination file to view the reading data from an input source. An input ring, memory, or anything else that contains data. The input SequenceInputStream StringBufferInputStream counterpart to input streams and handle writing data to an input sources, output sources include files, strings, memory, n contain data. The output stream classes defined in java.io while(!data.equals("")) { dstrdm.writeBytes(data); data = srcrdm.readLine(); } ystem.out.pr S S result"); catch(Exception err) { } } }} INPUT STREAM CLASSES Java uses input streams to handle source can be a file, a ststream classes follow: InputStream BufferedInputStream ByteArrayInputStream tStream DataInpuFileInpu tStream putStream FilterIn LineNumberInputStream PipedInputStream PushbackInputStream The InputStream class is an abstract class that serves as the base class for all input streams. The InputStream class defines an interface for reading streamed bytes of data, finding out the number of bytes available for reading, and moving the stream position pointer, among other things. All the other input streams provide support for reading data from different types of input devices. UTPUT STREAM CLASSES O Output streams are the output source. Similar to anything else that ca and follow: OutputStream BufferedOutputStream ByteArrayOutputStream DataOutputStream Basic Java 51 . FileOutputStream FilterOutputStream Ou class that serves as the base class for all output n interface for writing streamed bytes of data o other output streams provide support for writing data to written by an output stream is formatted to be read by BUFFEREDINPUTSTREAM CLASS class implements a buffered input stream. By setting up such an input stream, stream without necessarily causing a call to the ad. The data is read by blocks into a buffer; ng args[]) { te buf[] = new byte[10]; tion e) { he class implements a buffered output stream. By setting up such an output stream, cau yte written. The data is written into b uffer reaches its capacity, e buffer output stream is closed, or the buffer output stream is explicity flushed. ing args[]) / Copy the string into a byte array ance, spider!\n); e[] buf = new byte[64]; PipedOutputStream PrintStream The tputStream class is an abstractam defines a streams. OutputStreutput source. All the to an different output devices. Dataput stream. an in E TH he T an application can read bytes from a nderlying system for each byte re u subsequent reads can access the data directly from the buffer. Example for BufferedInputStream import java.io.*; class BufferedExample { ublic static void main (Stri p BufferedInputStream in = new dInputStream(System.in); Buffereby try { in.read(buf, 0, 10); } catch (Excep System.out.println(Error: + e.toString()); } 0); System.out.println(s); String s = new String(buf, } } THE BUFFEREDOUTPUTSTREAM CLASS T an application can write bytes to the underlying output stream without necessarily sing a call to the underlying system for each b a uffer, and then written to the underlying stream if the b th E xample for BufferedOutputStream Class import java.io.*; class WriteStuff { public static void mai n (Str{ Basic Java 52 . s.getBytes(0, s.length(), buf, 0); // Output the byte array (buffered) BufferedOutputStream out = new BufferedOutputStream(System.out); try { out.write(buf, 0, 64); out.flush(); } catch (Exception e) { System.out.println(Error: + e.toString()); an application read primitive Java data types from an a machine-independent way. An application uses a data ata that can later be read by a data input stream. ta input streams and data output streams represent Unicode strings in a format at is a slight modification of UTF-8. ll characters in the range '\u0001' to '\u007F' are represented by a single byte: he null character '\u0000' and characters in the range '\u0080' to '\u07FF' are e '\u0800' to '\uFFFF' are represented by three bytes: bits 12-1510bits 61110bits 0-5 he "standard" UTF-8 format are the d 3-byte formats are used. m and DataOutputStream Stream; ort java.io.File; mport java.io.IOException; public class DataIOApp { } } } THE DATAINPUTSTREAM CLASS A data input stream lets in underlying input stream output stream to write d Da th A 0bits 0-7 T represented by a pair of bytes: 110bits 6-1010bits 0-5 Characters in the rang 1110 The two differences between this format and tfollowing: The null byt e '\u0000' is encoded in 2-byte format rather than 1-byte, so that the encoded strings never have embedded nulls. yte, 2-byte, an Only the 1-b mple for DataInputStrea Exa ort java.io.DataInputStream; imp import java.io.DataOutputStream; Stream; import java.io.FileInputort java.io.FileOutput impmp ii Basic Java 53 . public static void main(String args[]) throws IOException { Stream outFile = new FileOutputStream(file); DataOutputStream outStream = new DataOutputStream(outFile); (true); t(123456); har('j'); Double(1234.56); tes were written"); ); aInputStream inStream = new DataInputStream(inFile); readBoolean()); .readInt()); nStream.readChar()); inStream.readDouble()); nput stream for reading data from a File or from a ple for FileInputStream ort java.io.*; lass ReadFile ublic static void main (String args[]) ream(Grocery.txt); n.read(buf, 0, 64); + e.toString()); ; File file = new File("test.txt"); FileOutput outStream.writeBoolean outStream.writeIn outStream.writeCtStream.write ou System.out.println(outStream.size()+" bytStream.close(); ou outFile.close(); InputStream inFile = new FileInputStream(file Filet Da System.out.println(inStream.stem.out.println(inStream Sy System.out.println(istem.out.println( Sy inStream.close(); File.close(); in file.delete(); } } THE FILEINPUTSTREAM CLASS A file input stream is an iescriptor. FileD Exam imp c { p { byte buf[] = new byte[64]; try { FileInputStream in = new FileInputSt i } catch (Exception e) { stem.out.println(Error: Sy } ing s = new String(buf, 0) Str System.out.println(s); }} Basic Java 54 . THE FILEOUTPUTSTREAM CLASS is an output stream fo A file output stream FileDescriptor. r writing data to a File or to a ead the user input byte buf[] = new byte[64]; y + e.toString()); ry putStream out = new FileOutputStream(Output.txt); ut.write(buf); atch (Exception e) toString()); tes read are a Stream; o.IOException; in(String args[]) throws IOException ArrayOutputStream(); tring s = "This is a test."; for(int i=0;i<s.length();++i) outStream.write(s.charAt(i)); System.out.println("outstream: "+outStream); Example for FileOutputStream import jav a.io.*; class WriteFile { public static void main (String args[]) { // R tr { System.in.read(buf, 0, 64); } catch (Exception e) { System.out.println(Error: } // Output the data to a file t { FileOut o } c { System.out.println(Error: + e. } } } THE BYTEARRAYINPUTSTREAM CLASS This class allows an application to create an input stream in which the bypplications can also read bytes from supplied by the contents of a byte array. A string by using a StringBufferInputStream. Example for Byte Array input/output stream import java.io.ByteArrayInput import java.io.ByteArrayOutputStream; import java.i public class ByteArrayIOApp { public static void ma { ByteArrayOutputStream outStream = new Byte S Basic Java 55 . System.out.println("size: "+outStream.size()); ByteArrayInputStream inStream; System.out.println("inStream has "+inBytes+" available byte inBuf[] = new byte[inBytes]; Buf,0,inBytes); bytes were read"); "+new String(inBuf)); ASS lass implements a character buffer that can be used as a characterinput . xample for CharArrayReader and Writer port java.io.CharArrayReader; for(int i=0;i<s.length();++i) System.out.println("outstream: "+outStream); am.size()); eam.toCharArray()); ngBuffer(""); ) != -1) inStream = new ByteArrayInputStream(outStream.toByteArray()); int inBytes = inStream.available(); bytes"); int bytesRead = inStream.read(inad+" System.out.println(bytesRe System.out.println("They are: } } BYTEARRAYOUTPUTSTREAM CL THE s class implements an output stream in which the data is written into a byte array. Thi The buffer automatically grows as data is written to it. The data can be retrieved using toByteArray() and toString(). THE CHARARRAYREADER CLASS This cm strea E im import java.io.CharArrayWriter; import java.io.IOException; public class CharArrayIOApp { public static void main(String args[]) throws IOException { CharArrayWriter outStream = new CharArrayWriter(); String s = "This is a test."; outStream.write(s.charAt(i)); System.out.println("size: "+outStre CharArrayReader inStream; inStream = new CharArrayReader(outStr int ch=0; StringBuffer sb = new Stri while((ch = inStream.read()sb.append((char) ch); s = sb.toString(); ystem.out.println(s.length()+" characters were read"); S System.out.println("They are: "+s); } } Basic Java 56 . THE LINENUMBERREADER CLASS mbers. A line is nsidered to be terminated by any one of a line feed ('\n'), a carriage return ('\r'), or ed. ("LineNumberIOApp.java"); NumberReader(inFile); tLine); HE PUSHBACKINPUTSTREAM CLASS buffer before reading from the underlying input stream. ent of code should read an indefinite number f data bytes that are delimited by particular byte values. After reading the h it back, so that the next read operation ream; IOException m outStream = new tream(); s is a test."; i) i)); A buffered character-input stream that keeps track of line nu co a carriage return followed immediately by a linefe Example for LineNumberReader import java.io.LineNumberReader; import java.io.FileReader; im port java.io.BufferedWriter; import java.io.IOException; public class LineNumberIOApp { p ublic static void main(String args[]) throws IOException { FileRead er inFile = new FileReader LineNumberReader inLines = new Line String inputLine; while ((inputLine=inLines.readLine()) != null) { System.out.println(inLines.getLineNumber()+". "+inpu } } } T This class is an input stream filter that provides a buffer into which data can be "unread." An application may unread data at any time by pushing it back into the buffer, as long as the buffer has sufficient room. Subsequent reads will read all of the pushed-back data in the This functionality is useful when a fragm o terminating byte the code fragment can pus on the input stream will re-read that byte. Ex ample PushbackInputStream and OutputStream Classes imp ort java.io.PushbackInputStream; import java.io.ByteArrayInputStream; imp ort java.io.ByteArrayOutputStimport java.io.IOException; public class PushbackIOApp { public static void main(String args[]) throws { ByteArrayOutputStrea By teArrayOutputS String s = "Thi for(int i=0;i<s.length();++ outStream.write(s.charAt( Basic Java 57 . System.out.println("outstream: "+outStream); Stream.size()); ArrayInputStream inByteArray; yteArray = new Stream inStream; w PushbackInputStream(inByteArray); har ch = (char) inStream.read(); System.out.println("First character of inStream is "+ch); inStream.unread((int) 't'); ilable(); for(int i=0;i<inBytes;++i) inBuf[i]=(byte) inStream.read(); HE SEQUENCEINPUTSTREAM CLASS he sequence input stream class allows an application to combine several input inp tically switches to xa s[]) throws IOException f1 = new p.java"); eInputStream("FileIOApp.java"); System.out.println("size: "+out Byte inB ByteArrayInputStream(outStream.toByteArray()); PushbackInput inStream = ne c int inBytes = inStream.ava System.out.println("inStream has "+inBytes+" available bytes"); byte inBuf[] = new byte[inBytes]; System.out.println("They are: "+new String(inBuf)); } } T T streams serially and make them appear as if they were a single input stream. Each ut stream is read from, in turn, until it reaches the end of the stream. The sequence input stream class then closes that stream and automa the next input stream. mple for SequenceInputStream E import java.io.FileInputStream; import java.io.SequenceInputStream; mport java.io.IOException; i ublic class SequenceIOApp p { public static void main(String arg { am; SequenceInputStream inStre FileInputStream FileInputStream("ByteArrayIOApFileInputStream f2 = new Fil inStream = new SequenceInputStream(f1,f2); boolean eof = false; int byteCount = 0; hile (!eof) w { int c = inStream.read(); if(c == -1) eof = true; Basic Java 58 . else { System.out.print((char) c); ount+" bytes were read"); ; n input stream and parses it into "tokens", controlled by a ber of flags that can be set to various states. The stream tokenizer ntifiers, numbers, quoted strings, and various comment styles. regarded as a character in the range '\u0000' d to look up five possible attributes of the alphabetic, numeric, string quote, and comment character. character can have zero or more of these attributes. addition, an instance has four flags. These flags indicate: d. Whether the se. t constructs an instance of this class, sets up the syntax dly loops calling the nextToken method in each iteration of EOF. okenizer; reamReader; ic static void main(String args[]) throws IOException .in)); r(inData); ++byteCount; } } System.out.println(byteC inStream.close(); f1.close(); f2.close() } } THE STREAMTOKENIZER CLASS The StreamTokenizer class takes a allowing the tokens to be read one at a time. The parsing process is table and a num can recognize ide Each byte read from the input stream is through '\u00FF'. The character value is usecharacter: white space, Each In Whether line terminators are to be returned as tokens or treated as white space that merely separates tokens. Whether C-style comments are to be recognized and skipped. Whether C++-style comments are to be recognized and skippe characters of identifiers are converted to lowerca A typical application firsbles, and then repeate ta the loop until it returns the value TT_ Example for StreamTokenizer Class import java.io.StreamTport java.io.InputSt im import java.io.BufferedReader; OException; import java.io.I public class StreamTokenApp { ubl p { BufferedReader inData = new BufferedReader(new InputStreamReader(System StreamTokenizer inStream = new StreamTokenize inStream.commentChar('#'); Basic Java 59 . boolean eof = false; int token=inStream.nextToken(); case inStream.TT_EOF: case inStream.TT_EOL: break; case inStream.TT_WORD: ("Word: "+inStream.sval); break; System.out.println("Number: "+inStream.nval); break; default: System.out.println((char) token+" encountered."); true; } t stream and aving the other thread read the data through a piped input stream. ile file = new File(filename); public class PipedExample { public static void main(String args[]) { if(args.length == 0) { System.out.println("Usage : java PipedExample <filename>"); System.exit(0); } String filename = args[0]; try { F FileInputStream fis = new FileInputStream(file); byte store[] = new byte[fis.available()]; byte p_store[] = new byte[fis.available()]; fis.read(store,0,store.length); do { switch(token) { System.out.println("EOF encountered."); eof = true; break; System.out.println("EOL encountered."); System.out.println case inStream.TT_NUMBER: if(token=='!') eof= } while(!eof); } } THE PIPEDINPUTSTREAM CLASS A piped input stream is the receiving end of a communications pipe. Two threads can communicate by having one thread send data through a piped outpu h Example for Piped Input/Output Operations import java.io.*; Basic Java 60 . PipedOutputStream piped_out = new PipedOutputStream(); PipedInputStream piped_in = new PipedInputStream(piped_out); piped_out.write(store,0,store.length); ystem.out.println("Result stored in file 'output.txt'"); ("ouput.txt"); 0,p_store.length); ---"); or use in debugging, and for compatibility with PrintWriter class. -serializable classes will be initialized using piped_in.read(p_store,0,p_store.length); S FileOutputStream fos = new FileOutputStream fos.write(p_store, System.out.println("----------End-----} catch(Exception err) { System.out.println("Exception in accessing file"); } } } THE PRINTSTREAM CLASS Print values and objects to an output stream, using the platform's default character encoding to convert characters into bytes. If automatic flushing is enabled at creation time, then the stream will be flushed each time a line is terminated or a newline character is written. Methods in this class never throw I/O exceptions. Client code may inquire as to hether any errors have occurred by invoking the checkError method. w Note: This class is provided primarily fxisting code; new code should use the e THE SERIALIZABLE INTERFACE'spublic, protected, and (if accessible) package fields. The subtype may assume this responsibility only if the class it extends has an accessible no-arg constructor to initialize the class's state. It is an error to declare a class Serializable in this case. he error will be detected at runtime. T uring deserialization, the fields of non D thepublic or protected no-arg constructor of the class. A no-arg constructor must be accessible to the subclass that is serializable. The fields of serializable subclasses will be restored from the stream. Basic Java 61 .) r writing the state of the object for its articular class so that the corresponding readObject method can restore it. The riteObject. The method does not need to concern itself with the state elonging to its superclasses or subclasses. State is saved by writing the individual adObject ethod uses information in the stream to assign the fields of the object saved in the DataOutput. HE EXTENALIZABLE INTERFACE to write the object's nd to read them back. The Externalizable interface's plemented by a class to give the class tents of the stream for an object and its ust explicitly coordinate with the supertype to save its ate. ject to be stored is tested for e Externalizable interface. If the object supports it, the writeExternal method is HE OBJECTINPUTSTREAM CLASS serializes primitive data and objects previously written using utStream can provide an of objects when used with a throws IOException private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException; The writeObject method is responsible fo p default mechanism for saving the Object's fields can be invoked by calling out.defaultW bRe T Externalization allows a class to specify the methods to be used contents to a stream a writeExternal and readExternal methods are im complete control over the format and con supertypes. These methods m st Object Serialization uses the Serializable and Externalizable interfaces. Object persistence mechanisms may use them also. Each ob th called. If the object does not support Externalizable and does implement Serializable the object should be saved using ObjectOutputStream. When an Externalizable object is to be reconstructed, an instance is created using thepublic no-arg constructor and the readExternal method called. Serializable objects are restored by reading them from an ObjectInputStream. T An ObjectInputStream de an ObjectOutputStream. ObjectOutputStream and ObjectInp application with persistent storage for graphs Basic Java 62 . FileOutputStream and FileInputStream respectively. ObjectInputStream is used to g the standard mechanisms. ut. lds declared as transient or static re ignored by the deserialization process. References to other objects cause those g constructors are voked for the nonserializable classes and then the fields of the serializable classes the serializable class closest to va.lang.object and finishing with the object's most specifiec class. the example in ObjectOutputStream: putStream("t.tmp"); utStream(istream); ject(); Object(); tion to save and restore ation and deserialization process should am stream) ream stream) FoundException; usinp The default deserialization mechanism for objects restores the contents of each field to the value and type it had when it was written. Fie a-ar in are restored from the stream starting with ja For example to read from a stream as written by FileInputStream istream = new FileIn ObjectInputStream p = new ObjectInp int i = p.readInt(); String today = (String)p.readOb Date date = (Date)p.read istream.close(); Cla sses control how they are serialized by implementing either thejava.io.Serializable or java.io.Externalizable interfaces. Imp lementing the Serializable interface allows object serializathe entire state of the object and it allows classes to evolve between t he time the stream is written and the time it is read. It automatically traverses refe rences between objects, saving and restoring entire graphs. Serializable classes that require special handling during the serializ im plement both of these methods: private void writeObject(java.io.ObjectOutputStre throws IOException; bjectInputSt private void readObject(java.io.O throws IOException, ClassNot Basic Java 63 . The readObject method is responsible for reading and restoring the state of the written to the stream by the corresponding not need to concern itself with the state sses. State is restored by reading data from the individual fields and making assignments to the object. Reading primitive data types is supported by taInput. t plement the java.io.Serializable interface. Subclasses of Objects that are not able class must have a se it is the responsibility class. It is le (public, package, or get and set methods that can be used to restore the exception that occurs while deserializing an object will be caught by the ectInputStream and abort the reading process. e lized form. The methods of e interface, writeExternal and readExternal, are called to save and ts state. When implemented by a class they can write and read their sing all of the methods of ObjectOutput and ObjectInput. It is the the objects to handle any versioning that occurs. ObjectInputStream; ObjectOutputStream; Serializable; FileInputStream; FileOutputStream; File; rt java.io.IOException; args[]) throws IOException, t.txt"); tream outFile = new FileOutputStream(file); jectOutputStream outStream = new ectOutputStream(outFile); ',0.0001,"java"); s is a test."; te(); object for its particular class using data writeObject method. The method does belonging to its superclasses or subcla the ObjectInputStream for appropriate fields of the Da Serialization does not read or assign values to the fields of any object that does no im serializable can be serializable. In this case the non-serializ no-arg constructor to allow its fields to be initialized. In this ca of the subclass to save and restore the state of the non-serializable frequently the case that the fields of that class are accessib protected) or that there are state. y AnObj Im plementing the Externalizable interface allows the object to assume complettrol over the contents and format of the object's seria con the Externalizabl objec restore the own state u responsibility of Example for ObjectInput and Output Streams import java.io. import java.io. import java.io. import java.io. import java.io.ort java.io. impmpo i import java.util.Date; public class ObjectIOApp { public static void main(String ClassNotFoundException { = new File("tes File file ileOutputS FOb bj O TestClass1 t1 = new TestClass1(true,9,'Aests2 t2 = new TestClass2(); TClas String t3 = "Thi Date t4 = new Da Basic Java 64 . outStream.writeObject(t1); eam.writeObject(t2); tStream.close(); putStream(file); = new ObjectInputStream(inFile); println(inStream.readObject()); stem.out.println(inStream.readObject()); ystem.out.println(inStream.readObject()); System.out.println(inStream.readObject()); inFile.close(); lass TestClass1 implements Serializable double d; this.i = i; lueOf(c)+" "; } ializable estClass1 tc2; outStr outStream.writeObject(t3); outStream.writeObject(t4); ou outFile.close(); FileInputStream inFile = new FileIn ObjectInputStream inStream System.out. Sy S inStream.close(); file.delete(); } } c { boolean b; int i; char c; String s; TestClass1(boolean b,int i,char c,double d,String s) { this.b = b; this.c = c; this.d = d; this.s = s; } public String toString() { String r = String.valueOf(b)+" "; r += String.valueOf(i)+" "; r += String.va r += String.valueOf(d)+" "; r += String.valueOf(s); return r; } class TestClass2 implements Ser { int i; TestClass1 tc1; T Basic Java 65 . TestClass2() tc1 = new TestClass1(true,2,'j',1.234,"Java"); A"); String r = String.valueOf(i)+" "; r += tc1.toString()+" "; n be written to streams. he class of each serializable object is encoded including the class name and n to the stream using the appropriate methods . an object writes the class of the object, the nt and non-static fields. References FileOutputStream ostream = new FileOutputStream("t.tmp"); tStream p = new ObjectOutputStream(ostream); p.writeObject("Today"); { i=0; tc2 = new TestClass1(false,7,'J',2.468,"JAV } public String toString() { r += tc2.toString(); return r; } } THE OBJECTOUTPUTSTREAM CLASS reconsituted on another host or in another process. Only objects that support the java.io.Serializable interface ca writteom DataOutput. Strings can also be written using the writeUTF method fr The default serialization mechanism for lass signature, and the values of all non-transie c to other objects (except in transient or static fields) cause those objects to be written also. Multiple references to a single object are encoded using a reference sharing mechanism so that graph of objects can be restored to the same shape as when the original was written. For example to write an object that can be read by the example in ObjectInputStream: ObjectOutpu p.writeInt(12345); p.writeObject(new Date()); p.flush(); ostream.close(); Basic Java 66 . Classes that require special handling during the serialization and deserialization these exact signatures: throws IOException, ClassNotFoundException; eObject method is responsible for writing the state of the object for its thod can restore it. The itself with the state belonging to the object's by writing the individual fields to the ethod or by using the methods for lization does not write out the fields of any object that does not implement the .io.Serializable interface. Subclasses of Objects that are not serializable can be erializable. In this case the nonserializable class must have a no-arg constructor to the responsibility of the subclass to alizable class. It is frequently the case that rm. The methods of the Externalizable n InputStreamReader is a bridge from byte streams to character streams: It reads platform's efault)); Example for InputStreamReader import java.io.InputStreamReader; import java.io.BufferedReader; process must implement special methods with private void readObject(java.io.ObjectInputStream stream) private void writeObject(java.io.ObjectOutputStream stream) throws IOException The writ par ticular class so that the corresponding readObject memethod does not need to concern superclasses or subclasses. S tate is saved ObjectOutputStream using the writeObject m primitive data types supported by DataOutput. Seria java s a llow its fields to be initialized. In this case it is save and restore the state of the non-seri th e fields of that class are accessible (public, package, or protected) or that there are get and set methods that can be used to restore the state. Serialization of an object can be prevented by implementing writeObject and re adObject methods that throw the NotSerializableException. The exception will be caught by the ObjectOutputStream and abort the serialization process. Implementing the Externalizable interface allows the object to assume complete control over the contents and format of the object's serialized fo in terface,. THE INPUTSTREAMREADER CLASS A b ytes and translates them into characters according to a specified characterencoding. The encoding that it uses may be specified by name, or the d Basic Java 67 . import java.io.IOException; public class InputConversionApp { public static void main(String args[]) throws IOException { InputStreamReader in = new InputStreamReader(System.in); BufferedR eader inStream = new BufferedReader(in); intln("Encoding: "+in.getEncoding()); ne; tem.out.println(inputLine); } while (inputLine.length() != 0); EAMWRITER CLASS ranslating characters into bytes according to a h OutputStreamWriter incorporates its own ter streams to byte streams. The en may be specified by name, by providing a CharToByteConverter, or by accepting the default encoding, which is defined by the invocation of a write() method causes the encoding converter to be invoked on e given character(s). The resulting bytes are accumulated in a buffer before being ing output stream. The size of this buffer may be specified, but y default it is large enough for most purposes. Note that the characters passed to (System.out)); System.out.pr String inputLi do { System.out.print(">"); System.out.flush(); inputLine=inStream.readLine(); Sys } } THE OUTPUTSTR Write characters to an output stream, t specified character encoding. Eac CharToByteConverter, and is thus a bridge from charac coding used by an OutputStreamWriter system property file.encoding. Each th written to the underly b the write() methods are not buffered. For top efficiency, consider wrapping an OutputStreamWriter within a BufferedWriter so as to avoid frequent converter invocations. For example, Writer out = new BufferedWriter(new OutputStreamWriter Basic Java 68 . CHAPTER 5: APPLET PROGRAMMING a Web page or viewed by the Java Applet their r. and applications. The most important of an existing class. This class is called e to extend Applet in order for a class to be usable as for which is ou should see that the applet HelloWorld is quite different tion. ation , you ran them using the Java r, don't run from the command line; they are executed applet into the browser, you need to embed what are s" WIDTH ="200" HEIGHT="200"> at the file name be the same as the class file. This is HTML file can contain several An applet is a small program that is intended not to be run on its own, but rather to be embedded inside another application. The Applet class must be the superclass of ny applet that is to be embedded in a Viewer. The Applet class provides a standard interface between applets andnvironment. e elloWorld Applet H pplets can be run in a browser such as Netscape Navigator, Internet Explore A Several differences exist between applets hese is that Java applet classes extend t java.applet.Applet. You havuch. s ne of the simplest applets is the HelloWorld applet, the source code O shown below. Right away y from the HelloWorld applica HelloApplet.java import java.applet.Applet; import java.awt.Graphics; public class HelloApplet extends Applet { public void paint (Graphics g) { g.drawString ("Hello World!",0,50); } } Creating HTML file hen you created the HelloWorld applic W interpreter. Applets, howeveithin a browser. To get the w known as HTML tags into an HTML file. The HTML file can then be read into a browser. The simplest HTML file for the HelloApplet class is shown. HTML> < <BODY> APPLET CODE="HelloApplet.clas < </APPLET> </BODY> </HTML> ith Java files, it is necessary th W not necessary with the HTML file. In fact, a single APPLET> tags. < Basic Java 69 . Using AppletViewer ow, to run the applet, the JDK includes a very simplified version of a browser called o run the HelloApplet program using Appletviewer, on the command line type: lloApplet in it. CE CODE port java.applet.Applet; he import statement is a new one. Often it is necessary or easier to use the ce t work yourself. The import statement enables you to use these other classes. If ou are familiar with the C/C++ #include declaration, the import statement works in is specific to applets. In fact, in order for any class to be run in browser as an applet, it must extend java.applet.Applet. hics class. va.awt.Graphics contains all kinds of tools for drawing things to the screen. In fact, lass lass that xtends another class is placed at the bottom of the existing chain. ublic class HelloApplet extends Applet { ou may think this is harping the issue, but it's important: all applets must extend still have xtended it using the full name: ublic class HelloApplet extends java.applet.Applet { N Appletviewer. Appletviewer looks for <APPLET> tags in any given HTML file and opens a new window for each of them. T appletviewer HelloApplet.html Appletviewer opens a new window and runs He UNDERSTANDING THE SOUR Importing Other Classes The first thing to notice are the top two lines of the code: im import java.awt.Graphics; T contents of a class file, which have already been created, rather than try to reprodu tha y somewhat the same way. In the case of the HelloApplet program, there are two classes that are used other than HelloApplet. The first is the java.applet.Applet class. The Applet class contains all the information that a The second class that is imported into HelloApplet is the java.awt.Grap ja the screen is treated as a Graphics object. Declaring an Applet C You may have noticed that there is a slight difference between this class declaration for the HelloApplet class. HelloApplet extends Applet. extends is the keyword for saying that a class should be entered into that class hierarchy. In fact, a c e p Y java.applet.Applet. However, because you imported the Applet class, you can simply call it Applet. If you had not imported java.applet.Applet, you could e p Basic Java 70 . Applet Methodspaint he next item to notice about the HelloApplet class versus HelloWorld is that er lies in the fact that the applets don't start up themselves. They are being dded to an already running program (the browser). The browser has a predefined eans for getting each applet to do what it wants. It does this by calling methods that paint. e the browser needs to display the applet on the screen, so you can use the paint method to display anything. The browser helps out by passing a Graphics object to the paint method. This object gives the paint method a way to display items directly to the screen. The next line shows an example of using the Graphics object to draw text to the screen: g.drawString ("Hello World!",0,50); } BRIEF LIFE OF AN APPLET The paint method is not the only method that the browser calls of the applet. You can override any of these other methods just like you did for the paint method in the HelloWorld example. When the applet is loaded, the browser calls the init() method. This method is only called once no matter how many times you return to the same Web page. After the init() method, the browser first calls the paint() method. This means that if you need to initialize some data before you get into the paint() method, you should do so in the init() method. Next, the start() method is called. The start() method is called every time an applet page is accessed. This means that if you leave a Web page and then click the Back button, the start() method is called again. However, the init() method is not. When you leave a Web page (say, by clicking a link), the stop() method is called. Finally, when the browser exits all together the destroy() method is called. Notice that unlike the paint(Graphics g) method, the init(), start(), stop(), and destroy() methods do not take any parameters between the parentheses. The java.applet.AppletContext Interface This interface corresponds to an applet's environment: the document containing the applet and the other applets in the same document. The methods in this interface can be used by an applet to obtain information about its environment. T HelloApplet doesn't have a main method. Instead, this applet only has a paint method. How is this possible? The answ a m it knows the Applet has. One of these is public void paint (Graphics g) { The paint method is called any tim Basic Java 71 . The java.applet.AppletStub Interface When an applet is first created, an applet stub is attached to it using the applet's Multiple is mixed gether to produce a composite. setStub method. This stub serves as the interface between the applet and the browser environment or applet viewer environment in which the application is running. The java.applet.AudioClip Interface The AudioClip interface is a simple abstraction for playing as ound clip.AudioClip items can be playing at the same time, and the resulting sound to Basic Java 72 . Object ComponeContainerPanel Button Window TextComponeTextFieldDialog Frame MenuCompone Choice List TextArea MenuItemMenu MenuBar Basic Java 73 . A Simple AW T Applet import java.awt.*; import java.applet.Applet; public class Example1 extends Applet add(hiButton); ght=100></applet> ton component is created with the label, Click Me! this case an applet). prisingly little e is hidden behind the e g basic components, its relatively easy to keep things simple. to extend the functionality of the basic components, the e increases. ainer. A container is omponents (and even other containers) can e placed. This can go on endlessly: A component is added to a container, which is omponents are the building blocks from which all programs using the AWT are built. bet Thi gs about all components: terface components encompass all of the standard widgets or controls normally ssociated with a windowing system. Examples of these include buttons, text labels, crollbars, pick lists, and text-entry fields. { Button hiButton; public void init() { hiButton = new Button(Click Me!); } } applet code=Example1.class width=250 hei < It is not important at this point to understand exactly what every line means. Instead, try to get a general feel for what is going on. The example is doing the following: 1. A But 2. The Button is added to the container (inFor a program with a user interface that prod uces output, there is surcode here. Almost all the real work of handling the user interfac scnes. If you are usi nHowever, if you want complexity of your cod When a component is created, it usually is added to a cont simply an area of the screen in which c b added to another container, and so on. We will, in fact, be doing just this in the calculator example at the end of the chapter. This flexibility is one of the biggest advantages of programming the AWT. In an object-oriented programming environment, it makes sense to think of the user interface as actual objects and concentrate on relationships between objects. This is exactly what the AWT lets you do. Components C There are many other classes to handle the components and the interactions ween them, but if its on the screen, its a component. s enables us to say a number of thin All components have a screen position and a size All components have a foreground and background color Components are either enabled or disabled There is a standard interface for components to handle events AWT components can be conceptually broken down into three major categories: Interface components In a s Basic Java 74 . Containers ontainers encompass areas in which components can be placed. This allows together to form a more cohesive object to be anipulated. A Panel is an example of this type of component. indows are a very special case of the Component class. All other components are ow is an actual, separate dow with a completely new area to create an interface upon. Normally with applet logs and Frames are examples of this type bstract base class for all plication to draw onto components that are realized state information needed for the basic rendering e following The Component object on which to draw. A translation origin for rendering and clipping coordinates. utput device. rations, which render horizontal text render the ove the baseline coordinate. the right from the path it traverses. This has the e occupies one extra he right and bottom edges as compared to filling a figure that is l line along the same y coordinate as the baseline of a line entirely below the text, except for any descends. nts to the methods of this Graphics object in of this Graphics object prior to the vocation of the method. All rendering operations modify only pixels, which lie within ip of the graphics context and the extents of component used to create the Graphics object. All drawing or writing is done in t font. C groups of components to be grouped m Windows W added onto a container that already exists, whereas a Wind win programming, windows are not used. Dia of c omponent. Th e java.awt.Graphics classThe Graphics class is the agraphics contexts that allow an ap on various devices, as well as onto off-screen images. A Graphics object encapsulates operations that Java supports. This state information includes thperties: pro The current clip. The current color. The current font. The current logical pixel operation function (XOR or Paint). The current XOR alternation color . oordinates are infinitely thin and lie between the pixels of the o C Operations, which draw the outline of a figure, operate by traversing an infinitely thin path between pixels with a pixel-sized pen that hangs down and to the right of the nchor point on the path. Operations, which fill a figure operate by filling the interior a of that infinitely thin path. Opescending portion of character glyphs entirely ab a The graphics pen hangs down and tollowing implications: fo If you draw a figure that covers a given rectangle, that figur row of pixels on tbounded by that same rectangle. If you draw a horizontaof text, that line is drawn All coordinates, which appear as argume considered relative to the translation orig are in the area bounded by both the current cl the the current color, using the current paint mode, and in the curren Basic Java 75 . Example for using Graphics class blic class GraphicsExample extends Applet aphics g) raphics class",50,50); 10); .drawOval(90,60,10,10); 60,10,10,6,6); fillOval(90,90,10,10); red, blue, green components of a color are each represented by an integer in the range 0- . The value 0 indicates no contribution from this primary color. The value 255 lor component. on the three-component RGB model, the class B colors. mple for Using Color class mport java.applet.Applet; ublic void init() .drawString("Example for Color class",50,50); 0,10,10); d); import java.awt.Graphics; import java.applet.Applet; pu { public void paint(Gr { g.drawString("Example for G g.drawRect(60,60,10, g g.drawRoundRect(120, g.fillRect(60,90,10,10); g. g.fillRoundRect(120,90,10,10,6,6); } } The java.awt.Color class This class encapsulates colors using the RGB format. In RGB format, the and5 25 indicates the maximum intensity of this co Although the Color class is based provides a set of convenience methods for converting between RGB and HS Exa import java.awt.*; i public class ColorExample extends Applet { Color c_green; p { c_green = new Color(0,255,0); } public void paint(Graphics g) { g g.drawRect(60,6 g.setColor(Color.reg.drawOval(90,60,10,10); Basic Java 76 . g.drawRoundRect(120 ,60,10,10,6,6); .setColor(c_green); xample for Font Class ont font; aphics g) nds Object implements Serializable infinite recursion when your subclass is used. In ) getMaxAdvance() g g.fillRect(60,90,10,10); g.fillOval(90,90,10,10); g.fillRoundRect(120,90,10,10,6,6); } } The java.awt.Font Class A class that produces font objects. E import java.awt.*; import java.applet.Applet; public class FontExample extends Applet { F public void init() { font = new Font("Helvetica",Font.BOLD+Font.ITALIC,15); } ublic void paint(Gr p { g.setFont(font); g.drawString("Text Displayed in Helvetica font",25,50); } } The java.awt.FontMetrics class ublic abstract class FontMetrics exte p A font metrics object, which gives information about the rendering of a particular font on a particular screen. Note that the implementations of these methods are inefficient, they are usually overridden with more efficient toolkit-specific implementations. Note to subclassers: Since many of these methods form closed mutually recursive loops, you must take care that you implement at least one of the methods in each uch loop in order to prevent s particular, the following is the minimal suggested set of methods to override in order to ensure correctness and prevent infinite recursion (though other subsets are equally feasible): getAscent() getDescent() getLeading( Basic Java 77 . charWidth(char ch) charsWidth(char data[], int off, int len) When an application asks AWT to place a character at the position (x, y), the haracter is placed so that its reference point is put at that position. The reference he baseline of the character. In normal rinting, the baselines of characters should align. amount by which the character ascends above the baseline. idth is w, then the following character is placed with its ference point at the position (x + w, y). The advance width is often the same as the ing can also have an ascent, a descent, and an e array is the maximum ascent of any character in e array. The . A label displays a single rocessMouseEvent, or it can register itself as a listener for mouse events by calling ods are defined by Component, the abstract t to od c point specifies a horizontal line called t p In addition, every character in a font has an ascent, a descent, and an advance idth. The ascent is the w The descent is the amount by which the character descends below the baseline. The advance width indicates the position at which AWT should place the next character. If the current character is placed with its reference point at the position (x, y), and the character's advance w re width of character's bounding box, but need not be so. In particular, oblique and italic fonts often have characters whose top-right corner extends slightly beyond the advance width. An array of characters or a strdvance width. The ascent of th a the array. The descent is the maximum descent of any character in thdvance width is the sum of the advance widths of each of the characters in the a array. The java.awt.LabelClass A Label object is a component for placing text in a containere of readonly text. The text can be changed by the application, but a user cannot lin edit it directly. The java.awt.Button Class This class creates a labeled button. The application can cause some action to happen when the button is pushed. p addMouseListener. Both of these methuperclass of all components. s When a button is pressed and released, AWT sends an instance of ActionEvene button, by calling processEvent on the button. The button's processEvent meth th receives all events for the button; it passes an action event along by calling its own processActionEvent method. The latter method passes the action event on to any action listeners that have registered an interest in action events generated by this utton. b Basic Java 78 . If an application wants to perform some action based on a button being pressed and tener and register the new listener to receive e button's addActionListener method. The ction command as a messaging protocol. he java.awt.TextComponent Class ublic class TextComponent extends Component es a string of text. The TextComponent class s that determine whether or not this text is editable. If es another set of methods that supports a addition, the class defines methods that are used to maintain a current text selection, a substring of the component's text, rations. It is also referred to as the selected text. The jav public c implements ItemSelectable hec nent that can be in either an "on" (true) or "off" se) state. Clicking on a check box changes its state from "on" to "off," or from "off" on." le creates a set of check boxes: the "on" state, and the other two are in the "off" state. In boxes are set independently. boxes can be grouped together under the control of a heckboxGroup class. In a check box group, at most one at any given time. Clicking on a check box to turn it on he same group that is on into the "off" state. up Class extends Object implements Serializable used to group together a set of Checkbox buttons. ox button in a CheckboxGroup can be in the "on" state at any released, it should implement ActionLis events from this button, by calling th application can make use of the button's a T p The TextComponent class is the superclass of any component that allows the diting of some text. e text component embodi A defines a set of method the component is editable, it definxt insertion caret. te In selection from the text. The he target of editing ope is t a.awt.Checkbox Class lass Checkbox extends Component k box is a graphical compo Ac (falo " t he following code examp T add(new Checkbox("one", null, true)); add(new Checkbox("two")); dd(new Checkbox("three")); a he button labeled one is in T this example, the states of the three check ck Alternatively, several che single object, using the C button can be in the "on" stateox in t forces any other check b o The java.awt.CheckboxGr public class CheckboxGroup is The CheckboxGroup class xactly one check b E given time. Pushing any button sets its state to "on" and forces any other button that is in the "on" state into the "off" state. The following code example produces a new check box group, with three check boxes: Basic Java 79 . CheckboxGroup cbg = new CheckboxGroup(); he java.awt.Choice ayed s the title of the menu. mport java.awt.*; ublic class ChoiceExample extends Applet hoice ColorChooser; ColorChooser = new Choice(); ColorChooser.add("Green"); . The list can choose either one item or multiple items. dd("JavaSoft"); st.add("Mars"); dd(lst); an item that is already m the scrolling list selected at a time, since the second argument when creating the new s any other selected item to be add(new Checkbox("one", cbg, true)); add(new Checkbox("two", cbg, false)); add(new Checkbox("three", cbg, false)); T public class Choice extends Component implements ItemSelectable The Choice class presents a popup menu of choices. The current choice is displ a Example for adding Choice i import java.applet.Applet; p { C public void init|() { ColorChooser.add("Red"); ColorChooser.add("Blue"); add(ColorChooser); } } The java.awt.List Class public class List extends Component implements ItemSelectable List component presents the user with a scrolling list of text items The be set up so that the user can For example, the code . . . List lst = new List(4, false); lst.add("Mercury"); lst.add("Venus"); d("Earth"); lst.adt.a lsl lst.add("Jupiter"); st.add("Saturn"); l lst.add("Uranus"); lst.add("Neptune"); d("Pluto"); lst.adnt.a c lects it. Clicking on Clicking on an item that isn't selected sedeselects it. In the preceding example, only one item fro selected n be ca scrolling list is false. Selecting an item causeautomatically deselected. Basic Java 80 . Beginning with Java 1.1, the Abstract Window Toolkit sends the List object all keyboard, and focus events that occur over it. (The old AWT event model is ing maintained only for backward compatibility, and its use is discouraged.) elected, AWT sends an instance of Item Event to the t. When the user double-clicks on an item in a scrolling list, AWT sends an instance nerates an action an application wants to perform some action based on an item in this list being ner or ActionListener as ppropriate and register the new listener to receive events from this list. ction scrolling lists, it is considered a better user interface to use an xternal gesture (such as clicking on a button) to trigger the action. s component represents a blank rectangular area of the screen onto which e application can draw or from which the application can trap input events from the n application must subclass the Canvas class in order to get useful functionality overridden in order tom graphics on the canvas. ublic class CanvasTest extends java.applet.Applet doodle = new MyCanvas(getSize().width, etSize().height); } public MyCanvas(int width, int height) { g) { MOUSE, be When an item is selected or des lis of ActionEvent to the list following the item event. AWT also ge event when the user presses the return key while an item in the list is selected. If selected or activated, it should implement ItemListe a For multiple sele e The java.awt.Canvas Class A Canva th user. A such as creating a custom component. The paint method must be to perform cus Example for Canvas Class import java.awt.*; p { MyCanvas doodle; public void init() { g add(doodle); } } class MyCanvas extends Canvas { public MyCanvas() { this(100,100); resize(width,height); } public void paint(Graphics Basic Java 81 . g.fillRect(0, 0, getSize().width-1, getSize().height-1); } } The java.awt.Scrollbar Class public class Scrollbar extends Component implements Adjustable The Scrollbar class embodies a scroll bar, a familiar user interface object. A scroll bar provides a convenient means for allowing a user to select from a range of values. le, if a scroll bar resent range of the scroll bar. The or ould be created with code like the following: HORIZONTAL, 0, 64, 0, 255); ximum value for the scroll bar's own, or click roll ba gestures can oll bar receives an instance djustmentListener, an interface defined in the package java.awt.event. e top arrow of a vertical scroll bar, or makes the The following code provides a sample model of scrollbar. redSlider=new Scrollbar(Scrollbar.VERTICAL, 0, 1, 0, 255); add(redSlider); Alternatively, a scroll bar can represent a range of values. For examp is used for scrolling through text, the width of the "bubble" or "thumb" can rep the amount of text that is visible. Here is an example of a scroll bar that represents a range: The value range represented by the bubble is the visibleizontal scroll bar in this example c h ranger = new Scrollbar(Scrollbar.add(ranger); ote that the maximum value above, 255, is the ma N bubble. The actual width of the scroll bar's track is 255 + 64. When the scroll bar is set to its maximum value, the left side of the bubble is at 255, and the right side is at 55 + 64. 2 theture with the Normally, user changes the value of the scroll bar by making a gesd d mouse. For example, the user can drag the scroll bar's bubble up an the scr's unit increment or block increment areas. Keyboard in also be mapped to the scroll bar. By convention, the Page Up and Page Down keys are equivalent to clicking in the scroll bar's block increment and block decrement areas. hen the user changes the value of the scroll bar, the scr W of AdjustmentEvent. The scroll bar processes this event, passing it along to any registered listeners. Any object that wishes to be notified of changes to the scroll bar's value should plement A im Listeners can be added and removed dynamically by calling the methods addAdjustmentListener and removeAdjustmentListener. The AdjustmentEvent class defines five types of adjustment event, listed here: AdjustmentEvent.TRACK is sent out when the user drags the scroll bar's bubble. AdjustmentEvent.UNIT_INCREMENT is sent out when the user clicks in the left rrow of a horizontal scroll bar, or th a equivalent gesture from the keyboard. Basic Java 82 . AdjustmentEvent.UNIT_DECREMENT is sent out when the user clicks in the right age Up ey is equivalent, if the user is using a keyboard that defines a Page Up key. EMENT is sent out when the user clicks in the track, the right of the bubble on a horizontal scroll bar, or below the bubble on a vertical nt, if the user is using a ge Down key. patibility, but its use with er versions of JDK is discouraged. The fives types of adjustment event pond to the five event types that are associated with ars in previous JDK versions. The following list gives the adjustment event .0 event type it replaces. LL_ABSOLUTE eplaces Event.SCROLL_LINE_UP NT replaces Event.SCROLL_LINE_DOWN T replaces Event.SCROLL_PAGE_UP NT replaces WN he java.awt.ScrollPane Class horizontal and/or vertical scrolling for r the scrollbars can be set to: as needed : scrollbars created and shown only when needed by scrollpane .always : scrollbars created and always shown by the scrollpane ver created or shown by the scrollpane objects (one rties (minimum, maximum, blockIncrement, and nally by the scrollpane in accordance with the geometry child and these should not be set by programs using the n the scrollpane can still be ethod and the scrollpane will ve and clip the child's contents appropriately. This policy is useful if the program ustable controls. rmspecific properties set by the but can be reset using setSize(). arrow of a horizontal scroll bar, or the bottom arrow of a vertical scroll bar, or makes the equivalent gesture from the keyboard. AdjustmentEvent.BLOCK_INCREMENT is sent out when the user clicks in the track, to the left of the bubble on a horizontal scroll bar, or above the bubble on a vertical scroll bar. By convention, the P k AdjustmentEvent.BLOCK_DECR to scroll bar. By convention, the Page Down key is equivale keyboard that defines a Pa The JDK 1.0 event system is supported for backward com new introduced with JDK 1.1 corres scroll b type, and the corresponding JDK 1 AdjustmentEvent.TRACK replaces Event.SCRO AdjustmentEvent.UNIT_INCREMENT r AdjustmentEvent.UNIT_DECREME AdjustmentEvent.BLOCK_INCREMENDECREME AdjustmentEvent.BLOCK_nt.SCROLL_PAGE_DO Eve T public class ScrollPane extends Container A container class, which implements automatic child component. The display policy fo a single . 1 2 3.never : scrollbars ne The state of the horizontal and vertical scrollbars is represented by two for each dimension) which implement the Adjustable interface. The API provides methods to access those objects such that the attributes on the Adjustable object (such as unitIncrement, value, etc.) can be manipulated. Certain adjustable propeisibleAmount) are set inter v of the scrollpane and its scrollpane. If the scrollbar display policy is defined as "never", thegrammatically scrolled using the setScrollPosition() m proo m needs to create and manage its own adj The placement of the scrollbars is controlled by platfo user outside of the program. The initial size of this container is set to 100x100, Basic Java 83 . Insets are used to define any space used by scrollbars and any borders created by ets() can be used to get the current value for the insets. If the lue of the insets will change ot. ScrollPane pane; XCanvas xcan; setLayout(new BorderLayout()); ScrollPane(); add("Center", pane); pane.add(xcan); } } class public void paint(Graphics g) { g.drawLine(0, 0, 200, 200); } } bject implements Cloneable, Serializable n of the borders of a container. It specifies the leave at each of its edges. The space can be a border, a g[] args) (); w .add ; ); the scroll pane. getIns value of scrollbarsAlwaysVisible is false, then the vaamically depending on whether the scrollbars are currently visible or n dyn Example for ScrollPane Class import java.applet.Applet; import java.awt.*; pub lic class ScrollpaneTest extends Applet { public void init() { pane = new xcan = new XCanvas(); xcan.setSize(200, 200); XCanvas extends Canvas { g.drawLine(0, 200, 200, 0); The java.awt.Insets Class public class Insets extends On Insets object is a representatio A space that a container must lank space, or a title. b Example for Using Insets import java.awt.*; ass InsetTest cl { public static void main (Strin { = new MyFrame MyFrame win win.setLayout( new FlowLayout() ); in( new Button("One") ); win.add( new Button("Two") ) win.add( new Button("Three") Basic Java 84 . win.add( new Button("Four") ); wi pack(); t.println( win.insets() ); urn new Insets(100, 2, 2, 2); } sets insets() { return new Insets(25, 100, 2, 2); / public Insets insets() { return new Insets(25, 2, 100, 2); /public Insets insets() { return new Insets(25, 2, 2, 100); } from the outside world to the program that something clicks When the mouse button is clicked while positioned over a component. sent to the component informing it what coordinates in the component the mouse has d to. events A component that an action can be performed upon is used, an Action event by default and the owner of the component (usually the container in which the component is placed) is notified that something happened. to user actions. ublic class ActionEvent extends Applet { Button bt_submit; TextField data; it() submit = new B new Tex ubmit); n. win.show(); System.ou } } class MyFrame extends Frame { public Insets insets() { ret //public In } / } / } EVENT HANDLING An event is a communication has occurred. The following are a few basic event types: Mouse Mouse movement The mouse is moved over a component, many events are move Action is created One of the most important things to understand about the AWT is how events are handled. Without events, your application will not be able to respond Event Handling in JDK1.0 Example for using action import java.awt.*; a.applet.Applet; import jav p tf_ void in public { bt_ utton("SUBMIT"); tf_data = tField(15); add(bt_s Basic Java 85 . on Clicked"); line: ar to this. They accept a parameter of type Event bout the event. Second, they return a Boolean dled or False if it was not. it is the button. e if they are the Button Clicked) tton was clicked, we change the textfield to reflect that. ent was handled, return true or else return false. This is an important in mind: The event handler keeps searching for a method that will o use the event-handling methods that Sun has rized in Table below. Remember that everything component. For example, the mouseMove() method of a component he mouse is moved inside that component. add(tf_data); } public boolean action(Event evt, Object obj) { if(evt.target == bt_submit) { tf_data.setText("Butt } return true; } } Lets break the action() method down line by public boolean action(Event evt, Object what) { All event handlers have a form similat provides detailed information a th value-indicating True if the event was han(evt.target == bt_submit) { if Here the target of the event is being checked to see whether or not ecause evt.target and hiButton are both objects, we can check to se B same objects. tf_data.setText(ecause the bu B eturn true; r } lse e return false; Finally, if the evncept to keep co accept the Event. Accepting the Event is signaled by returning true. vent Handling in Detail E In almost all cases, you will want trovided for you. These are summa p is relative to the called when t is Java events. Event Type Method Action taken action(Event evt, Object what) mouseDown(Event evt, int x, int y) Mouse button pressed Mouse button released moved mouseMove(Event evt, int x, int y) useUp(Event evt, int x, int y) Mouse mo Mouse dragged mouseDrag(Event evt, int x, int y) Mouse enters component mouseEnter(Event evt, int x, int y) ouse exits component mouseExit(Event evt, int x, int y) MKey pressed keyDown(Event evt, int key) Basic Java 86 . Key released keyUp(Event evt, int key) When would yu actually wou want to use other methods than action()? The answer is that when ant to change the behavior of a component (as opposed to just using ponent as is was originally designed) action() isnt quite enough. It only vents that are essential to the utility of the component, such as a mouse public class Example3 extends Applet { ton; etLabel(Go Away!); ouseExit(Event evt, int x, int y) { hiButton.setLabel(Stay Away!); ject what) { ) { icked!); yo the comports e re click on a button. ets add new behavior to the previous example. L Adding new behavior to the sample applet. import java.awt.*; import java.applet.Applet; Button hiBut public void init() { hiButton = new Button(Click Me!!!); add(hiButton); } public boolean mouseEnter(Event evt, int x, int y) { hiButton.s return true; } public boolean m return true; } public boolean action(Event evt, Ob if (evt.target == hiButton hiButton.setLabel(Cl return true; } else Basic Java 87 . return false; } } e applet, the user is informed that perhaps licking on the button isnt such a good idea. This is a fundamentally different evious example. Before, we were using a button in a completely ere, we wished to change that functionality. This is important to other built-in event handlers will do the of the process ailable. es and disadvantages. On the positive side, you have u have complete control. This means that ault handleEvent() or your application can e buggy and confusing very quickly. ample, lets say you overrode handleEvent() in your class for whatever reason, u had used mouseEnter() earlier in the development of the program, as shown the following: n handleEvent(Event evt) ); ould expect the mouseEnter() you had written to keep working. Unfortunately ats not the case. Because the default handleEvent() has been overridden, <applet code=Example3.class width=250 height=100></applet> Now, whenever the mouse moves over th c behavior than the prstandard manner. H rememberotherwise, you might end up sub-classing components where you dont need to, making your program slower and more difficult to understand and maintain. handleEvent() or action() Generally, a combination of action() and theb nicely. For those times when you want to take complete control jo yourself, handleEvent() is avandleEvent() has advantag h complete control. On the negative side, yoou must be very careful overriding the def y becom For ext yo bu in Example using handleEvent class MyLabel extends Label { MyLabel(String label) { super(label); } public boolean mouseEnter(Event evt, int x, int y) { setText(Not Again); } public boolea { if (Event.id == KEY_PRESS) { setText(Keypress return true; } return false; else } }You w th Basic Java 88 . mouseEnter() never gets called. Luckily there is an easy solution to this problem for Event(evt); it of keeping all of the functionality of the old handleEvent() while tting you manipulate things first. Note, however, that you can also override handleEvent() to remove functionality, in which case you wouldnt want to call the p nt(). Its all up to you. Delivering Events O ability of the program to m events comes in quite h it may seem strange to fake an event, in reality it makes the design of a program much simpler. F designing a calcu t h ain container that deciphe ts from the button, as fp ction(Event evt, obj What) { if (evt.target == oneKey) the current number owever, it might make sense to add the ability to handle keyboard input, because a ty from a calculator. Although you just copy the code from the action() handler to a new keyDown() handler, you uld then have two copies of the same code in the same program to maintain and e target is the Object that you would like the event delivered to, id is an integer presenting the event type, and obj is an arbitrary argument to append to the event s the following: nt evt, int key) } ... many cases. Add the following to your handleEvent() in place of return false;: return super.handle This has the benef le arents handleEve ccasionally the andy. Although or example, if you were lator you migh andler in the m ollows: ublic boolean a t decide to write an evenrs the action even anufacture its own ... // Append 1 to } ... } H user of the calculator would expect that functionali could wo k eep track of. The solution is to deliver your own event. A simple event can be created with the following form: E vent aEvent = new Event (target, id, obj); Wher re if there is extra information that you would like the handler to receive. Then, to deliver the event, you just need to call deliverEvent() as follows: deliverEvent(aEvent); So, in the previous example, you could add another handler that doe public boolean keyDown(Eve { if (key == 49) { // If the 1 key was Event(oneKey,Event.MOUSE_DOWN, null)); return true; } pressed deliverEvent(new Basic Java 89 . Now you can manage the rest of the program without worrying about handling eyboard input differentlythe same event is generated whether the button is clicked Event type Event IDs k or the correspond AWT event types. The Action event ACTION_EVENT Mouse button pressed MOUSE_DOWN Mouse dragged MOUSE_DRAG Mouse entered MOUSE_ENTER Mouse exited MOUSE_EXIT ouse button released MOUSE_UP Mouse moved MOUSE_MOVE M Key pressed y released KEY_RELEASE KEY_PRESS Ke Dealing with Focus hen a user clicks a user interface component, t W hat item becomes in a sense lected. This is as known as the input focus. For instance, when a text field is ked on, the user then can type in the field because it has the input focus. ocus() method of that en a component loses the input focus, the lostFocus() method of that component is not uncommon for a program to desire to keep the focus. For example, if a textan to accept input, you probably ceive the focus. Using a text-entry field to display utput enables you to take advantage of the fields text-handling abilities. In that shown in the following: r that the text field has been used in and would seic cl When a component receives the input focus, the gotFponent is called, as follows: com public boolean gotFocus(Event evt, Object what) { .. . } h W is called, as follows: public boolean lostFocus(Event evt, Object what) { ... } It entry field were being used to display output rather th would not want it to be able to re o case, the requestFocus() method exists, as public void requestFocus() { ... } is could be placed in the containe Th bar that field from receiving the focus. Basic Java 90 . Event Handling in JDK1.1 Event Handling in JDK 1.1 is through listeners. Listeners are provided as interfaces able the class to provide a later versions, the Event class is maintained only Applet Button b = new Button("I have a listener!"); ed(ActionEvent e) "Listener here: the button was vent e) java.applet.Applet; class AdjustmentEventTest extends Applet to enable multiple listeners to a class and also to en specific definition. . In Java 1.1 and for backwards compatibilty Example for using ActionListener import java.awt.Button; impor t java.applet.Applet; import java.awt.event.*; pub lic class ButtonDelegateTest extends{ public void init() { add(b); b.addActionListener(listener); } p ublic void actionPerform{ System.out.println(clicked."); } } The java.awt.event.ActionListener interface pu blic interface ActionListener extends EventListener The listener interface for receiving action events. Methods public abstract void actionPerformed(ActionE Invoked when an action occurs. Example for using AdjustmentListener mport i import java.awt.*; import java.awt.event.*; ublic p implements AdjustmentListener { c void init() publi{ Basic Java 91 . setLayout(new BorderLayout()); // A plain scrollbar that delegates to the applet. Scrollbar sbar1 = new Scrollbar(); ; dles its own adjustment events add(sbar2, "East"); ent e) ); SelfScrollbar extends Scrollbar { } ntEvent e) { e.getValue()); } port java.awt.event.*; t extends Applet ner { let. (); wn focus events (); nt e) { d gained focus"); ent e) { lost focus"); sbar1.addAdjustmentListener(this) add(sbar1, "West"); // A subclass that han SelfScrollbar sbar2 = new SelfScrollbar(); } public void adjustmentValueChanged(AdjustmentEv { System.out.println("Scrollbar #1: " + e.getValue() } } class { public SelfScrollbar() enableEvents(AWTEvent.ADJUSTMENT_EVENT_MASK); me public void processAdjustmentEvent (Adjust System.out.println("Scrollbar #2: " + } Example for using FocusEvent import java.applet.Applet;port java.awt.*; im im cusEventTes public class Fo implements FocusListe public void init() { setLayout(new BorderLayout()); s to the app // A textfield that delegate TextField tf = new TextField tf.addFocusListener(this); add(tf, "North"); its o // A subclass that handles SelfTextArea sta = new SelfTextArea add(sta, "Center"); } public void focusGained(FocusEveprintln("Text Fiel System.out. } public void focusLost(FocusEv System.out.println("Text Field } Basic Java 92 . } class SelfTextArea extends TextArea { public SelfTextArea() { enableEvents(AWTEvent.FOCUS_EVENT_MASK); } cusEvent e) { cusEvent.FOCUS_GAINED) n("Text Area gained focus"); System.out.println("Text Area lost focus"); } ener interface for receiving keyboard focus events on a component. blic abstract void focusGained(FocusEvent e) usEvent e) mport java.awt.*; ntTest extends Applet public void init() { applet. ; "); cotch"); list.addItem("Spumoni"); nts sc.addItem("Frozen Yogurt"); sc.addItem("Sorbet"); public void processFocusEvent(Fo if (e.getId() == Fo System.out.printl else } The java.awt.event.FocusListener interface The list pu public abstract void focusLost(Foc Example for using ItemListener import java.applet.Applet; i import java.awt.event.*; public class ItemEve implements ItemListener { List list; SelfChoice sc; // A list that dele gates to the list = new List(5, false); list.addItem("Chocolate"); list.addItem("Vanilla"); list.addItem("Strawberry") list.addItem("Mocha list.addItem("Peppermint Swirl"); list.addItem("Blackberry Ripple"); list.addItem("Butters list.addItemListener(this); add(list); // A choice subclass that handles its own item eve sc = new SelfChoice(); sc.addItem("Ice Cream"); Basic Java 93 . add(sc); } public void itemStateChanged(ItemEvent e) { System.out.println("New item from list:" + Item()); public void processItemEvent(ItemEvent e) { m from choice: " + getSelectedItem()); he java.awt.event.ItemListener interface ce for receiving item events. ent e) xample for using KeyListener mport java.awt.*; tends Applet setLayout(new BorderLayout()); applet. es its own item events ; list.getSelected } } class SelfChoice extends Choice { public SelfChoice() { enableEvents(AWTEvent.ITEM_EVENT_MASK); } System.out.println("New ite } } T public interface ItemListener extends EventListener The listener interfa public abstract void itemStateChanged(ItemEv E import java.applet.Applet; i import java.awt.event.*; public class KeyEventTest exer { implements KeyListen public void init() { // A text field that delegates to the TextField tf = new TextField(); tf.addKeyListener(this); add(tf, "North"); // A text area subclass that handl SelfKeyTextArea sta = new SelfKeyTextArea() add(sta, "Center"); } public void keyTyped(KeyEvent e) { System.out.println("Key typed in text field: " + e.getKeyChar()); } Basic Java 94 . public void keyPressed(KeyEvent e) { } public void keyReleased(KeyEvent e) { } } class SelfKeyTextArea extends TextArea { public SelfKeyTextArea() { enableEvents(AWTEvent.KEY_EVENT_MASK); } public void processKeyEvent(KeyEvent e) { if (e.getId() == KeyEvent.KEY_TYPED) System.out.println("Key typed in text area: " + e.getKeyChar()); e listener interface for receiving keyboard events. ) t occurs when a key press is followed bstract void keyPressed(KeyEvent e) Applet nListener { e applet. ; // A canvas subclass that handles its own item events seCanvas(); } } } The java.awt.event.KeyListener interface c interface KeyListener extends EventListener publi Th public abstract void keyTyped(KeyEvent e Invoked when a key has been typed. This even by a key release. public a In voked when a key has been pressed. public abstract void keyReleased(KeyEvent e) Invoked when a key has been released. Example for using Mouse & MouseMotion Listeners import java.applet.Applet; import java.awt.*; import java.awt.event.*; p ublic class MouseEventTest extendsimplements MouseListener, MouseMotio public void init() { setLayout(new GridLayout(2, 1)); // A canvas that delegates to th Canvas can1 = new Canvas(); can1.setBackground(Color.yellow); can1.addMouseListener(this); can1.addMouseMotionListener(this) add(can1); SelfMouseCanvas can2 = new SelfMou add(can2); Basic Java 95 . public void mousePressed(MouseEvent e) { se pressed at " + e.getX() + "," + e.getY()); public void mouseReleased(MouseEvent e) { leased at " + e.getX() + "," + e.getY()); public void mouseEntered(MouseEvent e) { entered"); public void mouseClicked(MouseEvent e) { } // Ditto ) { } // Ditto nds Canvas { { een); t.MOUSE_EVENT_MASK | t.MOUSE_MOTION_EVENT_MASK); Event(MouseEvent e) { useEvent.MOUSE_PRESSED) e pressed at " + )); D) use released at " + () + "," + e.getY()); ener interface for receiving mouse events on a component. ) omponent. when a mouse button has been pressed on a component. System.out.println("UPPER: mou } System.out.println("UPPER: mouse re } System.out.println("UPPER: mouse } public void mouseExited(MouseEvent e) { } // Satisfy compiler public void mouseMoved(MouseEvent e public void mouseDragged(MouseEvent e) { } // Ditto } lass SelfMouseCanvas exte c public SelfMouseCanvas() (Color.gr setBackground enableEvents(AWTEven AWTEven } public void processMouse if (e.getId() == Mo System.out.println("LOWER: mous e.getX() + "," + e.getY(t.MOUSE_RELEASE else if (e.getId() == MouseEvenR: mo System.out.println("LOWE e.getX else if (e.getId() == MouseEvent.MOUSE_ENTERED) entered"); System.out.println("LOWER: mouse } } The java.awt.event.MouseListener interface nterface MouseListener extends EventListener public ilist The public abstract void mouseClicked(MouseEvent en clicked on a c Invoked when the mouse has bee bstract void mousePressed(MouseEvent e) public avoked In public abstract void mouseReleased(MouseEvent e) Invoked when a mouse button has been released on a component. Basic Java 96 . public abstract void mouseEntered(MouseEvent e) voked when the mouse enters a component. ublic abstract void mouseExited(MouseEvent e) voked when the mouse exits a component. nterface istener on a component. ponent and then dragged. Mouse rag events will continue to be delivered to the component where the first originated ntil the mouse button is released (regardless of whether the mouse position is within n a component (with no buttons o down). nt.*; et ut(2, 1)); // A text area that delegates to the applet. ); add(ta1); s its own item events lfTextTA(); add(ta2); public void textValueChanged(TextEvent e) { System.out.println("UPPER get text event: " + e); TextArea { stem.out.println("LOWER get text event: " + e); } In p In The java.awt.event.MouseMotionListener i public interface MouseMotionListener extends EventL The listener interface for receiving mouse motion events public abstract void mouseDragged(MouseEvent e) Invoked when a mouse button is pressed on a com d u the bounds of the component). public abstract void mouseMoved(MouseEvent e) Invoked when the mouse button has been moved o n Example for using TextListener import java.applet.Applet; import java.awt.*; import java.awt.eve public class TextEventTest extends Appl implements TextListener { public void init() { setLayout(new GridLayo TextArea ta1 = new TextArea(); ta1.addTextListener(this // A text area subclass that handle SelfTextTA ta2 = new Se } } } lass SelfTextTA extends c public SelfTextTA() { enableEvents(AWTEvent.TEXT_EVENT_MASK); } public void processTextEvent(TextEvent e) { Sy } Basic Java 97 . The java.awt.event.TextListener interface ends EventListener ner interface for receiving adjustment events. er interface extends EventListener r interface for receiving window events. indowEvent e) (WindowEvent e) indowEvent e) Activated(WindowEvent e) implements LayoutManager, Serializable in a left to right flow, much like lines of text in a to arrange buttons in a panel. It will ore buttons fit on the same line. Each line is public interface TextListener ext The liste public abstract void textValueChang ed(TextEvent e) Invoked when the value of the text has changed. The java.awt.event.WindowListen public interface WindowListener The listene p ublic abstract void windowOpened(WindowEvent e) Invoked when a window has been opened. public abstract void windowClosing(W In voked when a window is in the process of being closed. The close operation can be overridden at this point. public abstract void windowClosed(WindowEvent e) Invoked when a window has been closed. p ublic abstract void windowIconifiedInvoked when a window is iconified. public abstract void windowDeiconified(W In voked when a window is deiconified. public abstract void windowInvoked when a window is a ctivated. public abstract void windowDeactivated(WindowEvent e) In voked when a window is deactivated. AWT LAYOUTS The java.awt.FlowLayout class p ublic class FlowLayout extends ObjectA flow layout arranges components p aragraph. Flow layouts are typically used arrange buttons left to righ t until no mcentered. Basic Java 98 . Example for FlowLayout import java.awt.*; import java.applet.Applet; public class myButtons extends Applet { Button button1, button2, button3; FlowLayout flow; public void init() flow = new FlowLayout(FlowLayout.CENTER); button1 = new Button("Ok"); add(button2); ); he java.awt.BorderLayout class ect implements LayoutManager2, Serializable border layout lays out a container, arranging and resizing its components to fit in p.setLayout(new BorderLayout()); s a convenience, BorderLayout interprets the absence of a string specification the Layout()); p.add(new TextArea(), "Center"); sizes and the constraints of ents may be stretched horizontally; ically; the Center component ce left over. sing the BorderLayout layout import java.awt.*; t; public class buttonDir extends Applet { { setLayout(flow); button2 = new Button("Open"); button3 = new Button("Close"); add(button1); add(button3 } } A flow layout lets each component assume its natural (preferred) size. T public class BorderLayout extends Obj A five regions: North, South, East, West, and Center. When adding a component to a container with a border layout, use one of these five names, for example: Panel p = new Panel(); p.add(new Button("Okay"), "South"); A same as "Center": Panel p2 = new Panel(); p2.setLayout(new Border p2.add(new TextArea()); // Same as ording to their preferred The components are laid out acc the container's size. The North and South compon the East and West components may be stretched vert may stretch both horizontally and vertically to fill any spa u Here is an example of five buttons in an applet laid out manager: le for using BorderLayout Examp import java.applet.Apple public void init() { Basic Java 99 . setLayout(new BorderLayout()); ew Button("South")); add("East", new Button("East")); ter", new Button("Center")); } he java.awt.BorderLayout class ublic class BorderLayout extends Object implements LayoutManager2, Serializable arranging and resizing its components to fit in en adding a component to a s the absence of a string specification the ayout(new BorderLayout()); .add(new TextArea()); // Same as p.add(new TextArea(), "Center"); he components are laid out according to their preferred sizes and the constraints of uth components may be stretched horizontally; e East and West components may be stretched vertically; the Center component c class buttonDir extends Applet { public void init() { add("East", new Button("East")); } he java.awt.GridLayout class add("North", new Button("North")); add("South", n add("West", new Button("West")); add("Cen } T p A border layout lays out a container, five regions: North, South, East, West, and Center. Whtainer with a border layout, use one of these five names, for example: con p = new Panel(); Panel p.setLayout(new BorderLayout()); p.add(new Button("Okay"), "South"); As a convenience, BorderLayout interpret same as "Center": Panel p2 = new Panel(); p2.setL p2 T the container's size. The North and So th may stretch both horizontally and vertically to fill any space left over. Here is an example of five buttons in an applet laid out using the BorderLayout layout manager: Example for using BorderLayout import java.awt.*; import java.applet.Applet; publi setLayout(new BorderLayout()); add("North", new Button("North")); add("South", new Button("South")); add("West", new Button("West")); add("Center", new Button("Center")); } T public class GridLayout extends Object implements LayoutManager, Basic Java 100 . Serializable The GridLayout class is a layout manager that lays out a container's components in a rectangular grid. The container is divided into equalsized rectangles, and one component is placed in ach rectangle. import java.awt.*; port java.applet.Applet; public class ButtonGrid extends Applet add(new Button("4")); d(new Button("5")); add(new Button("6")); class class GridBagLayout extends Object implements LayoutManager2, erializable ach GridBagLayout object maintains a dynamic rectangular grid of cells, with each nt occupying one or more cells, called its display area. ponents' containers. g layout effectively, you must customize one or more of the ridBagConstraints objects that are associated with its components. You customize e For example, the following is an applet that lays out six buttons into three rows and two columns: Example for GridLayout im { public void init() { setLayout(new GridLayout(3,2)); add(new Button("1")); add(new Button("2")); add(new Button("3")); ad } } The java.awt.GridbagLayout public S The GridBagLayout class is a flexible layout manager that aligns components vertically and horizontally, without requiring that the components be of the same size. E compone com To use a grid ba G a GridBagConstraints object by setting one or more of its instance variables: gridx, gridy Specifies the cell at the upper left of the component's display area, where thepperleftmost cell has address gridx = 0, gridy = 0. Use u Basic Java 101 . GridBagConstraints.RELATIVE (the default value) to specify that the component be st placed just to the right of (for gridx) or just below (for gridy) the component that as added to the container just before this component was added. ridwidth, gridheight r gridheight) in the NDER to specify that the component be the last one in its (for gridwidth) or column (for gridheight). Use GridBagConstraints.RELATIVE to gridwidth) or column component's requested nent. Possible values are the default), GridBagConstraints.HORIZONTAL (make to fill its display area horizontally, but don't change its traints.VERTICAL (make the component tall enough to fill its hange its width), and GridBagConstraints.BOTH ea entirely). internal padding within the layout, how much to add to the width of the component will be at least its (since the padding applies to both sides of the f the component will be at least the minimum adding, the minimum amount of space between isplay area. r than its display area to determine where (within e the component. Valid values are R (the default), GridBagConstraints.NORTH, GridBagConstraints.EAST, GridBagConstraints.SOUTH, rmine how to distribute space, which is important for specifying resizing avior. Unless you specify a weight for at least one component in a row (weightx) column (weighty), all the components clump together in the center of their ntainer. This is because when the weight is zero (the default), the GridBagLayout object puts any extra space between its grid of cells and the edges of the container. ju w g Specifies the number of cells in a row (for gridwidth) or column (foponent's display area. The default value is 1. Use com GridBagConstraints.REMAI row specify that the component be the next to last one in its row (for (for gridheight). fill Used when the component's display area is larger than the size to determine whether (and how) to resize the compo GridBagConstraints.NONE ( the component wide enough height), GridBagCons display area vertically , but don't c(make the component fill its display ar ipadx, ipady Specifies the compone nt's minimum size of the component. Thexels minimum width plus (ipadx * 2) pi component). Similarly, the height o pixels. height plus (ipady * 2) insets Specifies the component's external p the edges of its d the component and anchor Used when the component is smalleo plac the display area) t.CENTE GridBagConstraints GridBagConstraints.NORTHEAST, straints.SOUTHEAST, GridBagCon GridBagConstraints.SOUTHWEST, GridBagConstraints.WEST, and GridBagConstraints.NORTHWEST. weightx, weighty sed to dete U behd anco Basic Java 102 . Example for GridbagLayout import java.awt.*; public class Gridbag extends java.applet.Applet ayout gb = new GridBagLayout(); w GridBagConstraints(); out(gb); / gbc.fill= GridBagConstraints.HORIZONTAL; nchor= GridBagConstraints.NORTHWEST; gb.setConstraints(b, gbc); add(b); b = new Button("Third"); gbc.gridx = 3; gbc.gridwidth = GridBagConstraints.REMAINDER; gb.setConstraints(b, gbc); add(b); b = new Button("Fourth"); gbc.gridy++; gbc.gridx = 0; gb.setConstraints(b, gbc); add(b); b = new Button("Fifth"); gbc.gridwidth = 1; gbc.gridy++; gb.setConstraints(b, gbc); add(b); b = new Button("Sixth"); gbc.gridwidth = GridBagConstraints.REMAINDER; gbc.gridx = 2; gb.setConstraints(b, gbc); add(b); } } { public void init() { GridBagL GridBagConstraints gbc = ne Button b; setLay / gbc.a gbc.gridwidth = 1; gbc.gridheight = 1; gbc.gridx = 0; gbc.gridy = 0; b = new Button("First"); gb.setConstraints(b, gbc); add(b); b = new Button("Second"); gbc.gridx = 1; gbc.gridwidth = 2; Basic Java 103 . To address the shortcomings of the AWT, the Java Foundation Classes (JFC) were eveloped. JFC 1.2 is an extension of the AWT, not a replacement of it. The JFC T container class. So the methods in the omponent & Container classes are still valid. JFC 1.2 consists of five major packages: Swing d-feel(PL&F) Drag and Drop 2D Sw w for efficient graphical user interface development. Swing Components are lightweight components. The major difference between lightweight a lightweight component can have transparent ixels while a heavywe ght component is always opaque. By taking advantage of ansparent pixels, a lightweight component can appear to be non-rectangular, while ust always be rectangular. A mouse event occuring in a hrough to its parent component, while a mouse event in snot propagate through its parent component. Swing ompliant. d visual components extend the AW C Pluggable Look-an Accessibility ing Swing components allo and heavyweight components is thati p tr a heavyweight component mghtweight component falls t li a heavyweight component doen C Components are Java Bea javax.swing.plaf etc The First Swing Program: import javax.swing.*; import java.awt.*; public class HelloSwing extends JFrame { JLabel text; public HelloSwing(S text = new getContent setVisible(true); } public static void main(String arg[]) HelloSwing swing = new Hewing"); S Basic Java 105 . creates a border with title "Border Example" LineBorder Soft Com ; der(Color.red,5)); n imgIcon = new ImageIcon("dot.gif"); = new MatteBorder(imgIcon); atte); -- f); ere,imgIcon); Other Borders MatteBorder EtchedBorder EmptyBorder BevelBorder BevelBorder poundBorder E xample of using Other Borders ---JPanel panel2, panel3 panel2 = new JPanel(); panel3 = new JPanel(); nel2.setBorder(BorderFactory.createLineBor paIco MatteBorder matte panel3.setBorder(m -Creating an ImageButton --Icon imgIcon = new ImageIcon(dot.giJButton imgBtn = new JButton(Click h --l displays a ImageButton with the image and the labe Basic Java 106 . Example Using JProgressBar mport java.awt.*; mport javax.swing.border.*; mport java.awt.event.*; ublic class UsingProgressBar extends JFrame implements ActionListener,Runnable { Button start; ProgressBar progress; nt count = 0; hread t = null; ublic UsingProgressBar(String title) for adding components t = new Thread(this); ionEvent event) { +count); tructor new DefaultMutableTreeNode("MDC ew "COMXpert"); w t"); first = new efaultMutableTreeNode("COM"); DefaultMutableTreeNode a_second = new DefaultMutableTreeNode("JAVA"); DefaultMutableTreeNode b_second = new DefaultMutableTreeNode("CORBA"); first.add(a_first); first.add(b_first); second.add(a_second); second.add(b_second); main.add(first); main.add(second); TreeModel model = new DefaultTreeModel(main); JTree tree = new JTree(model); import javax.swing.*; i i i p J J i T p { //code } Act public void actionPerformed( t.start(); } ublic void run() { p while(true) progress.setValue(+ } Example using JTree //code for class & cons DefaultMutableTreeNode main =Futura"); first = n DefaultMutableTreeNodeDefaultMutableTreeNode( D efaultMutableTreeNode second = neDefaultMutableTreeNode("CORBA XperDefaultMutableTreeNode a_first = new DefaultMutableTreeNode("ASP"); DefaultMutableTreeNode b_ D Basic Java 107 . //rest of code This displays a tree structure as follows: Using /code for class & constructor w Vector(); Vector colu ata.addElement(row); ow = new Vector(); t("CORBAXpert"); ow.addElement("JAVA & CORBA"); a.addElement(row); row.addElem t(row); "Course Contents"); JTable(data,column); /code to add the table JTable / Vector row = ne Vector data = new Vector(); mn = new Vector(); row.addElement("COMXpert"); row.addElement("ASP & COM"); d r row.addElemen r dat row = new Vector(); row.addElement("WEBXpert"); ent("COM & CORBA"); data.addElemen column.addElement("Course Name"); column.addElement(Table table = new J / Basic Java 108 . This displays a table as follows: he statement, he above statement listens to the Escape key press on the textfield tf_data and of the ActionListener ,16), "crosshair cursor"); Using Toolitp T setTooltipText(String text) is used to set the tooltip for a JComponent Example JButton btn = new JButton(Submit); btn.setTooltipText(Click here ); KeyStroke Handling : //code to be added tf_data.registerKeyboardAction(this,KeyStroke.getKeyStroke (KeyEvent.VK_ESCAPE,0),JComponent.WHEN_FOCUSED); T performs the code stated in the actionPerformed block C reating Custom Cursor The code block, --- Image img = getToolkit().getImage("duke.gif"); Cursor cr = getToolkit().createCustomCursor (img, new Point(16 b tn.setCursor(cr); --- creates a cursor with image of a duke. This cursor is made visible when the mouse is moved over the button. Basic Java 109 . lots of applets running at once on the same page. Depending on how many you have, you may eventually exhaust the system so that all of them will run slower, but all of them will run independently. Even if you dont have lots of applets, using threads in your applets is good Java programming practice. The general rule of thumb for well-behaved applets W Basic Java 110 . (such as an animation loop, or a bit of code that takes a long time to execute), put it You con l of the Java system can latt reem sus Non-preemptive , any scheduler has two fundamentally different ways of looking at its job: ote: non-preemptive scheduling, the scheduler runs the current thread forever, quiring that thread explicitly to tell it when it is safe to start a different thread. on-preemptive scheduling is very courtly, always asking for permission to schedule, time-critical, real-time applications where being terrupted at the wrong moment, or for too long, could mean crashing an airplane. r-priority threads, ven before their time-slice has expired. If you're going to depend on the priority of our threads, make sure that you test the application on both a Windows and e. can be assigned priorities, and when a choice is made between several reads that all want to run, the highest-priority thread wins. However, among threads at are all the same priority, the behavior is not well-defined. In fact, the different latforms on which Java currently runs have different behaviorssome behaving ore like a preemptive scheduler and some more like a non-preemptive scheduler. in a thread. T hread Scheduling The part of the system that decides the real-time ordering of threads is called the sch eduler. might wonder exactly what order your threads will be run in, and how you can trothat order. Unfortunately, the current implementations not precisely answer the former, though with a lot of work, you can always do the er. ptive Ver P Normally nonpreemptive scheduling and preemptive time-slicing. N With re With preemptive time-slicing, the scheduler runs the current thread until it has used up a certain tiny fraction of a second, and then preempts it, suspend()s it, and resume()s another thread for the next tiny fraction of a second. N and is quite valuable in extremely in Most modern schedulers use preemptive time slicing, because except for a few time-critical cases, it has turned out to make writing multithreaded programs much easier. For one thing, it does not force each thread to decide exactly when it should yield control to another thread. Instead, every thread can just run blindly on, knowing that the scheduler will be fair about giving all the other threads their chance to run. It turns out that this approach is still not the ideal way to schedule threads. Youve given a little too much control to the scheduler. The final touch many modern schedulers add is to enable you to assign each thread a priority. This creates a total ordering of all threads, making some threads more important than others. Being higher priority often means that a thread gets run more often (or gets more total running time), but it always means that it can interrupt other, lowe e y Macintosh or UNIX machin The current Java release does not precisely specify the behavior of its scheduler. Threads th th p m Basic Java 111 . Writing Applets with Threads ow do you create an applet that uses threads? There are several things you need do. e four modifications you need to make to create an applet that uses threads: nstance variable to hold this applets thread. nothing but spawn a thread and start it Create a run() method that contains the actual code that starts your applet ep 1: change is to the first line of your class definition. s Runnable { port for the Runnable interface in your applet. re, the Runnable interface includes the behavior your applet needs to run a nable interface should be implemented by any class whose instances are ion, Runnable provides the means for while not subclassing Thread. A class that implements Runnable d by instantiating a Thread instance and passing most cases, the Runnable interface should be used if you are ride the run() method and no other Thread methods. This is sses should not be subclassed unless the programmer intends cing the fundamental behavior of the class. ublic abstract void run() nnable is used to create a thread, starting d to be called in that separately executing H to There ar Change the signature of your applet class to include the word implements Runnable. Include an i Modify your start() method to do running. running. t S The first You need to change it to the following: pletClass extends java.applet.Applet implement public class MyAp... } hat does this do? It includes sup W He thread; in particular, it gives you a default definition for the run() method. Before proceeding further, lets get on to the details of Runnable interface The java.lang.Runnable interface The Run t yet been stopped. In addit started and has no class to be active a can run without subclassing Threa as the target. In itself inonly pl anning to over important because claifying or enhan on mod p When an object implementing interface Ruhe thread causes the object's run metho t thread. Step 2: Basic Java 112 . The second step is to add an instance variable to hold this applets thread. Call it anything you like; its a variable of the type Thread (Thread is a class in java.lang, so hread runner: r.start(); do nothing but spawn a thread, where does the body of your plet go? It goes into a new method, run(), which looks like this: ublic void run() { ble r garbage collection so that the applet can be removed from memory after a certain you dont have to import it): T S tep 3: Third, add a start() method or modify the existing one so that it does nothing but create a new thread and start it running. Heres a typical example of a start() method: public void start() { if (runner == null); { runner = new Thread(this); runne }} Step 4 : If you modify start() to ap p //ll also run inside that thread. The run method is the real heart of your applet. Step 5: Finally, now that youve sets the threads variable (runner) to null. Setting the variable to null makes the Thread object it previously contained availa fo Basic Java 113 . amount of time. If the reader comes back to this page and this applet, the start thod creates a new thread and starts up the me applet once again. each of you is sharing some anaging that data, you could it. Now visualize a piece of code out it for a while, and then adds ut what to do += 1; t of the system at nce. The disaster occurs when two threads have both executed the if test before alue is clobbered by them ments has been lost. This d to you, but imagine instead that the crucial value affects the his disaster is inescapable if any significant part of the system has not been written r to a mainstream threaded hread safety. ratch with this is mind, and every Java class in its ve to worry only about your own ou can assume that the n threads of execution running concurrently. Every thread with higher priority are executed in preference to threads with er priority. Each thread may or may not also be marked as a daemon. When code ning in some thread creates a new Thread object, the new thread has its priority on thread if and nly if the creating thread is a daemon. , there is usually a single non-daemon thread hich typically calls the method named main of some designated class). The Java Now you have a well-behaved applet that runs in its own thread. T he Problem with Parallelism If threading is so wonderful, why doesnt every system have it? Many modern operating systems have the basic primitives needed to create and run threads, but ey are missing a key ingredient. The rest of their environment is not thread-safe. th Imagine that you are in a thread, one of many, and system. If you were m important data managed by the take steps to protect it but the system is managing in the system that reads some crucial value, thinks ab 1 to the value: if (crucialValue > 0) { // think abo . . . alue crucialV } Remember that any number of threads may be calling upon this par o either has incremented the crucialValue. In that case, the vrucialValue + 1, and one of the incre both with the same cay not seem so ba m state of the screen as it is being displayed. Now, unfortunate ordering of the threads can cause the screen to be updated incorrectly. In the same way, mouse or keyboard events can be lost, databases can be inaccurately updated, and so forth. T with threads in mind. Therein lies the barrientthe large effort required to rewrite existing libraries for t environme Luckily, Java was written from sc library is thread-safe. Thus, you now ha and thread-ordering problems, because y synchronization Java system will do the right thing. The java.lang.Thread Class cution in a program. The Java Virtual Machine allows a A thread is a thread of exeave multiple application to hrity. Threads has a prio lown ru initially set equal to the priority of the creating thread, and is a daem o When a Java Virtual Machine starts up (w Virtual Machine continues to execute threads until either of the following occurs: Basic Java 114 . The exit method of class Runtime has been called and the security manager has e exit operation to take place. All threads that are not daemon threads have died, either by returning from the ng the stop method. arted. For example, thread that computes primes larger than a stated value could be written as follows: ends Thread { ime = minPrime; { mes larger than minPrime 3 ents the Runnable class then implements the run method. An instance of the class can gument when creating Thread, and started. The void run() { // compute primes larger than minPrime . . . } s enerated for it. permitted th call to the run method or by performi There are two ways to create a new thread of execution. One is to declare a class to be a subclass of Thread. This subclass should override the run method of class Thread. An instance of the subclass can then be allocated and st a class PrimeThread ext long minPrime; PrimeThread(long minPrime) { this.minPr } public void r un() // compute pri . . . } } The following code would then create a thr ead and start it running: PrimeThread p = new PrimeThread(14); p.start(); The other way to create a thread is to declare a class that implem interface. That then be a llocated, passed as an arsame example in this other style loo ks like the following: class PrimeRun implements Runnable { long minPrime; ng minPrime) PrimeRun(lo { minPrime; this.minPrime = } blic pu } i g Basic Java 115 . Constructors public Thread() Allocates a new Thread object. This constructor has the same effect as Thread(null, null, gname), where gname is a newly generated name. Automatically generated ames are of the form "Thread-"+n, where n is an integer. Threads created this way tually do anything. trating this method being used follows: } public void run() { System.out.println("A new thread with name " } d main(String args[] ) { (); System.out.println("new Thread() succeed"); else { System.out.println("new Thread() failed"); failed++; } public T t) Allocat hread-"+n, where n is an integer. has the same effect as Thread is a newly generated name. Automatically generated names are of the form "Thread-"+n, where n is an integer. public Thread(String name) n must have overridden their run() method to ac An example illus import java.lang.*; class plain01 implements Runnable { String name; plain01() { name = null; plain01(String s) { name = s; } if (name == null) System.out.println("A new thread created"); else + name + " created"); } class threadtest01 { public static voi int failed = 0 ; Thread t1 = new Thread if (t1 != null) } } ad(Runnable targe hre es a new Thread object. This constructor has the same effect as Thread(null, target, gname), where gname is a newly generated name. Automatically generated names are of the form "T public Thread(ThreadGroup group, Runnable target) Allocatehread object. This constructor s a new T (group, target, gname), where gname Basic Java 116 . Allocates a new Thread object. This constructor has the same effect as Thread(null, , name). e) structor has the same effect as Thread public Thread(Runnable target,String name) Allocates a new Thread object. This constructor has the same effect as Thread(null, target, public Thread( ) Allocates a new Thread object object, has the specified name as its na , an d to by group. If group is not null, checkAccess method of that thread group is called with no arguments; throwing a SecurityException; if group is null, the new process same group as the thread that is creating the new thread. read is started. The priority of ad creating it, tha to change the The newly created thread is initially marked the thread creating it is currently setDaemon may be used to c Example for u ss rstThread extends Thread public FirstThread(String name) super(name); catch(Exception err){} } } null public Thread(ThreadGroup group, String namcates a new Thread object. This con Allo (group, null, name) name). ThreadGroup group,Runnable target, String name so that it has target as its run d belongs to the thread group referre me the this may result inbelongs to the If the target argument is not null, the run method of the target is called when this thread is started. If the target argument is null, this thread's run method is called when this th the newly created thread is set equal to the priority of the thre t is, the currently running thread. The method setPriority may be usedpriority to a new value. as being a daemon thread if and only if marked as a daemon thread. The methodhange whether or not a thread is a daemon. sing Thread Class Fi cla { { } public void run() { for(int count System.out.println("count : "+count); System.out.println("First Thread in sleep state"); try { Thread.sleep(3000); } = 0;count<25;count++) if(count == 10) { { Basic Java 117 . } lass SecondThread extends Thread lic SecondThread(String name) { public void run() { for(int index = 0;index<25;index++) (Thread.activeCount())); System.out.println("Name of current thread : " +((Thread.currentThread()).getName( When dealing with multiple threads, consider this: What happens when two or more he same variable at the same time, and at least one of the } c{ pubsuper(name); } { System.out.println("***"); } } } public class ThreadExample{public static void main(String args[]) { FirstThread first = new FirstThread("first"); SecondThread second = new SecondThread("second"); System.out.println("Number of active threads : " + ))); first.start(); second.start(); } } Synchronization threads want to access t Threads wants to change the variable? If they were allowed to do this at will, chaos would reign. For example, while one thread reads Joe Smith's record, another thread tries to change his salary (Joe has earned a 50-cent raise). The problem is that this little change causes the Thread reading the file in the middle of the others update to see something somewhat random, and it thinks Joe has gotten a $500 raise. That's a great thing for Joe, but not such a great thing for the company, and probably a worse thing for the programmer who will lose his job because of it. How do you resolve this? The first thing to do is declare the method that will change the data and the method that will read to be synchronized. Java's key word, synchronized, tells the system to put a lock around a particular method. At most, one thread may be in any synchronized method at a time. Basic Java 118 . Two synchronized methods. public synchronized void setVar(int){ myVar=x; } public synchronized int getVar (){ return myVar; } Now, while in setVar() the Java VM sets a condition lock, and no other thread will be allowed to enter a synchronized method, including getVar(), until setVar() has finished. Because the other threads are prevented from entering getVar(), no thread will obtain information which is not correct because setVar() is in mid-write. The java.lang.ThreadGroup Class A thread group represents a set of threads. In addition, a thread group can also include other thread groups. The thread groups form a tree in which every thread group except the initial thread group has a parent. A thread is allowed to access information about its own thread group, but not to access information about its thread group's parent thread group or any other thread groups. Constructors public ThreadGroup(String name) Constructs a new thread group. The parent of this new group is the thread group of the currently running thread. public ThreadGroup(ThreadGroup parent,String name) Creates a new thread group. The parent of this new group is the specified thread group. The checkAccess method of the parent thread group is called with no arguments; this may result in a security exception. The Daemon Property Threads can be one of two types: either a thread is a user thread or a Daemon thread. Daemon thread is not a natural thread, either. You can set off Daemon threads on a path without ever worrying whether they come back. Once you start a Daemon thread, you don't need to worry about stopping it. When the thread reaches the end of the tasks it was assigned, it stops and changes its state to inactive, much like user threads. A very important difference between Daemon threads and user threads is that Daemon Threads can run all the time. If the Java interpreter determines that only Daemon threads are running, it will exit, without worrying if the Daemon threads have finished. This is very useful because it enables you to start threads that do things such as monitoring; they die on their own when there is nothing else running. Two methods in java.lang.Thread deal with the Daemonic state assigned to a thread: Basic Java 119 . isDaemon() setDaemon(boolean) T he first method, isDaemon()isDaemon() true false , is used to test the state of a particular thread. Occasionally, this is useful to an object running as a thread so it can determine if it is running as a Daemon or a regular thread. returns if the thread is a Daemon, and otherwise. The second method, setDaemon(boolean), is used to change the daemonic state of e thread. To make a thread a Daemon, you indicate this by setting the input value t back to a user thread, you set the Boolean value to false. f is class only if it must clean up after being terminated asynchronously. If by a method, it is important that it be rethrown so that the read actually dies. th to true. To change i The java.lang.ThreadDeath Class An instance of ThreadDeath is thrown in the victim thread when the stop method with ero arguments in class Thread is called. An application should catch instances o zth ThreadDeath is caught th. Basic Java 120 . HAPTER 7: Networking in Java with remote systems. Much API within the java.net C The Java execution environment is designed so that applications can be easily written to efficiently communicate and share processingf this functionality is provided with the standard Java o Thr investig pro co nteract is critical to developing network applications. Interne P is t ll data on the Internet flows through IP ackets, the basic unit of IP transmissions. IP is termed a connectionless, unreliable pro o before destina becaus ct corrupted data. These tasks ust be implemented by higher level protocols, such as TCP. package. TCP/IP Protocols eeprotocols are most commonly used within the TCP/IP scheme and a closer ation of their properties is warranted. Understanding how these three ls (IP, TCP, and UDP) i to t Protocol (IP) he keystone of the TCP/IP suite. A Ip tocl. As a connectionless protocol, IP does not exchange control information transmitting data to a remote systempackets are merely sent to the tion with the expectation that they will be treated properly. IP is unreliable e it does not retransmit lost packets or dete m IP defines a universal-addressing scheme called IP addresses. An IP address is a 32-bit number and each standard address is unique on the Internet. Given an IP packet, the information can be routed to the destination based upon the IP address defined in the packet header. IP addresses are generally written as four numbers, between 0 and 255, separated by period. (for example, 124.148.157.6) While a 32-bit number is an appropriate way to address systems for computers, umans understandably have difficulty remembering them. Thus, a system called the h Domain Name System (DNS) was developed to map IP addresses to more intuitive identifiers and vice-versa. You can use instead of 128.148.157.6. It is important to realize that these domain names are not used nor understood by IP. When an application wants to transmit data to another machine on the Internet, it ust first translate the domain name to an IP address using the DNS. A receiving e translation, using the DNS to return a domain ame given an IP address. There is not a one-to-one correspondence between IP Jav m application can perform a revers n addresses and domain names: A domain name can map to multiple IP addresses, and multiple IP addresses can map to the same domain name. a provides a class to work with IP Addresses, InetAddress. Basic Java 121 . THE INETADDRESS CLASS This class represents an Internet Protocol (IP) address. Applications should use the the transport layer. TCP provides a liable, connection-oriented, continuous-stream protocol. The implications of these haracteristics are: uous-stream. TCP provides a communications medium that allows for stics, it is easy to see why TCP would be used by most ternet applications. TCP makes it very easy to create a network application, freeing efficiently provide reliable transmissions given e parameters of your application. Furthermore, retransmission of lost data may be appropriate for your application, because such information's usefulness may have n important addressing scheme which TCP defines is the port. Ports separate same P clients to initiate contact, a pecific port can be established from where communications will originate. These efore transmitting data. Information is sent with the assumption that the recipient will be TP), lost data indicating the current time would be invalid methods getLocalHost, getByName, or getAllByName to create a new InetAddress instance. Transmission Control Protocol (TCP) Most Internet applications use TCP to implement rec Reliable. When TCP segments, the smallest unit of TCP transmissions, are lost or corrupted, the TCP implementation will detect this and retransmit necessary segments. Connectionoriented. TCP sets up a connection with a remote system by transmitting control information, often known as a handshake, before beginning a communication. At the end of the connect, a similar closing handshake ends the transmission. Contin an arbitrary number of bytes to be sent and received smoothly; once a connection has been established, TCP segments provide the application layer the appearance of a continuous flow of data. Because of these characteri In you from worrying how the data is broken up or about coding error correction routines. However, TCP requires a significant amount of overhead and perhaps you might wish to code routines that more thin expired. A various TCP communications streams which are running concurrently on the system. For server applications, which wait for TC s concepts come together in a programming abstraction known as sockets. User Datagram Protocol (UDP) UDP is a low-overhead alternative to TCP for host-to-host communications. In contrast to TCP, UDP has the following features: Unreliable. UDP has no mechanism for detecting errors nor retransmitting lost or corrupted information. Connectionless. UDP does not negotiate a connection b listening. Message-oriented. UDP allows applications to send self-contained messages within UDP datagrams, the unit of UDP transmission. The application must package all information within individual datagrams. For some applications, UDP is more appropriate than TCP. For instance, with the Network Time Protocol (N Basic Java 122 . by the time it was retransmitted. In a LAN environment, Network File System (NFS) provide reliability at the application layer and thus uses UDP. niform Resource Locator (URL) e. These two portions of can more efficiently As with TCP, UDP provides the addressing scheme of ports, allowing for many applications to simultaneously send and receive datagrams. UDP ports are distinct from TCP ports. For example, one application can respond to UDP port 512 while another unrelated service handles TCP port 512. U While IP addresses uniquely identify systems on the Internet, and ports identify TCP or UDP services on a system, URLs provide a universal identification scheme at the application level. Anyone who has used a Web browser is familiar with seeing URLs, though their complete syntax may not be self-evident. URLs were developed to create a common format of identifying resources on the Web, but they were designed to be general enough so as to encompass applications that predated the Web by decades. Similarly, the URL syntax is flexible enough so as to accommodate future rotocols. p URL Syntax The primary classification of URLs is the scheme, which usually corresponds to an application protocol. Schemes include http, ftp, telnet, and gopher. The rest of the URL syntax is in a format that depends upon the scheminformation are separated by a colon to give us: s cheme-name:scheme-info Thus, while mailto:[email protected] indicates "send mail to user dwb at the machine netspace.org," means "open an FTP connection to etspace.org n and log i n as user dwb." nform to a general format that follows the following pattern: chemename://host:port/file-info#internal-reference General URL Format Most URLs used co s Scheme-name is a URL scheme such as HTTP, FTP, or Gopher. Host is the domain name or IP address of the remote system. Port is the port number on which the ervice is listening; since most application protocols define a standard port, unless a nd the colon which delimits it from the host urce requested on the remote system, which often mes is a file. However, the file portion may actually execute a server program and it c file on the system. The internal-reference is identifier of a named anchor within an HTML page. A named anchor llows a link to target a particular location within an HTML page. Usually this is not # s non-standard port is being used, the port a is omitted. File-info is the reso ti usually includes a path to a specifi usually the a used, and this token with the character that delimits it is omitted. Java and URLs Java provides a very powerful and elegant mechanism for creating network client applications allowing you to use relatively few statements to obtain resources from e Internet. The java.netpackage contains the sources of this power, the URL and RLConnection th U classes. Basic Java 123 . the so re a "relative UR An application can also specifye resourc information to reach thently used within HT frequ or F contained within it the relative URL: FAQ.html it would be a shorthand for: /FAQ.html he relativ T name, or port number is missing, the value is inherited from the fully spe not inherited The file component must be specified. The optional anchor is THE URL CLASS lass URL represents a Uniform Resource Locator, a pointer to a "resource" on the can be something as simple as a file or a directory, or it h as a query to a database or to nformation on the types of URLs and their formats can be eb/url-primer.html e previous example of a URL ates that the protocol to use is http (HyperText Transport Protocol) and that the on resides on a host machine named. The information t host machine is named demoweb/url-primer.html. The exact meaning of this t. The on the fly. This the information is er to which the TCP is not specified, the example, the default port for http is as: 0/demoweb/urlprimer.html lso known as a "ref" or a "reference". cter "#" followed by more characters. un.com/index.html#chapter1 C World Wide Web. A resource can be a reference to a more complicated object, suc a search engine. More iund at: fo In general, a URL can be broken into several parts. Th indic informati on tha name on the host machine is both protocol dependent and host dependen information normally resides in a file, but it could be generated component of the URL is called the file c omponent, even thoughnot necessarily in a file. A URL can optionally specify a "port", which is the port numbconnection is made on the remote host machine. If the port default port for the protocol is used instead. For 80. An alternative port could be specified A URL may have appended to it an "anchor", anchor is indicated by the sharp sign chara The a For example, L. Rather, it indicates that after the specifically interested in that part of cument that has the tag chapter1 attached to it. The meaning of a tag is urce specific. L", which contains only enough e relative to another URL. Relative URLs are ML pages. example, if the contents of the URL: e URL need not specify all the components of a URL. If the protocol, host cified URL. . This anchor is not technically part of the URied resource is retrieved, the application is specifdo Basic Java 124 . Example for URL GetURLApp.java import java.net.URL; import java.net.MalformedURLException; public class GetURLApp { public static void main(String args[]) { !=1) error("Usage: java GetURLApp URL"); lformedURLException ex) URL"); ion occurred."); void error(String s){ intln(s); to write to the resource referenced by the URL. In eneral, creating a connection to a URL is a multistep process: Manipulate parameters that affect the connection to the remote resource. Interact with the resource; query header fields and contents. import java.io.*; try { if(args.length System.out.println("Fetching URL: "+args[0]); URL url = new URL(args[0]); BufferedReader inStream = new BufferedReader( new InputStreamReader(url.openStream())); String line; { error("Bad } catch (IOException ex) { error("IOExcept } } public stat ic System.out.pr System.exit(1); } } THE URLCONNECTION CLASS The abstract class URLConnection is the superclass of all classes that represent a communications link between the application and a URL. Instances of this class can be used both to read from and g openConnection() connect() while ((line = inStream.readLine())!= null) { System.out.println(line); } inStream.close(); } catch (Ma Basic Java 125 . 1. The connection object is created by invoking the openConnection method on a . The setup parameters and general request properties are manipulated. the following methods: setAllowUserInteraction setDoInput e lt ction and UseCaches parameters can be set usi th seCaches. Default val t using the set fa n he s are accessed frequently. The methods: nte g Conte gth Conte Date Expir LastM ntType method. at: URL. 2 3. The actual connection to the remote object is made, using the connect method. 4. The remote object becomes available. The header fields and the contents of the remote object can be accessed. The setup parameters are modified using setDoOutput setIfModifiedSince setUseCaches and the general request properties are modified using the method: * setRequestProperty D fau values for the AllowUserIntera ng e methods setDefaultAllowUserInteraction and setDefaultUperties can be se uesfor general request pro DeultRequestProperty method. *C ertai* getCo ader fieldntEncodin * get * get * get * get * get ation odified ntLen ntType provide convenient access to these fields. The getContentType method is used by the getContent method to determine the type of the remote object; subclasses may find it convenient to override the getConte an be found c Basic Java 126 . TCP Socket Basics at the University of California at Berkeley as a tool rogramming. Originally part of UNIX operating ystems, the concept of sockets has been incorporated into a wide variety of hat is a Socket? munications link over the network with another tilizes the TCP protocol, inheriting the ces of information are needed to create a ient-server applications: A centralized service waits for arious remote machines to request specific resources, handling each request as it ients to know how to communicate with the server, standard pplication protocols are assigned well-known ports. On UNIX operating systems, ly be bound by applications with superuser (for example, Sockets were originally developedto easily accomplish network p s operating environments, including Java. W socket is a handle to a com A application. A TCP socket is one that uehavior of that transport protocol. Four pie b TCP socket: The local system's IP address The TCP port number which the local application is using The remote system's IP address The TCP port number to which the remote application is responding Sockets are often used in cl v arrives. In order for cl a ports below 1024 can on root) privileges, and thus for control, these well-known ports lie within this range, by orts are shown in the following table. convention. Some well known p Well-known TCP ports and services Port Service 21 FTP 23 Telnet 25 SMTP (Internet Mail Transfer) Finger 79 80 HTTP For many application protocols, you can merely usert and then manually emulate a clie the Telnet application to connect nt. This may help you understand communications work. , a port to establish a socket connection. such a port num are usually run by ports are allocated from ated to other operating en a dynamically-allocated port rt on the same machine uely identifies a communications link. Realize that a lients on the same port, since the clients will be on to the service po client-server how Client applications must also obtain, or bind Because the client initiates the communication with the server, ber could conveniently be assigned at runtime. Client applications normal, unprivileged users on UNIX systems, and thus these the range above 1024. This convention has held when migr systems, and client applications are ge nerally givabove 1024. Because no two applications can bind the same po Basic Java 127 . different systems and/or different ports; the uniqueness of the link's characteristics P SOCKET CLASSES of classes, which allow you to create socket-based network pplications. The two classes you use include java.net.Socket are preserved. TC JAVA Java has a number a and java.net.ServerSocket. THE SERVERSOCKET CLASS public class ServerSocket extends Object This class implements server sockets. A server socket waits for requests to come in ver the network. It performs some operation based on that request, and then lt to the requester. he actual work of the server socket is performed by an instance of the SocketImpl e the socket factory that creates the socket e itself to create sockets appropriate to the local firewall. .Date; ("server started"); end = new edOutputStream(socket.getOutputStream()); String date = (new Date()).toString(); byte data[] = new byte[date.length()]; data = date.getBytes(); data.length); o possibly returns a resu T class. An application can chang implementation to configur Example for ServerSocket ServerExample.java import java.io.*; import java.net.*; import java.u til pub lic class ServerExample { public static void main(String args[]) { ServerSocket server = null; Socket socket = null; BufferedOutputStream send = nu ll; try { server = new ServerSocket(3000); System.out.println while(true) { socket = server.accept(); s Buffer send.write(data,0, send.flush(); System.out.println("data socket.close(); } } flushed"); send.close(); Basic Java 128 . catch(Exception err) { System.out.println("Exception in transferring data to client"); } } xample for Socket ientExample <server IP ddress>"); socket = new Socket(InetAddress.getByName(ser_address),3000); rec System.out.println("socket created"); byt d rec v Str g System.out.println("Date from server : "+date); receive.close(); socket.close(); n err){ verview of UDP Messaging rly suited to UDP. UDP requires much less overhead, but the burden of } THE SOCKET CLASS This class implements client sockets (also called just "sockets"). A socket is an endpoint for communication between two machines. The actual work of the socket is performed by an instance of the SocketImpl class. An application, by changing the socket factory that creates the socket implementation, can configure itself to create sockets appropriate to the local firewall. E import java.io.*; import java.net.*; public class ClientExample { public static void main(String args[]) { Socket socket = null; BufferedInputStream receive = null; if(args.length == 0){ System.out.println("Usage : java Cl a System.exit(0); } String ser_address = args[0]; try { eive = new BufferedInputStream(socket.getInputStream()); e ata[] = new byte[100]; eie.read(data,0,data.length); in date = new String(data); } catch(Exceptio System.out.println("Exception in accessing file"); } } } O Programming with UDP has significant ramifications. Understanding these factors will inform your network programming. UDP is a good choice for applications in which communications can be separated into discrete messages, where a single query from a client invokes a single response from a server. Time-dependent data is particula Basic Java 129 . engineering any necessary reliability into the system is your responsibility. For responses to their queriesperfectly possible and gitimate with UDPyou might want to program the clients to retransmit the request mative message indicating communication difficulties. essageoriented. A common ith postcards. A dialog with all messages that fit within a small packet of a rn ur r message could have been lost en route, the recipients lost, or the recipient might be ignoring your message. een network programs are referred to as store an array of bytes. A receiving your message, possibly sending a ming abstraction. However, UDP sockets are very different from TCP e analogy, UDP sockets are much like creating a mailbox. A dress on m the message is intended. You place the postcard , you could potentially wait forever until one arrives in your read the postcard. Meta-information appears on l tasks: Creating an appropriately addressed datagram to send. up a socket to send and receive datagrams for a particular tion. ket for transmission. from a socket. instance, if clients never receive le or perhaps display an infor UDP Socket Characteristics UDP is described as unreliable, connectionless, and mgy that elucidates UDP is that of communicating w analo UDP must be quanticized into sm, although some packets can hold more data than others. When you specific size send out a message, you can never be certain that you will receive a retumessage. Unless you do receive a return message, you have no idea if yo message was receivedyoucould have been confirmation The postcards you will be exchanging betw datagrams. Within a datagram, you can application can extract this array and decode gram response. As with TCP, you will program in UDP using the socket return data program sockets. Extending thmailbox is identified by your address, but you don't construct a new one for each person to whom you will be sending a message. Instead, you place an ad the postcard that indicates to who in the mailbox and it is (eventually) sent on its way. When receiving a message mailbox . Once one does, you can the postcard that identifies the sender through the return address. As the previous analogies suggest, UDP programming involves the following genera Settingapplica Inserting datagrams into a soc Waiting to receive datagrams Decoding a datagram to extract the message, its recipient, and other meta-information. Java UDP Classes The java.net package has the tools that are nece ssary to perform UDP communications. For working with datagrams, Java provides the DatagramPacket and DatagramSocket classes. When receiving a UDP datagram, you also use the DatagramPacket class to read the data, sender, and meta-information. THE DATAGRAMPACKET CLASS This class represents a datagram packet. Datagram packets are used to implement a connectionless packet delivery service. Each message is routed from one machine to another based solely on information contained within that packet. Multiple packets sent from one machine to another might be routed differently, and might arrive in any order. Basic Java 130 . Example for DatagramPacket import java.net.*; import java.io.*; public class DatagramClient { public static void main(String args[]) { ngth == 0) t dgp = null; ))); a datagram socket is individually ddressed and routed. Multiple packets sent from one machine to another may be if(args.le { System.out.println("Usage : java DatagramClient <server address>"); System.exit(0); } String address = args[0]; DatagramPacke DatagramSocket dgs = null; byte receive[] = new byte[50]; try { dgs = new DatagramSocket(5000,InetAddress.getByName(address)); dgp = new DatagramPacket(receive,receive.length); dgs.receive(dgp); System.out.println("data received : "+(new tring(receive S dgs.close(); } catch(Exception err) { System.out.println("Exception in client"); } } } THE DATAGRAMSOCKET CLASS This class represents a socket for sending and receiving datagram packets. A datagram socket is the sending or receiving point for a connectionless packet delivery service. Each packet sent or received on a routed differently, and may arrive in any order. Basic Java 131 . BC JD com terms of the ODBC standard C API. What Is JDBC ? JDBCTM is a JavaTM API for executing SQL s tatements. (As a point of intis a trademarked name and is not an acronym; nevertheless, JDBC is of as standing for "Javinterfaces written in th e Java programming langfor tool/database developers a pure Java API. using U Sybase database, another program to access an Oracle datab. Jav a, being robust, secure, easy to use, easy to understand, and automatically adable on a network, is an excellent language basis for database applications. s needed is a way for Java applications to talk to a variety of different ses. JDBC is the mechanism for doing this. xtends what can be done in Java. For example, with Java and the JDBC API, ssible to publish a web page containing an applet that uses information d from a remote ployees (even if they are using a conglomeration of Windows, Macintosh, and achines) to one or more internal databases via an intranet. With more and programmers using the Java programming language, the need for easy se access from Java is continuing to grow. anagers like the combination of Java and JDBC because it makes inating information easy and economical. Businesses can continue to use stalled databases and access information easily d TM. Basic Java 132 . What Does JDBC Do? do three things: mt = con.createStatement(); = stmt.executeQuery("SELECT a, b, c FROM Table1"); w Str floa c"); JDBC JDBC is a "low-level" interface, which means com databa build h a m into higher- 3. va In this "object/relational" tion. Simply put, JDBC makes it possible to establish a connection with a database send SQL statements process the results. The following code fragment gives a basic example of these three steps: Connection con = DriverManager.getConnection ( "jdbc:odbc:wombat", "login", "password"); Statement st ResultSet rs hile (rs.next()) { int x = getInt("a"); ing s = getString("b"); t f = getFloat(" } Is a Low-level API and a Base for Higher-level APIs that it is used to invoke (or "call") SQL mands directly. It works very well in this capacity and is easier to use than other se connectivity APIs, but it was designed also to be a base upon which to igher-level interfaces and tools. A higher-level interface is "userfriendly," using understandable or more con orevenient API that is translated behind the scenes a low-level interface such as JDBC. At the time of this writing, two kinds of Ja. 4. a direct mapping of relational database tables to Java classes. JavaSoft and others have announced plans to implement this. applica Basic Java 133 . JDBC versus ODBC and other APIs t this point, Microsoft's ODBC (Open DataBase Connectivity) API is probably the ers Calls from Java to native C code have a number of drawbacks in the security, implementation, robustness, and automatic portability of applications. 2. A literal translation of the ODBC C API into a Java API would not be desirable. For example, Java has no pointers, and ODBC makes copious use of them, including the notoriously rrorprone generic pointer "void *". You can think of JDBC as ODBC transla d interface that is natural for Java programmers. 3. ODBC is hard to learn. It mixes simple and advanced features together, and it has complex options even for simple queries. JDBC, on the other hand, was designed to keep simple things simple while allowing more advanced capabilities where required. 4. A Java API like JDBC is needed in order to enable a "pure Java" solution. When ODBC is used, the ODBC driver manager and drivers must be urse, it is easy to use. ore recently, Microsoft has introduced new APIs beyond ODBC: RDO, ADO, and LE DB. These designs move in the same direction as JDBC in many ways, that is, being an object-oriented database interface based on classes that can be plemented on ODBC. However, we did not see compelling functionality in any of ese interfaces to make them an alternative basis to ODBC, especially with the DBC driver market well-established. Mostly they represent a thin veneer on ODBC. his is not to say that JDBC does not need to evolve from the initial release; owever, we feel that most new functionality belongs in higher- level APIs such as e object/relational mappings and embedded SQL mentioned in the previous ection. A most widely used programming interface for accessing relational databases. It off: 1. ODBC is not appropriate for direct use from Java because it uses a C interface. e ted into an object-oriente manually installed on every client machine. When the JDBC driver is written completely in Java, however, JDBC code is automatically installable, portable, and secure on all Java platforms from network computers to mainframes. In summary, the JDBC API is a natural Java interface to the basic SQL abstractions and concepts. It builds on ODBC rather than starting from scratch, so programmers familiar with ODBC will find it very easy to learn JDBC. JDBC retains the basic design features of ODBC; in fact, both interfaces are based on the X/Open SQL CLI (Call Level Interface). The big difference is that JDBC builds on and reinforces the tyle and virtues of Java, and, of co s M O in im th O T h th s Basic Java 134 . anagement system being accessed. A user's SQL stateme mdnts are delivered to the atabase, and the results of those statements are sent back to the user. The ated on another machine to which the user is connected via a middle tier, which then sends them to the user. MIS he middle tier makes it Finally, in many cases the three-tier architecture can rovide performance advantages. database may be loc In the three-tier model, commands are sent to a "middle tier" of services, which then send SQL statements to the database. The database processes the SQL statements and sends the results back to theirectors find the three-tier model very attractive because t. Database Server p Java Applet or HTML Browser Application Server (Java) JDBC DBMS Basic Java 135 . Until now the middle tier has typically been written in languages such as C or C++, which offer fast performance. However, with the introduction of optimizing compilers at translate Java bytecode into efficient machine-specific code, it is becoming he middle tier in Java. This is a big plus, making it possible to SQ Str u ) is the standard language for accessing relational SQL syntax or semantics for more dvanced functionality. For example, not all databases support stored procedures or e meantime, however, the JDBC API must support SQL as it is. t, an application query need not even be SQL, or it may be a pecialized derivative of SQL designed for specific DBMSs (for document or image C-sty The es commo and for For co ird way. It pro Databa capabi Bec databa for any standa design refers 1992. E ascerta the JDB The implem conform in the function not currently branding vendor implementations, but this compliance definition th practical to implement ttake advantage of Java 's robustness, multithreading, and security features. JDBC is important to allow database access from a Java middle tier. L Conformance uctred Query Language (SQL databases. One area of difficulty is that although most DBMSs (DataBase Management Systems) use a standard form of SQL for basic functionality, they do not conform to the more recently-defined standard a outer joins, and those that do are not consistent with each other. It is hoped that the portion of SQL that is truly standard will expand to include more and more functionality. In th fac s queries, for example). A second way JDBC deals with problems of SQL conformance is to provide ODB le escape clauses. cape syntax provides a standard JDBC syntax for several of the more n areas of SQL divergence. For example, there are escapes for date literals stored procedure calls. mplex applications, JDBC deals with SQL conformance in a th vides descriptive information about the DBMS by means of the seMetaData interface so that applications can adapt to the requirements and lities of each DBMS. ause the JDBC API will be used as a base API for developing higher-level se access tools and APIs, it also has to address the problem of conformance thing built on it. The designation "JDBC COMPLIANT " was created to set a rd level of JDBC functionality on which users can rely. In order to use this ation, a driver must support at least ANSI SQL-2 Entry Level. (ANSI SQL-2 to the standards adopted by the American National Standards Institute in ntry Level refers to a specific list of SQL capabilities.) Driver developers can in that their drivers meet these standards by using the test suite available with C API. "JDBC COMPLIANT " designation indicates that a vendor's JDBC entation has passed the conformance tests provided by JavaSoft. These ance tests check for the existence of all of the classes and methods defined JDBC API, and check as much as possible that TM TM the SQL Entry Level ality is available. Such tests are not exhaustive, of course, and JavaSoft is Basic Java 136 . provides some degree of confidence in a JDBC implementation. With wider and wider acceptance of the JDBC API by database vendors, connectivity vendors, Internet service vendors, and application writers, JDBC is quickly becoming the standard for Java database access. JavaSoft Framework JavaSoft provides three JDBC product components as part of the Jit (JDK): ava Development manager, t suite, and ridge. er is the backbone of DBC architecture. simple; its primary function is nnect Java get out of the way provides some confidence that JDBC drivers will run your pass the JDBC driver test suite can be designated JDBC as a way to get JDBC off the ground quickly, and long term will provide way to access some of the less popular DBMSs if JDBC drivers are not bridge product provides JDBC access via ODBC drivers. Note that ODBC binary code, and tabase client code, must be loaded on each client machine slated to a DBMS protocol by a ng DBC. K the JDBC driver the JDBC driver tes the JDBC-ODBC bThe JDBC driver manag quite small and correct JDBC driver and then the J to co It actually is applications to the . The JDBC driver test suiteprogram. Only drivers that COMPLIANT . The JDBC-ODBC bridge allows ODBC drivers to be used as JDBC drivers. It was implemented a implemented for them. JDBC Driver Types The JDBC drivers that we are aware of at this time fit into one of four categories: 1. JDBC-ODBC bridge plus ODBC driver: The JavaSoft in many cases daindependent net protocol which is then tr TM handle the additional requirements for security, access through firewalls, and so on, that the Web imposes. Several vendors are adding JDBC drivers to their existi database middleware produc ts. 4. Native-protocol pure Java driver: This kind of driver converts J Basic Java 137 . Eventually, we expect access databases fromthat driver categories 3 and 4 will be the preferred way to JDBC. Driver categories 1 and 2 are interim solutions where irect pure Java drivers are not yet available. There are possible variations on ll the advantages of he following chart shows the four categories and their properties: d categories 1 and 2 (not shown in the table below) that require a connector, but these are generally less desirable solutions. Categories 3 and 4 offer a Java, including automatic installation (for example, downloading the JDBC driver with an applet that uses it). T DRIVER CATEGORY ALL JAVA? NET PROTOCOL 1 JDBC-OCBC Bridge No Direct 2 Native API as basis No Direct JDBC-Net Yes Requires Connector 3 4 Native protocol as basis Yes Direct Obtaining JDBC Drivers. C onnection A Connection object represents a connection with a database. A connection session includes the SQL statements that are executed and the results that are returned over at conn th ection. A single application can have one or more connections with a single ith many different databases. way to establish a connection with a database is to call the method riverManager.getConnection. This method takes a string containing a URL. The ndle opening a connection. database, or it can have connections w Opening a Connection The standard D DriverManager class, referred to as the JDBC management layer, attempts to locate a driver than asier to just let the DriverManager class ha e"); Basic Java 138 . URLs in General Use Since URLs often cause some confusion, we will first give a brief explanation of URLs in general and then go on to a discussion of JDBC URLs. URL (Uniform Resource Locator) gives information for locating a resource on the s information, and it is alw specifies "file tran protoco the Inte file:/hom orial.html The rest of a URL, everything after the first colon, gives information about where the data so For the http, the rest of the URL identifies the host and may optiona L for the JavaSo A Internet. It can be thought of as an address. The first part of a URL specifies the protocol used to acces ays followed by a colon. Some common protocols are "ftp", which sfer protocol," and "http," which specifies "hypertext transfer protocol." If the l is "file," it indicates that the resource is in a local file system rather than on rnet. asoft.com/docs/JDK-1_apidocs.zip e/haroldw/docs/tut urce is located. If the protocol is file, the rest of the URL is the path to a file. protocols ftp and lly give a path to a more specific site. For example, below is the URft home page. This URL identifies only the host: ww.javasoft.com By nav ge, one can go to many other pages, one of which is the JDBC home page. The URL for the JDBC home page is more specific and looks like this: JDB U A JDBC will rec actually Users d supplie conven URLs. ince J kinds of drivers, the conventions are of necessity very flexible. First, they allow different drivers to use different schemes for naming values er writers to encode all necessary connection formation within them. This makes it possible, for example, for an applet that wants talk to a given database to open the database connection without requiring the inistration chores. nt igating from this home pa ww.javasoft.com/products/jdbc C RLs URL provides a way of identifying a database so that the appropriate driver ognize it and establish a connection with it. Driver writers are the ones who determine what the JDBC URL that identifies their particular driver will be. o not need to worry about how to form a JDBC URL; they simply use the URL d with the drivers they are using. JDBC's role is to recommend some tions for driver writers to follow in structuring their JDBC SDBC URLs are used with various databases. The odbc subprotocol, for example, lets the URL contain attribute (but does not require them). Second, JDBC URLs allow driv into user to do any system adm differe Basic Java 139 . network name services (such as DNS, NIS, and DCE), and there is no restriction be used. ree parts, which are jdbc:<subprotocol>:<subname> rotocol is "odbc", and the subname "fred" is a local ODBC data source. network name service (so that the database rotocol, and it can have a subsubname with any jdb The "o The su ODBC-of attrib specified after the subname (the data source name). The full bute-value>]* about which ones can The standard syntax for JDBC URLs is shown below. It has th separated by colons: The three parts of a JDBC URL are broken down as follows: 5. jdbc-the protocol. The protocol in a JDBC URL is always jdbc. 6. subp If one wants to use a name in the JDBC URL does not have to be its actual name), the naming service can be the subprotocol. So, for example, one might have a URL like: jdbc:dcenaming:accounts-payable In this example, the URL specifies that the local DCE naming service should resolve the database name "accounts-payable" into a more specific name that can be used to connect to the real database. 7. <subname>-a way to identify the database. The subname can vary, depending on the subp internal syntax the driver writer chooses. The point of a subname is to give enough information to locate the database. In the previous example, "fred" is enough because ODBC provides the remainder of the information. A database on a remote server requires more information, however. If the database is to be accessed over the Internet, for example, the network address should be included in the JDBC URL as part of the subname and should follow the standard URL naming convention of //hostname:port/subsubname Supposing that "dbnet" is a protocol for connecting to a host on the Internet, a JDBC URL might look like this: c:dbnet://wombat:356/fred dbc" Subprotocol bprotocol odbc is a special case. It has been reserved for URLs that specify style data source names and has the special feature of allowing any number ute values to be syntax for the odbc subprotocol is: jdbc:odbc:<data-source-name>[;<attribute-name>=<attri Basic Java the subprotocol in a JDBC ss presents this name to its list of registered drivers e is reserved should recognize it and establish a connec n s. For example, odbc is reserved for the JDBCDBC Bridge. If there were, for another example, a Miracle Corporation, it might want to register "miracle" as the subprotocol for the JDBC driver that connects to its so that no one else would use that name. rocedure call to a DBMS that does not support . aredStatement- -created by the method prepareStatement. A executed. Instances of PreparedStatement extend Statement and therefore include Statement methods. A PreparedStatement object has the potential to be more efficient than a en pre-compiled and stored for future use. s all of the following are valid jdbc:odb c:obc:wombat c:wombat;CacheSize=20;ExtensionCase=LOW jdbc:odbc:qeora;UID=kgh;PWD=fooey erig Subprotocols A driver developer can reserve a nURL. When the DriverManager cla , th driver for which this nametio to the database it identifie O Miracle DBMS JavaSoft is acting as an informal registry for JDBC subprotocol names. To register a subprotocol name, send email to: [email protected] Sending SQL Statements Once a connection is established, it is used to pass SQL statements to its underlying database. JDBC pplication that tries to send a stored p a stored procedures will be unsuccessful and generate an exception. JDBC requires that a driver provide at least ANSI SQL-2 Entry Level capabilities in order to be designated JDBC COMPLIANT . This means that users can count on at least this tandard level of functionality s JDBC provides three classes for sending SQL statements to the database, and three methods in the Connection interface create instances of these classes. These classes and the methods which create them are listed below: 5. Statement- -created by the method createStatement. A Statement object is used for sending simple SQL statements. 6. Prep PreparedStatement object is used for SQL statements that take one or more parameters as input arguments (IN parameters). PreparedStatement has a group of methods which set the value of IN parameters, which are sent to the database when the statement is Statement object because it has be 7. CallableStatement- -created by the method prepareCall. CallableStatement objects are used to execute SQL stored procedures- -a group of SQL statements that is called by name, much like invoking a function. A CallableStatement object inherits methods for handling IN parameters from PreparedStatement; it adds methods for handling OUT and INOUT parameters. The TM following list gives a quick way to determine which Connection method is appropriate for creating different types of SQL statements: Basic Java 141 . createStatement method is used for simple SQL statements (no parameters) prepareStatement method is used for SQL statements with one or more IN parameters simple SQL statements that are executed frequently prepareCall method is used for call to stored procedures Transactions A transaction consists of one or more statements that have been executed, completed, and then either committed or rolled back. When the method commit or rollback is called, the current transaction ends and another one begins. A new connection is in autocommit mode by default, meaning that when a statement is completed, the method commit will be called on that statement automatically. In this case, since each statement is committed individually, a transaction consists of only one statement. If auto-commit mode has been disabled, a transaction will not rminate until the method commit or rollback is called explicitly, so it will include all e statements that have been executed since the last invocation of the commit or In this second case, all the statements in the transaction are ed back as a group. t one change to take effect unless another one does act, a JDBC-compliant driver must upport transactions. DatabaseMetaData supplies information describing the level of a value and a second transaction rea be allo invalid DBMS to teth rollback method.ommitted or roll c The method commit makes permanent any changes an SQL statement makes to a database, and it also releases any locks held by the transaction. The method rollback will discard those changes. ometimes a user doesn't wan Sa l will support transactions. In f s transaction support a DBMS provides. Transaction Isolation Levels ds that value before the change has been committed or rolled back? Should that wed, given that the changed value read by the second transaction will be if the first transaction is rolled back? A JDBC user can instruct the Basic Java 142 . allow a value to be read before it has been committed ("dirty reads") with the followin con. The hig e more care is taken to avoid conflicts. The transac transac to the slower concur with the el epends on the apabilities of the underlying DBMS. onnection object is created, its transaction isolation level depends on e driver, but normally it is the default for the underlying database. A user may call commended, for it will trigger an mediate call to the method commit, causing any changes up to that point to be riverManager iver. In addition, e DriverManager class attends to things like driver login time limits and the printing connect, but in most cases it is better to let the DriverManager class anage the details of establishing a connection. gisters it with the DriverManager class when it is loaded. Thus, a user would not citly loads the driver class. recommended. The following code loads the class acme.db.Driver: g code, where con is the current connection: setTransactionIsolation(TRANSACTION_READ_UNCOMMITTED); her the transaction isolation level, th Connection interface defines five levels, with the lowest specifying that tions are not supported at all and the highest specifying that while one tion is operating on a database, no other transactions may make any changes data read by that transaction. Typically, the higher the level of isolation, the the application executes (due to increased locking overhead and decreased rency between users). The developer must balance the need for performance need for data consistency when making a decision about what isolation lev to use. Of course, the level that can actually be supported d c When a new C th the method setIsolationLevel to change the transaction isolation level, and the new level will be in effect for the rest of the connection session. To change the transaction isolation level for just one transaction, one needs to set it before the transaction begins and reset it after the transaction terminates. Changing the transaction isolation level during a transaction is not re im made permanent. D The DriverManager class is the management layer of JDBC, working between the user and the drivers. It keeps track of the drivers that are available and handles establishing a connection between a database and the appropriate dr th of log and tracing messages. For simple applications, the only method in this class that a general programmer needs to use directly is DriverManager.getConnection. As its name implies, this method establishes a connection to a database. JDBC allows the user to call the DriverManager methods getDriver, getDrivers, and registerDriver as well as the Driver method m Keeping Track of Available Drivers The DriverManager class maintains a list of Driver classes that have registered themselves by calling the method DriverManager.registerDriver. All Driver classes should be written with a static section that creates an instance of the class and then re normally call DriverManager.registerDriver directly; it should be called automatically by a driver when it is loaded. A Driver class is loaded, and therefore automatically registered with the DriverManager, in two ways: 8. By calling the method Class.forName. This expliSince it does not depend on any external setup, this way of loading a driver is Basic Java 143 .. he DriverManager class is intialized, it looks for the system property jdbc.drivers, and if the user has entered one or more drivers, the ~/.hotjava/properties ombat.sql.Driver:bad.test.ourDriver; he first call to a DriverManager method will automatically cause these driver classes be loaded. a database. When a r.getConnection ver in turn to see if it can establish a is capable of onnecting to a given URL. For example, when connecting to a given remote might be possible to use a JDBC-ODBC bridge driver, a JDBC-toby the database vendor. In such c.drivers are always registered first.) It will skip 9. By adding the driver to the java.lang.System property jdbc.drivers. This is a list of driver classnames, separated by colons, that the DriverManager class loads. When t DriverManager class attempts to load them. The following code illustrates how a programmer might enter three driver classes in (HotJava loads these into the system properties list on startup): jdbc.drivers=foo.bah.Driver:w Tto Note that this second way of loading drivers requires a preset environment that is persistent. If there is any doubt about that being the case, it is safer to call the method Class.forName to explicitly load each driver. This is also the method to use to bring in a particular driver since once the DriverManager class has been initialized, it will never recheck the jdbc.drivers property list. In both of the cases listed above, it is the responsibility of the newly-loaded Driver class to register itself by calling DriverManager.registerDriver. As mentioned above, ame class loader as the code issuing the request for a connection. s Establishing a Connection Once the Driver classes have been loaded and registered with the DriverManager class, they are available for establishing a connection with request for a connection is made with a call to the DriverManage method, the DriverManager tests each dri connection. It may sometimes be the case that more than one JDBC driver cdatabase, it generic-network-protocol driver, or a driver supplied cases, the order in which the drivers are tested is significant because the DriverManager will use the first driver it finds that can successfully connect to the given URL. First the DriverManager tries to use each of the drivers in the order they were gistered. (The drivers listed in jdb re any drivers which are untrusted code, unless they have been loaded from the same source as the code that is trying to open the connection. Basic Java 144 . ample of all that is normally needed to set up a onnection with a driver such as a JDBC-ODBC bridge driver: ment object is used to execute a precompiled SQL tatement with or without IN parameters; and a CallableStatement object is used to se stored procedure. h the Connection b, c FROM Table2); Statement objects tatements that produce a single result set, such as SELECT statements. The method executeUpdate is used to execute INSERT, UPDATE, or DELETE statements and also SQL DDL (Data Definition Language) statements like CREATE TABLE and DROP TABLE. The effect of an INSERT, UPDATE, or DELETE string comparisons per connection since it is unlikely that dozens of drivers will be loaded concurrently. The following code is an ex c Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); //loads the driver String url = "jdbc:odbc:fred"; DriverManager.getConnection(url, "userID", "passwd"); Statement A Statement object is used to send SQL statements to a database. There are actually three kinds of Statement objects, all of which act as containers for executing SQL statements on a given connection: Statement, PreparedStatement, which inherits from Statement, and CallableStatement, which inherits from PreparedStatement. They are specialized for sending particular types of SQL statements: a Statement object is used to execute a simple SQL statement with no arameters; a PreparedState ps execute a call to a databa The Statement interface provides basic methods for executing statements and retrieving results. The PreparedStatement interface adds methods for dealing with IN parameters; CallableStatement adds methods for dealing with OUT parameters. Creating Statement Objects Once a connection to a particular database is established, that connection can be sed to send SQL statements. A Statement object is created wit u method createStatement, as in the following code fragment: Connection con = DriverManager.getConnection(url, "sunny", ""); Statement stmt = con.createStatement(); The SQL statement that will be sent to the database is supplied as the argument to one of the methods for executing a Statement object: ResultSet rs = stmt.executeQuery("SELECT a, Executing Statements Using The Statement interface provides three different methods for executing SQL statements, executeQuery, executeUpdate, and execute. The one to use is determined by what the SQL statement produces. The method executeQuery is designed for s Basic Java 145 . statement is a modification of one or more columns in zero or more rows in a table. pdate is an integer indicating the number of rows that the update count). For statements such as CREATE lt set if there is one open. This means that one needs to complete any rocessing of the current ResultSet object before re-executing a Statement object. t or CallableStatement versions of these methods will cause an xecuteUpdate, a statement is completed when it is executed. In the rare cases The return value of executeUwere affected (referred to as T ABLE or DROP TABLE, which do not operate on rows, the return value of executeUpdate is always zero. The method execute is used to execute statements that return more than one result set, more than one update count, or a combination of the two. Because it is an advanced feature that most programmers will never need, it is explained in its own section later in this overview. All of the methods for executing statements close the calling Statement object's current resu p It should be noted that the PreparedStatement interface, which inherits all of the methods in the Statement interface, has its own versions of the methods executeQuery, executeUpdate and execute. Statement objects do not themselves contain an SQL statement; therefore, one must be provided as the argument to the Statement.execute methods. PreparedStatement objects do not supply an SQL statement as a parameter to these methods because they already contain a precompiled SQL statement. CallableStatement objects inherit the PreparedStatement forms of these methods. Using a query parameter with the PreparedStatemen S QLException to be thrown. Statement Completion When a connection is in auto-commit mode, the statements being executed within it are committed or rolled back when they are completed. A statement is considered complete when it has been executed and all its results have been returned. For the method executeQuery, which returns one result set, the statement is completed when all the rows of the ResultSet object have been retrieved. For the method e where the method execute is called, however, a statement is not complete until all of the result sets or update counts it generated have been retrieved. Some DBMSs treat each statement in a stored procedure as a separate statement; others treat the entire procedure as one compound statement. This difference becomes important when auto-commit is enabled because it affects when the method commit is called. In the first case, each statement is individually committed; in the second, all are committed together. Closing Statement Objects Statement objects will be closed automatically by the Java garbage collector. Nevertheless, it is recommended as good programming practice that they be closed explicitly when they are no longer needed. This frees DBMS resources immediately and helps avoid potential memory problems. Basic Java 146 . Using the Method execute The execute method should be used only when it is possible that a statern more than one ResultSet object, more than one update coment may unt, or a ts. These multiple possibilities for re, are possible when one is executing certain stored procedures or ring (that is, unknown to the application might execute a stored procedure then a select, then an update, out of the ordinary, it is no ng. For instance, e returns two result sets. After using the method one must call the method getResultSet to get the nd then the appropriate getXXX methods to retrieve values from it. To e second result set, one needs to call getMoreResults and then getResultSet a ond time. If it is known that a procedure returns two update counts, the method ults and a second call to now what will be returned present a more the result is a ResultSet r the statement executed was a DDL command. The first thing method execute, is to call either getResultSet or d getResultSet is called to get what might be the first of esultSet objects; the method getUpdateCount is called to get what st of two or more update counts. hen the result of an SQL statement is not a result set, the method getResultSet will turn null. This can mean that the result is an update count or that there are no more only way to find out what the null really means in this case is to call the e following is true: e ResultSet object it returned, it is necessary to call the method getMoreResults to see if there is another result set or update count. If getMoreResults returns true, then one needs to again call g ultS ually retrieve the next result set. As already stated above, if getR re one h means that the result is an update count or that there are no more results. Whe reR rns f upda ere are eeds to call the method retu combination of ResultSet objects and update coun results, though ra dynamically executing an unknown SQL st programmer at compile time). For example, a user and that stored procedure could perform an update,ically, someone using a stored procedure will know what then a select, and so on. Typ it returns. Because the method execute handles the cases that are surprise that retrieving its results requires some special handli suppose it is known that a procedurdure, execute to execute the proce first result set a get th sec getUpdateCount is called first, followed by getMoreRes getUpdateCount. Those cases where one does not kion. The method execute returns true if complicated situat object and false if it is a Java int. If it returns an int, that means that the result is eithe an update count or that to do after calling the getUpdateCount. The metho two or more Rbe the fir might Wre results. Themethod get UpdateCount, which will return an integer. This integer will be the number of rows affected by the calling statement or -1 to indicate either that the result is a result set or that there are no results. If the method getResultSet has already returned null, which means that the result is not a ResultSet object, then a return value of -1 has to mean that there are no more results. In other words, there are no results (or no more results) when th ((stmt.getResultSet() == null) && (stmt.getUpdateCount() == -1)) If one has called the method getResultSet and processed th etRes et to act esultSet turns null, as to call getUpdateCount to find out whether null n getMo esults retu alse, it means that the SQL statement returned an te count or that th no more results. So one n Basic Java 147 . getUpdateCount to find out which is the case. In this situation, there are no more sults when the following is true: ounts generated by a call to the method execute: stmt.getUpdateCount(); f (rowCount > 0) { // this is an update count " + count); lts(); s System.out.println(" No rows changed or statement was DDL command"); stmt.getMoreResults(); // use metadata to get info about result set columns ata in those rows through a set of get ethods that allow access to the various columns of the current row. The esultSet.next method is used to move to the next row of the ResultSet, making the come the current row. c re ((stmt.getMoreResults() == false) && (stmt.getUpdateCount() == -1)) The code below demonstrates one way to be sure that one has accessed all the result sets and update c stmt.execute(queryStringWithUnknownResults); while (true) { int rowCount = i System.out.println("Rows changed = stmt.getMoreResu continue; } if (rowCount == 0) { // DDL command or 0 update continue; } // if we have gotten this far, we have either a result set // or no more results ResultSet rs = stmt.getResultSet; if (rs != null) { . . . while (rs.next()) { . . . // process results stmt.getMoreResults(); continue; } break; // there are no more results ResultSet A ResultSet contains all of the rows which satisfied the conditions in an SQL statement, and it provides access to the d mR next row be The general form of a result set is a table with column headings and the corresponding values returned by a query. For example, if your query is SELECT a, b, c FROM Table1, your result set will have the following form: a b ------ --------- ------- 12345 Cupertino CA 83472 Redmond WA 83492 Boston MA Basic Java 148 . The following code fragment is an example of executing an SQL statement that will return a collection of rows, with column 1 as an int, column 2 as a String, and column 3 as an array of bytes: java.sql.Statement stmt = conn.createStatement(); ResultSet r = stmt.executeQuery("SELECT a, b, c FROM Table1"); while (r.next()) + s + " " + f); } the method next is called. Initially it is positioned efore the first row, so that the first call to next puts the cursor on the first row, tSet rows are retrieved in sequence from the top row ed. If a database allows positioned e cursor needs to be supplied as a nce. Either the column name or the column number can be used to designate the column from which to retrieve data. For example, if the second column of a ResultSet object rs is named "title" and stores values as strings, either of the following will retrieve the value stored in that column: String s = rs.getString("title"); String s = rs.getString(2); { // print the values for the current row. int i = r.getInt("a"); String s = r.getString("b"); float f = r.getFloat("c"); System.out.println("ROW = " + i + " " Rows and Cursors A ResultSet maintains a cursor which points to its current row of data. The cursor oves down one row each time mb making it the current row. Resul down as the cursor moves down one row with each successive call to next. A cursor remains valid until the ResultSet object or its parent Statement object is closed. In SQL, the cursor for a result table is nampdates or positioned deletes, the name of th u parameter to the update or delete command. This cursor name can be obtained by calling the method getCursorName. Note that not all DBMSs support positioned update and delete. The DatabaseMetaData.supportsPositionedDelete and supportsPositionedUpdate methods can be used to discover whether a particular connection supports these operations. When they are supported, the DBMS/driver must ensure that rows selected are properly locked so that positioned updates do not result in update anomalies or other concurrency problems. Columns The getXXX methods provide the means for retrieving column values from the current row. Within each row, column values may be retrieved in any order, but for maximum portability, one should retrieve values from left to right and read column values only o Basic Java 149 . Note that columns are numbered from left to right starting with column 1. Also, olumn names used as input to getXXX methods are case insensitive. e option of using the column name was prov h a ser who specifies column names in a query can use those same names as the arguments to getXXX methods. If, on the other hand, the l t column names (as in "select * from table1" or in ca s h e mn numbers should be used. In such situ o , r is no o ser to know for sure what the column names are. In some cases, it is possible for a SQL query to return a result set that has more than one column with the same name. If a column name is used as the parameter to a getXXX method, getXXX will return the value of the first matching column name. Thus, if there are multi0ple columns with the same name, one needs to use a column index to be sure that the correct column value is retrieved. It may also be slightly more efficient to use column numbers. Information about the columns in a ResultSet is available by calling the method ResultSet.getMetaData. The ResultSetMetaData object returned gives the number, types, and properties of its ResultSet object's columns. If the name of a column is known, but not its index, the method findColumn can be used to find the column number. Data Types and Conversions For the getXXX methods, the JDBC driver attempts to convert the underlying data to the specified Java type and then returns a suitable Java value. For example, if the getXXX method is getString, and the data type of the data in the underlying database AR, the JDBC dr r will convert VARCHAR to Java String. The return e a Java String object.. c Th ati ns sesethe ect staere em a ideent does d o at rived), u we column is d not specify y f sr te colu wa th u ive Basic Java Using Streams for Very Large Row Values ResultSet makes it possible to retrieve arbitrarily large LONGVARBINARY or LONGVARCHAR data. The methods getBytes and getString return data as one large chunk (up to the limits imposed by the return value of Statement.getMaxFieldSize). However, it may be more convenient to retrieve very large data in smaller, fixed-size chunks. This is done by having the ResultSet class return java.io.Input streams from which data can be read in chunks. Note that these streams must be accessed immediately because they will be closed automatically on the next getXXX call on ResultSet. (This behavior is imposed by underlying implementation constraints on large blob access.) The JDBC API has three separate methods for getting streams, each with a different return value: getBinaryStream returns a stream which simply provides the raw bytes from the database without any conversion. getAsciiStream returns a stream which provides one-byte ASCII characters. Basic Java 151 . getUnicodeStream returns a stream which provides two-byte Unicode characters. Note that this differs from Java streams, which return untyped bytes and can (for example) be used for both ASCII and); } } NULL Result Values To determine if a given result value is JDBC NULL, one must first read the column and then use the ResultSet.wasNull method to discover if the read returned a JDBC NULL. When one has read a JDBC NULL using one of the ResultSet.getXXX methods, the method wasNull will return one of the following: A Java null value for those getXXX methods that return Java objects (methods such as getString, getBigDecimal, getBytes, getDate, getTime, getTimestamp, getAsciiStream, getUnicodeStream, getBinaryStream, getObject). A zero value for getByte, getShort, getInt, getLong, getFloat, and getDouble. A false value for getBoolean. Optional or Multiple Result Sets Normally SQL statements. Basic Java 152 . You do not need to do anything to close a ResultSet; it is automatically closed by the Statement that generated it when that Statement is closed, is re-executed, or is used to retrieve the next result from a sequence of multiple results. PreparedStatement The PreparedStatement interface inherits from Statement and differs from it in two ways: 10. Instances of PreparedStatement contain an SQL statement that has already been compiled. This is what makes a statement "prepared." 11. The SQL statement contained in a PreparedStatement object may have one or more IN parameters. An IN parameter is a parameter whose value is not specified when the SQL statement is created. Instead the statement has a question mark ("?") as a placeholder for each IN parameter. A value for each question mark must be supplied by the appropriate setXXX method before the statement is executed. whole set of methods which. Creating PreparedStatement Objects The following code fragment, where con is a Connection object, creates a PreparedStatement object containing an SQL. Passing IN Parameters Before a PreparedStatement object is executed, the value of each ? parameter must be set. This is done by calling a setXXX method, where XXX is the appropriate type for the parameter. For example, if the parameter has a Java type of long, the method to use is setLong. The first argument to the setXXX methods is the ordinal position of the parameter to be set,. In the default mode for a connection (auto-commit enabled), each statement is commited or rolled back automatically when it is completed. Basic Java 153 ..(); } Data Type Conformance on IN Parameters The XXX in a setXXX method is a Java type. It is implicitly a JDBC type (a generic SQL type) because the driver will map the Java type to its corresponding JDBC type. Using setObject A programmer can explicitly convert an input parameter to a particular JDBC type by using the method setObject. This method can take a third argument, which specifies the target JDBC type. The driver will convert the Java Object from Java types to JDBC types whereas the setObject method uses the mapping from Java Object types to JDBC types Basic Java 154 .. Sending JDBC NULL as an IN parameter The setNull method allows a programmer to send a JDBC. Sending Very Large IN Parameters.. CallableStatement A CallableStatement object provides a way to call stored procedures in a standard way for all DBM. Basic Java 155 . The syntax for invoking a stored procedure in JDBC is shown below. (generic SQL types) of the OUT parameters, retrieving values from them, or checking whether a returned value was JDBC NULL. Creating a CallableStatement Object CallableStatement objects are created with the Connection method prepareCall. The example below creates an instance of CallableStatement that contains a call to the stored procedure getTestData, which has two arguments and no result parameter: CallableStatement cstmt = con.prepareCall( "{call getTestData(?, ?)}"); Whether the ? placeholders are IN, OUT, or INOUT parameters depends on the stored procedure getTestData. IN and OUT Parameters Passing in any IN parameter values to a CallableStatement object is done using the setXXX methods inherited from PreparedStatement. The type of the value being passed in determines which setXXX method to use (setFloat to pass in a float value, and so on). If the stored procedure returns OUT parameters, the JDBC type of each OUT parameter must be registered before the CallableStatement object can be executed. (This is necessary because some DBMSs require the JDBC type.) Registering the JDBC type is done with the method registerOutParameter. Then after the statement has been executed, CallableStatement's getXXX methods retrieve the parameter value. The correct getXXX method to use is the Java type that corresponds to the JDBC type registered for that parameter. In other words, registerOutParameter uses a JDBC type (so that it matches the JDBC, Basic Java 156 . and getBigDecimal retrieves a BigDecimal object (with three digits after the decimal point) from the second OUT parameter: CallableStatement cstmt = con.prepareCall( "{call getTestData(?, ?)}");); Unlike ResultSet, CallableStatement does not provide a special mechanism for retrieving large OUT values incrementally. INOUT Parameters which. CallableStatement cstmt = con.prepareCall( "{call reviseTotal(?)}"); cstmt.setByte(1, 25); cstmt.registerOutParameter(1, java.sql.Types.TINYINT); cstmt.executeUpdate(); byte x = cstmt.getByte(1); Retrieve OUT Parameters after Results Because of limitations imposed by some DBMSs, it is recommended that for maximum portability, all of the results generated by the execution of a CallableStatement object should be retrieved before OUT parameters are retrieved using CallableStatement.getXXX methods. If a CallableStatement object returns multiple ResultSet objects (using. Basic Java 157 . After this is done, values from OUT parameters can be retrieved using the CallableStatement.getXXX methods. Retrieving NULL Values as OUT Parameters JDBC NULL is to test it with the method wasNull, which returns true if the last value read by a getXXX method was JDBC NULL and false otherwise. Mapping SQL and Java Types Since SQL data types and Java data types are not identical, there needs to be some mechanism for reading and writing data between an application using Java types and a database using SQL types. To accomplish this, JDBC provides sets of getXXX and setXXX methods, the method registerOutParameter, and the class Types. This section brings together information about data types affecting various classes and interfaces and puts all the tables showing the mappings between SQL types and Java types in one place for easy reference. Mapping SQL Data Types into Java database. We recommend that you consult your database documentation if you need exact definitions of the behavior of the various SQL types on a particular database. Basic Java 158 .. Examples of Mapping In any situation where a Java program retrieves data from a database, there has to be some form of mapping and data conversion. In most cases, JDBC programmers will be programming with knowledge of their target database's schema. They would. Simple SQL Statement In the most common case, a user executes a simple SQL statement and gets back a ResultSet object with the results. The. The getXXX methods may be used to retrieve which JDBC types. (A user who does not know the type of a ResultSet column can get that information by calling the method ResultSet.getMetaData and then invoking the ResultSetMetaData methods getColumnType or getColumnTypeName.) The following code fragment demonstrates getting the column type names for the columns in a result set: String query = "select * from Table1"; ResultSet rs = stmt.executeQuery(query); ResultSetMetaData rsmd = rs.getMetaData(); int columnCount = rsmd.getColumnCount(); for (int i = 1; i <= columnCount; i++) { String s = rsmd.getColumnTypeName(i); System.out.println ("Column " + i + " is type " + s); } Basic Java 159 . SQL Statement with IN Parameters In another possible scenario, the user sends an SQL statement which. SQL Statement with INOUT Parameters In yet another scenario, a user wants to call a stored procedure, assign values to its INOUT parameters, retrieve values from the results,, since results returned to a ResultSet object with ResultSet.getXXX methods and retrieves the values stored in the output parameters with CallableStatement.getXXX methods. The XXX type used for ResultSet.getXXX methods is fairly flexible in some cases..math.BigDecimal object to a JDBC NUMERIC value. Next the two parameters are registered as OUT parameters, the first parameter as a JDBC TINYINT and the second parameter as a JDBC DECIMAL getInt gets DECIMAL as a java.math.BigDecimal object with Basic Java 160 . CallableStatement cstmt = con.prepareCall( "{call getTestData(?, ?)}"); cstmt.setByte(1, 25); cstmt.setBigDecimal(2, 83.75); // register the first parameter as a JDBC TINYINT and the second //parameter as a JDBC DECIMAL with two digits after the decimal point cstmt.registerOutParameter(1, java.sql.Types.TINYINT); cstmt.registerOutParameter(2, java.sql.Types.DECIMAL, 2); ResultSet rs = cstmt.executeUpdate(); //, 2); To generalize, the XXX in CallableStatement.getXXX and PreparedStatement.setXXX methods is a Java type. For setXXX methods, the driver converts the Java type to a JDBC type before sending it to the database. For getXXX methods, the driver converts the JDBC type returned by the database to a Java type. The driver will perform the explicit or implicit conversion before sending the parameter to the database. Dynamic Data Access and one constant facilitate accessing values whose data types are not known at compile time: ResultSet.getObject PreparedStatement.setObject CallableStatement.getObject Basic Java 161 . java.sql.Types.OTHER (used as an argument to CallableStatement.registerOutParameter) The method getObject can also be used to retrieve user-defined Java types. With the advent of abstract data types (ADTs) or other user-defined types in some database systems, some vendors may find it convenient to use getObject for retrieving these types. Tables for Data Type Mapping JDBC type Java type CHAR String VARCHAR String LONGVARCHAR String NUMERIC java.math.BigDecimal DECIMAL java.math.BigDecimal BIT boolean TINYINT byte SMALLINT short INTEGER int BIGINT long REAL float FLOAT double DOUBLE double BINARY byte[] VARBINARY byte[] LONGVARBINARY byte[] DATE java.sql.Date TIME java.sql.Time TIMESTAMP java.sql.Timestamp Basic Java 162 . Java Types Mapped to JDBC Types Java Type JDBC type String VARCHAR or LONGVARCHAR java.math.BigDecimal NUMERIC boolean BIT byte TINYINT short SMALLINT int INTEGER long BIGINT float REAL double DOUBLE byte[] VARBINARY or LONGVARBINARY java.sql.Date DATE java.sql.Time TIME java.sql.Timestamp TIMESTAMP The mapping for String will normally be VARCHAR but will turn into LONGVARCHAR if the given value exceeds the driver's limit on VARCHAR values. The same is true for byte[] and VARBINARY and LONGVARBINARY values. JDBC Types Mapped to Java Object Types Since the Java built-in types such as boolean and int are not subtypes of Object, there is a slightly different mapping from JDBC types to Java object types for the getObject/setObject methods. This mapping is shown in the following table: JDBC Type Java Object Type CHAR String VARCHAR String LONGVARCHAR String NUMERIC java.math.BigDecimal DECIMAL java.math.BigDecimal BIT Boolean TINYINT Integer SMALLINT Integer INTEGER Integer BIGINT Long REAL Float FLOAT Double DOUBLE Double BINARY byte[] VARBINARY byte[] LONGVARBINARY byte[] DATE java.sql.Date TIME java.sql.Time Basic Java 163 . TIMESTAMP java.sql.Timestamp Java Object Types Mapped to JDBC Types Java Object Type JDBC Type String VARCHAR or LONGVARCHAR java.math.BigDecimal NUMERIC Boolean BIT Integer INTEGER Long BIGINT Float REAL Double DOUBLE byte[] VARBINARY or LONGVARBINARY java.sql.Date DATE java.sql.Time TIME java.sql.Timestamp TIMESTAMP Note that the mapping for String will normaly be VARCHAR but will turn into LONGVARCHAR if the given value exceeds the driver's limit on VARCHAR values. The case is similar for byte[] and VARBINARY and LONGVARBINARY values. Conversions by setObject The method setObject converts Java object types to JDBC types. TI NY I NT String x x x x x x x x x x x x x x x x x x x java.math.BigDecimal x x x x x x x x x x x x x Boolean x x x x x x x x x x x x x Integer x x x x x x x x x x x x x Long x x x x x x x x x x x x x Float x x x x x x x x x x x x x Double x x x x x x x x x x x x x byte[] x x x java.sql.Date x x x x x java.sql.Time x x x x java.sql.Time- stamp x x x x x x Basic Java 164 . JDBC Types Retrieved by ResultSet.getXXX Methods An "x" means that the method can retrieve the JDBC type. An "X" means that the method is recommended for the JDBC type. T I N Y I N T Basic Java 165 . Sample Code // The following code can be used as a template. Simply // substitute the appropriate url, login, and password, and then substitute //the // SQL statement you want to send to the database. //--------------------------------------------------------- ------------------ // // Module: SimpleSelect.java // // Description: //Test program for ODBC API interface. //This java application will connect to //a JDBC driver, //issue a select statement // and display all result columns and rows //--------------------------------------------------------- ------------------ import java.net.URL; import java.sql.*; class SimpleSelect { public static void main (String args[]) { String url = "jdbc:odbc:my-dsn"; String query = "SELECT * FROM emp"; try { // Load the jdbc-odbc bridge driver Class.forName ("sun.jdbc.odbc.JdbcOdbcDriver"); DriverManager.setLogStream(System.out); // Attempt to connect to a driver. Each one // of the registered drivers will be loaded until // one is found that can process this URL Connection con = DriverManager.getConnection ( url, "my-user", "my-passwd"); // If we were unable to connect, an exception // would have been thrown. So, if we get here, // we are successfully connected to the URL // Check for, and display and warnings generated // by the connect. checkForWarning (con.getWarnings ()); // Get the DatabaseMetaData object and display // some information about the connection DatabaseMetaData dma = con.getMetaData (); System.out.println("\nConnected to " + dma.getURL()); System.out.println("Driver " + dma.getDrive System.out.println("Version " +dma.getDriverVersion()); System.out.println(""); Basic Java 166 . // Create a Statement object so we can submit // SQL statements to the driver Statement stmt = con.createStatement (); // Submit a query, creating a ResultSet object ResultSet rs = stmt.executeQuery (query); // Display all columns and rows from the result set dispResultSet (rs); // Close the result set rs.close(); // Close the statement stmt.close(); // Close the connection con.close(); } catch (SQLException ex) { // A SQLException was generated. Catch it and // display the error information. Note that there // could be multiple error objects chained // together) { // Got some other type of exception. Dump it. ex.printStackTrace (); } } //--------------------------------------------------------- --------- // checkForWarning // Checks for and displays warnings. Returns true if a warning // existed Basic Java 167 . //---------------------------------------------------------- --------- private static boolean checkForWarning (SQLWarning warn) throws SQLException { boolean rc = false; // If a SQLWarning object was given, display the // warning messages. Note that there could be // multiple warnings chained together if (warn != null) { System.out.println ("\n *** Warning ***\n"); rc = true; while (warn != null) { System.out.println ("SQLState: " +warn.getSQLState ()); System.out.println ("Message: " +warn.getMessage ()); System.out.println ("Vendor: " +warn.getErrorCode ()); System.out.println (""); warn = warn.getNextWarning (); } } return rc; } //---------------------------------------------------------- -------- // dispResultSet // Displays all columns and rows in the given result set //--------------------------------------------------------- --------- private static void dispResultSet (ResultSet rs) throws SQLException { int i; // Get the ResultSetMetaData. This will be used for // the column headings ResultSetMetaData rsmd = rs.getMetaData (); // Get the number of columns in the result set int numCols = rsmd.getColumnCount (); // Display column headings for (i=1; i<=numCols; i++) { Basic Java 168 . if (i > 1) System.out.print(","); System.out.print(rsmd.getColumnLabel(i)); } System.out.println(""); // Display data, fetching until end of the result set boolean more = rs.next (); while (more) { // Loop through each column, getting the // column data and displaying for (i=1; i<=numCols; i++) { if (i > 1) System.out.print(","); System.out.print(rs.getString(i)); } System.out.println(""); // Fetch the next result set row more = rs.next (); } } } JDBC-ODBC Bridge Driver If possible, use a Pure Java JDBC driver instead of the Bridge and an ODBC driver. This completely eliminates the client configuration required by ODBC. It also eliminates the potential that the Java VM could be corrupted by an error in the native code brought in by the Bridge (that is, the Bridge native library, the ODBC driver manager library, the ODBC driver library, and the database client library). What Is the JDBC-ODBC Bridge? The JDBC-ODBC Bridge is a JDBC driver which implements JDBC operations by translating them into ODBC operations. To ODBC it appears as a normal application program. The Bridge implements JDBC for any database for which an ODBC driver is available. The Bridge is implemented as the sun.jdbc.odbc Java package and contains a native library used to access ODBC. The Bridge is a joint development of Intersolv and JavaSoft. What Version of ODBC Is Supported? The bridge supports ODBC 2.x. This is the version that most ODBC drivers currently support. It will also likely work with most forthcoming ODBC 3.x drivers; however, this has not been tested. Basic Java 169 . The Bridge Implementation The Bridge is implemented in Java and uses Java native methods to call ODBC. Installation The Bridge is installed automatically with the JDK as package and libodbc.so. The Bridge expects these libraries to be named libodbcinst.so.1 and libodbc.so.1, so symbolic links for these names must be created. Using the Bridge The Bridge is used by opening a JDBC connection using a URL with the odbc subprotocol. See below for URL examples. Before a connection can be established, the bridge driver class, sun.jdbc.odbc.JdbcOdbcDriver, must either be added to the java.lang.System property. Using the Bridge from an Applet JDBC used with a Pure Java JDBC driver works well with applets. The Bridge driver does not work well with applets. Most Browsers Do Not Support the Bridge Since the Bridge is an optional component of the JDK, it may not be provided by a browser. Even if it is provided, only trusted applets (those allowed to write to files) will be able to use the Bridge. This is required in order to preserve the security of the applet sandbox. Finally, even if the applet is trusted, ODBC and the DBMS client library must be configured on each client. Tested Configurations From Solaris, we have used the Bridge to access Oracle 7.1.6 and Sybase Version 10 running on Solaris. From NT, we have used the Bridge to access SQL Server 6.x. ODBC Drivers Known to Work with the Bridge Visigenic provides ODBC drivers which have been tested with the the Bridge. Drivers are available for Oracle, Sybase, Informix, Microsoft SQL Server, and Ingres. To purchase the ODBC DriverSet 2.0, please contact Visigenic sales at 415-312-7197, or visit the web site. The INTERSOLV ODBC driver suite should be completely compatible with the JDBC-ODBC Bridge. The following drivers have successfully passed a minimal test suite: Oracle, xBASE, Sybase (Windows NT/95 Basic Java 170 . Basic Java 171 only), Microsoft SQL-Server, and Informix. To evaluate or purchase INTERSOLV ODBC drivers, please contact INTERSOLV DataDirect Sales at 1- 800-547-4000 Option 2 or via the World Wide Web at http:\\. The MS SQL Server driver has also been used successfully on NT. Many other ODBC drivers will likely work. ODBC Driver Incompatibilities On Solaris, we have found that the Sybase ctlib-based drivers don't work because ctlib has a signal-handling conflict with the Java VM. This is likely not a problem on NT due to differences in the NT Java VM; however, this has not been verified. Some ODBC drivers only allow a single result set to be active per connection. What Is the JDBC URL Supported by the Bridge? The Bridge driver uses the odbc subprotocol. URLs for this subprotocol are of the form: jdbc:odbc:<data-source-name>[<attribute-name>=<attribute-value>]* For example: jdbc:odbc:sybase jdbc:odbc:mydb;UID=me;PWD=secret jdbc:odbc:ora123;Cachesize=300 Debugging The Bridge provides extensive tracing when DriverManager tracing is enabled. The following line of code enables tracing and sends it to standard out: java.sql.DriverManager.setLogStream(java.lang.System.out); General Notes The Bridge assumes that ODBC drivers are not reentrant. This means the Bridge must synchronize access to these drivers. The result is that the Bridge provides limited concurrency. This is a limitation of the Bridge. Most Pure Java JDBC drivers provide the expected level of concurrent access.
https://id.scribd.com/document/170040914/Basic-Java
CC-MAIN-2019-35
en
refinedweb
ORA-30937: No schema definition for 'redefine' when registering XML schema (Doc ID 2417618.1) Last updated on AUGUST 04, 2018 Applies to:Oracle Database - Standard Edition - Version 12.2.0.1 and later Information in this document applies to any platform. Symptoms Attempting to register an XML schema give the following error ERROR at line 1: ORA-30937: No schema definition for 'redefine' (namespace '') in parent '/schema' ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 72 ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 33 ORA-06512: at line 2 The XML schema is using "redefine" to redefine one of the schemas <xs:redefine ... </xs:redefine>
https://support.oracle.com/knowledge/Oracle%20Database%20Products/2417618_1.html
CC-MAIN-2019-35
en
refinedweb
The C library function void free(void *ptr) deallocates the memory previously allocated by a call to calloc, malloc, or realloc. Following is the declaration for free() function. void free(void *ptr) ptr − This is the pointer to a memory block previously allocated with malloc, calloc or realloc to be deallocated. If a null pointer is passed as argument, no action occurs. This function does not return any value. The following example shows the usage of free() function. #include <stdio.h> #include <stdlib.h> #include <string); /* Deallocate allocated memory */ free(str); return(0); } Let us compile and run the above program that will produce the following result − String = tutorialspoint, Address = 355090448 String = tutorialspoint.com, Address = 355090448
https://www.tutorialspoint.com/c_standard_library/c_function_free.htm
CC-MAIN-2019-35
en
refinedweb
Microsoft Orleans — Reusing Grains and Grain State Russ Hammett Originally published at Medium on ・1 min read Microsoft Orleans — Reusing Grains and Grain State We’ve explored Orleans for distributing application logic across a cluster. Next, we’ll be looking at grain reuse and grain state… Recap: var grain = client.GetGrain<IHelloWorld>(Guid.NewGuid()); var response = await grain.SayHello(name); Console.WriteLine($"\n\n{response}\n\n"); Returns: Grain Reuse: public class HelloWorld : Grain, IHelloWorld { public Task<string> SayHello(string name) { return Task.FromResult($"Hello World! Orleans is neato torpedo, eh {name}?"); } } To: public class HelloWorld : Grain, IHelloWorld { public Task<string> SayHello(string name) { return Task.FromResult($"Hello from grain {this.GetGrainIdentity()}, {name}!"); } }: private static async Task DoClientWork(IClusterClient client) { Console.WriteLine("Hello, what should I call you?"); var name = Console.ReadLine(); if (string.IsNullOrEmpty(name)) { name = "anon"; } // example of calling grains from the initialized client var grain = client.GetGrain<IHelloWorld>(Guid.NewGuid()); var response = await grain.SayHello(name); Console.WriteLine($"\n\n{response}\n\n"); } Should change to: private static async Task DoClientWork(IClusterClient client) { // example of calling grains from the initialized client var grain = client.GetGrain<IHelloWorld>(Guid.NewGuid()); Console.WriteLine($"{await grain.SayHello("1")}"); Console.WriteLine($"{await grain.SayHello("2")}"); Console.WriteLine($"{await grain.SayHello("3")}"); }. Stateful Grains Grains can have a notion of “state”, or instance variables pertaining to some internals of the grain, or its context. Orleans has two methods of tracking grain state: - Extend Grain<T>rather than Grain - Do it yo’self Generally, I’d prefer to not write code that’s already been written, so I’ll be sticking with the first option. Grain Persistence: - configured persistence store(s) - stateful grain(s) Grain Persistence how-to) .ConfigureLogging(logging => logging.AddConsole()); var host = builder.Build(); await host.StartAsync(); return host; }) .AddMemoryGrainStorage("OrleansStorage") .ConfigureLogging(logging => logging.AddConsole()); var host = builder.Build(); await host.StartAsync(); return host; } Note that since the above is using a Builder Pattern, you could just add: builder.AddMemoryGrainStorage("OrleansStorage"); as a separate line in between the instantiation of the builder, and the var host = builder.Build();. That’s it! A persistent grain Next, let’s slap together a grain with some state. We’ll create a grain that can track the number of times a user visits our “site” (pretend it’s a website). The first thing is to define the interface: public interface IVisitTracker : IGrainWithStringKey { Task<int> GetNumberOfVisits(); Task Visit(); } Properties of the above: - String key — since we’re using it to track visits to our site, using the account email seems to make sense as a unique key - Task GetNumberOfVisits() — this method will be used to retrieve the number of times a user has visited. - Task Visit() — this method will be invoked when a user visits the site. The grain implementation: [StorageProvider(ProviderName = Constants.OrleansMemoryProvider)] public class VisitTracker : Grain<VisitTrackerState>, IVisitTracker { public Task<int> GetNumberOfVisits() { return Task.FromResult(State.NumberOfVisits); } public async Task Visit() { var now = DateTime.Now; if (!State.FirstVisit.HasValue) { State.FirstVisit = now; } State.NumberOfVisits++; State.LastVisit = now; await WriteStateAsync(); } } public class VisitTrackerState { public DateTime? FirstVisit { get; set; } public DateTime? LastVisit { get; set; } public int NumberOfVisits { get; set; } } A few new things going on above: - Specifying a storage provider as a class level attribute — this is the storage provider we defined in the changes to the SiloHost earlier. - Extending Grain<T>instead of Grain where <T>is a state class. - Manipulation of this.State, in order to keep track of the specific instantiation “state”. - A state class VisitTrackerState Seeing Grain State in action Finally, let’s see what sort of things we can do in our Client app using our new stateful grain. private static async Task DoStatefulWork(IClusterClient client) { var kritnerGrain = client.GetGrain<IVisitTracker>("[email protected]"); var notKritnerGrain = client.GetGrain<IVisitTracker>(: - Blog post — Getting Started with Microsoft Orleans - GitHub repo as of the start of this post — v0.1 - GitHub repo at the end of this post — v0.11 - Orleans Documentation — Grain Persistence How to Improve Writing Skills as a Non-Native Speaker Let me begin by saying, that writing a new post either on technical or soft skill topics is always a challenge for me. Why? Because I am not a native English speaker.
https://dev.to/kritner/microsoft-orleansreusing-grains-and-grain-state-339b
CC-MAIN-2019-35
en
refinedweb
We’re trying to compare the state information is the XML saved in session data with the current state of our plugin, for AAX’s Compare ability. I’m using this function to compare, but for some reason it doesn’t compare correctly. At first, it returns true (equal), but after changing one parameter and changing it back, it returns false. if (pParameterTreeXml->hasTagName(parameters.state.getType())) { ValueTree givenState = ValueTree::fromXml(*pParameterTreeXml); return (givenState.isEquivalentTo(parameters.state)) Also, it might be nice for some parameters to have more specific control over the comparison (such as a "slacK’ for how close a parameter needs to be, or the ability to ignore some parameters). What I’m wondering is how I can compare, with more control, my value tree state with the state returned from ValueTree::fromXml?
https://forum.juce.com/t/how-to-compare-xml-from-session-data-with-current-state/33820
CC-MAIN-2019-35
en
refinedweb
Hi, Yes, it can be done like this, but like I said, I don't view the Lua script part as a separate module (or the DLL if you reverse the roles of DLL and script). The script is really private to the DLL and both communicate through private functions and tables that are unreachable from outside of the module. I had a similar problem in previous versions of LuaSocket. The implementation of socket.select was messy in the sense that it was composed of both C and Lua code, with Lua parts that used C parts and vice-versa. I don't know exactly what is your scenario, but I now tend to avoid this double-dependency. Having C code depend on Lua code tends to complicate the setup, although it might simplify the implementation itself.Why worry about someone being able to load an internal module? If someone does that, it should be their problem, shouldn't it? Can't you simply not document it? This should make it obvious that a module is internal. I know this is not a solution to your problem, it's just sort of denying it exists. Sometimes this is the best solution, though. :) It is not worth it to worry about paths. I would suggest you simply use the engine provided by require itself.I wouldn't have to worry if require simply passed the path as an argument. :-) Look at it as a win-win situation since both your approach and mine could be implemented easily. Moreover, the changes it would take to loader_C (or require) are minimal. This is somewhat assymetric. One of the ideas of the packaging system, I think, is that you can place the same function in the package.preload table and everything should work fine. In that case, you would be in bad shape. All you have to do is return _M in the end of your module definition (as I added above). In C, you only have to return 1 from your loader, because luaL_module already leaves the namespace table on the stack.Don't you agree that it would be much nicer if we only need to use "module" and not rely on return values? Then it is simple: "module" determines the package table, period. If a) modules are not placed in the globals automatically and b) return values of packages are ignored, there'd be no way around using "module" to properly setup a package. I am not sure. The current scheme makes it work in 99% of the cases with no setup. With one extra line for the other 1% you can get it to work. Is this too bad? In fact, inverting the order, in the case you choose to implement your modules by simply filling up a table and returning it, would still not work.But that is exactly what I *don't* want to do. I think the "module" approach is nice and easy. (If only it would would bind to the required name instead of the module name. Again, the changes to "module" and "require" to support this binding are minimal.) Here's another example of possibly weird behaviour. Suppose we implement two packages "a" and "b" both adding to module "a", but without "b" requiring "a". Then local b = require "b" local a = require "a" doesn't load package "a", while local a = require "a" local b = require "b" does. With a modified module function, both would work as expected. Or should we compose a list of additional coding rules like - When in doubt, end your package with "return _M" - Before doing "module(name)" where <name> is not the package name, you should also do "require(name)" But here you are focusing on a cases that are not commonly needed, are somewhat pathological, and for which there are cookbook solutions. 1) If two modules are recursive, invoke module() on yourself before requiring() your mate. This is just like a forward declaration. 2) If you are invoking module with a name other than your own, BEWARE. This means you probably are extending someone else. In that case, require() whoever it is that you are extending (it only makes sense). Also, because you are not exporting where you should, make sure you return your namespace, i.e., the one you are borrowing from. Alternatively, store your namespace into the loaded table where it belongs. Finally, you might consider placing your extension module as a submodule of whoever it is that you are extending, just to make things obvious, regardless of whether you export symbols to your own namespace or to that of your master. You try very hard! ;-) I know. :) There has to be some inertia, otherwise this thing never converges... But if there is a solution to a problem that is simpler than the work around, I am all for it. Seriously though, I think my small wish list does not conflict with the spirit of the require proposal (and would take only minimal changes): 1) "module" binds to the "require"-ed name, not to the module name. And how is "module" supposed to know what was the required name? Maybe some magic environment trick... 3) Drop the support for package return values, they're just confusing I don't like this idea. One might desire to write a package whose only exported symbol is a function. Returning that function works and is useful. []s, Diego.
http://lua-users.org/lists/lua-l/2005-08/msg00469.html
CC-MAIN-2019-35
en
refinedweb
(X,Y) coordinate using python openCV Dear all, It is my first time to work on openCV; in fact, I just heard of it two months ago.I need to get the position point value using (x,y) coordinate system so I can program motors to pick the dices. My question, can you please help to find the location of each dice and return (x,y) values for each one of them? I attached an example of what I really want. Thank you. what have you tried, so far ? (we won't write your program) Just a suggestion - If you always have an image of the scene you could simply take the image coordinate system in openCV: Top-left corner is 0,0; x-axis is horizontal, y-axis is vertical. I think one way is to look for white color (thresholding or masks), draw contours on the new image (only the dice should be visible) and take the center of the contours.
https://answers.opencv.org/question/199275/xy-coordinate-using-python-opencv/
CC-MAIN-2019-35
en
refinedweb
Java Interview QuestionsNareshIT_Hr What is Java? - Java is a distributed application software - Java is the high-level, object-oriented programming - Java is API Document. What are differences between C, C++ and Java? Why java or What are the features in JAVA? Features of Java: - Oops concepts - Class - object - Inheritance - Encapsulation - Polymorphism - Abstraction - Platform independent: Java is System independent as well as Platform independent because it works with diff System hw as well as Diff Platforms(diff operating Systems) - High Performance: JIT (Just In Time compiler) enables high performance in Java. JIT converts the bytecode into machine language and then JVM starts the - Multi-threaded: A flow of execution is known as a Thread. JVM creates a thread which is called main thread. The user can create multiple threads by extending the thread class or by implementing Runnable - Simple: Java having simple Syntax rules. easy to learn. Complicated things like pointers and multiple inheritance is not - Portable: Java supports write-once-run-anywhere approach. We can execute the Java program on every machine. Java program (.java) is converted to bytecode (.class) which can be easily run on every machine. - Secured: Java is secured because it doesn’t use explicit Java also provides the concept of ByteCode and Exception handling which makes it more secured. - Robust: Java is a strong programming language as it uses strong memory management. The concepts like Automatic garbage collection, Exception handling, etc. make it more - - Distributed: Java is distributed because it facilitates users to create distributed applications in Java. RMI and EJB are used for creating distributed applications. This feature of Java makes us able to access files by calling the methods from any machine on the - Dynamic: Java is a dynamic language. It supports dynamic loading of classes. It means classes are loaded on demand. It also supports functions from its native languages, i.e., C and C++. What is difference between compiler and interpreter ? Why java not support pointer? A pointer is a variable which can hold the address of another variable or object. But, Java does not support pointer due to security reason, because if you get the address of any variable you could access it anywhere from the program without any restriction even variable is private. Update Your Skills form Our Experts: Core Java Online Training Java comments Interview Questions What is java comments ? The java comments are statements that are not executed by the compiler and interpreter. The comments can be used to provide information or explanation about the variable, method, class or any statement. It can also be used to hide program code for specific time. Comments are not consider as java code used build communication between programmers and end users? What are Types of Java Comments? There are 3 types of comments in java. 1. Single Line Comment The single line comment is used to comment only one line. EX: // this is Our application 2. Multi Line Comment The multi line comment is used to comment several lines of code. EX: /* This is multiline comment User for give big description purpose Give n no line etc */ 3. Documentation Comment /** This is doc comment User for give big description purpose Give n no line etc */ Difference Between Multiline and Document Type Comments in java? Multiline Comments are used build communication between programmers Document Type Comments are used build communication between programmer and end users. Javadoc tool generates html description for Document Type Comments Java Package Interview Questions What is a package? Package is a collection class And Interfaces those are collection of methods they can perform some action. What is the difference between #include and import? #include is used in C or C++ programing witch is used to go to standard library and copy the entire header file code in to a C/C++ programs. So the program size increases unnecessarily wasting memory & processor time. Import statement used in java programming uses to pass Ref for particular package .package is saved once used for N no of times. Witch avoids memory wastage problems. What Is The Super Class Of All Classes? Java.lang.Object is a Super class for classes What are the advantages of java package? - Packages hide classes & interfaces. Thus they provide protection for them. - The classes of one Package are isolated from the classes of another Package. So it is possible to use same names for the classes into different packages. - Using package concept we can create our own Packages & also we can extend already available Packages. - Packages provide re usability of code. - - Package removes naming Can we import same package/class twice? Will the JVM load the package twice at runtime? Yes even though programmer whitens same package class twice in the program Jvm loads it once only Can u Explain Update Your Skills form Our Experts: Core Java Online Training Which package is always imported by default? By default java.lang package imported How many ways we can use packages classes Method-I By passing complete address: Java.util.Scanner sc=new java.util.Scanner(System.in); Method-II: By using import stmt: import java.util.*; Scanner sc=new Scanner(System.in); Java Main Method Interview Questions Can we execute java program without main method? No, you can’t run java class without main method. Before Java 7, you can run java class by using static initializers. But, from Java 7 it is not possible. Can we change return type of main() method? No, the return type of main() method must be void only. Any other type is not acceptable. Can we declare main() method as private or protected or with no access modifier? - No, main() method must be public. You can’t define main() method as private or protected or with no access - This is because to make the main() method accessible to JVM. If you define main() method other than public, compilation will be successful but you will get run time error as no main method Can We Overload main() method? Can java class Suports more Than One Main Method Yes, We can overload main() method. A Java class can have any number of main() methods. But to run the java class, class should have main() - method with signature as “public static void main(String[] args)”. If you do any modification to this signature, compilation will be successful. - But, you can’t run the java program. You will get run time error as main method not Can main() method take an argument other than string array? No, argument of main() method must be string array. - But, from the introduction of var args you can pass var args of string type as an argument to main() method. Again, var args are nothing but the arrays. Java Data type Interview Questions What are the primitive data types in Java ? There are eight primitive data types. - byte - short - int - long - float - double - boolean - char What is the default value of variables in Java? What is uni code? Uni code is a standard to include the alphabet from all human languages in java. Uni code system uses 2 bytes to represent a character What is the default value of local variables in Java? - There is no default value for local variables What is Widening? - Is process of conversion lower data types into higher data types What is Narrowing ? - Is process of conversion Higher data types into Lower data types What is ASCII? American Standard Code for Information Interchange. It is a standard numarical value asigened to every key in keybord.its range is 0 to 255. EX: A value=65 What are the Types of Variables in JAVA ? - Java has 3 kinds of variables. They are Local variables, Instance variables (fields) and. What is this key word in java? “this”is a predefined instance variable to hold current object reference Update Your Skills form Our Experts: Core Java Online Training Can we use this in static methods? No we cannot use this in static methods. if we try to use compile time error will come :Cannot use this in a static context What are all the differences between this and super keyword? - This refers to current class object where as super refers to super class object - Using this we can access all non static methods and variables. Using super we can access super class variable and methods from sub class. - Using this(); call we can call other constructor in same class. Using super we can call super class constructor from sub class constructor. Is it possible to use this in static blocks? - No its not possible to use this keyword in static block. Can we use this to refer static members? - Yes its possible to access static variable of a class using this but its discouraged and as per best practices this should be used on non static reference Can we pass this as parameter of method? - Yes we can pass this as parameter in a method Can we return this from a method? Yes We can return this as current class object. public class B{ int a; public int getA() { return a; } public void setA(int a) { this.a = a; } B show(){ return this; } public static void main(String[] args) { B obj = new B(); obj.setA(10); System.out.println(obj.getA()); B obj2= obj.show(); System.out.println(obj2.getA()); } } Can we call method on this keyword from constructor? - Yes we can call non static methods from constructor using this keyword. What is the use of final keyword in java? - By. What is the main difference between abstract method and final method? - Abstract methods must be overridden in sub class where as final methods can not be overridden in sub class What is the actual use of final class in java? - If a class needs some security and it should not participate in inheritance in this scenario we need to use final class. - We can not extend final class. Can we declare interface as final? - No We can not declare interface as final because interface should be implemented by some class so its not possible to declare interface as final. Is it possible to declare final variables without initialization? - No. Its not possible to declare a final variable without initial value assigned. - While declaring itself we need to initialize some value and that value can not be change at any time. Can we declare constructor as final? - No . Constructors can not be final. What will happen if we try to override final methods in sub classes? - Compile time error will come :Cannot override the final method from Super Class Update Your Skills form Our Experts: Core Java Online Training Can we create object for final class? - Yes we can create object for final class What is java static import? - In Java, static import concept is introduced in 1.5 version access the static members of a class directly without class name or any object - For example, System class & Math class has static method: import static java.lang.System.*; import static java.lang.Math.*; public class MyStaticImportTest { public static void main(String[] a) { System.out.println(sqrt(625)); } } } What is Drawback of Static Import in Java? - If two static members of the same name are imported from multiple different classes, the compiler will throw an error, as it will not be able to determine which member to use in the absence of class name qualification. EX: import static java.lang.System.*; import static java.lang.Integer.*; import static java.lang.Byte.*; class Demo { public static void main(String[] args) { out.println(MAX_VALUE); } } Output: Error:Reference to MAX_VALUE is ambigious What is the Difference between import and static import?. What all memory areas are allocated by JVM? - Classloader, Class area, Heap, Stack, Program Counter Register and Native Method Stack. What is class? - A class is a blueprint or template or prototype from which you can create the object of that class. A class has set of properties and methods that are common to its objects. What is a wrapper class in Java? - A wrapper class converts the primitive data type such as int, byte, char, boolean etc. to the objects of their respective classes such as Integer, Byte, Character, Boolean etc. What is a path and classPath in Java? - Path specifies the location of .exe files. Classpath specifies the location of bytecode (.class files). Aslo Read: Core Java Interveiw Questions and Answers
https://nareshit.com/java-interview-questions/
CC-MAIN-2019-43
en
refinedweb
Coroutine This function is asynchronous and will complete at some point in the future, when the coroutine has finished communicating with the service backend. This function is used to tell MatchMaker to destroy a match in progress, regardless of who is connected. This function is not part of the normal MatchMaker flow and is there to allow termination of a match immediatly. For normal flow, each client disconnecting should call NetworkMatch.DropConnection with their own information; Once the last client leaves a match, the match will be immediately cleaned up. This function is protected by the authentication token given to the client when it creates the match. Only a host (which is automatically granted admin rights) is allowed to call NetworkMatch.DestroyMatch. Anyone else will be denied access. using UnityEngine; using UnityEngine.Networking; using UnityEngine.Networking.Match; using UnityEngine.Networking.Types; public class ExampleScript : MonoBehaviour { public NetworkID netId; void Start() { NetworkManager.singleton.StartMatchMaker(); NetworkManager.singleton.matchMaker.DestroyMatch(netId, 0, OnMatchDestroy); } public void OnMatchDestroy(bool success, string extendedInfo) { // ... } }
https://docs.unity3d.com/kr/2018.2/ScriptReference/Networking.Match.NetworkMatch.DestroyMatch.html
CC-MAIN-2019-43
en
refinedweb
- NAME - SYNOPSIS - DESCRIPTION - METHODS - new - $self->preload_dispatch_types - $self->postload_dispatch_types - $self->dispatch($c) - $self->visit( $c, $command [, \@arguments ] ) - $self->go( $c, $command [, \@arguments ] ) - $self->forward( $c, $command [, \@arguments ] ) - $self->detach( $c, $command [, \@arguments ] ) - $self->prepare_action($c) - $self->get_action( $action_name, $namespace ) - $self->get_action_by_path( $path ); - $self->get_actions( $c, $action, $namespace ) - $self->get_containers( $namespace ) - $self->uri_for_action($action, \@captures) - expand_action - $self->register( $c, $action ) - $self->setup_actions( $class, $context ) - $self->dispatch_type( $type ) - meta - AUTHORS->dispatch($c) Delegate the dispatch to the action that matched the url, or return a message about unknown resource $self->visit( $c, $command [, \@arguments ] ) $self->go( $c, $command [, \@arguments ] ) $self->forward( $c, $command [, \@arguments ] ) $self->detach( $c, $command [, \@arguments ] ) $self->prepare_action($c) Find an dispatch type that matches $c->req->path, and set args from it. $self->get_action( $action_name, $namespace ) returns a named action from a given namespace. $action_name may be a relative path on that $namespace such as $self->get_action('../bar', 'foo/baz'); In which case we look for the action at 'foo/bar'. $self->get_action_by_path( $path ); Returns the named action by its full private path. This method performs some normalization on $path so that if it includes '..' it will do the right thing (for example if $path is '/foo/../bar' that is normalized to '/bar'. _action expand an action into a full representation of the dispatch. mostly useful for chained, other actions will just return a single action. $self->register( $c, $action ) Make sure all required dispatch types for this action are loaded, then pass the action to our dispatch types so they can register it if required. Also, set up the tree with the action containers. $self->setup_actions( $class, $context ) Loads all of the pre-load dispatch types, registers their actions and then loads all of the post-load dispatch types, and iterates over the tree of actions, displaying the debug information if appropriate. $self->dispatch_type( $type ) Get the DispatchType object of the relevant type, i.e. passing $type of Chained would return a Catalyst::DispatchType::Chained object (assuming of course it's being used.) meta Provided by Moose AUTHORS Catalyst Contributors, see Catalyst.pm This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
https://metacpan.org/pod/Catalyst::Dispatcher
CC-MAIN-2019-43
en
refinedweb
import "golang.org/x/crypto/nacl/secretbox" Package secretbox encrypts and authenticates small messages. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages with secret-key cryptography. The length of messages is not hidden. It is the caller's responsibility to ensure the uniqueness of nonces—for example, by using nonce 1 for the first message, nonce 2 for the second message, etc. Nonces are long enough that randomly generated nonces have negligible risk of collision. Messages should be small because: 1. The whole message needs to be held in memory to be processed. 2. Using large messages pressures implementations on small machines to decrypt and process plaintext before authenticating it. This is very dangerous, and this API does not allow it, but a protocol that uses excessive message sizes might present some implementations with no other choice. 3. Fixed overheads will be sufficiently amortised by messages as small as 8KB. 4. Performance may be improved by working with messages that fit into data caches. Thus large amounts of data should be chunked so that each message is small. (Each message still needs a unique nonce.) If in doubt, 16KB is a reasonable chunk size. This package is interoperable with NaCl:. Code: // Load your secret key from a safe place and reuse it across multiple // Seal calls. (Obviously don't use this example key for anything // real.) If you want to convert a passphrase to a key, use a suitable // package like bcrypt or scrypt. secretKeyBytes, err := hex.DecodeString("6368616e676520746869732070617373776f726420746f206120736563726574") if err != nil { panic(err) } var secretKey [32]byte copy(secretKey[:], secretKeyBytes) // You must use a different nonce for each message you encrypt with the // same key. Since the nonce here is 192 bits long, a random value // provides a sufficiently small probability of repeats. var nonce [24]byte if _, err := io.ReadFull(rand.Reader, nonce[:]); err != nil { panic(err) } // This encrypts "hello world" and appends the result to the nonce. encrypted := secretbox.Seal(nonce[:], []byte("hello world"), &nonce, &secretKey) // When you decrypt, you must use the same nonce and key you used to // encrypt the message. One way to achieve this is to store the nonce // alongside the encrypted message. Above, we stored the nonce in the first // 24 bytes of the encrypted text. var decryptNonce [24]byte copy(decryptNonce[:], encrypted[:24]) decrypted, ok := secretbox.Open(nil, encrypted[24:], &decryptNonce, &secretKey) if !ok { panic("decryption error") } fmt.Println(string(decrypted)) Output: hello world Overhead is the number of bytes of overhead when boxing a message. Open authenticates and decrypts a box produced by Seal and appends the message to out, which must not overlap box. The output will be Overhead bytes smaller than box. Seal appends an encrypted and authenticated copy of message to out, which must not overlap message. The key and nonce pair must be unique for each distinct message and the output will be Overhead bytes longer than message. Package secretbox imports 3 packages (graph) and is imported by 619 packages. Updated 2019-10-12. Refresh now. Tools for package owners.
https://godoc.org/golang.org/x/crypto/nacl/secretbox
CC-MAIN-2019-43
en
refinedweb
No project description provided Project description Motivation Zounds is a python library for working with sound. Its primary goals are to: - layer semantically meaningful audio manipulations on top of numpy arrays - help to organize the definition and persistence of audio processing pipelines and machine learning experiments with sound Audio processing graphs and machine learning pipelines are defined using featureflow. A Quick Example import zounds Resampled = zounds.resampled(resample_to=zounds.SR11025()) @zounds.simple_in_memory_settings class Sound(Resampled): """ A simple pipeline that computes a perceptually weighted modified discrete cosine transform, and "persists" feature data in an in-memory store. """ windowed = zounds.ArrayWithUnitsFeature( zounds.SlidingWindow, needs=Resampled.resampled, wscheme=zounds.HalfLapped(), wfunc=zounds.OggVorbisWindowingFunc(), store=True) mdct = zounds.ArrayWithUnitsFeature( zounds.MDCT, needs=windowed) weighted = zounds.ArrayWithUnitsFeature( lambda x: x * zounds.AWeighting(), needs=mdct) if __name__ == '__main__': # produce some audio to test our pipeline, and encode it as FLAC synth = zounds.SineSynthesizer(zounds.SR44100()) samples = synth.synthesize(zounds.Seconds(5), [220., 440., 880.]) encoded = samples.encode(fmt='FLAC') # process the audio, and fetch features from our in-memory store _id = Sound.process(meta=encoded) sound = Sound(_id) # grab all the frequency information, for a subset of the duration start = zounds.Milliseconds(500) end = start + zounds.Seconds(2) snippet = sound.weighted[start: end, :] # grab a subset of frequency information for the duration of the sound freq_band = slice(zounds.Hertz(400), zounds.Hertz(500)) a440 = sound.mdct[:, freq_band] # produce a new set of coefficients where only the 440hz sine wave is # present filtered = sound.mdct.zeros_like() filtered[:, freq_band] = a440 # apply a geometric scale, which more closely matches human pitch # perception, and apply it to the linear frequency axis scale = zounds.GeometricScale(50, 4000, 0.05, 100) log_coeffs = scale.apply(sound.mdct, zounds.HanningWindowingFunc()) # reconstruct audio from the MDCT coefficients mdct_synth = zounds.MDCTSynthesizer() reconstructed = mdct_synth.synthesize(sound.mdct) filtered_reconstruction = mdct_synth.synthesize(filtered) # start an in-browser REPL that will allow you to listen to and visualize # the variables defined above (and any new ones you create in the session) app = zounds.ZoundsApp( model=Sound, audio_feature=Sound.ogg, visualization_feature=Sound.weighted, globals=globals(), locals=locals()) app.start(9999) Find more inspiration in the examples folder, or on the blog. Installation Libsndfile Issues Installation currently requires you to build lbiflac and libsndfile from source, because of an outstanding issue that will be corrected when the apt package is updated to libsndfile 1.0.26. Download and run this script to handle this step. Zounds Finally, just: pip install zounds Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/zounds/
CC-MAIN-2019-43
en
refinedweb
Please follow these coding standards when writing code for inclusion in Django. Please conform to the indentation style dictated in the .editorconfig file. file. Avoid use of “we” in comments, e.g. “Loop over” rather than “We loop over”. Use underscores, not camelCase, for variable, function and method names (i.e. poll.get_unique_voters(), not poll.getUniqueVoters()). Use InitialCaps for. In test docstrings, state the expected behavior that each test demonstrates. Don’t include preambles such as “Tests that” or “Ensures that”. Reserve ticket references for obscure issues where the ticket has additional details that can’t be easily described in docstrings or comments. Include the ticket number at the end of a sentence like this: def test_foo(): """ A test docstring looks like this (#123456). """ ... Use isort to automate import sorting using the guidelines below. Quick start: $ python -m pip install isort $ isort -rc . This runs isort rec module statements before from module import objects in each section. Use absolute imports for other Django components and relative imports for local components. On each line, alphabetize the items with the upper case items grouped before the lowercase yaml except ImportError: yaml = None CONSTANT = 'foo' class Example: # ... Use convenience imports whenever available. For example, do this: from django.views import View instead of: from django.views.generic.base import View In Django template code, put one (and only one) space between the curly brackets and the tag contents. Do this: {{ foo }} Don’t do this: {{foo}} In Django views, the first parameter in a view function should be called request. Do this: def my_view(request, foo): # ... Don’t do this: def my_view(req, foo): # ... Meta should) The order of model inner classes and standard methods should be as follows (noting that these are not all required): class Meta def __str__() def save() def get_absolute_url() If choices is defined for a given model field, define each choice as a list of tuples, with an all-uppercase name as a class attribute on the model. Example: class MyModel(models.Model): DIRECTION_UP = 'U' DIRECTION_DOWN = 'D' DIRECTION_CHOICES = [ (DIRECTION_UP, 'Up'), (DIRECTION_DOWN, 'Down'), ]. importstatements that are no longer used when you change code. flake8 will identify these imports for you. If an unused import needs to remain for backwards-compatibility, mark the end of with # NOQAto silence the flake8 warning. AUTHORSfile distributed with Django – not scattered throughout the codebase itself. Feel free to include a change to the AUTHORSfile in your patch if you make more than a single trivial change. For details about the JavaScript code style used by Django, see JavaScript.
https://django.readthedocs.io/en/latest/internals/contributing/writing-code/coding-style.html
CC-MAIN-2019-43
en
refinedweb
Hi, i have massive problems with R# 4.0 and VS 2008 team edition. I'm working on a client/server project with windows forms. The solution is not so complex. Just 16 projects with around 20.000 lines of code. But the solution is part of a bigger application and references lots of assemblies. Nearly each time i load the solution into VS i will receive the error "Try to write protected memory.". Mostly short after the error occurs VS crashes. The rare times i can work with the solution i notice a horrible bad performance. For popup menus in the editor i have to wait several seconds. Sometimes the whole IDE freezes for several minutes. After working a while the compiler hangs infiiiinitely. If i look at the process explorer at this time VS didn't consume any cpu time. The memory footprint is around 750 MB to 1 GB. As i have 3 GB on the machine this should be no problem. The only thing i could do was too deinstall R# (and this was very hard for me, because i love your tool). Loading the same solution into VS 2008 Professional i can work without any problems. For me it seems that the Team Explorer and R# didn't like another. Another reason may be that we use the Infragistics Net Advantage 2007.3 suite because i don't have this installed on my machine with VS 2008 Professional. Regards Klaus Hi, I use ReSharper 4.0 with Visual Studio Team Suite with no such problems at all. -- John "Luedi" <[email protected]> wrote in message news:[email protected]... > > > John Saunders wrote: if you don't have any problem mines may be caused either by the solution structure or the Visual Studio versions. Which version of the team foundation server do you use? Do you use the Infragistic NetAdvantage control suite? I had the same problems with the pre-4.0 EAP-Builds they where not so annoying as in the official release build. Regards Klaus Same for me - I use VS Team Edition for Software Testers against TFS 2005. No problems so far! I use VS2008 (SP1 Beta1) Team Suite (the whole thing) against a TFS 2008 server. No problems with either version 3.x or 4. -- John "Luedi" <[email protected]> wrote in message news:[email protected]... >> I use ReSharper 4.0 with Visual Studio Team Suite with no such problems >> at all. >> > > > Hi, I have had identical problems with R# 3 on VS 2005 Team Edition (at a previous customer's site). VS crashed several times (> 10) a day starting from the day we migrated from SourceSafe to TFS. We never managed to find the reason for this. Some of the developers gave up on R# - the rest of us just learned to press Ctrl-Shift-S really often :) /Jesper Guys, This is my vs configuration and it runs without a hitch w/ large projects ~ 250k LOC , except for the missing functionality on vista (Header Text Region) Microsoft Visual Studio 2008 Version 9.0.21022.8 RTM Microsoft .NET Framework Version 3.5 Installed Edition: Enterprise Microsoft Visual Basic 2008 Microsoft Visual C# 2008 Microsoft Visual C++ 2008 Microsoft Visual Studio 2008 Tools for Office Microsoft Visual Studio Team System 2008 Architecture Edition Microsoft Visual Studio Team System 2008 Development Edition Crystal Reports Basic for Visual Studio 2008 Microsoft Visual Web Developer 2008 Gallio integration for Visual Studio Team System ReSharper 4 Full Edition build 4.0.819.19 on 2008-06-09T22:00:24 Sparx Systems UML Integration Version 3.5.2 Application Styling Configuration Dialog RemObjects Everwood for .NET 2.0.1.101 as you can see infragistics have no impact on resharper HTH BTW 3GB Memory processor 6320 i know , i know kinda slow but it works as most of you don't have any trouble with VS 2008 Team edition, maybe my solution structure is the problem. I'm working for a customer who develops a quite big standard application for health care. In the beginning of this year they had trouble with the solution file because they can't compile anymore in areasonable time. So they had split the old solution into several smaller ones. Each solution references all necessary assemblies. The complete application is build via the build server of VSTS. Each solution consists of around 10 to 20 projects and there are around 25 solutions. On average each project in a solution references around 25 assemblies. So the whole structure is quite complex. What about Antivirus software? Did anyone have trouble because of his virus scanner?. On my company laptop i have installed McAffee and i notice a significant slowdown if the on access scan enginge works on many files (e.g. while copying). The customer i work for at the moment uses the Trend Micro suite. Regards Klaus @Jetbrains: I'm willing to aid the developers to examine the problem. Following errors occurs: Each time i load the solution the error message "Try to read or write protected memory. The reason for this may be corrupted memory". Nearly each time when i want to open the Team-Explorer Tab VS crashes. The IDE is very slow and sometimes freezes infinetly (one time i go to lunch for an hour or so and the IDE was still frozen). In this case i notice a permanent CPU-time of around 2% to 4% by VS in the process explorer (also the Garbage collector is running sometimes). I'm working on a HP Laptop (9150 Worksation) with 3 GB Ram on Windows XP SP2. Visual Studio is the original version without any patches and service pack. No other Addins are installed beneath R#. Klaus, the solution I work with the most in VSTS is about 40 projects, including over 700 unit tests. The worst that happens is that VS grows to about 800MB virtual. It takes a while to exit VS sometimes, and if I've had it open for a day or more, it sometimes crashes before it finishes exiting. I do suspect ReSharper to be involved, but this is so much better than in the past! -- John "Klaus Luedenscheidt" <[email protected]> wrote in message news:[email protected]... > > > > > In my personal experience, poorly-configured virus scanners on development workstations are an absolute productivity killer. There is so much disk access during the development process, with tonnes of little reads and writes to discs, even more when tools like R# are in use, when frequently building to run tests, etc. McAfee is particularly problematic in this area, but by far not the worst. If at all possible, get IT to exclude some development-related file extensions from the on-demand scanning configuration. If they can't or won't do that, try to get IT to provide a separate development workstation that has basically nothing other than dev tools and access to source control and issue tracking (and no virus scanner.) If they can't or won't do that, you'll probably have to do what I do: use Process Explorer (or tool of choice) to forcefully terminate all virus scanner processes while doing heavy development work, refrain from using email or all but the safest web sites, etc. while working, and then re-enable the scanner afterward. Doing development work as a regular user (instead of an admin), using FF instead of IE, etc. helps reduce the risk during that time as well. 800MB is not that bad. Over the years at various companies I have worked daily with devenv regularly getting up to as much as 1.5GB with R# 3.x. That said, have you tried using the (memory allocation strategy) wrappers that have been linked to on this forum a few times? They can help keep things running more smoothly as memory creeps up. Additionally, it is worth noting that the managed code running in devenv.exe can't go past 800MB unless you run on 64bit or reconfigure your devenv.exe to be LARGEADDRESSAWARE and then flip the /3GB switch on a machine with at least 3GB of memory. userva tuning might be in order as well if you run into any problems with /3GB but usually it isn't a problem. (And, as an aside to the usual flurry of people who will sweep in and make all sorts of generalized freak-out statements about LARGEADDRESSAWARE, /3GB, and /userva: Making these configuration changes has literally saved my bacon when working on various projects over the years, and have worked great on a variety of systems. There are times when you can't go 64bit. There are times when you can't break up a huge solution. Go suck rocks.) As Visual Studio SP1 was released yesterday and R# 4.0.1 RC1 today i installed both to see if my problems may be solved. For the first try i got the same problems as before with the difference that the "read or write protected memory" error is now reported by R#. But after playing a little bit around i found a workaround for my problem: - Start VS with empty environment - disable R# in the Addins-Mene - go to the Team Explorer Tab and let hi connect to the server - load the solution - enable R# again and R# rocks :)) Regards Klaus Hello, As far as I know, — LARGEADDRESSAWARE is required on 64bit too, otherwise the process still won't get any mem above 2GB (large-address-unaware processes would store ownership info in the higher bit of the pointer, so they couldn't be allowed above 2GB on any platform). — devenv.exe already has the LARGEADDRESSAWARE flag (as it appeared on my installation for VS 9.0). — another choice is running 32-bit Vista, as it's doing some better job of memory allocation, just like the wrappers mentioned above. Pity devenv+R# pushes the memory limit that much. We've already started fighting the mem usage down for 4.5 :) — Serge Baltic JetBrains, Inc — “Develop with pleasure!” Hello, These are and, right? I've reviewed the stack traces of similar exceptions in our tracker. I'm afraid there isn't any useful info to help us fix the issue. Probalby the unmanaged (or, rather, mixed-mode) stack traces of the failure could have more data in them. They could be captured by debugging the Visual Studio instance that is about to crash with a mixed-mode debugger from another Visual Studio instance, with "break on exceptions" on for unmanaged exceptions. DLL symbols should be present for native DLLs, otherwise the stack trace will be not so informative at all. — Serge Baltic JetBrains, Inc — “Develop with pleasure!” All true. I have to admit that I didn't realize that VS 2008 came with it flagged out of the box. Hi Serge, in my company we are having very similar issues. And we're pretty certain that the cause is Team Explorer with R# not liking each other (you don't get the crashes when you don't use any of the two). I can somehow (and only sometimes) make the VS crash and with debugger attached I get (among others) this exception: Unhandled exception at 0x77572dd8 (ole32.dll) in devenv.exe: 0xC0000005: Access violation reading location 0x0000000f. With following stack trace: ole32.dll!CoWaitForMultipleHandles() + 0x1bc97 bytes ole32.dll!CoWaitForMultipleHandles() + 0x1e0d bytes ole32.dll!ProgIDFromCLSID() + 0x39c bytes ole32.dll!DcomChannelSetHResult() + 0x590 bytes ole32.dll!77600e3b() ole32.dll!DcomChannelSetHResult() + 0x5f4 bytes ole32.dll!DcomChannelSetHResult() + 0x42a bytes user32.dll!GetDC() + 0x6d bytes user32.dll!GetDC() + 0x14f bytes user32.dll!GetWindowLongW() + 0x127 bytes user32.dll!DispatchMessageW() + 0xf bytes msenv.dll!DllMain() + 0x4ce74 bytes msenv.dll!VStudioMain() + 0x44d9 bytes msenv.dll!VStudioMain() + 0x4469 bytes msenv.dll!VStudioMain() + 0x4405 bytes msenv.dll!VStudioMain() + 0x43d4 bytes msenv.dll!VStudioMain() + 0x496a bytes msenv.dll!VStudioMain() + 0x7d bytes devenv.exe!3000aabc() devenv.exe!300078f2() msvcr90.dll!_msize(void * pblock=0x00000002) Line 88 + 0xe bytes C msvcr90.dll!_onexit_nolock(int (void)* func=0x0072006f) Line 157 + 0x6 bytes C This will keep on throwing exceptions until process runs out of stack... If you could provide me with help where / how to get additional debug symbols I can try reproducing the error and get more info out of that. Thank you for any help. Jarda Hello Jaroslav, Thanks for the stack! However, it looks like memory and/or system tables are already corrupt, which is most likely the result of previous not-so-fatal failure. Also, could you please check that you don't have AppInit_DLLs registry key? See for details. Sincerely, Ilya Ryzhenkov JetBrains, Inc "Develop with pleasure!" JM> Hi Serge, JM> in my company we are having very similar issues. And we're pretty JM> certain that the cause is Team Explorer with R# not liking each JM> other (you don't get the crashes when you don't use any of the two). JM> I can somehow (and only sometimes) make the VS crash and with JM> debugger attached I get (among others) this exception: JM> JM> Unhandled exception at 0x77572dd8 (ole32.dll) in devenv.exe: JM> 0xC0000005: Access violation reading location 0x0000000f. JM> JM> With following stack trace: JM> ole32.dll!CoWaitForMultipleHandles() + 0x1bc97 bytes JM> [Frames below may be incorrect and/or missing, no symbols loaded JM> for ole32.dll] JM> ole32.dll!CoWaitForMultipleHandles() + 0x1e0d bytes JM> ole32.dll!ProgIDFromCLSID() + 0x39c bytes JM> ole32.dll!DcomChannelSetHResult() + 0x590 bytes JM> ole32.dll!77600e3b() JM> ole32.dll!DcomChannelSetHResult() + 0x5f4 bytes JM> ole32.dll!DcomChannelSetHResult() + 0x42a bytes JM> user32.dll!GetDC() + 0x6d bytes JM> user32.dll!GetDC() + 0x14f bytes JM> user32.dll!GetWindowLongW() + 0x127 bytes JM> user32.dll!DispatchMessageW() + 0xf bytes JM> msenv.dll!DllMain() + 0x4ce74 bytes JM> msenv.dll!VStudioMain() + 0x44d9 bytes JM> msenv.dll!VStudioMain() + 0x4469 bytes JM> msenv.dll!VStudioMain() + 0x4405 bytes JM> msenv.dll!VStudioMain() + 0x43d4 bytes JM> msenv.dll!VStudioMain() + 0x496a bytes JM> msenv.dll!VStudioMain() + 0x7d bytes JM> devenv.exe!3000aabc() JM> devenv.exe!300078f2() JM> msvcr90.dll!_msize(void * pblock=0x00000002) Line 88 + 0xe bytes C JM> msvcr90.dll!_onexit_nolock(int (void)* func=0x0072006f) Line 157 JM> + 0x6 bytes C JM> This will keep on throwing exceptions until process runs out of JM> stack... JM> JM> If you could provide me with help where / how to get additional JM> debug symbols I can try reproducing the error and get more info out JM> of that. JM> JM> Thank you for any help. JM> Jarda Hello Jaroslav, It'll be very useful if you try to attach WinDBG tool to VS before the crash, reproduce the crash and take a dump file and send it to us for investigationt. Thank you. -- Kirill Falk JetBrains, Inc "Develop with pleasure!" We are expecting the same problem with the same environment (VS2008+ReSharper 4.0 + Windows XP or Win 2003 server). I've worked with Microsoft on that problem and reply is: We had an incident#SRX080723600071 with the same description and results of our investigation: It looks like there are components running in a different appdomain (possibly a webservice call). We see where JScript is calling into some object which eventually calls into JetBrains.UI.Interop.WindowsHook.CoreHookProc. It appears that JetBrains have set a hook on some thread to watch messages being dispatched to it. We can't tell what thread from this output, but I suspect it would be the main UI thread since this is in the JetBrains.UI namespace. We see where that calls JetBrains.UI.Interop.Win32Declarations.CallNextHookEx, which is likely calling into User32!CallNextHookEx. In any case, this appears to end up resulting in a System.SecurityException. It's unclear whether this is handled or not. Please check with JetBrains whether they expect to see these types of exceptions. The JetBrains code is involved with the managed exceptions. We know that Visual Studio is crashing because our ole32!CCliModalLoop instance appears to be released. We don't have any direct evidence from the dump that Jetbrains caused that, but we do know that removing JetBrains from the IDE allows it to work properly, so I suspect JetBrains definitely is involved with the root cause. These posts on JetBrain's community discuss various issues with JetBrains causing VS 2008 to crash. The first thread discusses it occurring when launching Team Explorer, and the last post in the thread shows a workaround that one person used.� " My question to the JetBrains: Any ways to fix it? Can you give me any phone, email address of person who can give me reply or I can work with? Please use my email. Hello Vasily, Thank you very much for so detailed information! This is the first time we probably have something we can get our hands on and I will fire my WinDbg on this tomorrow. If you can attach WinDbg and get crashdump as well as logs, that would be extremely helpful. Unfortunately, we don't have access to Microsoft crashdumps. You can send them directly to me, orangy at jetbrains com. Sincerely, Ilya Ryzhenkov JetBrains, Inc "Develop with pleasure!" V> We are expecting the same problem with the same environment V> (VS2008+ReSharper 4.0 + Windows XP or Win 2003 server). V> V> I've worked with Microsoft on that problem and reply is: V> V> We had an incident#SRX080723600071 with the same description and V> results of our investigation: V> V> It looks like there are components running in a different appdomain V> (possibly a webservice call). We see where JScript V> V> is calling into some object which eventually calls into V> JetBrains.UI.Interop.WindowsHook.CoreHookProc. It appears that V> JetBrains have set a hook on some thread to watch messages being V> dispatched to it. We can't tell what thread from this output, but I V> suspect it would be the main UI thread since this is in the V> JetBrains.UI namespace. We see where that calls V> JetBrains.UI.Interop.Win32Declarations.CallNextHookEx, which is V> likely calling into User32!CallNextHookEx. In any case, this appears V> to end up resulting in a System.SecurityException. It's unclear V> whether this is handled or not. Please check with JetBrains whether V> they expect to see these types of exceptions. V> V> The JetBrains code is involved with the managed exceptions. We know V> that Visual Studio is crashing because our ole32!CCliModalLoop V> instance appears to be released. We don't have any direct evidence V> from the dump that Jetbrains caused that, but we do know that V> removing JetBrains from the IDE allows it to work properly, so I V> suspect JetBrains definitely is involved with the root cause. V> V> These posts on JetBrain's community discuss various issues with V> JetBrains causing VS 2008 to crash. The first thread discusses it V> occurring when launching Team Explorer, and the last post in the V> thread shows a workaround that one person used. V> V>� V> V> V> V> " V> V> My question to the JetBrains: Any ways to fix it? Can you give me V> any phone, email address of person who can give me reply or I can V> work with? Please use my email. V>
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206717785--Build-919-Massive-Problems-with-VS-2008-Team-Edition?sort_by=votes
CC-MAIN-2019-43
en
refinedweb
[Previous post –>Writing a simple implementation of dependency injection in MVC 4 Web API with .NET Framework 4.5] In my previous blog post, we developed a simple REST based web API that exposes web methods using MVC 4 controller. We also discussed how to implement a simple dependency injection mechanism using the inbuilt IDependencyResolver interface, which may be useful to understand the IoC pattern for academic purposes though the practical implementation inevitably requires a DI framework such as Unity framework by Microsoft Patterns and Practices team! (Winking for the shameless promotion though it’s among the best out there and we recently utilized it in our latest guidance that shows how to use Microsoft Azure Media Services to build an On-Demand Video Service. And here goes another for the latter! Okay, let’s get back to the topic under discussion before I deviate any further. In this post, first we’ll write a couple of BDD style unit tests to verify the API methods. To the solution created in the previous step, add a new solution folder named Test and create a new Unit Test Project named SensorDataTracker.Test under the Test folder: Rename the default source file UnitTest1.cs to TestHarness.cs: Replace the code in TestHarness.cs with the following: namespace SensorDataTracker.Test{using System;using System.Collections.Generic;using SensorDataTracker.Models;using SensorDataTracker.Store;internal static class TestHarness{internal static IEnumerable<SensorReading> GenerateAndStoreReadings(ISensorReadingStore store,int count,Func<int, int> genMethod){var readings = new List<SensorReading>();for (var i = 0; i < count; i++){var sensorReading = new SensorReading { Id = genMethod(i) };readings.Add(sensorReading);store.Save(sensorReading);}return readings;}}} As you can see, we’ve created a utility method that generates and store readings. For the sake of brevity, the same method is generating as well as storing the readings though ideally in order to adhere to SRP, you’d like to refactor! The method GenerateAndStoreReadings expects a SensorReadingStore object, a count of readings to generate and a method that acts as the reading generator. A SensorReading is generated and added to a list as well as to a SensorReadingStore and finally the list of readings is returned to the caller. Now since the harness is in place, we’re ready to add our first unit test. Add a new class to SensorDataTracker.Test project and name it SensorReadingStoreTest.cs as shown below: Paste the following code in SensorReadingStoreTest.cs file:namespace SensorDataTracker.Test{using System.Collections.Generic;using System.Linq;using Microsoft.VisualStudio.TestTools.UnitTesting;using SensorDataTracker.Models;using SensorDataTracker.Store;[TestClass]public class SensorReadingStoreTest{[TestMethod]public void WhenSavingItemsShouldReturnSincereRepresentationOfTheItems(){// Givenvar store = new SensorReadingStore();var readings = TestHarness.GenerateAndStoreReadings(store, 3, i => 2 * i + 1);// Whenvar results = store.Get().Results.ToList();// ThenCollectionAssert.AreEqual(readings.ToArray(),results,Comparer<SensorReading>.Create((x, y) => x.Id.CompareTo(y.Id)));}}} Carefully examine the language I used while naming the test that self-documents the intent of the test. The test is nicely structured to depict that given a specific set of initial conditions when certain operations are performed that triggers a scenario then specific set of outcomes should be obtained. The test first creates 3 sensor readings using any random function such as an odd number generation algorithm in this particular case and saves the readings to the store. When readings are obtained from the in memory store they should match with the list of readings returned by the method that generates and stores the sensor readings. Note the use of a Comparer delegate that uses a lambda expression to compare the corresponding ids of two collections of sensor readings, pretty elegant! Let’s add another test. Add the following code in SensorReadingStoreTest.cs:[TestMethod]public void WhenSecondPageIsRequestedShouldReturnTheSecondPageOfResults(){var store = new SensorReadingStore();const int TwoPages = 2 * SensorReadingStore.PageSize;for (var i = 0; i < TwoPages; i++){store.Save(new SensorReading() { Id = i, CreateDate = DateTime.Now });}var page = store.Get(2);Assert.AreEqual(SensorReadingStore.PageSize, page.Results.Count());Assert.AreEqual(SensorReadingStore.PageSize, page.Results.First().Id);} Here we’re verifying that on requesting the second page of sensor readings, the test should indeed return the second page. And how do we verify if we’re actually getting the second page? First we initialize a constant integer to double the size of a page e.g. if one page stores 5 readings then TwoPages is initialized to 10. Now we store sensor readings with ids from 0 to 9 in the SensorReadingStore instance and then request the second page of results (ResultPage) from the Web API controller using Get method. And all we’re left to do, is to verify that the number of results and the Id of the first reading in the second page both match the PageSize! Now let’s run the tests and verify that they are passing. I prefer to run all my tests in the Test Explorer window. To bring up Test Explorer in Visual Studio 2013, choose TEST—>Windows—>Test Explorer from the main menu. Select Run All option in the toolbar of Test Explorer window and you should see something similar to below if all goes well! That’s it! In the next blog post, we’ll add a Windows Phone client that consumes the Web API so stay tuned. .. [Next Post—>Developing a Windows Phone 8.1 client for the sensor data tracker web API via a Portable Class Library] Very interesting, I'm going to have to look at BDD.
https://blogs.msdn.microsoft.com/kirpas/2014/05/14/writing-bdd-style-unit-tests-to-verify-the-mvc-based-web-api/
CC-MAIN-2019-43
en
refinedweb
I have mentioned in an earlier post that you can stack up the many if statements to check for the different conditions. This could make the code look messy. The logical operators provide a neat and simple solution. Using logical operators, you can combine a series of comparisons into a single expression so that you need just one if, almost regardless of the complexity of the set of conditions. The example below use the && (And) operator. Other common operators are || (Or) and ! (Not). #include <iostream> using namespace std; void main() { char letter = 'C'; if (letter >= 'A' && letter <= 'Z') { cout << "This is an uppercase letter." << endl; } }
https://codecrawl.com/2015/01/02/c-logical-operators/
CC-MAIN-2019-43
en
refinedweb
How to access Arduino Dock A0 from terminal - Jean-Baptiste Clion Hi! I would like to run a simple script in the Onion terminal in order to read Analog value on A0, is this possible? Thank you Kind Regards - François Levaux I would be interested as well, even better, use it with python - Maurice Marks I've got that to work. Here's the story: The only communication between the Omega and the Arduino is on I2C. And as it stands that was set up to cause a reset on a particular write to do the flash procedure and also to control a neopixel string. Unfortunately that precludes using that channel for anything else, which is a real pain. You can connect the Arduino serial RX/TX to the Omega RX/TX but that would take over the Omega console, so its not ideal. So what I did was to strip out the Neopixel code from the Omega Library and replace it with a modified library that allows a user program to register handlers for i2c receive and request. It preserves the 0xdead to 0x08 processing that is needed to automatically flash a new project but allows a number of user routines to be added. This makes the Arduino look like an I2C slave device with a set of registers that can be read or written from the Onion side. I also create an ONION pseudo device to simplify the setup. Here's my current test application - it controls a single neopixel (on address 1) and also reads from A0 on request from address 2. Note - this is just an experiment at this point. If there is interest I'll post my library code. #include <Wire.h> #include <Onion2.h> #include <Adafruit_NeoPixel.h> #define LED 13 #define LED2 A1 #define NEO 8 Adafruit_NeoPixel rgb(1, NEO); void recvEvent() { byte data; byte r=0, g=0, b=0; if (Wire.available()) r = Wire.read(); if (Wire.available()) g = Wire.read(); if (Wire.available()) b = Wire.read(); rgb.setPixelColor(0, r, g, b); rgb.show(); } byte reqEvent() { return analogRead(A0); } void setup() { pinMode(LED, OUTPUT); // pinMode(LED2, OUTPUT); Serial.begin(115200); rgb.begin(); rgb.clear(); rgb.show(); ONION.registerRcvHandler(1, &recvEvent); ONION.registerReqHandler(2, &reqEvent); // digitalWrite(LED2, LOW); } void loop() { Serial.println("Hello2"); digitalWrite(LED, 1); delay(1000); digitalWrite(LED, 0); delay(1000); } - Kit Bishop @Maurice-Marks Cool :-) Would be useful if this could eventually be generalised to some sort of bridge code between the Omega and the Arduino dock. I would be most interested in seeing your code when it is done and ready :-) - Lazar Demin administrators Just a heads-up, we're hard at work on bridging the ATmega on the Arduino Dock and the Omega. Expect something in the coming months. - Maurice Marks I don't want to preempt any official Onion work. You folks have done a great job with the hardware and software. But if anyone is interested in my experimental library I've put it up on Github: - Craig OShannessy @Lazar-Demin Hey. Is there any update on bridging the arduino and the omega? I need to read from a analogue pin, (which is why I'm using the dock), but it seems that getting the omega to be able to access the data is going to be a massive pain? Or am I missing something? - Kit Bishop @Craig-OShannessy Unfortunately there is currently no code to do what you want. It requires two code components that are not currently available: - Omega code to use I2C to access the Arduino to control the Arduino pins in a general manner - Arduino library that responds to appropriate Omega I2C communications and performs the required pin actions (returning any appropriate information) I am currently working on some general code to do this and will publish it when it is available. This code will provide access by the Omega to ALL standard Arduino pin operations including control of the analog pins Currently the only available code for Omega<-->Arduino communications is for control of Arduino NeoPixels from the Omega - Will Kostelecky @Craig-OShannessy Take a look in case you missed the post that I did on this library. @Will-Kostelecky said in How to access Arduino Dock A0 from terminal: @Craig-OShannessy Take a look in case you missed the post that I did on this library. How was this missed it should have been posted somewhere predominantly reviewed @Will-Kostelecky he put some work into this and it deserves to be seen. Maybe along with a video section with links to great videos a related page for this kind of post created by @Will-Kostelecky . He has a video as well.... - Will Kostelecky
https://community.onion.io/topic/212/how-to-access-arduino-dock-a0-from-terminal/11
CC-MAIN-2019-43
en
refinedweb
IdentityPool.java CreateIdentityPool.java demonstrates how to create a new identity pool. The identity pool is a store of user identity information that is specific to your AWS account. /* *.CognitoIdentityClient; import software.amazon.awssdk.services.cognitoidentity.model.CreateIdentityPoolRequest; import software.amazon.awssdk.services.cognitoidentity.model.CreateIdentityPoolResponse; public class CreateIdentityPool { public static void main(String[] args) { final String USAGE = "\n" + "Usage:\n" + " CreateIdentityPool <identity_pool_name> \n\n" + "Where:\n" + " identity_pool_name - the name to give your identity pool.\n\n" + "Example:\n" + " CreateTable HelloTable\n"; if (args.length < 1) { System.out.println(USAGE); System.exit(1); } String identity_pool_name = args[1]; CognitoIdentityClient cognitoclient = CognitoIdentityClient.builder() .region(Region.US_EAST_1) .build(); CreateIdentityPoolResponse response = cognitoclient.createIdentityPool( CreateIdentityPoolRequest.builder() .allowUnauthenticatedIdentities(false) .identityPoolName(identity_pool_name) .build() ); System.out.println("Unity Pool " + response.identityPoolName() + " is created. ID: " + response.identityPoolId()); } } Sample Details Service: cognito Last tested: 2019-06-02 Author: jschwarzwalder AWS Type: full-example
https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-cognito-src-main-java-com-example-cognito-CreateIdentityPool.java.html
CC-MAIN-2019-43
en
refinedweb
I upgraded my WiPy firmware as shown here: ... er-the-air to 1.8.2 Downloaded the libary from here: After that i copied mpu9150.py, vector3d.py and imu.py to the wipy. Opened a telnet to the wipy and tried the following: I can see the sensor, when i do the following: the addr is 105, i have no idea what i'm doing wrong, could someone help me out? Even after that the mpu9150 module initialization is not very clear to me either. Its probably my bad tho, since i'm new to python, but i could really appreciate some help! Code: Select all from machine import I2C import os def test(): mch = os.uname().machine if 'LaunchPad' in mch: i2c_pins = ('GP11', 'GP10') elif 'WiPy' in mch: i2c_pins = ('GP24', 'GP23') else: raise Exception('Board not supported!') i2c = I2C(0, mode=I2C.MASTER, baudrate=400000, pins=i2c_pins) addr = i2c.scan() print(addr)
https://forum.micropython.org/viewtopic.php?f=5&t=231&p=18489
CC-MAIN-2019-43
en
refinedweb
- Mike Wills - Nov 19, 2020 Introduction At BlueModus, we use a code generator to create lightweight DTO classes for Kentico Xperience page types. These lightweight classes improve caching, support JSON serialization, and allow modeling page type inheritance. Mirroring page type inheritance in the object model enables rendering many types of content with one MVC partial view. For example, a partial view created to render a card can render a product, promotion, or article teaser without knowing the difference as long as they share inherited fields. We create a base type called ListableBase that provides common fields like title, summary, thumbnail, and URL. Using this base type, a website can render lists of content, from simple links to rich card layouts, without knowing the specific content types. This cuts down type-specific boilerplate code. Additionally, using inherited types to share common fields improves SQL performance. When querying Xperience for the content of multiple types, most of the fields will align to reduce the effect of union “hell”. SQL union queries become narrower because page types have common field names. Problem A user experience challenge arises from using inherited page types because most field properties can only be set in the base page type. For example, if a field’s caption is “title” in the base type, its caption cannot be changed to “product name” in an inherited type. This prevents customizing the following field settings in derived types, settings we use to enhance the author experience: - Caption - Description - Explanation - Form control - Drop-down list We often need to customize advanced settings in our custom field controls. For example, we also have a custom field control for selecting related content. With this control, authors can compose pages using reusable content components. In the control’s advanced settings, we need to change the allowed child page types, but these settings are also disabled in an inherited type. Solution The good news is Xperience allows you to configure most field properties in alternative forms. This allows changing a field’s caption, description, or advanced properties for an inherited type. Therefore, a product, call-to-action, promotion, and article can all inherit the same title field but have unique captions and explanations to improve the author’s user experience. Alternative forms allow you to control the fit and polish of each page type’s form and provides significant improvement to the author’s experience. Unfortunately, when using alternative forms to override the settings of inherited fields, you can't make only one alternative form. You must create an alternative form for each edit mode – that is an alternative form for both “update” and “insert”. Creating two alternative forms for a page type was a burden, but an author's user experience is one of the most important factors in the success of an Xperience project. This made the extra effort worth it. However, we realized a third alternative form was needed for sites supporting multiple cultures. In that case, we had the overhead of creating three alternative forms for each page type that required you to override the inherited field settings. Suddenly, this great idea became too much overhead. Xperience’s best-kept secret I've often said that Xperience’s best-kept secret is its robust API and extensibility. The API allows so much customization that there is almost always a solution for unique customer requirements. This time, the API allowed us to create a custom AlternativeFormInfoProvider. By creating a custom provider, we added code that would cause Xperience to use an alternative form named “upsert” whenever it couldn't find one named “insert”, “update”, or “newculture”. Here’s a rudimentary example that shows how to create a custom AlternativeFormInfoProvider. Keep reading to find our version in the DevNet Marketplace. using CMS; using CMS.FormEngine; using Acme.UpsertAlternativeForms; using System; using System.Linq; [assembly: RegisterCustomProvider(typeof(UpsertAlternativeFormInfoProvider))] namespace Acme.UpsertAlternativeForms { /// <inheritdoc /> /// <summary> /// Kentico custom provider for alternative forms to enable using /// alternative forms named "upsert" for all three form modes supported /// for pages: "insert", "update", and "newculture". /// </summary> public class UpsertAlternativeFormInfoProvider : AlternativeFormInfoProvider { // These three alternative form names are the built-in names used for Page Types. private static readonly string[] BuiltInPageTypeAlternativeFormNames = { "update", "insert", "newculture" }; /// <inheritdoc /> /// <summary> /// Overrides default Kentico behavior by allowing an alternative /// form named "upsert" to be used if forms named "insert", "update", /// or "newculture" do not exist. /// </summary> /// <param name="alternativeFormFullName">The name of the alternative form</param> /// <param name="useHashtable">Optional flag to use a hash table</param> /// <returns>An AlternativeFormInfo object</returns> protected override AlternativeFormInfo GetInfoByFullName( string alternativeFormFullName, bool useHashtable = true) { var providedForm = base.GetInfoByFullName( alternativeFormFullName, useHashtable); if (providedForm != null) { return providedForm; } var delimiterPosition = alternativeFormFullName.LastIndexOf('.'); var className = alternativeFormFullName.Substring(0, delimiterPosition); var formName = alternativeFormFullName.Substring(delimiterPosition + 1); if (!BuiltInPageTypeAlternativeFormNames.Contains( formName, StringComparer.OrdinalIgnoreCase)) { return null; } var upsertFormName = className + "." + formName; return base.GetInfoByFullName(upsertFormName); } } } Result After adding a small amount of code, we were able to create one alternative form for each derived page type. This greatly reduced the overhead of using alternative forms and increased the efficiency of tailoring the author user experience. Upsert Alternative Forms in the Marketplace We now use this custom AlternativeFormInfoProvider in every new project and want to share it with the Xperience community. The “Kentico Xperience Upsert Alternative Forms Provider” is available in the Xperience DevNet Marketplace and on GitHub. It is also compatible with projects built in Xperience version 11, 12, or 13. I hope, with the help of this provider, you are inspired to create great author user experiences. Have questions on how to get the most out of your Kentico implementation? Drop me a line on Twitter (@tiriansdoor), or view my other articles here.
https://bluemodus.com/articles/improve-content-authoring-with-kentico-xperience-s-robust-api
CC-MAIN-2020-50
en
refinedweb
Created on 2017-08-27 20:17 by Paul Pinterits, last changed 2020-11-19 11:35 by iritkatriel. The file paths displayed in exception tracebacks have their symlinks resolved. I would prefer if the "original" path could be displayed instead, because resolved symlinks result in unexpected paths in the traceback and can be quite confusing. An example: rawing@localhost ~> cat test_scripts/A.py import B B.throw() rawing@localhost ~> cat test_scripts/B.py def throw(): raise ValueError rawing@localhost ~> ln -s test_scripts test_symlink rawing@localhost ~> python3 test_symlink/A.py Traceback (most recent call last): File "test_symlink/A.py", line 2, in <module> B.throw() File "/home/rawing/test_scripts/B.py", line 2, in throw raise ValueError ValueError As you can see, even though both scripts reside in the same directory, the file paths displayed in the traceback look very different. At first glance, it looks like B is in a completely different place than A. Furthermore, this behavior tends to trip up IDEs - PyCharm for example does not understand that test_scripts/B.py and test_symlink/B.py are the same file, so I end up having the same file opened in two different tabs. Would it be possible to change this behavior and have "/home/rawing/test_symlink/B.py" show up in the traceback instead? Here is a case where the opposite was requested: There they want the traceback to be the same regardless of which symlink the script was found by. I think a change like the one you are proposing here should be discussed on python-ideas. Would you like to bring it up there?
https://bugs.python.org/issue31289
CC-MAIN-2020-50
en
refinedweb
std::numeric_limits<T>::signaling_NaN Returns the special value "signaling not-a-number", as represented by the floating-point type T. Only meaningful if std::numeric_limits<T>::has_signaling_NaN == true. In IEEE 754, the most common binary representation of floating-point numbers, any value with all bits of the exponent set and at least one bit of the fraction set represents a NaN. It is implementation-defined which values of the fraction represent quiet or signaling NaNs, and whether the sign bit is meaningful. [edit] Return value [edit] Notes A NaN never compares equal to itself. Copying a NaN is not required, by IEEE-754, to preserve its bit representation (sign and payload), though most implementation do. When a signaling NaN is used as an argument to an arithmetic expression, the appropriate floating-point exception may be raised and the NaN is "quieted", that is, the expression returns a quiet NaN. [edit] Example Demonstrates the use of a signaling NaN to raise a floating-point exception #include <iostream> #include <limits> #include <cfenv> #pragma STDC_FENV_ACCESS on void show_fe_exceptions() { int n = std::fetestexcept(FE_ALL_EXCEPT); if(n & FE_INVALID) std::cout << "FE_INVALID is raised\n"; else if(n == 0) std::cout << "no exceptions are raised\n"; std::feclearexcept(FE_ALL_EXCEPT); } int main() { double snan = std::numeric_limits<double>::signaling_NaN(); std::cout << "After sNaN was obtained "; show_fe_exceptions(); double qnan = snan * 2.0; std::cout << "After sNaN was multiplied by 2 "; show_fe_exceptions(); double qnan2 = qnan * 2.0; std::cout << "After the quieted NaN was multiplied by 2 "; show_fe_exceptions(); std::cout << "The result is " << qnan2 << '\n'; } Output: After sNaN was obtained no exceptions are raised After sNaN was multiplied by 2 FE_INVALID is raised After the quieted NaN was multiplied by 2 no exceptions are raised The result is nan
https://en.cppreference.com/w/cpp/types/numeric_limits/signaling_NaN
CC-MAIN-2020-50
en
refinedweb
Asked by: Pass Array of items to Jquery in asp.net or asp.net mvc Question I have the following array structure in my jquery code hardcoded as follows which is basically map coordinates etc. var locations = [ ['Petronas Twin Tower','3.1579','101.7116'], ['Kuala Lumpur Tower','3.152866','101.7038'], ['Kuala Lumpur Bird Park','3.14312','101.68835'], ['National Mosque of Malaysia','3.141692','101.691645'], ['Merdeka Square','3.147847','101.693433'] ]; Obviously the above is hardcoded and I want to be able to do is pass back a dynamic list array, that can be used within my jquery in place of the list above. How would I accomplish this using C# in asp.net forms or asp.net mvc? Thanks All replies You would make a custom object, a view model, that would hold a List<T> of custom objects called Locations and you pass the view model into the view. You would load the List<Locations> in the C# code. MVC and JQuery can be discussed at the ASP.NET forums. Create a class as follows, public class Location { public string Name { get; set; } public long Latitude { get; set; } public long Longitude { get; set; } }Then you can create a collection List<Location> and add Items to it. You can pass the above model to the view! Microsoft MVP, Full stack developer, Top contributor Stackoverflow ,@Angular Enthusiast
https://social.microsoft.com/Forums/en-US/f3dbed8d-4ca6-4e3d-874d-ca13361c7a12/pass-array-of-items-to-jquery-in-aspnet-or-aspnet-mvc?forum=Offtopic
CC-MAIN-2020-50
en
refinedweb
SAP Business Application Studio – Getting Started with CAP and SAP HANA Service on CF In the new era of cloud , where we are moving towards Cloud Foundry environment , a next generation development environment is also required catering to the needs of developer. Hence SAP Business Application Studio. What is it ???? SAP Business Application Studio is a next generation, tailor made development environment available as a service on SAP Cloud Foundry which offers a modular development for business application for SAP Intelligent Enterprise. Here developers can utilize more than 1 development space which are isolated from each other and let you run the application without deploying on Cloud Platform using its powerful terminal CAP – SAP Cloud Application Programming Model it is an open and opinionated model , which provides framework of libraries , languages and tools for the development of enterprise grade applications . It provides some best practices and guides developer with out of the box solutions for some common problems. So lets get started Prerequisite - SAP Cloud Platform Account (Foundry ) – Data center available - Subscribed to SAP Business Application Studio - Relevant Roles for accessing the service – Authorization Management - SAP HANA Service hdi-shared After Accessing the application Create a Dev Space Step 1 : Click on Create Dev Space Step 2: Create Dev Space – Enter Name and Select SAP Cloud Business Application as a category Start Developing - Open new terminal 2. type cd projects , to change the directly. 3. setting up the project mvn -B archetype:generate -DarchetypeArtifactId=cds-services-archetype -DarchetypeGroupId=com.sap.cds \ -DarchetypeVersion=1.2.0 -DcdsVersion=3.21.2 \ -DgroupId=com.sap.teched.cap -DartifactId=products-service -Dpackage=com.sap.teched.cap.productsservice Open your project in the Studio Lets Make CDS files - Navigate to db , create a file schema.cds namespace sap.capire.dev; entity Products { title : localized String(20); descr : localized String(100); stock : Integer; price : Decimal(9,2); key id :Integer; } - Navigate to srv , create a file service.cds using { sap.capire.dev as db } from '../db/schema'; service AdminService { entity Products as projection on db.Products; } - From terminal navigate to cd product-service using terminal. - type mvn clean install – to compile the project using terminal . - Now go to srv -> src -> main -> resources and open application.yaml and replace the content --- spring: profiles: default datasource: url: "jdbc:sqlite:/home/user/projects/products-service/sqlite.db" driver-class-name: org.sqlite.JDBC initialization-mode: never - installing SAP HANA DB deployer npm install --save-dev --save-exact @sap/[email protected] - login into CF account – using terminal type cf login and select the space - initialize DB and create SAP HANA Service instance – please make sure , you have the entitlement using terminal type , kindly type it manually cds deploy --to hana:bookstore-hana - Go to srv->pom.xml and add the dependency <dependency> <groupId>com.sap.cds</groupId> <artifactId>cds-feature-hana</artifactId> </dependency> - Time to run the application in the terminal mvn spring-boot:run -Dspring-boot.run.profiles=cloud - Click on Expose and Open on the left Bottom and press Enter - Go Back to Application Studio and open new terminal - Test the service with POST call curl -X POST \ -H "Content-Type: application/json" \ -d '{ "title": "Product 1", "descr":"sample product ","stock":20,"price":100.60,"id":1 }' - Now Lets check the service from step 11 , open Products and you shall be able to view the saved data After successfully following these steps , you would be able to complete the setting up of SAP Business Application Studio on SAP Cloud Platform (Foundry) , creating an application using CAP , connecting it to SAP HANA Service and performing POST , GET operation on your ODATA service. These are the screenshots taken from our SAP Cloud Platform account. Hi, I also found new and updated tutorials about this topic. Cheers, Ervin Thanks Munish Suri for providing a step-by-step guide. 🙂 Excellent, Really Helpful. Hi Munish, nice cookbook tutorial approach…and it worked for me nicely up to point 8 where it simply crashed with this message: Service offering ‘hanatrial’ not found. will take another look after switching my eu and us spaces around, but thank you for the steps 1-7. rgds, greg Hi Greg, I also ran into an issue [ERROR] [cds.deploy] – Service name bookstore-hana must only contain alpha-numeric, hyphens, and underscores. , however when I retyped manually the execution was successfull. Regards Justin Hi Munish, I’m also stuck to step 8: Regards, Yordan Hi Yordan, Please create an instance of hana HDI . you may utilise hanatrial service on cloud foundry for the same. HDI – use the name bookstore-hana Best regards Munish Hi Munish, How do I deploy the application on CF? As MTA? Which steps do I have to go through? Thanks Peter Hi Peter, for deploying , you may have to create a manifest.yml . Where you have to define the name , path(jar) ,services . and using cf push , you should be able to deploy on cloud platform. Best regards Munish Hi Munish, probably not the best place to ask this one, but I’ll try anyway. There is an issue I couldn’t overcome so far. When I want to clone a git repo from SAPs internal Github, it gives me . fatal: unable to access ‘<my repo url>/’: Received HTTP code 502 from proxy after CONNECT I would assume connecting to git repos should be supported out of the box. Thanks! Robin Hi Robin, In the terminal, you can clone the git repository. As i see you are trying to clone the internal Git Repository, which i guess wont be possible on the public version. Maybe you can give it a try in the internal canary account if possible. thanks Best regards Munish Hi Munish, Nice blog. When I do cds delpoy , I get the following error. Deployment to container CC3087871C54427590C88E8F653E85DF failed – ] [Deployment ID: none]. ] Any idea why this is happening?. Regards, Swetha. Hi Swetha, It seems you are trying to connect with the Canary account. Unfortunately, I have used the factory account. Can you please raise an internal ticket for the same? Best regards Munish Suri I got the same error. Can you help me? Hi Louis, Can you please try in SAP Cloud Platform Test account, not the canary one. Best regards Munish Suri Hello, Is it possible to use “cds deploy …” to test in a HANA DB from another subaccount/org/space? I already followed all the steps from documentation in order to deploy, but I’m now quite sure if “cds deploy” should work with this setup. The error I get is similar to the one posted by Swetha: Best regards. Hello Christian, Can you please try in SAP Cloud Platform Test account, it works usually. I am not really sure of the canary account. Kindly raise an internal ticket if the problem persists. Best regards Munish Suri
https://blogs.sap.com/2020/03/16/sap-business-application-studio-getting-started-with-cap-and-sap-hana-service-on-cf/
CC-MAIN-2020-50
en
refinedweb
Get the highlights in your inbox every week. A beginner's guide to Kubernetes container orchestration | Opensource.com A beginner's guide to Kubernetes container orchestration Understanding the building blocks of container orchestration makes it easier to get started with Kubernetes. Subscribe now Last fall, I took on a new role with a team that relies on Kubernetes (K8s) as part of its core infrastructure. While I have worked with a variety of container orchestrators in my time (e.g., Kubernetes, Apache Mesos, Amazon ECS), the job change sent me back to the basics. Here is my take on the fundamentals you should be familiar with if you're working with Kubernetes. Container orchestration refers to the tools and platforms used to automate, manage, and schedule workloads defined by individual containers. There are many players in this space, both open source and proprietary, including Hashicorp's Nomad, Apache Mesos, Amazon's ECS, and let's not forget Google's home-grown Borg project (from which Kubernetes evolved). There are pros and cons with each technology, but Kubernetes' rising popularity and strong community support make it clear that Kubernetes is currently the king of the container orchestrators. I also consider Kubernetes to have clear advantages when you're working with open source software. As an open source platform, it is cloud-agnostic, and it makes sense to build other open source software on top of it. It also has a dedicated following with over 40,000 contributors, and because a lot of developers are already familiar with Kubernetes, it's easier for users to integrate open source solutions built on top of K8s. Breaking down Kubernetes into building blocks The simplest way to break down Kubernetes is by looking at the core concepts of container orchestrators. There are containers, which serve asfoundational building blocks of work, and then there are the components built on top of each other to tie the system together. Components come in two core types: - Workload managers: A way to host and run the containers - Cluster managers: Global ways to make decisions on behalf of the cluster In Kubernetes lingo, these roles are fulfilled by the worker nodes and the control plane that manages the work (i.e., Kubernetes components). Managing the workload Kubernetes worker nodes have a nested layer of components. At the base layer is the container itself. Technically, containers run in pods, which are the atomic object type within a Kubernetes cluster. Here's how they relate: - Pod: A pod defines the logical unit of the application; it can contain one or more containers and each pod is deployed onto a node. - Node: This is the virtual machine serving as the worker in the cluster; pods run on the nodes. - Cluster: This consists of worker nodes and is managed by the control plane. Each node runs an agent known as the kublet for running containers in a pod and a kube-proxy for managing network rules. Managing the cluster The worker nodes manage the containers, and the Kubernetes control plane makes global decisions about the cluster. The control plane consists of several essential components: - Memory store (etcd): This is the backend store for all cluster data. While it's possible to run a Kubernetes cluster with a different backing store, etcd, an open source distributed key-value store, is the default. - Scheduler (kube-scheduler): The scheduler is responsible for assigning newly created pods to the appropriate nodes. - API frontend (kube-apiserver): This is the gateway from which the developer can interact with Kubernetes—to deploy services, fetch metrics, check logs, etc. - Controller manager (kube-controller-manager): This watches the cluster and makes necessary changes in order to keep the cluster in the desired state—such as scaling up nodes, maintaining the correct number of pods per replication controller, and creating new namespaces. The control plane makes decisions to ensure regular operation of the cluster and abstracts away these decisions so that the developer doesn't have to worry about them. Its functionality is highly complex, and users of the system need to have awareness of the logical constraints of the control plane without getting too bogged down on the details. Using controllers and templates The components of the cluster dictate how the cluster manages itself—but how do developers or (human) operators tell the cluster how to run the software? This is where controllers and templates come in. Controllers orchestrate the pods, and K8s has different types of controllers for different use cases. But the key ones are Jobs, for one-off jobs that run to completion, and ReplicaSets, for running a specified set of identical pods that provide a service. Like everything else in Kubernetes, these concepts form the building blocks of more complex systems that allow developers to run resilient services. Instead of using ReplicaSets directly, you're encouraged to use Deployments instead. Deployments manage ReplicaSets on behalf of the user and allow for rolling updates. Kubernetes Deployments ensure that only some pods are down while they're being updated, thereby allowing for zero-downtime deploys. Likewise, CronJobs manage Jobs and are used for running scheduled and repeated processes. The many layers of K8s allow for better customization, but CronJobs and Deployments suffice for most use cases. Once you know which controller to pick to run your service, you'll need to configure it with templating. Anatomy of the template The Kubernetes template is a YAML file that defines the parameters by which the containers run. Much like any kind of configuration as code, it has its own specific format and requirements that can be a lot to learn. Thankfully, the information you need to provide is the same as if you were running your code against any container orchestrator: - Tell it what to name the application - Tell it where to look for the image of the container (often called the container registry) - Tell it how many instances to run (in the terminology above, the number of ReplicaSets) Flexibility in configuration is one of the many advantages of Kubernetes. With the different resources and templates, you can also provide the cluster information about: - Environment variables - Location of secrets - Any data volumes that should be mounted for use by the containers - How much CPU and memory each container or pod is allowed to use - The specific command the container should run And the list goes on. Bringing it all together Combining templates from different resources allows the user to interoperate the components within Kubernetes and customize them for their own needs. In a bigger ecosystem, developers leverage Jobs, Services, and Deployments with ConfigMaps and Secrets that combine to make an application—all of which need to be carefully orchestrated during deployment. Managing these coordinated steps can be done manually or with one of the common package-management options. While it's definitely possible to roll your own deployment against the Kubernetes API, it's often a good idea to package your configuration—especially if you're shipping open source software that might be deployed and managed by someone not directly on your team. The package manager of choice for Kubernetes is Helm. It doesn't take a lot to get started with Helm, and it allows you to package your own software for easy installation on a Kubernetes cluster. Smooth sailing! The many layers and extensions sitting on top of containers can make container orchestrators difficult to understand. But it's actually all very elegant once you've broken down the pieces and see how they interact. Much like a real orchestra, you develop an appreciation for each individual instrument and watch the harmony come together. Knowing the fundamentals allows you to recognize and apply patterns and pivot from one container orchestrator to another.
https://opensource.com/article/20/6/container-orchestration
CC-MAIN-2020-50
en
refinedweb
Inherit this class to test your DatasetBuilder class. Inherits From: SubTestCase, TestCase tfds.testing.DatasetBuilderTestCase( methodName='runTest' ) You must set the following class attributes: - DATASET_CLASS: class object of DatasetBuilder you want to test. You may set the following class attributes: - VERSION: str. The version used to run the test. eg: '1.2.*'. Defaults to None (canonical version). - BUILDER_CONFIG_NAMES_TO_TEST: list[str], the list of builder configs that should be tested. If None, all the BUILDER_CONFIGS from the class will be tested. - DL_EXTRACT_RESULT: dict[str], the returned result of mocked download_and_extractmethod. The values should be the path of files present in the fake_examplesdirectory, relative to that directory. If not specified, path to fake_exampleswill always be returned. - DL_DOWNLOAD_RESULT: dict[str], the returned result of mocked download_and_extractmethod. The values should be the path of files present in the fake_examplesdirectory, relative to that directory. If not specified: will use DL_EXTRACT_RESULT (this is due to backwards compatibility and will be removed in the future). - EXAMPLE_DIR: str, the base directory in in which fake examples are contained. Optional; defaults to tensorflow_datasets/testing/test_data/fake_examples/ . - OVERLAPPING_SPLITS: list[str], splits containing examples from other splits (e.g. a "example" split containing pictures from other splits). - MOCK_OUT_FORBIDDEN_OS_FUNCTIONS: bool, defaults to True. Set to False to disable checks preventing usage of osor builtin functions instead of recommended tf.io.gfileAPI. - SKIP_CHECKSUMS: Checks that the urls called by dl_manager.downloadare registered. This test case will check for the following: - the dataset builder is correctly registered, i.e. tfds.load(name)works; - the dataset builder can read the fake examples stored in testing/test_data/fake_examples/{dataset_name}; - the dataset builder can produce serialized data; - the dataset builder produces a valid Dataset object from serialized data - in eager mode; - in graph mode. - the produced Dataset examples have the expected dimensions and types; - the produced Dataset has and the expected number of examples; - a example is not part of two splits, or one of these splits is whitelisted in OVERLAPPING_SPLITS. Child Classes Methods addCleanup addCleanup( *args, **kwargs ) Add a function, with arguments, to be called when the test is completed. Functions added are called on a LIFO basis and are called after tearDown on test failure or success. Cleanup items are called even if setUp fails (unlike tearDown). addTypeEqualityFunc addTypeEqualityFunc( typeobj, function ) Add a type specific assertEqual style function to compare a type. This method is for use by TestCase subclasses that need to register their own type equality functions to provide nicer error messages. assertAllClose assertAllClose( a, b, rtol=1e-06, atol=1e-06, msg=None ) Asserts that two structures of numpy arrays or Tensors, have near values. a and b can be arbitrarily nested structures. A layer of a nested structure can be a dict, namedtuple, tuple or list. assertAllCloseAccordingToType assertAllCloseAccordingToType( a, b, rtol=1e-06, atol=1e-06, float_rtol=1e-06, float_atol=1e-06, half_rtol=0.001, half_atol=0.001, bfloat16_rtol=0.01, bfloat16_atol=0.01, msg=None ) Like assertAllClose, but also suitable for comparing fp16 arrays. In particular, the tolerance is reduced to 1e-3 if at least one of the arguments is of type float16. assertAllEqual assertAllEqual( a, b, msg=None ) Asserts that two numpy arrays or Tensors have the same values. assertAllEqualNested assertAllEqualNested( d1, d2 ) Same as assertAllEqual but compatible with nested dict. assertAllGreater assertAllGreater( a, comparison_target ) Assert element values are all greater than a target value. assertAllGreaterEqual assertAllGreaterEqual( a, comparison_target ) Assert element values are all greater than or equal to a target value. assertAllInRange assertAllInRange( target, lower_bound, upper_bound, open_lower_bound=False, open_upper_bound=False ) Assert that elements in a Tensor are all in a given range. assertAllInSet assertAllInSet( target, expected_set ) Assert that elements of a Tensor are all in a given closed set. assertAllLess assertAllLess( a, comparison_target ) Assert element values are all less than a target value. assertAllLessEqual assertAllLessEqual( a, comparison_target ) Assert element values are all less than or equal to a target value. assertAlmostEqual assertAlmostEqual( first, second, places=None, msg=None, delta=None ) Fail if the two objects are unequal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is more than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). If the two objects compare equal then they will automatically compare almost equal. assertAlmostEquals assertAlmostEquals( *args, **kwargs ) assertArrayNear assertArrayNear( farray1, farray2, err, msg=None ) Asserts that two float arrays are near each other. Checks that for all elements of farray1 and farray2 |f1 - f2| < err. Asserts a test failure if not. assertBetween assertBetween( value, minv, maxv, msg=None ) Asserts that value is between minv and maxv (inclusive). assertCommandFails assertCommandFails( command, regexes, env=None, close_fds=True, msg=None ) Asserts a shell command fails and the error matches a regex in a list. assertCommandSucceeds assertCommandSucceeds( command, regexes=(b'',), env=None, close_fds=True, msg=None ) Asserts that a shell command succeeds (i.e. exits with code 0). assertContainsExactSubsequence assertContainsExactSubsequence( container, subsequence, msg=None ) Asserts that "container" contains "subsequence" as an exact subsequence. Asserts that "container" contains all the elements of "subsequence", in order, and without other elements interspersed. For example, [1, 2, 3] is an exact subsequence of [0, 0, 1, 2, 3, 0] but not of [0, 0, 1, 2, 0, 3, 0]. assertContainsInOrder assertContainsInOrder( strings, target, msg=None ) Asserts that the strings provided are found in the target in order. This may be useful for checking HTML output. assertContainsSubsequence assertContainsSubsequence( container, subsequence, msg=None ) Asserts that "container" contains "subsequence" as a subsequence. Asserts that "container" contains all the elements of "subsequence", in order, but possibly with other elements interspersed. For example, [1, 2, 3] is a subsequence of [0, 0, 1, 2, 0, 3, 0] but not of [0, 0, 1, 3, 0, 2, 0]. assertContainsSubset assertContainsSubset( expected_subset, actual_set, msg=None ) Checks whether actual iterable is a superset of expected iterable. assertCountEqual assertCountEqual(DTypeEqual assertDTypeEqual( target, expected_dtype ) Assert ndarray data type is equal to expected. assertDeviceEqual assertDeviceEqual( device1, device2, msg=None ) Asserts that the two given devices are the same. assertDictContainsSubset assertDictContainsSubset( subset, dictionary, msg=None ) Checks whether dictionary is a superset of subset. assertDictEqual assertDictEqual( a, b, msg=None ) Raises AssertionError if a and b are not equal dictionaries. assertEmpty assertEmpty( container, msg=None ) Asserts that an object has zero length. assertEndsWith assertEndsWith( actual, expected_end, msg=None ) Asserts that actual.endswith(expected_end) is True. assertEqual assertEqual( first, second, msg=None ) Fail if the two objects are unequal as determined by the '==' operator. assertEquals assertEquals( *args, **kwargs ) assertFalse assertFalse( expr, msg=None ) Check that the expression is false. assertGreater assertGreater( a, b, msg=None ) Just like self.assertTrue(a > b), but with a nicer default message. assertGreaterEqual assertGreaterEqual( a, b, msg=None ) Just like self.assertTrue(a >= b), but with a nicer default message. assertIn assertIn( member, container, msg=None ) Just like self.assertTrue(a in b), but with a nicer default message. assertIs assertIs( expr1, expr2, msg=None ) Just like self.assertTrue(a is b), but with a nicer default message. assertIsInstance assertIsInstance( obj, cls, msg=None ) Same as self.assertTrue(isinstance(obj, cls)), with a nicer default message. assertIsNone assertIsNone( obj, msg=None ) Same as self.assertTrue(obj is None), with a nicer default message. assertIsNot assertIsNot( expr1, expr2, msg=None ) Just like self.assertTrue(a is not b), but with a nicer default message. assertIsNotNone assertIsNotNone( obj, msg=None ) Included for symmetry with assertIsNone. assertItemsEqual assertItemsEqual(JsonEqual assertJsonEqual( first, second, msg=None ) Asserts that the JSON objects defined in two strings are equal. A summary of the differences will be included in the failure message using assertSameStructure. assertLen assertLen( container, expected_len, msg=None ) Asserts that an object has the expected length. assertLess assertLess( a, b, msg=None ) Just like self.assertTrue(a < b), but with a nicer default message. assertLessEqual assertLessEqual( a, b, msg=None ) Just like self.assertTrue(a <= b), but with a nicer default message. assertListEqual assertListEqual( list1, list2, msg=None ) A list-specific equality assertion. assertLogs @contextlib.contextmanager assertLogs( text, level='info' ) Fail unless a log message of level level or higher is emitted on logger_name or its children. If omitted, level defaults to INFO and logger defaults to the root logger. This method must be used as a context manager, and will yield a recording object with two attributes: output and records. At the end of the context manager, the output attribute will be a list of the matching formatted log messages and the records attribute will be a list of the corresponding LogRecord objects. Example:: with self.assertLogs('foo', level='INFO') as cm: logging.getLogger('foo').info('first message') logging.getLogger('foo.bar').error('second message') self.assertEqual(cm.output, ['INFO:foo:first message', 'ERROR:foo.bar:second message']) assertMultiLineEqual assertMultiLineEqual( first, second, msg=None, **kwargs ) Asserts that two multi-line strings are equal. assertNDArrayNear assertNDArrayNear( ndarray1, ndarray2, err, msg=None ) Asserts that two numpy arrays have near values. assertNear assertNear( f1, f2, err, msg=None ) Asserts that two floats are near each other. Checks that |f1 - f2| < err and asserts a test failure if not. assertNoCommonElements assertNoCommonElements( expected_seq, actual_seq, msg=None ) Checks whether actual iterable and expected iterable are disjoint. assertNotAllClose assertNotAllClose( a, b, **kwargs ) Assert that two numpy arrays, or Tensors, do not have near values. assertNotAllEqual assertNotAllEqual( a, b, msg=None ) Asserts that two numpy arrays or Tensors do not have the same values. assertNotAlmostEqual assertNotAlmostEqual( first, second, places=None, msg=None, delta=None ) Fail if the two objects are equal as determined by their difference rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between the two objects is less than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). Objects that are equal automatically fail. assertNotAlmostEquals assertNotAlmostEquals( *args, **kwargs ) assertNotEmpty assertNotEmpty( container, msg=None ) Asserts that an object has non-zero length. assertNotEndsWith assertNotEndsWith( actual, unexpected_end, msg=None ) Asserts that actual.endswith(unexpected_end) is False. assertNotEqual assertNotEqual( first, second, msg=None ) Fail if the two objects are equal as determined by the '!=' operator. assertNotEquals assertNotEquals( *args, **kwargs ) assertNotIn assertNotIn( member, container, msg=None ) Just like self.assertTrue(a not in b), but with a nicer default message. assertNotIsInstance assertNotIsInstance( obj, cls, msg=None ) Included for symmetry with assertIsInstance. assertNotRegex assertNotRegex( text, unexpected_regex, msg=None ) Fail the test if the text matches the regular expression. assertNotRegexpMatches assertNotRegexpMatches( *args, **kwargs ) assertNotStartsWith assertNotStartsWith( actual, unexpected_start, msg=None ) Asserts that actual.startswith(unexpected_start) is False. assertProtoEquals assertProtoEquals( expected_message_maybe_ascii, message, msg=None ) Asserts that message is same as parsed expected_message_ascii. Creates another prototype of message, reads the ascii message into it and then compares them using self._AssertProtoEqual(). assertProtoEqualsVersion assertProtoEqualsVersion( expected, actual, producer=versions.GRAPH_DEF_VERSION, min_consumer=versions.GRAPH_DEF_VERSION_MIN_CONSUMER, msg=None ) assertRaises assertRaises( expected_exception, *args, **kwargs ) Fail unless an exception of class expected_exception is raised by the callable when invoked with specified positional and keyword arguments. If a different type of exception is raised, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. If called with the callable and arguments omitted, will return a context object used like this:: with self.assertRaises(SomeException): do_something() An optional keyword argument 'msg' can be provided when assertRaises is used as a context object. The context manager keeps a reference to the exception as the 'exception' attribute. This allows you to inspect the exception after the assertion:: with self.assertRaises(SomeException) as cm: do_something() the_exception = cm.exception self.assertEqual(the_exception.error_code, 3) assertRaisesOpError assertRaisesOpError( expected_err_re_or_predicate ) assertRaisesRegex assertRaisesRegex( expected_exception, expected_regex, *args, **kwargs ) Asserts that the message in a raised exception matches a regex. assertRaisesRegexp assertRaisesRegexp( expected_exception, expected_regex, *args, **kwargs ) Asserts that the message in a raised exception matches a regex. assertRaisesWithLiteralMatch assertRaisesWithLiteralMatch( expected_exception, expected_exception_message, callable_obj=None, *args, **kwargs ) Asserts that the message in a raised exception equals the given string. Unlike assertRaisesRegex, this method takes a literal string, not a regular expression. with self.assertRaisesWithLiteralMatch(ExType, 'message'): DoSomething() assertRaisesWithPredicateMatch assertRaisesWithPredicateMatch( err_type, predicate ) Returns a context manager to enclose code expected to raise an exception. If the exception is an OpError, the op stack is also included in the message predicate search. assertRegex assertRegex( text, expected_regex, msg=None ) Fail the test unless the text matches the regular expression. assertRegexMatch assertRegexMatch( actual_str, regexes, message=None ) Asserts that at least one regex in regexes matches str. If possible you should use assertRegex, which is a simpler version of this method. assertRegex takes a single regular expression (a string or re compiled object) instead of a list. Notes: This function uses substring matching, i.e. the matching succeeds if any substring of the error message matches any regex in the list. This is more convenient for the user than full-string matching. If regexes is the empty list, the matching will always fail. Use regexes=[''] for a regex that will always pass. '.' matches any single character except the newline. To match any character, use '(.|\n)'. '^' matches the beginning of each line, not just the beginning of the string. Similarly, '$' matches the end of each line. An exception will be thrown if regexes contains an invalid regex. assertRegexpMatches assertRegexpMatches( *args, **kwargs ) assertSameElements assertSameElements( expected_seq, actual_seq, msg=None ) Asserts that two sequences have the same elements (in any order). This method, unlike assertCountEqual, doesn't care about any duplicates in the expected and actual sequences. assertSameElements([1, 1, 1, 0, 0, 0], [0, 1]) # Doesn't raise an AssertionError If possible, you should use assertCountEqual instead of assertSameElements. assertSameStructure assertSameStructure( a, b, aname='a', bname='b', msg=None ) Asserts that two values contain the same structural content. The two arguments should be data trees consisting of trees of dicts and lists. They will be deeply compared by walking into the contents of dicts and lists; other items will be compared using the == operator. If the two structures differ in content, the failure message will indicate the location within the structures where the first difference is found. This may be helpful when comparing large structures. Mixed Sequence and Set types are supported. Mixed Mapping types are supported, but the order of the keys will not be considered in the comparison. assertSequenceAlmostEqual assertSequenceAlmostEqual( expected_seq, actual_seq, places=None, msg=None, delta=None ) An approximate equality assertion for ordered sequences. Fail if the two sequences are unequal as determined by their value differences rounded to the given number of decimal places (default 7) and comparing to zero, or by comparing that the difference between each value in the two sequences is more than the given delta. Note that decimal places (from zero) are usually not the same as significant digits (measured from the most significant digit). If the two sequences compare equal then they will automatically compare almost equal. assertSequenceEqual assertSequenceEqual( seq1, seq2, msg=None, seq_type=None ) An equality assertion for ordered sequences (like lists and tuples). For the purposes of this function, a valid ordered sequence type is one which can be indexed, has a length, and has an equality operator. assertSequenceStartsWith assertSequenceStartsWith( prefix, whole, msg=None ) An equality assertion for the beginning of ordered sequences. If prefix is an empty sequence, it will raise an error unless whole is also an empty sequence. If prefix is not a sequence, it will raise an error if the first element of whole does not match. assertSetEqual assertSetEqual( set1, set2, msg=None ) A set-specific equality assertion. assertSetEqual uses ducktyping to support different types of sets, and is optimized for sets specifically (parameters must support a difference method). assertShapeEqual assertShapeEqual( np_array, tf_tensor, msg=None ) Asserts that a Numpy ndarray and a TensorFlow tensor have the same shape. assertStartsWith assertStartsWith( actual, expected_start, msg=None ) Assert that actual.startswith(expected_start) is True. assertTotallyOrdered assertTotallyOrdered( *groups, **kwargs ) Asserts that total ordering has been implemented correctly. For example, say you have a class A that compares only on its attribute x. Comparators other than lt are omitted for brevity. class A(object): def init(self, x, y): self.x = x self.y = y def hash(self): return hash(self.x) def lt(self, other): try: return self.x < other.x except AttributeError: return NotImplemented assertTotallyOrdered will check that instances can be ordered correctly. For example, self.assertTotallyOrdered( [None], # None should come before everything else. [1], # Integers sort earlier. [A(1, 'a')], [A(2, 'b')], # 2 is after 1. [A(3, 'c'), A(3, 'd')], # The second argument is irrelevant. [A(4, 'z')], ['foo']) # Strings sort last. assertTrue assertTrue( expr, msg=None ) Check that the expression is true. assertTupleEqual assertTupleEqual( tuple1, tuple2, msg=None ) A tuple-specific equality assertion. assertUrlEqual assertUrlEqual( a, b, msg=None ) Asserts that urls are equal, ignoring ordering of query params. assertWarns assertWarns( expected_warning, *args, **kwargs ) 'msg' can be provided when assertWarns is used as a context object. The context manager keeps a reference to the first matching warning as the 'warning' attribute; similarly, the 'filename' and 'lin) assertWarnsRegex assertWarnsRegex( expected_warning, expected_regex, *args, **kwargs ) Asserts that the message in a triggered warning matches a regexp. Basic functioning is similar to assertWarns() with the addition that only warnings whose messages also match the regular expression are considered successful matches. assert_ assert_( *args, **kwargs ) cached_session @contextlib.contextmanager cached_session( graph=None, config=None, use_gpu=False, force_gpu=False ) Returns a TensorFlow Session for use in executing tests. This method behaves differently than self.session(): for performance reasons cached_session will by default reuse the same session within the same test. The session returned by this function will only be closed at the end of the test (in the TearDown function)..cached_session(use_gpu=True) as sess:() captureWritesToStream @contextlib.contextmanager captureWritesToStream( stream ) A context manager that captures the writes to a given stream. This context manager captures all writes to a given stream inside of a CapturedWrites object. When this context manager is created, it yields the CapturedWrites object. The captured contents can be accessed by calling .contents() on the CapturedWrites. For this function to work, the stream must have a file descriptor that can be modified using os.dup and os.dup2, and the stream must support a .flush() method. The default python sys.stdout and sys.stderr are examples of this. Note that this does not work in Colab or Jupyter notebooks, because those use alternate stdout streams. Example: class MyOperatorTest(test_util.TensorFlowTestCase): def testMyOperator(self): input = [1.0, 2.0, 3.0, 4.0, 5.0] with self.captureWritesToStream(sys.stdout) as captured: result = MyOperator(input).eval() self.assertStartsWith(captured.contents(), "This was printed.") checkedThread checkedThread( target, args=None, kwargs=None ) Returns a Thread wrapper that asserts 'target' completes successfully. This method should be used to create all threads in test cases, as otherwise there is a risk that a thread will silently fail, and/or assertions made in the thread will not be respected. countTestCases countTestCases() create_tempdir create_tempdir( name=None, cleanup=None ) Create a temporary directory specific to the test. This creates a named directory on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary directories for test purposes, as well as makes it easier to setup directories and verify their contents. For example: def test_foo(self): out_dir = self.create_tempdir() out_log = out_dir.create_file('output.log') expected_outputs = [ os.path.join(out_dir, 'data-0.txt'), os.path.join(out_dir, 'data-1.txt'), ] code_under_test(out_dir) self.assertTrue(os.path.exists(expected_paths[0])) self.assertTrue(os.path.exists(expected_paths[1])) self.assertEqual('foo', out_log.read_text()) See also: create_tempfile() for creating temporary files. create_tempfile create_tempfile( file_path=None, content=None, mode='w', encoding='utf8', errors='strict', cleanup=None ) Create a temporary file specific to the test. This creates a named file on disk that is isolated to this test, and will be properly cleaned up by the test. This avoids several pitfalls of creating temporary files for test purposes, as well as makes it easier to setup files, their data, read them back, and inspect them when a test fails. For example: def test_foo(self): output = self.create_tempfile() code_under_test(output) self.assertGreater(os.path.getsize(output), 0) self.assertEqual('foo', output.read_text()) See also: create_tempdir() for creating temporary directories, and _TempDir.create_file for creating files within a temporary directory. debug debug() Run the test without collecting errors in a TestResult defaultTestResult defaultTestResult() doCleanups doCleanups() Execute all cleanup functions. Normally called for you after tearDown. enter_context enter_context( manager ) Returns the CM's value after registering it with the exit stack. Entering a context pushes it onto a stack of contexts. The context is exited when the test completes. Contexts are are exited in the reverse order of entering. They will always be exited, regardless of test failure/success. The context stack is specific to the test being run. This is useful to eliminate per-test boilerplate when context managers are used. For example, instead of decorating every test with @mock.patch, simply do self.foo = self.enter_context(mock.patch(...))' insetUp()`. evaluate evaluate( tensors ) Evaluates tensors and returns numpy values. fail fail( msg=None, prefix=None ) Fail immediately with the given message, optionally prefixed. failIf failIf( *args, **kwargs ) failIfAlmostEqual failIfAlmostEqual( *args, **kwargs ) failIfEqual failIfEqual( *args, **kwargs ) failUnless failUnless( *args, **kwargs ) failUnlessAlmostEqual failUnlessAlmostEqual( *args, **kwargs ) failUnlessEqual failUnlessEqual( *args, **kwargs ) failUnlessRaises failUnlessRaises( *args, **kwargs ) gcs_access @contextlib.contextmanager gcs_access() get_temp_dir get_temp_dir() Returns a unique temporary directory for the test to use. If you call this method multiple times during in a test, it will return the same folder. However, across different runs the directories will be different. This will ensure that across different runs tests will not be able to pollute each others environment. If you need multiple unique directories within a single test, you should use tempfile.mkdtemp as follows: tempfile.mkdtemp(dir=self.get_temp_dir()): id id() Returns the descriptive ID of the test. This is used internally by the unittesting framework to get a name for the test to be used in reports. run run( result=None ) session @contextlib.contextmanager session( graph=None, config=None, use_gpu=False, force_gpu=False ) A context manager for a TensorFlow Session for use in executing tests. Note that this will set this session and the graph as global defaults..session(use_gpu=True):() setUp setUp() Hook method for setting up the test fixture before exercising it. setUpClass @classmethod setUpClass() Hook method for setting up class fixture before running tests in the class. shortDescription shortDescription() Formats both the test method name and the first line of its docstring. If no docstring is given, only returns the method name. This method overrides unittest.TestCase.shortDescription(), which only returns the first line of the docstring, obscuring the name of the test upon failure. skipTest skipTest( reason ) Skip this test. subTest @contextlib.contextmanager subTest( msg=_subtest_msg_sentinel, **params ) Return a context manager that will return the enclosed block of code in a subtest identified by the optional message and keyword parameters. A failure in the subtest marks the test case as failed but resumes execution at the end of the enclosed block, allowing further test code to be executed. tearDown tearDown() Hook method for deconstructing the test fixture after testing it. tearDownClass @classmethod tearDownClass() Hook method for deconstructing the class fixture after running all tests in the class. test_baseclass test_baseclass() test_download_and_prepare_as_dataset test_download_and_prepare_as_dataset( *args, **kwargs ) Run the decorated test method. test_info test_info() test_registered test_registered() test_session @contextlib.contextmanager test_session( graph=None, config=None, use_gpu=False, force_gpu=False ) Use cached_session instead. (deprecated) __call__ __call__( *args, **kwds ) Call self as a function. __eq__ __eq__( other ) Return self==value.
https://www.tensorflow.org/datasets/api_docs/python/tfds/testing/DatasetBuilderTestCase?hl=he
CC-MAIN-2020-50
en
refinedweb
Review your favorite Linux distribution. Blogs Recent Entries Best Entries Best Blogs Blog List Search Blogs Forums Reviews Tutorials Articles Search Today's Posts Mark Forums Read LinuxQuestions.org > Linux Answers > Programming Beginning with Java3200 at 2003-07-31 22:53 It seems that you can't go anywhere on the web without running into some form of Java, this is why I am now going to try to explain not only what Java is, but give some examples of programs that you can make, modify and learn from. What is Java? Java was originally developed by Sun Microsystems in an attempt to create an architecturally neutral programming language that would not have to be complied for various CPU architectures. Oak (as it was originally called, although the name was changed in 1995) was developed in 1991 for such things as home appliances which would not all run on the same type of processors. Just then the web was taking off and it was obvious that an programming language that could be used for many different operating systems and CPU architectures without compling many times would be of great importance. The final solution was to use bytecode. Unlike C++, Java code is not executable, it is code that is run by a Java Virtual Machine (JVM), so once a JVM is introduced for a platform all the Java programs can be run on it. There are two types of Java programs, the applications and the applets. The applications are what are written on a computer and run on a computer without the Internet connected in anyway. An applet is a program made for use on the internet and is the programs that runs in your browser. Sun also gave Java some buzzwords. Simple You might get some arguments from beginners on this, but Java remains a fairly simple language. Secure If you ever try to save from a notepad program (or any program) in Java you will get something saying Quote: This application has requested read/write access to a file on the local filesystem. Allowing this action will only give the application access to the file(s) selected in the following file dialog box. Do you want to allow this action? The Java code runs within the JVM and prompts you if the bytecode wants to read or write. Portable Since it is architecturally neutral it can run on PCs, Macs, PDAs, Cellphones, and about anything else if there is a JVM for it. Object-Oriented While some languages are based around commands, Object-Oriented programming focuses on that data. For a more complete definition I highly recommend going to Google Glossary to learn more. Robust Powerful. This is in part due to the fact that the Java complier will check the code and will not complie it if has errors. Multithreaded Java has built-in support for multi-threaded programming. Architecture-neutral Java is not made for a specific architecture or operating system. Interpreted Thanks to bytecode Java can be used on many different platforms. High Performace Java isn't going to be used for 1st person shooters but it does run fast. Distributed It can be used on many platforms Dynamic Can evolve to changing needs. How Java is like C/C++ A Java programmer would be able to learn C/C++ quickly and a C/C++ programmer would be able to learn Java quickly because they are similar. When Java was made it was not to be a programming language that was better then C/C++ but was made to meet the goals of the interenet age. Java also has differences with C/C++, for example, someone could not write C/C++ code and complie it as Java for Internet use, nor could someone take Java code and complie it into C/C++. Getting started writing Java First you must go and get Java . You can download the JRE, which is the Java Runtime Environment, this is good for using Java but not what we need to compile Java applications. You need to download the SDK, which is the Software Development Kit. Once you have installed this free download you will have two important tools. The first is the javac command which is for compiling the program, and there is the java command for running your program. Once the SDK is installed you try typing javac , if you get an unrecognized error you should put the line PATH=$PATH:/usr/java/j2sdk1.4.2/bin (or replace /usr/java/j2sdk1.4.2/bin[/i] in whatever is the place to javac (this can be found with locate javac )in your /etc/profile file. This way the commands are accessible from anywhere. For writing the programs, most text editors will work (not word processors though, they format the text) but I prefer Kwrite because after you save it as a java file it colors all the text and makes blocks of code collaspable and expandable. First we are going to do an analysis of a simple program. /* This is a simple, simple app. They will get more fun in time :) */ class First { public static void main(String args[]) { System.out.println("Yea! I wrote JAVA"); } } Starting at the top you will see the /* and */ markings. This is for a multi-line comment, anything inside of here will be ignored by the Java compiler. You can also add singal line comments with the // markings with everything after the // as a comment. class is the part of the program that everything is inside of. First is the title of the program, you have to save it as whatever you have after class, and this case-sensitive. public is specifying main(String args[]) as being accessable to code outside of its class. static allows main(String args[]) to be used before any objects have been created. void Saying that main(String args[]) itself doesn't give output main(String args[]) { is a method, this is where the code starts executing, you don't need the Sting args for this program but you will need it later so get used to typing it. :) System.out.println is simply telling the system to print and the ln is telling it to make a new line afterwards. You could also just put print instead of println . Everything in parentheses is where you can type messages. } The first one is closing the public static void main() { line and the second is closing the class First { . Once you have this done this, save your file, but make sure to save it as First.java. Next, get a command prompt and go into the folder where you saved your Java file and type javac First.java Nothing fancy should happen. If something does, just copy and paste the program off of this document and it should compile fine. Nearly all of my errors with Java are typos that the compiler will let me know about. After this, you should have a file called First.class. Make sure you are in the same directory as First.class and type java First and you should see Yea! I wrote JAVA . You do not need to include .class when you are running the program. Next, we get started with variables. Variables can be any sort of things that you assign a value to. class var { public static void main(String args[]) { int v; v = 5; System.out.println("v is " + v); } } The output should be v is 5 Since I have already explained most of the things in the previous program I will explain what the new things do. int v; This is declaring that there will be an integer variable. You must declare a variable before you use it. This variable is call v. The names can be longer then one character and are case sensitive. v = 5; v is now being assigned the value 5. System.out.println("v is " + v); Like before, the System.out.println command is being used, everything inside of quotes is what you type. To add the value of v just a the + v outside of the quotes. Once you have complied the program and ran it you should get. v is 5 You can also do math with Java programs, like in the next example. class math { public static void main(String args[]) { int a; int b; int c; a = 5; b = 9; c = a * b; System.out.println( a + " times " + b + " is " + c); } } The output will be 5 times 9 is 45 Along with *, you can also use the +, -, and / signs for math. You can also do things like b = b * a where what the variable equals includes itself. The next program demonstrates a loop. class loop { public static void main(String args[]) { double gallons, cups; for(gallons = 1; gallons <=10; gallons++) { cups = gallons * 16; System.out.println(gallons + " gallons is " + cups + " cups."); } } } The output will be 1.0 gallons is 16.0 cups. 2.0 gallons is 32.0 cups. 3.0 gallons is 48.0 cups. 4.0 gallons is 64.0 cups. 5.0 gallons is 80.0 cups. 6.0 gallons is 96.0 cups. 7.0 gallons is 112.0 cups. 8.0 gallons is 128.0 cups. 9.0 gallons is 144.0 cups. 10.0 gallons is 160.0 cups. The first thing different about this program is double instead of int. Int declares an integer, these work for a lot of things but loose precision if you were to divide 9 by 2, or dealing with anything that has a decimal. For things with decimals you can use float or double. There are also different types of integers other then int. Int is 32 bits, so it covers from 2,147,483,647 to -2,147,483,648. As its name suggests, long is a very long integer, 64 bit, it can handle numbers slightly over 9,200,000,000,000,000,000 and slightly under the negative. For the smaller numbers you might want to look into short (16 bit, 32,867 through -32,768) and byte(8 bit, 127 through -128). And for characters, you use char. Getting back on track, the next thing you will notice it the two variables being declared are separated by a comma. This saves time, I can write double a, b, c, d; instead of writing out double a; double b; double c; double d; The line with for is the loop itself. The basic form of for is for(starting; restrictions; count by) statement; The gallons = 1; is saying we want the loop starting at 1. You could start it at 57 or -23 if you wanted. gallons <= 10; is saying count everything less then or equal to 10. Here are some important things that will come in handy many times == equal to != not equal to < less than > greater than <= less than or equal to >= greater than or equal to And gallons++ is the same as writing out count = count+1 If you want to count by 2s use count = count+2 or 3s use count = count+3 and so on. The { starts a new block of code, inside we assign cups the value and what to display when the loop is complete. This next program will use the if statement. class ifif { public static void main(String args[]) { double a, b; a = 5 b = 4 if(a == b) System.out.println("Since 4 will never equal 5 this won't be displayed, if it does, buy a new CPU"); if(a != b) System.out.println("Since 4 isn't equal to 5 this will be displayed"); if(a < b) System.out.println("5 isn't less then 4, this will not be seen"); if(a > b) System.out.println("I think you get it by now"); } } If statements are very useful in all types of situations. The if statement can also be used as a block of code, for example [i] if(5 == 5) { double e; e = 5; System.out.println("e is " + e); } This may not seem like a very useful tool, but in time it will become very important. Say for example, you are writing a temperature conversion program. You want to prompt the user "Press A to convert Fahrenheit to Celsius or B to convert Celsius to Fahrenheit" You would have something like if(input == A) { Here is the program to convert Fahrenheit to Celsius } if(input == B { Here is the program to convert Celsius to Fahrenheit } This way only the code needed is executed. Of course, you won't actually use input , that is just easy to understand for now. Here is a program that uses user input to find weight on the moon. import java.io.*; class moon { public static void main(String args[]) throws java.io.IOException { double e; double m; System.out.println("Please enter your weight to get the moon equivalent."); String strA = new BufferedReader(new InputStreamReader(System.in)).readLine(); e = Double.parseDouble(strA); m = e * .17; System.out.println("Your weight on the moon would be " + m + " pounds"); } } This one is more complex. import java.io.*; is bringing in things needed for input. The throws java.io.IOException is for error handling. String strA = new BufferedReader(new InputStreamReader(System.in)).readLine(); is going to get the input and the next line is going to assign e the input. From there it is easy. So knowing most of this you can create simple, but useful applications like this. import java.io.*; public class triangle { public static void main(String args[]) throws java.io.IOException { double a; double b; double c; System.out.println("A is? "); //asking for a String strA = new BufferedReader(new InputStreamReader(System.in)).readLine(); a = Double.parseDouble(strA); System.out.println("B is? "); //asking for b String strB = new BufferedReader(new InputStreamReader(System.in)).readLine(); b = Double.parseDouble(strB); System.out.println("C is? "); //asking for c String strC = new BufferedReader(new InputStreamReader(System.in)).readLine(); c = Double.parseDouble(strC); if(c == 0) { //the block that finds out what c is b = b * b; //getting b squared a = a * a; //getting a squared c = a + b; //a squared + b squared equals c squared double x=Math.sqrt(c); //finding the square root System.out.println("C is " + x); //telling what c is } if(b == 0) { c = c * c; a = a * a; b = a - c; if(b <= 0) b = b * -1; //ensuring that the program will not to try to find the square root of a negative number double y=Math.sqrt(b); System.out.println("B is " + y); } if(a == 0) { b = b * b; c = c * c; a = c - b; if(a <= 0) a = a * -1; double z=Math.sqrt(a); System.out.println("A is " + z); } } } You get prompted for A,B and C side of a right triangle, if you don't know one side, enter in 0 for that one. The only new stuff is double x=Math.sqrt(c); this is just declaring x and at the same time saying it is the square root of c. Thanks to moeminhtun on help with the input. This is only scratching the surface of what can be done with Java so here are some more sources that have great information. Sun has some a lot of documentation on there website. Java 2: A Beginner's Guide is a great book. This is not a for Dummies book though. It has a steeper, yet easy to follow learning curve. On the right hand side of this page you will also see a link called "Free downloadable code", download this code and look though it, you can learn a lot. A complete explanation of the Java buzzwords Some more information from Sun Beginning Java 2 SDK 1.4 Edition Learn to program with Java:43
https://www.linuxquestions.org/linux/answers/Programming/Beginning_with_Java?s=e7fdbd2be35e020c818f9a7288acbba9
CC-MAIN-2020-50
en
refinedweb
I've written about NExpect a few times before: If you'd like to learn more first, open those up in other tabs and have a look. I'll wait... I've had a preference for using NExpect myself, but I'm obviously biased: I had a developer-experience with NUnit assertions which I found lacking, so I made a whole new extensible assertions library. But there's always been something that I haven't been able to qualify about why I prefer NExpect over NUnit assertions. I've even gone so far as to tell people to just use either, because they're both good, and I don't want to be that guy who tells people to use his stuff. Though the deep-equality testing is really convenient and NUnit doesn't do that... Today, that changes. I have a good reason to promote NExpect over NUnit now, apart from all of the obvious benefits of NExpect: - fluid expression - extensibility - deep-equality testing - better collection matching Today I found that NExpect can tell you earlier when you've broken something than NUnit can. Explain how? Consider this innocuous code: using NUnit.Framework; [TestFixture] public class Tests { [Test] public void ShouldPassTheTest() { var result = FetchTheResult(); Assert.That(result, Is.EqualTo(1)); // or, the olde way: Assert.AreEqual(result, 1); } private int FetchTheResult() { return 1; } } Of course, that passes. The interesting bit here is the usage of var, which, in C#, means "figure out the type of the result at compile-time and just fill it in". Long ago, that line would have had to have been: int result = FetchTheResult(); var has some distinct advantages over the prior system. It's: - shorter to write - you only have to remember one "muscle-memory" to store a result (always var ___ = ___) - it means that if you do change return types, things are updated for you. In theory (and practice), it makes you quicker on the first run and when you refactor. The problem comes in when those strong types are discarded by code which compiles perfectly. The compiler can't save you from yourself every time! Enter the refactor When we update the above so that FetchTheResult now returns a complex object, the code will still compile: using NUnit.Framework; [TestFixture] public class Tests { [Test] public void ShouldPassTheTest() { var result = FetchTheResult(); Assert.That(result, Is.EqualTo(1)); // or, the olde way: Assert.AreEqual(result, 1); } public class Result { public int Value { get; set; } public DateTime Created { get; set; } } private int FetchTheResult() { return new Result() { Value = 1, Created = DateTime.Now }; } } for the intent of the flow of logic, we're still returning the value 1, but we've also attached a DateTime property to that to indicate when that result was created. Rebuilding, we find that everything builds just fine, and perhaps we forget to re-run tests (or perhaps this function is very far away from where the result is being used, so we don't realise that we just broke a test somewhere else). This is because the NUnit assertions fall back on object for types: Assert.Thatis genericised for the first parameter, so it has a fixed type there, but takes a Constraint for the second parameter -- and a Constraint can house anything, because it casts down to object Assert.AreEqualhas an overload that expects two objects, so it will fall back on that, and also compile. The test will fail -- if you remember to run it (or when CI runs it). So how does NExpect help? If we'd written the first code like so: using NUnit.Framework; using NExpect; using static NExpect.Expectations; [TestFixture] public class Tests { [Test] public void ShouldPassTheTest() { var result = FetchTheResult(); Expect(result).To.Equal(1); } private int FetchTheResult() { return 1; } } then the refactor would have caused a compilation failure at second line of the test, since NExpect carries the type of result through to the .Equal method. actually, it does a bit of up-casting trickery so that you can, for example, Expect((byte)1).To.Equal(1);, but that's beside the point for this particular post... So the second the refactor had gone through, the test wouldn't compile, which means I could find the failure even before running the tests and update them accordingly, instead of waiting for tests to fail at a later date. Conclusion Strongly-typed languages have a certain amount of popularity because the types can help us to avoid common errors. This is why TypeScript is so popular. C# is strongly typed, but there are ways that the strength of that typing can be diluted, and one of those ways is found in NUnit assertions. NExpect protects you here and alerts you about potentially breaking changes to your code before even running tests. Neat, huh? I think I'll pat myself on the back for that 🤣 Discussion
https://dev.to/fluffynuts/nexpect-not-just-pretty-syntax-ja5
CC-MAIN-2020-50
en
refinedweb
EmberEndpoint Struct Reference Gives the endpoint information for a particular endpoint. #include < stack-info.h> Gives the endpoint information for a particular endpoint. Field Documentation ◆ description The endpoint's description. ◆ endpoint An endpoint of the application on this node. ◆ inputClusterList Input clusters the endpoint will accept. ◆ outputClusterList Output clusters the endpoint may send. The documentation for this struct was generated from the following file: stack-info.h
https://docs.silabs.com/zigbee/6.7/em35x/structEmberEndpoint
CC-MAIN-2020-50
en
refinedweb
SocketAddress class. More... #include <SocketAddress.h> SocketAddress class. Representation of an IP address and port pair. Definition at line 36 136 of file SocketAddress.h. Get the human-readable IP address. Allocates memory for a string and converts binary address to human-readable format. String is freed in the destructor. Get the raw IP bytes. Definition at line 118 of file SocketAddress.h. Get the IP address version. Definition at line 127 of file SocketAddress.h. Get the port. Definition at line 145 of file SocketAddress.h. Test if address is zero. Copy address from another SocketAddress. Set the raw IP address. Set the IP address. Set the raw IP bytes and IP version. Set the port. Definition at line 100 of file SocketAddress.h. Compare two addresses for equality. Compare two addresses for equality.
https://os.mbed.com/docs/mbed-os/v6.1/mbed-os-api-doxy/class_socket_address.html
CC-MAIN-2020-50
en
refinedweb
Allows to temporarily modify the power management settings on a MacOS to run processes uninterruptedly. Project description espressomaker espressomaker is a Python module that provides a context manager (and other functionalities) to modify the power management settings on a MacOS X system so that running processes (e.g. a machine learning training algorithm) can run uninterruptedly. More specifically, espressomaker is a wrapper of caffeinate, a shell command in MacOS X distributions that allows users to create assertions that alter the system's sleep behavior. In this sense. espressomaker runs caffeinate subprocesses from the Python 3 or iPython kernel from which espressomaker is being executed and allows to control them through a simple and intuitive set of Python commands. Table of Contents 1. Quick Start To install espressomaker, run the following on your Terminal: $ pip install espressomaker To execute espressomaker as a context manager for a block of code, run on a Python 3 or an iPython kernel: from espressomaker import Espresso with Espresso.shot(): function_1() function_2() ... The indented code will be run using the context manager of espressomaker, Espresso.shot(). While this code is running, your Mac won't go to sleep. 2. Purpose espressomaker is a Python 3 module that does not let your Mac sleep when you are running a block of code. Many applications that run on Python may take hours to finish, like machine learning training algorithms. If a block of code is actively running on a Python 3 kernel and the system goes to sleep, the running processes of the kernel will be interrupted and all the progress related to that block of code will be lost. To avoid that, espressomaker provides a handful of functionalities, including a useful context manager to run blocks of code. The context manager will allow you to use Espresso, a module of espressomaker, to temporarily change the power management settings of your Mac while the indented block of code runs. Once the code is done running, the settings will return to its default state. espressomaker is a package that intends to facilitate dealing with lengthy Python jobs such that the user can, in a single line of code, forget about dealing interrupted processes. 3. Installation To install espressomaker, run on your terminal: $ pip install espressomaker Important note Installation using pip should be uneventful. However, if when importing a ModuleNotFoundError occurs, it could be possible that your current kernel is not including the directory where espressomaker is installed at. Usually, the pip installation process will place the packages contents at: /Users/<your_username>/.local/lib/pythonX.Y/site-packages/; or, /Users/<your_username>/anaconda3/lib/pythonX.Y/site-packages/, if using Anaconda; where X.Y is your current Python version (root environment). You can check if these directories are considered by Python's system's path by running: import sys sys.path If it is not included, or if you chose to install it in a specific directory, add its path by running: sys.path.append('<path>') 4. User-guide 4.1 Working principle espressomaker is a Python 3 package whose Espresso module allows to run caffeinate subprocesses from the running Python 3 kernel (e.g. a Jupyter Notebook, a .py script). The main go as subprocess as a context manager for a block of code or as a manual method call (i.e. the user defines when to start running the assertion and when to finish it). When a function of Espresso is called, an assertion that prevents a MacOS X system from sleeping is 4.2 Importing the module To import the functionalities of espressomaker to Python, run: from espressomaker import Espresso 4.3 Using as a context manager ( Espresso.shot()) One of the main advantages of espressomaker is that its Espresso module allows to run a given piece of code using a context manager. The context manager enables the caffeinate functionality for the code inside it and then closes the process — kills the caffeinate subprocess. To use this functionality, run: with Espresso.shot(display_on = True): function_1() function_2() ... 4.4 Manually opening and closing tabs ( Espresso.opentab() and Espresso.closetab()) Pending. 4.5 Viewing open tabs — caffeinate running processes ( Espresso.check()) Pending. 4.6 Killing all caffeinate processes ( Espresso.killall()) Pending. start-formatting end-formatting Formatting passed and completed. Release History v0.1a1 Basic skeleton of the package ready for shipping to TestPyPI. v0.1a2 - Automated file exporting from .ipynb to .py and standardized the formatting. - Automated file exporting from .ipynb to .md and standardized the formatting. - Improved variable handling on instance methods. - Added a message for the user to recognize the current kernel when using opentabs(). - Added debugging tracers for all private methods. - Finished class- and static- methods docstrings. - Updated setup.py. v0.1b1 - Improved HISTORY.md title formatting. - Updated "classifiers" of setup.py. - Changed opentabs() classmethod to check() in espresso.py. - Successfully ran manual tests in all APIs. TODO - Finished user-guide in README.md - Finish unittest. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/espressomaker/0.1b1/
CC-MAIN-2020-50
en
refinedweb
Troubleshooting This article provides solutions for issues you might encounter while working with the Kendo UI Charts for Angular. Installation When I try to install the Chart component, a Hammer.js-related error occurs The kendo-chart component requires Hammer.js to be installed as a dependency. The error occurs, because Hammer.js is not loaded. Solution Install the Hammer.js package and import it by using the import 'hammerjs'; command. The Chart is clipped during printing The Chart element may overflow the page during printing. To resolve this issue: - Set the print dimensions of the Chart by using a Media Query. - Before printing, call the resizemethod. Due to Bug 774398, Firefox does not support the suggested approach.
https://www.telerik.com/kendo-angular-ui-develop/components/charts/troubleshooting/
CC-MAIN-2020-50
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards Metaparse is a compile-time parser generator library. Metaparse provides tools to write parsers parsing the content of string literals at compile-time, which makes it possible to embed domain specific languages (DSLs) into C++ without altering their original syntax (Note that the DSL code snippets will be written in string literals, therefore they may need to be escaped). Assuming that the following template class is available for representing rational numbers in template metaprogramming: template <class Num, class Denom> struct rational; Metaparse can be used to construct such values (instantiate the rational template class) from string literals. Instead of rational<1, 3> one can write RATIONAL("1/3") which can be processed by any standard-compliant C++11 compiler (and mean the same). This can be implemented using Metaparse the following way: using namespace boost::metaparse; typedef sequence_apply2< rational, token<int_>, last_of<lit_c<'/'>, token<int_>> > rational_grammar; typedef build_parser<entire_input<rational_grammar>> rational_parser; #define RATIONAL(s) \ (::rational_parser::apply<BOOST_METAPARSE_STRING(s)>::type::run()) Note that this is the entire implementation. Also note that this implementation can be extended to improve the error reports in certain situations. Metaparse is intended to be used by library authors to make their APIs follow the usual notation of the library's problem domain. Boost.Proto is a tool for building expression templates. Expression templates can be used for DSL embedding by reinterpreting valid C++ expressions as expressions written in the DSL to embed. This technique has the advantages over parsing the content of string literals (which is Metaparse's approach) that: Using expression templates for DSL embedding has the following disadvantages: Proto helps embedding DSLs based on expression templates, while Metaparse helps embedding DSLs based on parsing the content of string literals. Spirit is a tool that can be used to build parsers parsing (among others) the content of string literals at runtime, while Metaparse is a tool that can be used to parse the content of string literals at compile-time. This library is useful to provide an API for C++ libraries dealing with a problem domain with its own notation. Interfaces built with Metaparse make it possible for the users of the interface to use the domain's own notation, which makes it easier to write and maintain the code. Users of the interface don't need to learn a new notation (trying to follow the problem domain's original one) library authors constrained by the C++ syntax can provide. Example problem domains are regular expressions and SQL queries. Metaparse can also be useful to build libraries validating the content of string literals at compile time instead of doing it at runtime or not doing it at all. This can help finding (and fixing) bugs in the code early (during compilation). An example problem domain is printf. The parsers built with Metaparse process the content of the string literals using template metaprograms. This impacts the library using Metaparse the following way: Metaparse is based on C++98. The only exception is the BOOST_METAPARSE_STRING macro, which needs C++11 constexpr. Compilers Metaparse is actively (in a CI environment) tested on: Metaparse is expected to work on Visual C++ 2012 and 2010.
https://www.boost.org/doc/libs/1_73_0/doc/html/metaparse/preface.html
CC-MAIN-2020-50
en
refinedweb
). NETINTRO(4) BSD Kernel Interfaces Manual NETINTRO(4) NAME networking -- introduction to networking facilities SYNOPSIS #include <sys/socket.h> #include <net/route.h> #include <net/if.h> DESCRIPTION This section is a general introduction to the networking facilities available in the system. Documen-tation Documentation tation in this part of section trans-port. transport. port. A protocol family may support multiple methods of addressing, though the current protocol imple-mentations implementations mentations do not. A protocol family is normally comprised of a number of protocols, one per socket(2) type. It is not required that a protocol family support all socket types. A protocol family may con-tain contain tain out-ofband. PROTOCOLS The system currently supports the Internet protocols, the Xerox Network Systems(tm)_UNIX 1 /* local to host (pipes) */ #define AF_INET 2 /* internetwork: UDP, TCP, etc. */ #define AF_NS 6 /* Xerox NS protocols */ #define AF_CCITT 10 /* CCITT protocols, X.25 etc */ #define AF_HYLINK 15 /* NSC Hyperchannel */ #define AF_ISO 18 /* ISO protocols */ ROUTING Mac OS X provides some packet routing facilities. The kernel maintains a routing information database, which is used in selecting the appropriate network interface when transmitting packets. A user process (or possibly multiple co-operating processes) maintains this database by sending mes-sages messages sages inter-faces interfaces faces such as the loopback interface, lo(4), do not. interface. SIOCSIFBRDADDR Set broadcast address for protocol family and interface. Ioctl requests to obtain addresses and requests both to set and retrieve other data are still fully supported and use the ifreq structure: SIOCGIFADDR Get interface address for protocol family. SIOCGIFDSTADDR Get point to point address for protocol family and interface. separate calls to set destination or broadcast addresses, or network masks (now an integral feature of multiple protocols) a separate structure is used to specify all three facets simultaneously (see below). One would use a slightly tailored version of this struct specific to each family (replacing each sockaddr by one of the family-specific familyspecific specific type). Where the sockaddr itself is larger than the default size, one needs to modify the ioctl identifier itself to include the total size, as described in ioctl. SIOCDIFADDR This requests deletes the specified address from the list associated with an interface. It also uses the if_aliasreq structure to allow for the possibility of protocols allow-ing allowing ing multiple masks or destination addresses, and also adopts the convention that speci-fication specification fication of the default address means to delete the first address contain the length, in bytes, of the configuration list. /* * */ }; SEE ALSO ioctl(2), socket(2), intro(4), config(5), routed(8) HISTORY The netintro manual appeared in 4.3BSD-Tahoe. 4.2 Berkeley Distribution November 30, 1993 4.2 Berkeley Distribution
http://developer.apple.com/DOCUMENTATION/Darwin/Reference/ManPages/man4/netintro.4.html
crawl-002
en
refinedweb
Catering to a larger audience. Since RSS is XML, treat it as such What happens? The RSS or Atom feed is an XML document. When the users clicks it, the browser downloads the document and attempts to display it as XML, that is, as raw code. To make things worse, some versions of Internet Explorer display a security warning. The situation is not easy. Technically RSS feeds should have the application/rss+xml MIME type while Atom feeds should be identified as application/atom+xml. If the MIME type is correct and the visitor has a news aggregator properly setup on his machine, then the browser will launch it automatically. In practice, few visitors have the proper configuration so they are more likely to see a cryptic error message. Consequently most Web sites use the text/xml or application/xml MIME type which is incorrect but at least causes the browser to display the raw XML code. It's only a slight improvement over an error message but, hey, take what you can. To makes matter worse, some sites serve XML documents as application/octet-stream due to misconfiguration. The webmaster must update the server configuration to use the most appropriate MIME type. For example, with the popular Apache Web server, this is done in the .htaccess file. To alleviate the problem, the most recent browsers sniff incoming XML files to categorize them properly. Sniffing simply means that they read the first few bytes looking for RSS or Atom tags. But, again, that requires the visitor to use an RSS-aware browser. Fortunately there's a better solution: an XSLT stylesheet. If the browser treats the feed as an XML document, it will use the stylesheet to render a sensible page. If, on the other hand, the browser recognizes an RSS and Atom feed, it will ignore the stylesheet. Voilà, the best of both worlds! Listing 1 is an RSS document associated to a stylesheet (an excerpt from my podcast's feed). Note the second line is an xml-stylesheet processing instruction. This is the crucial link to the stylesheet. The href is the path to the stylesheet. Listing 1. RSS excerpt Listing 2 is the stylesheet. If you are familiar with XSLT, you can probably write a similar stylesheet in minutes... but for one quirk covered in the next subsection. If you know XSLT, feel free to skip directly to the next subsection. If you are not familiar with XSLT, read on as I'll cover the bare-bone minimum needed to process RSS in the remainder of this series. Listing 2. XSLT stylesheet Note that the stylesheet is an XML document (just like the RSS or Atom stream) and it uses a namespace (like Atom elements or RSS extensions). Typically for XML documents, you must be wary of the syntax. Specifically make sure that the opening tags ( <p>) have a matching closing tag ( </p>). Empty tags must follow a special syntax ( <br />). The stylesheet contains XSLT statements to control the rendering (in the namespace, in Listing 2 the XSLT statements begin with the xsl prefix) and HTML tags to control the layout of the page. If you want to modify Listing 2 to adapt to your site layout, you can edit the contents of the xsl:template element. Make sure to preserve the XSLT statements. The four XSLT instructions that you will need are xsl:value-of, xsl:for-each, xsl:if and the use of curly brackets. The xsl:value-of instruction extracts information from the RSS or Atom document and inserts it in HTML. The instruction takes one attribute called select with a path to the RSS or Atom element that you're interested in. For example, to copy the feed title, the path is rss/channel/title since the title element appears underneath channel which itself is included in rss. As you can see, the path simply lists the elements in the order in which they appear in the RSS document. To copy data from an attribute, prefix the attribute name with @ as in rss/channel/item/enclosure/@url. xsl:for-each is the looping instruction. It loops over a set of elements (selected through the attribute as well), in this case the various items. For each item, the stylesheet prints some basic information: title, description and a link to the enclosure. The curly brackets in attributes (and only in attributes) extract information from the RSS or Atom feed, like xsl:value-of does for regular text. In the stylesheet, curly brackets populate several href attributes. Last but not least, the xsl:if instruction executes only if its test succeeds. In Listing 2, xsl:if tests whether it's worth printing the enclosure information or whether the enclosure tag is absent. I have only scratched the surface of XSLT but if you make good use of copy-and-paste and Listing 2, you can adapt it to fit your site layout. Check Resources for a more complete tutorial on XSLT. If your stylesheet does not work as expected, review the following: - Make sure you declare the namespace exactly as shown (the xmlns:xslattribute), do not change the URI - If your document uses other namespaces (such as the iTunes extension), make sure you declare those as well - If the stylesheet seems to work but you cannot extract some data, it most likely is a path problem (when I teach XSLT, incorrect path causes 80% of the problems with my students) Most feed editors allow you to insert the required xml-stylesheet instruction. If yours does not support it, you can turn to FeedBurner to update the feed. FeedBurner even offers a default XSLT stylesheet (see Resources). All would be good in the land of RSS and Atom if Firefox had support for the disable-output-escaping feature in XSLT but it does not. disable-output-escaping is an obscure feature in XSLT that serves only one purpose: it processes tags that appear in other tags, such as CDATA sections. And, RSS and Atom make heavy use of CDATA sections to embed HTML code. With disable-output-escaping, you should be able to lift the HTML tags from the feed and insert them right into the HTML page...but for Firefox. Firefox essentially ignores the instruction so it ends up displaying the raw HTML code. There's been some debate in the Firefox community as to whether this behavior was standard compliant or not. Nevertheless it is a problem and one for which you need a solution. Fortunately Sean M. Burke came up with a clever piece of JavaScript that circumvents the limitation. Mr Burke was kind enough to place his code in the public domain, enabling anyone to use it in any project. For your convenience, I include a link to a copy of his script in Resources. For the script to work, your stylesheet must insert a div section with the id "cometestme." Your stylesheet must also place every item that needs escaping in paragraphs with the name "decodeable." Finally, you must call the script ( go_decoding()), as you load the HTML document. What to do in the stylesheet? Listing the items in the RSS or Atom feed is only the beginning. After all, that content is already available elsewhere on the Web site and the feed was designed to drive subscriptions, not replicate content. Most webmasters who attach an XSLT stylesheet to their RSS or Atom feed include instructions on how to install a news aggregator and subsequently subscribe to their feed. While this sounds like the right thing to do, it has been my experience that visitors who are presented with such a page are unlikely to install an aggregator. With viruses and trojans, surfers are suspicious of demands to install software. Many sites therefore include instructions that direct visitors to an online aggregator such as Google Reader or Yahoo!. While it seems like a good idea, I remain unconvinced on its efficiency. Unless they already subscribe to many feeds, visitors are not much more likely to sign up for a new service than to install new software. Assuming they do, what are the chances that they will remember to visit the online aggregator? My thinking is that if they have to bookmark a site, I'd rather they bookmark mine. Thinking outside of the box Personally I offer an option to subscribe through e-mail through one of the RSS-to-e-mail services. You can safely assume that every visitor has an e-mail address. I have drafted detailed instructions, outlining the options and including a very prominent e-mail subscription form. I have found that one fifth of the visitors to my podcast would rather subscribe through e-mail over subscribing through RSS. RSS and Atom might be better technical solutions but nothing beats a familiar service... and e-mail is the most familiar service for many visitors. To save me having to write subscription instructions twice (with the risk that they might diverge in the future), I use the stylesheet in Listing 3. It is simpler than Listing 2 and it implements an HTML redirect to send visitors to a regular page on my site. Listing 3. The most simple solution? Redirect them! When a visitor clicks the RSS feed, if her browser does not recognize RSS, it behaves like a redirect! This article has shown how to put a friendly face on an RSS or Atom feed. Until they are more widely known, it is a good idea to implement this as a safeguard. Learn - Introduction to Syndication (Vincent Lauria, developerWorks, June 2006): Get started with RSS -- find why it is so popular and its benefits, which feed readers are available and might fit your needs. Plus learn about RSS and Atom subscriptions available to you from IBM. - The RSS specification: Dig into this surprisingly readable spec for all the details on RSS. - An overview of the Atom 1.0 Syndication Format (James Snell, developerWorks, June 2005): Consider Atom, an alternative to RSS. - Hands-on training (Don Day, developerWorks, Mar 2000): Learn Extensible Stylesheet Language Transformations (XSLT) with this simple, hands-on exercise that demonstrates the principles of the XSLT. - Process Atom 1.0 with XSLT tutorial (Uche Ogbuji, developerWorks, December 2005): Take a more in-depth look at XSLT and Atom. - JavaScript hack: Download the original instructions and code from Sean M. Burke Web site. - FeedBurner: If your RSS editor does not support stylesheets, you might want to sign with FeedBurner. - developerWorks RSS feeds: Learn more about content feeds and add pre-defined or custom RSS and Atom feeds for developerWorks content to your site. - IBM trial software: Build your next development project with trial software available for download directly from developerWorks. Discuss - Participate in the discussion forum. - Atom and RSS forum: Find tips, tricks, and answers about Atom, RSS, or other syndication topics in this forum. - XML zone discussion forums: Participate in any of several XML-centered forums. - developerWorks blogs: Get involved in the developerWorks community. Benoît Marchal is a Belgian consultant. He is the author of XML by Example, Second Edition and other XML books. You can contact him at [email protected] or through his personal site at.
http://www.ibm.com/developerworks/xml/library/x-wxxm37.html
crawl-002
en
refinedweb
The QTextDecoder class provides a state-based decoder. More... #include <qtextcodec.h> List of all member functions. The decoder converts a text format into Unicode, remembering any state that is required between calls. See also QTextCodec::makeEncoder() and Internationalization with Qt. file is part of the Qt toolkit. Copyright © 1995-2002 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.0/qtextdecoder.html
crawl-002
en
refinedweb