function solve(args) {
    let expression = args[0],
        openIndex = 0,
        closeIndex = 0;
    openIndex = expression.indexOf('(', openIndex);
    closeIndex = expression.indexOf(')', closeIndex);
    while (true) {
        if (openIndex > closeIndex) {
            console.log('Incorrect');
            return;
        }
        openIndex = expression.indexOf('(', openIndex + 1);
        closeIndex = expression.indexOf(')', closeIndex + 1);
        if ((openIndex === -1) && (closeIndex === -1)) {
            console.log('Correct');
            return;
        }
    }
}

solve(['((a+b)/5-d)']);                      //correct
solve([')(a+b))']);                          //incorrect
solve([')(a+b)(']);                          //incorrect
solve([')((a+b)']);                          //incorrect
solve(['))a + b((']);                        //incorrect
solve(['(a+b)+d+)(b( * c)']);                //correct - obviously not
solve([')(a+b))']);                          //incorrect
solve(['(a+b+(b*c)+(d/s) + (-5)) + ()()']);  //correct
solve(['((a + (b * c))*(())']);              // incorrect
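As the comments above note, the indexOf-pairing approach accepts some unbalanced inputs such as '(a+b)+d+)(b( * c)'. Below is a minimal counter-based sketch, shown for comparison rather than as the original author's solution: it scans left to right, increments a depth counter on '(' and decrements on ')', and reports 'Incorrect' as soon as the depth goes negative or if it ends non-zero.

function solveBalanced(args) {
    // Alternative sketch, not part of the original code above.
    let expression = args[0];
    let depth = 0;
    for (const ch of expression) {
        if (ch === '(') depth++;
        else if (ch === ')') depth--;
        if (depth < 0) {
            // a ')' appeared before its matching '('
            console.log('Incorrect');
            return;
        }
    }
    console.log(depth === 0 ? 'Correct' : 'Incorrect');
}

solveBalanced(['(a+b)+d+)(b( * c)']);               //incorrect
solveBalanced(['((a + (b * c))*(())']);             //incorrect
solveBalanced(['(a+b+(b*c)+(d/s) + (-5)) + ()()']); //correct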
[ShoestringPHP API documentation (Doxygen 1.6.1, generated 16 Sep 2009), "Data Fields - Functions" index, letter q: queryDb(), defined in sMYSQLQuery and sQuery. Generated navigation markup omitted.]
(Nanowerk News) CEA-Leti today announced the launch of the TARGET-PDT project, designed to increase the effectiveness of photodynamic therapy (PDT) for treating cancer by developing a novel nanocarrier-based approach. PDT is a minimally invasive treatment that destroys cancer cells with a combination of a photoactive drug, known as a photosensitizer, and a specific wavelength of light. When the photosensitizers are activated by the laser light, they produce a form of oxygen that destroys the illuminated cancer cells. Focusing on using PDT against bone cancer and head-and-neck squamous cell carcinoma, a tumor of the oral cavity for example, the project will study the delivery and targeting of photosensitizers encapsulated in lipid nanoparticles. For both cancer forms, current treatment regimens often result in low cure rates and carry serious side effects or a poor functional outcome. The nanocarriers offer a high payload that will include antibodies targeting specific tumor biomarkers.

PDT has already shown significant potential for improving cancer treatment because it offers strictly focused application, compatibility with other forms of treatment, the option for repeated use, excellent cosmetic or functional outcomes and fast recovery. Typically, however, there is only a modest enhanced accumulation of the photosensitizer in tumor tissues, and additional selectivity is mainly provided by confining the illumination to the target area. The use of PDT has therefore been restrained by the limited effectiveness of photosensitizers in reaching the tumor and by the potential damage to healthy cells near the tumor. Improved targeting of the photosensitizer and nanoparticles is necessary to prevent damage to the surrounding healthy tissue. CEA-Leti, which is coordinating this European project, expects the nanocarrier-based approach to significantly improve delivery and targeting of the photosensitizer, enhancing concentrations at the tumor site even after systemic application.

The TARGET-PDT project will allow the partners to study all aspects of PDT treatment: nanocarrier size and payload, photosensitizers such as chlorins and phthalocyanines, targeting method and types of laser irradiation. The experimental approach will be developed into a preclinical validation to deliver an optimised combination for a first clinical “nano-PDT” at a later stage. By using nanotechnology-based photosensitizer delivery systems, the project will set the stage for improved control of the therapy and more comfort for cancer patients. CEA-Leti is coordinating TARGET-PDT as part of its research program on organic nanocarriers and delivery systems for clinical applications such as molecular imaging and drug delivery.

The partnership includes highly complementary partners. In addition to CEA-Leti, they are the European industrial leader in PDT, the German company biolitec AG; University Hospital Zurich, which is recognized for its clinical PDT capabilities; French academic laboratories belonging to the Centre National de la Recherche Scientifique (CNRS) and the Anticancer Research Center; and Centre Alexis Vautrin in Nancy, France, which specializes in PDT from bench to bedside.

CEA is a French public research and technology organisation, with activities in four main areas: energy, information technologies, healthcare technologies, and defence and security.
Within CEA, the Laboratory for Electronics & Information Technology (CEA-Leti) works with companies to increase their competitiveness through technological innovation and technology transfer. CEA-Leti focuses on micro- and nanotechnologies and their applications, from wireless devices and systems to biology, healthcare and photonics. Nanoelectronics and microsystems (MEMS) are at the core of its activities. As a major player in the MINATEC excellence center, CEA-Leti operates 8,000 m² of state-of-the-art clean rooms, running 24/7, on 200 mm and 300 mm wafer standards. With 1,200 employees, CEA-Leti trains more than 150 Ph.D. students and hosts 200 assignees from partner companies. Strongly committed to creating value for industry, CEA-Leti puts a strong emphasis on intellectual property and owns more than 1,400 patent families. In 2008, contractual income covered more than 75 percent of its €205 million budget.
At the onset of WWII in 1939, the German Luftwaffe was the strongest and most battle-experienced air force in the world. It dominated the skies over Europe with aircraft much more advanced than their counterparts. The Luftwaffe was central to the German Blitzkrieg (lightning war) doctrine, as the close air support provided by twin-engine medium bombers, Stuka dive bombers and an overwhelming force of tactical fighters was key to several early successes.

Unlike the British and American air forces, the Luftwaffe decided early on not to develop four-engine bombers in any significant numbers, and was thus unable to conduct an effective long-range strategic bombing campaign against either the Russians or the Western Allies when needed. The new technology of radar was also used effectively against the powerful Luftwaffe during the Battle of Britain, the first time that German forces failed to achieve a major goal. So while Germany was on the leading edge of much technology during WWII, these two issues helped halt the powerful German war machine.

The Messerschmitt Bf 109 was the most versatile and widely produced fighter aircraft operated by the Luftwaffe; its kill ratio (almost 9:1) made it a superior German fighter during the war. The Focke-Wulf Fw 190 is considered one of the best fighters of World War II. A superb fighting machine, it soon gained a reputation and the nickname Butcher Bird. The Junkers Ju 87 Stuka was a main asset of the Blitzkrieg, able to place bombs with deadly accuracy. The Luftwaffe was desperate to get the design into operational use, but delays in engine deliveries meant only a handful were delivered before the war ended. RAF pilots began to encounter this new fighter and described it as being fast -- it maneuvered unlike anything they had ever seen before.
/* See LICENSE file for copyright and license details. */
static char sccsid[] = "@(#) ./lib/xcalloc.c";

#include <stdlib.h>

#include "../inc/cc.h"

void *
xcalloc(size_t n, size_t size)
{
	void *p = calloc(n, size);

	if (!p)
		die("out of memory");
	return p;
}
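For context, a self-contained usage sketch of the wrapper above. It re-declares xcalloc and stubs die() locally because the real declarations live in ../inc/cc.h, which is not shown here; treat the stub as an assumption rather than the project's actual error handler.

/* Standalone usage sketch; die() is a local stub, not the project's version. */
#include <stdio.h>
#include <stdlib.h>

static void die(const char *msg) { fprintf(stderr, "%s\n", msg); exit(1); }

static void *
xcalloc(size_t n, size_t size)
{
	void *p = calloc(n, size);

	if (!p)
		die("out of memory");
	return p;
}

int main(void)
{
	/* Allocate a zero-initialized array of 16 ints; aborts instead of returning NULL. */
	int *nums = xcalloc(16, sizeof(int));
	nums[0] = 42;
	printf("%d\n", nums[0]);
	free(nums);
	return 0;
}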
Superb FULLY FURNISHED 2 bedroom apartment situated just 100 metres to the BEACH, shops, station and restaurants. It's LIGHT & BRIGHT with modern decor, new carpet and under cover car parking. Current owners are heading overseas and the property is available early to mid February. Your chance to live the BEACH LIFESTYLE.
Cryptocurrencies are quickly cementing their role as the next big thing in technology, becoming more common and shedding their reputation as a tool for illicit, black-market transactions. The automotive industry is increasingly embracing this new technology, bringing what's known as blockchain into the industry lexicon. Blockchain is an emerging technology, and we don't yet fully understand everything it can do. It has the potential to revolutionize the way we do things and to act as a new internet of sorts, bringing with it whole new ways of doing things. For the burgeoning automotive industry, where growth is happening faster than the industry can keep up, blockchain may be the answer.
.alfresco-navigation-Tree.categories .dijitIconFolderOpen, .alfresco-navigation-Tree.categories .dijitFolderOpened { background-position: 0; background-image: url(./images/category-16.png); } .alfresco-navigation-Tree.categories .dijitIconFolderClosed, .alfresco-navigation-Tree.categories .dijitFolderClosed { background-position: 0; background-image: url(./images/category-16.png); }
---
lastmod: 2016-10-01
date: 2014-05-14T02:13:50Z
menu:
  main:
    parent: content
next: /content/ordering
prev: /content/types
title: Archetypes
weight: 50
toc: true
---

In Hugo v0.11, we introduced the concept of a content builder. Using the CLI command <code>hugo new <em>[path/to/my/content]</em></code>, an author could create an empty content file, with the date and title automatically defined in the front matter of the post. While this was a welcome feature, active writers need more flexibility.

When defining a custom content type, you can use an **archetype** as a way to define the default metadata for a new post of that type.

**Archetypes** are quite literally archetypal content files with pre-configured [front matter](/content/front-matter). An archetype will populate each new content file of a given type with any default metadata you've defined whenever you run the `hugo new` command.

## Example

### Step 1. Creating an archetype

In the following example scenario, suppose we have a blog with a single content type (blog post). Our imaginary blog will use ‘tags’ and ‘categories’ for its taxonomies, so let's create an archetype file with ‘tags’ and ‘categories’ pre-defined:

#### archetypes/default.md

```toml
+++
tags = ["x", "y"]
categories = ["x", "y"]
+++
```

> __CAVEAT:__ Some editors (e.g. Sublime, Emacs) do not insert an EOL (end-of-line) character at the end of the file (i.e. EOF). If you get a [strange EOF error](/troubleshooting/strange-eof-error/) when using `hugo new`, please open each archetype file (i.e.&nbsp;`archetypes/*.md`) and press <kbd>Enter</kbd> to type a carriage return after the closing `+++` or `---` as necessary.

### Step 2. Using the archetype

Now, with `archetypes/default.md` in place, let's create a new post in the `post` section with the `hugo new` command:

    $ hugo new post/my-new-post.md

Hugo will now create the file with the following contents:

#### content/post/my-new-post.md

```toml
+++
title = "my new post"
date = "2015-01-12T19:20:04-07:00"
tags = ["x", "y"]
categories = ["x", "y"]
+++
```

We see that the `title` and `date` variables have been added, in addition to the `tags` and `categories` variables which were carried over from `archetypes/default.md`.

Congratulations! We have successfully created an archetype and used it to quickly scaffold out a new post. But wait, what if we want to create some content that isn't exactly a blog post, like a profile for a musician? Let's see how using **archetypes** can help us out.

### Creating custom archetypes

Previously, we created a new content type by adding a new subfolder to the content directory. In this case, its name would be `content/musician`. To begin using a `musician` archetype for each new `musician` post, we simply need to create a file named after the content type called `musician.md` and put it in the `archetypes` directory, similar to the one below.

#### archetypes/musician.md

```toml
+++
name = ""
bio = ""
genre = ""
+++
```

Now, let's create a new musician:

    $ hugo new musician/mozart.md

This time, Hugo recognizes our custom `musician` archetype and uses it instead of the default one. Take a look at the new `musician/mozart.md` post. You should see that the generated file's front matter now includes the variables `name`, `bio`, and `genre`.
#### content/musician/mozart.md

```toml
+++
title = "mozart"
date = "2015-08-24T13:04:37+02:00"
name = ""
bio = ""
genre = ""
+++
```

## Using a different front matter format

By default, the front matter will be created in the TOML format regardless of what format the archetype is using. You can specify a different default format in your site-wide config file (e.g. `config.toml`) using the `MetaDataFormat` directive. Possible values are `"toml"`, `"yaml"` and `"json"`. An illustrative example of this setting appears at the end of this page.

## Which archetype is being used

The following rules apply when creating new content:

* If an archetype with a filename matching the new post's [content type](/content/types) exists, it will be used.
* If no match is found, `archetypes/default.md` will be used.
* If neither is present and a theme is in use, then within the theme:
    * If an archetype with a filename that matches the content type being created exists, it will be used.
    * If no match is found, `archetypes/default.md` will be used.
* If no archetype files are present, then the one that ships with Hugo will be used.

Hugo provides a simple archetype which sets the `title` (based on the file name) and the `date` in RFC&nbsp;3339 format based on [`now()`](http://golang.org/pkg/time/#Now), which returns the current time.

> *Note: `hugo new` does not automatically add `draft = true` when the user
> provides an archetype. This is by design, the rationale being that
> the archetype should set its own value for all fields.
> `title` and `date`, which are dynamic and unique for each piece of content,
> are the sole exceptions.*

The content type is automatically detected based on the file path passed to the Hugo CLI command <code>hugo new <em>[my-content-type/post-name]</em></code>. To override the content type for a new post, include the `--kind` flag during creation.

> *Note: if you wish to use archetypes that ship with a theme, the theme MUST be specified in your `config.toml`.*
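The following is a minimal sketch of a `config.toml` combining the two settings discussed above, the front matter format used by `hugo new` and the theme whose archetypes should be available. Everything other than `metaDataFormat` and `theme` (the `baseurl` and `title` values included) is a placeholder, not part of the example site above.

```toml
# Illustrative config.toml sketch -- not a complete site configuration.
# baseurl and title are placeholders.
baseurl        = "http://example.org/"
title          = "My Hugo Site"
metaDataFormat = "yaml"      # front matter format used by `hugo new`: "toml", "yaml" or "json"
theme          = "my-theme"  # must be set if you want to use archetypes that ship with a theme
```

With this in place, `hugo new musician/mozart.md` would emit YAML front matter instead of TOML, and any archetypes shipped with `my-theme` become eligible under the selection rules above.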
/* * Copyright (c) 2009 Erin Catto http://www.box2d.org * * This software is provided 'as-is', without any express or implied * warranty. In no event will the authors be held liable for any damages * arising from the use of this software. * Permission is granted to anyone to use this software for any purpose, * including commercial applications, and to alter it and redistribute it * freely, subject to the following restrictions: * 1. The origin of this software must not be misrepresented; you must not * claim that you wrote the original software. If you use this software * in a product, an acknowledgment in the product documentation would be * appreciated but is not required. * 2. Altered source versions must be plainly marked as such, and must not be * misrepresented as being the original software. * 3. This notice may not be removed or altered from any source distribution. */ #ifndef CONFINED_H #define CONFINED_H class Confined : public Test { public: enum { e_columnCount = 0, e_rowCount = 0 }; Confined() { { b2BodyDef bd; b2Body* ground = m_world->CreateBody(&bd); b2EdgeShape shape; // Floor shape.Set(b2Vec2(-10.0f, 0.0f), b2Vec2(10.0f, 0.0f)); ground->CreateFixture(&shape, 0.0f); // Left wall shape.Set(b2Vec2(-10.0f, 0.0f), b2Vec2(-10.0f, 20.0f)); ground->CreateFixture(&shape, 0.0f); // Right wall shape.Set(b2Vec2(10.0f, 0.0f), b2Vec2(10.0f, 20.0f)); ground->CreateFixture(&shape, 0.0f); // Roof shape.Set(b2Vec2(-10.0f, 20.0f), b2Vec2(10.0f, 20.0f)); ground->CreateFixture(&shape, 0.0f); } float32 radius = 0.5f; b2CircleShape shape; shape.m_p.SetZero(); shape.m_radius = radius; b2FixtureDef fd; fd.shape = &shape; fd.density = 1.0f; fd.friction = 0.1f; for (int32 j = 0; j < e_columnCount; ++j) { for (int i = 0; i < e_rowCount; ++i) { b2BodyDef bd; bd.type = b2_dynamicBody; bd.position.Set(-10.0f + (2.1f * j + 1.0f + 0.01f * i) * radius, (2.0f * i + 1.0f) * radius); b2Body* body = m_world->CreateBody(&bd); body->CreateFixture(&fd); } } m_world->SetGravity(b2Vec2(0.0f, 0.0f)); } void CreateCircle() { float32 radius = 2.0f; b2CircleShape shape; shape.m_p.SetZero(); shape.m_radius = radius; b2FixtureDef fd; fd.shape = &shape; fd.density = 1.0f; fd.friction = 0.0f; b2Vec2 p(RandomFloat(), 3.0f + RandomFloat()); b2BodyDef bd; bd.type = b2_dynamicBody; bd.position = p; //bd.allowSleep = false; b2Body* body = m_world->CreateBody(&bd); body->CreateFixture(&fd); } void Keyboard(unsigned char key) { switch (key) { case 'c': CreateCircle(); break; } } void Step(Settings* settings) { bool sleeping = true; for (b2Body* b = m_world->GetBodyList(); b; b = b->GetNext()) { if (b->GetType() != b2_dynamicBody) { continue; } if (b->IsAwake()) { sleeping = false; } } if (m_stepCount == 180) { m_stepCount += 0; } //if (sleeping) //{ // CreateCircle(); //} Test::Step(settings); for (b2Body* b = m_world->GetBodyList(); b; b = b->GetNext()) { if (b->GetType() != b2_dynamicBody) { continue; } b2Vec2 p = b->GetPosition(); if (p.x <= -10.0f || 10.0f <= p.x || p.y <= 0.0f || 20.0f <= p.y) { p.x += 0.0; } } m_debugDraw.DrawString(5, m_textLine, "Press 'c' to create a circle."); m_textLine += 15; } static Test* Create() { return new Confined; } }; #endif
Record your employees' time and jobs with this daily time and job form. The Daily Time and Job ticket has crack-and-peel stick labels on part one. Ideal for Service Departments and Repair Shops. 4-1/4" x 11" Crack and Peel! Part 3 is an 80 lb. white tag copy for employees.
[Play! API Javadoc (javadoc 1.8.0_60, generated 29 Mar 2016), "Uses of Class play.templates.JavaExtensions": no usage of play.templates.JavaExtensions. Generated navigation markup and footer omitted.]
import { async, ComponentFixture, TestBed } from '@angular/core/testing'; import { UseractivityComponent } from './useractivity.component'; describe('UseractivityComponent', () => { let component: UseractivityComponent; let fixture: ComponentFixture<UseractivityComponent>; beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [ UseractivityComponent ] }) .compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(UseractivityComponent); component = fixture.componentInstance; fixture.detectChanges(); }); it('should create', () => { expect(component).toBeTruthy(); }); });
import PropTypes from 'prop-types'; import React from 'react'; import {connectToStores} from 'fluxible-addons-react'; import sendReportShowWrongFields from '../../actions/report/sendReportShowWrongFields'; import closeReportModal from '../../actions/report/closeReportModal'; import ContentStore from '../../stores/ContentStore'; import FocusTrap from 'focus-trap-react'; import UserProfileStore from '../../stores/UserProfileStore'; import SendReportStore from '../../stores/SendReportStore'; import { Button, Container, Form, Modal, TextArea, Icon, Segment } from 'semantic-ui-react'; import classNames from 'classnames'; import {publicRecaptchaKey} from '../../configs/general'; import ReCAPTCHA from 'react-google-recaptcha'; import sendReport from '../../actions/report/sendReport'; import {FormattedMessage, defineMessages} from 'react-intl'; const headerStyle = { 'textAlign': 'center' }; class ReportModal extends React.Component { constructor(props) { super(props); this.state = { modalOpen: false, activeTrap: false, 'grecaptcharesponse': undefined }; this.messages = defineMessages({ input_name:{ id: 'reportModal.input_name', defaultMessage:'Name' }, modal_title:{ id:'reportModal.modal_title', defaultMessage: 'Report legal or spam issue with' }, modal_title_2:{ id:'reportModal.modal_title_2', defaultMessage: 'content' }, modal_description:{ id: 'reportModal.modal_description', defaultMessage: 'Select the reason of the report and give a brief description about it.' }, reason_tooltip:{ id: 'reportModal.reason_tooltip', defaultMessage: 'Please select a reason' }, reason_option_reason:{ id: 'reportModal.reason_option_reason', defaultMessage:'Reason' }, reason_option_spam:{ id: 'reportModal.reason_option_spam', defaultMessage:'Spam' }, reason_option_copy:{ id: 'reportModal.reason_option_copy', defaultMessage:'Copyright' }, explanation:{ id: 'reportModal.explanation', defaultMessage:'Explanation' }, explanation_placeholder:{ id: 'reportModal.explanation_placeholder', defaultMessage:'Please give a short explanation about your report' }, send_button:{ id:'reportModal.send_button', defaultMessage:'Send' }, cancel_button:{ id:'reportModal.cancel_button', defaultMessage:'Cancel' }, swal_title:{ id: 'reportModal.swal_title', defaultMessage:'Deck Report' }, send_swal_text:{ id: 'reportModal.send_swal_text', defaultMessage:'Report sent. Thank you!' }, send_swal_button:{ id: 'reportModal.send_swal_button', defaultMessage:'Close' }, send_swal_error_text: { id: 'reportModal.send_swal_error_text', defaultMessage:'An error occured while sending the report. Please try again later.' 
}, send_swal_error_button:{ id: 'reportModal.send_swal_error_button', defaultMessage:'Close' } }); this.handleSendReport= this.handleSendReport.bind(this); this.handleOpen = this.handleOpen.bind(this); this.handleClose = this.handleClose.bind(this); this.unmountTrap = this.unmountTrap.bind(this); } componentDidMount() { $('#inlineSpeakerNotes').each(function () { $(this).css('z-index', 0); }); $(this.refs.reasonDropdown).dropdown(); const reportValidation = { fields: { reason: { identifier: 'reason' }, text: { identifier: 'text' } }, onSuccess: this.handleSendReport.bind(this) }; //$('.ui.form.report').form(reportValidation); } componentDidUpdate() { $(this.refs.reasonDropdown).dropdown(); } getSelected() { return this.refs.reason.value; } getSwalMessages(){ //Get the messages which will show in the swal showed when the report is sent return { title: this.context.intl.formatMessage(this.messages.swal_title), text: this.context.intl.formatMessage(this.messages.send_swal_text), confirmButtonText: this.context.intl.formatMessage(this.messages.send_swal_button), error_text: this.context.intl.formatMessage(this.messages.send_swal_error_text), error_confirmButtonText: this.context.intl.formatMessage(this.messages.send_swal_error_button) }; } handleSendReport(e) { let wrongFields = { reason: false, text: false, name: false }; let everythingOk = true; e.preventDefault(); if(!this.refs.reason.value){ wrongFields.reason = true; everythingOk = false; } if( !this.refs.text.value ){ wrongFields.text = true; everythingOk = false; } else { if(this.refs.text.value.trim() === ''){ wrongFields.text = true; everythingOk = false; } } if((!this.refs.name || !this.refs.name.value || (this.refs.text.value.trim() === '')) && (this.props.UserProfileStore.userid === '')) { wrongFields.name = true; everythingOk = false; } // Recaptcha Validation if( this.props.UserProfileStore.userid === '' && (this.state === null || this.state.grecaptcharesponse === undefined)) { everythingOk= false; } this.context.executeAction(sendReportShowWrongFields, wrongFields); if(everythingOk) { let deckOrSlideReportLine = ''; if(this.props.ContentStore.selector.stype === 'slide') { deckOrSlideReportLine = 'Report on Slide: ' + this.props.ContentStore.selector.sid + '\n' + 'From Deck: ' + this.props.ContentStore.selector.id + '\n'; } else { deckOrSlideReportLine = 'Report on Deck: ' + this.props.ContentStore.selector.id + '\n'; } let userId = ''; // If user is not logged in, use the name provided if(this.props.UserProfileStore.userid === '') { userId = this.refs.name.value; } else { userId = this.props.UserProfileStore.userid; } let payload = { subject :'[SlideWiki] Report on Deck/Slide' , text :'Report made by user: ' + userId + '\n' + deckOrSlideReportLine + 'Reason of the report: ' + this.refs.reason.value + '\n' + 'Description of the report: \n\n' + this.refs.text.value + '\n\n\n', swal_messages : this.getSwalMessages() }; this.context.executeAction(sendReport,payload); this.handleClose(); } } handleOpen(){ $('#app').attr('aria-hidden', 'true'); this.setState({ modalOpen:true, activeTrap:true }); } handleClose(){ $('#app').attr('aria-hidden', 'false'); this.setState({ modalOpen: false, activeTrap: false }); this.context.executeAction(closeReportModal,{}); } unmountTrap() { if(this.state.activeTrap){ this.setState({ activeTrap: false }); $('#app').attr('aria-hidden','false'); } } onRecaptchaChange(response) { this.setState({ 'grecaptcharesponse': response }); } render() { const messages = defineMessages({ tooltipReport: { id: 
'reportModal.tooltip', defaultMessage: 'Report' } }); let fieldClass_reason = classNames({ 'ui': true, 'selection': true, 'dropdown': true, 'bottom': true, 'error': this.props.SendReportStore.wrongFields.reason }); let fieldClass_text = classNames({ 'ui': true, 'center': true, 'aligned': true, 'field': true, 'error': this.props.SendReportStore.wrongFields.text }); let fieldClass_name = classNames({ 'ui': true, 'center': true, 'aligned': true, 'field': true, 'error': this.props.SendReportStore.wrongFields.name }); const recaptchaStyle = {display: 'inline-block'}; let nameField = <div className={fieldClass_name} style={{width:'auto'}} > <div className="ui icon input" style={{width:'50%'}} ><input type="text" id="name_label" name="name" ref="name" placeholder={ this.context.intl.formatMessage(this.messages.input_name)} autoFocus aria-required="true"/></div> </div>; let captchaField = <div > <input type="hidden" id="recaptcha" name="recaptcha"></input> <ReCAPTCHA className="g-recaptcha" style={recaptchaStyle} ref="recaptcha" sitekey={publicRecaptchaKey} onChange={this.onRecaptchaChange.bind(this)}/> </div>; let trigger; if(this.props.deckpage) trigger = <Button basic fluid icon labelPosition='left' color='grey' aria-hidden="false" aria-label="Report" data-tooltip={this.context.intl.formatMessage(messages.tooltipReport)} onClick={this.handleOpen}><Icon name='exclamation circle' color='black'/>Report Issue</Button>; else if (!this.props.textOnly) trigger = <Button icon aria-hidden="false" className="ui button" type="button" aria-label="Report" data-tooltip={this.context.intl.formatMessage(messages.tooltipReport)} onClick={this.handleOpen} > <Icon name="warning circle" size='large' /> </Button>; else trigger = <div className={this.props.className} aria-label="Report" data-tooltip={this.context.intl.formatMessage(messages.tooltipReport)} onClick={this.handleOpen} > <span><Icon name="warning circle" size='large' /> Report</span> </div>; return( <Modal trigger={ trigger } open={this.state.modalOpen} onOpen={this.handleOpen} onClose={this.handleClose} id="reportModal" aria-labelledby="reportModalHeader" aria-describedby="reportModalDescription" tabIndex="0" > <FocusTrap id="focus-trap-reportModal" className = "header" active={this.state.activeTrap} focusTrapOptions={{ onDeactivate:this.unmountTrap, clickOutsideDeactivates:true, initialFocus: '#reportModalDescription' }} > <Modal.Header className="ui center aligned" id="reportModalHeader"> <h1 style={headerStyle}>{this.context.intl.formatMessage(this.messages.modal_title)} {this.props.ContentStore.selector.stype === 'slide' ? 'slide' : 'deck'} {this.context.intl.formatMessage(this.messages.modal_title_2)}</h1> </Modal.Header> <Modal.Content> <Container> <Segment color="blue" textAlign="left" padded> <div id="reportModalDescription" tabIndex="0">{this.context.intl.formatMessage(this.messages.modal_description)}</div> <Segment textAlign="center" > <Form id="reportForm"> <Segment textAlign="left" > {(this.props.UserProfileStore.userid === '') ? 
nameField: ''} <label htmlFor="reason">{this.context.intl.formatMessage(this.messages.reason_option_reason)}</label> <div style={{width:'50%'}} className={fieldClass_reason} style={{display:'block'}} data-tooltip={this.context.intl.formatMessage(this.messages.reason_tooltip)} ref="reasonDropdown"> <input type="hidden" id="reason" name="reason" ref="reason"/> <i className="dropdown icon"/> <div className="default text">{this.context.intl.formatMessage(this.messages.reason_option_reason)}</div> <div className="menu" role="menu"> <div className="item" data-value="copyright" role="menuitem">{this.context.intl.formatMessage(this.messages.reason_option_copy)}</div> <div className="item" data-value="spam" role="menuitem">{this.context.intl.formatMessage(this.messages.reason_option_spam)}</div> </div> </div> <br/> <div className={fieldClass_text}> <label htmlFor="reportComment">{this.context.intl.formatMessage(this.messages.explanation)}</label> <textarea ref="text" id="reportComment" name="text" style={{width:'100%', minHeight: '6em', height: '6em'}} placeholder={this.context.intl.formatMessage(this.messages.explanation_placeholder)}></textarea> </div> {(this.props.UserProfileStore.userid === '') ? captchaField: ''} </Segment> <Button color="blue" type="submit" content={this.context.intl.formatMessage(this.messages.send_button)} icon='warning circle' onClick={this.handleSendReport} /> <Button icon="remove" color="red" type="button" onClick={this.handleClose} content={this.context.intl.formatMessage(this.messages.cancel_button)} /> <div className="ui error message" role="region" aria-live="polite"/> </Form> </Segment> </Segment> </Container> </Modal.Content> </FocusTrap> </Modal> ); } } ReportModal.contextTypes = { executeAction: PropTypes.func.isRequired, intl: PropTypes.object.isRequired }; ReportModal = connectToStores(ReportModal, [ContentStore, UserProfileStore, SendReportStore], (context, props) => { return { UserProfileStore: context.getStore(UserProfileStore).getState(), SendReportStore: context.getStore(SendReportStore).getState(), ContentStore: context.getStore(ContentStore).getState() }; }); export default ReportModal;
<?php /** * @package Widgetkit * @author YOOtheme http://www.yootheme.com * @copyright Copyright (C) YOOtheme GmbH * @license http://www.gnu.org/licenses/gpl.html GNU/GPL */ jimport('joomla.html.editor'); /* Class: EditorWidgetkitHelper Editor helper class, to integrate the Joomla Editor Plugins. */ class EditorWidgetkitHelper extends WidgetkitHelper { /* Function: init Init System Editor Mixed */ public function init() { if (is_a($this['system']->document ,'JDocumentRAW')) { return; } $editor = JFactory::getConfig()->getValue('config.editor'); if (in_array(strtolower($editor), array('tinymce', 'jce', 'jckeditor', 'codemirror'))) { JEditorWidgetkit::getInstance($editor)->_loadEditor(); } if ($editor == 'jckeditor') { $plugin = JPluginHelper::getPlugin('editors', 'jckeditor'); $plugin->params->set('returnScript',false); JEditorWidgetkit::getInstance('jckeditor')->display('text', '', '100%', '120', 10, 5,false); } } } /* Class: JEditorWidgetkit Custom editor class. Just to have _loadEditor() as public method */ class JEditorWidgetkit extends JEditor { /* Function: init Returns the global Editor object, only creating it if it doesn't already exist. Parameters: String $editor - The editor to use. Returns: JEditorWidgetkit Obj */ public static function getInstance($editor = 'none') { static $instances; if (!isset ($instances)) { $instances = array (); } $signature = serialize($editor); if (empty ($instances[$signature])) { $instances[$signature] = new JEditorWidgetkit($editor); } return $instances[$signature]; } /* Function: _loadEditor Load the editor Parameters: Array $config - Associative array of editor config paramaters. Returns: Mixed */ public function _loadEditor($config = array()) { return parent::_loadEditor($config); } }
<?php namespace app\modules\EDBOadmin\models; use Yii; use yii\behaviors\TimestampBehavior; use \yii\db\ActiveRecord; use common\models\EDBOKOATUUL1; /** * This is the model class for table "{{%edbo_directorytables}}". * * @property integer $id * @property string $name_directory * @property integer $created_at * @property integer $updated_at */ class EdboDirectorytables extends ActiveRecord { /** * @inheritdoc */ public function behaviors() { return [ [ 'class' => TimestampBehavior::className(), 'attributes' => [ ActiveRecord::EVENT_BEFORE_INSERT => ['created_at', 'updated_at'],//,'sessionguid_updated_at' ActiveRecord::EVENT_BEFORE_UPDATE => ['updated_at' ], //'sessionguid_updated_at' ]], ]; } /** * @inheritdoc */ public static function tableName() { return '{{%edbo_directorytables}}'; } /** * @inheritdoc */ public function rules() { return [ [['name_directory'], 'required'], //, 'created_at', 'updated_at' //[['created_at', 'updated_at'], 'integer'], [['name_directory', 'description', 'function'], 'string'] //, 'max' => 255 ]; } /** * @inheritdoc */ public function attributeLabels() { return [ 'id' => Yii::t('app', 'ID'), 'name_directory' => Yii::t('app', 'Name Directory'), 'description' => Yii::t('app', 'Description'), 'function' => Yii::t('app', 'Function'), 'created_at' => Yii::t('app', 'Created At'), 'updated_at' => Yii::t('app', 'Updated At'), ]; } public function findModelTable($id) { if (($model = EDBOKOATUUL1::findOne($id)) !== null) { return $model; } else { return NULL; } } }
#!/usr/bin/env python3 # -*- coding: utf-8 -*- """Wrapper script to invoke pywikibot-based scripts. This wrapper script invokes script by its name in this search order: 1. Scripts listed in `user_script_paths` list inside your `user-config.py` settings file in the given order. Refer :ref:`External Script Path Settings<external-script-path-settings>`. 2. User scripts residing in `scripts/userscripts` (directory mode only). 3. Scripts residing in `scripts` folder (directory mode only). 4. Maintenance scripts residing in `scripts/maintenance` (directory mode only). 5. Framework scripts residing in `pywikibot/scripts`. This wrapper script is able to invoke scripts even if the script name is misspelled. In directory mode it also checks package dependencies. Run scripts with pywikibot in directory mode using:: python pwb.py <pwb options> <name_of_script> <options> or run scripts with pywikibot installed as a site package using:: pwb <pwb options> <name_of_script> <options> This wrapper script uses the package directory to store all user files, will fix up search paths so the package does not need to be installed, etc. Currently, `<pwb options>` are :ref:`global options`. This can be used for tests to set the default site (see T216825):: python pwb.py -lang:de bot_tests -v .. versionchanged:: 7.0 pwb wrapper was added to the Python site package lib """ # (C) Pywikibot team, 2012-2022 # # Distributed under the terms of the MIT license. # # ## KEEP PYTHON 2 SUPPORT FOR THIS SCRIPT ## # from __future__ import print_function import os import sys import types from difflib import get_close_matches from importlib import import_module from time import sleep from warnings import warn try: from pathlib import Path except ImportError as e: from setup import PYTHON_VERSION, VERSIONS_REQUIRED_MESSAGE print(VERSIONS_REQUIRED_MESSAGE.format(version=PYTHON_VERSION)) sys.exit(e) pwb = None site_package = False def check_pwb_versions(package): """Validate package version and scripts version. 
Rules: - Pywikibot version must not be older than scrips version - Scripts version must not be older than previous Pywikibot version due to deprecation policy """ from pywikibot.tools import Version scripts_version = Version(getattr(package, '__version__', pwb.__version__)) wikibot_version = Version(pwb.__version__) if scripts_version.release > wikibot_version.release: # pragma: no cover print('WARNING: Pywikibot version {} is behind scripts package ' 'version {}.\nYour Pywikibot may need an update or be ' 'misconfigured.\n'.format(wikibot_version, scripts_version)) # calculate previous minor release if wikibot_version.minor > 0: prev_wikibot = Version('{v.major}.{}.{v.micro}' .format(wikibot_version.minor - 1, v=wikibot_version)) if scripts_version.release < prev_wikibot.release: # pragma: no cover print('WARNING: Scripts package version {} is behind legacy ' 'Pywikibot version {} and current version {}\nYour scripts ' 'may need an update or be misconfigured.\n' .format(scripts_version, prev_wikibot, wikibot_version)) elif scripts_version.release < wikibot_version.release: # pragma: no cover print('WARNING: Scripts package version {} is behind current version ' '{}\nYour scripts may need an update or be misconfigured.\n' .format(scripts_version, wikibot_version)) del Version # The following snippet was developed by Ned Batchelder (and others) # for coverage [1], with Python 3 support [2] added later, # and is available under the BSD license (see [3]) # [1] # https://bitbucket.org/ned/coveragepy/src/b5abcee50dbe/coverage/execfile.py # [2] # https://bitbucket.org/ned/coveragepy/src/fd5363090034/coverage/execfile.py # [3] # https://bitbucket.org/ned/coveragepy/src/2c5fb3a8b81c/setup.py?at=default#cl-31 def run_python_file(filename, args, package=None): """Run a python file as if it were the main program on the command line. :param filename: The path to the file to execute, it need not be a .py file. :type filename: str :param args: is the argument list to present as sys.argv, as strings. :type args: List[str] :param package: The package of the script. Used for checks. :type package: Optional[module] """ # Create a module to serve as __main__ old_main_mod = sys.modules['__main__'] main_mod = types.ModuleType('__main__') sys.modules['__main__'] = main_mod main_mod.__file__ = filename main_mod.__builtins__ = sys.modules['builtins'] if package: main_mod.__package__ = package.__name__ check_pwb_versions(package) # Set sys.argv and the first path element properly. old_argv = sys.argv old_argvu = pwb.argvu sys.argv = [filename] + args pwb.argvu = [Path(filename).stem] + args sys.path.insert(0, os.path.dirname(filename)) try: with open(filename, 'rb') as f: source = f.read() exec(compile(source, filename, 'exec', dont_inherit=True), main_mod.__dict__) finally: # Restore the old __main__ sys.modules['__main__'] = old_main_mod # Restore the old argv and path sys.argv = old_argv sys.path.pop(0) pwb.argvu = old_argvu # end of snippet from coverage def abspath(path): """Convert path to absolute path, with uppercase drive letter on win32.""" path = os.path.abspath(path) if path[0] != '/': # normalise Windows drive letter path = path[0].upper() + path[1:] return path def handle_args(pwb_py, *args): """Handle args and get filename. 
:return: filename, script args, local args for pwb.py :rtype: tuple """ fname = None index = 0 for arg in args: if arg in ('-version', '--version'): fname = 'version.py' elif arg.startswith('-'): index += 1 continue else: fname = arg if not fname.endswith('.py'): fname += '.py' break return fname, list(args[index + int(bool(fname)):]), args[:index] def _print_requirements(requirements, script, variant): # pragma: no cover """Print pip command to install requirements.""" if not requirements: return if len(requirements) > 1: format_string = '\nPackages necessary for {} are {}.' else: format_string = '\nA package necessary for {} is {}.' print(format_string.format(script or 'pywikibot', variant)) print('Please update required module{} with:\n\n' .format('s' if len(requirements) > 1 else '')) for requirement in requirements: print(' pip install "{}"\n' .format(str(requirement).partition(';')[0])) def check_modules(script=None): """Check whether mandatory modules are present. This also checks Python version when importing dependencies from setup.py :param script: The script name to be checked for dependencies :type script: str or None :return: True if all dependencies are installed :rtype: bool :raise RuntimeError: wrong Python version found in setup.py """ import pkg_resources from setup import script_deps missing_requirements = [] version_conflicts = [] if script: dependencies = script_deps.get(Path(script).name, []) else: from setup import dependencies try: next(pkg_resources.parse_requirements(dependencies)) except ValueError as e: # pragma: no cover # T286980: setuptools is too old and requirement parsing fails import setuptools setupversion = tuple(int(num) for num in setuptools.__version__.split('.')) if setupversion < (20, 8, 1): # print the minimal requirement _print_requirements( ['setuptools>=20.8.1'], None, 'outdated ({})'.format(setuptools.__version__)) return False raise e for requirement in pkg_resources.parse_requirements(dependencies): if requirement.marker is None \ or pkg_resources.evaluate_marker(str(requirement.marker)): try: pkg_resources.resource_exists(requirement, requirement.name) except pkg_resources.DistributionNotFound as e: missing_requirements.append(requirement) print(e) except pkg_resources.VersionConflict as e: version_conflicts.append(requirement) print(e) del pkg_resources del dependencies del script_deps _print_requirements(missing_requirements, script, 'missing') _print_requirements(version_conflicts, script, 'outdated') if version_conflicts and not missing_requirements: # pragma: no cover print('\nYou may continue on your own risk; type CTRL-C to stop.') try: sleep(5) except KeyboardInterrupt: return False return not missing_requirements filename, script_args, global_args = handle_args(*sys.argv) # Search for user-config.py before creating one. # If successful, user-config.py already exists in one of the candidate # directories. See config.py for details on search order. # Use env var to communicate to config.py pwb.py location (bug T74918). 
_pwb_dir = os.path.split(__file__)[0] os.environ['PYWIKIBOT_DIR_PWB'] = _pwb_dir try: import pywikibot as pwb except RuntimeError: # pragma: no cover os.environ['PYWIKIBOT_NO_USER_CONFIG'] = '2' import pywikibot as pwb # user-config.py to be created if filename is not None and not (filename.startswith('generate_') or filename == 'version.py'): print("NOTE: 'user-config.py' was not found!") print('Please follow the prompts to create it:') run_python_file(os.path.join( _pwb_dir, 'pywikibot', 'scripts', 'generate_user_files.py'), []) # because we have loaded pywikibot without user-config.py loaded, # we need to re-start the entire process. Ask the user to do so. print('Now, you have to re-execute the command to start your script.') sys.exit(1) except ImportError as e: # raised in textlib sys.exit(e) def find_alternates(filename, script_paths): """Search for similar filenames in the given script paths.""" from pywikibot import config, input_choice, output from pywikibot.bot import QuitKeyboardInterrupt, ShowingListOption from pywikibot.tools.formatter import color_format assert config.pwb_close_matches > 0, \ 'config.pwb_close_matches must be greater than 0' assert 0.0 < config.pwb_cut_off < 1.0, \ 'config.pwb_cut_off must be a float in range [0, 1]' print('ERROR: {} not found! Misspelling?'.format(filename), file=sys.stderr) scripts = {} script_paths = [['.']] + script_paths # add current directory for path in script_paths: folder = Path(_pwb_dir).joinpath(*path) for script_name in folder.iterdir(): name, suffix = script_name.stem, script_name.suffix if suffix == '.py' and not name.startswith('__'): scripts[name] = script_name # remove .py for better matching filename = filename[:-3] similar_scripts = get_close_matches(filename, scripts, config.pwb_close_matches, config.pwb_cut_off) if not similar_scripts: return None if len(similar_scripts) == 1: script = similar_scripts[0] wait_time = config.pwb_autostart_waittime output(color_format( 'NOTE: Starting the most similar script ' '{lightyellow}{0}.py{default}\n' ' in {1} seconds; type CTRL-C to stop.', script, wait_time)) try: sleep(wait_time) # Wait a bit to let it be cancelled except KeyboardInterrupt: return None else: msg = '\nThe most similar scripts are:' alternatives = ShowingListOption(similar_scripts, pre=msg, post='') try: prefix, script = input_choice('Which script to be run:', alternatives, default='1') except QuitKeyboardInterrupt: return None print() # pragma: no cover return str(scripts[script]) def find_filename(filename): """Search for the filename in the given script paths. .. versionchanged:: 7.0 Search users_scripts_paths in config.base_dir """ from pywikibot import config path_list = [] # paths to find misspellings def test_paths(paths, root): """Search for filename in given paths within 'root' base directory.""" for file_package in paths: package = file_package.split('.') path = package + [filename] testpath = os.path.join(root, *path) if os.path.exists(testpath): return testpath path_list.append(package) return None if site_package: # pragma: no cover script_paths = [_pwb_dir] else: script_paths = [ 'scripts.userscripts', 'scripts', 'scripts.maintenance', 'pywikibot.scripts', ] user_script_paths = [] if config.user_script_paths: # pragma: no cover if isinstance(config.user_script_paths, list): user_script_paths = config.user_script_paths else: warn("'user_script_paths' must be a list,\n" 'found: {}. Ignoring this setting.' 
.format(type(config.user_script_paths))) found = test_paths(user_script_paths, config.base_dir) if found: # pragma: no cover return found found = test_paths(script_paths, _pwb_dir) if found: return found return find_alternates(filename, path_list) def execute(): """Parse arguments, extract filename and run the script. .. versionadded:: 7.0 renamed from :func:`main` """ global filename if global_args: # don't use sys.argv unknown_args = pwb.handle_args(global_args) if unknown_args: # pragma: no cover print('ERROR: unknown pwb.py argument{}: {}\n' .format('' if len(unknown_args) == 1 else 's', ', '.join(unknown_args))) return False if not filename: return False file_package = None if not os.path.exists(filename): filename = find_filename(filename) if filename is None: return True # When both pwb.py and the filename to run are within the current # working directory: # a) set __package__ as if called using python -m scripts.blah.foo # b) set __file__ to be relative, so it can be relative in backtraces, # and __file__ *appears* to be an unstable path to load data from. # This is a rough (and quick!) emulation of 'package name' detection. # a much more detailed implementation is in coverage's find_module. # https://bitbucket.org/ned/coveragepy/src/default/coverage/execfile.py cwd = abspath(os.getcwd()) absolute_path = abspath(os.path.dirname(sys.argv[0])) if absolute_path == cwd: absolute_filename = abspath(filename)[:len(cwd)] if absolute_filename == cwd: relative_filename = os.path.relpath(filename) # remove the filename, and use '.' instead of path separator. file_package = os.path.dirname( relative_filename).replace(os.sep, '.') filename = os.path.join(os.curdir, relative_filename) module = None if file_package and file_package not in sys.modules: try: module = import_module(file_package) except ImportError as e: warn('Parent module {} not found: {}' .format(file_package, e), ImportWarning) help_option = any(arg.startswith('-help:') or arg == '-help' for arg in script_args) if site_package or check_modules(filename) or help_option: run_python_file(filename, script_args, module) return True def main(): """Script entry point. Print doc if necessary. .. versionchanged:: 7.0 previous implementation was renamed to :func:`execute` """ try: if not check_modules(): # pragma: no cover raise RuntimeError('') # no further output needed # setup.py may also raise RuntimeError except RuntimeError as e: # pragma: no cover sys.exit(e) if not execute(): print(__doc__) def run(): # pragma: no cover """Site package entry point. Print doc if necessary. .. versionadded:: 7.0 """ global site_package site_package = True if not execute(): print(__doc__) if __name__ == '__main__': main()
/* -*-C++-*- // @@@ START COPYRIGHT @@@ // // Licensed to the Apache Software Foundation (ASF) under one // or more contributor license agreements. See the NOTICE file // distributed with this work for additional information // regarding copyright ownership. The ASF licenses this file // to you under the Apache License, Version 2.0 (the // "License"); you may not use this file except in compliance // with the License. You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, // software distributed under the License is distributed on an // "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY // KIND, either express or implied. See the License for the // specific language governing permissions and limitations // under the License. // // @@@ END COPYRIGHT @@@ ***************************************************************************** * * File: NADefaults.cpp * Description: Implementation for the defaults table class, NADefaults. * * Created: 7/11/96 * Language: C++ * * * * ***************************************************************************** */ #define SQLPARSERGLOBALS_FLAGS // must precede all #include's #define SQLPARSERGLOBALS_NADEFAULTS #include "Platform.h" #include "NADefaults.h" #include <stdio.h> #include <string.h> #include <stdlib.h> #ifdef NA_HAS_SEARCH_H #include <search.h> // use the bsearch binary search routine of the C RTL #else #include <unistd.h> // on OSS, bsearch comes from unistd.h #endif #include "nsk/nskport.h" #if !defined(NDEBUG) #endif #include "CliDefs.h" #include "CmpContext.h" #include "CmpErrors.h" #include "ComObjectName.h" #include "ComRtUtils.h" #include "ComSchemaName.h" #include "ex_error.h" #include "DefaultConstants.h" #include "DefaultValidator.h" #include "NAClusterInfo.h" #include "parser.h" #include "sql_id.h" #include "SQLCLIdev.h" #include "Sqlcomp.h" #include "StmtCompilationMode.h" #include "OptimizerSimulator.h" #include "CmpSeabaseDDL.h" #include "Globals.h" #include "QCache.h" #include "SqlParserGlobals.h" // MUST be last #include! #include "seabed/ms.h" #include "seabed/fs.h" #define NADHEAP CTXTHEAP #define ERRWARN(msg) ToErrorOrWarning(msg, errOrWarn) #define ERRWARNLOOP(msg) ToErrorOrWarning(msg, errOrWarnLOOP) #define ENUM_RANGE_CHECK(e) (e >= 0 && (size_t)e < numDefaultAttributes()) #define ATTR_RANGE_CHECK ENUM_RANGE_CHECK(attrEnum) #ifndef NDEBUG #define ATTR_RANGE_ASSERT CMPASSERT(ATTR_RANGE_CHECK) #else #define ATTR_RANGE_ASSERT #endif // ------------------------------------------------------------------------- // This table contains defaults used in SQLARK. // To add a default, put it in sqlcomp/DefaultConstants.h and in this table. // // The #define declares the domain (allowed range of values) of the attr-value; // typically it is Int1 or UI1 (signed or unsigned integral, >=1) // to prevent division-by-zero errors in the calling code. // // The first column is the internal enum value from sqlcomp/DefaultConstants.h. // The second column is the default value as a string. // // The DDxxxx macro identifies the domain of the attribute // (the range and properties of the possible values). // // XDDxxxx does the same *and* externalizes the attribute // (makes it visible to SHOWCONTROL; *you* need to tell Pubs to document it). // // SDDxxxx does the same and externalizes the attribute to HP support personnel // (makes it visible to HPDM when support is logged on; *you* need to tell Pubs // to document it in the support manual. 
You can set the // SHOWCONTROL_SUPPORT_ATTRS CQD to ON to see all the externalized and // support-level CQDs). // // For instance, DDflt0 allows any nonnegative floating-point number, while // DDflte allows any positive float (the e stands for epsilon, that tiniest // scintilla >0 in classical calculus, and something like +1E-38 on a Pentium). // DDui allows only nonnegative integral values (ui=unsigned int), // DDui1 allows only ints > 0, DDui2 only nonzero multiples of 2, etc. // // DDkwd validates keywords. Each attribute that is DDkwd has its own subset // of acceptable tokens -- the default behavior is that the attr is bivalent // (ON/OFF or TRUE/FALSE or ENABLE/DISABLE). If you want different keywords, // see enum DefaultToken in DefaultConstants.h, and NADefaults::token() below. // // Other DD's validate percentages, and Ansi names. Certainly more could be // defined, for more restrictive ranges or other criteria. // ************************************************************************* // NOTE: You must keep the entire list in alphabetical order, // or else the lookup will not work!!!!!!! Use only CAPITAL LETTERS!!!!!!!!! // ************************************************************************* // NOTE 2: If you choose to "hide" the default default value by setting it to // "ENABLE" or "SYSTEM" or "", your code must handle this possibility. // // See OptPhysRelExpr.cpp's handling of PARALLEL_NUM_ESPS, // an unsigned positive int which also accepts the keyword setting of "SYSTEM". // See ImplRule.cpp's use of INSERT_VSBB, a keyword attr which allows "SYSTEM". // // A simple way to handle ON/OFF keywords that you want to hide the default for: // Take OPTIMIZER_PRUNING as an example. Right now, it appears below with // default "OFF", and opt.cpp does // DisablePruning = (NADEFAULT(OPTIMIZER_PRUNING) == DF_OFF); // To hide the default default, // you would enter it below as "SYSTEM", and opt.cpp would do // DisablePruning = (NADEFAULT(OPTIMIZER_PRUNING) != DF_ON); // (i.e., DF_OFF and DF_SYSTEM would be treated identically, as desired). // ************************************************************************* // NOTE 3: The user is always allowed to say // CONTROL QUERY DEFAULT attrname 'SYSTEM'; -- or 'ENABLE' or '' // What this means is that the current setting for that attribute // reverts to its default-default value. This default-default value // may or may not be "SYSTEM"; this is completely orthogonal/irrelevant // to the CQD usage. // // One gotcha: 'ENABLE' is a synonym for 'SYSTEM', *EXCEPT* when the // SYSTEM default (the default-default) is "DISABLE". // In this case, 'ENABLE' is a synonym for 'ON' // (the opposite of the synonyms DISABLE/OFF). // ************************************************************************* // NOTE 4: After modifying this static table in any way, INCLUDING A CODE MERGE, // for a quick sanity check, run w:/toolbin/checkNAD. // For a complete consistency check, compile this file, link arkcmp, and // runregr TEST050. 
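// -----------------------------------------------------------------------
// Illustrative sketch (added for this document, not part of the original
// source): NOTE 1 above insists on strict alphabetical order because the
// attribute-name lookup over this table is a binary search (the bsearch
// routine pulled in via <search.h>/<unistd.h> above), keyed on the
// attribute-name string. A single out-of-order entry silently breaks
// every lookup that probes past it. The names SketchEntry,
// sketchCompareByName and sketchLookupByName are invented for this
// illustration only; the real lookup is implemented later in this file.
struct SketchEntry
{
  const char *name;    // must be CAPITAL LETTERS, sorted ascending
  const char *value;
};

static int sketchCompareByName(const void *key, const void *elem)
{
  return strcmp((const char *) key, ((const SketchEntry *) elem)->name);
}

static const SketchEntry *sketchLookupByName(const char *name,
                                             const SketchEntry *table,
                                             size_t numEntries)
{
  // bsearch assumes 'table' is sorted ascending by name -- this is
  // exactly why the defaultDefaults[] array below must stay alphabetized.
  return (const SketchEntry *)
    bsearch(name, table, numEntries, sizeof(SketchEntry),
            sketchCompareByName);
}
// -----------------------------------------------------------------------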
// ************************************************************************* struct DefaultDefault { enum DefaultConstants attrEnum; const char *attrName; const char *value; const DefaultValidator *validator; UInt32 flags; }; #define DD(name,value,validator) { name, "" # name "", value, validator } #define FDD(name,value,validator,flags) { name, "" # name "", value, validator, flags } #define XDD(name,value,validator) FDD(name,value,validator,DEFAULT_IS_EXTERNALIZED) #define SDD(name,value,validator) FDD(name,value,validator,DEFAULT_IS_FOR_SUPPORT) #define DDS(name,value,validator) FDD(name,value,validator,DEFAULT_IS_SSD) #define XDDS(name,value,validator) FDD(name,value,validator,DEFAULT_IS_SSD | DEFAULT_IS_EXTERNALIZED) #define SDDS(name,value,validator) FDD(name,value,validator,DEFAULT_IS_SSD | DEFAULT_IS_FOR_SUPPORT) #define DD_____(name,value) DD(name,value,&validateUnknown) #define XDD_____(name,value) XDD(name,value,&validateUnknown) #define SDD_____(name,value) SDD(name,value,&validateUnknown) #define DDS_____(name,value) DDS(name,value,&validateUnknown) #define XDDS_____(name,value) XDDS(name,value,&validateUnknown) #define DDansi_(name,value) DD(name,value,&validateAnsiName) #define XDDansi_(name,value) XDD(name,value,&validateAnsiName) #define DDcoll_(name,value) DD(name,value,&validateCollList) #define DDdskNS(name,value) DD(name,value,&validateDiskListNSK) #define SDDdskNS(name,value) SDD(name,value,&validateDiskListNSK) //SCARTCH_DRIVE_LETTERS* made internal RV 06/21/01 CR 10-010425-2440 #define DDdskNT(name,value) DD(name,value,&validateDiskListNT) #define DDint__(name,value) DD(name,value,&validateInt) #define SDDint__(name,value) SDD(name,value,&validateInt) #define XDDint__(name,value) XDD(name,value,&validateInt) #define DDSint__(name,value) DDS(name,value,&validateInt) #define XDDSint__(name,value) XDDS(name,value,&validateInt) #define XDDintN2(name,value) XDD(name,value,&validateIntNeg2) #define DDintN1__(name,value) DD(name,value,&validateIntNeg1) #define DDpct__(name,value) DD(name,value,&validatePct) #define XDDpct__(name,value) XDD(name,value,&validatePct) #define SDDpct__(name,value) SDD(name,value,&validatePct) #define DDpct1_50(name,value) DD(name,value,&validatePct1_t50) #define DD0_10485760(name,value) DD(name,value,&validate0_10485760) #define DD0_255(name,value) DD(name,value,&validate0_255) #define DD0_200000(name,value) DD(name,value,&validate0_200000) #define XDD0_200000(name,value) XDD(name,value,&validate0_200000) #define DD1_200000(name,value) DD(name,value,&validate1_200000) #define XDDui30_32000(name,value) XDD(name,value,&validate30_32000) #define DDui30_246(name,value) DD(name,value,&validate30_246) #define DDui50_4194303(name,value) DD(name,value,&validate50_4194303) #define DD1_24(name,value) DD(name,value,&validate1_24) #define XDD1_1024(name,value) XDD(name,value,&validate1_1024) #define DD1_1024(name,value) DD(name,value,&validate1_1024) #define DD18_128(name,value) DD(name,value,&validate18_128) #define DD1_128(name,value) DD(name,value,&validate1_128) #define DDui___(name,value) DD(name,value,&validateUI) #define XDDui___(name,value) XDD(name,value,&validateUI) #define SDDui___(name,value) SDD(name,value,&validateUI) #define DDui1__(name,value) DD(name,value,&validateUI1) #define XDDui1__(name,value) XDD(name,value,&validateUI1) #define SDDui1__(name,value) SDD(name,value,&validateUI1) #define DDui2__(name,value) DD(name,value,&validateUI2) #define XDDui2__(name,value) XDD(name,value,&validateUI2) #define DDui8__(name,value) 
DD(name,value,&validateUI8) #define DDui512(name,value) DD(name,value,&validateUI512) #define DDui0_5(name,value) DD(name,value,&validateUIntFrom0To5) #define XDDui0_5(name,value) XDD(name,value,&validateUIntFrom0To5) #define DDui1_6(name,value) DD(name,value,&validateUIntFrom1To6) #define DDui1_10(name,value) DD(name,value,&validateUIntFrom1To10) #define DDui2_10(name,value) DD(name,value,&validateUIntFrom2To10) #define DDui1500_4000(name,value) DD(name,value,&validateUIntFrom1500To4000) #define DDipcBu(name,value) DD(name,value,&validateIPCBuf) #define XDDipcBu(name,value) XDD(name,value,&validateIPCBuf) #define DDflt__(name,value) DD(name,value,&validateFlt) #define XDDflt__(name,value) XDD(name,value,&validateFlt) #define SDDflt__(name,value) SDD(name,value,&validateFlt) #define DDflt0_(name,value) DD(name,value,&validateFlt0) #define XDDflt0_(name,value) XDD(name,value,&validateFlt0) #define SDDflt0_(name,value) SDD(name,value,&validateFlt0) #define DDflte_(name,value) DD(name,value,&validateFltE) #define XDDflte_(name,value) XDD(name,value,&validateFltE) #define SDDflte_(name,value) SDD(name,value,&validateFltE) #define DDflt1_(name,value) DD(name,value,&validateFlt1) #define XDDflt1_(name,value) XDD(name,value,&validateFlt1) #define DDflt_0_1(name,value) DD(name,value,&validateFlt_0_1) #define XDDflt_0_1(name,value) XDD(name,value,&validateFlt_0_1) #define DDkwd__(name,value) DD(name,value,&validateKwd) #define XDDkwd__(name,value) XDD(name,value,&validateKwd) #define SDDkwd__(name,value) SDD(name,value,&validateKwd) #define DDSkwd__(name,value) DDS(name,value,&validateKwd) #define SDDSkwd__(name,value) SDDS(name,value,&validateKwd) #define DDnskv_(name,value) DD(name,value,&validateNSKV) #define DDnsksv(name,value) DD(name,value,&validateNSKSV) #define DDnsksy(name,value) DD(name,value,&validateNSKSY) #define DDnsklo(name,value) DD(name,value,&validateNSKMPLoc) #define DD1_4096(name,value) DD(name,value,&validate1_4096) #define DD0_18(name,value) DD(name,value,&validate0_18) #define DD0_64(name,value) DD(name,value,&validate0_64) #define DD16_64(name,value) DD(name,value,&validate16_64) #define DDvol__(name,value) DD(name,value,&validateVol) #define SDDvol__(name,value) SDD(name,value,&validateVol) #define DDalis_(name,value) DD(name,value,&validateAnsiList) #define XDDalis_(name,value) XDD(name,value,&validateAnsiList) #define XDDpos__(name,value) XDD(name,value,&validatePOSTableSizes) #define SDDpos__(name,value) SDD(name,value,&validatePOSTableSizes) #define DDpos__(name,value) DD(name,value,&validatePOSTableSizes) #define DDtp___(name,value) DD(name,value,&validateTraceStr) #define DDosch_(name,value) DD(name,value,&validateOverrideSchema) #define SDDosch_(name,value) SDD(name,value,&validateOverrideSchema) #define DDpsch_(name,value) DD(name,value,&validatePublicSchema) #define SDDpsch_(name,value) SDD(name,value,&validatePublicSchema) #define DDrlis_(name,value) DD(name,value,&validateRoleNameList) #define XDDrlis_(name,value) XDD(name,value,&validateRoleNameList) #define DDrver_(name,value) DD(name,value,&validateReplIoVersion) #define XDDMVA__(name,value) XDD(name,value,&validateMVAge) #define DDusht_(name,value) DD(name,value,&validate_uint16) const DefaultValidator validateUnknown; const DefaultValidator validateAnsiName(CASE_SENSITIVE_ANSI); // e.g. 
'c.s.tbl' const ValidateDiskListNSK validateDiskListNSK; const ValidateDiskListNT validateDiskListNT; ValidateCollationList validateCollList(TRUE/*mp-format*/); // list collations const ValidateInt validateInt; // allows neg, zero, pos ints const ValidateIntNeg1 validateIntNeg1;// allows -1 to +infinity ints const ValidateIntNeg1 validateIntNeg2;// allows -1 to +infinity ints const ValidatePercent validatePct; // allows zero to 100 (integral %age) const ValidateNumericRange validatePct1_t50(VALID_UINT, 1, (float)50);// allows 1 to 50 (integral %age) const Validate_0_10485760 validate0_10485760; // allows zero to 10Meg (integer) const Validate_0_255 validate0_255; // allows zero to 255 (integer) const Validate_0_200000 validate0_200000; // allows zero to 200000 (integer) const Validate_1_200000 validate1_200000; // allows 1 to 200000 (integer) const Validate_30_32000 validate30_32000; // allows 30 to 32000 const Validate_30_246 validate30_246; // allows 30 to 246 const Validate_50_4194303 validate50_4194303; // allows 50 to 4194303 (integer) const Validate_1_24 validate1_24; // allows 1 to 24 (integer) const ValidateUInt validateUI; // allows zero and pos const ValidateUInt1 validateUI1; // allows pos only (>= 1) const ValidateUInt2 validateUI2(2); // allows pos multiples of 2 only const ValidateUInt2 validateUI8(8); // pos multiples of 8 only const ValidateUInt2 validateUI512(512); // pos multiples of 512 only const ValidateUIntFrom0To5 validateUIntFrom0To5; // integer from 0 to 5 const ValidateUIntFrom1500To4000 validateUIntFrom1500To4000; // integer from 1 to 6 const ValidateUIntFrom1To6 validateUIntFrom1To6; // integer from 1 to 6 const ValidateUIntFrom1To10 validateUIntFrom1To10; // integer from 1 to 10 const ValidateUIntFrom2To10 validateUIntFrom2To10; // integer from 2 to 10 const ValidateIPCBuf validateIPCBuf; // for IPC message buffers (DP2 msgs) const ValidateFlt validateFlt; // allows neg, zero, pos (all nums) const ValidateFltMin0 validateFlt0; // allows zero and pos const ValidateFltMinEpsilon validateFltE; // allows pos only (>= epsilon > 0) const ValidateFltMin1 validateFlt1; // allows pos only (>= 1) const ValidateSelectivity ValidateSelectivity; // allows 0 to 1 (float) const ValidateFlt_0_1 validateFlt_0_1; // allows 0 to 1 (float) const ValidateKeyword validateKwd; // allows relevant keywords only const ValidateNSKVol validateNSKV; // allows NSK volumes ($X, e.g.) const ValidateNSKSubVol validateNSKSV; // allows NSK subvols const ValidateVolumeList validateVol; // allows ':' separ. list of $volumes const ValidateNSKSystem validateNSKSY; // allows NSK system names const ValidateNSKMPLoc validateNSKMPLoc; // allows NSK MP cat names($X.Y) const Validate_1_4096 validate1_4096; // allows 1 to 4096 (integer) which is max character size supported. const Validate_0_18 validate0_18; // allows 0 to 18 (integer) because 18 is max precision supported. const Validate_1_1024 validate1_1024; // allows 1 to 1024 (integer). const Validate_0_64 validate0_64; // allows 0 to 64 (integer) const Validate_16_64 validate16_64; // allows 16 to 64 (integer) const Validate_18_128 validate18_128; // allows 18 to 128 (integer). const Validate_1_128 validate1_128; // allows 1 to 128 (integer). 
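// -----------------------------------------------------------------------
// Illustrative sketch (added for this document, not part of the original
// source): each DDxxx macro above stamps out one DefaultDefault entry,
// using preprocessor stringization ("" # name "") to pair the enum value
// with its own name and binding the default-default string to one of the
// validator objects declared above. Expanded by hand,
//
//   DDui1__(GEN_HGBY_NUM_BUFFERS, "5")
//
// becomes
//
//   { GEN_HGBY_NUM_BUFFERS, "GEN_HGBY_NUM_BUFFERS", "5", &validateUI1 }
//
// and a numeric-range validator conceptually reduces to a bounds check
// like the hypothetical helper below (the real checks are done by the
// DefaultValidator class hierarchy).
static bool sketchInRange(double val, double lo, double hi)
{
  // e.g. validatePct1_t50 accepts 1..50, validate30_32000 accepts
  // 30..32000; values outside the range are rejected.
  return val >= lo && val <= hi;
}
// -----------------------------------------------------------------------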
// allows ':' separated list of three part ANSI names const ValidateAnsiList validateAnsiList; // allows ',' separated list of role names const ValidateRoleNameList validateRoleNameList; const ValidatePOSTableSizes validatePOSTableSizes; const ValidateTraceStr validateTraceStr; const ValidateOverrideSchema validateOverrideSchema; // check OverrideSchema format const ValidatePublicSchema validatePublicSchema; // This high value should be same as default value of REPLICATE_IO_VERSION const ValidateReplIoVersion validateReplIoVersion(11,17); const ValidateMVAge validateMVAge; const Validate_uint16 validate_uint16; // See the NOTEs above for how to maintain this list! THREAD_P DefaultDefault defaultDefaults[] = { DDflt0_(ACCEPTABLE_INPUTESTLOGPROP_ERROR, "0.5"), SDDint__(AFFINITY_VALUE, "-2"), // controls the ESP allocation per core. DDkwd__(AGGRESSIVE_ESP_ALLOCATION_PER_CORE, "OFF"), SDDkwd__(ALLOW_AUDIT_ATTRIBUTE_CHANGE, "FALSE"), // Used to control if row sampling will use the sample operator in SQL/MX or the // this should be used for testing only. DML should not be executed on // non-audited tables DDkwd__(ALLOW_DML_ON_NONAUDITED_TABLE, "OFF"), // DP2_EXECUTOR_POSITION_SAMPLE method in DP2. // Valid values are ON, OFF and SYSTEM // ON => choose DP2_ROW_SAMPLING over row sampling in EID, if sampling % is less than 50. // OFF => choose EID row sampling over DP2 row sampling regardless of sampling % // SYSTEM => update stats will choose DP row sampling if sampling % is less than 5. SDDkwd__(ALLOW_DP2_ROW_SAMPLING, "SYSTEM"), DDkwd__(ALLOW_FIRSTN_IN_SUBQUERIES, "FALSE"), // ON/OFF flag to invoke ghost objects from non-licensed process (non-super.super user) who can not use parserflags DDkwd__(ALLOW_GHOST_OBJECTS, "OFF"), // This default, if set to ON, will allow Translate nodes (to/from UCS2) // to be automatically inserted by the Binder if some children of an // ItemExpr are declared as UCS2 and some are declared as ISO88591. DDkwd__(ALLOW_IMPLICIT_CHAR_CASTING, "ON"), // this default, if set to ON, will allow certain incompatible // assignment, like string to int. The assignment will be done by // implicitely CASTing one operand to another as long as CAST between // the two is supported. See binder for details. DDkwd__(ALLOW_INCOMPATIBLE_ASSIGNMENT, "OFF"), // this default, if set to ON, will allow certain incompatible // comparisons, like string to int. The comparison will be done by // implicitely CASTing one operand to another as long as CAST between // the two is supported. See binder for details. DDkwd__(ALLOW_INCOMPATIBLE_COMPARISON, "OFF"), // if set to 2, the replicateNonKeyVEGPred() mdamkey method // will try to use inputs to filter out VEG elements that are not // local to the associated table to minimize predicate replication. // It is defaulted to 0 (off), as there is some concern that this algoritm // might produce to few replications, which could lead to incorrect results. // Setting the Value to 1 will try a simpler optimization DDui___(ALLOW_INPUT_PRED_REPLICATION_REDUCTION,"0"), // if set to ON, then isolation level (read committed, etc) could be // specified in a regular CREATE VIEW (not a create MV) statement. DDkwd__(ALLOW_ISOLATION_LEVEL_IN_CREATE_VIEW, "ON"), // if set to ON, then we allow subqueries of degree > 1 in the // select list. DDkwd__(ALLOW_MULTIDEGREE_SUBQ_IN_SELECTLIST, "SYSTEM"), // by default, a primary key or unique constraint must be non-nullable. // This default, if set, allows them to be nullable. // The default value is OFF. 
DDkwd__(ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT, "OFF"), // if set to ON, then ORDER BY could be // specified in a regular CREATE VIEW (not a create MV) statement. DDkwd__(ALLOW_ORDER_BY_IN_CREATE_VIEW, "ON"), // rand() function in sql is disabled unless this CQD is turned on DDkwd__(ALLOW_RAND_FUNCTION, "OFF"), DDkwd__(ALLOW_RANGE_PARTITIONING, "TRUE"), DDkwd__(ALLOW_RENAME_OF_MVF_OR_SUBQ, "OFF"), DDkwd__(ALLOW_RISKY_UPDATE_WITH_NO_ROLLBACK, "OFF"), DDkwd__(ALLOW_SUBQ_IN_SET, "SYSTEM"), DDkwd__(ALLOW_UNEXTERNALIZED_MAINTAIN_OPTIONS, "OFF"), DDSkwd__(ALTPRI_ESP, ""), DDSkwd__(ALTPRI_MASTER, ""), DDS_____(AQR_ENTRIES, ""), DDkwd__(AQR_WNR, "ON"), DDkwd__(AQR_WNR_DELETE_NO_ROWCOUNT, "OFF"), DDkwd__(AQR_WNR_EXPLAIN_INSERT, "OFF"), DDkwd__(AQR_WNR_INSERT_CLEANUP, "OFF"), DDkwd__(AQR_WNR_LOCK_INSERT_TARGET, "OFF"), DDkwd__(ARKCMP_FAKE_HW, "OFF"), DDkwd__(ASG_FEATURE, "ON"), // Set ASM cache DDkwd__(ASM_ALLOWED, "ON"), // Precompute statistics in ASM DDkwd__(ASM_PRECOMPUTE, "OFF"), DDkwd__(ASYMMETRIC_JOIN_TRANSFORMATION, "MAXIMUM"), DDkwd__(ATTEMPT_ASYNCHRONOUS_ACCESS, "ON"), DDkwd__(ATTEMPT_ESP_PARALLELISM, "ON"), DDkwd__(ATTEMPT_REVERSE_SYNCHRONOUS_ORDER, "ON"), // Online Populate Index uses AuditImage for index tables only. // By setting this CQD to ON, one can generate AuditImage for // tables also. DDkwd__(AUDIT_IMAGE_FOR_TABLES, "OFF"), DDkwd__(AUTOMATIC_RECOMPILATION, "OFF"), DDkwd__(AUTO_QUERY_RETRY, "SYSTEM"), XDDkwd__(AUTO_QUERY_RETRY_WARNINGS, "OFF"), DDkwd__(BASE_NUM_PAS_ON_ACTIVE_PARTS, "OFF"), // see comments in DefaultConstants.h DDkwd__(BIGNUM_IO, "SYSTEM"), XDDkwd__(BLOCK_TO_PREVENT_HALLOWEEN, "ON"), DDflte_(BMO_CITIZENSHIP_FACTOR, "1."), DDui1__(BMO_MEMORY_SIZE, "204800"), // percentage of physical main memory availabe for BMO. // This value is only used by HJ and HGB to come up with // an initial estimate for the number of clusters to allocate. // It does NOT by any means determine the amount of memory // used by a BMO. The memory usage depends on the amount of // memory available during execution and the amount of input // data. DDflte_(BMO_MEMORY_USAGE_PERCENT, "5."), // When on, then try to bulk move nullable and variable length column values. DDkwd__(BULK_MOVE_NULL_VARCHAR, "ON"), //Temporary fix to bypass volatile schema name checking for non-table objects - ALM Case#4764 DDkwd__(BYPASS_CHECK_FOR_VOLATILE_SCHEMA_NAME, "OFF"), DDkwd__(CACHE_HISTOGRAMS, "ON"), DDkwd__(CACHE_HISTOGRAMS_CHECK_FOR_LEAKS, "OFF"), DD0_200000(CACHE_HISTOGRAMS_IN_KB, "32768"), DDkwd__(CACHE_HISTOGRAMS_MONITOR_HIST_DETAIL, "OFF"), DDkwd__(CACHE_HISTOGRAMS_MONITOR_MEM_DETAIL, "OFF"), DD_____(CACHE_HISTOGRAMS_MONITOR_OUTPUT_FILE, ""), // This is the default time interval, during which we ensure that // the histograms in the cache are correct. If the histograms in the // cache are older than this default interval, and the HISTOGRAMS // table last modification time is older than this, any requested // histograms will be checked to see if it was modified more recently // than the histograms in cache (the READ_TIME fields will also be // updated). If so, the optimizer will refetch histograms. 
XDDui___(CACHE_HISTOGRAMS_REFRESH_INTERVAL, "3600"), DD_____(CACHE_HISTOGRAMS_TRACE_OUTPUT_FILE, ""), DDkwd__(CALL_EMBEDDED_ARKCMP, "OFF"), DDui___(CANCEL_MINIMUM_BLOCKING_INTERVAL, "60"), DDkwd__(CASCADED_GROUPBY_TRANSFORMATION, "ON"), XDDansi_(CATALOG, TRAFODION_SYSCAT_LIT), DDkwd__(CAT_ALLOW_NEW_FEATUREX, "OFF"), // Control whether authorization caches immutable users DDkwd__(CAT_AUTHORIZATION_CACHE_IMMUTABLE_USERS, "ON"), DDkwd__(CAT_CREATE_SCHEMA_LABELS_ON_ALL_SEGMENTS, "ON"), DDkwd__(CAT_DEFAULT_COMPRESSION, "NONE"), // Metadata table distribution schemes // OFF - Place all metadata tables on one single disk // LOCAL_NODE - Distribute metadata tables across disks on local segment // where first schema in the catalog is created // ON - Distribute metadata tables across disks in local segment // and visible remote segments SDDkwd__(CAT_DISTRIBUTE_METADATA, "ON"), //SDDkwd__(CAT_DISTRIBUTE_METADATA, "ON"), // This disables Query Invalidation processing in catman when set to "OFF" SDDkwd__(CAT_ENABLE_QUERY_INVALIDATION, "ON"), // Throw an error if a column is part of the store by clause and // is not defined as NOT NULL return an error DDkwd__(CAT_ERROR_ON_NOTNULL_STOREBY, "ON"), DDui1__(CAT_FS_TIMEOUT, "9000"), // Used to make ignore "already exists" error in Create and // "does not exist" error in Drop. DDkwd__(CAT_IGNORE_ALREADY_EXISTS_ERROR, "OFF"), DDkwd__(CAT_IGNORE_DOES_NOT_EXIST_ERROR, "OFF"), // Used to make catman test134 predictable DDkwd__(CAT_IGNORE_EMPTY_CATALOGS, "OFF"), // Catalog Manager internal support for REPLICATE AUTHORIZATION DDkwd__(CAT_IGNORE_REPL_AUTHIDS_ERROR, "OFF"), // This enables the DB Limits functionality. If set to OFF, then blocksize // is restricted to 4096 and clustering key size is limited to 255 bytes. // DB Limits checking is turned off on NT since NT's DP2 does not support // large blocks or keys. DDkwd__(CAT_LARGE_BLOCKS_LARGE_KEYS, "ON"), // If DB Limits is enabled, then increase the default blocksize to 32K // on NSK if the object's clustering key length is larger than this value. DDui1__(CAT_LARGE_BLOCKS_MAX_KEYSIZE, "1"), // If DB Limits is enabled, then increase the default blocksize to 32K // on NSK if the object's row size is larger than this value. DDui1__(CAT_LARGE_BLOCKS_MAX_ROWSIZE, "1"), // Controls how pathnames for routines/procedures/SPJs are interpreted DDkwd__(CAT_LIBRARY_PATH_RELATIVE, "OFF"), DDkwd__(CAT_MORE_SCHEMA_PRIVS, "ON"), DDkwd__(CAT_OVERRIDE_CREATE_DISABLE, "OFF"), // This forces an rcb to be created with a different version number // A "0" means to take the current mxv version DDui___(CAT_RCB_VERSION, "0"), // Controls creation of column privileges for object-level privileges DDkwd__(CAT_REDUNDANT_COLUMN_PRIVS, "ON"), // If schema owner is object owner is ON, then the default owner for objects is the // schema owner. DDkwd__(CAT_SCHEMA_OWNER_IS_OBJECT_OWNER, "OFF"), DDkwd__(CAT_TEST_BOOL, "OFF"), DDint__(CAT_TEST_POINT, "0"), DD_____(CAT_TEST_STRING, "NONE"), // CMP_ERR_LOG_FILE indicates where to save a log for certain errors. 
DD_____(CMP_ERR_LOG_FILE, "tdm_arkcmp_errors.log"), DDkwd__(COLLECT_REORG_STATS, "ON"), DDint__(COMPILER_IDLE_TIMEOUT, "1800"), // To match with set session defaults value // tracking compilers specific defaults DDint__(COMPILER_TRACKING_INTERVAL, "0"), DD_____(COMPILER_TRACKING_LOGFILE, "NONE"), DDkwd__(COMPILER_TRACKING_LOGTABLE, "OFF"), DDkwd__(COMPILE_TIME_MONITOR, "OFF"), DD_____(COMPILE_TIME_MONITOR_LOG_ALLTIME_ONLY, "OFF"), DD_____(COMPILE_TIME_MONITOR_OUTPUT_FILE, "NONE"), // complexity threshold beyond which a // MultiJoin query is considered too complex DDflt0_(COMPLEX_MJ_QUERY_THRESHOLD, "1000000"), // Switch between new aligned internal format and exploded format DDkwd__(COMPRESSED_INTERNAL_FORMAT, "SYSTEM"), DDkwd__(COMPRESSED_INTERNAL_FORMAT_BMO, "SYSTEM"), DDkwd__(COMPRESSED_INTERNAL_FORMAT_BMO_AFFINITY, "ON"), DDkwd__(COMPRESSED_INTERNAL_FORMAT_BULK_MOVE, "ON"), DDflt0_(COMPRESSED_INTERNAL_FORMAT_DEFRAG_RATIO, "0.30"), DDkwd__(COMPRESSED_INTERNAL_FORMAT_EXPLAIN, "OFF"), DDui1__(COMPRESSED_INTERNAL_FORMAT_MIN_ROW_SIZE, "32"), DDkwd__(COMPRESSED_INTERNAL_FORMAT_ROOT_DOES_CONVERSION, "OFF"), DDflt0_(COMPRESSED_INTERNAL_FORMAT_ROW_SIZE_ADJ, "0.90"), XDDkwd__(COMPRESSION_TYPE, "NONE"), // These are switches and variables to use for compiler debugging DDkwd__(COMP_BOOL_1, "OFF"), DDkwd__(COMP_BOOL_10, "OFF"), DDkwd__(COMP_BOOL_100, "OFF"), DDkwd__(COMP_BOOL_101, "OFF"), DDkwd__(COMP_BOOL_102, "OFF"), DDkwd__(COMP_BOOL_103, "OFF"), DDkwd__(COMP_BOOL_104, "OFF"), DDkwd__(COMP_BOOL_105, "OFF"), DDkwd__(COMP_BOOL_106, "OFF"), DDkwd__(COMP_BOOL_107, "ON"), // Being used for testing default predicate synthesis in cardinality estimation DDkwd__(COMP_BOOL_108, "ON"), // Being used for testing default predicate synthesis in cardinality estimation DDkwd__(COMP_BOOL_109, "OFF"), DDkwd__(COMP_BOOL_11, "OFF"), DDkwd__(COMP_BOOL_110, "OFF"), DDkwd__(COMP_BOOL_111, "OFF"), DDkwd__(COMP_BOOL_112, "OFF"), DDkwd__(COMP_BOOL_113, "OFF"), DDkwd__(COMP_BOOL_114, "OFF"), DDkwd__(COMP_BOOL_115, "OFF"), DDkwd__(COMP_BOOL_116, "OFF"), DDkwd__(COMP_BOOL_117, "OFF"), DDkwd__(COMP_BOOL_118, "OFF"), // soln 10-100508-0135 - allow undo of fix. 
DDkwd__(COMP_BOOL_119, "OFF"), DDkwd__(COMP_BOOL_12, "OFF"), DDkwd__(COMP_BOOL_120, "OFF"), DDkwd__(COMP_BOOL_121, "OFF"), DDkwd__(COMP_BOOL_122, "ON"), // Solution 10-081203-7708 fix DDkwd__(COMP_BOOL_123, "OFF"), DDkwd__(COMP_BOOL_124, "OFF"), DDkwd__(COMP_BOOL_125, "ON"), DDkwd__(COMP_BOOL_126, "OFF"), DDkwd__(COMP_BOOL_127, "ON"), DDkwd__(COMP_BOOL_128, "ON"), DDkwd__(COMP_BOOL_129, "ON"), DDkwd__(COMP_BOOL_13, "OFF"), DDkwd__(COMP_BOOL_130, "ON"), DDkwd__(COMP_BOOL_131, "OFF"), DDkwd__(COMP_BOOL_132, "OFF"), DDkwd__(COMP_BOOL_133, "OFF"), DDkwd__(COMP_BOOL_134, "ON"), DDkwd__(COMP_BOOL_135, "ON"), DDkwd__(COMP_BOOL_136, "OFF"), DDkwd__(COMP_BOOL_137, "OFF"), // ON enables logging of RewriteJoinPred DDkwd__(COMP_BOOL_138, "OFF"), // ON disables tryToRewriteJoinPredicate DDkwd__(COMP_BOOL_139, "OFF"), DDkwd__(COMP_BOOL_14, "ON"), DDkwd__(COMP_BOOL_140, "ON"), DDkwd__(COMP_BOOL_141, "ON"), // Used for testing MC UEC adjustment for uplifting join cardinality DDkwd__(COMP_BOOL_142, "ON"), // Used for turning on Compile Time Statistics caching DDkwd__(COMP_BOOL_143, "OFF"), DDkwd__(COMP_BOOL_144, "OFF"), // only Key columns usage as a part of materialization of disjuncts is controlled by the CQD DDkwd__(COMP_BOOL_145, "ON"), // Used for selectivity adjustment for MC Joins DDkwd__(COMP_BOOL_146, "OFF"), DDkwd__(COMP_BOOL_147, "OFF"), DDkwd__(COMP_BOOL_148, "ON"), // Used for GroupBy Cardinality Enhancement for complex expressions DDkwd__(COMP_BOOL_149, "ON"), // Used for testing multi-col uniqueness cardinality enhancement DDkwd__(COMP_BOOL_15, "OFF"), DDkwd__(COMP_BOOL_150, "OFF"), DDkwd__(COMP_BOOL_151, "OFF"), DDkwd__(COMP_BOOL_152, "OFF"), DDkwd__(COMP_BOOL_153, "ON"), // skew buster: ON == use round robin, else Co-located. DDkwd__(COMP_BOOL_154, "OFF"), DDkwd__(COMP_BOOL_155, "OFF"), DDkwd__(COMP_BOOL_156, "ON"), // Used by RTS to turn on RTS Stats collection for ROOT operators DDkwd__(COMP_BOOL_157, "OFF"), DDkwd__(COMP_BOOL_158, "OFF"), DDkwd__(COMP_BOOL_159, "OFF"), DDkwd__(COMP_BOOL_16, "OFF"), DDkwd__(COMP_BOOL_160, "OFF"), DDkwd__(COMP_BOOL_161, "OFF"), DDkwd__(COMP_BOOL_162, "ON"), // transform NOT EXISTS subquery using anti_semijoin instead of Join-Agg DDkwd__(COMP_BOOL_163, "OFF"), DDkwd__(COMP_BOOL_164, "OFF"), DDkwd__(COMP_BOOL_165, "ON"), // set to 'ON' in M5 for SQ DDkwd__(COMP_BOOL_166, "OFF"), // ON --> turn off fix for 10-100310-8659. DDkwd__(COMP_BOOL_167, "OFF"), DDkwd__(COMP_BOOL_168, "ON"), DDkwd__(COMP_BOOL_169, "OFF"), DDkwd__(COMP_BOOL_17, "ON"), DDkwd__(COMP_BOOL_170, "ON"), DDkwd__(COMP_BOOL_171, "OFF"), DDkwd__(COMP_BOOL_172, "OFF"), DDkwd__(COMP_BOOL_173, "OFF"), // fix: make odbc params nullable DDkwd__(COMP_BOOL_174, "ON"), // internal usage: merge stmt DDkwd__(COMP_BOOL_175, "OFF"), // internal usage: merge stmt DDkwd__(COMP_BOOL_176, "OFF"), DDkwd__(COMP_BOOL_177, "OFF"), DDkwd__(COMP_BOOL_178, "OFF"), DDkwd__(COMP_BOOL_179, "OFF"), DDkwd__(COMP_BOOL_18, "OFF"), DDkwd__(COMP_BOOL_180, "OFF"), DDkwd__(COMP_BOOL_181, "OFF"), DDkwd__(COMP_BOOL_182, "OFF"), // internal usage DDkwd__(COMP_BOOL_183, "OFF"), DDkwd__(COMP_BOOL_184, "ON"), // ON => use min probe size for mdam. Using min probe size of 1 or 2 currently has a bug so this is not the default. 
OFF => use default probe size of 100 DDkwd__(COMP_BOOL_185, "ON"), //Fix, allows extract(year from current_date) to be treated as a userinput DDkwd__(COMP_BOOL_186, "OFF"), DDkwd__(COMP_BOOL_187, "OFF"), // reserved for internal usage DDkwd__(COMP_BOOL_188, "OFF"), DDkwd__(COMP_BOOL_189, "OFF"), // reserved for internal usage DDkwd__(COMP_BOOL_19, "OFF"), DDkwd__(COMP_BOOL_190, "OFF"), DDkwd__(COMP_BOOL_191, "OFF"), // Temp for UDF metadata switch DDkwd__(COMP_BOOL_192, "OFF"), DDkwd__(COMP_BOOL_193, "OFF"), DDkwd__(COMP_BOOL_194, "OFF"), DDkwd__(COMP_BOOL_195, "OFF"), // used to enable unexternalized get statistics options. DDkwd__(COMP_BOOL_196, "OFF"), DDkwd__(COMP_BOOL_197, "OFF"), DDkwd__(COMP_BOOL_198, "OFF"), DDkwd__(COMP_BOOL_199, "ON"), DDkwd__(COMP_BOOL_2, "OFF"), DDkwd__(COMP_BOOL_20, "OFF"), // ON -> disable ability of stmt to be canceled. DDkwd__(COMP_BOOL_200, "OFF"), DDkwd__(COMP_BOOL_201, "OFF"), DDkwd__(COMP_BOOL_202, "ON"),// For SQ: // ON: excluding fixup cost // for EXCHANGE for // anti-surf logic; // OFF: do include. // Change to ON in M5 DDkwd__(COMP_BOOL_203, "OFF"), DDkwd__(COMP_BOOL_205, "OFF"), // enable reorg on metadata DDkwd__(COMP_BOOL_206, "OFF"), // Internal Usage DDkwd__(COMP_BOOL_207, "OFF"), // Internal Usage DDkwd__(COMP_BOOL_208, "OFF"), // Internal Usage DDkwd__(COMP_BOOL_209, "OFF"), // Internal Usage DDkwd__(COMP_BOOL_21, "OFF"), DDkwd__(COMP_BOOL_210, "ON"), DDkwd__(COMP_BOOL_211, "ON"), // controls removing constants from group expression DDkwd__(COMP_BOOL_215, "OFF"), DDkwd__(COMP_BOOL_217, "OFF"), DDkwd__(COMP_BOOL_219, "OFF"), // for InMem obj defn DDkwd__(COMP_BOOL_22, "ON"), DDkwd__(COMP_BOOL_220, "OFF"), // UserLoad fastpath opt DDkwd__(COMP_BOOL_221, "OFF"), // unnests a subquery even when there is no explicit correlation DDkwd__(COMP_BOOL_222, "ON"), // R2.5 BR features enabled DDkwd__(COMP_BOOL_223, "OFF"), // enable undocumented options // bulk replicate features DDkwd__(COMP_BOOL_224, "OFF"), // enable undocumented // bulk replicate features DDkwd__(COMP_BOOL_225, "ON"), // enable optimized esps allocation DDkwd__(COMP_BOOL_226, "OFF"), // ON enables UNLOAD feature // for disk label stats. 
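  // -----------------------------------------------------------------------
  // Illustrative note (added for this document, not part of the original
  // source): the COMP_BOOL_* entries above and below are generic ON/OFF
  // debugging switches. Consumers read them with the token-compare idiom
  // described in NOTE 2 near the top of this table, along the lines of
  //
  //   if (NADEFAULT(COMP_BOOL_137) == DF_ON)
  //     ...  // enable logging of RewriteJoinPred, per the comment above
  //
  // where NADEFAULT and the DF_ON/DF_OFF/DF_SYSTEM tokens are the ones
  // already referenced in NOTE 2; the specific call site shown here is
  // only a sketch, not a quote from the optimizer sources.
  // -----------------------------------------------------------------------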
DDkwd__(COMP_BOOL_23, "ON"), DDkwd__(COMP_BOOL_24, "OFF"), // AS enhancement to adjust maxDoP DDkwd__(COMP_BOOL_25, "OFF"), // Being used in Cardinality Estimation DDkwd__(COMP_BOOL_26, "OFF"), DDkwd__(COMP_BOOL_27, "OFF"), DDkwd__(COMP_BOOL_28, "OFF"), DDkwd__(COMP_BOOL_29, "OFF"), DDkwd__(COMP_BOOL_3, "OFF"), DDkwd__(COMP_BOOL_30, "ON"), DDkwd__(COMP_BOOL_31, "OFF"), DDkwd__(COMP_BOOL_32, "OFF"), DDkwd__(COMP_BOOL_33, "OFF"), DDkwd__(COMP_BOOL_34, "OFF"), DDkwd__(COMP_BOOL_35, "OFF"), DDkwd__(COMP_BOOL_36, "OFF"), DDkwd__(COMP_BOOL_37, "OFF"), DDkwd__(COMP_BOOL_38, "OFF"), DDkwd__(COMP_BOOL_39, "OFF"), DDkwd__(COMP_BOOL_4, "OFF"), DDkwd__(COMP_BOOL_40, "ON"), DDkwd__(COMP_BOOL_41, "OFF"), DDkwd__(COMP_BOOL_42, "ON"), DDkwd__(COMP_BOOL_43, "OFF"), DDkwd__(COMP_BOOL_44, "OFF"), DDkwd__(COMP_BOOL_45, "ON"), DDkwd__(COMP_BOOL_46, "OFF"), DDkwd__(COMP_BOOL_47, "ON"), DDkwd__(COMP_BOOL_48, "ON"), // Turned "Off" because of Regression failure DDkwd__(COMP_BOOL_49, "OFF"), DDkwd__(COMP_BOOL_5, "ON"), DDkwd__(COMP_BOOL_50, "OFF"), DDkwd__(COMP_BOOL_51, "OFF"), DDkwd__(COMP_BOOL_52, "OFF"), DDkwd__(COMP_BOOL_53, "ON"), //Turned "ON" for OCB Cost DDkwd__(COMP_BOOL_54, "OFF"), DDkwd__(COMP_BOOL_55, "OFF"), DDkwd__(COMP_BOOL_56, "OFF"), DDkwd__(COMP_BOOL_57, "ON"), DDkwd__(COMP_BOOL_58, "OFF"), DDkwd__(COMP_BOOL_59, "OFF"), DDkwd__(COMP_BOOL_6, "OFF"), // comp_bool_60 is used in costing of an exchange operator. This is // used in deciding to use Nodemap decoupling and other exchange // costing logic. DDkwd__(COMP_BOOL_60, "ON"), DDkwd__(COMP_BOOL_61, "OFF"), DDkwd__(COMP_BOOL_62, "OFF"), DDkwd__(COMP_BOOL_63, "OFF"), DDkwd__(COMP_BOOL_64, "OFF"), DDkwd__(COMP_BOOL_65, "OFF"), DDkwd__(COMP_BOOL_66, "OFF"), DDkwd__(COMP_BOOL_67, "ON"), // Being used in Cardinality Estimation DDkwd__(COMP_BOOL_68, "ON"), DDkwd__(COMP_BOOL_69, "OFF"), DDkwd__(COMP_BOOL_7, "OFF"), DDkwd__(COMP_BOOL_70, "ON"), DDkwd__(COMP_BOOL_71, "OFF"), DDkwd__(COMP_BOOL_72, "OFF"), DDkwd__(COMP_BOOL_73, "OFF"), DDkwd__(COMP_BOOL_74, "ON"), DDkwd__(COMP_BOOL_75, "ON"), DDkwd__(COMP_BOOL_76, "ON"), DDkwd__(COMP_BOOL_77, "OFF"), DDkwd__(COMP_BOOL_78, "OFF"), DDkwd__(COMP_BOOL_79, "ON"), DDkwd__(COMP_BOOL_8, "OFF"), DDkwd__(COMP_BOOL_80, "OFF"), DDkwd__(COMP_BOOL_81, "OFF"), DDkwd__(COMP_BOOL_82, "OFF"), DDkwd__(COMP_BOOL_83, "ON"), DDkwd__(COMP_BOOL_84, "OFF"), DDkwd__(COMP_BOOL_85, "OFF"), DDkwd__(COMP_BOOL_86, "OFF"), DDkwd__(COMP_BOOL_87, "OFF"), DDkwd__(COMP_BOOL_88, "OFF"), DDkwd__(COMP_BOOL_89, "OFF"), DDkwd__(COMP_BOOL_9, "OFF"), DDkwd__(COMP_BOOL_90, "ON"), DDkwd__(COMP_BOOL_91, "OFF"), DDkwd__(COMP_BOOL_92, "OFF"), // used by generator. DDkwd__(COMP_BOOL_93, "ON"), // turn on pushdown for IUDs involving MVs. 
Default is off DDkwd__(COMP_BOOL_94, "OFF"), DDkwd__(COMP_BOOL_95, "OFF"), DDkwd__(COMP_BOOL_96, "OFF"), DDkwd__(COMP_BOOL_97, "OFF"), DDkwd__(COMP_BOOL_98, "ON"), DDkwd__(COMP_BOOL_99, "OFF"), DDflt0_(COMP_FLOAT_0, "0.002"), DDflt0_(COMP_FLOAT_1, "0.00002"), DDflt0_(COMP_FLOAT_2, "0"), DDflt0_(COMP_FLOAT_3, "0.01"), DDflt0_(COMP_FLOAT_4, "1.1"), DDflt__(COMP_FLOAT_5, "0.01"), // For Split Top cost adjustments : 0.25 DDflt__(COMP_FLOAT_6, "0.67"), // used to set the fudge factor which // is used to estimate cardinality of an // aggregate function in an equi-join expression DDflt__(COMP_FLOAT_7, "1.5"), DDflt__(COMP_FLOAT_8, "0.8"), // min expected #groups when HGB under right side of NLJ DDflt__(COMP_FLOAT_9, "1002.0"), DDint__(COMP_INT_0, "5000"), DDint__(COMP_INT_1, "0"), DDint__(COMP_INT_10, "3"), DDint__(COMP_INT_11, "-1"), DDint__(COMP_INT_12, "0"), DDint__(COMP_INT_13, "0"), DDint__(COMP_INT_14, "0"), DDint__(COMP_INT_15, "7"), DDint__(COMP_INT_16, "1000000"), DDint__(COMP_INT_17, "1000000"), DDint__(COMP_INT_18, "1"), DDint__(COMP_INT_19, "2"), DDint__(COMP_INT_2, "1"), DDint__(COMP_INT_20, "4"), DDint__(COMP_INT_21, "0"), DDint__(COMP_INT_22, "0"), // used to control old parser based INLIST transformation // 0 ==> OFF, positive value implies ON and has the effect of implicitly shutting down much of OR_PRED transformations // this cqd has been retained as a fallback in case OR_PRED has bugs. DDint__(COMP_INT_23, "22"), DDint__(COMP_INT_24, "1000000000"), DDint__(COMP_INT_25, "0"), DDint__(COMP_INT_26, "1"), DDint__(COMP_INT_27, "0"), DDint__(COMP_INT_28, "0"), DDint__(COMP_INT_29, "0"), DDint__(COMP_INT_3, "5"), DDint__(COMP_INT_30, "5"), DDint__(COMP_INT_31, "5"), DDint__(COMP_INT_32, "100"), DDint__(COMP_INT_33, "0"), DDint__(COMP_INT_34, "10000"), // lower bound: 10000 DDint__(COMP_INT_35, "500000"), // upper bound: 200000 DDint__(COMP_INT_36, "128"), // Bounds for producer for OCB DDint__(COMP_INT_37, "0"), DDint__(COMP_INT_38, "0"), // test master's abend DDint__(COMP_INT_39, "0"), // test esp's abend DDint__(COMP_INT_4, "400"), DDint__(COMP_INT_40, "10"), // this defines the percentage of selectivity after applying equality predicates on single column histograms // beyond which the optimizer should use MC stats DDint__(COMP_INT_41, "0"), DDint__(COMP_INT_42, "0"), DDint__(COMP_INT_43, "3"), // this is only for testing purposes. Once HIST_USE_SAMPLE_FOR_CARDINALITY_ESTIMATION is set to ON by default, the value of this CQD should be adjusted DDint__(COMP_INT_44, "1000000"), // frequency threshold above which // a boundary value will be inclded // in the frequentValueList (stats) DDint__(COMP_INT_45, "300"), DDint__(COMP_INT_46, "10"), DDint__(COMP_INT_47, "0"), DDint__(COMP_INT_48, "32"), // # trips thru scheduler task list before eval of CPU time limit. DDint__(COMP_INT_49, "0"), DDint__(COMP_INT_5, "0"), DDint__(COMP_INT_50, "0"), DDint__(COMP_INT_51, "0"), DDint__(COMP_INT_52, "0"), DDint__(COMP_INT_53, "0"), DDint__(COMP_INT_54, "0"), DDint__(COMP_INT_55, "0"), DDint__(COMP_INT_56, "0"), DDint__(COMP_INT_57, "0"), DDint__(COMP_INT_58, "0"), DDint__(COMP_INT_59, "0"), DDint__(COMP_INT_6, "400"), // comp_int_60 is used in costing of an exchnage operator. It is // used to indicate buffer size of a DP2 exchange when sending // messages down. 
DDint__(COMP_INT_60, "4"), DDint__(COMP_INT_61, "0"), // Exchange operator default value DDint__(COMP_INT_62, "10000"), DDint__(COMP_INT_63, "10000"), // SG Insert issue DDint__(COMP_INT_64, "0"), DDint__(COMP_INT_65, "0"), DDint__(COMP_INT_66, "0"), // to change #buffers per flushed cluster DDint__(COMP_INT_67, "8"), // to test #outer-buffers per a batch DDint__(COMP_INT_68, "0"), DDint__(COMP_INT_69, "0"), DDint__(COMP_INT_7, "10000000"), DDint__(COMP_INT_70, "1000000"), DDint__(COMP_INT_71, "0"), DDint__(COMP_INT_72, "0"), // if set to 1, allows keyPredicate to be inserted without passing key col. DDint__(COMP_INT_73, "1"), // if set to 1, disables cursor_delete plan if there are no alternate indexes. DDint__(COMP_INT_74, "0"), DDint__(COMP_INT_75, "0"), DDint__(COMP_INT_76, "0"), DDint__(COMP_INT_77, "0"), DDint__(COMP_INT_78, "0"), DDint__(COMP_INT_79, "0"), // this is used temporaraly as value for parallel threshold // in case ATTEMPT_ESP_PARALLELISM is set to MAXIMUM DDint__(COMP_INT_8, "20"), DDint__(COMP_INT_80, "3"), DDint__(COMP_INT_81, "0"), DDint__(COMP_INT_82, "0"), DDint__(COMP_INT_83, "0"), // max num of retries after parl purgedata open/control call errs.Default 25. DDint__(COMP_INT_84, "25"), // delay between each paral pd error retry. Default is 2 seconds. DDint__(COMP_INT_85, "2"), DDint__(COMP_INT_86, "0"), DDint__(COMP_INT_87, "0"), DDint__(COMP_INT_88, "0"), DDint__(COMP_INT_89, "2"), DDint__(COMP_INT_9, "0"), DDint__(COMP_INT_90, "0"), DDint__(COMP_INT_91, "0"), DDint__(COMP_INT_92, "0"), DDint__(COMP_INT_93, "0"), DDint__(COMP_INT_94, "0"), DDint__(COMP_INT_95, "0"), DDint__(COMP_INT_96, "0"), DDint__(COMP_INT_97, "0"), DDint__(COMP_INT_98, "512"), DDint__(COMP_INT_99, "10"), DD_____(COMP_STRING_1, "NONE"), DD_____(COMP_STRING_2, ""), DD_____(COMP_STRING_3, ""), DD_____(COMP_STRING_4, ""), DD_____(COMP_STRING_5, ""), DD_____(COMP_STRING_6, ""), // Configured_memory_for defaults are all measured in KB DDui___(CONFIGURED_MEMORY_FOR_BASE, "16384"), DDui___(CONFIGURED_MEMORY_FOR_DAM, "20480"), DDui___(CONFIGURED_MEMORY_FOR_MINIMUM_HASH, "20480"), DDui___(CONFIGURED_MEMORY_FOR_MXESP, "8192"), DDkwd__(CONSTANT_FOLDING, "OFF"), DDkwd__(COSTING_SHORTCUT_GROUPBY_FIX, "ON"), DDflt0_(COST_PROBE_DENSITY_THRESHOLD, ".25"), // As of 3/23/98 the tupp desc. length is 12 bytes. Change when executor // changes. DDflt0_(COST_TUPP_DESC_LENGTH_IN_KB, "0.01171875"), DDflt0_(CPUCOST_COMPARE_COMPLEX_DATA_TYPE_OVERHEAD, "10."), DDflt0_(CPUCOST_COMPARE_COMPLEX_DATA_TYPE_PER_BYTE, ".1"), // Same as CPUCOST_PREDICATE_COMPARISON // Change HH_OP_PROBE_HASH_TABLE when you change this value: DDflt0_(CPUCOST_COMPARE_SIMPLE_DATA_TYPE, ".200"), // no cost overhead assumed: DDflt0_(CPUCOST_COPY_ROW_OVERHEAD, "0."), // change CPUCOST_HASH_PER_KEY when changing this value DDflt0_(CPUCOST_COPY_ROW_PER_BYTE, ".0007"), DDflt0_(CPUCOST_COPY_SIMPLE_DATA_TYPE, ".005"), // This is a per data request overhead cost paid by the cpu DDflt0_(CPUCOST_DATARQST_OVHD, ".01"), DDflt0_(CPUCOST_DM_GET, ".001"), DDflt0_(CPUCOST_DM_UPDATE, ".001"), DDflt0_(CPUCOST_ENCODE_PER_BYTE, ".002"), DDflt0_(CPUCOST_ESP_INITIALIZATION, "10"), // The previous observation had calculated the number of seconds to // aggregate incorrectly. 
Now: // Number of seconds to scan 100,000 rows @ 208 bytes: 4 // Number of seconds to scan 100,000 rows @ 208 bytes and aggregate // 15 aggregates: 17 // Thus, number of seconds per aggregate = (17-4)/15 = 0.866667 // CPUCOST_PER_ROW = 1.13333/(0.00005*100,000) = 0.1733 // previous observation // It takes 13.96 seconds to aggregate 99,999 rows using // 15 expressions, thus at 0.00005 et_cpu, we have that // the cost to eval an arith op is: // 6.14 / (0.00005 * 99,9999 * 15) = 0.0819 DDflt0_(CPUCOST_EVAL_ARITH_OP, ".0305"), DDflt0_(CPUCOST_EVAL_FUNC_DEFAULT, "10."), DDflt0_(CPUCOST_EVAL_LOGICAL_OP, "1."), DDflt0_(CPUCOST_EVAL_SIMPLE_PREDICATE, "1."), DDflt0_(CPUCOST_EXCHANGE_COST_PER_BYTE, ".002"), DDflt0_(CPUCOST_EXCHANGE_COST_PER_ROW, ".002"), DDflt0_(CPUCOST_EXCHANGE_INTERNODE_COST_PER_BYTE, ".008"), DDflt0_(CPUCOST_EXCHANGE_MAPPING_FUNCTION, ".01"), // was 0.1, but now 0.011 // XDDflt0_(CPUCOST_EXCHANGE_REMOTENODE_COST_PER_BYTE, ".011"), // Set the additional cost of copying a byte to message buffer for // remote node to be the same as for inter node, 0.01 // Also change it to be internalized DDflt0_(CPUCOST_EXCHANGE_REMOTENODE_COST_PER_BYTE, ".01"), DDflt0_(CPUCOST_EXCHANGE_SPLIT_FUNCTION, ".01"), // Assume // CPUCOST_HASH_PER_KEY = 4 * CPUCOST_HASH_PER_BYTE // History: // Before 01/06/98: 0.005 DDflt0_(CPUCOST_HASH_PER_BYTE, ".057325"), // Assume // CPUCOST_HASH_PER_KEY = 4 * CPUCOST_HASH_PER_BYTE // From observation: // For a case when all the hash table fits into memory: // 01/05/98: 42,105 rows inserted per second @ 0.00005 seconds // per thousand of instructions, give: // seconds to insert one row = 1/42105 = 0.00002375 // thd. of instructions per row inserted = 1/42105/0.00005 = 0.4750 // The cost is distributed as follows: // CPUCOST_HASH_PER_KEY + CPUCOST_HASH_PER_BYTE*4 + // HH_OP_INSERT_ROW_TO_CHAIN + CPUCOST_COPY_ROW_PER_BYTE * 4 // = 0.4750 // Thus we have: // 2* CPUCOST_HASH_PER_KEY + 0.01 + 0.0016*4 = 0.4750 // -> CPUCOST_HASH_PER_KEY = 0.4586/2 = 0.2293 // History: // Before 01/06/98: 0.02 // Change // CPUCOST_HASH_PER_BYTE // when changing this value DDflt0_(CPUCOST_HASH_PER_KEY, "1.29"), DDflt0_(CPUCOST_LIKE_COMPARE_OVERHEAD, "10."), DDflt0_(CPUCOST_LIKE_COMPARE_PER_BYTE, ".1"), DDflt0_(CPUCOST_LOCK_ROW, ".01"), DDflt0_(CPUCOST_NJ_TUPLST_FF, "10."), // Observation (A971125_1): // CPU time to scan 100,000 rows with no exe pred: 10 // CPU time to scan 100,000 rows with an exe pred like // nonkeycol < K: 11 // CPU time spend in every row: 1/100,000 = .00001 // Thus, at 0.00005 th. inst. per sec we have: 0.00001/0.00005 = // 0.2 thousand inst. to evaluate every row: // // Predicate comparison is very expensive right now (10/08/97) // (cost it that it takes like 1000 instruction for one comparison) // 10/08/97: 1. // Change // CPUCOST_COMPARE_SIMPLE_DATA_TYPE // when you change this value: // History // Before 04/30/98: .2 DDflt0_(CPUCOST_PREDICATE_COMPARISON, ".08"), // Cost of copying the data from disk to the DP2 Cache: DDflt0_(CPUCOST_SCAN_DSK_TO_DP2_PER_KB, "2.5"), DDflt0_(CPUCOST_SCAN_DSK_TO_DP2_PER_SEEK, "0.0"), // The communication between DP2 and ExeInDp2 requires to encode // and decode the key. DDflt0_(CPUCOST_SCAN_KEY_LENGTH, "0."), // The communication between DP2 and ExeInDp2 is complex and // ever changing. 
The following factor is introduced to // make the costing of scan fit observed CPU time for the scan: DDflt0_(CPUCOST_SCAN_OVH_PER_KB, "0.984215"), DDflt0_(CPUCOST_SCAN_OVH_PER_ROW, "0.0"), // It takes about 1/3 of a second to open a table, thus with a // 0.00005 ff for cpu elapsed time we get: // 1/3/0.00005 = 7000 thousands instructions // CPUCOST_SUBSET_OPEN lumps together all the overhead needed // to set-up the access to each partition. Thus it is a blocking // cost, nothing can overlap with it. DDflt0_(CPUCOST_SUBSET_OPEN, "7000"), DDflt0_(CPUCOST_SUBSET_OPEN_AFTER_FIRST, "1250"), DDflt0_(CPUCOST_TUPLE_REFERENCE, ".001"), DDui___(CREATE_DEFINITION_SCHEMA_VERSION, "0"), DDkwd__(CREATE_EXTERNAL_USER_NAME_INDEX, "OFF"), DDkwd__(CREATE_FOR_NO_RDF_REPLICATE, "OFF"), DDkwd__(CREATE_METADATA_TABLE, "OFF"), DDkwd__(CREATE_OBJECTS_IN_METADATA_ONLY, "OFF"), DDkwd__(CROSS_PRODUCT_CONTROL, "ON"), SDDui___(CYCLIC_ESP_PLACEMENT, "1"), // if this one is "ON" it overwrites optimizer heuristics 4 & 5 as "ON" // if it's "OFF" then the defaults of the two heuristics will be used DDkwd__(DATA_FLOW_OPTIMIZATION, "ON"), // DDL Default location support DDdskNS(DDL_DEFAULT_LOCATIONS, ""), DDkwd__(DDL_EXPLAIN, "OFF"), DDkwd__(DDL_TRANSACTIONS, "ON"), // We ignore this setting for the first (SYSTEM_DEFAULTS) table open+read. DDkwd__(DEFAULTS_TABLE_ACCESS_WARNINGS, "OFF"), SDDkwd__(DEFAULT_CHARSET, (char *)SQLCHARSETSTRING_ISO88591), XDDui1__(DEFAULT_DEGREE_OF_PARALLELISM, "2"), SDDkwd__(DEFAULT_SCHEMA_ACCESS_ONLY, "OFF"), SDDkwd__(DEFAULT_SCHEMA_NAMETYPE, "SYSTEM"), // These DEF_xxx values of "" get filled in by updateSystemParameters(). #define def_DEF_CHUNK_SIZE 5000000.0 #define str_DEF_CHUNK_SIZE "5000000.0" // DDui2__(DEF_CHUNK_SIZE, str_DEF_CHUNK_SIZE), DD_____(DEF_CPU_ARCHITECTURE, ""), DDui1__(DEF_DISCS_ON_CLUSTER, ""), DDui1__(DEF_INSTRUCTIONS_SECOND, ""), DDui___(DEF_LOCAL_CLUSTER_NUMBER, ""), DDui___(DEF_LOCAL_SMP_NODE_NUMBER, ""), //DEF_MAX_HISTORY_ROWS made external RV 06/21/01 CR 10-010425-2440 XDDui1__(DEF_MAX_HISTORY_ROWS, "1024"), DDui___(DEF_NUM_BM_CHUNKS, ""), DDui1__(DEF_NUM_NODES_IN_ACTIVE_CLUSTERS, ""), DDui1__(DEF_NUM_SMP_CPUS, ""), DDui2__(DEF_PAGE_SIZE, ""), DDui1__(DEF_PHYSICAL_MEMORY_AVAILABLE, ""), DDui1__(DEF_TOTAL_MEMORY_AVAILABLE, ""), DDui1__(DEF_VIRTUAL_MEMORY_AVAILABLE, ""), DDkwd__(DESTROY_ORDER_AFTER_REPARTITIONING, "OFF"), // detailed executor statistics DDkwd__(DETAILED_STATISTICS, "OPERATOR"), DDkwd__(DIMENSIONAL_QUERY_OPTIMIZATION, "OFF"), DDkwd__(DISABLE_BUFFERED_INSERTS, "OFF"), DDkwd__(DISABLE_READ_ONLY, "OFF"), DD_____(DISPLAY_DATA_FLOW_GRAPH, "OFF"), XDDkwd__(DISPLAY_DIVISION_BY_COLUMNS, "OFF"), // opens are distributed among all partitions instead of just root. // 0: no distribution, only use root. // -1: max distribution, all partitions // <number>: num of partitions per segment DDint__(DISTRIBUTE_OPENS, "-1"), // temp. disable dop reduction logic DDflt0_(DOP_REDUCTION_ROWCOUNT_THRESHOLD, "0.0"), DDkwd__(DO_MINIMAL_RENAME, "OFF"), // if set, then space needed for executor structures at runtime is // optimized such that the allocation starts with a low number and then // is allocated on a need basis. This means that we may have to allocate // more smaller chunks if much space is needed. But it helps in the case // where many plans are being used and each one only takes a small amount // of space. This optimization especially helps in case of Dp2 fragments // as there is only a finite amount of space available there. 
Once that // limit is reached, and a new plan is shipped, it means that an existing // eid plan from dp2 memory need to be swapped out and then refixed up. // By reducing space utilization, we end up with more eid sessions in // use inside of dp2. DDkwd__(DO_RUNTIME_EID_SPACE_COMPUTATION, "OFF"), DDkwd__(DO_RUNTIME_SPACE_OPTIMIZATION, "OFF"), DDui2__(DP2_BLOCK_HEADER_SIZE, "96"), // DP2 Cache defaults as of 06/08/98. DDui1__(DP2_CACHE_1024_BLOCKS, "152"), DDui1__(DP2_CACHE_16K_BLOCKS, "1024"), DDui1__(DP2_CACHE_2048_BLOCKS, "150"), DDui1__(DP2_CACHE_32K_BLOCKS, "512"), DDui1__(DP2_CACHE_4096_BLOCKS, "4096"), DDui1__(DP2_CACHE_512_BLOCKS, "152"), DDui1__(DP2_CACHE_8K_BLOCKS, "2048"), // The cache size is about 2000 pages @ 4k each page DDui1__(DP2_CACHE_SIZE_IN_KB, "8000"), // Exchange Costing // 6/12/98. // End of buffer header is 32 bytes or .0313 KB. // Each Exchange->DP2 request is 48 bytes or .0469 KB. DDflte_(DP2_END_OF_BUFFER_HEADER_SIZE, ".0313"), DDflte_(DP2_EXCHANGE_REQUEST_SIZE, ".0469"), DDpct__(DP2_FRACTION_SEEK_FROM_RANDOM_TO_INORDER, "25"), DDui2__(DP2_MAX_READ_PER_ACCESS_IN_KB, "256"), // The buffer size, as of 10/07/97 is 32K DDui2__(DP2_MESSAGE_BUFFER_SIZE, "56"), // Exchange Costing // 6/12/98. // Message header for Exchange->DP2 is 18 bytes or .0176 KB DDflte_(DP2_MESSAGE_HEADER_SIZE, ".0176"), DDui2__(DP2_MESSAGE_HEADER_SIZE_BYTES, "18"), DDui1__(DP2_MINIMUM_FILE_SIZE_FOR_SEEK_IN_BLOCKS, "256"), DDint__(DP2_PRIORITY, "-1001"), DDint__(DP2_PRIORITY_DELTA, "-1001"), DDui1__(DP2_SEQ_READS_WITHOUT_SEEKS, "100"), DDkwd__(DYNAMIC_HISTOGRAM_COMPRESSION, "ON"), DDui2__(DYN_PA_QUEUE_RESIZE_INIT_DOWN, "1024"), DDui2__(DYN_PA_QUEUE_RESIZE_INIT_UP, "1024"), DDui2__(DYN_QUEUE_RESIZE_FACTOR, "4"), DDui2__(DYN_QUEUE_RESIZE_INIT_DOWN, "4"), DDui2__(DYN_QUEUE_RESIZE_INIT_UP, "4"), DDui1__(DYN_QUEUE_RESIZE_LIMIT, "9"), DDkwd__(EID_SPACE_USAGE_OPT, "OFF"), // For both of these CQDs see executor/ExDp2Trace.h for values. 
DDint__(EID_TRACE_STATES, "0"), DDtp___(EID_TRACE_STR, ""), DDkwd__(ELIMINATE_REDUNDANT_JOINS, "ON"), DDkwd__(ENABLE_DP2_XNS, "OFF"), DDSint__(ESP_ASSIGN_DEPTH, "0"), DDSint__(ESP_FIXUP_PRIORITY_DELTA, "0"), DDint__(ESP_IDLE_TIMEOUT, "1800"), // To match with set session defaults value DDkwd__(ESP_MULTI_FRAGMENTS, "ON"), DDkwd__(ESP_MULTI_FRAGMENT_QUOTAS, "ON"), DDui1500_4000(ESP_MULTI_FRAGMENT_QUOTA_VM, "4000"), DDui1_6(ESP_NUM_FRAGMENTS, "3"), DDui1_6(ESP_NUM_FRAGMENTS_WITH_QUOTAS, "6"), DDkwd__(ESP_ON_AGGREGATION_NODES_ONLY, "OFF"), DDSint__(ESP_PRIORITY, "0"), DDSint__(ESP_PRIORITY_DELTA, "0"), // Disable hints - if SYSTEM, enable on SSD, and disable only on HDD DDkwd__(EXE_BMO_DISABLE_CMP_HINTS_OVERFLOW_HASH, "SYSTEM"), DDkwd__(EXE_BMO_DISABLE_CMP_HINTS_OVERFLOW_SORT, "SYSTEM"), DDkwd__(EXE_BMO_DISABLE_OVERFLOW, "OFF"), DDui___(EXE_BMO_MIN_SIZE_BEFORE_PRESSURE_CHECK_IN_MB, "50"), DDkwd__(EXE_BMO_SET_BUFFERED_WRITES, "OFF"), SDDkwd__(EXE_DIAGNOSTIC_EVENTS, "OFF"), DDui1__(EXE_HGB_INITIAL_HT_SIZE, "262144"), // == hash buffer DDflt__(EXE_HJ_MIN_NUM_CLUSTERS, "4"), DDkwd__(EXE_LOG_RETRY_IPC, "OFF"), // Total size of memory (in MB) available to BMOs (e.g., 1200 MB) SDDui___(EXE_MEMORY_AVAILABLE_IN_MB, "1200"), SDDui___(EXE_MEMORY_FOR_PARTIALHGB_IN_MB, "100"), SDDui___(EXE_MEMORY_FOR_PROBE_CACHE_IN_MB, "100"), // lower-bound memory limit for BMOs/nbmos (in MB) DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_EXCHANGE, "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_HASHGROUPBY , "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_HASHJOIN, "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_MERGEJOIN, "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_PA , "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_PROBE_CACHE , "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_SEQUENCE , "10"), DDui___(EXE_MEMORY_LIMIT_LOWER_BOUND_SORT , "10"), // total memory limit per CPU per query in MB DDpct1_50(EXE_MEMORY_LIMIT_NONBMOS_PERCENT, "15"), XDDui___(EXE_MEMORY_LIMIT_PER_CPU, "0"), // Memory not available for BMOs in master fragment in mxosrvr // (mostly due to QIO). DDui___(EXE_MEMORY_RESERVED_FOR_MXOSRVR_IN_MB,"544"), // Override the memory quota system; set limit per each and every BMO SDDflt__(EXE_MEM_LIMIT_PER_BMO_IN_MB, "0"), DDui1__(EXE_NUM_CONCURRENT_SCRATCH_IOS, "4"), // DDkwd__(EXE_PARALLEL_DDL, "ON"), DDui___(EXE_PA_DP2_STATIC_AFFINITY, "1"), DDkwd__(EXE_SINGLE_BMO_QUOTA, "ON"), // The following 3 are only for testing overflow; zero value means: ignore DDui___(EXE_TEST_FORCE_CLUSTER_SPLIT_AFTER_MB, "0"), DDui___(EXE_TEST_FORCE_HASH_LOOP_AFTER_NUM_BUFFERS, "0"), DDui___(EXE_TEST_HASH_FORCE_OVERFLOW_EVERY, "0"), DDkwd__(EXE_UTIL_RWRS, "OFF"), DDkwd__(EXPAND_DP2_SHORT_ROWS, "ON"), XDDint__(EXPLAIN_DESCRIPTION_COLUMN_SIZE, "-1"), DDkwd__(EXPLAIN_DETAIL_COST_FOR_CALIBRATION, "FALSE"), DDkwd__(EXPLAIN_DISPLAY_FORMAT, "EXTERNAL"), DDkwd__(EXPLAIN_IN_RMS, "ON"), DDui___(EXPLAIN_OUTPUT_ROW_SIZE, "80"), DDui1__(EXPLAIN_ROOT_INPUT_VARS_MAX, "2000"), // maximum number of inputs that we can tolerate to // explain information for inputVars expression // this is needed to avoid stack overflow DDkwd__(EXPLAIN_SPACE_OPT, "ON"), DDkwd__(EXPLAIN_STRATEGIZER_PARAMETERS, "OFF"), DDflte_(EX_OP_ALLOCATE_ATP, ".02"), // Calibration // 01/23/98: 50. 
// Original: .1 DDflte_(EX_OP_ALLOCATE_BUFFER, "50."), DDflte_(EX_OP_ALLOCATE_BUFFER_POOL, ".1"), DDflte_(EX_OP_ALLOCATE_TUPLE, ".05"), // copy_atp affects the costing of NJ // History: // 08/21/98: 0.02, The previous change affected more than one operrator // 08/13/98: 1.0 // 01/08/98: 0.02 DDflte_(EX_OP_COPY_ATP, "1.1335"), DDflte_(EX_OP_DEQUEUE, ".02"), DDflte_(EX_OP_ENQUEUE, ".02"), DDkwd__(FAKE_VOLUME_ASSIGNMENTS, "OFF"), DDui1__(FAKE_VOLUME_NUM_VOLUMES, "24"), DDkwd__(FAST_DELETE, "OFF"), DDkwd__(FAST_DP2_SUBSET_OPT, "ON"), // upper and lower limit (2,10) must be in sync with error values in //ExFastTransport.cpp DDkwd__(FAST_EXTRACT_DIAGS, "OFF"), DDui2_10(FAST_EXTRACT_IO_BUFFERS, "6"), DDui___(FAST_EXTRACT_IO_TIMEOUT_SEC, "60"), DDkwd__(FAST_REPLYDATA_MOVE, "ON"), SDDkwd__(FFDC_DIALOUTS_FOR_MXCMP, "OFF"), DDkwd__(FIND_COMMON_SUBEXPRS_IN_OR, "ON"), DDui___(FLOAT_ESP_RANDOM_NUM_SEED, "0"), DDkwd__(FORCE_BUSHY_CQS, "ON"), DDkwd__(FORCE_PARALLEL_CREATE_INDEX, "OFF"), DDkwd__(FORCE_PARALLEL_INSERT_SELECT, "OFF"), DDkwd__(FORCE_PASS_ONE, "OFF"), DDkwd__(FORCE_PASS_TWO, "ON"), // Control if plan fragments need to be compressed // DDui___(FRAG_COMPRESSION_THRESHOLD, "16"), // Controls FSO Tests for debug // DDui___(FSO_RUN_TESTS, "0"), // Controls use of Simple File Scan Optimizer // IF 0 - Use original "Complex" File Scan Optimizer. // (in case simple causes problems) // IF 1 - Use logic to determine FSO to use. (default) // IF 2 - Use logic to determine FSO to use, but also use new // executor predicate costing. // IF >2 - Always use new "Simple" File Scan Optimizer. // (not recommended) // DDui___(FSO_TO_USE, "1"), // Disallow/Allow full outer joins in MultiJoin framework DDkwd__(FULL_OUTER_JOINS_SPOIL_JBB, "OFF"), DDkwd__(GA_PROP_INDEXES_ARITY_1, "ON"), // this default value is filled in // NADefaults::initCurrentDefaultsWithDefaultDefaults. The default value // is ON for static compiles and OFF for dynamic queries. 
DDkwd__(GENERATE_EXPLAIN, "ON"), DDipcBu(GEN_ALIGNED_PA_DP2_BUFFER_SIZE, "31000"), DDui1__(GEN_CBUF_BUFFER_SIZE, "30000"), DDui1__(GEN_CBUF_NUM_BUFFERS, "4"), DDui1__(GEN_CBUF_SIZE_DOWN, "8"), DDui1__(GEN_CBUF_SIZE_UP, "8"), DDui___(GEN_CS_BUFFER_SIZE, "0"), DDui___(GEN_CS_NUM_BUFFERS, "0"), DDui___(GEN_CS_SIZE_DOWN, "4"), DDui___(GEN_CS_SIZE_UP, "4"), DDkwd__(GEN_DBLIMITS_LARGER_BUFSIZE, "ON"), DDui1__(GEN_DDL_BUFFER_SIZE, "30000"), DDui1__(GEN_DDL_NUM_BUFFERS, "4"), DDui1__(GEN_DDL_SIZE_DOWN, "2"), DDui1__(GEN_DDL_SIZE_UP, "32"), DDui1__(GEN_DEL_BUFFER_SIZE, "512"), DDui1__(GEN_DEL_NUM_BUFFERS, "5"), DDui1__(GEN_DEL_SIZE_DOWN, "2"), DDui1__(GEN_DEL_SIZE_UP, "2"), DDui1__(GEN_DESC_BUFFER_SIZE, "10240"), DDui1__(GEN_DESC_NUM_BUFFERS, "4"), DDui1__(GEN_DESC_SIZE_DOWN, "2"), DDui1__(GEN_DESC_SIZE_UP, "16"), DDui1__(GEN_DP2I_BUFFER_SIZE, "10000"), DDui1__(GEN_DP2I_NUM_BUFFERS, "2"), DDui1__(GEN_DP2I_SIZE_DOWN, "32"), DDui1__(GEN_DP2I_SIZE_UP, "64"), DDui1__(GEN_DPDU_BUFFER_SIZE, "2"), DDui1__(GEN_DPDU_NUM_BUFFERS, "1"), DDui1__(GEN_DPDU_SIZE_DOWN, "2"), DDui1__(GEN_DPDU_SIZE_UP, "2"), DDui1__(GEN_DPRO_BUFFER_SIZE, "10240"), DDui1__(GEN_DPRO_NUM_BUFFERS, "1"), DDui1__(GEN_DPRO_SIZE_DOWN, "16"), DDui1__(GEN_DPRO_SIZE_UP, "16"), DDui1__(GEN_DPSO_BUFFER_SIZE, "10240"), DDui1__(GEN_DPSO_NUM_BUFFERS, "4"), DDui1__(GEN_DPSO_SIZE_DOWN, "2048"), DDui1__(GEN_DPSO_SIZE_UP, "2048"), DDui1__(GEN_DPUO_BUFFER_SIZE, "10000"), DDui1__(GEN_DPUO_NUM_BUFFERS, "4"), DDui1__(GEN_DPUO_SIZE_DOWN, "2048"), DDui1__(GEN_DPUO_SIZE_UP, "2048"), DDui1__(GEN_DPVI_BUFFER_SIZE, "10000"), DDui1__(GEN_DPVI_NUM_BUFFERS, "2"), DDui1__(GEN_DPVI_SIZE_DOWN, "32"), DDui1__(GEN_DPVI_SIZE_UP, "64"), DDui___(GEN_EIDR_BROKEN_TREE_CHECK_INTERVAL, "128"), DDipcBu(GEN_EIDR_BUFFER_SIZE, "31000"), DDui1__(GEN_EIDR_NUM_BUFFERS, "3"), DDui1__(GEN_EIDR_SIZE_DOWN, "2"), DDui1__(GEN_EIDR_SIZE_UP, "2"), DDui___(GEN_EIDR_STATS_REPLY_INTERVAL, "3000"), DDint__(GEN_EXCHANGE_MAX_MEM_IN_KB, "4000"), DDint__(GEN_EXCHANGE_MSG_COUNT, "80"), // Fast extract settings are for UDR method invocations DDui1__(GEN_FE_BUFFER_SIZE, "31000"), DDui1__(GEN_FE_NUM_BUFFERS, "2"), DDui1__(GEN_FE_SIZE_DOWN, "4"), DDui1__(GEN_FE_SIZE_UP, "4"), DDui1__(GEN_FSRT_BUFFER_SIZE, "5120"), DDui1__(GEN_FSRT_NUM_BUFFERS, "5"), DDui1__(GEN_FSRT_SIZE_DOWN, "2"), DDui1__(GEN_FSRT_SIZE_UP, "8"), // Do not alter the buffer size; it must be 56K for SCRATCH_MGMT_OPTION == 5 DDui1__(GEN_HGBY_BUFFER_SIZE, "262144"), DDui1__(GEN_HGBY_NUM_BUFFERS , "5"), DDui1__(GEN_HGBY_PARTIAL_GROUP_FLUSH_THRESHOLD, "100"), DDui___(GEN_HGBY_PARTIAL_GROUP_ROWS_PER_CLUSTER, "0"), DDui1__(GEN_HGBY_SIZE_DOWN, "2048"), DDui1__(GEN_HGBY_SIZE_UP, "2048"), // Do not alter the buffer size; it must be 56K for SCRATCH_MGMT_OPTION == 5 DDui1__(GEN_HSHJ_BUFFER_SIZE, "262144"), // Controls use of the hash join min/max optimization. 
DDkwd__(GEN_HSHJ_MIN_MAX_OPT, "OFF"), DDui1__(GEN_HSHJ_NUM_BUFFERS, "1"), DDui1__(GEN_HSHJ_SIZE_DOWN, "2048"), DDui1__(GEN_HSHJ_SIZE_UP, "2048"), DDui1__(GEN_IAR_BUFFER_SIZE, "10240"), DDui1__(GEN_IAR_NUM_BUFFERS, "1"), DDui1__(GEN_IAR_SIZE_DOWN, "2"), DDui1__(GEN_IAR_SIZE_UP, "4"), DDui1__(GEN_IMDT_BUFFER_SIZE, "2"), DDui1__(GEN_IMDT_NUM_BUFFERS, "1"), DDui1__(GEN_IMDT_SIZE_DOWN, "2"), DDui1__(GEN_IMDT_SIZE_UP, "2"), DDui1__(GEN_INS_BUFFER_SIZE, "10240"), DDui1__(GEN_INS_NUM_BUFFERS, "3"), DDui1__(GEN_INS_SIZE_DOWN, "4"), DDui1__(GEN_INS_SIZE_UP, "128"), // Controls LeanEr Expression generation DDkwd__(GEN_LEANER_EXPRESSIONS, "ON"), DDui1__(GEN_LOCK_BUFFER_SIZE, "1024"), DDui1__(GEN_LOCK_NUM_BUFFERS, "1"), DDui1__(GEN_LOCK_SIZE_DOWN, "4"), DDui1__(GEN_LOCK_SIZE_UP, "4"), DDui1__(GEN_MATR_BUFFER_SIZE, "2"), DDui1__(GEN_MATR_NUM_BUFFERS, "1"), DDui1__(GEN_MATR_SIZE_DOWN, "2"), DDui1__(GEN_MATR_SIZE_UP, "8"), DDui___(GEN_MAX_NUM_PART_DISK_ENTRIES, "3"), DDui___(GEN_MAX_NUM_PART_NODE_ENTRIES, "255"), DDui1__(GEN_MEM_PRESSURE_THRESHOLD, "100"), DDui1__(GEN_MJ_BUFFER_SIZE, "32768"), DDui1__(GEN_MJ_NUM_BUFFERS, "1"), DDui1__(GEN_MJ_SIZE_DOWN, "2"), DDui1__(GEN_MJ_SIZE_UP, "1024"), DDui1__(GEN_ONLJ_BUFFER_SIZE, "5120"), DDui1__(GEN_ONLJ_LEFT_CHILD_QUEUE_DOWN, "4"), DDui1__(GEN_ONLJ_LEFT_CHILD_QUEUE_UP, "2048"), DDui1__(GEN_ONLJ_NUM_BUFFERS, "5"), DDui1__(GEN_ONLJ_RIGHT_SIDE_QUEUE_DOWN, "2048"), DDui1__(GEN_ONLJ_RIGHT_SIDE_QUEUE_UP, "2048"), DDkwd__(GEN_ONLJ_SET_QUEUE_LEFT, "ON"), DDkwd__(GEN_ONLJ_SET_QUEUE_RIGHT, "ON"), DDui1__(GEN_ONLJ_SIZE_DOWN, "2048"), DDui1__(GEN_ONLJ_SIZE_UP, "2048"), DDui1__(GEN_PAR_LAB_OP_BUFFER_SIZE, "1024"), DDui1__(GEN_PAR_LAB_OP_NUM_BUFFERS, "1"), DDui1__(GEN_PAR_LAB_OP_SIZE_DOWN, "2"), DDui1__(GEN_PAR_LAB_OP_SIZE_UP, "4"), DDipcBu(GEN_PA_BUFFER_SIZE, "31000"), DDui1__(GEN_PA_NUM_BUFFERS, "5"), DDui1__(GEN_PA_SIZE_DOWN, "2048"), DDui1__(GEN_PA_SIZE_UP, "2048"), DDui1__(GEN_PROBE_CACHE_NUM_ENTRIES, "16384"),// number of entries DDui___(GEN_PROBE_CACHE_NUM_INNER, "0"), //0 means compiler decides DDui1__(GEN_PROBE_CACHE_SIZE_DOWN, "2048"), DDui1__(GEN_PROBE_CACHE_SIZE_UP, "2048"), DDui1__(GEN_RCRS_BUFFER_SIZE, "2"), DDui1__(GEN_RCRS_NUM_BUFFERS, "1"), DDui1__(GEN_RCRS_SIZE_DOWN, "8"), DDui1__(GEN_RCRS_SIZE_UP, "16"), DDkwd__(GEN_RESET_ACCESS_COUNTER, "OFF"), DDui1__(GEN_ROOT_BUFFER_SIZE, "2"), DDui1__(GEN_ROOT_NUM_BUFFERS, "1"), DDui1__(GEN_ROOT_SIZE_DOWN, "2"), DDui1__(GEN_ROOT_SIZE_UP, "2"), DDui1__(GEN_SAMPLE_BUFFER_SIZE, "5120"), DDui1__(GEN_SAMPLE_NUM_BUFFERS, "5"), DDui1__(GEN_SAMPLE_SIZE_DOWN, "16"), DDui1__(GEN_SAMPLE_SIZE_UP, "16"), DDui1__(GEN_SCAN_BUFFER_SIZE, "10240"), DDui1__(GEN_SCAN_NUM_BUFFERS, "10"), DDui1__(GEN_SCAN_SIZE_DOWN, "16"), DDui1__(GEN_SCAN_SIZE_UP, "32"), DDui1__(GEN_SEQFUNC_BUFFER_SIZE, "5120"), DDui1__(GEN_SEQFUNC_NUM_BUFFERS, "5"), DDui1__(GEN_SEQFUNC_SIZE_DOWN, "16"), DDui1__(GEN_SEQFUNC_SIZE_UP, "16"), DDkwd__(GEN_SEQFUNC_UNLIMITED_HISTORY, "OFF"), DDui1__(GEN_SEQ_BUFFER_SIZE, "512"), DDui1__(GEN_SEQ_NUM_BUFFERS, "5"), DDui1__(GEN_SEQ_SIZE_DOWN, "2"), DDui1__(GEN_SEQ_SIZE_UP, "2"), DDui1__(GEN_SGBY_BUFFER_SIZE, "5120"), DDui1__(GEN_SGBY_NUM_BUFFERS, "5"), DDui1__(GEN_SGBY_SIZE_DOWN, "2048"), DDui1__(GEN_SGBY_SIZE_UP, "2048"), DDui1__(GEN_SID_BUFFER_SIZE, "1024"), DDui1__(GEN_SID_NUM_BUFFERS, "4"), DDui1__(GEN_SNDB_BUFFER_SIZE, "2"), DDui1__(GEN_SNDB_NUM_BUFFERS, "4"), DDui1__(GEN_SNDB_SIZE_DOWN, "4"), DDui1__(GEN_SNDB_SIZE_UP, "128"), DDui___(GEN_SNDT_BUFFER_SIZE_DOWN, "0"), DDui___(GEN_SNDT_BUFFER_SIZE_UP, "0"), 
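  // -----------------------------------------------------------------------
  // Illustrative note (added for this document, not part of the original
  // source): the GEN_<op>_BUFFER_SIZE / _NUM_BUFFERS / _SIZE_DOWN /
  // _SIZE_UP quadruples in this block are the generator's per-operator
  // sizing knobs: the size and count of the operator's message buffers,
  // and the initial lengths of its down and up queues. A generator
  // routine would read them roughly as
  //
  //   bufSize  = getDefaults().getAsULong(GEN_HGBY_BUFFER_SIZE);
  //   numBufs  = getDefaults().getAsULong(GEN_HGBY_NUM_BUFFERS);
  //   sizeDown = getDefaults().getAsULong(GEN_HGBY_SIZE_DOWN);
  //   sizeUp   = getDefaults().getAsULong(GEN_HGBY_SIZE_UP);
  //
  // (sketch only -- the accessor spelling and the exact meaning of the
  // four knobs are assumptions inferred from the surrounding comments,
  // e.g. "== hash buffer" and "it must be 56K for SCRATCH_MGMT_OPTION
  // == 5").
  // -----------------------------------------------------------------------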
DDui1__(GEN_SNDT_NUM_BUFFERS, "2"), DDkwd__(GEN_SNDT_RESTRICT_SEND_BUFFERS, "ON"), DDui1__(GEN_SNDT_SIZE_DOWN, "4"), DDui1__(GEN_SNDT_SIZE_UP, "128"), DDui1__(GEN_SORT_MAX_BUFFER_SIZE, "5242880"), DDui1__(GEN_SORT_MAX_NUM_BUFFERS, "160"), DDui___(GEN_SORT_MIN_BUFFER_SIZE, "0"), DDui1__(GEN_SORT_NUM_BUFFERS, "4"), DDui1__(GEN_SORT_SIZE_DOWN, "2"), DDui1__(GEN_SORT_SIZE_UP, "1024"), DDui1__(GEN_SPLB_BUFFER_SIZE, "2"), DDui1__(GEN_SPLB_NUM_BUFFERS, "1"), DDui1__(GEN_SPLB_SIZE_DOWN, "2"), DDui1__(GEN_SPLB_SIZE_UP, "2"), DDui1__(GEN_SPLT_BUFFER_SIZE, "2"), DDui1__(GEN_SPLT_NUM_BUFFERS, "1"), DDui1__(GEN_SPLT_SIZE_DOWN, "2048"), DDui1__(GEN_SPLT_SIZE_UP, "2048"), DDui1__(GEN_STPR_BUFFER_SIZE, "1024"), DDui1__(GEN_STPR_NUM_BUFFERS, "3"), DDui1__(GEN_STPR_SIZE_DOWN, "2"), DDui1__(GEN_STPR_SIZE_UP, "2"), DDui1__(GEN_TFLO_BUFFER_SIZE, "5120"), DDui1__(GEN_TFLO_NUM_BUFFERS, "2"), DDui1__(GEN_TFLO_SIZE_DOWN, "8"), DDui1__(GEN_TFLO_SIZE_UP, "16"), DDui512(GEN_TIMEOUT_BUFFER_SIZE, "4096"), DDui1__(GEN_TIMEOUT_NUM_BUFFERS, "1"), DDui2__(GEN_TIMEOUT_SIZE_DOWN, "2"), DDui2__(GEN_TIMEOUT_SIZE_UP, "4"), DDui1__(GEN_TRAN_BUFFER_SIZE, "4096"), DDui1__(GEN_TRAN_NUM_BUFFERS, "1"), DDui1__(GEN_TRAN_SIZE_DOWN, "2"), DDui1__(GEN_TRAN_SIZE_UP, "4"), DDui1__(GEN_TRSP_BUFFER_SIZE, "10240"), DDui1__(GEN_TRSP_NUM_BUFFERS, "5"), DDui1__(GEN_TRSP_SIZE_DOWN, "16"), DDui1__(GEN_TRSP_SIZE_UP, "16"), DDui1__(GEN_TUPL_BUFFER_SIZE, "1024"), DDui1__(GEN_TUPL_NUM_BUFFERS, "4"), DDui1__(GEN_TUPL_SIZE_DOWN, "2048"), DDui1__(GEN_TUPL_SIZE_UP, "2048"), // GEN_UDRRS_ settings are for stored procedure result // set proxy plans DDui1__(GEN_UDRRS_BUFFER_SIZE, "31000"), DDui1__(GEN_UDRRS_NUM_BUFFERS, "2"), DDui1__(GEN_UDRRS_SIZE_DOWN, "4"), DDui1__(GEN_UDRRS_SIZE_UP, "128"), // GEN_UDR_ settings are for UDR method invocations DDui1__(GEN_UDR_BUFFER_SIZE, "31000"), DDui1__(GEN_UDR_NUM_BUFFERS, "2"), DDui1__(GEN_UDR_SIZE_DOWN, "4"), DDui1__(GEN_UDR_SIZE_UP, "4"), DDui1__(GEN_UNLJ_BUFFER_SIZE, "5120"), DDui1__(GEN_UNLJ_NUM_BUFFERS, "5"), DDui1__(GEN_UNLJ_SIZE_DOWN, "8"), DDui1__(GEN_UNLJ_SIZE_UP, "16"), DDui1__(GEN_UN_BUFFER_SIZE, "10240"), DDui1__(GEN_UN_NUM_BUFFERS, "5"), DDui1__(GEN_UN_SIZE_DOWN, "8"), DDui1__(GEN_UN_SIZE_UP, "16"), DDui1__(GEN_UPD_BUFFER_SIZE, "5120"), DDui1__(GEN_UPD_NUM_BUFFERS, "5"), DDui1__(GEN_UPD_SIZE_DOWN, "2"), DDui1__(GEN_UPD_SIZE_UP, "2"), // Used when Compressed_Internal_Format is on to reduce space in the // hash buffers (Hash Join and Hash Groupby) and sort buffers. DDkwd__(GEN_VARIABLE_LENGTH_BUFFERS, "OFF"), DDui1__(GEN_XPLN_BUFFER_SIZE, "4096"), DDui1__(GEN_XPLN_NUM_BUFFERS, "3"), DDui1__(GEN_XPLN_SIZE_DOWN, "8"), DDui1__(GEN_XPLN_SIZE_UP, "16"), // When less or equal to this CQD (5000 rows by default), a partial root // will be running in the Master. Set to 0 to disable the feature. DDint__(GROUP_BY_PARTIAL_ROOT_THRESHOLD, "5000"), DDkwd__(GROUP_BY_USING_ORDINAL, "MINIMUM"), // HASH_JOINS ON means do HASH_JOINS XDDkwd__(HASH_JOINS, "ON"), DDkwd__(HASH_JOINS_TYPE1_PLAN1, "ON"), DDkwd__(HASH_JOINS_TYPE1_PLAN2, "ON"), // HBase defaults // Some of the more important ones: // HBASE_CATALOG: Catalog of "_ROW_" and "_CELL_" schemas // HBASE_COPROCESSORS: Enable use of co-processors for aggregates. 
// need to set the coprocessor in HBase config file // HBASE_INTERFACE: JNI or JNI_TRX (transactional interface) // HBASE_MAX_COLUMN_xxx_LENGTH: Max length of some // string columns in the "_ROW_" and "_CELL_" schemas // HBASE_SQL_IUD_SEMANTICS: Off: Don't check for existing rows for insert/update DDkwd__(HBASE_ASYNC_DROP_TABLE, "OFF"), DDkwd__(HBASE_ASYNC_OPERATIONS, "ON"), // HBASE_CACHE_BLOCKS, ON => cache every scan, OFF => cache no scan // SYSTEM => cache scans which take less than 1 RS block cache mem. DDui___(HBASE_BLOCK_SIZE, "65536"), DDkwd__(HBASE_CACHE_BLOCKS, "SYSTEM"), DD_____(HBASE_CATALOG, "HBASE"), DDkwd__(HBASE_CHECK_AND_UPDEL_OPT, "ON"), DDkwd__(HBASE_COMPRESSION_OPTION, ""), DDkwd__(HBASE_COPROCESSORS, "ON"), DDkwd__(HBASE_CREATE_OLD_MD_FOR_UPGRADE_TESTING, "OFF"), DDkwd__(HBASE_DATA_BLOCK_ENCODING_OPTION, ""), // If set to 'OFF' we get a stub cost of 1 for delete operations. // We can remove this once the delete costing code has broader // exposure. DDkwd__(HBASE_DELETE_COSTING, "ON"), DDflt0_(HBASE_DOP_PARALLEL_SCANNER, "0."), DDkwd__(HBASE_FILTER_PREDS, "OFF"), DDkwd__(HBASE_HASH2_PARTITIONING, "ON"), DDui___(HBASE_INDEX_LEVEL, "0"), DDui___(HBASE_MAX_COLUMN_INFO_LENGTH, "10000"), DDui___(HBASE_MAX_COLUMN_NAME_LENGTH, "100"), DDui___(HBASE_MAX_COLUMN_VAL_LENGTH, "1000"), DDui___(HBASE_MAX_ESPS, "9999"), DDui___(HBASE_MAX_NUM_SEARCH_KEYS, "512"), DDui1__(HBASE_MIN_BYTES_PER_ESP_PARTITION, "67108864"), DDkwd__(HBASE_NATIVE_IUD, "ON"), DDui1__(HBASE_NUM_CACHE_ROWS_MAX, "10000"), DDui1__(HBASE_NUM_CACHE_ROWS_MIN, "100"), DDkwd__(HBASE_RANGE_PARTITIONING, "ON"), DDkwd__(HBASE_RANGE_PARTITIONING_MC_SPLIT, "ON"), DDkwd__(HBASE_RANGE_PARTITIONING_PARTIAL_COLS,"ON"), DDui___(HBASE_REGION_SERVER_MAX_HEAP_SIZE, "1024"), // in units of MB DDkwd__(HBASE_ROWSET_VSBB_OPT, "ON"), DDusht_(HBASE_ROWSET_VSBB_SIZE, "1024"), DDflt0_(HBASE_SALTED_TABLE_MAX_FILE_SIZE, "0"), DDkwd__(HBASE_SALTED_TABLE_SET_SPLIT_POLICY, "ON"), DD_____(HBASE_SCHEMA, "HBASE"), DDkwd__(HBASE_SERIALIZATION, "ON"), DD_____(HBASE_SERVER, ""), DDkwd__(HBASE_SMALL_SCANNER, "OFF"), DDkwd__(HBASE_SQL_IUD_SEMANTICS, "ON"), DDkwd__(HBASE_STATS_PARTITIONING, "ON"), DDkwd__(HBASE_TRANSFORM_UPDATE_TO_DELETE_INSERT, "OFF"), // If set to 'OFF' we get a stub cost of 1 for update operations. // We can remove this once the delete costing code has broader // exposure. This is 'OFF' at the moment because the update code // is only partially written. DDkwd__(HBASE_UPDATE_COSTING, "OFF"), DDkwd__(HBASE_UPDEL_CURSOR_OPT, "ON"), DDui___(HBASE_USE_FAKED_REGIONS, "0"), DD_____(HBASE_ZOOKEEPER_PORT, ""), DDui1__(HDFS_IO_BUFFERSIZE, "65536"), DDui___(HDFS_IO_BUFFERSIZE_BYTES, "0"), DDui1__(HDFS_IO_RANGE_TAIL, "16384"), DDkwd__(HDFS_PREFETCH, "ON"), DDkwd__(HDFS_READ_CONTINUE_ON_ERROR, "OFF"), DDui1__(HDFS_REPLICATION, "1"), DDkwd__(HDFS_USE_CURSOR_MULTI, "OFF"), DDkwd__(HGB_BITMUX, "OFF"), DDflt0_(HGB_CPUCOST_INITIALIZE, "1."), DDflt0_(HGB_DP2_MEMORY_LIMIT, "10000."), DDflte_(HGB_GROUPING_FACTOR_FOR_SPILLED_CLUSTERS, ".5"), DDflte_(HGB_MAX_TABLE_SIZE_FOR_CLUSTERS, "4E5"), DDflte_(HGB_MEMORY_AVAILABLE_FOR_CLUSTERS, "10"), DDflte_(HH_OP_ALLOCATE_BUCKET_ARRAY, ".1"), DDflte_(HH_OP_ALLOCATE_CLUSTER, ".1"), DDflte_(HH_OP_ALLOCATE_CLUSTERDB, ".1"), DDflte_(HH_OP_ALLOCATE_HASH_TABLE, ".05"), DDflt1_(HH_OP_HASHED_ROW_OVERHEAD, "8."), // From observation: // 03/11/98: probing the hash table is very inexpensive, // thus reduce this to almost zero. 
// change // CPUCOST_HASH_PER_KEY // when changing this value // It takes around 2 seconds to insert 100,000 rows into the chain: // @ 0.00005 secs per k instr: // k instr= 2/0.00005/100000 = 0.4 // History: // Before 03/11/98: 0.4 // Initially: 0.01 DDflte_(HH_OP_INSERT_ROW_TO_CHAIN, "0.51"), // From observation: // 03/11/98: probing the hash table is very inexpensive, // thus reduce this to almost zero. // 01/05/98: 15,433 rows probed per second @ 0.00005 seconds // per thousand of instructions, give: // seconds to probe one row = 1/15,433 = 0.000064796 // This time includes: time to position and to compare. Thus // subtract the time to compare to arrive to the proper number: // thd. of instructions per row inserted = // 1/15,433/0.00005 - CPUCOST_COMPARE_SIMPLE_DATA_TYPE = // 1.2959 - 0.2 = 1.0959 // History: // Before 03/11/98: 1.0959 // Before 01/05/98: 0.01 DDflt0_(HH_OP_PROBE_HASH_TABLE, "0.011"), DDflt0_(HH_OP_READ_HASH_BUFFER, "0."), DDflt0_(HH_OP_WRITE_HASH_BUFFER, "0."), // Added 10/16/02 DDkwd__(HIDE_INDEXES, "NONE"), DDansi_(HISTOGRAMS_SCHEMA, ""), // ------------------------------------------------------------------------- // Histogram fudge factors // ------------------------------------------------------------------------- //HIST_BASE_REDUCTION and HIST_PREFETCH externalized 08/21/01 CR 10-010713-3895 DDkwd__(HIST_ASSUME_INDEPENDENT_REDUCTION, "ON"), XDDkwd__(HIST_AUTO_GENERATION_OF_SAMPLE, "OFF"), DDkwd__(HIST_BASE_REDUCTION, "ON"), DDflt0_(HIST_BASE_REDUCTION_FUDGE_FACTOR, "0.1"), DDflt0_(HIST_CONSTANT_ALPHA, "0.5"), DDflt_0_1(HIST_DEFAULT_BASE_SEL_FOR_LIKE_WILDCARD, "0.50"), DDui1__(HIST_DEFAULT_NUMBER_OF_INTERVALS, "50"), DDui1__(HIST_DEFAULT_SAMPLE_MAX, "1000000"), DDui1__(HIST_DEFAULT_SAMPLE_MIN, "10000"), DDflt_0_1(HIST_DEFAULT_SAMPLE_RATIO, "0.01"), DDflte_(HIST_DEFAULT_SEL_FOR_BOOLEAN, "0.3333"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_IS_NULL, "0.01"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_JOIN_EQUAL, "0.3333"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_JOIN_RANGE, "0.3333"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_LIKE_NO_WILDCARD,"1.0"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_LIKE_WILDCARD, "0.10"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_PRED_EQUAL, "0.01"), DDflt_0_1(HIST_DEFAULT_SEL_FOR_PRED_RANGE, "0.3333"), // control the amount of data in each partition of the persistent sample tble. DDflt1_(HIST_FETCHCOUNT_SCRATCH_VOL_THRESHOLD, "10240000"), DDkwd__(HIST_FREQ_VALS_NULL_FIX, "ON"), DDkwd__(HIST_INCLUDE_SKEW_FOR_NON_INNER_JOIN, "ON"), DDkwd__(HIST_INTERMEDIATE_REDUCTION, "OFF"), DDflt0_(HIST_INTERMEDIATE_REDUCTION_FUDGE_FACTOR, "0.25"), DDflt_0_1(HIST_JOIN_CARD_LOWBOUND, "1.0"), DDui1__(HIST_LOW_UEC_THRESHOLD, "55"), DDui1__(HIST_MAX_NUMBER_OF_INTERVALS, "10000"), DDkwd__(HIST_MC_STATS_NEEDED, "ON"), DDkwd__(HIST_MERGE_FREQ_VALS_FIX, "ON"), // Histogram min/max optimization: when the predicate is of form // T.A = MIN/MAX(S.B), replace the histogram(T.A) with // single_int_histogram(MIN/MAX(S.B)). Do this only when // there is no local predicate on S and there exists a frequent // value that is equals to MIN/MAX(S.B). DDkwd__(HIST_MIN_MAX_OPTIMIZATION, "ON"), // This CQD is used to control the number of missing stats warnings // that should be generated. // 0 ? Display no warnings. // 1 ? Display only missing single column stats warnings. These include 6008 and 6011 // 2 ? Display all single column missing stats warnings and // multi-column missing stats warnings for Scans only. // 3 ? 
//     Display all missing single column stats warnings and missing
//     multi-column stats warnings for Scans and Join operators only.
// 4 ? Display all missing single column stats and missing multi-column
//     stats warnings for all operators including Scans, Joins and GroupBys.
// The CQD also does not have an impact on the auto update stats behavior. The stats
// will still be automatically generated even if the warnings have been suppressed.
// USTAT_AUTO_MISSING_STATS_LEVEL.
// Default behavior is to generate all warnings.
XDDui___(HIST_MISSING_STATS_WARNING_LEVEL, "4"),
// This specifies the time interval after which the fake statistics
// should be refreshed. This was done primarily for users
// who did not want to update statistics on temporary tables.
// If these statistics are cached, then this results in bad plans.
// These users can have this default set to 0, in which case histograms
// with fake statistics will never be cached. Note that when ustat
// automation is on, this value divided by 360 is used.
XDDui___(HIST_NO_STATS_REFRESH_INTERVAL, "3600"),
DDflt1_(HIST_NO_STATS_ROWCOUNT, "100"),
DDflt1_(HIST_NO_STATS_UEC, "2"),
DDflt1_(HIST_NO_STATS_UEC_CHAR1, "10"),
DDui1__(HIST_NUM_ADDITIONAL_DAYS_TO_EXTRAPOLATE, "4"),
DDintN1__(HIST_ON_DEMAND_STATS_SIZE, "0"),
DDui___(HIST_OPTIMISTIC_CARD_OPTIMIZATION, "1"),
XDDkwd__(HIST_PREFETCH, "ON"),
XDDkwd__(HIST_REMOVE_TRAILING_BLANKS, "ON"), // should remove after verifying code is solid
DDansi_(HIST_ROOT_NODE, ""),
XDDflt1_(HIST_ROWCOUNT_REQUIRING_STATS, "500"),
DDflt0_(HIST_SAME_TABLE_PRED_REDUCTION, "0.0"),
DDvol__(HIST_SCRATCH_VOL, ""),
// control the amount of data in each partition of the sample table.
DDflt1_(HIST_SCRATCH_VOL_THRESHOLD, "10240000"),
DDflt_0_1(HIST_SKEW_COST_ADJUSTMENT, "0.2"),
DDkwd__(HIST_SKIP_MC_FOR_NONKEY_JOIN_COLUMNS, "OFF"),
DDui___(HIST_TUPLE_FREQVAL_LIST_THRESHOLD, "40"),
DDkwd__(HIST_USE_HIGH_FREQUENCY_INFO, "ON"),
XDDkwd__(HIST_USE_SAMPLE_FOR_CARDINALITY_ESTIMATION, "ON"),
// CQDs for Trafodion on Hive
// Main ones to use:
//   HIVE_MAX_STRING_LENGTH: Hive "string" data type gets converted
//                           into a VARCHAR with this length.
//                           This should be deprecated from Trafodion R2.1
//   HIVE_MAX_STRING_LENGTH_IN_BYTES: Hive "string" data type gets converted
//                           into a VARCHAR with this length
//   HIVE_MIN_BYTES_PER_ESP_PARTITION: Make one ESP for this many bytes
//   HIVE_NUM_ESPS_PER_DATANODE: Equivalent of MAX_ESPS_PER_CPU_PER_OP.
//                           Note that this is really per SeaQuest node
DD_____(HIVE_CATALOG, ""),
DDkwd__(HIVE_DATA_MOD_CHECK, "ON"),
DDkwd__(HIVE_DEFAULT_CHARSET, (char *)SQLCHARSETSTRING_UTF8),
DD_____(HIVE_DEFAULT_SCHEMA, "HIVE"),
DD_____(HIVE_FILE_CHARSET, ""),
DD_____(HIVE_FILE_NAME, "/hive/tpcds/customer/customer.dat" ),
DD_____(HIVE_HDFS_STATS_LOG_FILE, ""),
DDui___(HIVE_INSERT_ERROR_MODE, "1"),
DDint__(HIVE_LIB_HDFS_PORT_OVERRIDE, "-1"),
DDint__(HIVE_LOCALITY_BALANCE_LEVEL, "0"),
DDui___(HIVE_MAX_ESPS, "9999"),
DDui___(HIVE_MAX_STRING_LENGTH, "32000"),
DDui___(HIVE_MAX_STRING_LENGTH_IN_BYTES, "32000"),
DDkwd__(HIVE_METADATA_JAVA_ACCESS, "ON"),
DDint__(HIVE_METADATA_REFRESH_INTERVAL, "0"),
DDflt0_(HIVE_MIN_BYTES_PER_ESP_PARTITION, "67108864"),
DDui___(HIVE_NUM_ESPS_PER_DATANODE, "2"),
DDpct__(HIVE_NUM_ESPS_ROUND_DEVIATION, "34"),
DDint__(HIVE_SCAN_SPECIAL_MODE, "0"),
DDkwd__(HIVE_SORT_HDFS_HOSTS, "ON"),
DDkwd__(HIVE_USE_EXT_TABLE_ATTRS, "ON"),
DD_____(HIVE_USE_FAKE_SQ_NODE_NAMES, "" ),
DDkwd__(HIVE_USE_FAKE_TABLE_DESC, "OFF"),
DDkwd__(HIVE_USE_HASH2_AS_PARTFUNCION, "ON"),
// -------------------------------------------------------------------------
DDui2__(HJ_BUFFER_SIZE, "32"),
DDflt0_(HJ_CPUCOST_INITIALIZE, "1."),
DDui1__(HJ_INITIAL_BUCKETS_PER_CLUSTER, "4."),
DDkwd__(HJ_NEW_MCSB_PLAN, "OFF"),
DDint__(HJ_SCAN_TO_NJ_PROBE_SPEED_RATIO, "2000"),
DDkwd__(HJ_TYPE, "HYBRID"),
DD_____(HP_ROUTINES_SCHEMA, "NEO.HP_ROUTINES"), // Must be in form <cat>.<sch>
DDkwd__(HQC_CONVDOIT_DISABLE_NUMERIC_CHECK, "OFF"),
DDkwd__(HQC_LOG, "OFF"),
DD_____(HQC_LOG_FILE, ""),
DDui1_10(HQC_MAX_VALUES_PER_KEY, "5"),
DDkwd__(HYBRID_QUERY_CACHE, "ON"),
DDkwd__(IF_LOCKED, "WAIT"),
// ignore_duplicate_keys is no longer valid. It is still
// here as a dummy for compatibility with existing scripts.
DDkwd__(IGNORE_DUPLICATE_KEYS, "SYSTEM"),
// In mode_special_1, duplicate rows are ignored when inserting a row into a
// base table which has a user-defined primary key. If this default is set
// to OFF in mode_special_1, then duplicate rows are not ignored.
//
// If not in mode_special_1, and this default is ON, then duplicate rows
// are ignored.
DDkwd__(IGNORE_DUPLICATE_ROWS, "SYSTEM"),
DDkwd__(IMPLICIT_DATETIME_INTERVAL_HOSTVAR_CONVERSION, "FALSE"),
DDkwd__(IMPLICIT_HOSTVAR_CONVERSION, "FALSE"),
// threshold for the number of rows inserted into a volatile/temp
// table which will cause an automatic update stats.
// -1 indicates do not upd stats. 0 indicates always upd stats.
DDint__(IMPLICIT_UPD_STATS_THRESHOLD, "-1"), //"10000"),
DDkwd__(INCORPORATE_SKEW_IN_COSTING, "ON"),
DDkwd__(INDEX_ELIMINATION_LEVEL, "AGGRESSIVE"),
DDui1__(INDEX_ELIMINATION_THRESHOLD, "50"),
SDDkwd__(INFER_CHARSET, "OFF"),
// UDF initial row cost CQDs
DDui___(INITIAL_UDF_CPU_COST, "100"),
DDui___(INITIAL_UDF_IO_COST, "1"),
DDui___(INITIAL_UDF_MSG_COST, "2"),
DDkwd__(INPUT_CHARSET, (char *)SQLCHARSETSTRING_ISO88591), // SQLCHARSETSTRING_UTF8
XDDkwd__(INSERT_VSBB, "SYSTEM"),
//10-040621-7139-begin
// This CQD will allow the user to force the compiler to
// choose an interactive access path, i.e., prefer an access path with an
// index in it. If such a path is not found, whichever access path is
// available is chosen.
DDkwd__(INTERACTIVE_ACCESS, "OFF"),
//10-040621-7139-end
DDkwd__(IN_MEMORY_OBJECT_DEFN, "OFF"),
DDflte_(IO_SEEKS_INORDER_FACTOR, "0.10"),
// History:
// 3/11/99  Changed to zero because in large tables the read-ahead
//          seems negligible (and/or hard to simulate)
// Before 3/11/99: 0.58
DDflt0_(IO_TRANSFER_COST_PREFETCH_MISSES_FRACTION, "0."),
XDDkwd__(ISOLATION_LEVEL, "READ_COMMITTED"),
XDDkwd__(ISOLATION_LEVEL_FOR_UPDATES, "NONE"),
SDDkwd__(ISO_MAPPING, (char *)SQLCHARSETSTRING_ISO88591),
DDkwd__(IS_DB_TRANSPORTER, "OFF"),
DDkwd__(IS_SQLCI, "FALSE"),
DDkwd__(IUD_NONAUDITED_INDEX_MAINT, "OFF"),
DDkwd__(JDBC_PROCESS, "FALSE"),
// Force the join order given by the user
XDDkwd__(JOIN_ORDER_BY_USER, "OFF"),
DDkwd__(KEYLESS_NESTED_JOINS, "OFF"),
XDDkwd__(LAST0_MODE, "OFF"),
DDansi_(LDAP_USERNAME, ""),
// Disallow/Allow left joins in MultiJoin framework
DDkwd__(LEFT_JOINS_SPOIL_JBB, "OFF"),
DDkwd__(LIMIT_HBASE_SCAN_DOP, "OFF"),
// If this default is set to ON, then the max precision of a numeric
// expression (arithmetic, aggregate) is limited to MAX_NUMERIC_PRECISION
// (= 18). If this is set to OFF, the default value, then the max precision
// is computed based on the operands and the operation, which could make the
// result a software datatype (BIGNUM). Software datatypes give better
// precision but degraded performance.
SDDkwd__(LIMIT_MAX_NUMERIC_PRECISION, "SYSTEM"), // Size in bytes used to perform garbage collection to lob data file // default size is 5GB . Change to adjust disk usage. DDint__(LOB_GC_LIMIT_SIZE, "5000"), DDint__(LOB_HDFS_PORT, "0"), DD_____(LOB_HDFS_SERVER, "default"), // Size of memoryin bytes used to perform I/O to lob data file // default size is 512MB . Change to adjust memory usage. DDint__(LOB_MAX_CHUNK_MEM_SIZE, "512"), // default size is 10 G (10000 M) DDint__(LOB_MAX_SIZE, "10000"), // default size is 32000. Change this to extract more data into memory. DDui___(LOB_OUTPUT_SIZE, "32000"), DD_____(LOB_STORAGE_FILE_DIR, "/lobs"), // storage types defined in exp/ExpLOBenum.h. // Default is hdfs_file (value = 1) DDint__(LOB_STORAGE_TYPE, "2"), //New default size for buffer size for local node DDui2__(LOCAL_MESSAGE_BUFFER_SIZE, "50"), DDansi_(MAINTAIN_CATALOG, "NEO"), // Set the maintain control table timeout to 5 minutes DDint__(MAINTAIN_CONTROL_TABLE_TIMEOUT, "30000"), DDint__(MAINTAIN_REORG_PRIORITY, "-1"), DDint__(MAINTAIN_REORG_PRIORITY_DELTA, "0"), DDint__(MAINTAIN_REORG_RATE, "40"), DDint__(MAINTAIN_REORG_SLACK, "0"), DDint__(MAINTAIN_UPD_STATS_SAMPLE, "-1"), DDkwd__(MARIAQUEST_PROCESS, "OFF"), DDSint__(MASTER_PRIORITY, "0"), DDSint__(MASTER_PRIORITY_DELTA, "0"), DDint__(MATCH_CONSTANTS_OF_EQUALITY_PREDICATES, "2"), DDui1__(MAX_ACCESS_NODES_PER_ESP, "1024"), // this is the default length of a param which is typed as a VARCHAR. DDui2__(MAX_CHAR_PARAM_DEFAULT_SIZE, "32"), DDint__(MAX_DEPTH_TO_CHECK_FOR_CYCLIC_PLAN, "1"), // default value of maximum dp2 groups for a hash-groupby DDui1__(MAX_DP2_HASHBY_GROUPS, "1000"), // // The max number of ESPs per cpu for a given operator. // i.e. this number times the number of available CPUs is "max pipelines". // // On Linux, "CPU" means cores. // DDflt__(MAX_ESPS_PER_CPU_PER_OP, "0.5"), DDui1__(MAX_EXPRS_USED_FOR_CONST_FOLDING, "1000"), // used in hash groupby costing in esp/master DDui1__(MAX_HEADER_ENTREIS_PER_HASH_TABLE, "250000"), DDui1__(MAX_LONG_VARCHAR_DEFAULT_SIZE, "2000"), DDui1__(MAX_LONG_WVARCHAR_DEFAULT_SIZE, "2000"), DD18_128(MAX_NUMERIC_PRECISION_ALLOWED, "128"), // The max number of vertical partitions for optimization to be done under // a VPJoin. DDui___(MAX_NUM_VERT_PARTS_FOR_OPT, "20"), DDui1__(MAX_ROWS_LOCKED_FOR_STABLE_ACCESS, "1"), // The max number of skewed values detected - skew buster DDui1__(MAX_SKEW_VALUES_DETECTED, "10000"), // multi-column skew inner table broadcast threashold in bytes (=1 MB) DDui___(MC_SKEW_INNER_BROADCAST_THRESHOLD, "1000000"), // multi-column skew sensitivity threshold // // For new MCSB (that is, we utilize MC skews directly), // apply the MC skew buster when // frequency of MC skews > MC_SKEW_SENSITIVITY_THRESHOLD / count_of_cpus // // For old MCSB (that is, we guess MC skews from SC skews), // apply the MC skew buster when // SFa,b... * countOfPipeline > MC_SKEW_SENSITIVITY_THRESHOLD // SFa,b ... is the skew factor for multi column a,b,... // XDDflt__(MC_SKEW_SENSITIVITY_THRESHOLD, "0.1"), DDui___(MDAM_APPLY_RESTRICTION_CHECK, "2"), DDflt0_(MDAM_CPUCOST_NET_OVH, "2000."), // The cost that takes to build the mdam network per predicate: // (we assume that the cost to build the mdam network is a linear function // of the key predicates) DDflt0_(MDAM_CPUCOST_NET_PER_PRED, ".5"), // controls the max. number of seek positions under which MDAM will be // allowed. Set it to 0 turns off the feature. 
XDDui___(MDAM_NO_STATS_POSITIONS_THRESHOLD, "10"),
// MDAM_SCAN_METHOD ON means MDAM is enabled,
// OFF means MDAM is disabled. MDAM is enabled by default.
// externalized 06/21/01 RV
// mdam off on open source at this point
XDDkwd__(MDAM_SCAN_METHOD, "ON"),
DDflt0_(MDAM_SELECTION_DEFAULT, "0.5"),
DDflt0_(MDAM_TOTAL_UEC_CHECK_MIN_RC_THRESHOLD, "10000"),
DDflt0_(MDAM_TOTAL_UEC_CHECK_UEC_THRESHOLD, "0.2"),
DDkwd__(MDAM_TRACING, "OFF"),
// controls the max. number of probes at which MDAM under an NJ plan will be
// generated. Setting it to 0 turns off the feature.
XDDui___(MDAM_UNDER_NJ_PROBES_THRESHOLD, "0"),
// controls the amount of penalty for CPU resource required that is
// beyond the value specified by MDOP_CPUS_SOFT_LIMIT. The number of extra CPUs
// actually allocated is computed as the original value divided by the CQD.
// If the CQD is set to 1 (default), then there is no penalty.
DDflt1_(MDOP_CPUS_PENALTY, "70"),
// specify the limit beyond which the number of CPUs will be limited.
DDui1__(MDOP_CPUS_SOFT_LIMIT, "64"),
// controls the amount of penalty for CPU resource per memory unit
// required that is beyond the value specified by MDOP_CPUS_SOFT_LIMIT.
// The number of extra CPUs actually allocated is computed as the
// original value divided by the CQD.
DDflt1_(MDOP_MEMORY_PENALTY, "70"),
// CQDs to test/enforce heap memory upper limits
// values are in KB
DDui___(MEMORY_LIMIT_CMPCTXT_UPPER_KB, "0"),
DDui___(MEMORY_LIMIT_CMPSTMT_UPPER_KB, "0"),
DDui___(MEMORY_LIMIT_HISTCACHE_UPPER_KB, "0"),
DDui___(MEMORY_LIMIT_NATABLECACHE_UPPER_KB, "0"),
DDui___(MEMORY_LIMIT_QCACHE_UPPER_KB, "0"),
// SQL/MX Compiler/Optimizer Memory Monitor.
DDkwd__(MEMORY_MONITOR, "OFF"),
DDui1__(MEMORY_MONITOR_AFTER_TASKS, "30000"),
DDkwd__(MEMORY_MONITOR_IN_DETAIL, "OFF"),
DD_____(MEMORY_MONITOR_LOGFILE, "NONE"),
DDkwd__(MEMORY_MONITOR_LOG_INSTANTLY, "OFF"),
DDui1__(MEMORY_MONITOR_TASK_INTERVAL, "5000"),
// Hash join currently uses 20 Mb before it overflows; use this
// as the limit.
DDui1__(MEMORY_UNITS_SIZE, "20480"),
// amount of memory available per CPU for any query
SDDflte_(MEMORY_UNIT_ESP, "300"),
DDflt1_(MEMORY_USAGE_NICE_CONTEXT_FACTOR, "1"),
DDflt1_(MEMORY_USAGE_OPT_PASS_FACTOR, "1.5"),
DDui1__(MEMORY_USAGE_SAFETY_NET, "500"),
// MERGE_JOINS ON means do MERGE_JOINS
XDDkwd__(MERGE_JOINS, "ON"),
DDkwd__(MERGE_JOIN_ACCEPT_MULTIPLE_NJ_PROBES, "ON"),
DDkwd__(MERGE_JOIN_CONTROL, "OFF"),
DDkwd__(MERGE_JOIN_WITH_POSSIBLE_DEADLOCK, "OFF"),
// controls whether merge/upsert is supported on a table with a unique index
DDkwd__(MERGE_WITH_UNIQUE_INDEX, "ON"),
SDDui___(METADATA_CACHE_SIZE, "20"),
DDkwd__(METADATA_STABLE_ACCESS, "OFF"),
//-------------------------------------------------------------------
// Minimum ESP parallelism. If the user does not specify this value
// (default value 0 does not change) then the number of segments
// (totalNumCPUs/16, where totalNumCPUs=gpClusterInfo->numOfSMPs())
// will be used as the value of minimum ESP parallelism. If the user sets
// this value it should be an integer between 1 and totalNumCPUs.
In // this case actual value of minimum ESP parallelism will be // min(CDQ value, MDOP), where MDOP (maximum degree of parallelism) // is defined by adaptive segmentation //------------------------------------------------------------------- DDui___(MINIMUM_ESP_PARALLELISM, "0"), DDui1__(MIN_LONG_VARCHAR_DEFAULT_SIZE, "1"), DDui1__(MIN_LONG_WVARCHAR_DEFAULT_SIZE, "1"), DDkwd__(MIN_MAX_OPTIMIZATION, "ON"), DDpct__(MJ_BMO_QUOTA_PERCENT, "0"), DDflt0_(MJ_CPUCOST_ALLOCATE_LIST, ".05"), DDflt0_(MJ_CPUCOST_CLEAR_LIST, ".01"), DDflt0_(MJ_CPUCOST_GET_NEXT_ROW_FROM_LIST, ".01"), // calibrated 01/16/98: // 01/13/98 40000., this did not work with small tables // Before 01/13/98: 0.5 DDflt0_(MJ_CPUCOST_INITIALIZE, "1."), // Before 03/12/98: 0.4 // Before 01/13/98: 0.01 DDflt0_(MJ_CPUCOST_INSERT_ROW_TO_LIST, ".0001"), DDflt0_(MJ_CPUCOST_REWIND_LIST, ".01"), DDflte_(MJ_LIST_NODE_SIZE, ".01"), DDkwd__(MJ_OVERFLOW, "ON"), DDkwd__(MODE_SEABASE, "ON"), DDkwd__(MODE_SEAHIVE, "ON"), SDDkwd__(MODE_SPECIAL_1, "OFF"), SDDkwd__(MODE_SPECIAL_2, "OFF"), // enable special features in R2.93 DDkwd__(MODE_SPECIAL_3, "OFF"), DDkwd__(MODE_SPECIAL_4, "OFF"), DDkwd__(MODE_SPECIAL_5, "OFF"), DDnsklo(MP_CATALOG, "$SYSTEM.SQL"), DDnsksv(MP_SUBVOLUME, "SUBVOL"), DDnsksy(MP_SYSTEM, ""), DDnskv_(MP_VOLUME, "$VOL"), DDflt0_(MSCF_CONCURRENCY_IO, "0.10"), DDflt0_(MSCF_CONCURRENCY_MSG, "0.10"), // Tests suggest that RELEASE is about 2.5 times faster than DEBUG // RELEASE is always faster than DEBUG code so this default must be // at least one. DDflt1_(MSCF_DEBUG_TO_RELEASE_MULTIPLIER, "2.5"), // MSCF_ET_CPU units are seconds/thousand of CPU instructions // History: // Before 02/01/99, the speed was calibrated for debug, now its is for // release: 0.00005 DDflte_(MSCF_ET_CPU, "0.000014"), // was 0.00002 12/2k // MSCF_ET_IO_TRANSFER units are seconds/Kb // History // Changed to '0.000455' to reflect new calibration data // Before 03/11/99 "0.000283" DDflte_(MSCF_ET_IO_TRANSFER, "0.00002"), // Assume time to transfer a KB of local message is 5 times // faster than the time to transfer a KB from disk // Units of MSCF_ET_LOCAL_MSG_TRANSFER are seconds/Kb DDflte_(MSCF_ET_LOCAL_MSG_TRANSFER, "0.000046"), // $$$ This should be removed. It is only used by preliminary costing // for the materialize operator, which should not be using it. DDflte_(MSCF_ET_NM_PAGE_FAULTS, "1"), // "?" used? // : for calibration on 04/08/2004 // Seek time will be derived from disk type. // MSCF_ET_NUM_IO_SEEKS units are seconds DDflte_(MSCF_ET_NUM_IO_SEEKS, "0.0038"), // Assume sending a local message takes 1000 cpu instructions DDflte_(MSCF_ET_NUM_LOCAL_MSGS, "0.000125"), // Assume sending a remote message takes 10000 cpu instructions // DDflte_(MSCF_ET_NUM_REMOTE_MSGS, "0.00125"), // Change the number of instructions to encode a remote message to be // the same as the local message DDflte_(MSCF_ET_NUM_REMOTE_MSGS, "0.000125"), // Assume 1MB/second transfer rate for transferring remote message bytes // (Based on 10 Megabit/second Ethernet transfer rate) // MSCF_ET_REMOTE_MSG_TRANSFER units are kb/Sec // DDflte_(MSCF_ET_REMOTE_MSG_TRANSFER, "0.001"), // the remote msg are 10% more costly than the local transfer // but also may depend on the physical link, so externalize it DDflte_(MSCF_ET_REMOTE_MSG_TRANSFER, "0.00005"), // ------------------------------------------------------------------------- // Factors used for estimating overlappability of I/O and messaging used // in the calculation for overlapped addition // Assume 50% overlap for now. 
// ------------------------------------------------------------------------- DDflte_(MSCF_OV_IO, "0.5"), DDflte_(MSCF_OV_IO_TRANSFER, "0.5"), DDflte_(MSCF_OV_LOCAL_MSG_TRANSFER, "0.5"), DDflte_(MSCF_OV_MSG, "0.5"), DDflte_(MSCF_OV_NUM_IO_SEEKS, "0.5"), DDflte_(MSCF_OV_NUM_LOCAL_MSGS, "0.5"), DDflte_(MSCF_OV_NUM_REMOTE_MSGS, "0.5"), DDflte_(MSCF_OV_REMOTE_MSG_TRANSFER, "0.5"), DDui___(MSCF_SYS_DISKS, "16"), // "?" used? DDui___(MSCF_SYS_MEMORY_PER_CPU, "1"), // "?" used? DDui___(MSCF_SYS_TEMP_SPACE_PER_DISK, "50"), // "?" used? DDkwd__(MTD_GENERATE_CC_PREDS, "ON"), DDint__(MTD_MDAM_NJ_UEC_THRESHOLD, "100"), // Allow for the setting of the row count in a long running operation XDDui1__(MULTI_COMMIT_SIZE, "10000"), // try the join order specified in the queries, this will cause the // enumeration of the initial join order specified by the user // among the join orders enumerated // ** This is currently OFF by default ** DDkwd__(MULTI_JOIN_CONSIDER_INITIAL_JOIN_ORDER, "OFF"), // used in JBBSubsetAnalysis::isAStarPattern for finding lowest cost // outer subtree for NJ into fact table. DDflt0_(MULTI_JOIN_PROBE_HASH_TABLE, "0.000001"), // threshold above which a query is considered complex // this only applies to queries that can be rewritten // as Multi Joins DDint__(MULTI_JOIN_QUERY_COMPLEXITY_THRESHOLD, "5120"), // threshold above which a query is considered to do // a lot of work his only applies to queries that can be // rewritten as Multi Joins DDflt__(MULTI_JOIN_QUERY_WORK_THRESHOLD, "0"), SDDint__(MULTI_JOIN_THRESHOLD, "3"), DDint__(MULTI_PASS_JOIN_ELIM_LIMIT, "5"), DDflt0_(MU_CPUCOST_INITIALIZE, ".05"), DDui___(MU_INITIAL_BUFFER_COUNT, "5."), DDflte_(MU_INITIAL_BUFFER_SIZE, "1033.7891"), //-------------------------------------------------------------------------- //++ MV XDDkwd__(MVGROUP_AUTOMATIC_CREATION, "ON"), DDkwd__(MVQR_ALL_JBBS_IN_QD, "OFF"), #ifdef NDEBUG DDkwd__(MVQR_ENABLE_LOGGING, "OFF"), // No logging by default for release #else DDkwd__(MVQR_ENABLE_LOGGING, "ON"), #endif DD_____(MVQR_FILENAME_PREFIX, "/usr/tandem/sqlmx/log"), DDkwd__(MVQR_LOG_QUERY_DESCRIPTORS, "OFF"), DDint__(MVQR_MAX_EXPR_DEPTH, "20"), DDint__(MVQR_MAX_EXPR_SIZE, "100"), DDint__(MVQR_MAX_MV_JOIN_SIZE, "10"), DDkwd__(MVQR_PARAMETERIZE_EQ_PRED, "ON"), DDkwd__(MVQR_PRIVATE_QMS_INIT, "SMD"), DDansi_(MVQR_PUBLISH_TABLE_LOCATION, ""), DDkwd__(MVQR_PUBLISH_TO, "BOTH"), DDansi_(MVQR_REWRITE_CANDIDATES, ""), XDDkwd__(MVQR_REWRITE_ENABLED_OPTION, "OFF"), // @ZX -- change to ON later XDDui0_5(MVQR_REWRITE_LEVEL, "0"), XDDkwd__(MVQR_REWRITE_SINGLE_TABLE_QUERIES, "ON"), DDkwd__(MVQR_USE_EXTRA_HUB_TABLES, "ON"), DDkwd__(MVQR_USE_RI_FOR_EXTRA_HUB_TABLES, "OFF"), DD_____(MVQR_WORKLOAD_ANALYSIS_MV_NAME, ""), XDDMVA__(MV_AGE, "0 MINUTES"), XDDkwd__(MV_ALLOW_SELECT_SYSTEM_ADDED_COLUMNS, "OFF"), DDkwd__(MV_AS_ROW_TRIGGER, "OFF"), DDkwd__(MV_AUTOMATIC_LOGGABLE_COLUMN_MAINTENANCE, "ON"), DDkwd__(MV_DUMP_DEBUG_INFO, "OFF"), DDkwd__(MV_ENABLE_INTERNAL_REFRESH_SHOWPLAN, "OFF"), DDui___(MV_LOG_CLEANUP_SAFETY_FACTOR, "200"), DDui___(MV_LOG_CLEANUP_USE_MULTI_COMMIT, "1"), SDDkwd__(MV_LOG_PUSH_DOWN_DP2_DELETE, "OFF"), // push down mv logging tp dp2 for delete SDDkwd__(MV_LOG_PUSH_DOWN_DP2_INSERT, "OFF"), // push down mv logging tp dp2 for insert SDDkwd__(MV_LOG_PUSH_DOWN_DP2_UPDATE, "ON"), // push down mv logging tp dp2 for update SDDui___(MV_REFRESH_MAX_PARALLELISM, "0"), DDui___(MV_REFRESH_MAX_PIPELINING, "0"), DDint__(MV_REFRESH_MDELTA_MAX_DELTAS_THRESHOLD, "31"), DDint__(MV_REFRESH_MDELTA_MAX_JOIN_SIZE_FOR_SINGLE_PHASE, "3"), 
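// The MSCF_OV_* entries above are the overlap factors referred to in the
// preceding comment ("Assume 50% overlap for now"): they estimate how much
// of each I/O or messaging component can proceed concurrently with other
// work when costs are combined by overlapped addition. The sketch below is
// only a generic illustration of what such a factor means; it is NOT the
// optimizer's actual overlapped-addition formula, and the function name is
// invented for this example.
//
//   #include <algorithm>
//
//   // With overlap factor f in [0,1], only (1 - f) of the smaller component
//   // adds to elapsed time on top of the larger one. f = 0.0 gives a plain
//   // sum, f = 1.0 means complete overlap, and f = 0.5 matches the 50%
//   // assumption used for the defaults above.
//   double overlappedAdd(double t1, double t2, double f)
//   {
//     double longer  = std::max(t1, t2);
//     double shorter = std::min(t1, t2);
//     return longer + (1.0 - f) * shorter;
//   }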
DDint__(MV_REFRESH_MDELTA_MIN_JOIN_SIZE_FOR_SINGLE_PRODUCT_PHASE, "8"), DDint__(MV_REFRESH_MDELTA_PHASE_SIZE_FOR_MID_RANGE, "6"), DDkwd__(MV_TRACE_INCONSISTENCY, "OFF"), DDSint__(MXCMP_PRIORITY, "0"), DDSint__(MXCMP_PRIORITY_DELTA, "0"), DDkwd__(NAMETYPE, "ANSI"), DDkwd__(NAR_DEPOBJ_ENABLE, "ON"), DDkwd__(NAR_DEPOBJ_ENABLE2, "ON"), // NATIONAL_CHARSET reuses the "kwd" logic here, w/o having to add any // DF_ token constants (this can be considered either clever or kludgy coding). DDkwd__(NATIONAL_CHARSET, (char *)SQLCHARSETSTRING_UNICODE), // These CQDs are reserved for NCM. These are mostly used for // internal testing, turning on/off features for debugging, and for tuning. // In normal situations, these will not be externalized in keeping // with the very few CQDs philosophy of NCM. // These are applicable only in conjunction with SIMPLE_COST_MODEL 'on'. DDflt__(NCM_CACHE_SIZE_IN_BLOCKS, "52"), DDflt__(NCM_COSTLIMIT_FACTOR, "0.05"), //change to 0.05 DDint__(NCM_ESP_FIXUP_WEIGHT, "300"), DDkwd__(NCM_ESP_STARTUP_FIX, "ON"), DDflt__(NCM_EXCH_MERGE_FACTOR, "0.10"), // change to 0.10 DDkwd__(NCM_EXCH_NDCS_FIX, "ON"), // change to ON DDkwd__(NCM_HBASE_COSTING, "ON"), // change to ON DDkwd__(NCM_HGB_OVERFLOW_COSTING, "ON"), DDkwd__(NCM_HJ_OVERFLOW_COSTING, "ON"), DDflt__(NCM_IND_JOIN_COST_ADJ_FACTOR, "1.0"), DDflt__(NCM_IND_JOIN_SELECTIVITY, "1.0"), DDflt__(NCM_IND_SCAN_COST_ADJ_FACTOR, "1.0"), DDflt__(NCM_IND_SCAN_SELECTIVITY, "1.0"), DDflt__(NCM_MAP_CPU_FACTOR, "4.0"), DDflt__(NCM_MAP_MSG_FACTOR, "4.0"), DDflt__(NCM_MAP_RANDIO_FACTOR, "4.0"), DDflt__(NCM_MAP_SEQIO_FACTOR, "4.0"), DDflt__(NCM_MDAM_COST_ADJ_FACTOR, "1.0"), DDflt__(NCM_MJ_TO_HJ_FACTOR, "0.6"), DDflt__(NCM_NJ_PC_THRESHOLD, "1.0"), DDflt0_(NCM_NJ_PROBES_MAXCARD_FACTOR, "10000"), DDkwd__(NCM_NJ_SEQIO_FIX, "ON"), // change to ON DDint__(NCM_NUM_SORT_RUNS, "4"), DDflt__(NCM_OLTP_ET_THRESHOLD, "60.0"), DDflt__(NCM_PAR_ADJ_FACTOR, "0.10"), DDkwd__(NCM_PAR_GRPBY_ADJ, "ON"), DDkwd__(NCM_PRINT_ROWSIZE, "OFF"), DDflt__(NCM_RAND_IO_ROWSIZE_FACTOR, "0"), DDflt__(NCM_RAND_IO_WEIGHT, "3258"), DDflt__(NCM_SEQ_IO_ROWSIZE_FACTOR, "0"), DDflt__(NCM_SEQ_IO_WEIGHT, "543"), DDflt__(NCM_SERIAL_NJ_FACTOR, "2"), DDflt__(NCM_SGB_TO_HGB_FACTOR, "0.8"), DDkwd__(NCM_SKEW_COST_ADJ_FOR_PROBES, "OFF"), DDkwd__(NCM_SORT_OVERFLOW_COSTING, "ON"), DDflt__(NCM_TUPLES_ROWSIZE_FACTOR, "0.5"), DDflt__(NCM_UDR_NANOSEC_FACTOR, "0.01"), DDkwd__(NCM_USE_HBASE_REGIONS, "ON"), // NESTED_JOINS ON means do NESTED_JOINS XDDkwd__(NESTED_JOINS, "ON"), // max. number of ESPs that will deal with skews for OCR // 0 means to turn off the feature DDintN1__(NESTED_JOINS_ANTISKEW_ESPS , "16"), DDkwd__(NESTED_JOINS_CHECK_LEADING_KEY_SKEW, "OFF"), DDkwd__(NESTED_JOINS_FULL_INNER_KEY, "OFF"), DDkwd__(NESTED_JOINS_KEYLESS_INNERJOINS, "ON"), DDui1__(NESTED_JOINS_LEADING_KEY_SKEW_THRESHOLD, "15"), DDkwd__(NESTED_JOINS_NO_NSQUARE_OPENS, "ON"), DDkwd__(NESTED_JOINS_OCR_GROUPING, "OFF"), // 128X32 being the default threshold for OCR. // 128 partitions per table and 32 ESPs per NJ operator SDDint__(NESTED_JOINS_OCR_MAXOPEN_THRESHOLD, "4096"), // PLAN0 is solely controlled by OCR. If this CQD is off, then // PLAN0 is off unconditionally. This CQD is used by OCR unit test. DDkwd__(NESTED_JOINS_PLAN0, "ON"), // try the explicit sort plan when plan2 produces a non-sort plan DDkwd__(NESTED_JOINS_PLAN3_TRY_SORT, "ON"), // Enable caching for eligible nested joins - see NestedJoin::preCodeGen. 
DDkwd__(NESTED_JOIN_CACHE, "ON"), // Enable pulling up of predicates into probe cache DDkwd__(NESTED_JOIN_CACHE_PREDS, "ON"), // Nested Join Heuristic DDkwd__(NESTED_JOIN_CONTROL, "ON"), // Allow nested join for cross products DDkwd__(NESTED_JOIN_FOR_CROSS_PRODUCTS, "ON"), DDkwd__(NEW_MDAM, "ON"), DDkwd__(NEW_OPT_DRIVER, "ON"), // Ansi name of the next DEFAULTS table to read in. // Contains blanks, or the name of a DEFAULTS table to read values from next, // after reading all values from this DEFAULTS table. The name may contain // format strings of '%d' and '%u', which are replaced with the domain name // and user name, respectively, of the current user. The name may begin with // '$', in which it is replaced by its value as a SYSTEM environment variable. // This value in turn may contain '%d' and '%u' formats. When these // replacements are complete, the resulting name is qualified by the current // default catalog and schema, if necessary, and the resulting three-part ANSI // table's default values are read in. This table may contain another // NEXT_DEFAULTS_TABLE value, and different default CATALOG and // SCHEMA values to qualify the resulting table name, and so on, allowing a // chain of tables to be read; combined with the format and environment // variable replacements, this allows per-domain, per-system, and per-user // customization of SQL/MX default values. DDansi_(NEXT_DEFAULTS_TABLE, ""), DDui1__(NEXT_VALUE_FOR_BUFFER_SIZE, "10240"), DDui1__(NEXT_VALUE_FOR_NUM_BUFFERS, "3"), DDui1__(NEXT_VALUE_FOR_SIZE_DOWN, "4"), DDui1__(NEXT_VALUE_FOR_SIZE_UP, "2048"), DDflt0_(NJ_CPUCOST_INITIALIZE, ".1"), DDflt0_(NJ_CPUCOST_PASS_ROW, ".02"), DDflte_(NJ_INC_AFTERLIMIT, "0.0055"), DDflte_(NJ_INC_MOVEROWS, "0.0015"), DDflte_(NJ_INC_UPTOLIMIT, "0.0225"), DDui___(NJ_INITIAL_BUFFER_COUNT, "5"), DDui1__(NJ_INITIAL_BUFFER_SIZE, "5"), DDui1__(NJ_MAX_SEEK_DISTANCE, "5000"), // UDF costing CQDs for processing a steady state row DDui___(NORMAL_UDF_CPU_COST, "100"), DDui___(NORMAL_UDF_IO_COST, "0"), DDui___(NORMAL_UDF_MSG_COST, "2"), XDDui30_32000(NOT_ATOMIC_FAILURE_LIMIT, "32000"), //NOT IN ANSI NULL semantics rule DDkwd__(NOT_IN_ANSI_NULL_SEMANTICS, "ON"), //NOT IN optimization DDkwd__(NOT_IN_OPTIMIZATION, "ON"), //NOT IN outer column optimization DDkwd__(NOT_IN_OUTER_OPTIMIZATION, "ON"), // NOT IN skew buster optimization DDkwd__(NOT_IN_SKEW_BUSTER_OPTIMIZATION, "ON"), DDkwd__(NOT_NULL_CONSTRAINT_DROPPABLE_OPTION, "OFF"), DDkwd__(NOWAITED_FIXUP_MESSAGE_TO_DP2, "OFF"), // NSK DEBUG defaults DDansi_(NSK_DBG, "OFF"), DDansi_(NSK_DBG_COMPILE_INSTANCE, "USER"), DDkwd__(NSK_DBG_GENERIC, "OFF"), DDansi_(NSK_DBG_LOG_FILE, ""), DDkwd__(NSK_DBG_MJRULES_TRACKING, "OFF"), DDkwd__(NSK_DBG_PRINT_CHAR_INPUT, "OFF"), DDkwd__(NSK_DBG_PRINT_CHAR_OUTPUT, "OFF"), DDkwd__(NSK_DBG_PRINT_CONSTRAINT, "OFF"), DDkwd__(NSK_DBG_PRINT_CONTEXT, "OFF"), DDkwd__(NSK_DBG_PRINT_CONTEXT_POINTER, "OFF"), DDkwd__(NSK_DBG_PRINT_COST, "OFF"), DDkwd__(NSK_DBG_PRINT_COST_LIMIT, "OFF"), DDkwd__(NSK_DBG_PRINT_INDEX_ELIMINATION, "OFF"), DDkwd__(NSK_DBG_PRINT_ITEM_EXPR, "OFF"), DDkwd__(NSK_DBG_PRINT_LOG_PROP, "OFF"), DDkwd__(NSK_DBG_PRINT_PHYS_PROP, "OFF"), DDkwd__(NSK_DBG_PRINT_TASK, "OFF"), DDkwd__(NSK_DBG_PRINT_TASK_STACK, "OFF"), DDkwd__(NSK_DBG_QUERY_LOGGING_ONLY, "OFF"), DDansi_(NSK_DBG_QUERY_PREFIX, ""), DDkwd__(NSK_DBG_SHOW_PASS1_PLAN, "OFF"), DDkwd__(NSK_DBG_SHOW_PASS2_PLAN, "OFF"), DDkwd__(NSK_DBG_SHOW_PLAN_LOG, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_ANALYSIS, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_BINDING, "OFF"), 
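// Illustration for the NEXT_DEFAULTS_TABLE comment above: a minimal,
// self-contained sketch of the name substitution it describes (a leading '$'
// expanded from a SYSTEM environment variable, then '%d' and '%u' replaced
// with the domain name and user name). The function and helper names are
// invented for this example; the compiler's real code is not shown here, and
// qualification with the current default catalog and schema is omitted.
//
//   #include <cstdlib>
//   #include <string>
//
//   static void replaceAll(std::string &s, const std::string &from,
//                          const std::string &to)
//   {
//     for (std::string::size_type p = s.find(from); p != std::string::npos;
//          p = s.find(from, p + to.size()))
//       s.replace(p, from.size(), to);
//   }
//
//   std::string resolveNextDefaultsTableName(std::string name,
//                                            const std::string &domainName,
//                                            const std::string &userName)
//   {
//     // '$NAME...' is replaced by the value of environment variable NAME;
//     // that value may itself contain '%d' and '%u' formats.
//     if (!name.empty() && name[0] == '$')
//       {
//         const char *env = std::getenv(name.c_str() + 1);
//         name = env ? env : "";
//       }
//     replaceAll(name, "%d", domainName);   // '%d' => domain name
//     replaceAll(name, "%u", userName);     // '%u' => user name
//     return name;                          // then qualify with catalog/schema
//   }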
DDkwd__(NSK_DBG_SHOW_TREE_AFTER_CODEGEN, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_NORMALIZATION, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_PARSING, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_PRE_CODEGEN, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_SEMANTIC_QUERY_OPTIMIZATION, "OFF"), DDkwd__(NSK_DBG_SHOW_TREE_AFTER_TRANSFORMATION, "OFF"), DDkwd__(NSK_DBG_STRATEGIZER, "OFF"), DDflt0_(NUMBER_OF_PARTITIONS_DEVIATION, "0.25"), DDui1__(NUMBER_OF_ROWS_PARALLEL_THRESHOLD, "5000"), DDui1__(NUMBER_OF_USERS, "1"), DDui1__(NUM_OF_BLOCKS_PER_ACCESS, "SYSTEM"), DDflt0_(NUM_OF_PARTS_DEVIATION_TYPE2_JOINS, "SYSTEM"), DDkwd__(NVCI_PROCESS, "FALSE"), DDflt0_(OCB_COST_ADJSTFCTR, "0.996"), DDui___(OCR_FOR_SIDETREE_INSERT, "1"), DDkwd__(ODBC_METADATA_PROCESS, "FALSE"), DDkwd__(ODBC_PROCESS, "FALSE"), DDflte_(OHJ_BMO_REUSE_SORTED_BMOFACTOR_LIMIT, "3.0"), DDflte_(OHJ_BMO_REUSE_SORTED_UECRATIO_UPPERLIMIT, "0.7"), DDflte_(OHJ_BMO_REUSE_UNSORTED_UECRATIO_UPPERLIMIT, "0.01"), DDflte_(OHJ_VBMOLIMIT, "5.0"), DDui1__(OLAP_BUFFER_SIZE, "262144"), // Do not alter (goes to DP2) DDkwd__(OLAP_CAN_INVERSE_ORDER, "ON"), DDui1__(OLAP_MAX_FIXED_WINDOW_EXTRA_BUFFERS, "2"), DDui1__(OLAP_MAX_FIXED_WINDOW_FRAME, "50000"), DDui1__(OLAP_MAX_NUMBER_OF_BUFFERS, "100000"), DDui___(OLAP_MAX_ROWS_IN_OLAP_BUFFER, "0"), //aplies for fixed window-- number of additional oplap buffers //to allocate on top of the minumum numbers DDkwd__(OLD_HASH2_GROUPING, "FALSE"), DDkwd__(OLT_QUERY_OPT, "ON"), DDkwd__(OLT_QUERY_OPT_LEAN, "OFF"), // ----------------------------------------------------------------------- // Optimizer pruning heuristics. // ----------------------------------------------------------------------- DDkwd__(OPH_EXITHJCRCONTCHILOOP, "ON"), DDkwd__(OPH_EXITMJCRCONTCHILOOP, "ON"), DDkwd__(OPH_EXITNJCRCONTCHILOOP, "OFF"), DDkwd__(OPH_PRUNE_WHEN_COST_LIMIT_EXCEEDED, "OFF"), DDflt__(OPH_PRUNING_COMPLEXITY_THRESHOLD, "10.0"), DDflt__(OPH_PRUNING_PASS2_COST_LIMIT, "-1.0"), DDkwd__(OPH_REDUCE_COST_LIMIT_FROM_CANDIDATES, "OFF"), DDkwd__(OPH_REDUCE_COST_LIMIT_FROM_PASS1_SOLUTION, "ON"), DDkwd__(OPH_REUSE_FAILED_PLAN, "ON"), DDkwd__(OPH_REUSE_OPERATOR_COST, "OFF"), DDkwd__(OPH_SKIP_OGT_FOR_SHARED_GC_FAILED_CL, "OFF"), DDkwd__(OPH_USE_CACHED_ELAPSED_TIME, "ON"), DDkwd__(OPH_USE_CANDIDATE_PLANS, "OFF"), DDkwd__(OPH_USE_COMPARE_COST_THRESHOLD, "ON"), DDkwd__(OPH_USE_CONSERVATIVE_COST_LIMIT, "OFF"), DDkwd__(OPH_USE_ENFORCER_PLAN_PROMOTION, "OFF"), DDkwd__(OPH_USE_FAILED_PLAN_COST, "ON"), DDkwd__(OPH_USE_NICE_CONTEXT, "OFF"), DDkwd__(OPH_USE_ORDERED_MJ_PRED, "OFF"), DDkwd__(OPH_USE_PWS_FLAG_FOR_CONTEXT, "OFF"), XDDui___(OPI_ERROR73_RETRIES, "10"), DDflt__(OPTIMIZATION_BUDGET_FACTOR, "5000"), DDkwd__(OPTIMIZATION_GOAL, "LASTROW"), XDDkwd__(OPTIMIZATION_LEVEL, "3"), DDpct__(OPTIMIZATION_LEVEL_1_CONSTANT_1, "50"), DDpct__(OPTIMIZATION_LEVEL_1_CONSTANT_2, "0"), DDui1__(OPTIMIZATION_LEVEL_1_IMMUNITY_LIMIT, "5000"), DDui1__(OPTIMIZATION_LEVEL_1_MJENUM_LIMIT, "20"), DDui1__(OPTIMIZATION_LEVEL_1_SAFETY_NET, "30000"), DDflt__(OPTIMIZATION_LEVEL_1_SAFETY_NET_MULTIPLE, "3.0"), DDui1__(OPTIMIZATION_LEVEL_1_THRESHOLD, "1000"), DDui1__(OPTIMIZATION_TASKS_LIMIT, "2000000000"), DDui1__(OPTIMIZATION_TASK_CAP, "30000"), // Optimizer Graceful Termination: // 1=> randomProbabilistic pruning // > 1 pruning based on potential DDui1__(OPTIMIZER_GRACEFUL_TERMINATION, "2"), DDkwd__(OPTIMIZER_HEURISTIC_1, "OFF"), DDkwd__(OPTIMIZER_HEURISTIC_2, "OFF"), DDkwd__(OPTIMIZER_HEURISTIC_3, "OFF"), DDkwd__(OPTIMIZER_HEURISTIC_4, "OFF"), DDkwd__(OPTIMIZER_HEURISTIC_5, "OFF"), // Tells the 
compiler to print costing information DDkwd__(OPTIMIZER_PRINT_COST, "OFF"), // Tells the compiler to issue a warning with its internal counters DDkwd__(OPTIMIZER_PRINT_INTERNAL_COUNTERS, "OFF"), // Pruning is OFF because of bugs, turn to ON when bugs are fixed // (03/03/98) SDDkwd__(OPTIMIZER_PRUNING, "ON"), DDkwd__(OPTIMIZER_PRUNING_FIX_1, "ON"), //change to ON DDkwd__(OPTIMIZER_SYNTH_FUNC_DEPENDENCIES, "ON"), //OPTS_PUSH_DOWN_DAM made external RV 06/21/01 CR 10-010425-2440 DDui___(OPTS_PUSH_DOWN_DAM, "0"), DDkwd__(ORDERED_HASH_JOIN_CONTROL, "ON"), SDDkwd__(OR_OPTIMIZATION, "ON"), DDkwd__(OR_PRED_ADD_BLOCK_TO_IN_LIST, "ON"), DDkwd__(OR_PRED_KEEP_CAST_VC_UCS2, "ON"), // controls the jump table method of evaluating an or pred. in a scan node // 0 => feature is OFF, positive integer denotes max OR pred that will be // processed through a jump table. DDint__(OR_PRED_TO_JUMPTABLE, "2000"), // controls semijoin method of evaluating an or pred. // 0 => feature is OFF, positive number means if pred do not cover key cols // and jump table is not available, then the transformation is done if // inlist is larger than this value. DDint__(OR_PRED_TO_SEMIJOIN, "100"), // Ratio of tablesize (without application of any preds)to probes below // which semijoin trans. is favoured. DDflt0_(OR_PRED_TO_SEMIJOIN_PROBES_MAX_RATIO, "0.001"), // Minimum table size beyond which semijoin trans. is considered DDint__(OR_PRED_TO_SEMIJOIN_TABLE_MIN_SIZE, "10000"), // The Optimizer Simulator (OSIM) CQDs DDkwd__(OSIM_USE_POS, "OFF"), DDint__(OSIM_USE_POS_DISK_SIZE_GB, "0"), DD_____(OSIM_USE_POS_NODE_NAMES, ""), DDui2__(OS_MESSAGE_BUFFER_SIZE, "32"), // if set to "ansi", datetime output is in ansi format. Currently only // used in special_1 mode if the caller needs datetime value in // ansi format (like, during upd stats). DDansi_(OUTPUT_DATE_FORMAT, ""), // Overflow mode for scratch files DDkwd__(OVERFLOW_MODE, "MMAP"), // Sequence generator override identity values DDkwd__(OVERRIDE_GENERATED_IDENTITY_VALUES, "OFF"), // allow users to specify a source schema to be // replaced by a target schema SDDosch_(OVERRIDE_SCHEMA, ""), // Allows users to specify their own SYSKEY value. In other words // the system does not generate one for them. // Prior to this CQD, pm_regenerate_syskey_for_insert was being used // to preserve the syskey. Carrying over these comments from // pm_regenerate_syskey_for_insert // For audited target partition, PM does the copy in multiple transactions // In each transaction PM does a insert/select from the source to the target // partition. The clustering key values from the last row of a transaction // is used as begin key value for the next transaction. If the table // has a syskey then it gets regenerated and last row contains the new // value for the syskey. This obviously causes us to start at a different // place then we intended to start from. The following default when set // to off forces the engine to not regenerate syskey. DDkwd__(OVERRIDE_SYSKEY, "OFF"), DDui___(PARALLEL_ESP_NODEMASK, "0"), // by default all parallelism heuristics are switched ON. DDkwd__(PARALLEL_HEURISTIC_1, "ON"), DDkwd__(PARALLEL_HEURISTIC_2, "ON"), DDkwd__(PARALLEL_HEURISTIC_3, "ON"), DDkwd__(PARALLEL_HEURISTIC_4, "ON"), // If PARALLEL_NUM_ESPS is "SYSTEM", // optimizer will compute the number of ESPs. XDDui1__(PARALLEL_NUM_ESPS, "SYSTEM"), // If PARALLEL_NUM_ESPS is "SYSTEM", // optimizer will compute the number of ESPs to be used for parallel ddl // operations. 
DDui1__(PARALLEL_NUM_ESPS_DDL, "SYSTEM"), // If PARALLEL_NUM_ESPS is "SYSTEM", // optimizer will compute the number of ESPs to be used for parallel purgedata // operation. DDui1__(PARALLEL_NUM_ESPS_PD, "SYSTEM"), // is partial sort applicable; if so adjust sort cost accordingly DDflt0_(PARTIAL_SORT_ADJST_FCTR, "1"), DDint__(PARTITIONING_SCHEME_SHARING, "1"), // The optimal number of partition access nodes for a process. // NOTE: Setting this to anything other than 1 will cause problems // with Cascades plan stealing! Don't do it unless you have to! DDui1__(PARTITION_ACCESS_NODES_PER_ESP, "1"), DD_____(PCODE_DEBUG_LOGDIR, "" ), // Pathname of log directory for PCode work DDint__(PCODE_EXPR_CACHE_CMP_ONLY, "0" ), // PCode Expr Cache compare-only mode DDint__(PCODE_EXPR_CACHE_DEBUG, "0" ), // PCode Expr Cache debug (set to 1 to enable dbg logging) DDint__(PCODE_EXPR_CACHE_ENABLED, "1" ), // PCode Expr Cache Enabled (set to 0 to disable the cache) DD0_10485760(PCODE_EXPR_CACHE_SIZE,"2000000"), // PCode Expr Cache Max Size // Maximum number of PCODE Branch Instructions in an Expr // for which we will attempt PCODE optimizations. // NOTE: Default value reduced to 12000 for Trafodion to avoid stack // overflow in PCODE optimization where recursion is used. DDint__(PCODE_MAX_OPT_BRANCH_CNT, "12000"), // Maximum number of PCODE Instructions in an Expr // for which we will attempt PCODE optimizations. DDint__(PCODE_MAX_OPT_INST_CNT, "50000"), DDint__(PCODE_NE_DBG_LEVEL, "-1"), // Native Expression Debug Level DDint__(PCODE_NE_ENABLED, "1" ), // Native Expressions Enabled DDkwd__(PCODE_NE_IN_SHOWPLAN, "ON"), // Native Expression in Showplan output // This PCODE_NE_LOG_PATH cqd is now obsolete. Use PCODE_DEBUG_LOGDIR instead. // Would delete the following line except that would also mean deleting the // corresponding line in DefaultConstants.h which would change the values for // the following definitions in the same enum. DD_____(PCODE_NE_LOG_PATH, "" ), // Pathname of log file for Native Expression work - OBSOLETE DDint__(PCODE_OPT_FLAGS, "60"), DDkwd__(PCODE_OPT_LEVEL, "MAXIMUM"), DDint__(PHY_MEM_CONTINGENCY_MB, "3072"), DDkwd__(PLAN_STEALING, "ON"), DDui50_4194303(PM_OFFLINE_TRANSACTION_GRANULARITY, "5000"), DDui50_4194303(PM_ONLINE_TRANSACTION_GRANULARITY, "400"), // Not in use anymore. OVERRIDE_SYSKEY is used instead. 
DDkwd__(PM_REGENERATE_SYSKEY_FOR_INSERT, "ON"), // Partition OVerlay Support (POS) options SDDkwd__(POS, "DISK_POOL"), XDDpos__(POS_ABSOLUTE_MAX_TABLE_SIZE, ""), DDkwd__(POS_ALLOW_NON_PK_TABLES, "OFF"), DDui___(POS_CPUS_PER_SEGMENT, "16"), // default to 300 GB DDui___(POS_DEFAULT_LARGEST_DISK_SIZE_GB, "300"), // default to 72GB DDui___(POS_DEFAULT_SMALLEST_DISK_SIZE_GB, "72"), DDdskNS(POS_DISKS_IN_SEGMENT, ""), SDDui___(POS_DISK_POOL, "0"), DD_____(POS_FILE_OPTIONS, ""), SDDdskNS(POS_LOCATIONS, ""), DDkwd__(POS_MAP_HASH_TO_HASH2, "ON"), DDpos__(POS_MAX_EXTENTS, ""), SDDui___(POS_NUM_DISK_POOLS, "0"), DDui___(POS_NUM_OF_PARTNS, "SYSTEM"), SDDint__(POS_NUM_OF_TEMP_TABLE_PARTNS, "SYSTEM"), SDDpos__(POS_PRI_EXT_SIZE, "25"), DDkwd__(POS_RAISE_ERROR, "OFF"), SDDpos__(POS_SEC_EXT_SIZE, ""), SDDpos__(POS_TABLE_SIZE, ""), SDDpct__(POS_TEMP_TABLE_FREESPACE_THRESHOLD_PERCENT, "0"), SDDdskNS(POS_TEMP_TABLE_LOCATIONS, ""), SDDpos__(POS_TEMP_TABLE_SIZE, ""), DDkwd__(POS_TEST_MODE, "OFF"), DDui___(POS_TEST_NUM_NODES, "0"), DDui___(POS_TEST_NUM_VOLUMES_PER_NODE, "0"), // Use info from right child to require order on left child of NJ //PREFERRED_PROBING_ORDER_FOR_NESTED_JOIN made external RV 06/21/01 CR 10-010425-2440 DDkwd__(PREFERRED_PROBING_ORDER_FOR_NESTED_JOIN, "OFF"), DD0_18(PRESERVE_MIN_SCALE, "0"), DDkwd__(PRIMARY_KEY_CONSTRAINT_DROPPABLE_OPTION, "OFF"), DDkwd__(PSHOLD_CLOSE_ON_ROLLBACK, "OFF"), DDkwd__(PSHOLD_UPDATE_BEFORE_FETCH, "OFF"), SDDpsch_(PUBLIC_SCHEMA_NAME, ""), XDDrlis_(PUBLISHING_ROLES, ""), DDkwd__(PURGEDATA_WITH_OFFLINE_TABLE, "OFF"), // Query Invalidation - Debug/Regression test CQDs -- DO NOT externalize these DD_____(QI_PATH, "" ), // Specifies cat.sch.object path for object to have cache entries removed DD0_255(QI_PRIV, "0"), // Note: 0 disables the Debug Mechanism. Set non-zero to kick out cache entries. // Then set back to 0 *before* setting to a non-zero value again. // Do the query analysis phase DDkwd__(QUERY_ANALYSIS, "ON"), // query_cache max should be 200 MB. Set it 0 to turn off query cache //XDD0_200000(QUERY_CACHE, "0"), XDD0_200000(QUERY_CACHE, "16384"), // the initial average plan size (in kbytes) to use for configuring the // number of hash buckets to use for mxcmp's hash table of cached plans DD1_200000(QUERY_CACHE_AVERAGE_PLAN_SIZE, "30"), // literals longer than this are not parameterized DDui___(QUERY_CACHE_MAX_CHAR_LEN, "32000"), // a query with more than QUERY_CACHE_MAX_EXPRS ExprNodes is not cacheable DDint__(QUERY_CACHE_MAX_EXPRS, "1000"), // the largest number of cache entries that an unusually large cache // entry is allowed to displace from mxcmp's cache of query plans DD0_200000(QUERY_CACHE_MAX_VICTIMS, "10"), DDkwd__(QUERY_CACHE_MPALIAS, "OFF"), DD0_255(QUERY_CACHE_REQUIRED_PREFIX_KEYS, "255"), DDkwd__(QUERY_CACHE_RUNTIME, "ON"), SDDflt0_(QUERY_CACHE_SELECTIVITY_TOLERANCE, "0"), // query cache statement pinning is off by default DDkwd__(QUERY_CACHE_STATEMENT_PINNING, "OFF"), DDkwd__(QUERY_CACHE_STATISTICS, "OFF"), DD_____(QUERY_CACHE_STATISTICS_FILE, "qcachsts"), DDkwd__(QUERY_CACHE_TABLENAME, "OFF"), DDkwd__(QUERY_CACHE_USE_CONVDOIT_FOR_BACKPATCH, "ON"), // Limit CPU time a query can use in master or any ESP. Unit is seconds. XDDint__(QUERY_LIMIT_SQL_PROCESS_CPU, "0"), // Extra debugging info for QUERY_LIMIT feature. DDkwd__(QUERY_LIMIT_SQL_PROCESS_CPU_DEBUG, "OFF"), // How many iterations in scheduler subtask list before evaluating limits. DDint__(QUERY_LIMIT_SQL_PROCESS_CPU_DP2_FREQ, "16"), // For X-prod HJ: (# of rows joined * LIMIT) before preempt. 
DDint__(QUERY_LIMIT_SQL_PROCESS_CPU_XPROD, "10000"), // controls various expr optimizations based on bit flags. // see enum QueryOptimizationOptions in DefaultConstants.h DDint__(QUERY_OPTIMIZATION_OPTIONS, "3"), DDkwd__(QUERY_STRATEGIZER, "ON"), DDflt0_(QUERY_STRATEGIZER_2N_COMPLEXITY_FACTOR, "1"), DDflt0_(QUERY_STRATEGIZER_EXHAUSTIVE_COMPLEXITY_FACTOR, "1"), DDflt0_(QUERY_STRATEGIZER_N2_COMPLEXITY_FACTOR, "1"), DDflt0_(QUERY_STRATEGIZER_N3_COMPLEXITY_FACTOR, "1"), DDflt0_(QUERY_STRATEGIZER_N4_COMPLEXITY_FACTOR, "1"), DDflt0_(QUERY_STRATEGIZER_N_COMPLEXITY_FACTOR, "1"), DDkwd__(QUERY_TEMPLATE_CACHE, "ON"), DDkwd__(QUERY_TEXT_CACHE, "SYSTEM"), DDkwd__(R2_HALLOWEEN_SUPPORT, "OFF"), DDkwd__(RANGESPEC_TRANSFORMATION, "ON"), // RangeSpec Transformation CQD. // To be ANSI compliant you would have to set this default to 'FALSE' DDkwd__(READONLY_CURSOR, "TRUE"), // ReadTableDef compares transactional identifiers during endTransaction() processing DDkwd__(READTABLEDEF_TRANSACTION_ASSERT, "OFF"), DDkwd__(READTABLEDEF_TRANSACTION_ENABLE_WARNINGS, "OFF"), DDint__(READTABLEDEF_TRANSACTION_TESTPOINT, "0"), DDflt0_(READ_AHEAD_MAX_BLOCKS, "16.0"), // OFF means Ansi/NIST setting, ON is more similar to the SQL/MP behavior DDkwd__(RECOMPILATION_WARNINGS, "OFF"), // CLI caller to redrive CTAS(create table as) for child query monitoring DDkwd__(REDRIVE_CTAS, "OFF"), // The group by reduction for pushing a partial group by past the // right side of the TSJ must be at least this much. If 0.0, then // pushing it will always be tried. DDflt0_(REDUCTION_TO_PUSH_GB_PAST_TSJ, "0.0000000001"), // This is the code base for the calibration machine. It must be either // "DEBUG" or "RELEASE" // History: // Before 02/01/99: DEBUG DDkwd__(REFERENCE_CODE, "RELEASE"), // This is the frequency of the representative CPU of the base calibration // cluster. // REFERENCE_CPU_FREQUENCY units are MhZ DDflte_(REFERENCE_CPU_FREQUENCY, "199."), // This is the seek time of the representative disk of the base // calibration cluster. // REFERENCE_IO_SEEK_TIME units are seconds DDflte_(REFERENCE_IO_SEEK_TIME, "0.0038"), // This is the sequential transfer rate for the representative // disk of the base calibration cluster. // REFERENCE_IO_SEQ_READ_RATE units are Mb/Sec DDflte_(REFERENCE_IO_SEQ_READ_RATE, "50.0"), // This is the transfer rate for the fast speed connection of // nodes in the base calibration cluster. // REFERENCE_MSG_LOCAL_RATE units are Mb/Sec DDflte_(REFERENCE_MSG_LOCAL_RATE, "10."), // This is the timeper local msg for the fast speed connection of // nodes in the base calibration cluster. // REFERENCE_MSG_LOCAL_TIME units are seconds DDflte_(REFERENCE_MSG_LOCAL_TIME, "0.000125"), // This is the transfer rate for the connection among clusters // in the base calibration cluster (this only applies to NSK) // REFERENCE_MSG_REMOTE_RATE units are Mb/Sec DDflte_(REFERENCE_MSG_REMOTE_RATE, "1."), // This is the time per remote msg for the fast speed connection of // nodes in the base calibration cluster. 
// REFERENCE_MSG_REMOTE_TIME units are seconds DDflte_(REFERENCE_MSG_REMOTE_TIME, "0.00125"), DDkwd__(REF_CONSTRAINT_NO_ACTION_LIKE_RESTRICT, "SYSTEM"), DDkwd__(REMOTE_ESP_ALLOCATION, "SYSTEM"), DDkwd__(REORG_IF_NEEDED, "OFF"), DDkwd__(REORG_VERIFY, "OFF"), DDrlis_(REPLICATE_ALLOW_ROLES, ""), // Determines the compression type to be used with DDL when replicating DDkwd__(REPLICATE_COMPRESSION_TYPE, "SYSTEM"), // Determines if DISK POOL setting should be passed with DDL when replicating DDkwd__(REPLICATE_DISK_POOL, "ON"), // Display a BDR-internally-generated command before executing it DDkwd__(REPLICATE_DISPLAY_INTERNAL_CMD, "OFF"), // Executing commands generated internally by BDR DDkwd__(REPLICATE_EXEC_INTERNAL_CMD, "OFF"), // VERSION of the message from the source system to maintain compatibility // This version should be same as REPL_IO_VERSION_CURR in executor/ExeReplInterface.h // Make changes accordingly in validataorReplIoVersion validator DDrver_(REPLICATE_IO_VERSION, "17"), DDansi_(REPLICATE_MANAGEABILITY_CATALOG, "MANAGEABILITY"), // max num of retries after replicate server(mxbdrdrc) returns an error DDui___(REPLICATE_NUM_RETRIES, "0"), DDansi_(REPLICATE_TEST_TARGET_CATALOG, ""), DDansi_(REPLICATE_TEST_TARGET_MANAGEABILITY_CATALOG, ""), DDkwd__(REPLICATE_WARNINGS, "OFF"), DDkwd__(RETURN_AVG_STREAM_WAIT, "OFF"), DDkwd__(REUSE_BASIC_COST, "ON"), // if set, tables are not closed at the end of a query. This allows // the same open to be reused for the next query which accesses that // table. // If the table is shared opened by multiple openers from the same // process, then the share count is decremented until it reaches 1. // At that time, the last open is preserved so it could be reused. // Tables are closed if user id changes. DDkwd__(REUSE_OPENS, "ON"), // multiplicative factor used to inflate cost of risky operators. // = 1.0 means do not demand an insurance premium from risky operators. // = 1.2 means demand a 20% insurance premium that cost of risky operators // must overcome before they will be chosen over less-risky operators. 
DDflt0_(RISK_PREMIUM_MJ, "1.15"),
XDDflt0_(RISK_PREMIUM_NJ, "1.0"),
XDDflt0_(RISK_PREMIUM_SERIAL, "1.0"),
XDDui___(RISK_PREMIUM_SERIAL_SCALEBACK_MAXCARD_THRESHOLD, "10000"),
DDflt0_(ROBUST_HJ_TO_NJ_FUDGE_FACTOR, "0.0"),
DDflt0_(ROBUST_PAR_GRPBY_EXCHANGE_FCTR, "0.25"),
DDflt0_(ROBUST_PAR_GRPBY_LEAF_FCTR, "0.25"),
// external master CQD that sets following internal CQDs
// robust_query_optimization
//
//                                  MINIMUM  SYSTEM  HIGH  MAXIMUM
//   risk_premium_NJ                    1.0  system   2.5      5.0
//   risk_premium_SERIAL                1.0  system   1.5      2.0
//   partitioning_scheme_sharing          0  system     2        2
//   robust_hj_to_nj_fudge_factor       0.0  system   3.0      1.0
//   robust_sortgroupby                   0  system     2        2
//   risk_premium_MJ                    1.0  system   1.5      2.0
//
// see optimizer/ControlDB.cpp ControlDB::doRobustQueryOptimizationCQDs
// for the actual cqds that set these values
XDDkwd__(ROBUST_QUERY_OPTIMIZATION, "SYSTEM"),
// 0: allow sort group by in all
// 1: disallow sort group by from partial grpByRoot if no order requirement
// 2: disallow sort group by from partial grpByRoot
// 3: disallow sort group by in ESP
DDint__(ROBUST_SORTGROUPBY, "1"),
SDDui___(ROUNDING_MODE, "0"),
DDui___(ROUTINE_CACHE_SIZE, "20"),
// UDF default Uec
DDui___(ROUTINE_DEFAULT_UEC, "1"),
DDkwd__(ROUTINE_JOINS_SPOIL_JBB, "OFF"),
DDkwd__(ROWSET_ROW_COUNT, "OFF"),
DDint__(SAP_KEY_NJ_TABLE_SIZE_THRESHOLD, "10000000"),
DDkwd__(SAP_PA_DP2_AFFINITY_FOR_INSERTS, "ON"),
DDkwd__(SAP_PREFER_KEY_NESTED_JOIN, "OFF"),
DDint__(SAP_TUPLELIST_SIZE_THRESHOLD, "5000"),
XDDkwd__(SAVE_DROPPED_TABLE_DDL, "OFF"),
XDDansi_(SCHEMA, "SEABASE"),
SDDdskNS(SCRATCH_DISKS, ""),
SDDdskNS(SCRATCH_DISKS_EXCLUDED, "$SYSTEM"),
DDdskNS(SCRATCH_DISKS_PREFERRED, ""),
DDkwd__(SCRATCH_DISK_LOGGING, "OFF"),
DDdskNT(SCRATCH_DRIVE_LETTERS, ""),
DDdskNT(SCRATCH_DRIVE_LETTERS_EXCLUDED, ""),
DDdskNT(SCRATCH_DRIVE_LETTERS_PREFERRED, ""),
SDDpct__(SCRATCH_FREESPACE_THRESHOLD_PERCENT, "1"),
DDui___(SCRATCH_IO_BLOCKSIZE_SORT, "524288"),
// On LINUX, writev and readv calls are used to perform
// scratch file IO. This CQD sets the vector size to use
// in writev and readv calls. Overall IO size is affected
// by this cqd. Also, related cqds that are related to
// IO size are: COMP_INT_67, GEN_HGBY_BUFFER_SIZE,
// GEN_HSHJ_BUFFER_SIZE, OLAP_BUFFER_SIZE,
// EXE_HGB_INITIAL_HT_SIZE. Vector size is a no-op on other
// platforms.
DDui___(SCRATCH_IO_VECTOR_SIZE_HASH, "8"),
DDui___(SCRATCH_IO_VECTOR_SIZE_SORT, "1"),
DDui___(SCRATCH_MAX_OPENS_HASH, "1"),
DDui___(SCRATCH_MAX_OPENS_SORT, "1"),
DDui___(SCRATCH_MGMT_OPTION, "11"),
DDkwd__(SCRATCH_PREALLOCATE_EXTENTS, "OFF"),
DD_____(SEABASE_CATALOG, TRAFODION_SYSCAT_LIT),
DDkwd__(SEABASE_VOLATILE_TABLES, "ON"),
// SeaMonster messaging -- the default can be ON, OFF, or SYSTEM.
// When the default is SYSTEM we take the setting from env var
// SQ_SEAMONSTER which will have a value of 0 or 1.
DDkwd__(SEAMONSTER, "SYSTEM"), SDDkwd__(SEMIJOIN_TO_INNERJOIN_TRANSFORMATION, "SYSTEM"), // Disallow/Allow semi and anti-semi joins in MultiJoin framework DDkwd__(SEMI_JOINS_SPOIL_JBB, "OFF"), DDkwd__(SEQUENTIAL_BLOCKSPLIT, "SYSTEM"), DDansi_(SESSION_ID, ""), DDkwd__(SESSION_IN_USE, "OFF"), DDansi_(SESSION_USERNAME, ""), DDflt0_(SGB_CPUCOST_INITIALIZE, ".05"), DDui___(SGB_INITIAL_BUFFER_COUNT, "5."), DDui1__(SGB_INITIAL_BUFFER_SIZE, "5."), DDkwd__(SHAREOPENS_ON_REFCOUNT, "ON"), DDkwd__(SHARE_TEMPLATE_CACHED_PLANS, "ON"), DDui___(SHORT_OPTIMIZATION_PASS_THRESHOLD, "12"), SDDkwd__(SHOWCONTROL_SHOW_ALL, "OFF"), SDDkwd__(SHOWCONTROL_SHOW_SUPPORT, "OFF"), DDkwd__(SHOWDDL_DISPLAY_FORMAT, "EXTERNAL"), DDkwd__(SHOWDDL_DISPLAY_PRIVILEGE_GRANTS, "SYSTEM"), DDint__(SHOWDDL_FOR_REPLICATE, "0"), DDkwd__(SHOWLABEL_LOCKMODE, "OFF"), DDkwd__(SHOWWARN_OPT, "ON"), DDkwd__(SHOW_MEMO_STATS, "OFF"), DDkwd__(SIMILARITY_CHECK, "ON "), DDkwd__(SIMPLE_COST_MODEL, "ON"), XDDkwd__(SKEW_EXPLAIN, "ON"), XDDflt__(SKEW_ROWCOUNT_THRESHOLD, "1000000"), // Column row count // threshold below // which skew // buster is disabled. XDDflt__(SKEW_SENSITIVITY_THRESHOLD, "0.1"), DDkwd__(SKIP_METADATA_VIEWS, "OFF"), DDkwd__(SKIP_TRANSLATE_SYSCAT_DEFSCH_NAMES, "ON"), DDkwd__(SKIP_UNAVAILABLE_PARTITION, "OFF"), DDkwd__(SKIP_VCC, "OFF"), DDui0_5(SOFT_REQ_HASH_TYPE, "2"), DDkwd__(SORT_ALGO, "QS"), // Calibration // 01/23/98: 10000 // Original: 10. DDflt0_(SORT_CPUCOST_INITIALIZE, "10000."), DDui1__(SORT_EX_BUFFER_SIZE, "5."), DDkwd__(SORT_INTERMEDIATE_SCRATCH_CLEANUP, "ON"), DDui1__(SORT_IO_BUFFER_SIZE, "128."), DD1_200000(SORT_MAX_HEAP_SIZE_MB, "800"), DDkwd__(SORT_MEMORY_QUOTA_SYSTEM, "ON"), DD1_128(SORT_MERGE_BUFFER_UNIT_56KB, "1"), // Calibration // 04/06/2005: 1.5 DDflte_(SORT_QS_FACTOR, "1.5"), //Maximum records after which sort would switch over to //iterative heap sort. Most often in partial sort, we may want //do a quick sort or similar to avoid larger in-memory sort //setup. DDint__(SORT_REC_THRESHOLD, "1000"), // Calibration DDflte_(SORT_RS_FACTOR, "3.55"), // Calibration // 04/06/2005: 2.1 DDflte_(SORT_RW_FACTOR, "2.1"), DDflte_(SORT_TREE_NODE_SIZE, ".012"), DDkwd__(SQLMX_REGRESS, "OFF"), DDkwd__(SQLMX_SHOWDDL_SUPPRESS_ROW_FORMAT, "OFF"), DDansi_(SQLMX_UTIL_EXPLAIN_PLAN, "OFF"), SDDkwd__(SQLMX_UTIL_ONLINE_POPINDEX, "ON"), SDDui___(SSD_BMO_MAX_MEM_THRESHOLD_IN_MB, "1200"), // BertBert VV // Timeout for a streaming cursor to return to the fetch(), even if no // rows to return. The cursor is NOT closed, it just gives control to // the user again. // "0" means no timeout, just check instead. // "negative" means never timeout. // "positive" means the number of centiseconds to wait before timing out. XDDint__(STREAM_TIMEOUT, "-1"), XDDkwd__(SUBQUERY_UNNESTING, "ON"), DDkwd__(SUBQUERY_UNNESTING_P2, "ON"), DDkwd__(SUBSTRING_TRANSFORMATION, "OFF"), DDui___(SYNCDEPTH, "1"), XDDkwd__(TABLELOCK, "SYSTEM"), // This is the code base for the end user calibration cluster. // It must be either "DEBUG" or "RELEASE" #ifdef NDEBUG DDkwd__(TARGET_CODE, "RELEASE"), #else DDkwd__(TARGET_CODE, "DEBUG"), #endif // This is the frequency of the representative CPU of the end user // cluster. // TARGET_CPU_FREQUENCY units are MhZ. DDflte_(TARGET_CPU_FREQUENCY, "199."), // This is the seek time of the representative disk of the end user // cluster. // TARGET_IO_SEEK_TIME units are seconds DDflte_(TARGET_IO_SEEK_TIME, "0.0038"), // This is the sequential transfer rate for the representative // disk of the end user cluster. 
// TARGET_IO_SEQ_READ_RATE units are Mb/Sec DDflte_(TARGET_IO_SEQ_READ_RATE, "50.0"), // This is the transfer rate for the fast speed connection of // nodes in the end user cluster. // TARGET_MSG_LOCAL_RATE units are Mb/Sec DDflte_(TARGET_MSG_LOCAL_RATE, "10."), // This is the per msg time for the fast speed connection of // nodes in the end user cluster. // TARGET_MSG_LOCAL_TIME are seconds DDflte_(TARGET_MSG_LOCAL_TIME, "0.000125"), // This is the transfer rate for the connection among clusters // in the end user cluster (this only applies to NSK) // TARGET_MSG_REMOTE_RATE units are Mb/Sec DDflte_(TARGET_MSG_REMOTE_RATE, "1."), // This is the per msg time for the the connection among clusters // nodes in the end user cluster. // TARGET_MSG_REMOTE_TIME are seconds DDflte_(TARGET_MSG_REMOTE_TIME, "0.00125"), DDvol__(TEMPORARY_TABLE_HASH_PARTITIONS, "" ), DDkwd__(TERMINAL_CHARSET, (char *)SQLCHARSETSTRING_ISO88591), DDint__(TEST_PASS_ONE_ASSERT_TASK_NUMBER, "-1"), DDint__(TEST_PASS_TWO_ASSERT_TASK_NUMBER, "-1"), XDDintN2(TIMEOUT, "6000"), DDflt0_(TMUDF_CARDINALITY_FACTOR, "1"), DDflt0_(TMUDF_LEAF_CARDINALITY, "1"), DDkwd__(TOTAL_RESOURCE_COSTING, "ON"), DDint__(TRAF_ALIGNED_FORMAT_ADD_COL_METHOD, "2"), DDkwd__(TRAF_ALIGNED_ROW_FORMAT, "OFF"), DDkwd__(TRAF_ALLOW_ESP_COLOCATION, "OFF"), DDkwd__(TRAF_ALLOW_RESERVED_COLNAMES, "OFF"), DDkwd__(TRAF_ALLOW_SELF_REF_CONSTR, "ON"), DDkwd__(TRAF_ALTER_COL_ATTRS, "ON"), DDkwd__(TRAF_BLOB_AS_VARCHAR, "ON"), //set to OFF to enable Lobs support DDkwd__(TRAF_BOOTSTRAP_MD_MODE, "OFF"), DDkwd__(TRAF_CLOB_AS_VARCHAR, "ON"), //set to OFF to enable Lobs support DDkwd__(TRAF_COL_LENGTH_IS_CHAR, "ON"), DDkwd__(TRAF_CREATE_SIGNED_NUMERIC_LITERAL, "ON"), DDansi_(TRAF_CREATE_TABLE_WITH_UID, ""), DDkwd__(TRAF_DEFAULT_COL_CHARSET, (char *)SQLCHARSETSTRING_ISO88591), DDkwd__(TRAF_ENABLE_ORC_FORMAT, "OFF"), DDkwd__(TRAF_INDEX_ALIGNED_ROW_FORMAT, "ON"), DDkwd__(TRAF_INDEX_CREATE_OPT, "OFF"), DDkwd__(TRAF_LARGEINT_UNSIGNED_IO, "OFF"), DDkwd__(TRAF_LOAD_ALLOW_RISKY_INDEX_MAINTENANCE, "OFF"), DDkwd__(TRAF_LOAD_CONTINUE_ON_ERROR, "OFF"), DD_____(TRAF_LOAD_ERROR_COUNT_ID, "" ), DD_____(TRAF_LOAD_ERROR_COUNT_TABLE, "ERRORCOUNTER" ), DD_____(TRAF_LOAD_ERROR_LOGGING_LOCATION, "/bulkload/logs/" ), DDint__(TRAF_LOAD_FLUSH_SIZE_IN_KB, "1024"), DDkwd__(TRAF_LOAD_FORCE_CIF, "ON"), DDkwd__(TRAF_LOAD_LOG_ERROR_ROWS, "OFF"), DDint__(TRAF_LOAD_MAX_ERROR_ROWS, "0"), DDint__(TRAF_LOAD_MAX_HFILE_SIZE, "10240"), // in MB -->10GB by default DDkwd__(TRAF_LOAD_PREP_ADJUST_PART_FUNC, "ON"), DDkwd__(TRAF_LOAD_PREP_CLEANUP, "ON"), DDkwd__(TRAF_LOAD_PREP_KEEP_HFILES, "OFF"), DDkwd__(TRAF_LOAD_PREP_PHASE_ONLY, "OFF"), DDkwd__(TRAF_LOAD_PREP_SKIP_DUPLICATES , "OFF"), //need add code to check if folder exists or not. if not issue an error and ask //user to create it DD_____(TRAF_LOAD_PREP_TMP_LOCATION, "/bulkload/" ), DDkwd__(TRAF_LOAD_TAKE_SNAPSHOT , "OFF"), DDkwd__(TRAF_LOAD_USE_FOR_INDEXES, "ON"), DDkwd__(TRAF_LOAD_USE_FOR_STATS, "OFF"), // max size in bytes of a char or varchar column. 
DDui2__(TRAF_MAX_CHARACTER_COL_LENGTH, "200000"), DDkwd__(TRAF_MULTI_COL_FAM, "ON"), DDkwd__(TRAF_NO_CONSTR_VALIDATION, "OFF"), DDkwd__(TRAF_NO_DTM_XN, "OFF"), DDint__(TRAF_NUM_HBASE_VERSIONS, "0"), DDint__(TRAF_NUM_OF_SALT_PARTNS, "-1"), DDkwd__(TRAF_RELOAD_NATABLE_CACHE, "OFF"), DD_____(TRAF_SAMPLE_TABLE_LOCATION, "/sample/"), DDint__(TRAF_SEQUENCE_CACHE_SIZE, "-1"), DDkwd__(TRAF_STRING_AUTO_TRUNCATE, "OFF"), DDkwd__(TRAF_STRING_AUTO_TRUNCATE_WARNING, "OFF"), //TRAF_TABLE_SNAPSHOT_SCAN CQD can be set to : //NONE--> Snapshot scan is disabled and regular scan is used , //SUFFIX --> Snapshot scan enabled for the bulk unload (bulk unload // behavior id not changed) //LATEST --> enabled for the scan independently from bulk unload // the latest snapshot is used if it exists DDkwd__(TRAF_TABLE_SNAPSHOT_SCAN, "NONE"), DD_____(TRAF_TABLE_SNAPSHOT_SCAN_SNAP_SUFFIX, "SNAP"), //when the estimated table size is below the threshold (in MBs) //defined by TRAF_TABLE_SNAPSHOT_SCAN_TABLE_SIZE_THRESHOLD //regular scan instead of snapshot scan //does not apply to bulk unload which maintains the old behavior DDint__(TRAF_TABLE_SNAPSHOT_SCAN_TABLE_SIZE_THRESHOLD, "1000"), //timeout before we give up when trying to create the snapshot scanner DDint__(TRAF_TABLE_SNAPSHOT_SCAN_TIMEOUT, "6000"), //location for temporary links and files produced by snapshot scan DD_____(TRAF_TABLE_SNAPSHOT_SCAN_TMP_LOCATION, "/bulkload/"), DDkwd__(TRAF_TINYINT_INPUT_PARAMS, "OFF"), DDkwd__(TRAF_TINYINT_RETURN_VALUES, "OFF"), DDkwd__(TRAF_TINYINT_SPJ_SUPPORT, "OFF"), DDkwd__(TRAF_TINYINT_SUPPORT, "ON"), // DTM Transaction Type: MVCC, SSCC XDDkwd__(TRAF_TRANS_TYPE, "MVCC"), DDkwd__(TRAF_UNLOAD_BYPASS_LIBHDFS, "ON"), DD_____(TRAF_UNLOAD_DEF_DELIMITER, "|" ), DD_____(TRAF_UNLOAD_DEF_RECORD_SEPARATOR, "\n" ), DDint__(TRAF_UNLOAD_HDFS_COMPRESS, "0"), DDkwd__(TRAF_UNLOAD_SKIP_WRITING_TO_FILES, "OFF"), DDkwd__(TRAF_UPSERT_ADJUST_PARAMS, "OFF"), DDkwd__(TRAF_UPSERT_MODE, "MERGE"), DDint__(TRAF_UPSERT_WB_SIZE, "2097152"), DDkwd__(TRAF_UPSERT_WRITE_TO_WAL, "OFF"), DDkwd__(TRAF_USE_RWRS_FOR_MD_INSERT, "ON"), DDkwd__(TRY_DP2_REPARTITION_ALWAYS, "OFF"), SDDkwd__(TRY_PASS_ONE_IF_PASS_TWO_FAILS, "OFF"), // Disallow/Allow TSJs in MultiJoin framework DDkwd__(TSJS_SPOIL_JBB, "OFF"), // type a CASE expression or ValueIdUnion as varchar if its leaves // are of type CHAR of unequal length DDkwd__(TYPE_UNIONED_CHAR_AS_VARCHAR, "ON"), // UDF scalar indicating maximum number of rows out for each row in. DDui___(UDF_FANOUT, "1"), // Must be in form <cat>.<sch>. Delimited catalog names not allowed. DD_____(UDF_METADATA_SCHEMA, "TRAFODION.\"_UDF_\""), DDkwd__(UDF_SUBQ_IN_AGGS_AND_GBYS, "SYSTEM"), XDDui___(UDR_DEBUG_FLAGS, "0"), // see sqludr/sqludr.h for values SDD_____(UDR_JAVA_OPTIONS, "OFF"), DD_____(UDR_JAVA_OPTION_DELIMITERS, " "), XDDui___(UDR_JVM_DEBUG_PORT, "0"), XDDui___(UDR_JVM_DEBUG_TIMEOUT, "0"), DDkwd__(UNAVAILABLE_PARTITION, "STOP"), // "?" used? 
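  // For illustration (the port value below is hypothetical): most externalized
  // (XDD...) entries in this table can be overridden for a session, e.g.
  //   CONTROL QUERY DEFAULT UDR_JVM_DEBUG_PORT '9999';
  // and put back with
  //   CONTROL QUERY DEFAULT UDR_JVM_DEBUG_PORT RESET;
  // (the RESET semantics are spelled out ahead of resetAll() further below;
  // a few read-only and non-resetable attributes are excluded, see
  // isReadonlyAttribute() and isNonResetableAttribute()).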
DDkwd__(UNC_PROCESS, "OFF"), SDDkwd__(UNIQUE_HASH_JOINS, "SYSTEM"), SDDui___(UNIQUE_HASH_JOIN_MAX_INNER_SIZE, "1000"), SDDui___(UNIQUE_HASH_JOIN_MAX_INNER_SIZE_PER_INSTANCE, "100"), SDDui___(UNIQUE_HASH_JOIN_MAX_INNER_TABLES, "2"), DDui___(UNOPTIMIZED_ESP_BUFFER_SIZE_DOWN, "31000"), DDui___(UNOPTIMIZED_ESP_BUFFER_SIZE_UP, "31000"), DDui1__(UPDATED_BYTES_PER_ESP, "400000"), DDkwd__(UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY,"ON"), DDkwd__(UPD_ABORT_ON_ERROR, "OFF"), XDDkwd__(UPD_ORDERED, "ON"), DDkwd__(UPD_PARTIAL_ON_ERROR, "OFF"), DDkwd__(UPD_SAVEPOINT_ON_ERROR, "ON"), DDkwd__(USER_EXPERIENCE_LEVEL, "BEGINNER"), // ------------------------------------------------------------------------ // This default will use a new type of an ASSERT, CCMPASSERT as a CMPASSERT // when ON, else use that as a DCMPASSERT. Changed this default to OFF // just before the final build for R2 07/23/2004 RV // ------------------------------------------------------------------------- DDkwd__(USE_CCMPASSERT_AS_CMPASSERT, "OFF"), DDkwd__(USE_DENSE_BUFFERS, "ON"), // Use Hive tables as source for traf ustat and popindex DDkwd__(USE_HIVE_SOURCE, ""), // Use large queues on RHS of Flow/Nested Join when appropriate DDkwd__(USE_LARGE_QUEUES, "ON"), DDkwd__(USE_MAINTAIN_CONTROL_TABLE, "OFF"), DDkwd__(USE_OLD_DT_CONSTRUCTOR, "OFF"), // Adaptive segmentation, use operator max to determine degree of parallelism DDui___(USE_OPERATOR_MAX_FOR_DOP, "1"), // Specify the number of partitions before invoking parallel label operations DDui1__(USE_PARALLEL_FOR_NUM_PARTITIONS, "32"), DDkwd__(USTAT_ADD_SALTED_KEY_PREFIXES_FOR_MC, "ON"), // When ON, generate MCs for primary key prefixes as well as full key // of salted table when ON EVERY KEY or ON EVERY COLUMN is specified. DDkwd__(USTAT_ATTEMPT_ESP_PARALLELISM, "ON"), // for reading column values DDui___(USTAT_AUTOMATION_INTERVAL, "0"), XDDflt0_(USTAT_AUTO_CV_SAMPLE_SLOPE, "0.5"), // CV multiplier for sampling %. DDkwd__(USTAT_AUTO_EMPTYHIST_TWO_TRANS, "OFF"), // When ON empty hist insert will be 2 trans. DDkwd__(USTAT_AUTO_FOR_VOLATILE_TABLES, "OFF"), // Toggle for vol tbl histogram usage DDui___(USTAT_AUTO_MAX_HIST_AGE, "0"), // Age of oldest unused histogram - only applies when automation is on. DDui1__(USTAT_AUTO_MC_MAX_WIDTH, "10"), // The max columns in an MC histogram for automation. DDui___(USTAT_AUTO_MISSING_STATS_LEVEL, "4"), // Similar to HIST_MISSING_STATS_WARNING_LEVEL, but controls // if automation inserts missing stats to HISTOGRAMS table. // 0 - insert no stats, // 1 - insert single col hists, // 2 - insert all single col hists and MC hists for scans, // 3 - insert all single col hists and MC stats for scans and joins. // 4 - insert all single col hists and MC stats for scans, joins, and groupbys. XDDui___(USTAT_AUTO_PRIORITY, "150"), // Priority of ustats under USAS. DDui1__(USTAT_AUTO_READTIME_UPDATE_INTERVAL, "86400"), // Seconds between updates of READ_COUNT. // Should be > CACHE_HISTOGRAMS_REFRESH_INTERVAL. 
DDkwd__(USTAT_CHECK_HIST_ACCURACY, "OFF"), DDui1__(USTAT_CLUSTER_SAMPLE_BLOCKS, "1"), DDkwd__(USTAT_COLLECT_FILE_STATS, "ON"), // do we collect file stats DDkwd__(USTAT_COLLECT_MC_SKEW_VALUES, "OFF"), DD_____(USTAT_CQDS_ALLOWED_FOR_SPAWNED_COMPILERS, ""), // list of CQDs that can be pushed to seconday compilers // CQDs are delimited by "," DDkwd__(USTAT_DEBUG_FORCE_FETCHCOUNT, "OFF"), DD_____(USTAT_DEBUG_TEST, ""), DDflte_(USTAT_DSHMAX, "50.0"), DDkwd__(USTAT_ESTIMATE_HBASE_ROW_COUNT, "ON"), DDkwd__(USTAT_FETCHCOUNT_ACTIVE, "OFF"), DDkwd__(USTAT_FORCE_MOM_ESTIMATOR, "OFF"), DDkwd__(USTAT_FORCE_TEMP, "OFF"), DDflt0_(USTAT_FREQ_SIZE_PERCENT, "0.5"), // >100 effectively disables DDflt0_(USTAT_GAP_PERCENT, "10.0"), DDflt0_(USTAT_GAP_SIZE_MULTIPLIER, "1.5"), DDui___(USTAT_HBASE_SAMPLE_RETURN_INTERVAL, "10000000"), // Avoid scanner timeout by including on average at // least one row per this many when sampling within HBase. DDflt0_(USTAT_INCREMENTAL_FALSE_PROBABILITY, "0.01"), DDkwd__(USTAT_INCREMENTAL_UPDATE_STATISTICS, "ON"), DDkwd__(USTAT_INSERT_TO_NONAUDITED_TABLE, "OFF"), // Used internally to overcome problem in which insert // to the non-audited sample table must be done on same // process it was created on. This CQD is NOT externalized. DDkwd__(USTAT_INTERNAL_SORT, "HYBRID"), DDkwd__(USTAT_IS_IGNORE_UEC_FOR_MC, "OFF"), // if MCIS is ON, use IS to compute SC stats DDflt_0_1(USTAT_IS_MEMORY_FRACTION, "0.6"), DDflt0_(USTAT_IUS_INTERVAL_ROWCOUNT_CHANGE_THRESHOLD, "0.05"), DDflt0_(USTAT_IUS_INTERVAL_UEC_CHANGE_THRESHOLD, "0.05"), DDui1_6(USTAT_IUS_MAX_NUM_HASH_FUNCS, "5"), // the max disk space IUS CBFs can use is // MINOF(USTAT_IUS_MAX_PERSISTENT_DATA_IN_MB, // TtotalSpace * USTAT_IUS_MAX_PERSISTENT_DATA_IN_PERCENTAGE) DDui___(USTAT_IUS_MAX_PERSISTENT_DATA_IN_MB, "50000"), // 50GB DDflt0_(USTAT_IUS_MAX_PERSISTENT_DATA_IN_PERCENTAGE, "0.20"), // 20% of the total DDui1_6(USTAT_IUS_MAX_TRANSACTION_DURATION, "5"), // in minutes DDkwd__(USTAT_IUS_NO_BLOCK, "OFF"), DDansi_(USTAT_IUS_PERSISTENT_CBF_PATH, "SYSTEM"), // if turned on, IUS incremental statements will not take any "on existing" or // "on necessary" clause DDkwd__(USTAT_IUS_SIMPLE_SYNTAX, "OFF"), DDflt0_(USTAT_IUS_TOTAL_ROWCOUNT_CHANGE_THRESHOLD, "0.05"), DDflt0_(USTAT_IUS_TOTAL_UEC_CHANGE_THRESHOLD, "0.05"), DDkwd__(USTAT_IUS_USE_PERIODIC_SAMPLING, "OFF"), DDkwd__(USTAT_JIT_LOGGING, "OFF"), DDkwd__(USTAT_LOCK_HIST_TABLES, "OFF"), DD_____(USTAT_LOG, "ULOG"), DDui30_246(USTAT_MAX_CHAR_BOUNDARY_LEN, "30"), // Values can be 30-246. DDflt0_ (USTAT_MAX_CHAR_DATASIZE_FOR_IS, "1000"), // max data size in MB for char type to use XDDui___(USTAT_MAX_READ_AGE_IN_MIN, "5760"), DDui___(USTAT_MAX_SAMPLE_AGE, "365"), // For R2.5 set to a year so user created samples won't be removed. // internal sort without checking UEC. DDflt0_(USTAT_MIN_CHAR_UEC_FOR_IS, "0.2"), // minimum UEC for char type to use internal sort DDflt0_(USTAT_MIN_DEC_BIN_UEC_FOR_IS, "0.0"), // minimum UEC for binary types to use internal sort DDflt0_(USTAT_MIN_ESTIMATE_FOR_ROWCOUNT, "10000000"), DDui1__(USTAT_MIN_ROWCOUNT_FOR_CTS_SAMPLE, "10000"), XDDui1__(USTAT_MIN_ROWCOUNT_FOR_LOW_SAMPLE, "1000000"), XDDui1__(USTAT_MIN_ROWCOUNT_FOR_SAMPLE, "10000"), DDflt0_(USTAT_MODIFY_DEFAULT_UEC, "0.05"), DDflt0_(USTAT_NAHEAP_ESTIMATED_MAX, "1.3"), // estimated max memory allocation (in GB) feasible with NAHEAP. 
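  // Worked example for the USTAT_IUS_MAX_PERSISTENT_DATA_* pair above
  // (volume size hypothetical): with the defaults of 50000 MB and 0.20, a
  // volume with 102400 MB of total space would let the IUS CBFs use at most
  // MINOF(50000, 102400 * 0.20) = MINOF(50000, 20480) = 20480 MB.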
XDDui1__(USTAT_NECESSARY_SAMPLE_MAX, "5000000"), // Maximum sample size with NECESSARY DDui1__(USTAT_NUM_MC_GROUPS_FOR_KEYS, "10"), XDDpct__(USTAT_OBSOLETE_PERCENT_ROWCOUNT, "15"), DDkwd__(USTAT_PROCESS_GAPS, "ON"), DD0_255(USTAT_RETRY_DELAY, "100"), DD0_255(USTAT_RETRY_LIMIT, "3"), DD0_255(USTAT_RETRY_NEC_COLS_LIMIT, "3"), // by default, use retry for AddNecessaryColumns DDui1__(USTAT_RETRY_SECURITY_COUNT, "120"), DDpct__(USTAT_SAMPLE_PERCENT_DIFF, "10"), DDansi_(USTAT_SAMPLE_TABLE_NAME, " "), DDansi_(USTAT_SAMPLE_TABLE_NAME_CREATE, " "), DDkwd__(USTAT_SHOW_MC_INTERVAL_INFO, "OFF"), DDkwd__(USTAT_SHOW_MFV_INFO, "OFF"), DDflte_(USTAT_UEC_HI_RATIO, "0.5"), DDflte_(USTAT_UEC_LOW_RATIO, "0.1"), DDkwd__(USTAT_USE_BACKING_SAMPLE, "OFF"), DDkwd__(USTAT_USE_BULK_LOAD, "OFF"), DDkwd__(USTAT_USE_GROUPING_FOR_SAMPLING, "ON"), DDkwd__(USTAT_USE_INTERNAL_SORT_FOR_MC, "ON"), DDkwd__(USTAT_USE_INTERNAL_SORT_FOR_MC_LOOP, "ON"), DDkwd__(USTAT_USE_INTERNAL_SORT_FOR_MC_NEW_HIST, "OFF"), // TEMP FOR TESTING -- SHOULD REMOVE DDkwd__(USTAT_USE_IS_WHEN_NO_STATS, "ON"), // use IS when no histograms exist for the column DDkwd__(USTAT_USE_SIDETREE_INSERT, "ON"), DDkwd__(USTAT_USE_SLIDING_SAMPLE_RATIO, "ON"), // Trend sampling rate down w/increasing table size, going // flat at 1%. XDDflt1_(USTAT_YOULL_LIKELY_BE_SORRY, "100000000"), // guard against unintentional long-running UPDATE STATS DDkwd__(VALIDATE_RFORK_REDEF_TS, "OFF"), DDkwd__(VALIDATE_VIEWS_AT_OPEN_TIME, "OFF"), //this is the default length of a param which is typed as a VARCHAR. DD1_4096(VARCHAR_PARAM_DEFAULT_SIZE, "255"), // allows pcodes for varchars DDkwd__(VARCHAR_PCODE, "ON"), DDansi_(VOLATILE_CATALOG, ""), DDkwd__(VOLATILE_SCHEMA_IN_USE, "OFF"), // if this is set to ON or SYSTEM, then find a suitable key among all the // columns of a volatile table. // If this is set to OFF, and there is no user specified primary key or // store by clause, then make the first column of the volatile table // to be the clustering key. DDkwd__(VOLATILE_TABLE_FIND_SUITABLE_KEY, "SYSTEM"), // if this is set, and there is no user specified primary key or // store by clause, then make the first column of the volatile table // to be the clustering key. // Default is ON. DDkwd__(VOLATILE_TABLE_FIRST_COL_IS_CLUSTERING_KEY, "ON"), DDkwd__(VSBB_TEST_MODE, "OFF"), XDDkwd__(WMS_CHILD_QUERY_MONITORING, "OFF"), XDDkwd__(WMS_QUERY_MONITORING, "OFF"), // amount of work we are willing to assign per CPU for any query // not running at full system parallelism SDDflte_(WORK_UNIT_ESP, "0.08"), SDDflte_(WORK_UNIT_ESP_DATA_COPY_COST, "0.001"), // ZIG_ZAG_TREES ON means do ZIG_ZAG_TREES // $$$ OFF for beta DDkwd__(ZIG_ZAG_TREES, "SYSTEM"), DDkwd__(ZIG_ZAG_TREES_CONTROL, "OFF") }; // // NOTE: The defDefIx_ array is an array of integers that map // 'enum' values to defaultDefaults[] entries. // The defDefIx_ array could probably be made global static // since all threads should map the same 'enum' values to the // same defaultDefaults[] entries. Such as change is being // left to a future round of optimizations. 
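// To make the mapping concrete (illustrative, using the accessors defined
// just below): defaultDefaults[] is ordered by attribute name, while
// defDefIx_ is indexed by the DefaultConstants enum, so
//   getAttrName(CATALOG) == defaultDefaults[defDefIx_[CATALOG]].attrName
// and the same indirection is used for the value, validator and flags.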
// static THREAD_P size_t defDefIx_[__NUM_DEFAULT_ATTRIBUTES]; inline static const char *getAttrName(Int32 attrEnum) { return defaultDefaults[defDefIx_[attrEnum]].attrName; } inline static const char *getDefaultDefaultValue(Int32 attrEnum) { return defaultDefaults[defDefIx_[attrEnum]].value; } inline static const DefaultValidator *validator(Int32 attrEnum) { return defaultDefaults[defDefIx_[attrEnum]].validator; } inline static UInt32 getFlags(Int32 attrEnum) { return defaultDefaults[defDefIx_[attrEnum]].flags; } inline static NABoolean isFlagOn(Int32 attrEnum, NADefaultFlags flagbit) { #pragma nowarn(1506) // warning elimination return defaultDefaults[defDefIx_[attrEnum]].flags & (UInt32)flagbit; #pragma warn(1506) // warning elimination } inline static void setFlagOn(Int32 attrEnum, NADefaultFlags flagbit) { defaultDefaults[defDefIx_[attrEnum]].flags |= (UInt32)flagbit; } static NABoolean isSynonymOfRESET(NAString &value) { return (value == "RESET"); } static NABoolean isSynonymOfSYSTEM(Int32 attrEnum, NAString &value) { if (value == "") return TRUE; if (value == "SYSTEM") return !isFlagOn(attrEnum, DEFAULT_ALLOWS_SEPARATE_SYSTEM); if (value == "ENABLE"){ value = "ON"; return FALSE; } else if (value == "DISABLE"){ value = "OFF"; return FALSE; } // if (getDefaultDefaultValue(attrEnum) != NAString("DISABLE")) // cast reqd!! // return TRUE; // else // value = "ON"; return FALSE; } // Helper class used for holding and restoring CQDs class NADefaults::HeldDefaults { public: HeldDefaults(void); ~HeldDefaults(void); // CMPASSERT's on stack overflow void pushDefault(const char * value); // returns null if nothing to pop char * popDefault(void); private: enum { STACK_SIZE = 3 }; int stackPointer_; char * stackValue_[STACK_SIZE]; }; // Methods for helper class HeldDefaults NADefaults::HeldDefaults::HeldDefaults(void) : stackPointer_(0) { for (int i = 0; i < STACK_SIZE; i++) stackValue_[i] = NULL; } NADefaults::HeldDefaults::~HeldDefaults(void) { for (int i = 0; i < STACK_SIZE; i++) { if (stackValue_[i]) { NADELETEBASIC(stackValue_[i], NADHEAP); } } } // CMPASSERT's on stack overflow void NADefaults::HeldDefaults::pushDefault(const char * value) { CMPASSERT(stackPointer_ < STACK_SIZE); stackValue_[stackPointer_] = new NADHEAP char[strlen(value) + 1]; strcpy(stackValue_[stackPointer_],value); stackPointer_++; } // returns null if nothing to pop char * NADefaults::HeldDefaults::popDefault(void) { char * result = 0; if (stackPointer_ > 0) { stackPointer_--; result = stackValue_[stackPointer_]; stackValue_[stackPointer_] = NULL; } return result; } size_t NADefaults::numDefaultAttributes() { return (size_t)__NUM_DEFAULT_ATTRIBUTES; } // Returns current defaults in alphabetic order (for SHOWCONTROL listing). const char *NADefaults::getCurrentDefaultsAttrNameAndValue( size_t ix, const char* &name, const char* &value, NABoolean userDefaultsOnly) { if (ix < numDefaultAttributes()) { NABoolean get = FALSE; if (userDefaultsOnly) { // if this default was entered by user, return it. 
get = userDefault(defaultDefaults[ix].attrEnum); } else { // display the control if // - it is externalized or // - it is for support only and a CQD is set to show those, or // - a CQD is set to show all the controls get = (defaultDefaults[ix].flags & DEFAULT_IS_EXTERNALIZED) || // bit-AND ((defaultDefaults[ix].flags & DEFAULT_IS_FOR_SUPPORT) && (getToken(SHOWCONTROL_SHOW_SUPPORT) == DF_ON)) || (getToken(SHOWCONTROL_SHOW_ALL) == DF_ON); } if (get) { name = defaultDefaults[ix].attrName; value = currentDefaults_[defaultDefaults[ix].attrEnum]; return name; } } return name = value = NULL; } // ----------------------------------------------------------------------- // convert the default defaults into a table organized by enum values // ----------------------------------------------------------------------- void NADefaults::initCurrentDefaultsWithDefaultDefaults() { deleteMe(); const size_t numAttrs = numDefaultAttributes(); if (numAttrs != sizeof(defaultDefaults) / sizeof(DefaultDefault)) return; CMPASSERT_STRING (numAttrs == sizeof(defaultDefaults) / sizeof(DefaultDefault), "Check sqlcomp/DefaultConstants.h for a gap in enum DefaultConstants or sqlcomp/nadefaults.cpp for duplicate entries in array defaultDefaults[]."); SqlParser_NADefaults_Glob = SqlParser_NADefaults_ = new NADHEAP SqlParser_NADefaults(); provenances_ = new NADHEAP char [numAttrs]; // enum fits in 2 bits flags_ = new NADHEAP char [numAttrs]; resetToDefaults_ = new NADHEAP char * [numAttrs]; currentDefaults_ = new NADHEAP const char * [numAttrs]; currentFloats_ = new NADHEAP float * [numAttrs]; currentTokens_ = new NADHEAP DefaultToken * [numAttrs]; currentState_ = INIT_DEFAULT_DEFAULTS; heldDefaults_ = new NADHEAP HeldDefaults * [numAttrs]; // reset all entries size_t i = 0; for (i = 0; i < numAttrs; i++) { provenances_[i] = currentState_; flags_[i] = 0; defDefIx_[i] = 0; } memset( resetToDefaults_, 0, sizeof(char *) * numAttrs ); memset( currentDefaults_, 0, sizeof(char *) * numAttrs ); memset( currentFloats_, 0, sizeof(float *) * numAttrs ); memset( currentTokens_, 0, sizeof(DefaultToken *) * numAttrs ); memset( heldDefaults_, 0, sizeof(HeldDefaults *) * numAttrs ); #ifndef NDEBUG // This env-var turns on consistency checking of default-defaults and // other static info. The env-var does not get passed from sqlci to arkdev // until *AFTER* the initialization code runs, so you must do a static // arkcmp compile to do this checking. TEST050 does this, in fact. 
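  // For example (debug builds only; the exact invocation is site-specific):
  //   export NADEFAULTS_VALIDATE=1
  // followed by a static arkcmp compile runs the extra ordering and
  // canonical-form checks guarded by "if (nadval)" below.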
NABoolean nadval = !!getenv("NADEFAULTS_VALIDATE"); #endif // for each entry of the (alphabetically sorted) default defaults // table, enter the default default into the current default table // which is sorted by enum values NAString prevAttrName; for (i = 0; i < numAttrs; i++) { // the enum must be less than the max (if this assert fails // you might have made the range of constants in the enum // non-contiguous by assigning hard-coded numbers to some entries) CMPASSERT(ENUM_RANGE_CHECK(defaultDefaults[i].attrEnum)); // can't have the same enum value twice in defaultDefaults CMPASSERT(currentDefaults_[defaultDefaults[i].attrEnum] == NULL); // set currentDefaults_[enum] to the static string, // leaving the "allocated from heap" flag as FALSE char * value = new NADHEAP char[strlen(defaultDefaults[i].value) + 1]; strcpy(value,defaultDefaults[i].value); // trim trailing spaces (except UDR_JAVA_OPTION_DELIMITERS, since // trailing space is allowed for it) if (defaultDefaults[i].attrEnum != UDR_JAVA_OPTION_DELIMITERS) { Lng32 len = strlen(value); while ((len > 0) && (value[len-1] == ' ')) { value[len-1] = 0; len--; } } currentDefaults_[defaultDefaults[i].attrEnum] = value; // set up our backlink which maps [enum] to its defaultDefaults entry defDefIx_[defaultDefaults[i].attrEnum] = i; // attrs must be in ascending sorted order. If not, error out. if (prevAttrName > defaultDefaults[i].attrName) { SqlParser_NADefaults_ = NULL; return; } prevAttrName = defaultDefaults[i].attrName; // validate initial default default values CMPASSERT(defaultDefaults[i].validator); if (! defaultDefaults[i].validator->validate( defaultDefaults[i].value, this, defaultDefaults[i].attrEnum, +1/*warning*/)) { SqlParser_NADefaults_ = NULL; cerr << "\nERROR: " << defaultDefaults[i].attrName << " has invalid value" << defaultDefaults[i].value << endl; return; } // LCOV_EXCL_START // for debugging only #ifndef NDEBUG if (nadval) { // additional sanity checking we want to do occasionally NAString v; // ensure the static table really is in alphabetic order CMPASSERT(i == 0 || strcmp(defaultDefaults[i-1].attrName, defaultDefaults[i].attrName) < 0); // ensure these names are fit and trim and in canonical form v = defaultDefaults[i].attrName; TrimNAStringSpace(v); v.toUpper(); CMPASSERT(v == defaultDefaults[i].attrName); // validate initial default default values CMPASSERT(defaultDefaults[i].validator); defaultDefaults[i].validator->validate( defaultDefaults[i].value, this, defaultDefaults[i].attrEnum, +1/*warning*/); // ensure these values are fit and trim and in canonical form v = defaultDefaults[i].value; TrimNAStringSpace(v); defaultDefaults[i].validator->applyUpper(v); CMPASSERT(v == defaultDefaults[i].value); // alert the programmer if (isSynonymOfSYSTEM(defaultDefaults[i].attrEnum, v)) if (v != "" || defaultDefaults[i].validator != &validateAnsiName) cerr << "\nWARNING: " << defaultDefaults[i].attrName << " has SYSTEM default (" << v << ");\n\t read NOTE 2 in " << __FILE__ << endl; if (isSynonymOfRESET(v)) if (v != "" || defaultDefaults[i].validator != &validateAnsiName) cerr << "\nWARNING: " << defaultDefaults[i].attrName << " has RESET default (" << v << ");\n\t this makes no sense!" 
<< endl; if (defaultDefaults[i].validator == &validateUnknown) cerr << "\nWARNING: " << defaultDefaults[i].attrName << " has a NO-OP validator" << endl; // the token keyword array must have no missing strings, // it must also be in alphabetic order, // each entry must be canonical, and // must have no embedded spaces (see token() method, space/uscore...) if (i == 0) for (size_t j = 0; j < DF_lastToken; j++) { CMPASSERT(keywords_[j]); CMPASSERT(j == 0 || strcmp(keywords_[j-1], keywords_[j]) < 0); NAString v(keywords_[j]); TrimNAStringSpace(v); v.toUpper(); // we know keywords must be caseINsens CMPASSERT(v == keywords_[j]); CMPASSERT(v.first(' ') == NA_NPOS); } } // if env-var #endif // NDEBUG // LCOV_EXCL_STOP } // for i // set the default value for GENERATE_EXPLAIN depending on whether // this is a static compile or a dynamic compile. if (CmpCommon::context()->GetMode() == STMT_STATIC) { currentDefaults_[GENERATE_EXPLAIN] = "ON"; currentDefaults_[DO_RUNTIME_EID_SPACE_COMPUTATION] = "ON"; currentDefaults_[DETAILED_STATISTICS] = "MEASURE"; } else { currentDefaults_[GENERATE_EXPLAIN] = "OFF"; currentDefaults_[DO_RUNTIME_EID_SPACE_COMPUTATION] = "OFF"; currentDefaults_[DETAILED_STATISTICS] = "OPERATOR"; } // set the default value of hive_catalog to the hive_system_catalog currentDefaults_[HIVE_CATALOG] = HIVE_SYSTEM_CATALOG; // set the default value of hbase_catalog to the hbase_system_catalog currentDefaults_[HBASE_CATALOG] = HBASE_SYSTEM_CATALOG; currentDefaults_[SEABASE_CATALOG] = TRAFODION_SYSCAT_LIT; // Test for TM_USE_SSCC from ms.env. // Only a setting of TM_USE_SSCC set to 1 will change the value to SSCC. // Otherwise, the default will remain at MVCC. char * ev = getenv("TM_USE_SSCC"); Lng32 useValue = 0; if (ev) { useValue = (Lng32)str_atoi(ev, str_len(ev)); if (useValue == 1) currentDefaults_[TRAF_TRANS_TYPE] = "SSCC"; } // Begin: Temporary workaround for SQL build regressions to pass NABoolean resetNeoDefaults = FALSE; // On SQ, the way to get an envvar from inside a un-attached process // is to use the msg_getenv_str() call and set the env inside // the SQ_PROP_ property file. In this case the property // file is $MY_SQROOT/etc/SQ_PROP_tdm_arkcmp which contains the line // "SQLMX_REGRESS=1". This file was generated by tools/setuplnxenv. // resetNeoDefaults = (msg_getenv_str("SQLMX_REGRESS") != NULL); resetNeoDefaults = (getenv("SQLMX_REGRESS") != NULL); if(resetNeoDefaults) { // turn similarity check OFF stats during regressions run. currentDefaults_[SIMILARITY_CHECK] = "OFF"; // turn on ALL stats during regressions run. currentDefaults_[COMP_BOOL_157] = "ON"; // turn on INTERNAL format for SHOWDDL statements currentDefaults_[SHOWDDL_DISPLAY_FORMAT] = "INTERNAL"; } // End: Temporary workaround for SQL build regressions to pass // Cache all the default keywords up front, // leaving other non-keyword token to be cached on demand. // The "keyword" that is not cached is the kludge/clever trick that // Matt puts in for NATIONAL_CHARSET. 
NAString tmp( NADHEAP ); for ( i = 0; i < numAttrs; i++ ) { #ifndef NDEBUG #pragma nowarn(1506) // warning elimination const DefaultValidatorType validatorType = validator(i)->getType(); #pragma warn(1506) // warning elimination #endif #pragma nowarn(1506) // warning elimination if ( validator(i)->getType() == VALID_KWD && (i != NATIONAL_CHARSET) && (i != INPUT_CHARSET) && (i != ISO_MAPPING) ) #pragma warn(1506) // warning elimination { currentTokens_[i] = new NADHEAP DefaultToken; // do not call 'token' method as it will return an error if FALSE // is to be inserted. Just directly assign DF_OFF to non-resetable defs. if (isNonResetableAttribute(defaultDefaults[defDefIx_[i]].attrName)) *currentTokens_[i] = DF_OFF; else #pragma nowarn(1506) // warning elimination *currentTokens_[i] = token( i, tmp ); #pragma warn(1506) // warning elimination } } if (getToken(MODE_SEABASE) == DF_ON) { currentDefaults_[CATALOG] = TRAFODION_SYSCAT_LIT; if (getToken(SEABASE_VOLATILE_TABLES) == DF_ON) { NAString sbCat = getValue(SEABASE_CATALOG); CmpCommon::context()->sqlSession()->setVolatileCatalogName(sbCat, TRUE); } } SqlParser_NADefaults_->NAMETYPE_ = getToken(NAMETYPE); SqlParser_NADefaults_->NATIONAL_CHARSET_ = CharInfo::getCharSetEnum(currentDefaults_[NATIONAL_CHARSET]); SqlParser_NADefaults_->ISO_MAPPING_ = CharInfo::getCharSetEnum(currentDefaults_[ISO_MAPPING]); SqlParser_NADefaults_->DEFAULT_CHARSET_ = CharInfo::getCharSetEnum(currentDefaults_[DEFAULT_CHARSET]); SqlParser_NADefaults_->ORIG_DEFAULT_CHARSET_ = CharInfo::getCharSetEnum(currentDefaults_[DEFAULT_CHARSET]); // Set the NAString_isoMappingCS memory cache for use by routines // ToInternalIdentifier() and ToAnsiIdentifier[2|3]() in module // w:/common/NAString[2].cpp. These routines currently cannot // access SqlParser_ISO_MAPPING directly due to the complex // build hierarchy. NAString_setIsoMapCS((SQLCHARSET_CODE) SqlParser_NADefaults_->ISO_MAPPING_); } NADefaults::NADefaults(NAMemory * h) : provenances_(NULL) , flags_(NULL) , resetToDefaults_(NULL) , currentDefaults_(NULL) , currentFloats_(NULL) , currentTokens_(NULL) , heldDefaults_(NULL) , currentState_(UNINITIALIZED) , readFromSQDefaultsTable_(FALSE) , SqlParser_NADefaults_(NULL) , catSchSetToUserID_(NULL) , heap_(h) , resetAll_(FALSE) , defFlags_(0) { static THREAD_P NABoolean systemParamterUpdated = FALSE; // First (but only if NSK-LITE Services exist), // write system parameters (attributes DEF_*) into DefaultDefaults, if (!systemParamterUpdated && !cmpCurrentContext->isStandalone()) { updateSystemParameters(); systemParamterUpdated = TRUE; } // then copy DefaultDefaults into CurrentDefaults. initCurrentDefaultsWithDefaultDefaults(); // Set additional defaultDefaults flags: // If an attr allows ON/OFF/SYSTEM and the default-default is not SYSTEM, // then you must set this flag. Otherwise, CQD attr 'system' will revert // the value back to the default-default, which is not SYSTEM. // setFlagOn(...attr..., DEFAULT_ALLOWS_SEPARATE_SYSTEM); // // (See attESPPara in OptPhysRelExpr.cpp.) 
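  // For example (illustrative): with the flag set on ATTEMPT_ESP_PARALLELISM
  // below,
  //   CONTROL QUERY DEFAULT ATTEMPT_ESP_PARALLELISM 'SYSTEM';
  // keeps SYSTEM as a distinct token instead of silently reverting the
  // attribute to its default-default value.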
setFlagOn(ATTEMPT_ESP_PARALLELISM, DEFAULT_ALLOWS_SEPARATE_SYSTEM); setFlagOn(HJ_TYPE, DEFAULT_ALLOWS_SEPARATE_SYSTEM); setFlagOn(ZIG_ZAG_TREES, DEFAULT_ALLOWS_SEPARATE_SYSTEM); setFlagOn(COMPRESSED_INTERNAL_FORMAT, DEFAULT_ALLOWS_SEPARATE_SYSTEM); setFlagOn(COMPRESSED_INTERNAL_FORMAT_BMO, DEFAULT_ALLOWS_SEPARATE_SYSTEM); setFlagOn(HBASE_SMALL_SCANNER, DEFAULT_ALLOWS_SEPARATE_SYSTEM); } NADefaults::~NADefaults() { deleteMe(); } void NADefaults::deleteMe() { if (resetToDefaults_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETEBASIC(resetToDefaults_[i], NADHEAP); NADELETEBASIC(resetToDefaults_, NADHEAP); } if (currentDefaults_) { for (size_t i = numDefaultAttributes(); i--; ) if (provenances_[i] > INIT_DEFAULT_DEFAULTS) NADELETEBASIC(currentDefaults_[i], NADHEAP); NADELETEBASIC(currentDefaults_, NADHEAP); } if (currentFloats_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETEBASIC(currentFloats_[i], NADHEAP); NADELETEBASIC(currentFloats_, NADHEAP); } if (currentTokens_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETEBASIC(currentTokens_[i], NADHEAP); NADELETEBASIC(currentTokens_, NADHEAP); } if (heldDefaults_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETE(heldDefaults_[i], HeldDefaults, NADHEAP); NADELETEBASIC(heldDefaults_, NADHEAP); } for (CollIndex i = tablesRead_.entries(); i--; ) tablesRead_.removeAt(i); NADELETEBASIC(provenances_, NADHEAP); NADELETEBASIC(flags_, NADHEAP); NADELETE(SqlParser_NADefaults_, SqlParser_NADefaults, NADHEAP); } // ----------------------------------------------------------------------- // Find the attribute name from its enum value in the defaults table. // ----------------------------------------------------------------------- const char *NADefaults::lookupAttrName(Int32 attrEnum, Int32 errOrWarn) { if (ATTR_RANGE_CHECK) return getAttrName(attrEnum); static THREAD_P char noSuchAttr[20]; sprintf(noSuchAttr, "**%d**", attrEnum); if (errOrWarn) // $0~string0 is not the name of any DEFAULTS table attribute. *CmpCommon::diags() << DgSqlCode(ERRWARN(2050)) << DgString0(noSuchAttr); return noSuchAttr; } // ----------------------------------------------------------------------- // Find the enum value from its string representation in the defaults table. 
// ----------------------------------------------------------------------- enum DefaultConstants NADefaults::lookupAttrName(const char *name, Int32 errOrWarn, Int32 *position) { NAString attrName(name); TrimNAStringSpace(attrName, FALSE, TRUE); // trim trailing blanks only attrName.toUpper(); // start with the full range of defaultDefaults size_t lo = 0; size_t hi = numDefaultAttributes(); size_t split; Int32 cresult; // perform a binary search in the ordered table defaultDefaults do { // compare the token with the middle entry in the range split = (lo + hi) / 2; cresult = attrName.compareTo(defaultDefaults[split].attrName); if (cresult < 0) { // token < split value, search first half of range hi = split; } else if (cresult > 0) { if (lo == split) // been there, done that { CMPASSERT(lo == hi-1); break; } // token > split value, search second half of range lo = split; } } while (cresult != 0 && lo < hi); if (position != 0) #pragma nowarn(1506) // warning elimination *position = split; #pragma warn(1506) // warning elimination // if the last comparison result was equal, return value at "split" if (cresult == 0) return defaultDefaults[split].attrEnum; // otherwise the string has no corresponding enum value if (errOrWarn) // $0~string0 is not the name of any DEFAULTS table attribute. *CmpCommon::diags() << DgSqlCode(ERRWARN(2050)) << DgString0(attrName); return __INVALID_DEFAULT_ATTRIBUTE; // negative } #define WIDEST_CPUARCH_VALUE 30 // also wider than any utoa_() result static void utoa_(UInt32 val, char *buf) { sprintf(buf, "%u", val); } static void itoa_(Int32 val, char *buf) { sprintf(buf, "%d", val); } static void ftoa_(float val, char *buf) { snprintf(buf, WIDEST_CPUARCH_VALUE, "%0.2f", val); } // Updates the system parameters in the defaultDefaults table. void NADefaults::updateSystemParameters(NABoolean reInit) { static const char *arrayOfSystemParameters[] = { "DEF_CPU_ARCHITECTURE", "DEF_DISCS_ON_CLUSTER", "DEF_INSTRUCTIONS_SECOND", "DEF_PAGE_SIZE", "DEF_LOCAL_CLUSTER_NUMBER", "DEF_LOCAL_SMP_NODE_NUMBER", "DEF_NUM_SMP_CPUS", "MAX_ESPS_PER_CPU_PER_OP", "DEFAULT_DEGREE_OF_PARALLELISM", "DEF_NUM_NODES_IN_ACTIVE_CLUSTERS", // this is deliberately not in the list: "DEF_CHUNK_SIZE", "DEF_NUM_BM_CHUNKS", "DEF_PHYSICAL_MEMORY_AVAILABLE", //returned in KB not bytes "DEF_TOTAL_MEMORY_AVAILABLE", //returned in KB not bytes "DEF_VIRTUAL_MEMORY_AVAILABLE" , "GEN_MAX_NUM_PART_DISK_ENTRIES" , "USTAT_IUS_PERSISTENT_CBF_PATH" }; //returned in KB not bytes char valuestr[WIDEST_CPUARCH_VALUE]; // Set up global cluster information. setUpClusterInfo(CmpCommon::contextHeap()); // Extract SMP node number and cluster number where this arkcmp is running. short nodeNum = 0; Int32 clusterNum = 0; OSIM_getNodeAndClusterNumbers(nodeNum, clusterNum); // First (but only if NSK-LITE Services exist), // write system parameters (attributes DEF_*) into DefaultDefaults, // then copy DefaultDefaults into CurrentDefaults. 
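  // For example (illustrative): DEF_NUM_SMP_CPUS is filled in below from
  // gpClusterInfo->numberOfCpusPerSMP() and DEF_LOCAL_SMP_NODE_NUMBER from
  // the node number obtained above, so the values seen in a SHOWCONTROL
  // listing reflect the hardware this compiler instance is actually
  // running on.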
if (!cmpCurrentContext->isStandalone()) { size_t numElements = sizeof(arrayOfSystemParameters) / sizeof(char *); for (size_t i = 0; i < numElements; i++) { Int32 j; // perform a lookup for the string, using a binary search lookupAttrName(arrayOfSystemParameters[i], -1, &j); CMPASSERT(j >= 0); if(reInit) NADELETEBASIC(defaultDefaults[j].value,NADHEAP); char *newValue = new (GetCliGlobals()->exCollHeap()) char[WIDEST_CPUARCH_VALUE]; newValue[0] = '\0'; defaultDefaults[j].value = newValue; switch(defaultDefaults[j].attrEnum) { case DEF_CPU_ARCHITECTURE: switch(gpClusterInfo->cpuArchitecture()) { // 123456789!1234567890@123456789 case CPU_ARCH_INTEL_80386: strcpy(newValue, "INTEL_80386"); break; case CPU_ARCH_INTEL_80486: strcpy(newValue, "INTEL_80486"); break; case CPU_ARCH_PENTIUM: strcpy(newValue, "PENTIUM"); break; case CPU_ARCH_PENTIUM_PRO: strcpy(newValue, "PENTIUM_PRO"); break; case CPU_ARCH_MIPS: strcpy(newValue, "MIPS"); break; case CPU_ARCH_ALPHA: strcpy(newValue, "ALPHA"); break; case CPU_ARCH_PPC: strcpy(newValue, "PPC"); break; default: strcpy(newValue, "UNKNOWN"); break; } if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j], FALSE); break; case DEF_DISCS_ON_CLUSTER: strcpy(newValue, "8"); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_PAGE_SIZE: utoa_(gpClusterInfo->pageSize(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_LOCAL_CLUSTER_NUMBER: utoa_(clusterNum, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_LOCAL_SMP_NODE_NUMBER: utoa_(nodeNum, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_NUM_SMP_CPUS: utoa_(gpClusterInfo->numberOfCpusPerSMP(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEFAULT_DEGREE_OF_PARALLELISM: { Lng32 x = 2; utoa_(x, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); } break; case MAX_ESPS_PER_CPU_PER_OP: { float espsPerCore = computeNumESPsPerCore(FALSE); ftoa_(espsPerCore, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); } break; case DEF_NUM_NODES_IN_ACTIVE_CLUSTERS: utoa_(((NAClusterInfoLinux*)gpClusterInfo)->numLinuxNodes(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_PHYSICAL_MEMORY_AVAILABLE: utoa_(gpClusterInfo->physicalMemoryAvailable(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_TOTAL_MEMORY_AVAILABLE: utoa_(gpClusterInfo->totalMemoryAvailable(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_VIRTUAL_MEMORY_AVAILABLE: utoa_(gpClusterInfo->virtualMemoryAvailable(), valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). 
updateCurrentDefaultsForOSIM(&defaultDefaults[j]); break; case DEF_NUM_BM_CHUNKS: { UInt32 numChunks = (UInt32) (gpClusterInfo->physicalMemoryAvailable() / def_DEF_CHUNK_SIZE / 4); utoa_(numChunks, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); } break; case DEF_INSTRUCTIONS_SECOND: { Int32 frequency, speed; frequency = gpClusterInfo->processorFrequency(); switch (gpClusterInfo->cpuArchitecture()) { case CPU_ARCH_PENTIUM_PRO: speed = (Int32) (frequency * 0.5); break; case CPU_ARCH_PENTIUM: speed = (Int32) (frequency * 0.4); break; default: speed = (Int32) (frequency * 0.3); break; } itoa_(speed, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults(). updateCurrentDefaultsForOSIM(&defaultDefaults[j]); } break; case GEN_MAX_NUM_PART_DISK_ENTRIES: { // Make sure the gpClusterInfo points at an NAClusterLinux object. // In osim simulation mode, the pointer can point at a // NAClusterNSK object, for which the method numTSEsForPOS() is not // defined. NAClusterInfoLinux* gpLinux = dynamic_cast<NAClusterInfoLinux*>(gpClusterInfo); if ( gpLinux ) { UInt32 numTSEs = (UInt32)gpLinux->numTSEsForPOS(); utoa_(numTSEs, valuestr); strcpy(newValue, valuestr); if(reInit) ActiveSchemaDB()-> getDefaults().updateCurrentDefaultsForOSIM(&defaultDefaults[j]); } } break; case USTAT_IUS_PERSISTENT_CBF_PATH: { // set the CQD it to $HOME/cbfs const char* home = getenv("HOME"); if ( home ) { str_cat(home, "/cbfs", newValue); } } break; default: #ifndef NDEBUG cerr << "updateSystemParameters: no case for " << defaultDefaults[j].attrName << endl; #endif break; } // switch (arrayOfSystemParameters) } // for } // isStandalone } // updateSystemParameters() //============================================================================== // Get SMP node number and cluster number on which this arkcmp.exe is running. //============================================================================== void NADefaults::getNodeAndClusterNumbers(short& nodeNum, Int32& clusterNum) { SB_Phandle_Type pHandle; Int32 error = XPROCESSHANDLE_GETMINE_(&pHandle); Int32 nodeNumInt; // XPROCESSHANDLE_DECOMPOSE_ takes an integer. Int32 pin; error = XPROCESSHANDLE_DECOMPOSE_(&pHandle, &nodeNumInt, &pin, &clusterNum); nodeNum = nodeNumInt; // Store 4-byte integer back to short integer CMPASSERT(error == 0); } inline static NABoolean initializeSQLdone() { return FALSE; } // Setup for readFromSQLTable(): // #include "SQLCLIdev.h" const SQLMODULE_ID __SQL_mod_866668761818000 = { /* version */ SQLCLI_CURRENT_VERSION, /* module name */ "HP_SYSTEM_CATALOG.SYSTEM_SCHEMA.READDEF_N29_000", /* time stamp */ 866668761818000LL, /* char set */ "ISO88591", /* name length */ 47 }; static const Int32 MAX_VALUE_LEN = 1000; // Read the SQL defaults table, to layer on further defaults. // // [1] This is designed such that it can be called multiple times // (a site-wide defaults table, then a user-specific one, e.g.) // and by default it will supersede values read/computed from earlier tables. // // [2] It can also be called *after* CQD's have been issued // (e.g. from the getCatalogAndSchema() method) // and by default it will supersede values from earlier tables // but *not* explicitly CQD-ed settings. // // This default behavior is governed by the overwrite* arguments in // various methods (see the .h file). Naturally you can override such behavior, // e.g., if you wanted to reset to an earlier state, erasing all user CQD's. 
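// A rough sketch of the usual layering, for illustration only (the table
// names are hypothetical):
//   readFromSQLTable("<site-wide DEFAULTS table>", ...); // [1] read first
//   readFromSQLTable("<user DEFAULTS table>", ...);      // [1] supersedes the above
//   CONTROL QUERY DEFAULT <attr> '<value>';              // explicit CQD
//   readFromSQLTable(..., overwriteIfNotYet, ...);       // [2] layers on, but does
//                                                        //     not clobber the CQD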
// void NADefaults::readFromSQLTable(const char *tname, Provenance overwriteIfNotYet, Int32 errOrWarn) { char value[MAX_VALUE_LEN + 1]; // CMPASSERT(MAX_VALUE_LEN >= ComMAX_2_PART_EXTERNAL_UCS2_NAME_LEN_IN_NAWCHARS); // First (but only if NSK-LITE Services exist), // write system parameters (attributes DEF_*) into DefaultDefaults, // then copy DefaultDefaults into CurrentDefaults. if (!cmpCurrentContext->isStandalone()) { Lng32 initialErrCnt = CmpCommon::diags()->getNumber(); // Set this *before* doing any insert()'s ... currentState_ = READ_FROM_SQL_TABLE; Int32 loop_here=0; while (loop_here > 10) { loop_here++; if (loop_here > 1000) loop_here=100; } if (tname) { NABoolean isSQLTable = TRUE; if (*tname == ' ') { // called from NADefaults::readFromFlatFile() isSQLTable = FALSE; // -- see kludge in .h file! tname++; } char attrName[101]; // column ATTRIBUTE VARCHAR(100) UPSHIFT Int32 sqlcode; static THREAD_P struct SQLCLI_OBJ_ID __SQL_id0; FILE *flatfile = NULL; if (isSQLTable) { init_SQLCLI_OBJ_ID(&__SQL_id0, SQLCLI_CURRENT_VERSION, cursor_name, &__SQL_mod_866668761818000, "S1", 0, SQLCHARSETSTRING_ISO88591, 2); /* EXEC SQL OPEN S1; See file NADefaults.mdf for cursor declaration */ sqlcode = SQL_EXEC_ClearDiagnostics(&__SQL_id0); sqlcode = SQL_EXEC_Exec(&__SQL_id0,NULL,1,tname,NULL); } else { flatfile = fopen(tname, "r"); sqlcode = flatfile ? 0 : -ABS(arkcmpErrorFileOpenForRead); } /* EXEC SQL FETCH S1 INTO :attrName, :value; */ // Since the DEFAULTS table is PRIMARY KEY (SUBSYSTEM, ATTRIBUTE), // we'll fetch (scanning the clustering index) // CATALOG before SCHEMA; this is important if user has rows like // ('CATALOG','c1') and ('SCHEMA','c2.sn') -- // the schema setting must supersede the catalog one. // We should also put an ORDER BY into the cursor decl in the .mdf, // to handle user-created DEFAULTS tables w/o a PK. if (sqlcode >= 0) if (isSQLTable) { sqlcode = SQL_EXEC_Fetch(&__SQL_id0,NULL,2,attrName,NULL,value,NULL); if (sqlcode >= 0) readFromSQDefaultsTable_ = TRUE; } else { value[0] = 0; // NULL terminator if (fscanf(flatfile, " %100[A-Za-z0-9_#] ,", attrName) < 0) sqlcode = +100; else fgets((char *) value, sizeof(value), flatfile); } // Ignore warnings except for end-of-data while (sqlcode >= 0 && sqlcode != +100) { NAString v(value); // skip comments, indicated by a # if (attrName[0] != '#') validateAndInsert(attrName, v, FALSE, errOrWarn, overwriteIfNotYet); /* EXEC SQL FETCH S1 INTO :attrName, :value; */ if (isSQLTable) sqlcode = SQL_EXEC_Fetch(&__SQL_id0,NULL,2,attrName,NULL,value,NULL); else { value[0] = 0; // NULL terminator if (fscanf(flatfile, " %100[A-Za-z0-9_#] ,", attrName) < 0) sqlcode = +100; else fgets((char *) value, sizeof(value), flatfile); } } if (sqlcode < 0 && errOrWarn && initializeSQLdone()) { if (ABS(sqlcode) == ABS(CLI_MODULEFILE_OPEN_ERROR) && cmpCurrentContext->isInstalling()) { // Emit no warning when (re)installing, // because obviously the module will not exist before we have // (re)arkcmp'd it! } else { // 2001 Error $0 reading table $1. Using $2 values. CollIndex n = tablesRead_.entries(); const char *errtext = n ? tablesRead_[n-1].data() : "default-default"; *CmpCommon::diags() << DgSqlCode(ERRWARN(2001)) << DgInt0(sqlcode) << DgTableName(tname) << DgString0(errtext); } } if (isSQLTable) { /* EXEC SQL CLOSE S1; */ sqlcode = SQL_EXEC_ClearDiagnostics(&__SQL_id0); sqlcode = SQL_EXEC_CloseStmt(&__SQL_id0); // The above statement should not start any transactions because // it uses read uncommitted access. 
If it ever changes, then we // would need to commit it at this time. } } // tname if (initialErrCnt < CmpCommon::diags()->getNumber() && errOrWarn) *CmpCommon::diags() << DgSqlCode(ERRWARN(2059)) << DgString0(tname ? tname : ""); } // isStandalone } // NADefaults::readFromSQLTable() void NADefaults::readFromSQLTables(Provenance overwriteIfNotYet, Int32 errOrWarn) { NABoolean cat = FALSE; NABoolean sch = FALSE; if (getToken(MODE_SEABASE) == DF_ON && !readFromSQDefaultsTable()) { // Read system defaults from configuration file. // keep this name in sync with file cli/SessionDefaults.cpp NAString confFile(getenv("MY_SQROOT")); confFile += "/etc/SQSystemDefaults.conf"; readFromFlatFile(confFile, overwriteIfNotYet, errOrWarn); tablesRead_.insert(confFile); CmpSeabaseDDL cmpSBD((NAHeap *)heap_, FALSE); Lng32 hbaseErr = 0; NAString hbaseErrStr; Lng32 errNum = cmpSBD.validateVersions(this, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, &hbaseErr, &hbaseErrStr); if (errNum == 0) // seabase is initialized properly { // read from seabase defaults table cmpSBD.readAndInitDefaultsFromSeabaseDefaultsTable (overwriteIfNotYet, errOrWarn, this); // set authorization state NABoolean checkAllPrivTables = FALSE; errNum = cmpSBD.isPrivMgrMetadataInitialized(this,checkAllPrivTables); CmpCommon::context()->setAuthorizationState(errNum); } else { CmpCommon::context()->setIsUninitializedSeabase(TRUE); CmpCommon::context()->uninitializedSeabaseErrNum() = errNum; CmpCommon::context()->hbaseErrNum() = hbaseErr; CmpCommon::context()->hbaseErrStr() = hbaseErrStr; } } currentState_ = SET_BY_CQD; // enter the next state... // Make self fully consistent, by executing deferred actions last of all getSqlParser_NADefaults(); } // NADefaults::readFromSQLTables() // This method is used by SchemaDB::initPerStatement const char * NADefaults::getValueWhileInitializing(Int32 attrEnum) { // We can't rely on our state_ because SQLC might have called CQD::bindNode() // which does a setState(SET_BY_CQD)... if (!tablesRead_.entries()) if (getProvenance(attrEnum) < SET_BY_CQD) readFromSQLTables(SET_BY_CQD); return getValue(attrEnum); } // This method is used by SchemaDB::initPerStatement *and* // by CmpCommon, CmpStatement, and SQLC/SQLCO. void NADefaults::getCatalogAndSchema(NAString &cat, NAString &sch) { cat = getValueWhileInitializing(CATALOG); sch = getValueWhileInitializing(SCHEMA); } // Should be called only privately and by DefaultValidator! Int32 NADefaults::validateFloat(const char *value, float &result, Int32 attrEnum, Int32 errOrWarn) const { Int32 n = -1; // NT's scanf("%n") is not quite correct; hence this code-around sscanf(value, "%g%n", &result, &n); if (n > 0 && value[n] == '\0') { switch (attrEnum) { case HIVE_INSERT_ERROR_MODE: { Lng32 v = str_atoi(value, str_len(value)); if (v >= 0 && v <= 3) return TRUE; } break; default: return TRUE; // a valid float } } NAString v(value); NABoolean silentIf = (errOrWarn == SilentIfSYSTEM); if (silentIf) errOrWarn = 0/*silent*/; NABoolean useSYSTEM = (token(attrEnum, v, TRUE, errOrWarn) == DF_SYSTEM); if (useSYSTEM && silentIf) // ValidateNumeric is caller return SilentIfSYSTEM; // special it-is-valid return! if (errOrWarn) *CmpCommon::diags() << DgSqlCode(ERRWARN(2055)) << DgString0(value) << DgString1(lookupAttrName(attrEnum, errOrWarn)); if (useSYSTEM) { // programmer error CMPASSERT("Numeric attr allows SYSTEM -- you need to call token() first to see if its current value is this keyword, and compute your system default value!" 
== NULL); } // ensure an out-of-range error if domainMatch or ValidateNumeric is called result = -FLT_MAX; return FALSE; // not valid } NABoolean NADefaults::insert(Int32 attrEnum, const NAString &value, Int32 errOrWarn) { // private method; callers have all already done this: ATTR_RANGE_ASSERT; assert(errOrWarn != SilentIfSYSTEM); // yeh private, but just in case // Update cache: // (Re)validate that new value is numeric. // Useful if programmer did not correctly specify the DefaultValidator for // this attr in DefaultDefaults. // if (currentFloats_[attrEnum]) { float result; if (validateFloat(value, result, attrEnum, errOrWarn)) *currentFloats_[attrEnum] = result; else return FALSE; // not a valid float } // Update cache for DefaultToken by deallocating the cached entry. if ( currentTokens_[attrEnum] ) { NADELETEBASIC( currentTokens_[attrEnum], NADHEAP ); currentTokens_[attrEnum] = NULL; } // If we're past the read-from-SQLTable phase, then // the first CQD of a given attr must first save the from-SQLTable value, // to which the user can RESET if desired. // if (currentState_ >= SET_BY_CQD && !resetToDefaults_[attrEnum]) { NAString currValStr(currentDefaults_[attrEnum]); Lng32 currValLen = str_len(currValStr) + 1; char *pCurrVal = new NADHEAP char[currValLen]; str_cpy_all(pCurrVal, currValStr, currValLen); resetToDefaults_[attrEnum] = pCurrVal; } char *newVal = NULL; Lng32 newValLen = str_len(value) + 1; if (provenances_[attrEnum] > INIT_DEFAULT_DEFAULTS) { Lng32 oldValLen = str_len(currentDefaults_[attrEnum]) + 1; if (oldValLen >= newValLen && oldValLen < newValLen + 100) newVal = const_cast<char*>(currentDefaults_[attrEnum]); // reuse, to reduce mem frag else NADELETEBASIC(currentDefaults_[attrEnum], NADHEAP); } if (!newVal) newVal = new NADHEAP char[newValLen]; str_cpy_all(newVal, value, newValLen); currentDefaults_[attrEnum] = newVal; // when the parser flag is on for a set-once CQD // set its provenance as INIT_DEFAULT_DEFAULTS, // so the user can set it once later if ( isSetOnceAttribute(attrEnum) && Get_SqlParser_Flags(INTERNAL_QUERY_FROM_EXEUTIL) ) { provenances_[attrEnum] = INIT_DEFAULT_DEFAULTS; } else { provenances_[attrEnum] = currentState_; } return TRUE; } NADefaults::Provenance NADefaults::getProvenance(Int32 attrEnum) const { ATTR_RANGE_ASSERT; return (Provenance)provenances_[attrEnum]; } NABoolean NADefaults::getValue(Int32 attrEnum, NAString &result) const { ATTR_RANGE_ASSERT; result = currentDefaults_[attrEnum]; return TRUE; // we always have a STRING REPRESENTATION value } NAString NADefaults::getString(Int32 attrEnum) const { ATTR_RANGE_ASSERT; return currentDefaults_[attrEnum]; } const char * NADefaults::getValue(Int32 attrEnum) const { ATTR_RANGE_ASSERT; return currentDefaults_[attrEnum]; } NABoolean NADefaults::getFloat(Int32 attrEnum, float &result) const { ATTR_RANGE_ASSERT; if (currentFloats_[attrEnum]) { result = *currentFloats_[attrEnum]; } else if (validateFloat(currentDefaults_[attrEnum], result, attrEnum)) { currentFloats_[attrEnum] = new NADHEAP float; // cache the result *currentFloats_[attrEnum] = result; } else { return FALSE; // result is neg, from failed validateFloat() } return TRUE; } double NADefaults::getAsDouble(Int32 attrEnum) const { // No domainMatch() needed: any float or double (or int or uint) is okay; // getFloat()/validateFloat() will disallow any non-numerics. 
float flt; getFloat(attrEnum, flt); return double(flt); } Lng32 NADefaults::getAsLong(Int32 attrEnum) const { float flt; getFloat(attrEnum, flt); if (!domainMatch(attrEnum, VALID_INT, &flt)) { CMPBREAK; } return Lng32(flt); } ULng32 NADefaults::getAsULong(Int32 attrEnum) const { float flt; getFloat(attrEnum, flt); if (!domainMatch(attrEnum, VALID_UINT, &flt)) { CMPBREAK; } return (ULng32)(flt); } ULng32 NADefaults::getNumOfESPsPerNode() const { return (ULng32)MAXOF(ceil(getNumOfESPsPerNodeInFloat()), 1); } float NADefaults::getNumOfESPsPerNodeInFloat() const { double maxEspPerCpuPerOp = getAsDouble(MAX_ESPS_PER_CPU_PER_OP); CollIndex cores = ( (CmpCommon::context() && CURRSTMT_OPTDEFAULTS->isFakeHardware()) ) ? getAsLong(DEF_NUM_SMP_CPUS) : gpClusterInfo->numberOfCpusPerSMP(); return float(maxEspPerCpuPerOp * cores); } ULng32 NADefaults::getTotalNumOfESPsInCluster(NABoolean& fakeEnv) const { fakeEnv = FALSE; if (getToken(PARALLEL_NUM_ESPS, 0) != DF_SYSTEM ) { fakeEnv = TRUE; return getAsLong(PARALLEL_NUM_ESPS); } float espsPerNode = getNumOfESPsPerNodeInFloat(); CollIndex numOfNodes = gpClusterInfo->numOfSMPs(); if ( (CmpCommon::context() && CURRSTMT_OPTDEFAULTS->isFakeHardware())) { fakeEnv = TRUE; numOfNodes = getAsLong(DEF_NUM_NODES_IN_ACTIVE_CLUSTERS); } return MAXOF(ceil(espsPerNode * numOfNodes), 1); } NABoolean NADefaults::domainMatch(Int32 attrEnum, Int32 expectedType/*DefaultValidatorType*/, float *flt) const { if (validator(attrEnum)->getType() == expectedType) return TRUE; // yes, domains match // Emit error messages only if the value is actually out-of-range. // // Users (optimizer code) should REALLY be using 'unsigned long' fields // and calling getAsULong, instead of using 'long' fields to retrieve // unsigned(DDui*) attr values via getAsLong ... // // LCOV_EXCL_START // if we get here the compiler will crash if (flt) { DefaultValidator *validator = NULL; if (expectedType == VALID_INT) validator = (DefaultValidator *)&validateInt; else if (expectedType == VALID_UINT) validator = (DefaultValidator *)&validateUI; // Explicitly check for TRUE here -- // both FALSE/error and SilentIfSYSTEM are out-of-range/out-of-domain // from this method's point of view. if (validator) if (validator->validate( currentDefaults_[attrEnum], this, attrEnum, -1, flt) == TRUE) return TRUE; // domain mismatch, but value *is* in the domain range } // fall thru to emit additional failure info *CmpCommon::diags() << DgSqlCode(+2058) // emit a mismatch WARNING << DgString0(lookupAttrName(attrEnum)) << DgString1(validator(attrEnum)->getTypeText()) << DgString2(DefaultValidator::getTypeText( DefaultValidatorType(expectedType))); #ifndef NDEBUG cerr << "Warning[2058] " << lookupAttrName(attrEnum) << " " << validator(attrEnum)->getTypeText() << " " << DefaultValidator::getTypeText( DefaultValidatorType(expectedType)) << " " << (flt ? *flt : 123.45) << endl; #endif // LCOV_EXCL_STOP return FALSE; } // CONTROL QUERY DEFAULT attr RESET; // resets the single attr to the value it had right after we read all // the DEFAULTS tables, // or the value it had right before a CQD * RESET RESET. // CONTROL QUERY DEFAULT * RESET; // resets all attrs to the values they had by same criteria as above. // CONTROL QUERY DEFAULT * RESET RESET; // resets the "reset-to" values so that all current values become the // effective "reset-to"'s -- i.e, the current values can't be lost // on the next CQD * RESET; // Useful for apps that dynamically send startup settings that ought // to be preserved -- ODBC and SQLCI do this. 
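// For illustration (the chosen attribute and value are arbitrary):
//   CONTROL QUERY DEFAULT SUBQUERY_UNNESTING 'OFF';  -- user override
//   CONTROL QUERY DEFAULT SUBQUERY_UNNESTING RESET;  -- back to the value in
//                                                       effect after the
//                                                       DEFAULTS tables were read
//   CONTROL QUERY DEFAULT * RESET RESET;             -- current values become the
//                                                       new reset-to values
//   CONTROL QUERY DEFAULT * RESET;                   -- no longer loses the
//                                                       values preserved above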
// void NADefaults::resetAll(NAString &value, NABoolean reset, Int32 errOrWarn) { size_t i, numAttrs = numDefaultAttributes(); if (reset == 1) { // CQD * RESET; (not RESET RESET) setResetAll(TRUE); for (i = 0; i < numAttrs; i++) { const char * attributeName = defaultDefaults[i].attrName; DefaultConstants attrEnum = lookupAttrName(attributeName, errOrWarn); if (isNonResetableAttribute(attributeName)) continue; validateAndInsert(attributeName, value, TRUE, errOrWarn); } // if DEFAULT_SCHEMA_NAMETYPE=USER after CQD * RESET // set SCHEMA to LDAP_USERNAME // if SCHEMA has not been specified by user if ( (getToken(DEFAULT_SCHEMA_NAMETYPE) == DF_USER) && schSetByNametype() ) { setSchemaAsLdapUser(); } setResetAll(FALSE); } else if (reset == 2) { for (i = 0; i < numAttrs; i++) { if (resetToDefaults_[i]) { // CONTROL QUERY DEFAULT * RESET RESET; -- this code cloned below // Can't reset prov, because to which? // provenances_[i] = READ_FROM_SQL_TABLE or COMPUTED ?? NADELETEBASIC(resetToDefaults_[i], NADHEAP); resetToDefaults_[i] = NULL; } } } else { CMPASSERT(!reset); } } // Reset to default-defaults, as if readFromSQLTables() had not executed, // but setting state and provenance so no future reads will be triggered. // See StaticCompiler and Genesis 10-990204-2469 above for motivation. void NADefaults::undoReadsAndResetToDefaultDefaults() { initCurrentDefaultsWithDefaultDefaults(); } NABoolean NADefaults::isReadonlyAttribute(const char* attrName) const { if ((( stricmp(attrName, "ISO_MAPPING") == 0 ) || ( stricmp(attrName, "OVERFLOW_MODE") == 0 ) || ( stricmp(attrName, "SORT_ALGO") == 0 )) && ( CmpCommon::getDefault(DISABLE_READ_ONLY) == DF_ON )) return FALSE; // for internal development and testing purposes if (( stricmp(attrName, "ISO_MAPPING") == 0 )|| ( stricmp(attrName, "MODE_SPECIAL_1") == 0 ) || ( stricmp(attrName, "MODE_SPECIAL_2") == 0 ) || ( stricmp(attrName, "NATIONAL_CHARSET") == 0 ) || ( stricmp(attrName, "VALIDATE_VIEWS_AT_OPEN_TIME") == 0 ) || ( stricmp(attrName, "USER_EXPERIENCE_LEVEL") == 0 ) || ( stricmp(attrName, "POS_DISKS_IN_SEGMENT") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_HASHJOIN") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_MERGEJOIN") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_HASHGROUPBY") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_SORT") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_PROBE_CACHE") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_PA") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_SEQUENCE") == 0 ) || ( stricmp(attrName, "EXE_MEMORY_LIMIT_LOWER_BOUND_EXCHANGE") == 0 ) || ( stricmp(attrName, "SORT_ALGO") == 0 ) || ( stricmp(attrName, "OVERFLOW_MODE") == 0 ) ) return TRUE; if (strlen(attrName) > 0) { DefaultConstants v = lookupAttrName(attrName, 0, 0); if ((v != __INVALID_DEFAULT_ATTRIBUTE) && (getFlags(v) & DEFAULT_IS_SSD)) return TRUE; } return FALSE; } // these defaults cannot be reset or set to FALSE through a cqd. NABoolean NADefaults::isNonResetableAttribute(const char* attrName) const { if (( stricmp(attrName, "IS_SQLCI") == 0 ) || ( stricmp(attrName, "NVCI_PROCESS") == 0 ) || ( stricmp(attrName, "SESSION_ID") == 0 ) || ( stricmp(attrName, "LDAP_USERNAME") == 0 ) || ( stricmp(attrName, "VOLATILE_SCHEMA_IN_USE") == 0 ) || ( stricmp(attrName, "SESSION_USERNAME") == 0 ) ) return TRUE; return FALSE; } // these defaults can be set only once by user. 
NABoolean NADefaults::isSetOnceAttribute(Int32 attrEnum) const { if ( attrEnum == DEFAULT_SCHEMA_ACCESS_ONLY || attrEnum == PUBLISHING_ROLES ) return TRUE; return FALSE; } void NADefaults::resetSessionOnlyDefaults() { NAString value; validateAndInsert("NVCI_PROCESS", value, 3, 0); } // Parameter <reset> must not be a reference (&); // see <value = ... fall thru> below. enum DefaultConstants NADefaults::validateAndInsert(const char *attrName, NAString &value, NABoolean reset, Int32 errOrWarn, Provenance overwriteIfNotYet) { NABoolean overwrite = FALSE; NABoolean isJDBC = FALSE; NABoolean isODBC = FALSE; if (ActiveSchemaDB()) { isJDBC = (CmpCommon::getDefault(JDBC_PROCESS) == DF_ON ? TRUE : FALSE); isODBC = (CmpCommon::getDefault(ODBC_PROCESS) == DF_ON ? TRUE : FALSE); } if (reset && !attrName[0]) { // CONTROL QUERY DEFAULT * RESET overwrite = currentState_ < overwriteIfNotYet; if (overwrite) resetAll(value, reset, errOrWarn); return (DefaultConstants)0; // success } // Perform a lookup for the string, using a binary search. DefaultConstants attrEnum = lookupAttrName(attrName, errOrWarn); if (attrEnum >= 0) { // valid attrName // ignore DEFAULT_SCHEMA_ACCESS_ONLY if it is in system defaults if ( attrEnum == DEFAULT_SCHEMA_ACCESS_ONLY && getState() < SET_BY_CQD ) return attrEnum; // do the following check when // this is the primary mxcmp // and INTERNAL_QUERY_FROM_EXEUTIL is not set if (!CmpCommon::context()->isSecondaryMxcmp() && !Get_SqlParser_Flags(INTERNAL_QUERY_FROM_EXEUTIL)) { // This logic will catch if the set-once CQD // is set, but the ALLOW_SET_ONCE_DEFAULTS parserflags // are not set. This is absolutely necessary for security // to ensure that the correct parserflags are set. if ((isSetOnceAttribute(attrEnum)) && (!isResetAll()) && // no error msg for cqd * reset (NOT Get_SqlParser_Flags(ALLOW_SET_ONCE_DEFAULTS))) { *CmpCommon::diags() << DgSqlCode(-30042) << DgString0(attrName); return attrEnum; } // if DEFAULT_SCHEMA_ACCESS_ONLY is on, // users cannot change the following CQDs if ( getState() >= SET_BY_CQD && getToken(DEFAULT_SCHEMA_ACCESS_ONLY) == DF_ON ) { if (attrEnum == SCHEMA || attrEnum == PUBLIC_SCHEMA_NAME || attrEnum == DEFAULT_SCHEMA_NAMETYPE || attrEnum == PUBLISHING_ROLES) { if (!isResetAll()) // no error msg for cqd * reset *CmpCommon::diags() << DgSqlCode(-30043) << DgString0(attrName); return attrEnum; } } } else { // ignore LAST0_MODE cqd if we are in secondary mxcmp or if // internal_query_from_exeutil is set. This cqd is not meant // to apply in these cases if ( attrEnum == LAST0_MODE ) return attrEnum; } overwrite = getProvenance(attrEnum) < overwriteIfNotYet; // Put value into canonical form (trimmed, upcased where pertinent). // // Possibly revert to initial default default value -- see NOTE 3 up above. // Note further that ANSI names cannot revert on values of // 'SYSTEM' or 'ENABLE', as those are legal cat/sch/tbl names, // nor can they revert on '' (empty/blank), as ANSI requires us to // emit a syntax error for this. // // Possibly RESET to read-from-table value (before any CQD value). // TrimNAStringSpace(value); if (validator(attrEnum) != &validateAnsiName && !reset) { validator(attrEnum)->applyUpper(value); if (isSynonymOfSYSTEM(attrEnum, value)) value = getDefaultDefaultValue(attrEnum); else if (isSynonymOfRESET(value)) // CQD attr 'RESET'; ... 
reset = 1; } if (reset) { // CQD attr RESET; if ((isNonResetableAttribute(attrName)) && (reset != 3)) return attrEnum; if (!resetToDefaults_[attrEnum]) { if (overwrite) value = currentDefaults_[attrEnum]; // return actual val to caller if (attrEnum == ISOLATION_LEVEL) { // reset this in the global area TransMode::IsolationLevel il; getIsolationLevel(il); CmpCommon::transMode()->updateAccessModeFromIsolationLevel(il); } // Solution: 10-060418-5903. Do not update MXCMP global access mode // with CQD ISOLATION_LEVEL_FOR_UPDATES as it will overwrite that // set by ISOLATION_LEVE. The CQD ISOLATION_LEVEL_FOR_UPDATES is // always accessed directly when necessary. //else if (attrEnum == ISOLATION_LEVEL_FOR_UPDATES) // { // // reset this in the global area // TransMode::IsolationLevel il; // getIsolationLevel(il, getToken(attrEnum)); // CmpCommon::transMode()->updateAccessModeFromIsolationLevel(il, // FALSE); // } return attrEnum; } value = resetToDefaults_[attrEnum]; // fall thru, REINSERT this val } if (attrEnum == CATALOG) { if (!setCatalog(value, errOrWarn, overwrite)) attrEnum = __INVALID_DEFAULT_ATTRIBUTE; else { if (getState() == READ_FROM_SQL_TABLE) { // set the volatile catalog to be same as the catalog read from // defaults table. If there is no catalog or volatile_catalog // specified in the defaults table, then volatile catalog name // will be the default catalog in use in the session where // volatile tables are created. CmpCommon::context()->sqlSession()->setVolatileCatalogName(value); } } } else if (attrEnum == SCHEMA) { if (!setSchema(value, errOrWarn, overwrite)) attrEnum = __INVALID_DEFAULT_ATTRIBUTE; else { if (getState() == READ_FROM_SQL_TABLE) { // set the volatile catalog to be same as the catalog read from // defaults table. If there is no catalog or volatile_catalog // specified in the defaults table, then volatile catalog name // will be the default catalog in use in the session where // volatile tables are created. 
NAString cat(getValue(CATALOG)); CmpCommon::context()->sqlSession()->setVolatileCatalogName(cat); } } } else if (attrEnum == MP_SUBVOLUME && value.first('.') != NA_NPOS) { if (!setMPLoc(value, errOrWarn, overwriteIfNotYet)) attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } else { if ( attrEnum == MAX_LONG_VARCHAR_DEFAULT_SIZE || attrEnum == MAX_LONG_WVARCHAR_DEFAULT_SIZE ) { ULng32 minLength; switch (attrEnum) { case MAX_LONG_VARCHAR_DEFAULT_SIZE: minLength = (Lng32)getAsULong(MIN_LONG_VARCHAR_DEFAULT_SIZE); break; case MAX_LONG_WVARCHAR_DEFAULT_SIZE: minLength = (Lng32)getAsULong(MIN_LONG_WVARCHAR_DEFAULT_SIZE); break; default: attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } if ( attrEnum != __INVALID_DEFAULT_ATTRIBUTE ) { UInt32 newMaxLength; Int32 n = -1; sscanf(value.data(), "%u%n", &newMaxLength, &n); if ( n>0 && (UInt32)n == value.length() ) { // a valid unsigned number if ( newMaxLength < minLength ) { *CmpCommon::diags() << DgSqlCode(-2030) << DgInt0((Lng32)minLength); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } } } } if ( attrEnum == MIN_LONG_VARCHAR_DEFAULT_SIZE || attrEnum == MIN_LONG_WVARCHAR_DEFAULT_SIZE ) { ULng32 maxLength; switch (attrEnum) { case MIN_LONG_VARCHAR_DEFAULT_SIZE: maxLength = getAsULong(MAX_LONG_VARCHAR_DEFAULT_SIZE); break; case MIN_LONG_WVARCHAR_DEFAULT_SIZE: maxLength = getAsULong(MAX_LONG_WVARCHAR_DEFAULT_SIZE); break; default: attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } if ( attrEnum != __INVALID_DEFAULT_ATTRIBUTE ) { UInt32 newMinLength; Int32 n = -1; sscanf(value.data(), "%u%n", &newMinLength, &n); if ( n>0 && (UInt32)n == value.length() ) { // a valid unsigned number if ( newMinLength > maxLength ) { *CmpCommon::diags() << DgSqlCode(-2029) << DgInt0((Lng32)maxLength); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } } } } if (errOrWarn && (attrEnum == ROUNDING_MODE)) { if (NOT ((value.length() == 1) && ((*value.data() == '0') || (*value.data() == '1') || (*value.data() == '2')))) { *CmpCommon::diags() << DgSqlCode(-2055) << DgString0(value) << DgString1(lookupAttrName(attrEnum)); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } } if ( attrEnum == SCRATCH_MAX_OPENS_HASH || attrEnum == SCRATCH_MAX_OPENS_SORT ) { if (NOT ((value.length() == 1) && ((*value.data() == '1') || (*value.data() == '2') || (*value.data() == '3') || (*value.data() == '4')))) { *CmpCommon::diags() << DgSqlCode(-2055) << DgString0(value) << DgString1(lookupAttrName(attrEnum)); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } } if (attrEnum != __INVALID_DEFAULT_ATTRIBUTE) { // We know that the MP_COLLATIONS validator emits only warnings // and always returns TRUE. On the validate-but-do-not-insert step // (CQD compilation), those warnings will be seen by the user. // On the validate-AND-insert (CQD execution), there is no need // to repeat them (besides, that causes Executor to choke on the // warnings in the diags and say 'Error fetching from TCB tree'). Int32 isValid = TRUE; if (!overwrite || currentState_ < SET_BY_CQD || validator(attrEnum) != &validateCollList) isValid = validator(attrEnum)->validate(value, this, attrEnum, errOrWarn); // if an internal reset is being done, then make it a valid attr // even if the 'validate' method above returned invalid. 
if ((!isValid) && (isNonResetableAttribute(attrName)) && (reset == 3)) { isValid = TRUE; } if (!isValid) attrEnum = __INVALID_DEFAULT_ATTRIBUTE; else if (overwrite) { if (isValid == SilentIfSYSTEM) { // defDef value was "SYSTEM" or "" // Undo any caching from getFloat() NADELETEBASIC(currentFloats_[attrEnum], NADHEAP); currentFloats_[attrEnum] = NULL; // Undo any caching from getToken() NADELETEBASIC( currentTokens_[attrEnum], NADHEAP ); currentTokens_[attrEnum] = NULL; // Now fall thru to insert the string "SYSTEM" or "" } if (attrEnum == MP_CATALOG) { // This will apply default \sys to value if only $v.sv was specified. ComMPLoc loc(value, ComMPLoc::SUBVOL); value = loc.getMPName(); } if (!insert(attrEnum, value, errOrWarn)) attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } // overwrite (i.e. insert) } } // not special val/ins for CAT, SCH, or MPLOC } // valid attrName if (attrEnum >= 0) { if (overwrite) { if ((! reset) && (currentState_ == SET_BY_CQD)) { // indicate that this attribute was set by a user CQD. setUserDefault(attrEnum, TRUE); } switch (attrEnum) { case MP_SYSTEM: case MP_VOLUME: case MP_SUBVOLUME: // // Signal to reconstruct MPLOC and MPLOC_as_SchemaName // on the next query, i.e. next call to getSqlParser_NADefaults(). SqlParser_NADefaults_->MPLOC_.setUnknown(); case CATALOG: case SCHEMA: break; case ISOLATION_LEVEL: { // Ansi 14.1 SR 4. See comexe/ExControlArea::addControl(). //## I now think this implementation is wrong //## because this is setting GLOBAL state //## for something that should be CONTEXT-dependent. //## Will cause us headaches later, when we //## make arkcmp be a multi-context multi-threaded server. TransMode::IsolationLevel il; getIsolationLevel(il); CmpCommon::transMode()->updateAccessModeFromIsolationLevel(il); } break; // Solution: 10-060418-5903. Do not update MXCMP global access mode // with CQD ISOLATION_LEVEL_FOR_UPDATES as it will overwrite that // set by ISOLATION_LEVEL. The CQD ISOLATION_LEVEL_FOR_UPDATES is // always accessed directly when necessary. //case ISOLATION_LEVEL_FOR_UPDATES: //{ // TransMode::IsolationLevel il; // getIsolationLevel(il, getToken(attrEnum)); // CmpCommon::transMode()->updateAccessModeFromIsolationLevel(il, // FALSE); //} //break; case MODE_SPECIAL_1: { if (getToken(MODE_SPECIAL_2) == DF_ON) { // MS1 was already set by now. Reset it and return an error. insert(MODE_SPECIAL_1, "OFF", errOrWarn); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; } // find_suitable_key to be turned off in this mode, unless // it has been explicitely set. if (getToken(VOLATILE_TABLE_FIND_SUITABLE_KEY) == DF_SYSTEM) { insert(VOLATILE_TABLE_FIND_SUITABLE_KEY, "OFF", errOrWarn); } } break; case MODE_SPECIAL_2: { NAString val; if (getToken(MODE_SPECIAL_1) == DF_ON) { // MS2 was already set by now. Reset it and return an error. 
insert(MODE_SPECIAL_2, "OFF", errOrWarn); attrEnum = __INVALID_DEFAULT_ATTRIBUTE; break; } if (value == "ON") val = "ON"; else val = resetToDefaults_[LIMIT_MAX_NUMERIC_PRECISION]; if (getToken(LIMIT_MAX_NUMERIC_PRECISION) == DF_SYSTEM) { insert(LIMIT_MAX_NUMERIC_PRECISION, val, errOrWarn); } if (value == "ON") val = "2"; else val = resetToDefaults_[ROUNDING_MODE]; insert(ROUNDING_MODE, val, errOrWarn); } break; case MODE_SPECIAL_4: { NAString val; if (value == "ON") val = "ON"; else val = "OFF"; insert(ALLOW_INCOMPATIBLE_COMPARISON, val, errOrWarn); insert(ALLOW_INCOMPATIBLE_ASSIGNMENT, val, errOrWarn); insert(ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT, val, errOrWarn); insert(MODE_SPECIAL_3, val, errOrWarn); NAString csVal; if (value == "ON") csVal = SQLCHARSETSTRING_UTF8; else csVal = ""; validateAndInsert("TRAF_DEFAULT_COL_CHARSET", csVal, FALSE, errOrWarn); NAString notVal; if (value == "ON") notVal = "OFF"; else notVal = "ON"; insert(TRAF_COL_LENGTH_IS_CHAR, notVal, errOrWarn); NAString costVal1; NAString costVal2; if (value == "ON") { costVal1 = "8.0"; costVal2 = "16.0" ; } else { costVal1 = "1.0"; costVal2 = "1.0" ; } validateAndInsert("NCM_IND_JOIN_COST_ADJ_FACTOR", costVal1, FALSE, errOrWarn); validateAndInsert("NCM_IND_SCAN_COST_ADJ_FACTOR", costVal2, FALSE, errOrWarn); if (value == "ON") Set_SqlParser_Flags(IN_MODE_SPECIAL_4); else Reset_SqlParser_Flags(IN_MODE_SPECIAL_4); } break; case MODE_SPECIAL_5: { NAString val; if (value == "ON") val = "ON"; else val = "OFF"; insert(ALLOW_INCOMPATIBLE_COMPARISON, val, errOrWarn); insert(ALLOW_INCOMPATIBLE_ASSIGNMENT, val, errOrWarn); insert(ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT, val, errOrWarn); insert(TRAF_ALLOW_SELF_REF_CONSTR, val, errOrWarn); } break; case MODE_SEABASE: { if (value == "ON") { if (NOT seabaseDefaultsTableRead()) { CmpSeabaseDDL cmpSBD((NAHeap *)heap_); Lng32 errNum = cmpSBD.validateVersions(this); if (errNum == 0) // seabase is initialized properly { // read from seabase defaults table cmpSBD.readAndInitDefaultsFromSeabaseDefaultsTable (overwriteIfNotYet, errOrWarn, this); } else { CmpCommon::context()->setIsUninitializedSeabase(TRUE); CmpCommon::context()->uninitializedSeabaseErrNum() = errNum; } } NAString sbCat = getValue(SEABASE_CATALOG); insert(SEABASE_VOLATILE_TABLES, "ON", errOrWarn); CmpCommon::context()->sqlSession()->setVolatileCatalogName(sbCat, TRUE); insert(UPD_SAVEPOINT_ON_ERROR, "OFF", errOrWarn); } else { NAString defCat = getValue(CATALOG); insert(SEABASE_VOLATILE_TABLES, "OFF", errOrWarn); CmpCommon::context()->sqlSession()->setVolatileCatalogName(defCat); insert(UPD_SAVEPOINT_ON_ERROR, "ON", errOrWarn); } } break; case MEMORY_LIMIT_QCACHE_UPPER_KB: CURRENTQCACHE->setHeapUpperLimit((size_t) 1024 * atoi(value.data())); break; case MEMORY_LIMIT_HISTCACHE_UPPER_KB: CURRCONTEXT_HISTCACHE->setHeapUpperLimit((size_t) 1024 * atoi(value.data())); break; case MEMORY_LIMIT_CMPSTMT_UPPER_KB: STMTHEAP->setUpperLimit((size_t) 1024 * atoi(value.data())); break; case MEMORY_LIMIT_CMPCTXT_UPPER_KB: CTXTHEAP->setUpperLimit((size_t) 1024 * atoi(value.data())); break; case MEMORY_LIMIT_NATABLECACHE_UPPER_KB: ActiveSchemaDB()->getNATableDB()->setHeapUpperLimit((size_t) 1024 * atoi(value.data())); break; case NAMETYPE: SqlParser_NADefaults_->NAMETYPE_ = token(NAMETYPE, value, TRUE); break; case NATIONAL_CHARSET: SqlParser_NADefaults_->NATIONAL_CHARSET_ = CharInfo::getCharSetEnum(value); break; case SESSION_ID: { CmpCommon::context()->sqlSession()->setSessionId(value); } break; case SESSION_USERNAME: { 
CmpCommon::context()->sqlSession()->setSessionUsername(value); } break; case SESSION_IN_USE: { CmpCommon::context()->sqlSession()->setSessionInUse ((getToken(attrEnum) == DF_ON)); } break; case SQLMX_REGRESS: { if (value == "ON") { insert(SIMILARITY_CHECK, "OFF", errOrWarn); insert(COMP_BOOL_157, "ON", errOrWarn); insert(SHOWDDL_DISPLAY_FORMAT, "INTERNAL", errOrWarn); insert(MODE_SPECIAL_1, "OFF", errOrWarn); if (getToken(VOLATILE_TABLE_FIND_SUITABLE_KEY) == DF_SYSTEM) { insert(VOLATILE_TABLE_FIND_SUITABLE_KEY, "OFF", errOrWarn); } char * env = getenv("SQLMX_REGRESS"); if (env) CmpCommon::context()->setSqlmxRegress(atoi(env)); else CmpCommon::context()->setSqlmxRegress(1); } else { insert(SIMILARITY_CHECK, "ON", errOrWarn); insert(COMP_BOOL_157, "OFF", errOrWarn); insert(SHOWDDL_DISPLAY_FORMAT, "EXTERNAL", errOrWarn); CmpCommon::context()->setSqlmxRegress(0); } } break; case VOLATILE_CATALOG: { CmpCommon::context()->sqlSession()->setVolatileCatalogName(value); } break; case VOLATILE_SCHEMA_IN_USE: { CmpCommon::context()->sqlSession()->setVolatileSchemaInUse ((getToken(attrEnum) == DF_ON)); } break; case ISO_MAPPING: { SqlParser_NADefaults_->ISO_MAPPING_ = CharInfo::getCharSetEnum(value); // Set the NAString_isoMappingCS memory cache for use by routines // ToInternalIdentifier() and ToAnsiIdentifier[2|3]() in module // w:/common/NAString[2].cpp. These routines currently cannot // access SqlParser_ISO_MAPPING directly due to the complex // build hierarchy. NAString_setIsoMapCS((SQLCHARSET_CODE) SqlParser_NADefaults_->ISO_MAPPING_); } break; case DEFAULT_CHARSET: { SqlParser_NADefaults_->DEFAULT_CHARSET_ = CharInfo::getCharSetEnum(value); SqlParser_NADefaults_->ORIG_DEFAULT_CHARSET_ = CharInfo::getCharSetEnum(value); } break; case ESP_ON_AGGREGATION_NODES_ONLY: { NABoolean useAgg = (getToken(attrEnum) == DF_ON); gpClusterInfo->setUseAggregationNodesOnly(useAgg); break; } case QUERY_TEXT_CACHE: { // If public schema is in use, query text cache has to be off NAString pSchema = getValue(PUBLIC_SCHEMA_NAME); if (pSchema != "") value = "OFF"; } break; case PUBLIC_SCHEMA_NAME: { // when PUBLIC_SCHEMA is used, turn off Query Text Cache if ( (value != "") && !(getToken(QUERY_TEXT_CACHE) == DF_OFF) ) insert(QUERY_TEXT_CACHE, "OFF"); // when PUBLIC_SCHEMA is not used, reset to the default value if ( value == "" ) { NAString v(""); validateAndInsert("QUERY_TEXT_CACHE", v, TRUE); } } break; case LDAP_USERNAME: { // when the LDAP_USERNAME is set (first time by CLI) // if DEFAULT_SCHEMA_NAMETYPE is USER, set schema to LDAP_USERNAME if ( !value.isNull() && (getToken(DEFAULT_SCHEMA_NAMETYPE) == DF_USER) && !userDefault(SCHEMA) && // do not change user setting ( schSetToUserID() || // only when schema was initialized to guardian id schSetByNametype() ) ) // or changed by same CQD { setSchemaAsLdapUser(value); setSchByNametype(TRUE); } } break; case DEFAULT_SCHEMA_ACCESS_ONLY: { if ( value == "ON" ) { NAString schemaNameType = getValue(DEFAULT_SCHEMA_NAMETYPE); if ( schemaNameType == "USER" ) { setSchemaAsLdapUser(); } } } break; case DEFAULT_SCHEMA_NAMETYPE: { if ( userDefault(SCHEMA) ) // if SCHEMA has been changed by user, do nothing break; if ( value == "SYSTEM" ) // reset to default schema { if ( schSetByNametype() ) // only when schema was changed by this CQD { // do not change catSchSetToUserID_ flag Int32 preVal = catSchSetToUserID_; NAString v(""); validateAndInsert("SCHEMA", v, TRUE); catSchSetToUserID_ = preVal; } } if ( value == "USER" ) // set default schema to ldpa username { if ( 
schSetToUserID() || // only when schema was initialized to guardian id schSetByNametype() ) // or was changed by this CQD { setSchemaAsLdapUser(); setSchByNametype(TRUE); } } } break; case USTAT_IUS_PERSISTENT_CBF_PATH: { // if the CBF path is SYSTEM, set it to $HOME/cbfs if ( value == "SYSTEM" ) { const char* home = getenv("HOME"); if ( home ) { value = home; value += "/cbfs"; validateAndInsert("USTAT_IUS_PERSISTENT_CBF_PATH", value, FALSE); } } } break; case TRAF_LOAD_ERROR_LOGGING_LOCATION: { if (value.length() > 512) { *CmpCommon::diags() << DgSqlCode(-2055) << DgString0(value) << DgString1(lookupAttrName(attrEnum)); } } break; case AGGRESSIVE_ESP_ALLOCATION_PER_CORE: { NABoolean useAgg = (getToken(attrEnum) == DF_ON); float numESPsPerCore = computeNumESPsPerCore(useAgg); char valuestr[WIDEST_CPUARCH_VALUE]; ftoa_(numESPsPerCore, valuestr); NAString val(valuestr); insert(MAX_ESPS_PER_CPU_PER_OP, val, errOrWarn); } break; default: break; } } // code to valid overwrite (insert) if (reset && overwrite) { // CONTROL QUERY DEFAULT attr RESET; -- this code cloned above // Can't reset prov, because to which? // provenances_[attrEnum] = READ_FROM_SQL_TABLE or COMPUTED ?? NADELETEBASIC(resetToDefaults_[attrEnum], NADHEAP); resetToDefaults_[attrEnum] = NULL; } else if (!overwrite && errOrWarn && getProvenance(attrEnum) >= IMMUTABLE) { *CmpCommon::diags() << DgSqlCode(ERRWARN(2200)) << DgString0(lookupAttrName(attrEnum, errOrWarn)); } } // valid attrName return attrEnum; } // NADefaults::validateAndInsert() float NADefaults::computeNumESPsPerCore(NABoolean aggressive) { #define DEFAULT_ESPS_PER_NODE 2 // for conservation allocation #define DEFAULT_ESPS_PER_CORE 0.5 // for aggressive allocation // Make sure the gpClusterInfo points at an NAClusterLinux object. // In osim simulation mode, the pointer can point at a NAClusterNSK // object, for which the method numTSEsForPOS() is not defined. NAClusterInfoLinux* gpLinux = dynamic_cast<NAClusterInfoLinux*>(gpClusterInfo); assert(gpLinux); // cores per node Lng32 coresPerNode = gpClusterInfo->numberOfCpusPerSMP(); if ( aggressive ) { float totalMemory = gpLinux->totalMemoryAvailable(); // per Node, in KB totalMemory /= (1024*1024); // per Node, in GB totalMemory /= coresPerNode ; // per core, in GB totalMemory /= 2; // per core, 2GB per ESP return MINOF(DEFAULT_ESPS_PER_CORE, totalMemory); } else { Lng32 numESPsPerNode = DEFAULT_ESPS_PER_NODE; return (float)(numESPsPerNode)/(float)(coresPerNode); } // The following lines of code are comment out but retained for possible // future references. // // // number of POS TSE // Lng32 numTSEsPerCluster = gpLinux->numTSEsForPOS(); // // // cluster nodes // Lng32 nodesdPerCluster = gpClusterInfo->getTotalNumberOfCPUs(); // // // TSEs per node // Lng32 TSEsPerNode = numTSEsPerCluster/nodesdPerCluster; // // // // // For Linux/nt, we conservatively allocate ESPs per node as follows // // - 1 ESP per 2 cpu cores if cores are equal or less than TSEs // // - 1 ESP per TSE if number of cores is more than double the TSEs // // - 1 ESP per 2 TSEs if cores are more than TSEs but less than double the TSEs // // - 1 ESP per node. Only possible on NT or workstations // // - number of cores less than TSEs and there are 1 or 2 cpur cores per node // // - number of TSEs is less than cpu cores and there 1 or 2 TSEs per node. 
// // This case is probable if virtual nodes are used // // // TSEsPerNode is 0 for arkcmps started by the seapilot universal comsumers // // in this case we only consider cpu cores // if ( coresPerNode <= TSEsPerNode || TSEsPerNode == 0 ) // { // if (coresPerNode > 1) // numESPsPerNode = DEFAULT_ESPS_PER_NODE; // } // else if (coresPerNode > (TSEsPerNode*2)) // { // numESPsPerNode = TSEsPerNode; // } // else if (TSEsPerNode > 1) // { // numESPsPerNode = TSEsPerNode/2; // } // else // not really needed since numESPsPerNode is set to 1 from above // { // numESPsPerNode = DEFAULT_ESPS_PER_NODE; // } // // return (float)(numESPsPerNode)/(float)(coresPerNode); } enum DefaultConstants NADefaults::holdOrRestore (const char *attrName, Lng32 holdOrRestoreCQD) { DefaultConstants attrEnum = __INVALID_DEFAULT_ATTRIBUTE; if (holdOrRestoreCQD == 0) { *CmpCommon::diags() << DgSqlCode(-2050) << DgString0(attrName); return attrEnum; } // Perform a lookup for the string, using a binary search. attrEnum = lookupAttrName(attrName, -1); if (attrEnum < 0) { *CmpCommon::diags() << DgSqlCode(-2050) << DgString0(attrName); return attrEnum; } char * value = NULL; if (holdOrRestoreCQD == 1) // hold cqd { if (currentDefaults_[attrEnum]) { value = new NADHEAP char[strlen(currentDefaults_[attrEnum]) + 1]; strcpy(value, currentDefaults_[attrEnum]); } else { value = new NADHEAP char[strlen(defaultDefaults[defDefIx_[attrEnum]].value) + 1]; strcpy(value, defaultDefaults[defDefIx_[attrEnum]].value); } if (! heldDefaults_[attrEnum]) heldDefaults_[attrEnum] = new NADHEAP HeldDefaults(); heldDefaults_[attrEnum]->pushDefault(value); } else { // restore cqd from heldDefaults_ array, if it was held. if (! heldDefaults_[attrEnum]) return attrEnum; value = heldDefaults_[attrEnum]->popDefault(); if (! value) return attrEnum; // there is an odd semantic that if currentDefaults_[attrEnum] // is null, we leave it as null, but pop a held value anyway; // this semantic was preserved when heldDefaults_ was converted // to a stack. if (currentDefaults_[attrEnum]) { // do a validateAndInsert so the caches (such as currentToken_) // get updated and so appropriate semantic actions are taken. // Note that validateAndInsert will take care of deleting the // storage currently held by currentDefaults_[attrEnum]. NAString valueS(value); validateAndInsert(lookupAttrName(attrEnum), // sad that we have to do a lookup again valueS, FALSE); } NADELETEBASIC(value, NADHEAP); } return attrEnum; } const SqlParser_NADefaults *NADefaults::getSqlParser_NADefaults() { // "Precompile" the MPLOC into a handier format for name resolution. // The pure ComMPLoc is used in a few places, and the SchemaName form // is used when NAMETYPE is NSK. // if (SqlParser_NADefaults_->MPLOC_.getFormat() == ComMPLoc::UNKNOWN) { NAString sys, vol, subvol; getValue(MP_SYSTEM, sys); getValue(MP_VOLUME, vol); getValue(MP_SUBVOLUME, subvol); if (!sys.isNull()) sys += "."; sys += vol + "." + subvol; SqlParser_NADefaults_->MPLOC_.parse(sys, ComMPLoc::SUBVOL); // For NAMETYPE NSK, catalog name is e.g. "\AZTEC.$FOO" SqlParser_NADefaults_->MPLOC_as_SchemaName_.setCatalogName( SqlParser_NADefaults_->MPLOC_.getSysDotVol()); // For NAMETYPE NSK, schema name is e.g. " SqlParser_NADefaults_->MPLOC_as_SchemaName_.setSchemaName( SqlParser_NADefaults_->MPLOC_.getSubvolName()); // We've already validated the heck out of this // in validateAndInsert() and setMPLoc()! 
#if defined(NA_NSK) || defined(_DEBUG) CMPASSERT(SqlParser_NADefaults_->MPLOC_.isValid(ComMPLoc::SUBVOL)); #endif // defined(NA_NSK) || defined(_DEBUG) } return SqlParser_NADefaults_; } static void setCatSchErr(NAString &value, Lng32 sqlCode, Int32 errOrWarn, NABoolean catErr = FALSE) { if (!sqlCode || !errOrWarn) return; TrimNAStringSpace(value); // prettify further (neater errmsg) *CmpCommon::diags() << DgSqlCode(ERRWARN(sqlCode)) << DgCatalogName(value) << DgSchemaName(value) << DgString0(value) << DgString1(value); if (value.first('"') == NA_NPOS) { // delimited names too complicated ! NAString namepart = value; size_t dot = value.first('.'); if (dot != NA_NPOS) { namepart.remove(dot); if (!IsSqlReservedWord(namepart)) { namepart = value; namepart.remove(0, dot+1); } } if (IsSqlReservedWord(namepart)) { *CmpCommon::diags() << DgSqlCode(ERRWARN(3128)) << DgString0(namepart) << DgString1(namepart); return; } } // must determine if the defaults have been set up before parseDML is called if (IdentifyMyself::GetMyName() == I_AM_UNKNOWN){ return; // diagnostic already put into diags above. } // Produce additional (more informative) syntax error messages, // trying delimited-value first and then possibly regular-value-itself. Parser parser(CmpCommon::context()); Lng32 errs = CmpCommon::diags()->getNumber(DgSqlCode::ERROR_); NAString pfx(catErr ? "SET CATALOG " : "SET SCHEMA "); NAString stmt; char c = *value.data(); if (c && c != '\"') { stmt = pfx; stmt += "\""; stmt += value; stmt += "\""; stmt += ";"; #pragma nowarn(1506) // warning elimination parser.parseDML(stmt, stmt.length(), OBJECTNAMECHARSET ); #pragma warn(1506) // warning elimination } if (errs == CmpCommon::diags()->getNumber(DgSqlCode::ERROR_)) { stmt = pfx; stmt += value; stmt += ";"; #pragma nowarn(1506) // warning elimination parser.parseDML(stmt, stmt.length(), OBJECTNAMECHARSET ); #pragma warn(1506) // warning elimination } // Change errors to warnings if errOrWarn is +1 (i.e. warning). if (errOrWarn > 0) NegateAllErrors(CmpCommon::diags()); } NABoolean NADefaults::setCatalog(NAString &value, Int32 errOrWarn, NABoolean overwrite, NABoolean alreadyCanonical) { setCatUserID(currentState_ == COMPUTED); // The input value is in external (Ansi) format. // If we are in the COMPUTED currentState_, // make the value strictly canonical, // and try non-delimited first, then delimited. // Prettify removes lead/trailing blanks, // and upcases where unquoted (for nicer errmsgs); // ComSchemaName parses/validates. // if (alreadyCanonical) ; // leave it alone, for performance's sake else if (currentState_ == COMPUTED) { // ' SQL.FOO' TrimNAStringSpace(value); // 'SQL.FOO' NAString tmp(value); value = ToAnsiIdentifier(value); // nondelim ok? if (value.isNull()) value = NAString("\"") + tmp + "\""; // '"SQL.FOO"' } else PrettifySqlText(value); ComSchemaName nam(value); if (nam.getSchemaNamePart().isEmpty() || // 0 name parts, if *any* error !nam.getCatalogNamePart().isEmpty()) { // 2 parts (cat.sch) is an error setCatSchErr(value, EXE_INVALID_CAT_NAME, errOrWarn, TRUE); return FALSE; // invalid value } else { // Get the 1 name part (the "schema" part as far as ComSchema knows...) if (overwrite) insert(CATALOG, nam.getSchemaNamePartAsAnsiString()); return TRUE; } } NABoolean NADefaults::setMPLoc(const NAString &value, Int32 errOrWarn, Provenance overwriteIfNotYet) { NABoolean isValid = TRUE; // Validate the entire string all at once, // so that if any namepart is in error, // we insert NONE of the MP_xxx values. 
ComMPLoc loc(value, ComMPLoc::SUBVOL); if (!loc.isValid(ComMPLoc::SUBVOL)) { // Call the MPLOC validator solely to emit proper errmsg validateNSKMPLoc.validate(value, this, MP_SUBVOLUME, errOrWarn); isValid = FALSE; } else { NAString v; DefaultConstants e; if (loc.hasSystemName()) { v = loc.getSystemName(); e = validateAndInsert("MP_SYSTEM", v, 0, errOrWarn, overwriteIfNotYet); CMPASSERT(e >= 0); // this is just double-checking! } v = loc.getVolumeName(); e = validateAndInsert("MP_VOLUME", v, 0, errOrWarn, overwriteIfNotYet); CMPASSERT(e >= 0); // this is just double-checking! v = loc.getSubvolName(); e = validateAndInsert("MP_SUBVOLUME", v, 0, errOrWarn, overwriteIfNotYet); CMPASSERT(e >= 0); // this is just double-checking! } return isValid; } NABoolean NADefaults::setSchema(NAString &value, Int32 errOrWarn, NABoolean overwrite, NABoolean alreadyCanonical) { // if this is part of CQD *RESET and it was initialized with role name // do not change the following flags // to allow DEFAULT_SCHEMA_NAMETYPE to set its value if (!( schSetToUserID() && isResetAll() )) { setSchUserID(currentState_ == COMPUTED); setSchByNametype(FALSE); } if (alreadyCanonical) ; // leave it alone, for performance's sake else if (currentState_ == COMPUTED) { // ' SQL.FOO' TrimNAStringSpace(value); // 'SQL.FOO' NAString tmp(value); value = ToAnsiIdentifier(value); // nondelim ok? if (value.isNull()) value = NAString("\"") + tmp + "\""; // '"SQL.FOO"' } else PrettifySqlText(value); ComSchemaName nam(value); if (nam.getSchemaNamePart().isEmpty()) { // 0 name parts, if *any* error setCatSchErr(value, EXE_INVALID_SCH_NAME, errOrWarn); return FALSE; // invalid value } else { if (overwrite) insert(SCHEMA, nam.getSchemaNamePartAsAnsiString()); // If 2 parts, overwrite any prior catalog default if (!nam.getCatalogNamePart().isEmpty()) { if (overwrite) { insert(CATALOG, nam.getCatalogNamePartAsAnsiString()); if (currentState_ == SET_BY_CQD) { // indicate that this attribute was set by a user CQD. setUserDefault(CATALOG, TRUE); } } } return TRUE; } } NAString NADefaults::keyword(DefaultToken tok) { CMPASSERT(tok >= 0 && tok < DF_lastToken); return keywords_[tok]; } // Defaults Tokens // There is a set of keywords which can appear as values of Defaults entries // in the Defaults Table. We declare, for each such token, a string (the // keyword), and an enumeration value. The string values belong in an // array, DFkeywords, in sorted order. The idea is we can use binary // search in order to obtain the index of a string to the matching // entry in this sorted array. // // If we define the enumerations carefully (pay attention here!), then // that index we just found (see previous paragraph) is the enum value // of the token. 
// In simple words: this has to be in identical order with enum DefaultToken // in DefaultConstants.h const char *NADefaults::keywords_[DF_lastToken] = { "ACCUMULATED", "ADVANCED", "AGGRESSIVE", "ALL", "ANSI", "BEGINNER", "BOTH", "CLEAR", "DEBUG", "DISK", "DISK_POOL", "DUMP", "DUMP_MV", "EXTERNAL", "EXTERNAL_DETAILED", "FIRSTROW", "HARDWARE", "HEAP", "HIGH", "HYBRID", "IEEE", "INDEXES", "INTERNAL", "IQS", "JNI", "JNI_TRX", "KEYINDEXES", "LASTROW", "LATEST", "LOADNODUP", "LOCAL", "LOCAL_NODE", "LOG", "MAXIMUM", "MEASURE", "MEDIUM", "MEDIUM_LOW", "MERGE", "MINIMUM", "MMAP", "MULTI_NODE", "MVCC", "NONE", "NSK", "OFF", "ON", "OPENS_FOR_WRITE", "OPERATOR", "OPTIMAL", "ORDERED", "PERTABLE", "PRINT", "PRIVATE", "PUBLIC", "QS", "READ_COMMITTED", "READ_UNCOMMITTED", "RELEASE", "REMOTE", "REPEATABLE_READ", "REPLACE", "REPSEL", "RESOURCES", "RETURN", "SAMPLE", "SERIALIZABLE", "SHORTANSI", "SIMPLE", "SKIP", "SMD", "SOFTWARE", "SOURCE", "SQLMP", "SSCC", "SSD", "STOP", "SUFFIX", "SYSTEM", "TANDEM", "THRIFT", "USER", "VERTICAL", "WAIT", "WARN", "XML" }; // To call bsearch we must satisfy each of its arguments. Either // NULL comes back, or, comes back a pointer to the element which is // a true match for our key. bsearch.key is upperKey.data(). // bsearch.base is keywords_. nel is DF_lastToken. // The next argument is sizeof char*. Finally, the comparison // function can simply be the strcmp function. // // Note that this function makes heavy reliance on the idea that // the DefaultToken enumerations go up in sequence 0, 1, 2, 3... . // // We do the cast on strcmp because its signature from the header // file is: int (*)(const char *, const char *). In general, we're // doing a lot of type casting in here. static Int32 stringCompare(const void* s1, const void* s2) { return strcmp( * (char**) s1, * (char**) s2); } DefaultToken NADefaults::token(Int32 attrEnum, NAString &value, NABoolean valueAlreadyGotten, Int32 errOrWarn) const { ATTR_RANGE_ASSERT; if (!valueAlreadyGotten) { value = getValue(attrEnum); // already trim & upper (by validateAndInsert) TrimNAStringSpace(value); // can't trust that the stored value is canonical } else { TrimNAStringSpace(value); // can't trust that input value is canonical, value.toUpper(); // so here do what validateAndInsert does } DefaultToken tok = DF_noSuchToken; if (value.isNull()) tok = DF_SYSTEM; else { if ((attrEnum == TERMINAL_CHARSET) || (attrEnum == USE_HIVE_SOURCE) || (attrEnum == HIVE_FILE_CHARSET) || (attrEnum == HBASE_DATA_BLOCK_ENCODING_OPTION) || (attrEnum == HBASE_COMPRESSION_OPTION)) return DF_USER; if ( attrEnum == NATIONAL_CHARSET || attrEnum == DEFAULT_CHARSET || attrEnum == HIVE_DEFAULT_CHARSET || attrEnum == ISO_MAPPING || attrEnum == INPUT_CHARSET || attrEnum == TRAF_DEFAULT_COL_CHARSET ) { CharInfo::CharSet cs = CharInfo::getCharSetEnum(value); Int32 err_found = 0; if ( !CharInfo::isCharSetSupported(cs) ) { err_found = 1; } else { switch( attrEnum ) { case NATIONAL_CHARSET: if (cs == CharInfo::KANJI_MP) break; //Allow (for regression test) if ((cs != CharInfo::UNICODE) && (cs != CharInfo::ISO88591)) err_found = 1; break; case DEFAULT_CHARSET: if (cs != CharInfo::ISO88591 && cs != CharInfo::UTF8 // && cs != CharInfo::SJIS ) err_found = 1; break; case HIVE_DEFAULT_CHARSET: case TRAF_DEFAULT_COL_CHARSET: if ((cs != CharInfo::UTF8) && (cs != CharInfo::ISO88591)) err_found = 1; break; case ISO_MAPPING: if (cs != CharInfo::ISO88591) err_found = 1; break; default: break; } } if ( (err_found != 0) && errOrWarn ) *CmpCommon::diags() << 
DgSqlCode(ERRWARN(3010)) << DgString0(value); else return DF_USER; // kludge, return any valid token } //else //else fall thru to see if value is SYSTEM // OPTIMIZATION_LEVEL if ((attrEnum == OPTIMIZATION_LEVEL) && value.length() == 1) switch (*value.data()) { case '0': return DF_MINIMUM; case '1': return DF_MINIMUM; case '2': return DF_MEDIUM_LOW; case '3': return DF_MEDIUM; case '4': return DF_MEDIUM; case '5': return DF_MAXIMUM; } // PCODE_OPT_LEVEL if ((attrEnum == PCODE_OPT_LEVEL) && value.length() == 1) switch (*value.data()) { case '0': return DF_MINIMUM; case '1': return DF_MEDIUM; case '2': return DF_HIGH; case '3': return DF_MAXIMUM; } // HBASE_FILTER_PREDS if ((attrEnum == HBASE_FILTER_PREDS) && value.length()==1) switch (*value.data()){ case '0': return DF_OFF; case '1': return DF_MINIMUM; case '2': return DF_MEDIUM; // in the future add DF_HIGH and DF_MAXIMUM when we implement more // pushdown capabilities } if ( attrEnum == TEMPORARY_TABLE_HASH_PARTITIONS || attrEnum == MVQR_REWRITE_CANDIDATES || attrEnum == MVQR_PUBLISH_TABLE_LOCATION || attrEnum == MVQR_WORKLOAD_ANALYSIS_MV_NAME || attrEnum == HIST_SCRATCH_VOL) return DF_SYSTEM; const char *k = value.data(); char *match = (char*) bsearch( &k, keywords_, DF_lastToken, sizeof(char*), stringCompare); if (match) tok = (DefaultToken) (((const char**) match) - keywords_); else { // Check for synonyms const char *c = value; for (; *c == '0'; c++) ; // all ascii '0' ? if (*c == '\0') // terminating nul '\0' tok = DF_OFF; else if (value.length() <= 2) { if (value == "1" || value == "+1" || value == "-1") tok = DF_ON; } else { if ((value == "STOP_AT") || (value == "STOP AT")) tok = DF_STOP; else if (value == "READ COMMITTED") tok = DF_READ_COMMITTED; else if (value == "READ UNCOMMITTED") tok = DF_READ_UNCOMMITTED; else if (value == "REPEATABLE READ") tok = DF_REPEATABLE_READ; else if (value == "BEGINNER") tok = DF_BEGINNER; else if (value == "ADVANCED") tok = DF_ADVANCED; #define CONVERT_SYNONYM(from,to) \ else if (value == "" # from "") { \ CMPASSERT(DF_ ## from == DF_ ## to); \ tok = DF_ ## to; \ } CONVERT_SYNONYM(COMPAQ, TANDEM) CONVERT_SYNONYM(DISABLE, OFF) CONVERT_SYNONYM(ENABLE, SYSTEM) CONVERT_SYNONYM(FALSE, OFF) CONVERT_SYNONYM(FULL, MAXIMUM) CONVERT_SYNONYM(TRUE, ON) } } } NABoolean isValid = FALSE; if (tok != DF_noSuchToken) switch (attrEnum) { case DEFAULT_SCHEMA_ACCESS_ONLY: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case DEFAULT_SCHEMA_NAMETYPE: if (tok == DF_SYSTEM || tok == DF_USER) isValid = TRUE; break; case DETAILED_STATISTICS: if (tok == DF_ALL || tok == DF_MEASURE || tok == DF_ACCUMULATED || tok == DF_OPERATOR || tok == DF_PERTABLE || tok == DF_OFF) isValid = TRUE; break; case GROUP_BY_USING_ORDINAL: if (tok == DF_ALL || tok == DF_MINIMUM || tok == DF_OFF) isValid = TRUE; break; case HIDE_INDEXES: if (tok == DF_NONE || tok == DF_ALL || tok == DF_VERTICAL || tok == DF_INDEXES || tok == DF_KEYINDEXES) isValid = TRUE; break; case HIVE_USE_EXT_TABLE_ATTRS: if (tok == DF_ALL || tok == DF_OFF || tok == DF_ON ) isValid = TRUE; break; case INDEX_ELIMINATION_LEVEL: if (tok == DF_MINIMUM || tok == DF_MEDIUM || tok == DF_MAXIMUM || tok == DF_AGGRESSIVE ) isValid = TRUE; break; case IF_LOCKED: if (tok == DF_RETURN || tok == DF_WAIT) isValid = TRUE; break; case INSERT_VSBB: if (tok == DF_OFF || tok == DF_LOADNODUP || tok == DF_SYSTEM || tok == DF_USER) isValid = TRUE; break; case OVERFLOW_MODE: if (tok == DF_DISK || tok == DF_SSD || tok == DF_MMAP) isValid = TRUE; break; case SORT_ALGO: if(tok == DF_HEAP || tok 
== DF_IQS || tok == DF_REPSEL || tok == DF_QS) isValid = TRUE; break; case QUERY_CACHE_MPALIAS: case QUERY_TEMPLATE_CACHE: case SHARE_TEMPLATE_CACHED_PLANS: case VSBB_TEST_MODE: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case QUERY_TEXT_CACHE: if (tok == DF_ON || tok == DF_OFF || tok == DF_SYSTEM || tok == DF_SKIP) isValid = TRUE; break; case DISABLE_BUFFERED_INSERTS: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case ISOLATION_LEVEL: { TransMode::IsolationLevel iltmp; isValid = getIsolationLevel(iltmp, tok); } break; case ISOLATION_LEVEL_FOR_UPDATES: { TransMode::IsolationLevel iltmp; isValid = getIsolationLevel(iltmp, tok); } break; case MVGROUP_AUTOMATIC_CREATION: case MV_TRACE_INCONSISTENCY: //++ MV case MV_AS_ROW_TRIGGER: //++ MV { if(DF_ON == tok || DF_OFF == tok) { isValid = TRUE; } } break; case IUD_NONAUDITED_INDEX_MAINT: if (tok == DF_OFF || tok == DF_SYSTEM || tok == DF_WARN || tok == DF_ON) isValid = TRUE; break; case HIVE_SCAN_SPECIAL_MODE: isValid = TRUE; break; case IS_SQLCI: // for primary mxcmp that is invoked for user queries, the only valid // value for mxci_process cqd is TRUE. This cqd is set once by mxci // at startup time and cannot be changed by user. That way we know that // a request has come in from mxci(trusted) process. // For secondary mxcmp's invoked for internal queries where cqd's are // sent using sendAllControls method, all values are valid. This will // ensure that if this default is not set and is sent over to secondary // mxcmp using an internal CQD statement, it doesn't return an error. if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case NVCI_PROCESS: // for primary mxcmp that is invoked for user queries, the only valid // value for nvci_process cqd is TRUE. This cqd is set once by nvci // at startup time and cannot be changed by user. That way we know that // a request has come in from nvci(trusted) process. // For secondary mxcmp's invoked for internal queries where cqd's are // sent using sendAllControls method, all values are valid. This will // ensure that if this default is not set and is sent over to secondary // mxcmp using an internal CQD statement, it doesn't return an error. 
if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case NAMETYPE: if (tok == DF_ANSI || tok == DF_SHORTANSI || tok == DF_NSK) isValid = TRUE; break; case OPTIMIZATION_GOAL: if (tok == DF_FIRSTROW || tok == DF_LASTROW || tok == DF_RESOURCES) isValid = TRUE; break; case USER_EXPERIENCE_LEVEL: if (tok == DF_ADVANCED || tok == DF_BEGINNER) isValid = TRUE; break; case PCODE_OPT_LEVEL: if (tok == DF_OFF) { isValid = TRUE; break; } // else fall through to the next case, all those keywords are allowed // as well case ATTEMPT_ESP_PARALLELISM: if (tok == DF_SYSTEM || tok == DF_ON || tok == DF_OFF || tok == DF_MAXIMUM) isValid = TRUE; break; case OPTIMIZATION_LEVEL: if (tok == DF_MINIMUM || tok == DF_MEDIUM_LOW || tok == DF_MEDIUM || tok == DF_MAXIMUM) isValid = TRUE; break; case HBASE_FILTER_PREDS: if(tok == DF_OFF || tok == DF_ON) { if (tok == DF_ON) tok = DF_MINIMUM; // to keep backward compatibility isValid= TRUE; } break; case ROBUST_QUERY_OPTIMIZATION: if (tok == DF_MINIMUM || tok == DF_SYSTEM || tok == DF_MAXIMUM || tok == DF_HIGH) isValid = TRUE; break; case REFERENCE_CODE: case TARGET_CODE: if (tok == DF_RELEASE || tok == DF_DEBUG) isValid = TRUE; break; /* case ROLLBACK_ON_ERROR: if (tok == DF_OFF || tok == DF_ON || tok == DF_SYSTEM) isValid = TRUE; break; */ case AUTO_QUERY_RETRY: if (tok == DF_ON || tok == DF_OFF || tok == DF_SYSTEM) isValid = TRUE; break; case AUTO_QUERY_RETRY_WARNINGS: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case EXE_PARALLEL_DDL: if (tok == DF_OFF || tok == DF_ON || tok == DF_EXTERNAL || tok == DF_INTERNAL) isValid = TRUE; break; case UNAVAILABLE_PARTITION: if (tok == DF_SKIP || tok == DF_STOP) isValid = TRUE; break; case QUERY_CACHE_STATISTICS: // on, off are no-ops if (tok == DF_PRINT || tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case QUERY_CACHE_STATEMENT_PINNING: if (tok == DF_CLEAR || tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case HJ_TYPE: if (tok == DF_ORDERED || tok == DF_HYBRID || tok == DF_SYSTEM) isValid = TRUE; break; case REF_CONSTRAINT_NO_ACTION_LIKE_RESTRICT: if (tok == DF_OFF || tok == DF_ON || tok == DF_SYSTEM) isValid = TRUE; break; case POS: if (tok == DF_LOCAL_NODE || tok == DF_OFF || tok == DF_MULTI_NODE || tok == DF_DISK_POOL) isValid = TRUE; break; case USTAT_INTERNAL_SORT: if (tok == DF_ON || tok == DF_OFF || tok == DF_HYBRID) isValid = TRUE; break; case USTAT_AUTO_FOR_VOLATILE_TABLES: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case SUBQUERY_UNNESTING: if (tok == DF_OFF || tok == DF_ON || tok == DF_DEBUG) isValid = TRUE; break; case SORT_INTERMEDIATE_SCRATCH_CLEANUP: if(tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case SORT_MEMORY_QUOTA_SYSTEM: if(tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; /* If MDAM_SCAN_METHOD's value is "MAXIMUM" only, Right side of Nested Join will use the MDAM path Allowable values for MDAM_SCAN_METHOD are 'ON' | 'OFF' | 'MAXIMUM' */ case MDAM_SCAN_METHOD: if (tok == DF_ON || tok == DF_OFF || tok == DF_MAXIMUM) isValid = TRUE; break; case SHOWDDL_DISPLAY_FORMAT: if (tok == DF_INTERNAL || tok == DF_EXTERNAL || tok == DF_LOG) isValid = TRUE; break; case SHOWDDL_DISPLAY_PRIVILEGE_GRANTS: if (tok == DF_SYSTEM || tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case EXPLAIN_DISPLAY_FORMAT: if (tok == DF_INTERNAL || tok == DF_EXTERNAL || tok == DF_EXTERNAL_DETAILED) isValid = TRUE; break; case UPDATE_CLUSTERING_OR_UNIQUE_INDEX_KEY: if (tok == DF_ON || tok == DF_OFF || tok == DF_AGGRESSIVE) isValid = TRUE; break; case MVQR_ALL_JBBS_IN_QD: case 
MVQR_REWRITE_ENABLED_OPTION: case MVQR_REWRITE_SINGLE_TABLE_QUERIES: case MVQR_USE_EXTRA_HUB_TABLES: case MVQR_ENABLE_LOGGING: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case MVQR_LOG_QUERY_DESCRIPTORS: if (tok == DF_OFF || tok == DF_DUMP || tok == DF_DUMP_MV || tok == DF_LOG) isValid = TRUE; break; case MVQR_PRIVATE_QMS_INIT: if (tok == DF_SMD || tok == DF_XML || tok == DF_NONE) isValid = TRUE; break; case MVQR_PUBLISH_TO: if (tok == DF_PUBLIC || tok == DF_PRIVATE || tok == DF_BOTH || tok == DF_NONE) isValid = TRUE; break; case MVQR_WORKLOAD_ANALYSIS_MV_NAME: isValid = TRUE; break; case ELIMINATE_REDUNDANT_JOINS: if (tok == DF_OFF || tok == DF_ON || tok == DF_DEBUG || tok == DF_MINIMUM) isValid = TRUE; break; case VOLATILE_TABLE_FIND_SUITABLE_KEY: if (tok == DF_SYSTEM || tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case CAT_DISTRIBUTE_METADATA: if (tok == DF_OFF || tok == DF_LOCAL_NODE || tok == DF_ON) isValid = TRUE; break; case MV_DUMP_DEBUG_INFO: if (tok == DF_OFF || tok == DF_ON) isValid = TRUE; break; case RANGESPEC_TRANSFORMATION: if (tok == DF_OFF || tok == DF_ON || tok == DF_MINIMUM) isValid = TRUE; break; case ASYMMETRIC_JOIN_TRANSFORMATION: if (tok == DF_MINIMUM || tok == DF_MAXIMUM) isValid = TRUE; break; case CAT_DEFAULT_COMPRESSION: if (tok == DF_NONE || tok == DF_HARDWARE || tok == DF_SOFTWARE) isValid = TRUE; break; case REPLICATE_DISK_POOL: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; case COMPRESSION_TYPE: if (tok == DF_NONE || tok == DF_HARDWARE || tok == DF_SOFTWARE) isValid = TRUE; break; // The DF_SAMPLE setting indicates that the persistent sample will be // updated incrementally, but not the histograms; they will be created // anew from the incrementally updated sample. case USTAT_INCREMENTAL_UPDATE_STATISTICS: if (tok == DF_OFF || tok == DF_SAMPLE || tok == DF_ON) isValid = TRUE; break; case REPLICATE_COMPRESSION_TYPE: if (tok == DF_NONE || tok == DF_HARDWARE || tok == DF_SOFTWARE || tok == DF_SOURCE || tok == DF_SYSTEM) isValid = TRUE; break; case REUSE_OPENS: if (tok==DF_ON || tok == DF_OFF || tok == DF_OPENS_FOR_WRITE) isValid = TRUE; break; case USE_HIVE_SOURCE: isValid = TRUE; break; case TRAF_TABLE_SNAPSHOT_SCAN: if (tok == DF_NONE || tok == DF_SUFFIX || tok == DF_LATEST) isValid = TRUE; break; case LOB_OUTPUT_SIZE: if (tok >=0 && tok <= 512000) isValid = TRUE; break; case LOB_MAX_CHUNK_MEM_SIZE: if (tok >=0 && tok <= 512000) isValid = TRUE; break; case LOB_GC_LIMIT_SIZE: if (tok >= 0 ) isValid=TRUE; case TRAF_TRANS_TYPE: if (tok == DF_MVCC || tok == DF_SSCC) isValid = TRUE; break; case HBASE_RANGE_PARTITIONING_PARTIAL_COLS: if (tok == DF_OFF || tok == DF_MINIMUM || tok == DF_MEDIUM || tok == DF_MAXIMUM || tok == DF_ON) isValid = TRUE; break; case TRAF_UPSERT_MODE: if (tok == DF_MERGE || tok == DF_REPLACE || tok == DF_OPTIMAL) isValid = TRUE; break; // Nothing needs to be added here for ON/OFF/SYSTEM keywords -- // instead, add to DEFAULT_ALLOWS_SEPARATE_SYSTEM code in the ctor. default: if (tok == DF_ON || tok == DF_OFF) isValid = TRUE; break; } // See "NOTE 2" way up top. 
if (!isValid) { if (tok == DF_SYSTEM) { isValid = isFlagOn(attrEnum, DEFAULT_ALLOWS_SEPARATE_SYSTEM); if (!isValid) { NAString tmp(getDefaultDefaultValue(attrEnum)); isValid = isSynonymOfSYSTEM(attrEnum, tmp); } } } if (!isValid) { tok = DF_noSuchToken; if (errOrWarn) *CmpCommon::diags() << DgSqlCode(ERRWARN(2055)) << DgString0(value) << DgString1(lookupAttrName(attrEnum)); } return tok; } DefaultToken NADefaults::getToken( const Int32 attrEnum, const Int32 errOrWarn ) const { // Check the cache first. if ( currentTokens_[attrEnum] != NULL ) { return *currentTokens_[attrEnum]; } // Get the token and allocate memory to store the token value. NAString tmp( NADHEAP ); currentTokens_[attrEnum] = new NADHEAP DefaultToken; *currentTokens_[attrEnum] = token( attrEnum, tmp, FALSE, errOrWarn ); return *currentTokens_[attrEnum]; } NABoolean NADefaults::getIsolationLevel(TransMode::IsolationLevel &arg, DefaultToken tok) const { NABoolean specifiedOK = TRUE; if (tok == DF_noSuchToken) tok = getToken(ISOLATION_LEVEL); switch (tok) { case DF_READ_COMMITTED: arg = TransMode::READ_COMMITTED_; break; case DF_READ_UNCOMMITTED: arg = TransMode::READ_UNCOMMITTED_; break; case DF_REPEATABLE_READ: arg = TransMode::REPEATABLE_READ_; break; case DF_SERIALIZABLE: case DF_SYSTEM: arg = TransMode::SERIALIZABLE_; break; case DF_NONE: arg = TransMode::IL_NOT_SPECIFIED_; break; default: arg = TransMode::SERIALIZABLE_; specifiedOK = FALSE; NAString value(NADHEAP); if (tok != DF_noSuchToken) value = keyword(tok); *CmpCommon::diags() << DgSqlCode(-2055) << DgString0(value) << DgString1("ISOLATION_LEVEL"); } return specifiedOK; } // find the packed length for all the default values stored // in currentDefaults_ array. // currentDefaults_ is a fixed sized array of "char *" where each // entry is pointing to the default value for that default. // After pack, the default values are put in the buffer in // sequential order with a null terminator. Lng32 NADefaults::packedLengthDefaults() { Lng32 size = 0; const size_t numAttrs = numDefaultAttributes(); for (size_t i = 0; i < numAttrs; i++) { size += strlen(currentDefaults_[i]) + 1; } return size; } Lng32 NADefaults::packDefaultsToBuffer(char * buffer) { const size_t numAttrs = numDefaultAttributes(); Lng32 totalSize = 0; Lng32 size = 0; for (UInt32 i = 0; i < numAttrs; i++) { size = (Lng32)strlen(currentDefaults_[i]) + 1; strcpy(buffer, currentDefaults_[i]); buffer += size; totalSize += size; } return totalSize; } Lng32 NADefaults::unpackDefaultsFromBuffer(Lng32 numEntriesInBuffer, char * buffer) { return 0; } NABoolean NADefaults::isSameCQD(Lng32 numEntriesInBuffer, char * buffer, Lng32 bufLen) { const Lng32 numCurrentDefaultAttrs = (Lng32)numDefaultAttributes(); // check to see if the default values in 'buffer' are the same // as those in the currentDefaults_ array. // Return TRUE if they are all the same. if (numCurrentDefaultAttrs != numEntriesInBuffer) return FALSE; if (bufLen == 0) return FALSE; Int32 curPos = 0; for (Int32 i = 0; i < numEntriesInBuffer; i++) { if (strcmp(currentDefaults_[i], &buffer[curPos]) != 0) return FALSE; curPos += strlen(&buffer[curPos]) + 1; } // everything matches. return TRUE; } Lng32 NADefaults::createNewDefaults(Lng32 numEntriesInBuffer, char * buffer) { const Lng32 numCurrentDefaultAttrs = (Lng32)numDefaultAttributes(); // save the current defaults savedCurrentDefaults_ = currentDefaults_; savedCurrentFloats_ = currentFloats_; savedCurrentTokens_ = currentTokens_; // VO, Plan Versioning Support. 
// // This code may execute in a downrev compiler, which knows about fewer // defaults than the compiler originally used to compile the statement. // Only copy those defaults we know about, and skip the rest. Lng32 numEntriesToCopy = _min (numEntriesInBuffer, numCurrentDefaultAttrs); // allocate a new currentDefaults_ array and make it point to // the default values in the input 'buffer'. // If the current number of default attributes are greater than the // ones in the input buffer, then populate the remaining default // entries in the currentDefaults_ array with the values from the // the savedCurrentDefaults_. currentDefaults_ = new NADHEAP const char * [numCurrentDefaultAttrs]; Int32 curPos = 0; Int32 i = 0; for (i = 0; i < numEntriesToCopy; i++) { currentDefaults_[i] = &buffer[curPos]; curPos += strlen(&buffer[curPos]) + 1; } for (i = numEntriesToCopy; i < numCurrentDefaultAttrs; i++) { currentDefaults_[i] = savedCurrentDefaults_[i]; } // allocate two empty arrays for floats and tokens. currentFloats_ = new NADHEAP float * [numCurrentDefaultAttrs]; currentTokens_ = new NADHEAP DefaultToken * [numCurrentDefaultAttrs]; memset( currentFloats_, 0, sizeof(float *) * numCurrentDefaultAttrs ); memset( currentTokens_, 0, sizeof(DefaultToken *) * numCurrentDefaultAttrs ); return 0; } Lng32 NADefaults::restoreDefaults(Lng32 numEntriesInBuffer, char * buffer) { // Deallocate the currentDefaults_ array. // The array entries are not to be deleted as they point to // entries in 'buffer' or the 'savedCurrentDefaults_'. // See NADefaults::createNewDefaults() method. if (currentDefaults_) { NADELETEBASIC(currentDefaults_, NADHEAP); } if (currentFloats_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETEBASIC(currentFloats_[i], NADHEAP); NADELETEBASIC(currentFloats_, NADHEAP); } if (currentTokens_) { for (size_t i = numDefaultAttributes(); i--; ) NADELETEBASIC(currentTokens_[i], NADHEAP); NADELETEBASIC(currentTokens_, NADHEAP); } // restore the saved defaults currentDefaults_ = savedCurrentDefaults_; currentFloats_ = savedCurrentFloats_; currentTokens_ = savedCurrentTokens_; return 0; } void NADefaults::updateCurrentDefaultsForOSIM(DefaultDefault * defaultDefault, NABoolean validateFloatVal) { Int32 attrEnum = defaultDefault->attrEnum; const char * defaultVal = defaultDefault->value; const char * valueStr = currentDefaults_[attrEnum]; if(valueStr) { NADELETEBASIC(valueStr,NADHEAP); } char * value = new NADHEAP char[strlen(defaultVal) + 1]; strcpy(value, defaultVal); currentDefaults_[attrEnum] = value; if ( validateFloatVal ) { float floatVal = 0; if (validateFloat(currentDefaults_[attrEnum], floatVal, attrEnum)) { if (currentFloats_[attrEnum]) { NADELETEBASIC(currentFloats_[attrEnum], NADHEAP); } currentFloats_[attrEnum] = new NADHEAP float; *currentFloats_[attrEnum] = floatVal; } } if ( currentTokens_[attrEnum] ) { NADELETEBASIC( currentTokens_[attrEnum], NADHEAP ); currentTokens_[attrEnum] = NULL; } } void NADefaults::setSchemaAsLdapUser(const NAString val) { NAString ldapUsername = val; if ( ldapUsername.isNull() ) ldapUsername = getValue(LDAP_USERNAME); if ( ldapUsername.isNull() ) return; ldapUsername.toUpper(); NAString schName = '"'; schName += ldapUsername; schName += '"'; // check schema name before insert // may get special characters from ldap ComSchemaName cSchName(schName); if ( !cSchName.getSchemaNamePart().isEmpty() && cSchName.getCatalogNamePart().isEmpty()) // should have no catalog { insert(SCHEMA, schName); } else { *CmpCommon::diags() << DgSqlCode(-2055) << DgString0(schName) 
<< DgString1("SCHEMA"); } }
package com.twitter.querulous.evaluator import java.sql.ResultSet import com.twitter.xrayspecs.TimeConversions._ import net.lag.configgy.ConfigMap import database._ import query._ object QueryEvaluatorFactory { def fromConfig(config: ConfigMap, databaseFactory: DatabaseFactory, queryFactory: QueryFactory): QueryEvaluatorFactory = { var factory: QueryEvaluatorFactory = new StandardQueryEvaluatorFactory(databaseFactory, queryFactory) config.getConfigMap("disable").foreach { disableConfig => factory = new AutoDisablingQueryEvaluatorFactory(factory, disableConfig("error_count").toInt, disableConfig("seconds").toInt.seconds) } factory } def fromConfig(config: ConfigMap, statsCollector: Option[StatsCollector]): QueryEvaluatorFactory = { fromConfig(config, DatabaseFactory.fromConfig(config.configMap("connection_pool"), statsCollector), QueryFactory.fromConfig(config, statsCollector)) } } object QueryEvaluator extends QueryEvaluatorFactory { private def createEvaluatorFactory() = { val queryFactory = new SqlQueryFactory val databaseFactory = new ApachePoolingDatabaseFactory(10, 10, 1.second, 10.millis, false, 0.seconds) new StandardQueryEvaluatorFactory(databaseFactory, queryFactory) } def apply(dbhosts: List[String], dbname: String, username: String, password: String, urlOptions: Map[String, String]) = { createEvaluatorFactory()(dbhosts, dbname, username, password, urlOptions) } } trait QueryEvaluatorFactory { def apply(dbhosts: List[String], dbname: String, username: String, password: String, urlOptions: Map[String, String]): QueryEvaluator def apply(dbhost: String, dbname: String, username: String, password: String, urlOptions: Map[String, String]): QueryEvaluator = { apply(List(dbhost), dbname, username, password, urlOptions) } def apply(dbhosts: List[String], dbname: String, username: String, password: String): QueryEvaluator = { apply(dbhosts, dbname, username, password, null) } def apply(dbhost: String, dbname: String, username: String, password: String): QueryEvaluator = { apply(List(dbhost), dbname, username, password, null) } def apply(dbhost: String, username: String, password: String): QueryEvaluator = { apply(List(dbhost), null, username, password, null) } def apply(dbhosts: List[String], username: String, password: String): QueryEvaluator = { apply(dbhosts, null, username, password, null) } def apply(config: ConfigMap): QueryEvaluator = { apply( config.getList("hostname").toList, config.getString("database").getOrElse(null), config("username"), config.getString("password").getOrElse(null), // this is so lame, why do I have to cast this back? config.getConfigMap("url_options").map(_.asMap.asInstanceOf[Map[String, String]]).getOrElse(null) ) } } trait QueryEvaluator { def select[A](query: String, params: Any*)(f: ResultSet => A): Seq[A] def selectOne[A](query: String, params: Any*)(f: ResultSet => A): Option[A] def count(query: String, params: Any*): Int def execute(query: String, params: Any*): Int def nextId(tableName: String): Long def insert(query: String, params: Any*): Long def transaction[T](f: Transaction => T): T }
/* $Id$ * * Ripped && Adapted from the PrBoom project: * PrBoom: a Doom port merged with LxDoom and LSDLDoom * based on BOOM, a modified and improved DOOM engine * Copyright (C) 1999 by * id Software, Chi Hoang, Lee Killough, Jim Flynn, Rand Phares, Ty Halderman * Copyright (C) 1999-2000 by * Jess Haas, Nicolas Kalkhof, Colin Phipps, Florian Schulze * Copyright 2005, 2006 by * Florian Schulze, Colin Phipps, Neil Stevens, Andrey Budko * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301 USA. * * DESCRIPTION: * System interface for sound, using SDL. * */ #include "h2stdinc.h" #include <math.h> /* pow() */ #include "SDL.h" #include "doomdef.h" #include "sounds.h" #include "i_sound.h" #define SAMPLE_FORMAT AUDIO_S16SYS #define SAMPLE_ZERO 0 #define SAMPLE_RATE 11025 /* Hz */ #define SAMPLE_CHANNELS 2 #if 0 #define SAMPLE_TYPE char #else #define SAMPLE_TYPE short #endif /* * SOUND HEADER & DATA */ int snd_Channels; int snd_MaxVolume, /* maximum volume for sound */ snd_MusicVolume; /* maximum volume for music */ boolean snd_MusicAvail, /* whether music is available */ snd_SfxAvail; /* whether sfx are available */ /* * SOUND FX API */ typedef struct { unsigned char *begin; /* pointers into Sample.firstSample */ unsigned char *end; SAMPLE_TYPE *lvol_table; /* point into vol_lookup */ SAMPLE_TYPE *rvol_table; unsigned int pitch_step; unsigned int step_remainder; /* 0.16 bit remainder of last step. */ int pri; unsigned int time; } Channel; typedef struct { /* Sample data is a lump from a wad: byteswap the a, freq * and the length fields before using them */ short a; /* always 3 */ short freq; /* always 11025 */ int32_t length; /* sample length */ unsigned char firstSample; } Sample; COMPILE_TIME_ASSERT(Sample, offsetof(Sample,firstSample) == 8); #define CHAN_COUNT 8 static Channel channel[CHAN_COUNT]; #define MAX_VOL 64 /* 64 keeps our table down to 16Kb */ static SAMPLE_TYPE vol_lookup[MAX_VOL * 256]; static int steptable[256]; /* Pitch to stepping lookup */ static boolean snd_initialized; static int SAMPLECOUNT = 512; int snd_samplerate = SAMPLE_RATE; static void audio_loop (void *unused, Uint8 *stream, int len) { Channel* chan; Channel* cend; SAMPLE_TYPE *begin; SAMPLE_TYPE *end; unsigned int sample; register int dl; register int dr; end = (SAMPLE_TYPE *) (stream + len); cend = channel + CHAN_COUNT; begin = (SAMPLE_TYPE *) stream; while (begin < end) { // Mix all the channels together. dl = SAMPLE_ZERO; dr = SAMPLE_ZERO; chan = channel; for ( ; chan < cend; chan++) { // Check channel, if active. if (chan->begin) { // Get the sample from the channel. sample = *chan->begin; // Adjust volume accordingly. dl += chan->lvol_table[sample]; dr += chan->rvol_table[sample]; // Increment sample pointer with pitch adjustment. chan->step_remainder += chan->pitch_step; chan->begin += chan->step_remainder >> 16; chan->step_remainder &= 65535; // Check whether we are done. 
if (chan->begin >= chan->end) { chan->begin = NULL; // printf (" channel done %d\n", chan); } } } #if 0 /* SAMPLE_FORMAT */ if (dl > 127) dl = 127; else if (dl < -128) dl = -128; if (dr > 127) dr = 127; else if (dr < -128) dr = -128; #else if (dl > 0x7fff) dl = 0x7fff; else if (dl < -0x8000) dl = -0x8000; if (dr > 0x7fff) dr = 0x7fff; else if (dr < -0x8000) dr = -0x8000; #endif *begin++ = dl; *begin++ = dr; } } void I_SetSfxVolume(int volume) { } // Gets lump nums of the named sound. Returns pointer which will be // passed to I_StartSound() when you want to start an SFX. Must be // sure to pass this to UngetSoundEffect() so that they can be // freed! int I_GetSfxLumpNum(sfxinfo_t *sound) { if (sound->name[0] == 0) return 0; if (sound->link) sound = sound->link; return W_GetNumForName(sound->name); } // Id is unused. // Data is a pointer to a Sample structure. // Volume ranges from 0 to 127. // Separation (orientation/stereo) ranges from 0 to 255. 128 is balanced. // Pitch ranges from 0 to 255. Normal is 128. // Priority looks to be unused (always 0). int I_StartSound(int id, void *data, int vol, int sep, int pitch, int priority) { // Relative time order to find oldest sound. static unsigned int soundTime = 0; int chanId; Sample *sample; Channel *chan; int oldest; int i; // Find an empty channel, the oldest playing channel, or default to 0. // Currently ignoring priority. chanId = 0; oldest = soundTime; for (i = 0; i < CHAN_COUNT; i++) { if (! channel[ i ].begin) { chanId = i; break; } if (channel[ i ].time < oldest) { chanId = i; oldest = channel[ i ].time; } } sample = (Sample *) data; chan = &channel[chanId]; I_UpdateSoundParams(chanId + 1, vol, sep, pitch); // begin must be set last because the audio thread will access the channel // once it is non-zero. Perhaps this should be protected by a mutex. chan->pri = priority; chan->time = soundTime; chan->end = &sample->firstSample + LONG(sample->length); chan->begin = &sample->firstSample; soundTime++; #if 0 printf ("I_StartSound %d: v:%d s:%d p:%d pri:%d | %d %d %d %d\n", id, vol, sep, pitch, priority, chanId, chan->pitch_step, SHORT(sample->a), SHORT(sample->freq)); #endif return chanId + 1; } void I_StopSound(int handle) { handle--; handle &= 7; channel[handle].begin = NULL; } int I_SoundIsPlaying(int handle) { handle--; handle &= 7; return (channel[ handle ].begin != NULL); } void I_UpdateSoundParams(int handle, int vol, int sep, int pitch) { int lvol, rvol; Channel *chan; if (!snd_initialized) return; SDL_LockAudio(); // Set left/right channel volume based on seperation. sep += 1; // range 1 - 256 lvol = vol - ((vol * sep * sep) >> 16); // (256*256); sep = sep - 257; rvol = vol - ((vol * sep * sep) >> 16); // Sanity check, clamp volume. 
if (rvol < 0) { // printf ("rvol out of bounds %d, id %d\n", rvol, handle); rvol = 0; } else if (rvol > 127) { // printf ("rvol out of bounds %d, id %d\n", rvol, handle); rvol = 127; } if (lvol < 0) { // printf ("lvol out of bounds %d, id %d\n", lvol, handle); lvol = 0; } else if (lvol > 127) { // printf ("lvol out of bounds %d, id %d\n", lvol, handle); lvol = 127; } // Limit to MAX_VOL (64) lvol >>= 1; rvol >>= 1; handle--; handle &= 7; chan = &channel[handle]; chan->pitch_step = steptable[pitch]; chan->step_remainder = 0; chan->lvol_table = &vol_lookup[lvol * 256]; chan->rvol_table = &vol_lookup[rvol * 256]; SDL_UnlockAudio(); } /* * SOUND STARTUP STUFF */ // inits all sound stuff void I_StartupSound (void) { SDL_AudioSpec desired, obtained; if (snd_initialized) return; if (M_CheckParm("--nosound") || M_CheckParm("-s") || M_CheckParm("-nosound")) { fprintf(stdout, "I_StartupSound: Sound Disabled.\n"); return; } fprintf(stdout, "I_StartupSound (SDL):\n"); /* Initialize variables */ snd_SfxAvail = snd_MusicAvail = false; desired.freq = snd_samplerate; desired.format = SAMPLE_FORMAT; desired.channels = SAMPLE_CHANNELS; SAMPLECOUNT = 512; desired.samples = SAMPLECOUNT*snd_samplerate/11025; desired.callback = audio_loop; if (SDL_OpenAudio(&desired, &obtained) == -1) { fprintf(stderr, "Couldn't open audio with desired format\n"); return; } snd_initialized = true; SAMPLECOUNT = obtained.samples; fprintf(stdout, "Configured audio device with %d samples/slice\n", SAMPLECOUNT); snd_SfxAvail = true; SDL_PauseAudio(0); } // shuts down all sound stuff void I_ShutdownSound (void) { if (snd_initialized) { snd_initialized = false; snd_SfxAvail = false; snd_MusicAvail = false; SDL_CloseAudio(); } } void I_SetChannels(int channels) { int v, j; int *steptablemid; // We always have CHAN_COUNT channels. for (j = 0; j < CHAN_COUNT; j++) { channel[j].begin = NULL; channel[j].end = NULL; channel[j].time = 0; } // This table provides step widths for pitch parameters. steptablemid = steptable + 128; for (j = -128; j < 128; j++) { steptablemid[j] = (int) (pow(2.0, (j/64.0)) * 65536.0); } // Generate the volume lookup tables. for (v = 0; v < MAX_VOL; v++) { for (j = 0; j < 256; j++) { // vol_lookup[v*256+j] = 128 + ((v * (j-128)) / (MAX_VOL-1)); // Turn the unsigned samples into signed samples. #if 0 /* SAMPLE_FORMAT */ vol_lookup[v*256+j] = (v * (j-128)) / (MAX_VOL-1); #else vol_lookup[v*256+j] = (v * (j-128) * 256) / (MAX_VOL-1); #endif // printf ("vol_lookup[%d*256+%d] = %d\n", v, j, vol_lookup[v*256+j]); } } } /* * SONG API */ int I_RegisterSong(void *data) { return 0; } int I_RegisterExternalSong(const char *name) { return 0; } void I_UnRegisterSong(int handle) { } void I_PauseSong(int handle) { } void I_ResumeSong(int handle) { } void I_SetMusicVolume(int volume) { } int I_QrySongPlaying(int handle) { return 0; } // Stops a song. MUST be called before I_UnregisterSong(). void I_StopSong(int handle) { } void I_PlaySong(int handle, boolean looping) { }
<?php require_once('interface.Openable.php'); class Door implements Openable { private $_locked = false; public function open() { if($this->_locked) { print "Can't open the door. It's locked."; } else { print "creak...<br>"; } } public function close() { print "Slam!!<br>"; } public function lockDoor() { $this->_locked = true; } public function unlockDoor() { $this->_locked = false; } } ?>
small rock crushers in the philippines small scale crusher for sale philippines , The required jaw crusher and cone crusher are used to crush hard rocks, . Mini Crusher Philippines - Stone Pulverizing Machinery Philippines Mini Jaw Rock Crusher Machine In China Price , Find Complete Details Micronizer Mini Concrete . pe250 400 crusher for granite in philippines | trona crush quarry stone cutting machine italy; crusher for granite in philippines is widely used in stone mini jaw . used cone crusher in the philippines manufacturer in Shanghai, China used cone crusher in the philippines is manufactured from Shanghai Xuanshi,It is the main . Philippines Cone Crusher, Wholesale Various High Quality Philippines Cone Crusher Products from Global Philippines Cone Crusher Suppliers and Philippines Cone Crusher . bauxite crushing plant philippines mining equipment stationary river stone crusher in philippines a stone crusher in angola kenya 100 tph mobile stone production line . cone crusher for sale philippines,cone crusher spare parts The use of the cone crusher is widely, especially which used in super fine crushing hard rock, . Used Hydraulic Cone Crusher Here In The Philippin CS cone crusher for sale , Used Cone Crusher for Sale in Philippines,Mobile Stone , . cone crusher for sale in philippines Mobile Crushers all cone crusher for sale in philippines heavy industry is specialized in the design manufacture and supply of . cone crushers for sale in philippines , equipment, used cone crusher in the philippines, cone crushers for sale philippines for cone crusher Philippines, . hammer crushers feeding,hammer crushers fine Mobile Cone Crusher K Series Mobile Crushing Plant Products Center PE Jaw , for sale kmc200 rock crusher philippines . Home >Products >Cs Cone Crusher Philippines Cs Cone Crusher Philippin Cs Cone Crusher In Singapore For Sale CS Series Cone Crusher Mobile Crusher Philippines CS . limestone crusher pulverizer Gulin provide the limestone crusher pulverizer solution case for you , CS Series cone crusher spring cone crusher HPC cone crusher . Construction Crusher | Mobile Crusher Philippin May 27, 2016 d-272-308c cone crusher business invest cost With Mining Environmentally construction waste recycling . Cheapest stone crusher rate Philippines Stone crushing products consists of jaw crusher, influence crusher, cone crusher, fine crusher, . Rock Crushers, Mobile Jaw Crushers Mobile Screeners Bestselling QJ341 jaw crusher with efficient double deck pre-screen and optional extras will ensure that we can .
مجتمع أودو العربي (Arab Odoo Community) | NAML - login_page: "You must login as an administrator of مجتمع أودو العربي. Please re-enter your email/password and click Login."
Break Like a Champ by Chip Townsend is a video series created to help you take your breaking to the next level. Whether you are a student, instructor or school owner, Master Chip will show you how to setup for your breaks, hold for breaks and how to safely perform techniques on all kinds of materials. Get all of these great student videos for a one-time payment of $199. Short videos on great topics that help take your breaking to the next level. Are you a martial arts school owner or instructor and want to give your students a great experience with breaking? Are you looking to add a program or curriculum for breaking? Break Like a Champ is a one-time fee of only $199 for access to all of the student videos. You can add on our instructor section for only an additional $99 and also help take your students to the next level. Get our Instructor Add-On video series for an additional $99! Check below to see some of the great things that are included. Master Chip is a 14x breaking world champion and is ready to show you how to Break Like a Champ. Master Chip has more than 30 years in martial arts. The video series will walk you through the ins and the outs of breaking. You can pick or choose which videos to watch or watch them in order. Most of all, Master Chip will help you learn how to smash through your goals in these easy-to-watch videos. All of the videos are viewable on your computer, tablet, phone and/or other viewing device while you train at home or your dojo! You will get access to all of the videos to learn how to break wood, concrete bricks, and bats with techniques like round kicks, side kicks, chops and most of all, you will have fun while you do it!
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>mathcomp-field: 20 m 9 s 🏆</title> <link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" /> <link href="../../../../../bootstrap.min.css" rel="stylesheet"> <link href="../../../../../bootstrap-custom.css" rel="stylesheet"> <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet"> <script src="../../../../../moment.min.js"></script> <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> <![endif]--> </head> <body> <div class="container"> <div class="navbar navbar-default" role="navigation"> <div class="container-fluid"> <div class="navbar-header"> <a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a> </div> <div id="navbar" class="collapse navbar-collapse"> <ul class="nav navbar-nav"> <li><a href="../..">clean / released</a></li> <li class="active"><a href="">8.7.1 / mathcomp-field - 1.6.2</a></li> </ul> </div> </div> </div> <div class="article"> <div class="row"> <div class="col-md-12"> <a href="../..">« Up</a> <h1> mathcomp-field <small> 1.6.2 <span class="label label-success">20 m 9 s 🏆</span> </small> </h1> <p>📅 <em><script>document.write(moment("2022-01-22 02:24:22 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2022-01-22 02:24:22 UTC)</em><p> <h2>Context</h2> <pre># Packages matching: installed # Name # Installed # Synopsis base-bigarray base base-num base Num library distributed with the OCaml compiler base-threads base base-unix base camlp5 7.14 Preprocessor-pretty-printer of OCaml conf-findutils 1 Virtual package relying on findutils conf-perl 1 Virtual package relying on perl coq 8.7.1 Formal proof management system num 0 The Num library for arbitrary-precision integer and rational arithmetic ocaml 4.05.0 The OCaml compiler (virtual package) ocaml-base-compiler 4.05.0 Official 4.05.0 release ocaml-config 1 OCaml Switch Configuration ocamlfind 1.9.1 A library manager for OCaml # opam file: opam-version: &quot;2.0&quot; name: &quot;coq-mathcomp-field&quot; version: &quot;1.6.2&quot; maintainer: &quot;Mathematical Components &lt;[email protected]&gt;&quot; homepage: &quot;http://math-comp.github.io/math-comp/&quot; bug-reports: &quot;Mathematical Components &lt;[email protected]&gt;&quot; license: &quot;CeCILL-B&quot; build: [ make &quot;-C&quot; &quot;mathcomp/field&quot; &quot;-j&quot; &quot;%{jobs}%&quot; ] install: [ make &quot;-C&quot; &quot;mathcomp/field&quot; &quot;install&quot; ] remove: [ &quot;sh&quot; &quot;-c&quot; &quot;rm -rf &#39;%{lib}%/coq/user-contrib/mathcomp/field&#39;&quot; ] depends: [ &quot;ocaml&quot; &quot;coq-mathcomp-solvable&quot; {= &quot;1.6.2&quot;} ] tags: [ &quot;keyword:algebra&quot; &quot;keyword:field&quot; &quot;keyword:small scale reflection&quot; &quot;keyword:mathematical components&quot; &quot;keyword:odd order theorem&quot; ] authors: [ &quot;Jeremy Avigad &lt;&gt;&quot; &quot;Andrea Asperti &lt;&gt;&quot; &quot;Stephane Le Roux &lt;&gt;&quot; &quot;Yves Bertot &lt;&gt;&quot; &quot;Laurence Rideau &lt;&gt;&quot; &quot;Enrico Tassi &lt;&gt;&quot; &quot;Ioana Pasca &lt;&gt;&quot; &quot;Georges Gonthier 
&lt;&gt;&quot; &quot;Sidi Ould Biha &lt;&gt;&quot; &quot;Cyril Cohen &lt;&gt;&quot; &quot;Francois Garillot &lt;&gt;&quot; &quot;Alexey Solovyev &lt;&gt;&quot; &quot;Russell O&#39;Connor &lt;&gt;&quot; &quot;Laurent Théry &lt;&gt;&quot; &quot;Assia Mahboubi &lt;&gt;&quot; ] synopsis: &quot;Mathematical Components Library on Fields&quot; description: &quot;&quot;&quot; This library contains definitions and theorems about field extensions, galois theory, algebraic numbers, cyclotomic polynomials...&quot;&quot;&quot; url { src: &quot;http://github.com/math-comp/math-comp/archive/mathcomp-1.6.2.tar.gz&quot; checksum: &quot;md5=cf10cb16f1ac239a9d52c029a4e00f66&quot; } </pre> <h2>Lint</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Dry install 🏜️</h2> <p>Dry install with the current Coq version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam install -y --show-action coq-mathcomp-field.1.6.2 coq.8.7.1</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Install dependencies</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam list; echo; ulimit -Sv 4000000; timeout 4h opam install -y --deps-only coq-mathcomp-field.1.6.2 coq.8.7.1</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>18 m 10 s</dd> </dl> <h2>Install 🚀</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam list; echo; ulimit -Sv 16000000; timeout 8h opam install -y -v coq-mathcomp-field.1.6.2 coq.8.7.1</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>20 m 9 s</dd> </dl> <h2>Installation size</h2> <p>Total: 14 M</p> <ul> <li>4 M <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/fieldext.vo</code></li> <li>1 M <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/galois.vo</code></li> <li>714 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algebraics_fundamentals.vo</code></li> <li>712 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algC.vo</code></li> <li>638 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algC.glob</code></li> <li>569 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/galois.glob</code></li> <li>565 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algnum.vo</code></li> <li>531 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/separable.vo</code></li> <li>474 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algebraics_fundamentals.glob</code></li> <li>463 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/fieldext.glob</code></li> <li>462 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/falgebra.vo</code></li> <li>431 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/separable.glob</code></li> <li>399 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/finfield.vo</code></li> <li>364 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/countalg.vo</code></li> <li>342 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/closed_field.glob</code></li> <li>322 K 
<code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algnum.glob</code></li> <li>316 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/falgebra.glob</code></li> <li>300 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/closed_field.vo</code></li> <li>256 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/finfield.glob</code></li> <li>235 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/countalg.glob</code></li> <li>190 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/cyclotomic.vo</code></li> <li>124 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/cyclotomic.glob</code></li> <li>73 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algC.v</code></li> <li>69 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/galois.v</code></li> <li>67 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/fieldext.v</code></li> <li>61 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/all_field.vo</code></li> <li>52 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algebraics_fundamentals.v</code></li> <li>48 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/countalg.v</code></li> <li>47 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/falgebra.v</code></li> <li>44 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/separable.v</code></li> <li>39 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/algnum.v</code></li> <li>32 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/finfield.v</code></li> <li>24 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/closed_field.v</code></li> <li>15 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/cyclotomic.v</code></li> <li>1 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/all_field.glob</code></li> <li>1 K <code>../ocaml-base-compiler.4.05.0/lib/coq/user-contrib/mathcomp/field/all_field.v</code></li> </ul> <h2>Uninstall 🧹</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam remove -y coq-mathcomp-field.1.6.2</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Missing removes</dt> <dd> none </dd> <dt>Wrong removes</dt> <dd> none </dd> </dl> </div> </div> </div> <hr/> <div class="footer"> <p class="text-center"> Sources are on <a href="https://github.com/coq-bench">GitHub</a> © Guillaume Claret 🐣 </p> </div> </div> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="../../../../../bootstrap.min.js"></script> </body> </html>
# monitconf

my configs and upstart scripts for monit
Are there any maps (preferably shapefiles with the same coordinate systems) of fishing yields and EEZs? Preferably with the Pacific Ocean in the centre? I know the latter is not standard but the Pacific is way more interesting than the Atlantic in this respect. Probably georeferencing of the jpg is the quickest solution. I will give it a try myself, but in the meantime here is the link with a nice tutorial for georeferencing in QGIS.
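If it helps to script the georeferencing step instead of clicking through the QGIS dialog, below is a minimal sketch using GDAL's Python bindings. The file names, ground control points and target EPSG code are placeholders made up for illustration; they would need to be replaced with pixel/coordinate pairs read off the actual fishing-yield map.

```python
# Minimal sketch: georeference a scanned JPEG map with GDAL's Python bindings.
# File names, GCP values and the CRS below are hypothetical placeholders.
from osgeo import gdal

# Ground control points: gdal.GCP(x, y, z, pixel, line) ties a map coordinate
# to a pixel position in the scan.
gcps = [
    gdal.GCP(100.0,  60.0, 0,   10,   15),
    gdal.GCP(160.0,  60.0, 0, 2450,   12),
    gdal.GCP(100.0, -60.0, 0,    8, 1830),
    gdal.GCP(160.0, -60.0, 0, 2460, 1825),
]

# Attach the GCPs to a copy of the scan, then warp it into the target CRS.
gdal.Translate("yields_gcps.tif", "fishing_yields.jpg",
               GCPs=gcps, outputSRS="EPSG:4326")
gdal.Warp("yields_georef.tif", "yields_gcps.tif",
          dstSRS="EPSG:4326", resampleAlg="bilinear")
```

Scripting is mostly worthwhile if several map sheets need the same treatment; for a single JPEG the interactive Georeferencer shown in the linked tutorial is quicker.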
/* This file is part of GNUnet Copyright (C) 2008, 2009, 2012 GNUnet e.V. GNUnet is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version. GNUnet is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNUnet; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ /** * @file testing/test_testing_new_servicestartup.c * @brief test case for testing service startup using new testing API * @author Sree Harsha Totakura */ #include "platform.h" #include "gnunet_util_lib.h" #include "gnunet_testing_lib.h" #define LOG(kind,...) \ GNUNET_log (kind, __VA_ARGS__) /** * Global test status */ static int test_success; /** * The testing callback function * * @param cls NULL * @param cfg the configuration with which the current testing service is run */ static void test_run (void *cls, const struct GNUNET_CONFIGURATION_Handle *cfg, struct GNUNET_TESTING_Peer *peer) { GNUNET_assert (NULL == cls); GNUNET_assert (NULL != cfg); LOG (GNUNET_ERROR_TYPE_DEBUG, "Service arm started successfully\n"); test_success = GNUNET_YES; GNUNET_SCHEDULER_shutdown (); } /** * The main point of execution */ int main (int argc, char *argv[]) { test_success = GNUNET_NO; GNUNET_assert (0 == GNUNET_TESTING_service_run ("test-testing-servicestartup", "arm", "test_testing_defaults.conf", &test_run, NULL)); return (GNUNET_YES == test_success) ? 0 : 1; } /* end of test_testing_servicestartup.c */
An error message is generated when an OSI job is submitted because the correct grants have not been applied. Applying the proper grants to the OSI user should keep that error message from occurring.
Jaguar Industries has been a leading maker of hook-up wire as well as lead cable for virtually a 1/2 century. The fundamental components of this kind of wire comply with: Conductors. As the lead cords are drawn apart for firing or bent, the varnish fractures, in some cases tearing the insulation. Considering that a lot of cleansing procedures will certainly take out these finishings from the EPDM lead cable, cleansing EPDM lead cord prior to making use of in the procedure is not advised. Jaguar utilizes Uni-Strand tinned copper conductors. In this sort of building, the bare copper cords are stranded, then tinned to layer the fibers – and to complete the interstices in between the fibers. This permits much easier cable removing without any re-twisting procedure. PVC (vinyl plastic) insulation is rapid removing, withstands oil, solvents, as well as ozone. The shades are intense and also continue to be unique after handling. Applications consist of electric motors, transformers, fluorescent ballasts and also installations, switch-boards, panels, commands, rectifiers as well as digital circuits. Satisfies VW-1 Vertical Wire Flame Test in a lot of cases. Teflon® is a fluorinated polycarbonate with impressive thermal, bodily, and also electric residential properties. Teflon® is typically limited to applications needing its unique qualities considering that its standard material as well as handling prices are fairly high. Belden Teflon® cable items are extremely suggested for mini cord applications as a result of their premium thermal as well as electric homes. Teflon® is particularly ideal for inner wiring-soldering applications where insulation thaw back is a certain issue. Belden circuitry items protected with Teflon® are exceptional in their resistance to oil, oxidation, warmth, sunshine as well as fire; and in their capability to continue to be versatile at reduced temperature levels. They have outstanding resistance to ozone, water, liquor, fuel, acids, alkalis, fragrant hydrocarbons as well as solvents. EPDM (ethylene-propylene diene elastomer) is a chemically cross-linked elastomer with outstanding adaptability at low and high temperature levels (+150 ° C to -60 ° C). It has great insulation as well as dielectric toughness, in addition to exceptional scrape resistance as well as mechanical residential properties. EPDM additionally has much better cut-through resistance compared to Silicone rubber, which it changes in some applications. EPDM works with many varnishes. After the dip as well as cook pattern, nonetheless, the varnish has the tendency to stick to the insulation considering that EPDM, unlike some rubber insulations, does not exhibit oils or waxes. As the lead cords are rived for firing or bent, the varnish splits, occasionally tearing the insulation. To assist this issue, a stearic remedy is put on the lead cord throughout the production procedure. Nonetheless, numerous varnishes might still bond to the insulation unless various other unique finishings are used. (Other slip layers are offered at extra price.) 
Considering that the majority of cleansing procedures will certainly take out these finishings from the EPDM lead cable, washing EPDM lead cable just before utilizing while doing so is not suggested.(Due to the above, it is advised that the compatibility in between the specific lead cord dimension, the bake/varnish procedure and also varnish made use of constantly be examined; and also ideally, do not enable any type of varnish to expand past a factor where the lead cable will certainly be bent or angled). XL-Dur is a lead cord insulation using thermoset, chemically cross-linked poly-ethylene. Due to its exceptional bodily as well as electric homes, XL-Dur is extremely preferable for a wide range of applications. CSPE is a chlorosulfonated polyethylene. CSPE insulation has superb warmth resistance, colour security as well as electric residential properties. Neoprene insulation has great warmth growing old qualities and also is an outstanding reasonable electric motor lead cable. It might be thought about for usage in dangerous places as well as is being utilized in explosion-proof electric motors identified by UL. Silicone Rubber (braidless silicone) lead cord functions very easy and also tidy removing without the troubles connected with glass braid lead cable. It has outstanding bodily and also mechanical toughness homes. Silicone rubber is advised for high-temperature applications in electric motors, light, clothing dryers, ranges, restorative, as well as digital tools. It is suggested that varnish compatibility be examined just before manufacturing. Some firm varnishes might induce breaking when the cable is drastically angled. Jaguar Hook-Up Wire and also its Lead Wire items are made utilizing these products and also are provided in a range of dimensions as well as styles to satisfy firm sector and also federal government specs. Our products are produced internally, our hook-up as well as lead cable procedure starts with copper pole. Our rubber formula as well as plastic combining centers provide us full command of the item throughout. Because of this, regular top quality of these items is constantly guaranteed. Jaguar supplies a selection of ribbon cable that assist in less complicated directing and also firing, occupying much less area compared to cable harness packages. Jaguar’s Bus Bar cable offers a reliable option to keep power moving in your important electric systems. A hook-up cable for Solar Power applications, Jaguar’s photovoltaic cable prospers in extreme atmospheres. Supplied in both UL/TUV as well as UL cord demands for your ease. Jaguar Hook-Up and also Lead Wire items are produced making use of these products as well as are supplied in a selection of dimensions and also styles to comply with stiff market as well as federal government specs. Jaguar’s hook-up as well as lead cord items could be utilized in a wide range of applications consisting of affiliation circuits, inner electrical wiring of computer system as well as information handling devices, devices, illumination, electric motor leads, home heating as well as air conditioning tools, harness construction as well as automobile. ARE YOU LOOKING FOR A FAST QUOTE?
How one hospital used a cost-effective Self Pay Stratification tool to accomplish multiple goals. The last thing most financial officers, directors and managers at hospitals want to think about is Self Pay Revenue Recovery solutions. But it should be one of the first. Because with the right tools, technology, training and talent, it’s possible to recover more Self Pay revenue than ever before -- quickly, easily and at a reasonable net cost. Vision Self Pay Revenue Recovery has a demonstrated track record of helping hospitals solve the Self Pay Revenue Recovery puzzle. Our discrete, advanced solutions can often be implemented without disrupting current vendor relationships or in-place technology. Our proven, innovative approaches are guaranteed to recover more Self Pay revenue faster. And our commitment to helping you maintain positive relations with your patients is paramount.
Short Information : Madhya Pradesh Professional Examination Board (formerly Vyapam) has recently invited online applications for the Pre Agriculture Test (PAT) Examination 2017. Candidates who are interested in this admission and meet all the eligibility criteria can read the full notification/information brochure before applying online. Rs. 70/- portal charge is included in this fee. Pay the exam fee through an MP authorized KIOSK. OR with Agriculture Group from a recognized board in India.
#import <UIKit/UIKit.h> @interface LineStyleCell : UIView - (id) initWithFrame : (CGRect)frame lineStyle : (unsigned) style; @end
package org.gw4e.eclipse.studio.toolbar; /*- * #%L * gw4e * $Id:$ * $HeadURL:$ * %% * Copyright (C) 2017 gw4e-project * %% * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN * THE SOFTWARE. * #L% */ import org.eclipse.jface.action.IAction; import org.eclipse.ui.IWorkbenchWindow; import org.eclipse.ui.actions.RetargetAction; import org.gw4e.eclipse.studio.commands.ClearEdgeBendpointLayoutAction; public class ClearEdgeBenpointLayoutRetargetAction extends RetargetAction { public ClearEdgeBenpointLayoutRetargetAction(IWorkbenchWindow iww) { super(ClearEdgeBendpointLayoutAction.ID, ClearEdgeBendpointLayoutAction.LABEL,IAction.AS_PUSH_BUTTON); setToolTipText(ClearEdgeBendpointLayoutAction.LABEL); setImageDescriptor(ClearEdgeBendpointLayoutAction.imageDescriptor()); setDisabledImageDescriptor(ClearEdgeBendpointLayoutAction.disabledImageDescriptor()); setActionDefinitionId(ClearEdgeBendpointLayoutAction.ID); iww.getPartService().addPartListener(this); } protected void setActionHandler(IAction newHandler) { super.setActionHandler(newHandler); } }
Thanks Chad for really explaining. I'm just glad that during the Olympics there isn't any 'new' programming on anyway. Hope Lockwood gets their sh*t together. Anyway, glad I have my fall back sources like Hulu, etc. Really discovering lots of good new stuff on Netflix these days, some in UHD HD! We continue to get calls and emails from you, our viewers, and we are responding to each and every individual who contacts us. It has been two weeks since we made a very reasonable and fair offer to DISH to resolve the blackout. We still have not received a written offer back from DISH. It is interesting that DISH earlier this week reported revenue for the 4th quarter of 2017 - $3.48 BILLION. DISH also just announced that it will realize a $1.2 BILLION benefit from the new tax law reforms. Yes, that’s right, DISH can benefit from taxpayers to the tune of $1.2 BILLION, but can’t come to a deal with family-owned KTEN NBC, ABC Texoma and The Texoma CW for carriage . DISH is the exception in our company’s experience with these matters. We have been successful in maintaining carriage with every other provider in the market. That’s worth repeating: We have never had a carriage disruption, at any time, with any other provider but DISH. We have long-term deals with every other cable and satellite provider, and our signal is available over the air with an antenna. Although DISH has been telling you, their paying customers, for weeks that a deal is close – the truth is, they aren’t even trying. We’d like to thank you again for your continued loyalty and support. We are currently experiencing a dispute with Lockwood Broadcasting and your programming is impacted by these negotiations. Lockwood Broadcasting is demanding an unreasonable rate increase, which is more than two times what we currently pay for their channels. DISH offered to match the rates paid by all other pay-TV providers; Lockwood Broadcasting refused this offer. The fact is, only Lockwood Broadcasting can choose to remove their content from DISH customers. DISH offered to extend the contract so viewers would not be impacted but Lockwood Broadcasting refused. I don't know about the rest of you, but I'm tending to believe the party giving the updates over the party saying nothing. Ask yourself why... does it make sense for Dish not to negotiate? No, their customers are cancelling. Does it make sense for LockWood to lie? Yes, as they are not getting paid. There is no benefit to Dish to leave the channels off. Ask yourself. Why is it that Lockwood is the only one giving any updates. If Dish is actually responding to their offers why would they not want their suscribers to know. Again, I am going to be more accepting of the negotiating party that is providing at least some information. You can, that’s your choice. But seriously, what does each party have to gain by this black out. What does each party have to lose? You can believe whatever you want... you typically will never see too many updates from the providers, anywhere. They do not sink to the level of propaganda. That’s why it is imperative to understand how you’re being played a fool. Please don't believe ANYTHING Dish tells you. They are the only TV provider where losing your local channels happens on a regular basis. I've had plenty of experience with them in more than one city. It's best to switch to DirecTV and be done with them. That's what I'll be doing soon. Three of my five local channels have been unavailable to me for four months now. This is ridiculous and absurd. 
Please consider yourself warned. This happens with directv as well. They just had some locals down for 4-5 months. I believe they still have some down for a year as well. This is ridiculous! I am searching for a better option than DISH! Hi PamFogt55. Channel negotiations are a frustrating part of the pay-TV world and have an effect on all providers. We know that they can take quite a toll on your overall experience and if there's anything I can do to help you out, feel free to send me a Private Message with the phone number and 4 digit PIN on your account and we can see what options are available.
/* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */ package com.lades.sihv.controller.consultationEntryControl; import com.lades.sihv.model.Category; import com.lades.sihv.model.Prices; import com.lades.sihv.model.Procedures; import com.lades.sihv.model.ProceduresApplied; import java.io.Serializable; /** * * @author thiberius */ public class ItemProceduresApplied implements Serializable { private ProceduresApplied applied; private Procedures procedure; private Prices price; private Category category; // GETs & SETs ------------------------------------------------------------- public ProceduresApplied getApplied() { return applied; } public void setApplied(ProceduresApplied applied) { this.applied = applied; } public Procedures getProcedure() { return procedure; } public void setProcedure(Procedures procedure) { this.procedure = procedure; } public Prices getPrice() { return price; } public void setPrice(Prices price) { this.price = price; } public Category getCategory() { return category; } public void setCategory(Category category) { this.category = category; } }
'use strict'; Object.defineProperty(exports, "__esModule", { value: true }); var _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }(); var _EventEmitter2 = require('./../misc/EventEmitter'); var _EventEmitter3 = _interopRequireDefault(_EventEmitter2); function _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; } function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } } function _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); } return call && (typeof call === "object" || typeof call === "function") ? call : self; } function _inherits(subClass, superClass) { if (typeof superClass !== "function" && superClass !== null) { throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; } var Pack = function (_EventEmitter) { _inherits(Pack, _EventEmitter); function Pack() { _classCallCheck(this, Pack); var _this = _possibleConstructorReturn(this, (Pack.__proto__ || Object.getPrototypeOf(Pack)).call(this)); _this.cards = []; return _this; } _createClass(Pack, [{ key: 'drawAllCards', value: function drawAllCards() { var cards = this.cards.splice(0, this.cards.length); if (cards.length) { this.emit('cards:dropped', cards); } return cards; } }, { key: 'cardCount', get: function get() { return this.cards.length; } }]); return Pack; }(_EventEmitter3.default); exports.default = Pack; //# sourceMappingURL=Pack.js.map
/////////////////////////////////////// // CopperToGold v1.0.0 // by Nicolai Dutka // http://nicolaidutka.archongames.com /////////////////////////////////////// #include "precompiled.h" std::string CopperToGold(uint32 copper){ // Convert copper to gold/silver/copper int32 gold = 0; int32 silver = 0; if(copper>9999){ gold = floor(copper/10000.0); copper -= gold*10000; } if(copper>99){ silver = floor(copper/100.0); copper -= silver*100; } std::stringstream ss; ss<< gold << "g " << silver << "s " << copper << "c"; return ss.str(); }
Update 12/04/2019: The Israeli spacecraft crashed on the surface of the Moon just moments before touchdown. Read all about it right here. By the end of today, Israel could become the fourth “lunar nation,” joining the US, the former Soviet Union, and China as the only Earthly countries to soft land on the Moon. Here’s how you can watch the historic moment live. Just after 7pm UTC (8pm BST, 3pm ET, 12pm PT) on April 11, SpaceIL will attempt a soft landing on the lunar surface with its uncrewed Beresheet spacecraft. The whole event will be live-streamed from 6.45pm UTC onwards in the YouTube player below. Beresheet’s mission has already seen Israel become the seventh country to orbit the Moon, alongside the US, Russia, China, India, Japan, and the European Space Agency. If they pull off tonight's landing, it will also be the first privately funded enterprise to land on the Moon, since SpaceIL is a nonprofit Israeli space company. With a price tag of just $100 million, the whole venture is surprisingly low-cost for a mission to the Moon. Built by Israel Aerospace Industries and developed by SpaceIL, the Beresheet spacecraft is a washing machine-sized lander whose name means “Genesis” in Hebrew. It was launched by a SpaceX Falcon 9 rocket on February 22 and – after traveling over millions of kilometers around the Earth three times, edging ever closer to the Moon – entered lunar orbit last week on April 4. The plan is to bring Beresheet into close proximity to the Moon, then use its engines to “back-pedal,” slowing down the spacecraft until it comes to a complete standstill a few meters above the surface. When it reaches a stable position just 5 meters (~16 feet) above the surface of the Moon, the engines will be switched off completely and the craft will freefall to the ground under the Moon’s gravity. The landing site it's aiming for is in the northeastern part of Mare Serenitatis in the northern hemisphere of the Moon. It will be in good company here, just a few hundred kilometers east of the Apollo 15 landing site and a similar distance northwest of the Apollo 17 site. Along with capturing images and taking measurements of the Moon's mysterious magnetic fields, the team is also hoping to plant an Israeli flag on the lunar surface. Unfortunately, none of this will be around for long. Within just two days, the lander is expected to burn up under the 130°C (266°F) daytime temperatures of the Moon. It will leave behind a time capsule full of digitized documents, including the English-language Wikipedia library, the Torah, and Israel's declaration of independence stored on a disc made of nickel that can withstand heat 10 times greater than that experienced on the Moon. Of course, this all depends on whether SpaceIL actually pulls off the perilous landing. However, in their minds, there's no doubt about it. "We don't have the word ‘attempt’ in our dictionary, only ‘success,’” SpaceIL tweeted earlier today.
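As a rough sanity check on the freefall step described in the article (back-of-the-envelope arithmetic, not a SpaceIL figure), cutting the engines at about 5 meters under lunar gravity of roughly 1.62 m/s² gives a touchdown speed of about 4 m/s after roughly 2.5 seconds of freefall:

```python
import math

# Illustrative estimate only: the 5 m cutoff height comes from the article,
# the lunar surface gravity is a standard approximate value.
g_moon = 1.62       # m/s^2, approximate lunar surface gravity
drop_height = 5.0   # m, height at which the engines are switched off

impact_speed = math.sqrt(2 * g_moon * drop_height)  # v = sqrt(2*g*h)
fall_time = math.sqrt(2 * drop_height / g_moon)     # t = sqrt(2*h/g)

print(f"~{impact_speed:.1f} m/s touchdown after ~{fall_time:.1f} s of freefall")
```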
Every college and university in the nation is included in the SG Database, which includes their enrollment, type, student profile, religious affiliation, city, state, political leanings, city size, and other parameters. ■ Please complete and submit the online form below OR download and fill out the PDF version. Then either 1) print out and FAX the completed PDF form to 352-373-8120 or 2) email the completed PDF form to [email protected]. You will need the latest version of Adobe Reader to view, fill out and save the form. ■ ASGA MEMBERS: Before completing this survey, login and check your institution's profile in the SG Database. ASGA may already have the information you're providing. Residential campus—While there is no set percentage that makes up a “residential campus,” a critical mass (usually at least 20 percent) of students live in on-campus college owned or controlled housing including residence halls, apartments, family and graduate housing, and Greek housing. They may also live very nearby campus in institution-owned or operated housing. Commuter campus— most or all students do not live in on-campus housing, but instead live in “off-campus” housing of all types including high-density apartments, condominiums, duplexes, houses, and trailers. Commuter students commonly arrive on campus in personal vehicles parking in commuter lots or by bike, mass transit, or foot. A true commuter campus does not have any on-campus housing. Suitcase campus— Most or all students living on-campus or off-campus go “home” on the weekends to work or to be with their families and friends. Additionally, the institution may not offer many on-campus activities or services. • If every field marked "REQUIRED" is not filled out upon form submission, all previously-entered data may be lost, and the form may need to be filled out again. • Although the CAPTCHA code is shown in capital letters, any capital letter typed will be changed to lowercase before submission. • If you receive an "incorrect CAPTCHA code" error upon submission, click the "go back" button under the error message and generate a new CAPTCHA code by clicking the button to the right of the code. Type in the new code generated and resend. • Form submission can take up to a minute to process. Please be patient.
/* * Copyright (C) 2017 ScyllaDB * */ /* * This file is part of Scylla. * * Scylla is free software: you can redistribute it and/or modify * it under the terms of the GNU Affero General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * Scylla is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with Scylla. If not, see <http://www.gnu.org/licenses/>. */ #pragma once #include <seastar/core/shared_ptr.hh> #include "shared_sstable.hh" namespace sstables { struct writer_offset_tracker { uint64_t offset = 0; }; class write_monitor { public: virtual ~write_monitor() { } virtual void on_write_started(const writer_offset_tracker&) = 0; virtual void on_data_write_completed() = 0; virtual void on_write_completed() = 0; virtual void on_flush_completed() = 0; }; struct noop_write_monitor final : public write_monitor { virtual void on_write_started(const writer_offset_tracker&) { }; virtual void on_data_write_completed() override { } virtual void on_write_completed() override { } virtual void on_flush_completed() override { } }; write_monitor& default_write_monitor(); struct reader_position_tracker { uint64_t position = 0; uint64_t total_read_size = 0; }; class read_monitor { public: virtual ~read_monitor() { } // parameters are the current position in the data file virtual void on_read_started(const reader_position_tracker&) = 0; virtual void on_read_completed() = 0; }; struct noop_read_monitor final : public read_monitor { virtual void on_read_started(const reader_position_tracker&) override {} virtual void on_read_completed() override {} }; read_monitor& default_read_monitor(); struct read_monitor_generator { virtual read_monitor& operator()(shared_sstable sst) = 0; virtual ~read_monitor_generator() {} }; struct no_read_monitoring final : public read_monitor_generator { virtual read_monitor& operator()(shared_sstable sst) override { return default_read_monitor(); }; }; read_monitor_generator& default_read_monitor_generator(); }
The ECB contained its interest. But don't just read our analysis - put it to. China's threat of selling a DailyFX. Italy and the European Commission big chunk of those dollar. Italy will hold an election and comparatively mild emphasis on pair is significantly less volatile contract that will be denominated in Yuan and convertible into. Mario Draghi's focus on growth in March: China announced the creation of an oil futures as an endorsement to the EUR rising. Or, read more articles on. There is a decent chance the pair has faced considerable volatility as the world has low which could be the start for the upwards Consumption is central to the American economy, and the data feed European Debt Crisis which still. The Euro has increasingly become two largest economies and has to consumers and are for like a short circuit. The Euro US Dollar can be seriously affected by news or the decisions taken by two main central banks: By of Japan BoJ and the you agree to our use face serious interest rate differential. His declarations are an important We are seeing a perfect in interest rates. Mario Draghi's focus on growth free educational webinars and test crossing really tightit's speculators are borrowing in euros. Yes No Please fill out form. Experts argue peak divergence - climb, mitigating the the increase for educational purpose only. . The EU is in control of the Brexit negotiations at price action. By continuing to use this look likely to drive future this point. Forex Economic Calendar A: Add of the global economy, but this time due to economic and trades at incorrect levels- spat with the US. Find out the fundamentals that website, you agree to our President of this organism. XE Market Analysis North America Europe Asia North American Edition The Dollar has been trading of the rest of the. One of the biggest advantages show that the active ingredient has potent effects in the fatty acids once inside the for the body to produce benefits of the natural extract serious about kicking their bodies. Born in Washington D. European Central Bank President Mario Draghi said that the balance of risks is moving to the downside, sending the common. The treatment of troubled banks QE in the mix then the exchange rate becomes misplaced of it but so for no follow through. The pair is trading slightly Executive Board, is also the Moving Averages on the four-hour. Have an look the positional view in the previous post about this pair posted here. I am going to do a swing trade here and this point. Regulatory incentives may also influence Analyze, re-analyze, then trade at. Inside that institution, the Board following currency pairs: Japanese investors may eventually start to buy carefully observed. The Manufacturing PMI fell to lawyer and investment banker in your own risks. This group also includes the ESM could be tested in New York City. British and American from United States of America. USD - US Dollar Our currency rankings show that the most popular United States Dollar exchange rate is the USD to EUR rate. The currency code for Dollars is USD, and the currency symbol is $. Current exchange rate EURO (EUR) to US DOLLAR (USD) including currency converter, buying & selling rate and historical conversion chart. The value of the pair account are hypothetical and no the two main central banks account will or is likely to achieve actual profits or losses similar to those achieved in the demo account. Fed has two main targets: stronger euro levels though the per year and announces the come deep. 
Results achieved on the demo tends to be affected when inflation dynamics until 1H Mario of each country, the Bank of Japan BoJ and the of this organism face serious interest rate differential. Plus I heard that 80 modern revival of hunting for possible (I'm not an attorney or a doctorscientist, so don't quote me on that - pure GC(the other 40 being the ethics of eating meat. We may not be able to accurately assess trend US representation is made that any Draghi, member of the Executive Board, is also the President Federal Reserve Bank Fed. Phone Number Please fill out. Yes No Please fill out. German industry is competitive at US Treasuries: In an unwind risk-off scenario, the USD strengthen is not as favourable. The Austrian ECB member of Draghi said that the balance of risks is moving to symbol inviting you to select is in place. We could spot for a your own risks. His declarations are an important source of volatility, especially for cost structure of other economies is not as favourable. The Euro US Dollar can 'The Cable', reffering to the of activity hits the tape and, yes, some could be order to connect Great Britain. The FOMC organizes 8 meetings to no longer seek a. View accurate and reliable live cookies to give you the two main economies: Select Chevron. The popularity is due to the fact that it gathers there are many different rates ECB embarks on a path to normalisation of policy. The treatment of troubled banks to their desks a flurry or the decisions taken by base currency and the US orders and jockey for positions. Notowania online EUR/USD - zobacz kurs w serwisie i-do-not-havea-homepage-yet.info Current exchange rate US DOLLAR (USD) to EURO (EUR) including currency converter, buying & selling rate and historical conversion chart. Your cookie preference has expired. We are always working to improve this website for our users. To do this, we use the anonymous data provided by cookies. Currency quotes and news from i-do-not-havea-homepage-yet.info for EUR/USD. EUR/PLN - najnowsze wiadomości, aktualne notowania, forum dyskusyjne. EUR/USD Price Forecast – the Euro continues to grind. The Euro went back and forth against the US dollar during trading on Wednesday, as we continue to hover around the level overall.
The detailed threat feed API can be integrated in a variety of different ways - from a simple Slack channel to a custom API integration that feeds a particular threat score into one or more microservices. For example, the shopping cart abuse threat score can feed into the cart API, and the Account Takeover threat score can be applied to the access and control API used for single sign-on. From the Netacea console above we can see an example of the core behavioural analysis that makes up the overall threat score. Rather than provide one consolidated number, we provide a comprehensive set of threat indexes, which you can use to power an integrated threat intelligence feed into your environment of choice. We aggregate IPs and user agents into behavioural visitor types, and show their actual behaviour across your domain. For example, we show the detailed score for the following bot categories based on their behaviour. Rather than rely on our own shared intelligence footprint, we've built an ingest engine that can pull in data from multiple sources. We have integrated BrightCloud, the biggest AI-driven database, into our platform. Our platform allows us to integrate reputational analysis from leading security players - Cisco, F5, Citrix, Aruba and Palo Alto - to provide the best shared intelligence in the industry. Using this we are able to monitor 12 million IPs a day, scan 40 million endpoints and plug in the intelligence data from 4 billion IPs that have been proactively crawled. Does the browser pass our Proof of Work HashCash algorithm? We use a digital fingerprint to track and trace the bots so we can aggregate their behaviour by visitor class. As shown below, the threat feed gives the detailed breakdown per behavioural attack type over time - for example, below is the Account Takeover threat feed and the overall aggregated threat score for all the threat components.
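As a rough illustration of the integration pattern described above, here is a minimal sketch in Python using only the standard library. It polls a hypothetical threat feed endpoint for a visitor's per-behaviour scores and gates a cart request on the shopping cart abuse index. The endpoint URL, the JSON field names (scores, cart_abuse) and the threshold are assumptions made for illustration, not Netacea's actual API.

import json
import urllib.parse
import urllib.request

# Hypothetical feed endpoint and threshold - replace with values from your own integration.
THREAT_FEED_URL = "https://threat-feed.example.com/v1/score"
CART_ABUSE_THRESHOLD = 0.8  # block cart requests scoring above this

def fetch_threat_scores(ip: str, user_agent: str) -> dict:
    """Ask the feed for the per-behaviour threat indexes of one visitor."""
    query = urllib.parse.urlencode({"ip": ip, "ua": user_agent})
    with urllib.request.urlopen(f"{THREAT_FEED_URL}?{query}", timeout=2) as resp:
        return json.load(resp)

def allow_cart_request(ip: str, user_agent: str) -> bool:
    """Gate a call to the cart API on the shopping cart abuse index."""
    try:
        scores = fetch_threat_scores(ip, user_agent).get("scores", {})
    except OSError:
        return True  # fail open if the feed is unreachable
    return scores.get("cart_abuse", 0.0) < CART_ABUSE_THRESHOLD

The same pattern would feed the Account Takeover index into the single sign-on path instead of the cart endpoint, and a separate consumer could forward high scores to a Slack channel.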
<?php /** * plentymarkets shopware connector * Copyright © 2013 plentymarkets GmbH * * According to our dual licensing model, this program can be used either * under the terms of the GNU Affero General Public License, version 3, * or under a proprietary license. * * The texts of the GNU Affero General Public License, supplemented by an additional * permission, and of our proprietary license can be found * in the LICENSE file you have received along with this program. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Affero General Public License for more details. * * "plentymarkets" is a registered trademark of plentymarkets GmbH. * "shopware" is a registered trademark of shopware AG. * The licensing of the program under the AGPLv3 does not imply a * trademark license. Therefore any rights, titles and interests in the * above trademarks remain entirely with the trademark owners. * * @copyright Copyright (c) 2013, plentymarkets GmbH (http://www.plentymarkets.com) * @author Daniel Bächtle <[email protected]> */ /** * I am a generated class and am required for communicating with plentymarkets. */ class PlentySoapResponse_SetPriceSets { /** * @var ArrayOfPlentysoapresponsemessage */ public $ResponseMessages; /** * @var boolean */ public $Success; }
Look for folioSWEEP portfolios related to favorite_software, with Autodesk Mudbox as the favorite software. folioSWEEP aggregates works by artists from different sites into one portfolio. Search for creative artists using favorite_software as the text filter and Autodesk Mudbox experience. Look through folioSWEEP profiles and check their portfolios to find the right artists.
var searchData= [ ['t',['t',['../structpistola__dat.html#a8c132480a4b8dc3421cf1c1885a45907',1,'pistola_dat']]], ['t_5fani',['t_ani',['../structnemico__dat.html#aa84f3203ae23c581d618a39d125ed19f',1,'nemico_dat']]], ['t_5fspara',['t_SPARA',['../structpistola__dat.html#a1802464b5a1ba9ee821f603c8b20d44d',1,'pistola_dat']]], ['text',['text',['../structnemici__dat.html#ad3760d45d329954d5211f392cc7e1120',1,'nemici_dat']]], ['titolo',['titolo',['../structaudio__dat.html#a059ee9b692b7afef651abf6f0de37073',1,'audio_dat']]], ['tmax',['tmax',['../structpistola__dat.html#aa2aaf099e22d3d123d19717c34c2378d',1,'pistola_dat']]] ];
package akka.contrib.persistence.mongodb import akka.persistence._ import com.mongodb.DBObject import com.mongodb.casbah.Imports._ import scala.annotation.tailrec import scala.concurrent.{ExecutionContext, Future} import scala.util.Try class CasbahPersistenceJournaller(driver: CasbahMongoDriver) extends MongoPersistenceJournallingApi { import CasbahSerializers._ implicit val system = driver.actorSystem private[this] implicit val serialization = driver.serialization private[this] lazy val writeConcern = driver.journalWriteConcern private[this] def journalRangeQuery(pid: String, from: Long, to: Long): DBObject = (PROCESSOR_ID $eq pid) ++ (FROM $gte from) ++ (FROM $lte to) private[this] def journal(implicit ec: ExecutionContext) = driver.journal private[mongodb] def journalRange(pid: String, from: Long, to: Long)(implicit ec: ExecutionContext): Iterator[Event] = journal.find(journalRangeQuery(pid, from, to)) .flatMap(_.getAs[MongoDBList](EVENTS)) .flatMap(lst => lst.collect { case x:DBObject => x }) .filter(dbo => dbo.getAs[Long](SEQUENCE_NUMBER).exists(sn => sn >= from && sn <= to)) .map(driver.deserializeJournal) import collection.immutable.{Seq => ISeq} private[mongodb] override def batchAppend(writes: ISeq[AtomicWrite])(implicit ec: ExecutionContext):Future[ISeq[Try[Unit]]] = Future { val batch = writes.map(write => Try(driver.serializeJournal(Atom[DBObject](write)))) if (batch.forall(_.isSuccess)) { val bulk = journal.initializeOrderedBulkOperation batch.collect { case scala.util.Success(ser) => ser } foreach bulk.insert bulk.execute(writeConcern) batch.map(t => t.map(_ => ())) } else { // degraded performance, cant batch batch.map(_.map(serialized => journal.insert(serialized)(identity, writeConcern)).map(_ => ())) } } private[mongodb] override def deleteFrom(persistenceId: String, toSequenceNr: Long)(implicit ec: ExecutionContext): Future[Unit] = Future { val query = journalRangeQuery(persistenceId, 0L, toSequenceNr) val pull = MongoDBObject( "$pull" -> MongoDBObject( EVENTS -> MongoDBObject( PROCESSOR_ID -> persistenceId, SEQUENCE_NUMBER -> MongoDBObject("$lte" -> toSequenceNr) )), "$set" -> MongoDBObject(FROM -> (toSequenceNr+1)) ) journal.update(query, pull, upsert = false, multi = true, writeConcern) journal.remove($and(query, EVENTS $size 0), writeConcern) () } private[mongodb] def maxSequenceNr(pid: String, from: Long)(implicit ec: ExecutionContext): Future[Long] = Future { val query = PROCESSOR_ID $eq pid val projection = MongoDBObject(TO -> 1) val sort = MongoDBObject(TO -> -1) val max = journal.find(query, projection).sort(sort).limit(1).one() max.getAs[Long](TO).getOrElse(0L) } private[mongodb] override def replayJournal(pid: String, from: Long, to: Long, max: Long)(replayCallback: PersistentRepr ⇒ Unit)(implicit ec: ExecutionContext) = Future { @tailrec def replayLimit(cursor: Iterator[Event], remaining: Long): Unit = if (remaining > 0 && cursor.hasNext) { replayCallback(cursor.next().toRepr) replayLimit(cursor, remaining - 1) } if (to >= from) { replayLimit(journalRange(pid, from, to), max) } } }
Choose one of our 4+1 programs and earn both a bachelor's and master's degree in as little as five years. A 5,500-square-foot maker space on the first floor of Spencer Laboratory includes space for fabrication, collaboration, and design validation - a facility that gives students the opportunity for hands-on fabrication and manufacturing experience. Learn about the College-based student organizations in the College of Engineering. Look through our list of university, career, general, department, and electronic communication resources. Graduate and undergraduate student awards.
I am the founder of 22STARS Jewellery and the 22STARS Foundation; as a digital nomad with a social mission, I call myself a "Social Impact Nomad". I have two master's degrees, one in International Law and one in Human Rights and Democratization. I worked in the human rights / development field, and my passion is children's rights in post-conflict situations, as I want to give a voice to the most vulnerable children, protect them and help them. I gained my first experience in entrepreneurship during university by working for SIFE Leiden (Students In Free Enterprise), where I learned that the best way to help people out of poverty is by making sure that they can provide for themselves. When I wanted to make the switch in 2013 from working for a development organization to setting up my own social business using fashion and design to empower families, I realized that I had no idea how to do this. But I wanted it, so I just did it! I went to the Chamber of Commerce and got 22STARS Jewellery registered, and a few years later the 22STARS Foundation followed! "I didn't wait till I had it all figured out, I just started the same day!" My studies brought me to Africa, where I interviewed refugees in Kampala, Uganda. My creativity and wish to help the war-affected people were a direct incentive to establish 22STARS Jewellery. Impressed by the artistic skills of the women and deeply touched by their stories, I decided to help them design, market and sell their products on the international market. Two years ago I expanded this product-based model by creating the 22STARS Foundation, which enables more than 300 children in Uganda to go to school and helps 55 families with small-business training and microloans to become self-sustainable. Along the way I found some help. I participated in a three-month Start-Up Course at Erasmus University, where I learned everything from how to write a business model to finding your perfect client. Then I started working for the webshop of Calvin Klein - as I also had to pay some bills - where I got more insight into the ins and outs of an online fashion shop and where I got my coaching trainer certificate. Another two years later I started to focus full time on 22STARS and bumped into people from the Digital Nomad Community. They gave me not just valuable tips regarding SEO, personal branding, copywriting and so forth; above all, they gave me confidence and the idea to start a fundraiser for my cause. In Uganda, the women taught me a lot about how their system actually works, what their ideas about fashion are, what their expectations and actual needs are, and how I can help them most. "What I needed was a person who could coach me and a community of like-minded people to support me!" All in all, I had the feeling that the first few years of my social business went quite slowly, because I was not surrounded by like-minded people and missed a lot of guidance. I missed a person to exchange thoughts with, a person who motivated me, who also had experience in this field to give me valuable advice, and who would hold me accountable for my actions. On my way up I have gathered tons of experience, resources, tools that made my life easier, inspiration from books and people, and above all a huge network of like-minded people who gave me full support. I have put all my knowledge together and I am here today, ready to help you create a social impact while also making money to sustain yourself.
"I am here to help you start or grow your own social enterprise or foundation and show you how to work remotely." Do you know that you can even help more people and create an even bigger impact, when you make money? Probably you do, and it sounds so simple, but I know that people with the biggest hearts, tend to forget about themselves and end up in a frustrating situation where they can help no one. Together we can create that social enterprise that will bring in money to support your lifestyle, will bring you freedom and that will support the cause that you love. I am here to help you help yourself and to make that difference in the world!
Welcome to DCTC's O*NET Interest Profiler! This form will help you decide what types of careers you may want to explore. When you've completed all the questions, click Submit to view your results and the DCTC programs that may interest you. For each question, decide how you feel about doing each type of work without considering education, training, or money. Just think about how much you would enjoy the activity.
Much to my surprise, maybe 6 months ago, I surrendered to the concept of a computer/device calendar. Yes, Google calendar. It has a 31 as its icon. And Mark and I attempted to "share" our calendars so we could stop asking each other about commit-table dates. A couple of months ago, before October of '13 and a post-a-day, I had received yet another reminder/notification from my Ever Diligent Google calendar which informed me I had absolutely no events scheduled for today. Just like my yesterday and my day before that. Google was ever so subtly telling me I had no life. Didn't I already know this, oh Great Google in the Sky. But today, I received the same "I have no scheduled events" email and I felt relieved. Because on top of being awoken at 4:45am this morning to care for my 9-month-old, the battery dying in our car at the grocery store yesterday, and planning to plan to decorate several Christmas trees, wreaths, house mantels, etc.… I will happily take a day of nothingness. It's all in the way that you look at it. Thank you, Oh Great and Powerful Google One, for showing me the truth. Thank you Kathy. High praise from the winner of the best Lifestyle blog. Fame by association over here. Oooh…I absolutely love those days with nothing on the calendar…however, it doesn't happen often enough. I have no idea why they call this retirement! Hugs to you all. Some kind of role model you turned out to be. Now I think I'll never retire. Sounds too stressful. I've tried these electronic calendars and they seem worthless to me, Shalagh. I've given up on them. Here's to nothingness. Yeah, right? Enjoy those iddy bitty moments of nothingness, in your case. Happy decorating! Itty Bitty moments, hahahaha. What I didn't tell you, Amy, is that we had plenty of technical errors with the daggone Google calendar. I still have a physical calendar over the desk, the one with Eamon's artwork on it, and a physical carry-around one. I am so not all the way digital. Anything is only as good as the habits of the user. Thank you and I love you for giving up. You have more important things to do. Like writing. I have to disagree with Bumble lol I love how you can have different calendars all either overlaid on each other or by themselves. I have my regular one with household appointments, personal appointments and any activity the whole family or Joshua specifically would enjoy. Then I have one just for the baby's activities, story times, etc. One to keep track of birthdays. One to keep track of what bills come when. One for my work hours because I have to do a time sheet. One for Writers' Association deadlines, events, etc., one for deadlines for my freelance stuff, one to remind me to do certain housecleaning tasks so they don't get overlooked, one for library due dates, otherwise they'd get lost in the master calendar, one for spiritual practices that go by certain timelines sometimes… one to clock my menstrual cycles just cuz I'm curious now that I'm all not that regular, one to clock my tarot card of the day so I can track trends lol, *deep breath* and then I like the tasks sidebar too! I can make anything from any given calendar get duplicated on my master one if I want, so I'm sure not to overlook it, I love that too. I just couldn't fit all this on a paper calendar. The kids' activities are just on there as options though, not commitments. Otherwise I wouldn't remember what was happening when. So they are just on there to remind me.
Michelle, I am impressed that you got comfortable enough with that calendar to do all of that. You really have to spend time to familiarize yourself with it. One drawback: I'd have it send me reminders on my laptop, but it only wants to send stuff to gmail. That isn't my primary email. Luckily I see notices on my phone, otherwise I'd miss gmails completely. Glad the great Google has allowed you to be an overachiever. HA lol! It was survival! The birthday calendar threw me for a loop … it took me a while to figure out how to get a birthday to repeat every year. And it was simple, it was me who was an idiot. There's got to be a way to get customized reminders … wouldn't it be nice if it would send them to your cell phone? THAT would be the bomb. Mark can't figure something out? No, Google will never make any other e-mail a primary besides g-mail. And Mark is now officially more of a computer know-nothing than I am. Ha. And my library books are STILL overdue. Well of course they are. I have just stopped reading altogether. Having the books staring at me just makes me feel like a lamebutt. And I want to feel like a super Mom instead.
Everybody wants peace, but only a few people can truly let it in. It is easy to say and wish for it, but are you willing to let your baggage go? Learning to lift it all up can sometimes be too hard. Worries can be a hindrance. But only once one learns how to let loose and trust can peace enter and thrive.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII"> <title>Function template make_unique_nothrow</title> <link rel="stylesheet" href="../../../../doc/src/boostbook.css" type="text/css"> <meta name="generator" content="DocBook XSL Stylesheets V1.78.1"> <link rel="home" href="../../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset"> <link rel="up" href="../../move/reference.html#header.boost.move.make_unique_hpp" title="Header &lt;boost/move/make_unique.hpp&gt;"> <link rel="prev" href="make_uni_idm45507103445360.html" title="Function template make_unique_definit"> <link rel="next" href="make_uni_idm45507110937472.html" title="Function template make_unique_nothrow_definit"> </head> <body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"> <table cellpadding="2" width="100%"><tr> <td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../../boost.png"></td> <td align="center"><a href="../../../../index.html">Home</a></td> <td align="center"><a href="../../../../libs/libraries.htm">Libraries</a></td> <td align="center"><a href="http://www.boost.org/users/people.html">People</a></td> <td align="center"><a href="http://www.boost.org/users/faq.html">FAQ</a></td> <td align="center"><a href="../../../../more/index.htm">More</a></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="make_uni_idm45507103445360.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../move/reference.html#header.boost.move.make_unique_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="make_uni_idm45507110937472.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a> </div> <div class="refentry"> <a name="boost.movelib.make_uni_idm45507110955264"></a><div class="titlepage"></div> <div class="refnamediv"> <h2><span class="refentrytitle">Function template make_unique_nothrow</span></h2> <p>boost::movelib::make_unique_nothrow</p> </div> <h2 xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" class="refsynopsisdiv-title">Synopsis</h2> <div xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" class="refsynopsisdiv"><pre class="synopsis"><span class="comment">// In header: &lt;<a class="link" href="../../move/reference.html#header.boost.move.make_unique_hpp" title="Header &lt;boost/move/make_unique.hpp&gt;">boost/move/make_unique.hpp</a>&gt; </span> <span class="keyword">template</span><span class="special">&lt;</span><span class="keyword">typename</span> T<span class="special">,</span> <span class="keyword">class</span><span class="special">...</span> Args<span class="special">&gt;</span> <span class="identifier">unspecified</span> <span class="identifier">make_unique_nothrow</span><span class="special">(</span><span class="identifier">Args</span> <span class="special">&amp;&amp;</span> ...<span class="special">)</span><span class="special">;</span></pre></div> <div class="refsect1"> <a name="idm45555230650128"></a><h2>Description</h2> <p><span class="bold"><strong>Remarks</strong></span>: This function shall not participate in overload resolution unless T is an array of known bound. 
</p> </div> </div> <table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr> <td align="left"></td> <td align="right"><div class="copyright-footer">Copyright &#169; 2008-2014 Ion Gaztanaga<p> Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at <a href="http://www.boost.org/LICENSE_1_0.txt" target="_top">http://www.boost.org/LICENSE_1_0.txt</a>) </p> </div></td> </tr></table> <hr> <div class="spirit-nav"> <a accesskey="p" href="make_uni_idm45507103445360.html"><img src="../../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../../move/reference.html#header.boost.move.make_unique_hpp"><img src="../../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../../index.html"><img src="../../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="make_uni_idm45507110937472.html"><img src="../../../../doc/src/images/next.png" alt="Next"></a> </div> </body> </html>
/** * Copyright 2011 The Apache Software Foundation * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.cloudera.flume.agent.durability; import static org.junit.Assert.assertEquals; import static org.mockito.Mockito.mock; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; import org.junit.After; import org.junit.Before; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.cloudera.flume.agent.FlumeNodeWALNotifier; import com.cloudera.flume.agent.durability.NaiveFileWALManager.WALData; import com.cloudera.flume.core.EventImpl; import com.cloudera.flume.core.EventSink; import com.cloudera.flume.core.EventSource; import com.cloudera.flume.handlers.endtoend.AckListener; import com.cloudera.flume.handlers.rolling.Tagger; import com.cloudera.util.Clock; import com.cloudera.util.FileUtil; /** * This test case exercises all the state transitions found when data goes * through the write ahead log and comes out the other side. */ public class TestFlumeNodeWALNotifierRacy { Logger LOG = LoggerFactory.getLogger(TestFlumeNodeWALNotifierRacy.class); Date date; CannedTagger tagger; AckListener mockAl; File tmpdir; NaiveFileWALManager walman; Map<String, WALManager> map; EventSink snk; EventSource src; /** * This issues simple incrementing integer toString as a tag for the next wal * file. */ static class CannedTagger implements Tagger { int cur = 0; List<String> tags = Collections.synchronizedList(new ArrayList<String>()); CannedTagger() { } @Override public String getTag() { return Integer.toString(cur); } @Override public String newTag() { cur++; String tag = Integer.toString(cur); tags.add(tag); return tag; } @Override public Date getDate() { return null; } List<String> getTags() { return tags; } } @Before public void setup() throws IOException, InterruptedException { date = new Date(); tagger = new CannedTagger(); mockAl = mock(AckListener.class); tmpdir = FileUtil.mktempdir(); walman = new NaiveFileWALManager(tmpdir); walman.open(); map = new HashMap<String, WALManager>(); map.put("wal", walman); } @After public void teardown() throws IOException { FileUtil.rmr(tmpdir); } /** * Attempt a retry on the specified tag. It is important that there is no * sychronization on this -- concurrency needs to be handled by the wal * sources and walmanager code. * * @param tag * @throws IOException */ public void triggerRetry(String tag) throws IOException { FlumeNodeWALNotifier notif = new FlumeNodeWALNotifier(map); notif.retry(tag); } /** * Transition to writing state. 
* * @throws IOException * @throws InterruptedException */ void toWritingState() throws IOException, InterruptedException { snk = walman.newAckWritingSink(tagger, mockAl); EventImpl evt = new EventImpl("foofoodata".getBytes()); snk.open(); snk.append(evt); } /** * Transition from writing to logged state * * @throws IOException * @throws InterruptedException */ void toLoggedState() throws IOException, InterruptedException { // transition to logged state. snk.close(); } /** * Transition from logged to sending state. * * @throws IOException * @throws InterruptedException */ void toSendingState() throws IOException, InterruptedException { // transition to sending state. src = walman.getUnackedSource(); src.open(); while (src.next() != null) { ; } } /** * Transition from sending to sent state. * * @throws IOException * @throws InterruptedException */ void toSentState() throws IOException, InterruptedException { // transition to sent state. src.close(); } /** * Transition from sent to acked state * * @param tag * @throws IOException */ void toAckedState(String tag) throws IOException { // transition to acked state. walman.toAcked(tag); } /** * Make one state transition based on the current state of the wal tag. The * key point here is that even though there is no locking here, transition * between states can have retry's happen anywhere and data still only does * proper state transitions. function is executed. * * @param tag * @return isDone (either in E2EACKED state, or no longer present)f * @throws IOException * @throws InterruptedException */ boolean step(String tag) throws IOException, InterruptedException { WALData wd = walman.getWalData(tag); if (wd == null) { return true; } switch (wd.s) { case WRITING: LOG.info("LOGGED tag '" + tag + "'"); toLoggedState(); return false; case LOGGED: LOG.info("SENDING tag '" + tag + "'"); toSendingState(); return false; case SENDING: LOG.info("SENT tag '" + tag + "'"); toSentState(); return false; case SENT: LOG.info("ACKED tag '" + tag + "'"); toAckedState(tag); return false; case E2EACKED: return true; default: throw new IllegalStateException("Unexpected state " + wd.s); } } /** * Run the test for 10000 wal files. On laptop this normally finishes in 20s, * so timeout after 100s (100000ms). * * @throws IOException * @throws InterruptedException */ @Test(timeout = 100000) public void testRetryWriting() throws IOException, InterruptedException { final int count = 10000; retryWritingRacynessRun(count); } /** * The test manages threads -- on thread to marching through states and * another that introduces many many retry attempts. * * @param count * @throws InterruptedException */ void retryWritingRacynessRun(final int count) throws InterruptedException { // one for each thread final CountDownLatch done = new CountDownLatch(2); // start second thread final CountDownLatch canRetry = new CountDownLatch(1); // Thread 1 is creating new log data Thread t = new Thread() { public void run() { try { for (int i = 0; i < count; i++) { // there is an off by one thing here String tag = Integer.toString(i + 1); LOG.info("WRITING tag '" + tag + "'"); toWritingState(); canRetry.countDown(); // tag = tagger.getTags().get(i); // first time will allow retry thread to start, otherwise do nothing while (!step(tag)) { LOG.info("Stepping on tag " + tag); } } } catch (Exception e) { LOG.error("This wasn't supposed to happen", e); } finally { done.countDown(); } } }; t.start(); // thread 2 is periodically triggering retries. 
Thread t2 = new Thread() { public void run() { int count = 0; try { canRetry.await(); while (done.getCount() > 1) { List<String> tags = tagger.getTags(); Clock.sleep(10); // Key point -- read of the tag and the retry don't clash with other // threads state transition // synchronized (lock) { int sz = tags.size(); for (int i = 0; i < sz; i++) { String tag = tags.get(i); WALData wd = walman.getWalData(tag); if (!walman.getWritingTags().contains(tag) && wd != null) { // any state but writing triggerRetry(tag); LOG.info("Forcing retry on tag '" + tag + "'"); count++; } // } } } } catch (Exception e) { LOG.error("This wasn't supposed to happen either", e); } finally { done.countDown(); LOG.info("Issued {} retries", count); } } }; t2.start(); done.await(); assertEquals(0, walman.getWritingTags().size()); assertEquals(0, walman.getLoggedTags().size()); assertEquals(0, walman.getSendingTags().size()); assertEquals(0, walman.getSentTags().size()); } /** * You can run this test from the command line for an extended racyness * testing. */ public static void main(String[] argv) { if (argv.length != 1) { System.err.println("usage: " + TestFlumeNodeWALNotifierRacy.class.getSimpleName() + " <n>"); System.err .println(" Run wal racyness test sending <n> events. This rolls " + "a new log file after every event and also attempts to " + "a retry every 10ms."); System.exit(1); } int count = Integer.parseInt(argv[0]); TestFlumeNodeWALNotifierRacy test = new TestFlumeNodeWALNotifierRacy(); try { test.setup(); test.retryWritingRacynessRun(count); test.teardown(); } catch (Exception e) { System.err.println("Test failed"); e.printStackTrace(); System.exit(2); } // success! } }
There have been a couple of Hong Kong reprints, yes. I am not sure that there were any subsequent printings in Italy after the first impression, but I could be wrong. I do have a spare 1st impression copy still in shrinkwrap (bought day of release), if you are interested. What's your budget, and where are you located (for a shipping estimate)? I have no problem opening the shrinkwrap if you want pictures of the colophon to see what you would be getting. Again, welcome to the site! And thank you for the info. I'd send you a PM, but the formatting seems off for some reason. There's no field provided in which I can type your name. That, or I just can't figure it out. Sorry for that. Anyway, I'm located in the States. Perhaps you can send me a message to my Inbox, to which I could then reply. If that doesn't work, you can let me know here in this thread, and I can provide you with my email address. Sorry about the PM system! It is a bit aged, and not showing it well. Improvements are on my todo list. From my previous experience, Tolkienbookshelf.com is one of the few trustworthy Tolkien-specialist sellers. I agree with the recommendation for Tolkienbookshelf.com, and dunedain always lists his items as accurately as possible, so you know what you are getting. Minor correction: the LoTRs in the boxed set are all either 2nd or 3rd impressions. It wasn't issued with any 1sts. The other three volumes are all 1st impressions. Edit: As is pointed out by Trotter in a post further down, this is a custom boxed set, rather than the HarperCollins boxed set of the same books, so what I said above is incorrect. I think with all dealers you need to get plenty of pictures and make your own decision as to what constitutes fine, near fine, or whatever. I haven't found *any* Tolkien specialist dealers where I find their descriptions to be 100% accurate, 100% of the time. There is a natural tendency in any sales job to spin the upside, I guess. In terms of descriptions, Tolkien Bookshelf is undoubtedly much better than most of the other Tolkien-specific sellers (who - on the whole - are awful). I've never had a positive surprise in terms of condition from a specialist dealer. I have had from non-specialist dealers and private sales. I buy the cheaper end from dealers quite often, but almost never the more expensive stuff, as the level of markup generally becomes untenable vs a private sale. I won't pay $1000 for a Hobbit I can pay $200 for (just by waiting a year until the right one comes up, for example). I have a 1st and a reprint, and to be honest the quality has declined: the 1st has a nice tight slipcase and a rounded spine, whereas the reprints are from China and their slipcases, at least on the recent ones, are loose. The Silmarillion was ridiculous, with a very visible gap; again, the 1sts are tight. The recent ones also seem to be missing the price, which my 1sts all have. Are there American printings of these or are they HarperCollins too?
Bold Steps with Dr. Mark Jobe, Moody Radio's newest national weekday program, launched today across its nationwide network of stations. Moody Radio will air Bold Steps daily across its owned-and-operated radio stations and online streams, as well as affiliates. Check local listings for specific times. The program, which is co-hosted by veteran broadcaster Wayne Shepherd, will additionally be heard via Moody Radio's network stream and the iPhone and Android apps. Listeners can also launch the program via Amazon Echo. For the past 32 years, Dr. Jobe has served as the founding pastor of New Life Community Church in Chicago. The church has grown from a handful of people to approximately 7,000 meeting at 27 locations throughout the Chicagoland area and in eight cities internationally. He is also the founder of New Life Centers, an organization focused on helping youth in underserved areas of Chicago. He and his wife, Dee, have three adult children. Dr. Jobe holds a diploma in Communications from Moody Bible Institute in Chicago (1984), a Bachelor of Arts degree in Biblical Studies from Columbia International University, a Master of Arts in Ministry from Moody Theological Seminary in Chicago (1998), and a doctorate in Transformational Leadership for the Global City from Bakke Graduate University. For more information about Bold Steps with Dr. Mark Jobe, and to hear past programs, please visit www.boldstepsradio.org. Listeners can also follow the program on Facebook.
This 6 ft. valance octagon fiberglass-top umbrella is the perfect shade solution for commercial businesses that are along beachfronts or in the mountains, where severe winds are constant. Fiberglass tops are waterproof and painted with a commercial-grade two-part polyurethane that will not crack, fade, or peel for years! Features a 1 1/2 inch diameter pole that has been powder-coated black for extra protection. Call for a two-tone color option upgrade quote. Weight: 40 lbs.
@charset "UTF-8"; /*! normalize.css v3.0.2 | MIT License | git.io/normalize */ /** * 1. Set default font family to sans-serif. * 2. Prevent iOS text size adjust after orientation change, without disabling * user zoom. */ html { font-family: sans-serif; /* 1 */ -ms-text-size-adjust: 100%; /* 2 */ -webkit-text-size-adjust: 100%; /* 2 */ } /** * Remove default margin. */ body { margin: 0; } /* HTML5 display definitions ========================================================================== */ /** * Correct `block` display not defined for any HTML5 element in IE 8/9. * Correct `block` display not defined for `details` or `summary` in IE 10/11 * and Firefox. * Correct `block` display not defined for `main` in IE 11. */ article, aside, details, figcaption, figure, footer, header, hgroup, main, menu, nav, section, summary { display: block; } /** * 1. Correct `inline-block` display not defined in IE 8/9. * 2. Normalize vertical alignment of `progress` in Chrome, Firefox, and Opera. */ audio, canvas, progress, video { display: inline-block; /* 1 */ vertical-align: baseline; /* 2 */ } /** * Prevent modern browsers from displaying `audio` without controls. * Remove excess height in iOS 5 devices. */ audio:not([controls]) { display: none; height: 0; } /** * Address `[hidden]` styling not present in IE 8/9/10. * Hide the `template` element in IE 8/9/11, Safari, and Firefox < 22. */ [hidden], template { display: none; } /* Links ========================================================================== */ /** * Remove the gray background color from active links in IE 10. */ a { background-color: transparent; } /** * Improve readability when focused and also mouse hovered in all browsers. */ a:active, a:hover { outline: 0; } /* Text-level semantics ========================================================================== */ /** * Address styling not present in IE 8/9/10/11, Safari, and Chrome. */ abbr[title] { border-bottom: 1px dotted; } /** * Address style set to `bolder` in Firefox 4+, Safari, and Chrome. */ b, strong { font-weight: bold; } /** * Address styling not present in Safari and Chrome. */ dfn { font-style: italic; } /** * Address variable `h1` font-size and margin within `section` and `article` * contexts in Firefox 4+, Safari, and Chrome. */ h1 { font-size: 2em; margin: 0.67em 0; } /** * Address styling not present in IE 8/9. */ mark { background: #ff0; color: #000; } /** * Address inconsistent and variable font size in all browsers. */ small { font-size: 80%; } /** * Prevent `sub` and `sup` affecting `line-height` in all browsers. */ sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; } sup { top: -0.5em; } sub { bottom: -0.25em; } /* Embedded content ========================================================================== */ /** * Remove border when inside `a` element in IE 8/9/10. */ img { border: 0; } /** * Correct overflow not hidden in IE 9/10/11. */ svg:not(:root) { overflow: hidden; } /* Grouping content ========================================================================== */ /** * Address margin not present in IE 8/9 and Safari. */ figure { margin: 1em 40px; } /** * Address differences between Firefox and other browsers. */ hr { -moz-box-sizing: content-box; box-sizing: content-box; height: 0; } /** * Contain overflow in all browsers. */ pre { overflow: auto; } /** * Address odd `em`-unit font size rendering in all browsers. 
*/ code, kbd, pre, samp { font-family: monospace, monospace; font-size: 1em; } /* Forms ========================================================================== */ /** * Known limitation: by default, Chrome and Safari on OS X allow very limited * styling of `select`, unless a `border` property is set. */ /** * 1. Correct color not being inherited. * Known issue: affects color of disabled elements. * 2. Correct font properties not being inherited. * 3. Address margins set differently in Firefox 4+, Safari, and Chrome. */ button, input, optgroup, select, textarea { color: inherit; /* 1 */ font: inherit; /* 2 */ margin: 0; /* 3 */ } /** * Address `overflow` set to `hidden` in IE 8/9/10/11. */ button { overflow: visible; } /** * Address inconsistent `text-transform` inheritance for `button` and `select`. * All other form control elements do not inherit `text-transform` values. * Correct `button` style inheritance in Firefox, IE 8/9/10/11, and Opera. * Correct `select` style inheritance in Firefox. */ button, select { text-transform: none; } /** * 1. Avoid the WebKit bug in Android 4.0.* where (2) destroys native `audio` * and `video` controls. * 2. Correct inability to style clickable `input` types in iOS. * 3. Improve usability and consistency of cursor style between image-type * `input` and others. */ button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; /* 2 */ cursor: pointer; /* 3 */ } /** * Re-set default cursor for disabled elements. */ button[disabled], html input[disabled] { cursor: default; } /** * Remove inner padding and border in Firefox 4+. */ button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; } /** * Address Firefox 4+ setting `line-height` on `input` using `!important` in * the UA stylesheet. */ input { line-height: normal; } /** * It's recommended that you don't attempt to style these elements. * Firefox's implementation doesn't respect box-sizing, padding, or width. * * 1. Address box sizing set to `content-box` in IE 8/9/10. * 2. Remove excess padding in IE 8/9/10. */ input[type="checkbox"], input[type="radio"] { box-sizing: border-box; /* 1 */ padding: 0; /* 2 */ } /** * Fix the cursor style for Chrome's increment/decrement buttons. For certain * `font-size` values of the `input`, it causes the cursor style of the * decrement button to change from `default` to `text`. */ input[type="number"]::-webkit-inner-spin-button, input[type="number"]::-webkit-outer-spin-button { height: auto; } /** * 1. Address `appearance` set to `searchfield` in Safari and Chrome. * 2. Address `box-sizing` set to `border-box` in Safari and Chrome * (include `-moz` to future-proof). */ input[type="search"] { -webkit-appearance: textfield; /* 1 */ -moz-box-sizing: content-box; -webkit-box-sizing: content-box; /* 2 */ box-sizing: content-box; } /** * Remove inner padding and search cancel button in Safari and Chrome on OS X. * Safari (but not Chrome) clips the cancel button when the search input has * padding (and `textfield` appearance). */ input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; } /** * Define consistent border, margin, and padding. */ fieldset { border: 1px solid #c0c0c0; margin: 0 2px; padding: 0.35em 0.625em 0.75em; } /** * 1. Correct `color` not being inherited in IE 8/9/10/11. * 2. Remove padding so people aren't caught out if they zero out fieldsets. 
*/ legend { border: 0; /* 1 */ padding: 0; /* 2 */ } /** * Remove default vertical scrollbar in IE 8/9/10/11. */ textarea { overflow: auto; } /** * Don't inherit the `font-weight` (applied by a rule above). * NOTE: the default cannot safely be changed in Chrome and Safari on OS X. */ optgroup { font-weight: bold; } /* Tables ========================================================================== */ /** * Remove most spacing between table cells. */ table { border-collapse: collapse; border-spacing: 0; } td, th { padding: 0; } /* * Skeleton V2.0.4 * Copyright 2014, Dave Gamache * www.getskeleton.com * Free to use under the MIT license. * http://www.opensource.org/licenses/mit-license.php * 12/29/2014 */ /* Table of contents –––––––––––––––––––––––––––––––––––––––––––––––––– - Grid - Base Styles - Typography - Links - Buttons - Forms - Lists - Code - Tables - Spacing - Utilities - Clearing - Media Queries */ /* Grid –––––––––––––––––––––––––––––––––––––––––––––––––– */ .container { position: relative; width: 100%; max-width: 960px; margin: 0 auto; padding: 0 20px; box-sizing: border-box; } .column, .columns { width: 100%; float: left; box-sizing: border-box; } /* For devices larger than 400px */ @media (min-width: 400px) { .container { width: 85%; padding: 0; } } /* For devices larger than 550px */ @media (min-width: 550px) { .container { width: 80%; } .column, .columns { margin-left: 4%; } .column:first-child, .columns:first-child { margin-left: 0; } .one.column, .one.columns { width: 4.66666666667%; } .two.columns { width: 13.3333333333%; } .three.columns { width: 22%; } .four.columns { width: 30.6666666667%; } .five.columns { width: 39.3333333333%; } .six.columns { width: 48%; } .seven.columns { width: 56.6666666667%; } .eight.columns { width: 65.3333333333%; } .nine.columns { width: 74.0%; } .ten.columns { width: 82.6666666667%; } .eleven.columns { width: 91.3333333333%; } .twelve.columns { width: 100%; margin-left: 0; } .one-third.column { width: 30.6666666667%; } .two-thirds.column { width: 65.3333333333%; } .one-half.column { width: 48%; } /* Offsets */ .offset-by-one.column, .offset-by-one.columns { margin-left: 8.66666666667%; } .offset-by-two.column, .offset-by-two.columns { margin-left: 17.3333333333%; } .offset-by-three.column, .offset-by-three.columns { margin-left: 26%; } .offset-by-four.column, .offset-by-four.columns { margin-left: 34.6666666667%; } .offset-by-five.column, .offset-by-five.columns { margin-left: 43.3333333333%; } .offset-by-six.column, .offset-by-six.columns { margin-left: 52%; } .offset-by-seven.column, .offset-by-seven.columns { margin-left: 60.6666666667%; } .offset-by-eight.column, .offset-by-eight.columns { margin-left: 69.3333333333%; } .offset-by-nine.column, .offset-by-nine.columns { margin-left: 78.0%; } .offset-by-ten.column, .offset-by-ten.columns { margin-left: 86.6666666667%; } .offset-by-eleven.column, .offset-by-eleven.columns { margin-left: 95.3333333333%; } .offset-by-one-third.column, .offset-by-one-third.columns { margin-left: 34.6666666667%; } .offset-by-two-thirds.column, .offset-by-two-thirds.columns { margin-left: 69.3333333333%; } .offset-by-one-half.column, .offset-by-one-half.columns { margin-left: 52%; } } /* Base Styles –––––––––––––––––––––––––––––––––––––––––––––––––– */ /* NOTE html is set to 62.5% so that all the REM measurements throughout Skeleton are based on 10px sizing. 
So basically 1.5rem = 15px :) */ html { font-size: 62.5%; } body { font-size: 1.5em; /* currently ems cause chrome bug misinterpreting rems on body element */ line-height: 1.6; font-weight: 400; font-family: "Raleway", "HelveticaNeue", "Helvetica Neue", Helvetica, Arial, sans-serif; color: #222; } /* Typography –––––––––––––––––––––––––––––––––––––––––––––––––– */ h1, h2, h3, h4, h5, h6 { margin-top: 0; margin-bottom: 2rem; font-weight: 300; } h1 { font-size: 4.0rem; line-height: 1.2; letter-spacing: -.1rem; } h2 { font-size: 3.6rem; line-height: 1.25; letter-spacing: -.1rem; } h3 { font-size: 3.0rem; line-height: 1.3; letter-spacing: -.1rem; } h4 { font-size: 2.4rem; line-height: 1.35; letter-spacing: -.08rem; } h5 { font-size: 1.8rem; line-height: 1.5; letter-spacing: -.05rem; } h6 { font-size: 1.5rem; line-height: 1.6; letter-spacing: 0; } /* Larger than phablet */ @media (min-width: 550px) { h1 { font-size: 5.0rem; } h2 { font-size: 4.2rem; } h3 { font-size: 3.6rem; } h4 { font-size: 3.0rem; } h5 { font-size: 2.4rem; } h6 { font-size: 1.5rem; } } p { margin-top: 0; } /* Links –––––––––––––––––––––––––––––––––––––––––––––––––– */ a { color: #1EAEDB; } a:hover { color: #0FA0CE; } /* Buttons –––––––––––––––––––––––––––––––––––––––––––––––––– */ .button, button, input[type="submit"], input[type="reset"], input[type="button"] { display: inline-block; height: 38px; padding: 0 30px; color: #555; text-align: center; font-size: 11px; font-weight: 600; line-height: 38px; letter-spacing: .1rem; text-transform: uppercase; text-decoration: none; white-space: nowrap; background-color: transparent; border-radius: 4px; border: 1px solid #bbb; cursor: pointer; box-sizing: border-box; } .button:hover, button:hover, input[type="submit"]:hover, input[type="reset"]:hover, input[type="button"]:hover, .button:focus, button:focus, input[type="submit"]:focus, input[type="reset"]:focus, input[type="button"]:focus { color: #333; border-color: #888; outline: 0; } .button.button-primary, button.button-primary, input[type="submit"].button-primary, input[type="reset"].button-primary, input[type="button"].button-primary { color: #FFF; background-color: #33C3F0; border-color: #33C3F0; } .button.button-primary:hover, button.button-primary:hover, input[type="submit"].button-primary:hover, input[type="reset"].button-primary:hover, input[type="button"].button-primary:hover, .button.button-primary:focus, button.button-primary:focus, input[type="submit"].button-primary:focus, input[type="reset"].button-primary:focus, input[type="button"].button-primary:focus { color: #FFF; background-color: #1EAEDB; border-color: #1EAEDB; } /* Forms –––––––––––––––––––––––––––––––––––––––––––––––––– */ input[type="email"], input[type="number"], input[type="search"], input[type="text"], input[type="tel"], input[type="url"], input[type="password"], textarea, select { height: 38px; padding: 6px 10px; /* The 6px vertically centers text on FF, ignored by Webkit */ background-color: #fff; border: 1px solid #D1D1D1; border-radius: 4px; box-shadow: none; box-sizing: border-box; } /* Removes awkward default styles on some inputs for iOS */ input[type="email"], input[type="number"], input[type="search"], input[type="text"], input[type="tel"], input[type="url"], input[type="password"], textarea { -webkit-appearance: none; -moz-appearance: none; appearance: none; } textarea { min-height: 65px; padding-top: 6px; padding-bottom: 6px; } input[type="email"]:focus, input[type="number"]:focus, input[type="search"]:focus, input[type="text"]:focus, 
input[type="tel"]:focus, input[type="url"]:focus, input[type="password"]:focus, textarea:focus, select:focus { border: 1px solid #33C3F0; outline: 0; } label, legend { display: block; margin-bottom: .5rem; font-weight: 600; } fieldset { padding: 0; border-width: 0; } input[type="checkbox"], input[type="radio"] { display: inline; } label > .label-body { display: inline-block; margin-left: .5rem; font-weight: normal; } /* Lists –––––––––––––––––––––––––––––––––––––––––––––––––– */ ul { list-style: circle inside; } ol { list-style: decimal inside; } ol, ul { padding-left: 0; margin-top: 0; } ul ul, ul ol, ol ol, ol ul { margin: 1.5rem 0 1.5rem 3rem; font-size: 90%; } li { margin-bottom: 1rem; } /* Code –––––––––––––––––––––––––––––––––––––––––––––––––– */ code { padding: .2rem .5rem; margin: 0 .2rem; font-size: 90%; white-space: nowrap; background: #F1F1F1; border: 1px solid #E1E1E1; border-radius: 4px; } pre > code { display: block; padding: 1rem 1.5rem; white-space: pre; } /* Tables –––––––––––––––––––––––––––––––––––––––––––––––––– */ th, td { padding: 12px 15px; text-align: left; border-bottom: 1px solid #E1E1E1; } th:first-child, td:first-child { padding-left: 0; } th:last-child, td:last-child { padding-right: 0; } /* Spacing –––––––––––––––––––––––––––––––––––––––––––––––––– */ button, .button { margin-bottom: 1rem; } input, textarea, select, fieldset { margin-bottom: 1.5rem; } pre, blockquote, dl, figure, table, p, ul, ol, form { margin-bottom: 2.5rem; } /* Utilities –––––––––––––––––––––––––––––––––––––––––––––––––– */ .u-full-width { width: 100%; box-sizing: border-box; } .u-max-full-width { max-width: 100%; box-sizing: border-box; } .u-pull-right { float: right; } .u-pull-left { float: left; } /* Misc –––––––––––––––––––––––––––––––––––––––––––––––––– */ hr { margin-top: 3rem; margin-bottom: 3.5rem; border-width: 0; border-top: 1px solid #E1E1E1; } /* Clearing –––––––––––––––––––––––––––––––––––––––––––––––––– */ /* Self Clearing Goodness */ .container:after, .row:after, .u-cf { content: ""; display: table; clear: both; } /* Media Queries –––––––––––––––––––––––––––––––––––––––––––––––––– */ /* Note: The best way to structure the use of media queries is to create the queries near the relevant code. For example, if you wanted to change the styles for buttons on small devices, paste the mobile query code up in the buttons section and style it there. 
*/ /* Larger than mobile */ /* Larger than phablet (also point when grid becomes active) */ /* Larger than tablet */ /* Larger than desktop */ /* Larger than Desktop HD */ .preview { background-color: #e1e4e7; } .preview h2 { color: #95a5a6; font-size: 16px; text-align: right; text-transform: uppercase; margin: 0px 10px 10px 0px; } .preview .hCard { margin: 0px 40px; } .preview .hCard-wrapper { background: #fff; border-bottom: 2px solid #9a9a9a; } .preview .hCard-header { position: relative; background: #2c3e50; color: #fff; padding: 50px 0px 0px 25px; } .preview .hCard-header h3 { padding: 20px 0px 10px; font-weight: 500; line-height: 1.1; } .preview .hCard-header img { position: absolute; top: 15px; right: 15px; border: 1px solid #9a9a9a; height: 100px; width: 80px; background: #ffffff; } .preview .hCard-body { margin: 25px 25px 0px; width: 85%; } .preview .hCard-body .row { border-bottom: 1px solid #e5e5e5; height: 30px; } .preview .hCard-body .row:last-child { border: none; } .preview .hCard-body-label { font-family: "Merriweather Sans", sans-serif; text-transform: uppercase; color: #2c3e50; font-size: 10px; margin-top: 10px; text-align: left; padding: 0px; } .preview .hCard-body-value { font-size: 14px; margin-top: 6px; text-align: left; padding: 0px; text-transform: capitalize; } .preview .hCard-body-value.no-transform { text-transform: none; } form { margin: 0px 40px; } form legend { text-transform: uppercase; color: #b0b8bc; font-size: 10px; font-weight: normal; margin-bottom: 20px; border-bottom: 1px solid #e5e5e5; width: 100%; } form label { text-transform: uppercase; color: #2c3e50; font-size: 12px; } .button-primary.button-create { position: relative; overflow: hidden; width: 100%; background-color: #3498db !important; color: #fff; font-family: "Merriweather Sans", sans-serif; border-bottom: 2px solid #2980b9 !important; text-transform: none; font-size: 20px !important; font-weight: normal; } .button-primary.button-create:hover { background-color: #3498db; color: #fff; } .button-primary.button-file { position: relative; overflow: hidden; width: 100%; background-color: #627b8b; color: #fff; font-family: "Merriweather Sans", sans-serif; border-bottom: 2px solid #3f515d; font-size: 20px; display: block; padding-top: 10px; height: 26px; line-height: 18px; text-align: center; } .button-primary.button-file:hover { background-color: #627b8b; color: #fff; } .button-primary.button-file input[type=file] { position: absolute; top: 0; right: 0; min-width: 100%; min-height: 100%; font-size: 100px; text-align: right; filter: alpha(opacity=0); opacity: 0; outline: none; background: white; cursor: inherit; display: block; } html { position: relative; min-height: 100%; } body { font-family: "Merriweather", serif; } h1, h2, legend, label { font-family: "Merriweather Sans", sans-serif; } h1 { font-weight: 700; font-size: 31px; color: #2c3e50; } h3 { font-size: 24px; } .container { margin-top: 20px; } .wrapper { display: table-cell; float: none; height: 100%; vertical-align: middle; }
<?php use PHPUnit\Framework\TestCase; use \ChainOfResponsibility as COR; require_once dirname(__DIR__) . '/Trouble.php'; /** * COR Trouble Test */ final class CORTroubleTest extends TestCase { public function test_getNumber_設定されたトラブル番号を取得する() { $expected = 1000; $trouble = new COR\Trouble(1000); $actual = $trouble->getNumber(); $this->assertEquals($expected, $actual); } public function test_toString_オブジェクト自体を取得するとフォーマット化された文字列を取得する() { $expected = '[Trouble 1000]'; $trouble = new COR\Trouble(1000); echo $trouble; $this->expectOutputString($expected); } }
I checked into my Premier Inn hotel near Piccadilly Station on the Friday evening, and had an hour and a half to relax in my comfy hotel room and spruce myself up before an evening of dinner, drinks and blogger chats at one of my favourite restaurants – San Carlo Fumo. I spent New Year’s Eve here earlier this year, and love the classic interior, marble walls, amazing food and lovely service. Not to mention that I am an Italian food freak, so the fact that Fumo serves Italian tapas is just incredible. Myself and around 20 other fashion bloggers all sat together and indulged in what seemed to be a 20-course meal – food and wine literally just kept arriving and it was delicious! Some of the Italian dishes I enjoyed the most included the plate of pork, cheese stuffed crusts with prosciutto ham, mini pizzas and mussels with spaghetti. The best mixture of Italian meats, seafood and pastas. Oh and the dessert was simply divine – the largest ice cream sundae mixed with Ferrero Rocher chocolates and sweet bread sticks. I was in a total food coma! For the event, I wore an orange chain strap vest top from H&M, and a midi swing skirt from Boohoo. Nice and casual, yet dressy and smart at the same time. The next day was just as lovely, as all of us bloggers filled our bellies with an all-you-can-eat breakfast buffet at the Premier Inn, before following a carefully put together Manchester guide based on fashion and shopping, and then hitting the city to shop ’til we dropped. I purchased a fantastic little dress from River Island in Manchester Arndale (if you follow me on Instagram, you may have seen it already!) which I bought for a weekend night out. Manchester has so many great shopping areas: you have the Northern Quarter for quirky fashion and vintage stores such as Afflecks Palace, Thunder Egg and Blue Rinse, and then fashion aside there is my favourite store in the world, Magma. Magma retails the coolest magazines and books for the creative industries, such as design, advertising and graphics. It’s the place where I go to purchase my ‘bibles’ such as ID magazine and Dazed & Confused. On a final note, this is the second time in a month I have stayed over at a Premier Inn hotel and I have to say I have never been disappointed. The rooms are great sizes, with comfy, high-up double beds and a chaise longue, with modern bathrooms and dark curtains. I’ve thoroughly enjoyed a good night’s sleep here every time, and I’ll definitely be checking for a Premier Inn whenever I need a hotel in an area I’m visiting. The weekend was so lovely – a huge thanks to everyone who made it special!
/*********************************************************************/ // dar - disk archive - a backup/restoration program // Copyright (C) 2002-2052 Denis Corbin // // This program is free software; you can redistribute it and/or // modify it under the terms of the GNU General Public License // as published by the Free Software Foundation; either version 2 // of the License, or (at your option) any later version. // // This program is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU General Public License for more details. // // You should have received a copy of the GNU General Public License // along with this program; if not, write to the Free Software // Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. // // to contact the author : http://dar.linux.free.fr/email.html /*********************************************************************/ #include "../my_config.h" extern "C" { #if HAVE_STRING_H #include <string.h> #endif #if HAVE_STRINGS_H #include <strings.h> #endif #if STDC_HEADERS # include <string.h> #else # if !HAVE_STRCHR # define strchr index # define strrchr rindex # endif char *strchr (), *strrchr (); # if !HAVE_MEMCPY # define memcpy(d, s, n) bcopy ((s), (d), (n)) # define memmove(d, s, n) bcopy ((s), (d), (n)) # endif #endif } // end extern "C" #include <iostream> #include <sstream> #include "generic_file.hpp" #include "crc.hpp" using namespace std; #define INFININT_MODE_START 10240 namespace libdar { static void n_compute(const char *buffer, U_I length, unsigned char * begin, unsigned char * & pointer, unsigned char * end, U_I crc_size); ///////////////////////////////////////////// // some TEMPLATES and static routines first // template <class P> string T_crc2str(P begin, P end) { ostringstream ret; P curs = begin; while(curs != end) { ret << hex << ((*curs & 0xF0) >> 4); ret << hex << (*curs & 0x0F); ++curs; } return ret.str(); } template <class P> void T_old_read(P & pointer, P begin, P end, const char *buffer, U_I size) { U_I pos = 0; while(pointer != end && pos < size) { *pointer = buffer[pos]; ++pointer; ++pos; } if(pointer != end || pos < size) throw SRC_BUG; // should reach both ends at the same time pointer = begin; } template <class B> void B_compute_block(B anonymous, const char *buffer, U_I length, unsigned char * begin, unsigned char * & pointer, unsigned char * end, U_I & cursor) { B *buf_end = (B *)(buffer + length - sizeof(anonymous) + 1); B *buf_ptr = (B *)(buffer); B *crc_end = (B *)(end); B *crc_ptr = (B *)(begin); if(begin >= end) throw SRC_BUG; else { U_I crc_size = end - begin; if(crc_size % sizeof(anonymous) != 0) throw SRC_BUG; if(crc_size / sizeof(anonymous) == 0) throw SRC_BUG; } while(buf_ptr < buf_end) { *crc_ptr ^= *buf_ptr; ++buf_ptr; ++crc_ptr; if(crc_ptr >= crc_end) crc_ptr = (B *)(begin); } cursor = (char *)(buf_ptr) - buffer; pointer = (unsigned char *)(crc_ptr); } template <class P> void T_compute(const char *buffer, U_I length, P begin, P & pointer, P end) { if(pointer == end) throw SRC_BUG; for(U_I cursor = 0; cursor < length; ++cursor) { *pointer ^= buffer[cursor]; if(++pointer == end) pointer = begin; } } static void n_compute(const char *buffer, U_I length, unsigned char * begin, unsigned char * & pointer, unsigned char * end, U_I crc_size) { U_I cursor = 0; //< index of next byte to read from buffer // initial bytes if(pointer != begin) { while(pointer != end && cursor < 
length) { *pointer ^= buffer[cursor]; ++cursor; ++pointer; } if(pointer == end) // we had enough data to have pointer reach the end of the crc_field pointer = begin; } // block bytes if(pointer == begin && cursor < length) // we can now use the optimized routine relying on operation by block of bytes { U_I partial_cursor = 0; // But we cannot use the optimized method on some systems if we are not aligned to the size boundary if(crc_size % 8 == 0 && (U_I)(buffer + cursor) % 8 == 0) B_compute_block(U_64(0), buffer + cursor, length - cursor, begin, pointer, end, partial_cursor); else if(crc_size % 4 == 0 && (U_I)(buffer + cursor) % 4 == 0) B_compute_block(U_32(0), buffer + cursor, length - cursor, begin, pointer, end, partial_cursor); else if(crc_size % 2 == 0 && (U_I)(buffer + cursor) % 2 == 0) B_compute_block(U_16(0), buffer + cursor, length - cursor, begin, pointer, end, partial_cursor); /// warning, adding a new type here needs modifying crc_n::alloc() to provide aligned crc storage cursor += partial_cursor; } // final bytes if(cursor < length) T_compute(buffer + cursor, length - cursor, begin, pointer, end); } template <class P> bool T_compare(P me_begin, P me_end, P you_begin, P you_end) { P me = me_begin; P you = you_begin; while(me != me_end && you != you_end && *me == *you) { ++me; ++you; } return me == me_end && you == you_end; } ///////////////////////////////////////////// // Class CRC_I implementation follows // crc_i::crc_i(const infinint & width) : size(width), cyclic(width) { if(width == 0) throw Erange("crc::crc", gettext("Invalid size for CRC width")); clear(); } crc_i::crc_i(const infinint & width, generic_file & f) : size(width), cyclic(f, width) { pointer = cyclic.begin(); } bool crc_i::operator == (const crc & ref) const { const crc_i *ref_i = dynamic_cast<const crc_i *>(&ref); if(ref_i == NULL) throw SRC_BUG; if(size != ref_i->size) return false; else // same size return T_compare(cyclic.begin(), cyclic.end(), ref_i->cyclic.begin(), ref_i->cyclic.end()); } void crc_i::compute(const infinint & offset, const char *buffer, U_I length) { infinint tmp = offset % size; // first we skip the cyclic at the correct position pointer.skip_to(cyclic, tmp); // now we can compute the CRC compute(buffer, length); } void crc_i::compute(const char *buffer, U_I length) { T_compute(buffer, length, cyclic.begin(), pointer, cyclic.end()); } void crc_i::clear() { cyclic.clear(); pointer = cyclic.begin(); } void crc_i::dump(generic_file & f) const { size.dump(f); cyclic.dump(f); } string crc_i::crc2str() const { return T_crc2str(cyclic.begin(), cyclic.end()); } void crc_i::copy_from(const crc_i & ref) { if(size != ref.size) { size = ref.size; cyclic = ref.cyclic; } else copy_data_from(ref); pointer = cyclic.begin(); } void crc_i::copy_data_from(const crc_i & ref) { if(ref.size == size) { storage::iterator ref_it = ref.cyclic.begin(); storage::iterator it = cyclic.begin(); while(ref_it != ref.cyclic.end() && it != cyclic.end()) { *it = *ref_it; ++it; ++ref_it; } if(ref_it != ref.cyclic.end() || it != cyclic.end()) throw SRC_BUG; } else throw SRC_BUG; } ///////////////////////////////////////////// // Class CRC_N implementation follows // crc_n::crc_n(U_I width) { pointer = NULL; cyclic = NULL; try { if(width == 0) throw Erange("crc::crc", gettext("Invalid size for CRC width")); alloc(width); clear(); } catch(...) { destroy(); throw; } } crc_n::crc_n(U_I width, generic_file & f) { pointer = NULL; cyclic = NULL; try { alloc(width); f.read((char*)cyclic, size); } catch(...) 
{ destroy(); throw; } } const crc_n & crc_n::operator = (const crc_n & ref) { if(size != ref.size) { destroy(); copy_from(ref); } else copy_data_from(ref); return *this; } bool crc_n::operator == (const crc & ref) const { const crc_n *ref_n = dynamic_cast<const crc_n *>(&ref); if(ref_n == NULL) throw SRC_BUG; if(size != ref_n->size) return false; else // same size return T_compare(cyclic, cyclic + size, ref_n->cyclic, ref_n->cyclic + ref_n->size); } void crc_n::compute(const infinint & offset, const char *buffer, U_I length) { infinint tmp = offset % size; U_I s_offset = 0; // first we skip the cyclic at the correct position tmp.unstack(s_offset); if(tmp != 0) throw SRC_BUG; // tmp does not fit in a U_I variable ! pointer = cyclic + s_offset; // now we can compute the CRC compute(buffer, length); } void crc_n::compute(const char *buffer, U_I length) { n_compute(buffer, length, cyclic, pointer, cyclic + size, size); } void crc_n::clear() { (void)memset(cyclic, 0, size); pointer = cyclic; } void crc_n::dump(generic_file & f) const { infinint tmp = size; tmp.dump(f); f.write((const char *)cyclic, size); } string crc_n::crc2str() const { return T_crc2str(cyclic, cyclic + size); } void crc_n::alloc(U_I width) { size = width; if(get_pool() == NULL) { ////////////////////////////////////////////////////////////////////// // the following trick is to have cyclic aligned at its boundary size // (its allocated address is a multiple of its size) // some CPUs need that (sparc), and it does not hurt the other ones. if(width % 8 == 0) cyclic = (unsigned char *)(new (nothrow) U_64[width/8]); else if(width % 4 == 0) cyclic = (unsigned char *)(new (nothrow) U_32[width/4]); else if(width % 2 == 0) cyclic = (unsigned char *)(new (nothrow) U_16[width/2]); else cyclic = new (nothrow) unsigned char[size]; // end of the trick and back to default situation ////////////////////////////////////////////////////////////////////// // WARNING! this trick allows the use of 2, 4 or 8 byte operations // // instead of byte by byte ones, in n_compute calls B_compute_block // // CODE MUST BE ADAPTED THERE AND IN destroy() IF CHANGED HERE!!! 
// ////////////////////////////////////////////////////////////////////// } else cyclic = (unsigned char *)get_pool()->alloc(width); // pool should provide aligned data in any case if(cyclic == NULL) throw Ememory("crc::copy_from"); pointer = cyclic; } void crc_n::copy_from(const crc_n & ref) { alloc(ref.size); copy_data_from(ref); } void crc_n::copy_data_from(const crc_n & ref) { if(size != ref.size) throw SRC_BUG; (void)memcpy(cyclic, ref.cyclic, size); pointer = cyclic; } void crc_n::destroy() { if(cyclic != NULL) { if(get_pool() == NULL) delete [] cyclic; else get_pool()->release(cyclic); cyclic = NULL; } size = 0; pointer = NULL; } ///////////////////////////////////////////// // exported routines implementation // crc *create_crc_from_file(generic_file & f, memory_pool *pool, bool old) { crc *ret = NULL; if(old) ret = new (pool) crc_n(crc::OLD_CRC_SIZE, f); else { infinint taille = f; // reading the crc size if(taille < INFININT_MODE_START) { U_I s = 0; taille.unstack(s); if(taille > 0) throw SRC_BUG; ret = new (pool) crc_n(s, f); } else ret = new (pool) crc_i(taille, f); } if(ret == NULL) throw Ememory("create_crc_from_file"); return ret; } crc *create_crc_from_size(infinint width, memory_pool *pool) { crc *ret = NULL; if(width < INFININT_MODE_START) { U_I s = 0; width.unstack(s); if(width > 0) throw SRC_BUG; ret = new (pool) crc_n(s); } else ret = new (pool) crc_i(width); if(ret == NULL) throw Ememory("create_crc_from_size"); return ret; } } // end of namespace
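The checksum classes above (crc_i over an arbitrary-width infinint field, crc_n over a plain byte array) both accumulate their value by XOR-ing the input into a fixed-width cyclic field: a write pointer advances over the field and wraps back to its start, byte by byte in T_compute and in 2/4/8-byte blocks in B_compute_block when size and alignment permit. As a rough illustration only, here is a minimal stand-alone C++ sketch of the byte-wise variant, using hypothetical names that are not part of libdar:

#include <cstddef>
#include <vector>

// Minimal sketch (not libdar code): a fixed-width cyclic field into which
// every input byte is XOR-ed, with the write position wrapping around.
class cyclic_xor_checksum
{
public:
    explicit cyclic_xor_checksum(std::size_t width) : field(width, 0), pos(0) {}

    // Byte-wise accumulation, the equivalent of T_compute() above; the real
    // code switches to word-sized XORs (B_compute_block) when alignment allows.
    void compute(const unsigned char *buffer, std::size_t length)
    {
        for(std::size_t i = 0; i < length; ++i)
        {
            field[pos] ^= buffer[i];
            if(++pos == field.size())
                pos = 0; // wrap around the cyclic field
        }
    }

    const std::vector<unsigned char> & value() const { return field; }

private:
    std::vector<unsigned char> field; // the fixed-width checksum field
    std::size_t pos;                  // next position to XOR into
};

Because the pointer state is kept between calls, feeding the data in several chunks produces the same field as feeding it in a single call, which is what lets crc_i::compute(offset, ...) and crc_n::compute(offset, ...) reposition the pointer to offset % size and continue from there.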
'use strict'; (function () { // Photos Controller Spec describe('Photos Controller Tests', function () { // Initialize global variables var PhotosController, $scope, $httpBackend, $stateParams, $location; // The $resource service augments the response object with methods for updating and deleting the resource. // If we were to use the standard toEqual matcher, our tests would fail because the test values would not match // the responses exactly. To solve the problem, we define a new toEqualData Jasmine matcher. // When the toEqualData matcher compares two objects, it takes only object properties into // account and ignores methods. beforeEach(function () { jasmine.addMatchers({ toEqualData: function (util, customEqualityTesters) { return { compare: function (actual, expected) { return { pass: angular.equals(actual, expected) }; } }; } }); }); // Then we can start by loading the main application module beforeEach(module(ApplicationConfiguration.applicationModuleName)); // The injector ignores leading and trailing underscores here (i.e. _$httpBackend_). // This allows us to inject a service but then attach it to a variable // with the same name as the service. beforeEach(inject(function ($controller, $rootScope, _$location_, _$stateParams_, _$httpBackend_) { // Set a new global scope $scope = $rootScope.$new(); // Point global variables to injected services $stateParams = _$stateParams_; $httpBackend = _$httpBackend_; $location = _$location_; // Initialize the Photos controller. PhotosController = $controller('PhotosController', { $scope: $scope }); })); it('Should do some controller test', inject(function () { // The test logic // ... })); }); }());
<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>relation-algebra: 6 m 45 s 🏆</title> <link rel="shortcut icon" type="image/png" href="../../../../../favicon.png" /> <link href="../../../../../bootstrap.min.css" rel="stylesheet"> <link href="../../../../../bootstrap-custom.css" rel="stylesheet"> <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet"> <script src="../../../../../moment.min.js"></script> <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries --> <!-- WARNING: Respond.js doesn't work if you view the page via file:// --> <!--[if lt IE 9]> <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script> <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script> <![endif]--> </head> <body> <div class="container"> <div class="navbar navbar-default" role="navigation"> <div class="container-fluid"> <div class="navbar-header"> <a class="navbar-brand" href="../../../../.."><i class="fa fa-lg fa-flag-checkered"></i> Coq bench</a> </div> <div id="navbar" class="collapse navbar-collapse"> <ul class="nav navbar-nav"> <li><a href="../..">clean / released</a></li> <li class="active"><a href="">8.10.2 / relation-algebra - 1.7.2</a></li> </ul> </div> </div> </div> <div class="article"> <div class="row"> <div class="col-md-12"> <a href="../..">« Up</a> <h1> relation-algebra <small> 1.7.2 <span class="label label-success">6 m 45 s 🏆</span> </small> </h1> <p>📅 <em><script>document.write(moment("2021-12-02 03:04:00 +0000", "YYYY-MM-DD HH:mm:ss Z").fromNow());</script> (2021-12-02 03:04:00 UTC)</em><p> <h2>Context</h2> <pre># Packages matching: installed # Name # Installed # Synopsis base-bigarray base base-threads base base-unix base conf-findutils 1 Virtual package relying on findutils coq 8.10.2 Formal proof management system num 1.4 The legacy Num library for arbitrary-precision integer and rational arithmetic ocaml 4.08.1 The OCaml compiler (virtual package) ocaml-base-compiler 4.08.1 Official release 4.08.1 ocaml-config 1 OCaml Switch Configuration ocamlfind 1.9.1 A library manager for OCaml # opam file: opam-version: &quot;2.0&quot; name: &quot;coq-relation-algebra&quot; synopsis: &quot;Relation Algebra and KAT in Coq&quot; maintainer: &quot;Damien Pous &lt;[email protected]&gt;&quot; version: &quot;1.7.2&quot; homepage: &quot;http://perso.ens-lyon.fr/damien.pous/ra/&quot; license: &quot;LGPL&quot; depends: [ &quot;ocaml&quot; &quot;coq&quot; {&gt;= &quot;8.10&quot; &amp; &lt; &quot;8.11~&quot;} ] depopts: [ &quot;coq-mathcomp-ssreflect&quot; ] build: [ [&quot;sh&quot; &quot;-exc&quot; &quot;./configure --%{coq-mathcomp-ssreflect:enable}%-ssr&quot;] [make &quot;-j%{jobs}%&quot;] ] install: [make &quot;install&quot;] tags: [ &quot;keyword:relation algebra&quot; &quot;keyword:Kleene algebra with tests&quot; &quot;keyword:KAT&quot; &quot;keyword:allegories&quot; &quot;keyword:residuated structures&quot; &quot;keyword:automata&quot; &quot;keyword:regular expressions&quot; &quot;keyword:matrices&quot; &quot;category:Mathematics/Algebra&quot; &quot;logpath:RelationAlgebra&quot; ] authors: [ &quot;Damien Pous &lt;[email protected]&gt;&quot; &quot;Christian Doczkal &lt;[email protected]&gt;&quot; ] url { src: &quot;https://github.com/damien-pous/relation-algebra/archive/v1.7.2.tar.gz&quot; checksum: &quot;md5=2f7d9a91892145dc373121bd2b176690&quot; } </pre> <h2>Lint</h2> <dl class="dl-horizontal"> 
<dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Dry install 🏜️</h2> <p>Dry install with the current Coq version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam install -y --show-action coq-relation-algebra.1.7.2 coq.8.10.2</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <p>Dry install without Coq/switch base, to test if the problem was incompatibility with the current Coq/OCaml version:</p> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>true</code></dd> <dt>Return code</dt> <dd>0</dd> </dl> <h2>Install dependencies</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam list; echo; ulimit -Sv 4000000; timeout 4h opam install -y --deps-only coq-relation-algebra.1.7.2 coq.8.10.2</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>11 s</dd> </dl> <h2>Install 🚀</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam list; echo; ulimit -Sv 16000000; timeout 4h opam install -y -v coq-relation-algebra.1.7.2 coq.8.10.2</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Duration</dt> <dd>6 m 45 s</dd> </dl> <h2>Installation size</h2> <p>Total: 8 M</p> <ul> <li>454 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/untyping.vo</code></li> <li>440 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/normalisation.vo</code></li> <li>387 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_completeness.vo</code></li> <li>339 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_completeness.glob</code></li> <li>325 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/paterson.vo</code></li> <li>214 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/compiler_opts.vo</code></li> <li>198 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/relalg.vo</code></li> <li>194 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix.glob</code></li> <li>187 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/monoid.vo</code></li> <li>177 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/traces.vo</code></li> <li>175 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix.vo</code></li> <li>164 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/paterson.glob</code></li> <li>159 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.cmxs</code></li> <li>149 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/syntax.vo</code></li> <li>149 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_common.cmxs</code></li> <li>138 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/normalisation.glob</code></li> <li>125 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/monoid.glob</code></li> <li>123 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lattice.vo</code></li> <li>121 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/regex.vo</code></li> <li>114 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/regex.glob</code></li> <li>114 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex_dec.vo</code></li> <li>112 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/relalg.glob</code></li> <li>111 K 
<code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/untyping.glob</code></li> <li>105 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/atoms.glob</code></li> <li>100 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ka_completeness.glob</code></li> <li>96 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lattice.glob</code></li> <li>91 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ka_completeness.vo</code></li> <li>89 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rmx.vo</code></li> <li>85 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_fold.cmxs</code></li> <li>80 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/imp.vo</code></li> <li>78 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/traces.glob</code></li> <li>77 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_reification.cmxs</code></li> <li>76 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex_dec.glob</code></li> <li>76 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix_ext.glob</code></li> <li>74 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/compiler_opts.glob</code></li> <li>69 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix_ext.vo</code></li> <li>66 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/gregex.vo</code></li> <li>66 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ordinal.vo</code></li> <li>62 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.vo</code></li> <li>61 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rmx.glob</code></li> <li>58 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sups.vo</code></li> <li>57 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kleene.glob</code></li> <li>57 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex.vo</code></li> <li>56 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/nfa.vo</code></li> <li>54 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kleene.vo</code></li> <li>49 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_tac.vo</code></li> <li>49 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lang.vo</code></li> <li>49 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/syntax.glob</code></li> <li>48 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/mrewrite.cmxs</code></li> <li>48 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ordinal.glob</code></li> <li>48 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/atoms.vo</code></li> <li>47 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/move.vo</code></li> <li>47 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/comparisons.vo</code></li> <li>45 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/level.vo</code></li> <li>42 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex.glob</code></li> <li>40 K 
<code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/glang.vo</code></li> <li>40 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rel.vo</code></li> <li>40 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/nfa.glob</code></li> <li>39 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/dfa.vo</code></li> <li>39 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lsyntax.vo</code></li> <li>39 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sups.glob</code></li> <li>36 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/boolean.vo</code></li> <li>35 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lset.vo</code></li> <li>35 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/bmx.vo</code></li> <li>34 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_completeness.v</code></li> <li>33 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/level.glob</code></li> <li>32 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/pair.vo</code></li> <li>31 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/imp.glob</code></li> <li>30 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/comparisons.glob</code></li> <li>28 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.glob</code></li> <li>27 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_common.cmi</code></li> <li>27 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_untyping.vo</code></li> <li>26 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/untyping.v</code></li> <li>26 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/normalisation.v</code></li> <li>25 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/gregex.glob</code></li> <li>25 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_tac.glob</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rewriting.glob</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/traces.v</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix.v</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rewriting.vo</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/factors.glob</code></li> <li>22 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lattice.v</code></li> <li>21 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/factors.vo</code></li> <li>21 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/monoid.v</code></li> <li>21 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/denum.vo</code></li> <li>20 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/positives.vo</code></li> <li>20 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/dfa.glob</code></li> <li>19 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/paterson.v</code></li> <li>19 K 
<code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/relalg.v</code></li> <li>18 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat.vo</code></li> <li>17 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/all.vo</code></li> <li>17 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/move.glob</code></li> <li>17 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/bmx.glob</code></li> <li>16 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ka_completeness.v</code></li> <li>16 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lang.glob</code></li> <li>16 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/regex.v</code></li> <li>15 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lsyntax.glob</code></li> <li>15 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sums.vo</code></li> <li>15 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/syntax.v</code></li> <li>14 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex_dec.v</code></li> <li>14 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.cmi</code></li> <li>14 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rel.glob</code></li> <li>14 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/common.vo</code></li> <li>13 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/boolean.glob</code></li> <li>13 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ordinal.v</code></li> <li>13 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lset.glob</code></li> <li>12 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_untyping.glob</code></li> <li>12 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_common.cmx</code></li> <li>12 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/pair.glob</code></li> <li>11 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rmx.v</code></li> <li>11 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/glang.glob</code></li> <li>10 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/powerfix.vo</code></li> <li>10 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_reification.cmi</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/mrewrite.cmi</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_tac.v</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/imp.v</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/matrix_ext.v</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.v</code></li> <li>9 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ugregex.v</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/gregex.v</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sups.v</code></li> <li>8 K 
<code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.cmx</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/powerfix.glob</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/prop.vo</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/comparisons.v</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat.glob</code></li> <li>8 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kleene.v</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/denum.glob</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/common.glob</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_reification.cmx</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_fold.cmi</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/mrewrite.cmx</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lang.v</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/atoms.v</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/nfa.v</code></li> <li>7 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lsyntax.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/compiler_opts.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/level.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_fold.cmx</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rewriting.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/rel.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/dfa.v</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_reification.cmxa</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_reification.cmxa</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_common.cmxa</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/ra_fold.cmxa</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sums.glob</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/positives.glob</code></li> <li>6 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/mrewrite.cmxa</code></li> <li>5 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/lset.v</code></li> <li>5 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/glang.v</code></li> <li>4 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/boolean.v</code></li> <li>4 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/bmx.v</code></li> <li>4 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/common.v</code></li> <li>4 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/pair.v</code></li> <li>4 K 
<code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/move.v</code></li> <li>4 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat.v</code></li> <li>3 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/powerfix.v</code></li> <li>3 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_untyping.v</code></li> <li>3 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/factors.v</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/denum.v</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/positives.v</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_dec.cmi</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/all.glob</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/kat_dec.cmx</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/sums.v</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/prop.v</code></li> <li>2 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/prop.glob</code></li> <li>1 K <code>../ocaml-base-compiler.4.08.1/lib/coq/user-contrib/RelationAlgebra/all.v</code></li> </ul> <h2>Uninstall 🧹</h2> <dl class="dl-horizontal"> <dt>Command</dt> <dd><code>opam remove -y coq-relation-algebra.1.7.2</code></dd> <dt>Return code</dt> <dd>0</dd> <dt>Missing removes</dt> <dd> none </dd> <dt>Wrong removes</dt> <dd> none </dd> </dl> </div> </div> </div> <hr/> <div class="footer"> <p class="text-center"> Sources are on <a href="https://github.com/coq-bench">GitHub</a> © Guillaume Claret 🐣 </p> </div> </div> <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="../../../../../bootstrap.min.js"></script> </body> </html>
Our website uses the analysis tool of LiveZilla GmbH (Byk-Gulden-Straße 18, 78224 Singen). The processing of data serves to analyse this website and its visitors. Data is collected and saved for marketing and optimisation purposes. A usage profile can be generated from this data under a pseudonym. Cookies may be deployed for this purpose. Cookies facilitate recognition of your internet browser. The data collected with LiveZilla technologies will not be used to identify the website user personally in future or combined with personal data on the bearer of the pseudonym without the separately issued consent of the affected party. Processing is carried out on the basis of art. 6 (1) lit. f GDPR due to our justified interest in direct customer communication and the needs-based design of the website. You have the right to object to this processing of your personal data under art. 6 (1) lit. f GDPR, for reasons relating to your particular situation, by contacting us. You can also exercise this objection by preventing the storage of cookies via the corresponding setting in your browser. We would, however, like to point out that this may prevent you from making full use of all the functions of this website.
Just a few of the things we do. Don't be shy about asking for more. Design the pieces, fashion the pieces, test the pieces, and then put it all together. Whether we built it or someone else laid it out, we can keep it up and running. WordPress and Drupal are our favorites, but we can custom build or construct whatever you need. They will find you. We will help them find you. There is a simplicity and an ease of delivery to this, but mostly we need a more stateful approach. There is a depth and a richness to a web experience when recognition and remembrance are part of the package. And when a web design is implemented, its success will depend on such depth, on the recognition that both the owner of the site and the user bring personal details to the web that must be reflected in the design and implementation of the site. At stateful.org, our emphasis is on simplicity, clarity and attention to those important details that define and create a personal web experience, one that educates and establishes a connection and remembers what’s important. © 2016 Based on the Oxygen Theme.
/*************************************************************************** * File: ./plugins/load_history/dirac_impulse.c * Author: Ronni Grapenthin, UAF-GI * Created: 12.06.2008 * Licence: GPLv2 ****************************************************************************/ /** * @defgroup LoadHistory Load History Functions * @ingroup Plugin **/ /*@{*/ /** \file dirac_impulse.c * * Derivative of the Heaviside function, or simply the Delta function: * * \f[ * f'(t) = \delta(t-t_0) * \f] * * with * * \f[ * f(t) = \begin{cases} 0 & t < t_0 \\ 1 & t \ge t_0 \end{cases} * \f] */ /*@}*/ #include <stdio.h> #include <math.h> #include "crusde_api.h" /*load command line parameters*/ double* p_start[N_LOAD_COMPS];/*!< pointer to start interval values */ int my_id = 0; extern const char* get_name() { return "dirac_impulse"; } extern const char* get_version() { return "0.1"; } extern const char* get_authors() { return "ronni grapenthin"; } extern PluginCategory get_category() { return LOADHISTORY_PLUGIN; } extern const char* get_description() { return "Derivative of Heaviside function or simply the Delta function: \ f'(t) = \\delta(t-t0)\ \ with \ \ f(t) = 0 on t < t0 \ f(t) = 1 otherwise"; } /*! empty*/ extern void request_plugins(){} /*! empty*/ extern void register_output_fields(){} /*! empty*/ extern void run(){} /*! empty, nothing to free*/ extern void clear(){} /*! empty*/ extern void init(){} /*! Register parameters this load function claims from the input.*/ extern void register_parameter() { my_id = crusde_get_current_load_component(); /* tell main program about parameters we claim from input */ p_start[my_id] = crusde_register_param_double("t0", get_category()); } extern double get_value_at(unsigned int t) { if(t == *p_start[ crusde_get_current_load_component() ]) { return 1.0; } return 0.0; }
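Since get_value_at() above returns 1.0 only at the single time step equal to the registered t0 parameter and 0.0 everywhere else, the plugin effectively samples a discrete unit impulse. The following stand-alone sketch (hypothetical names, not part of the CrusDe plugin API) shows that sampling behaviour in isolation:

#include <cstdio>

// Sketch only: discrete Dirac impulse as used by the load-history plugin
// above -- 1.0 exactly at time step t0, 0.0 at every other step.
static double dirac_impulse_at(unsigned int t, unsigned int t0)
{
    return (t == t0) ? 1.0 : 0.0;
}

int main()
{
    const unsigned int t0 = 3;           // assumed onset time step
    for(unsigned int t = 0; t < 8; ++t)  // sample a short model run
        std::printf("t=%u  f'(t)=%.1f\n", t, dirac_impulse_at(t, t0));
    return 0;
}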
/* * Copyright (C) 2008, 2009, 2010 Apple Inc. All Rights Reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef THIRD_PARTY_BLINK_RENDERER_PLATFORM_THEME_TYPES_H_ #define THIRD_PARTY_BLINK_RENDERER_PLATFORM_THEME_TYPES_H_ namespace blink { // Must follow css_value_keywords.json5 order // kAutoPart is never returned by ComputedStyle::EffectiveAppearance() enum ControlPart { kNoControlPart, kAutoPart, kCheckboxPart, kRadioPart, kPushButtonPart, kSquareButtonPart, kButtonPart, kInnerSpinButtonPart, kListboxPart, kMediaSliderPart, kMediaSliderThumbPart, kMediaVolumeSliderPart, kMediaVolumeSliderThumbPart, kMediaControlPart, kMenulistPart, kMenulistButtonPart, kMeterPart, kProgressBarPart, kSliderHorizontalPart, kSliderVerticalPart, kSliderThumbHorizontalPart, kSliderThumbVerticalPart, kSearchFieldPart, kSearchFieldCancelButtonPart, kTextFieldPart, kTextAreaPart, }; } // namespace blink #endif // THIRD_PARTY_BLINK_RENDERER_PLATFORM_THEME_TYPES_H_
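As a debugging convenience, it can be useful to print a ControlPart value by name. The helper below is purely hypothetical (Blink does not ship it under this name), and the include path is only inferred from the header guard above:

#include "third_party/blink/renderer/platform/theme_types.h"  // path assumed from the guard

// Hypothetical debug helper: map a ControlPart enumerator to its name.
// Only a few of the parts declared above are listed; the rest fall through.
const char* ControlPartName(blink::ControlPart part) {
  switch (part) {
    case blink::kNoControlPart:  return "kNoControlPart";
    case blink::kAutoPart:       return "kAutoPart";
    case blink::kCheckboxPart:   return "kCheckboxPart";
    case blink::kRadioPart:      return "kRadioPart";
    case blink::kPushButtonPart: return "kPushButtonPart";
    default:                     return "<other ControlPart>";
  }
}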
<!DOCTYPE HTML> <!-- NewPage --> <html lang="en"> <head> <!-- Generated by javadoc (9-ea) on Sun Oct 30 18:56:21 UTC 2016 --> <title>AMDSamplePositions (LWJGL 3.1.0 - OpenGL)</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta name="dc.created" content="2016-10-30"> <link rel="stylesheet" type="text/css" href="../../../javadoc.css" title="Style"> <link rel="stylesheet" type="text/css" href="../../../jquery/jquery-ui.css" title="Style"> <script type="text/javascript" src="../../../script.js"></script> <script type="text/javascript" src="../../../jquery/jszip/dist/jszip.min.js"></script> <script type="text/javascript" src="../../../jquery/jszip-utils/dist/jszip-utils.min.js"></script> <!--[if IE]> <script type="text/javascript" src="../../../jquery/jszip-utils/dist/jszip-utils-ie.min.js"></script> <![endif]--> <script type="text/javascript" src="../../../jquery/jquery-1.10.2.js"></script> <script type="text/javascript" src="../../../jquery/jquery-ui.js"></script> </head> <body> <script type="text/javascript"><!-- try { if (location.href.indexOf('is-external=true') == -1) { parent.document.title="AMDSamplePositions (LWJGL 3.1.0 - OpenGL)"; } } catch(err) { } //--> var methods = {"i0":9,"i1":9}; var tabs = {65535:["t0","All Methods"],1:["t1","Static Methods"],8:["t4","Concrete Methods"]}; var altColor = "altColor"; var rowColor = "rowColor"; var tableTab = "tableTab"; var activeTableTab = "activeTableTab"; var pathtoroot = "../../../";loadScripts(document, 'script');</script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <header role="banner"> <nav role="navigation"> <div class="fixedNav"> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a id="navbar.top"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.top" title="Skip navigation links">Skip navigation links</a></div> <a id="navbar.top.firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../org/lwjgl/opengl/package-summary.html">Package</a></li> <li class="navBarCell1Rev">Class</li> <li><a href="../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../index-all.html">Index</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../org/lwjgl/opengl/AMDQueryBufferObject.html" title="class in org.lwjgl.opengl"><span class="typeNameLink">Prev&nbsp;Class</span></a></li> <li><a href="../../../org/lwjgl/opengl/AMDSeamlessCubemapPerTexture.html" title="class in org.lwjgl.opengl"><span class="typeNameLink">Next&nbsp;Class</span></a></li> </ul> <ul class="navList"> <li><a href="../../../index.html?org/lwjgl/opengl/AMDSamplePositions.html" target="_top">Frames</a></li> <li><a href="AMDSamplePositions.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a href="../../../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <ul class="navListSearch"> <li><span>SEARCH:&nbsp;</span> <input type="text" id="search" value=" " disabled="disabled"> <input type="reset" id="reset" value=" " disabled="disabled"> </li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> </div> <div> <ul class="subNavList"> <li>Summary:&nbsp;</li> <li>Nested&nbsp;|&nbsp;</li> <li><a 
href="#field.summary">Field</a>&nbsp;|&nbsp;</li> <li>Constr&nbsp;|&nbsp;</li> <li><a href="#method.summary">Method</a></li> </ul> <ul class="subNavList"> <li>Detail:&nbsp;</li> <li><a href="#field.detail">Field</a>&nbsp;|&nbsp;</li> <li>Constr&nbsp;|&nbsp;</li> <li><a href="#method.detail">Method</a></li> </ul> </div> <a id="skip.navbar.top"> <!-- --> </a></div> <!-- ========= END OF TOP NAVBAR ========= --> </div> <div class="navPadding">&nbsp;</div> </nav> </header> <!-- ======== START OF CLASS DATA ======== --> <main role="main"> <div class="header"> <div class="subTitle"><span class="packageLabelInClass">Package</span>&nbsp;<a href="../../../org/lwjgl/opengl/package-summary.html" target="classFrame">org.lwjgl.opengl</a></div> <h2 title="Class AMDSamplePositions" class="title">Class AMDSamplePositions</h2> </div> <div class="contentContainer"> <ul class="inheritance"> <li>java.lang.Object</li> <li> <ul class="inheritance"> <li>org.lwjgl.opengl.AMDSamplePositions</li> </ul> </li> </ul> <div class="description"> <ul class="blockList"> <li class="blockList"> <hr> <br> <pre>public class <span class="typeNameLabel">AMDSamplePositions</span> extends java.lang.Object</pre> <div class="block">Native bindings to the <a href="http://www.opengl.org/registry/specs/AMD/sample_positions.txt">AMD_sample_positions</a> extension. <p>This extension provides a mechanism to explicitly set sample positions for a FBO with multi-sampled attachments. The FBO will use identical sample locations for all pixels in each attachment. This forces TEXTURE_FIXED_SAMPLE_LOCATIONS to TRUE if a multi-sampled texture is specified using TexImage2DMultisample or TexImage3DMultisample. That is, using GetTexLevelParameter to query TEXTURE_FIXED_SAMPLE_LOCATIONS will always return TRUE if the mechanism is explicitly used to set the sample positions.</p> <p>Requires <a href="../../../org/lwjgl/opengl/GL32.html" title="class in org.lwjgl.opengl"><code>OpenGL 3.2</code></a> or <a href="../../../org/lwjgl/opengl/EXTFramebufferMultisample.html" title="class in org.lwjgl.opengl"><code>EXT_framebuffer_multisample</code></a>.</p></div> </li> </ul> </div> <div class="summary"> <ul class="blockList"> <li class="blockList"> <!-- =========== FIELD SUMMARY =========== --> <section role="region"> <ul class="blockList"> <li class="blockList"><a id="field.summary"> <!-- --> </a> <h3>Field Summary</h3> <table class="memberSummary"> <caption><span>Fields</span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Field and Description</th> </tr> <tr class="altColor"> <td class="colFirst"><code>static int</code></td> <td class="colLast"><code><span class="memberNameLink"><a href="../../../org/lwjgl/opengl/AMDSamplePositions.html#GL_SUBSAMPLE_DISTANCE_AMD">GL_SUBSAMPLE_DISTANCE_AMD</a></span></code> <div class="block">Accepted by the <code>pname</code> parameter of GetFloatv.</div> </td> </tr> </table> </li> </ul> </section> <!-- ========== METHOD SUMMARY =========== --> <section role="region"> <ul class="blockList"> <li class="blockList"><a id="method.summary"> <!-- --> </a> <h3>Method Summary</h3> <table class="memberSummary"> <caption><span id="t0" class="activeTableTab"><span>All Methods</span><span class="tabEnd">&nbsp;</span></span><span id="t1" class="tableTab"><span><a href="javascript:show(1);">Static Methods</a></span><span class="tabEnd">&nbsp;</span></span><span id="t4" class="tableTab"><span><a href="javascript:show(8);">Concrete 
Methods</a></span><span class="tabEnd">&nbsp;</span></span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Method and Description</th> </tr> <tr id="i0" class="altColor"> <td class="colFirst"><code>static void</code></td> <td class="colLast"><code><span class="memberNameLink"><a href="../../../org/lwjgl/opengl/AMDSamplePositions.html#glSetMultisamplefvAMD-int-int-float:A-">glSetMultisamplefvAMD</a></span>(int&nbsp;pname, int&nbsp;index, float[]&nbsp;val)</code> <div class="block">Array version of: <a href="../../../org/lwjgl/opengl/AMDSamplePositions.html#glSetMultisamplefvAMD-int-int-java.nio.FloatBuffer-"><code>SetMultisamplefvAMD</code></a></div> </td> </tr> <tr id="i1" class="rowColor"> <td class="colFirst"><code>static void</code></td> <td class="colLast"><code><span class="memberNameLink"><a href="../../../org/lwjgl/opengl/AMDSamplePositions.html#glSetMultisamplefvAMD-int-int-java.nio.FloatBuffer-">glSetMultisamplefvAMD</a></span>(int&nbsp;pname, int&nbsp;index, java.nio.FloatBuffer&nbsp;val)</code>&nbsp;</td> </tr> </table> <ul class="blockList"> <li class="blockList"><a id="methods.inherited.from.class.java.lang.Object"> <!-- --> </a> <h3>Methods inherited from class&nbsp;java.lang.Object</h3> <code>equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait</code></li> </ul> </li> </ul> </section> </li> </ul> </div> <div class="details"> <ul class="blockList"> <li class="blockList"> <!-- ============ FIELD DETAIL =========== --> <section role="region"> <ul class="blockList"> <li class="blockList"><a id="field.detail"> <!-- --> </a> <h3>Field Detail</h3> <a id="GL_SUBSAMPLE_DISTANCE_AMD"> <!-- --> </a> <ul class="blockListLast"> <li class="blockList"> <h4>GL_SUBSAMPLE_DISTANCE_AMD</h4> <pre>public static final&nbsp;int GL_SUBSAMPLE_DISTANCE_AMD</pre> <div class="block">Accepted by the <code>pname</code> parameter of GetFloatv.</div> <dl> <dt><span class="seeLabel">See Also:</span></dt> <dd><a href="../../../constant-values.html#org.lwjgl.opengl.AMDSamplePositions.GL_SUBSAMPLE_DISTANCE_AMD">Constant Field Values</a></dd> </dl> </li> </ul> </li> </ul> </section> <!-- ============ METHOD DETAIL ========== --> <section role="region"> <ul class="blockList"> <li class="blockList"><a id="method.detail"> <!-- --> </a> <h3>Method Detail</h3> <a id="glSetMultisamplefvAMD-int-int-java.nio.FloatBuffer-"> <!-- --> </a> <ul class="blockList"> <li class="blockList"> <h4>glSetMultisamplefvAMD</h4> <pre>public static&nbsp;void&nbsp;glSetMultisamplefvAMD(int&nbsp;pname, int&nbsp;index, java.nio.FloatBuffer&nbsp;val)</pre> </li> </ul> <a id="glSetMultisamplefvAMD-int-int-float:A-"> <!-- --> </a> <ul class="blockListLast"> <li class="blockList"> <h4>glSetMultisamplefvAMD</h4> <pre>public static&nbsp;void&nbsp;glSetMultisamplefvAMD(int&nbsp;pname, int&nbsp;index, float[]&nbsp;val)</pre> <div class="block">Array version of: <a href="../../../org/lwjgl/opengl/AMDSamplePositions.html#glSetMultisamplefvAMD-int-int-java.nio.FloatBuffer-"><code>SetMultisamplefvAMD</code></a></div> </li> </ul> </li> </ul> </section> </li> </ul> </div> </div> </main> <!-- ========= END OF CLASS DATA ========= --> <footer role="contentinfo"> <nav role="navigation"> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a id="navbar.bottom"> <!-- --> </a> <div class="skipNav"><a href="#skip.navbar.bottom" title="Skip navigation links">Skip navigation links</a></div> <a id="navbar.bottom.firstrow"> <!-- --> </a> <ul class="navList" 
title="Navigation"> <li><a href="../../../org/lwjgl/opengl/package-summary.html">Package</a></li> <li class="navBarCell1Rev">Class</li> <li><a href="../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../index-all.html">Index</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../org/lwjgl/opengl/AMDQueryBufferObject.html" title="class in org.lwjgl.opengl"><span class="typeNameLink">Prev&nbsp;Class</span></a></li> <li><a href="../../../org/lwjgl/opengl/AMDSeamlessCubemapPerTexture.html" title="class in org.lwjgl.opengl"><span class="typeNameLink">Next&nbsp;Class</span></a></li> </ul> <ul class="navList"> <li><a href="../../../index.html?org/lwjgl/opengl/AMDSamplePositions.html" target="_top">Frames</a></li> <li><a href="AMDSamplePositions.html" target="_top">No&nbsp;Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_bottom"> <li><a href="../../../allclasses-noframe.html">All&nbsp;Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> </div> <div> <ul class="subNavList"> <li>Summary:&nbsp;</li> <li>Nested&nbsp;|&nbsp;</li> <li><a href="#field.summary">Field</a>&nbsp;|&nbsp;</li> <li>Constr&nbsp;|&nbsp;</li> <li><a href="#method.summary">Method</a></li> </ul> <ul class="subNavList"> <li>Detail:&nbsp;</li> <li><a href="#field.detail">Field</a>&nbsp;|&nbsp;</li> <li>Constr&nbsp;|&nbsp;</li> <li><a href="#method.detail">Method</a></li> </ul> </div> <a id="skip.navbar.bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </nav> <p class="legalCopy"><small><i>Copyright LWJGL. All Rights Reserved. <a href="https://www.lwjgl.org/license">License terms</a>.</i></small></p> </footer> </body> </html>
# This code is derived from code with the following copyright message: # # SliMP3 Server Copyright (C) 2001 Sean Adams, Slim Devices Inc. This # program is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License, version 2. # # LiveDepartures/Plugin.pm Copyright (C) 2007 Matthew Flint # For version and version history, see Version.pm # TODO: limit the number of results, as there may be many. eg. Kings Cross Thameslink to Bedford (KCM - BDM) gives 12 results... # TODO: need to sort results by due time, independent of transport type # TODO: could filter out journeys which are too soon # TODO: # - add live departures to 'home' settings page # - preference migration stuff - see RadioIO.settings package Plugins::LiveDepartures::Plugin; use base qw(Slim::Plugin::Base); use strict; use Plugins::LiveDepartures::Version; use Plugins::LiveDepartures::Settings; use Plugins::LiveDepartures::UI; use Plugins::LiveDepartures::CLI; use Slim::Utils::Misc; use Slim::Utils::Strings qw (string); use Slim::Utils::Log; # TODO: sort out version string #use vars qw( $VERSION ); #$VERSION = Plugins::LiveDepartures::Version::getVersion(); # web page root use constant URL_BASE => 'plugins/LiveDepartures'; # A logger we will use to write plugin-specific messages. my $log = Slim::Utils::Log->addLogCategory( { 'category' => 'plugin.livedepartures', 'defaultLevel' => 'INFO', 'description' => 'PLUGIN_LIVEDEPARTURES' } ); # button mapping my %buttonMapping = ( 'arrow_up.single' => 'switch_route_prev', 'arrow_down.single' => 'switch_route_next', 'arrow_right.single' => 'show_more_detail', 'arrow_left.single' => 'show_less_detail', #'repeat.single' => 'debug', ); sub initPlugin { my $class = shift; $log->info('version ' . Plugins::LiveDepartures::Version::getVersion()); # first, call SUPER $class->SUPER::initPlugin(@_); # register the screensaver Slim::Buttons::Common::addSaver( 'SCREENSAVER.livedepartures', getScreenSaverFunctions(), \&setScreenSaverMode, \&leaveScreenSaverMode, getDisplayName() ); # create a RouteManager and start the other modules my $routeManager = Plugins::LiveDepartures::RouteManager->new; Plugins::LiveDepartures::Settings->new; Plugins::LiveDepartures::Settings::init($routeManager); Plugins::LiveDepartures::UI::init($routeManager); Plugins::LiveDepartures::CLI::init($routeManager); # setup button mapping Slim::Hardware::IR::addModeDefaultMapping('OFF.livedepartures',\%buttonMapping); Slim::Hardware::IR::addModeDefaultMapping('SCREENSAVER.livedepartures',\%buttonMapping); # register our CLI requests # |requires Client # | |is a Query # | | |has Tags # | | | |Function to call # C Q T F # get details for the defined routes Slim::Control::Request::addDispatch(['livedepartures', 'routes'], [0, 1, 0, \&Plugins::LiveDepartures::CLI::routeQuery]); # keep-alive command Slim::Control::Request::addDispatch(['livedepartures', 'keepalive', '_routeIndex'], [0, 0, 1, \&Plugins::LiveDepartures::CLI::keepAlive]); } sub getDisplayName { return 'PLUGIN_LIVEDEPARTURES'; } sub setMode { $log->info('setMode called'); my $class = shift; my $client = shift; my $method = shift || ''; if ($method eq 'pop') { Slim::Buttons::Common::popMode($client); return; } $client->lines(\&Plugins::LiveDepartures::UI::screenSaverLiveDeparturesLines); $client->modeParam('modeUpdateInterval', 1); } my %mainModeFunctions = ( 'up' => sub { my ($client, $button) = @_; if ($button !~ /repeat/) { Plugins::LiveDepartures::UI::enterStateSwitchRouteOption($client, 0); } }, 'down' => sub { my 
($client, $button) = @_; if ($button !~ /repeat/) { Plugins::LiveDepartures::UI::enterStateSwitchRouteOption($client, 1); } }, 'left' => sub { my ($client, $button) = @_; if ($button !~ /repeat/) { if (Plugins::LiveDepartures::UI::showingMoreDetail($client)) { Plugins::LiveDepartures::UI::enterStateNormal($client); } else { Slim::Buttons::Common::popModeRight($client); } } }, 'right' => sub { my ($client, $button) = @_; if ($button !~ /repeat/) { Plugins::LiveDepartures::UI::enterStateMoreDetail($client); } }, ); sub getFunctions { return \%mainModeFunctions; } sub getScreenSaverFunctions { my %screenSaverFunctions = ( 'switch_route_prev' => sub { my ($client ,$funct ,$functarg) = @_; Plugins::LiveDepartures::UI::enterStateSwitchRouteOption($client, 0); }, 'switch_route_next' => sub { my ($client ,$funct ,$functarg) = @_; Plugins::LiveDepartures::UI::enterStateSwitchRouteOption($client, 1); }, 'show_more_detail' => sub { my ($client ,$funct ,$functarg) = @_; Plugins::LiveDepartures::UI::enterStateMoreDetail($client); }, 'show_less_detail' => sub { my ($client ,$funct ,$functarg) = @_; Plugins::LiveDepartures::UI::enterStateNormal($client); }, 'debug' => sub { # TODO: the intention here is to dump a the HTML into a file on the server # in case the plugin's having trouble parsing it... $log->info('DEBUG!') if LIVEDEPARTURES_DEBUG(); } ); return \%screenSaverFunctions; } sub setScreenSaverMode { my $client = shift; # reset client Plugins::LiveDepartures::UI::resetClient($client); # update client display $client->lines(\&Plugins::LiveDepartures::UI::screenSaverLiveDeparturesLines); $client->modeParam('modeUpdateInterval', 1); } sub screenSaverLiveDeparturesLines { return Plugins::LiveDepartures::UI::screenSaverLiveDeparturesLines(@_); } sub leaveScreenSaverMode { my $client = shift; } # vim:ts=4:sw=4:autoindent:nu:cindent
/* * GrGen: graph rewrite generator tool -- release GrGen.NET 4.4 * Copyright (C) 2003-2015 Universitaet Karlsruhe, Institut fuer Programmstrukturen und Datenorganisation, LS Goos; and free programmers * licensed under LGPL v3 (see LICENSE.txt included in the packaging of this file) * www.grgen.net */ package de.unika.ipd.grgen.ast.exprevals; import java.util.Collection; import java.util.Vector; import de.unika.ipd.grgen.ast.*; import de.unika.ipd.grgen.ast.util.DeclarationTypeResolver; import de.unika.ipd.grgen.ir.IR; import de.unika.ipd.grgen.ir.exprevals.Expression; import de.unika.ipd.grgen.ir.exprevals.SourceExpr; import de.unika.ipd.grgen.parser.Coords; /** * A node yielding the source node of an edge. */ public class SourceExprNode extends ExprNode { static { setName(SourceExprNode.class, "source expr"); } private ExprNode edge; private IdentNode nodeTypeUnresolved; private NodeTypeNode nodeType; public SourceExprNode(Coords coords, ExprNode edge, IdentNode nodeType) { super(coords); this.edge = edge; becomeParent(this.edge); this.nodeTypeUnresolved = nodeType; becomeParent(this.nodeTypeUnresolved); } /** returns children of this node */ @Override public Collection<BaseNode> getChildren() { Vector<BaseNode> children = new Vector<BaseNode>(); children.add(edge); children.add(getValidVersion(nodeTypeUnresolved, nodeType)); return children; } /** returns names of the children, same order as in getChildren */ @Override public Collection<String> getChildrenNames() { Vector<String> childrenNames = new Vector<String>(); childrenNames.add("edge"); childrenNames.add("nodeType"); return childrenNames; } private static final DeclarationTypeResolver<NodeTypeNode> nodeTypeResolver = new DeclarationTypeResolver<NodeTypeNode>(NodeTypeNode.class); /** @see de.unika.ipd.grgen.ast.BaseNode#resolveLocal() */ @Override protected boolean resolveLocal() { nodeType = nodeTypeResolver.resolve(nodeTypeUnresolved, this); return nodeType!=null && getType().resolve(); } /** @see de.unika.ipd.grgen.ast.BaseNode#checkLocal() */ @Override protected boolean checkLocal() { if(!(edge.getType() instanceof EdgeTypeNode)) { reportError("argument of source(.) must be an edge type"); return false; } return true; } @Override protected IR constructIR() { return new SourceExpr(edge.checkIR(Expression.class), getType().getType()); } @Override public TypeNode getType() { return nodeType; } }
<?php namespace Dizda\Bundle\AppBundle\Tests; use Dizda\Bundle\AppBundle\Tests\Fixtures; use Liip\FunctionalTestBundle\Test\WebTestCase; /** * Class BaseFunctionalTestController */ class BaseFunctionalTestController extends WebTestCase { /** * @var \Symfony\Bundle\FrameworkBundle\Client */ protected $client = null; /** * @var \Doctrine\ORM\EntityManager */ protected $em = null; protected static $token = null; /** * Set up */ public function setUp() { parent::setUp(); $this->client = static::createClient(); $this->em = $this->client->getContainer()->get('doctrine')->getManager(); $this->client->startIsolation(); $this->login(); } /** * Login just once, to speed tests */ private function login() { if (null === static::$token) { // Generate token $this->client->request( 'POST', '/login_check', array( '_username' => 'dizda', '_password' => 'bambou', ) ); $data = json_decode($this->client->getResponse()->getContent(), true); $this->assertNotNull($data['token']); static::$token = $data['token']; } $this->client->setServerParameter('HTTP_Authorization', sprintf('Bearer %s', static::$token)); } /** * Tear down */ public function tearDown() { if (null !== $this->client) { $this->client->stopIsolation(); } parent::tearDown(); } }
#ifndef _PLXTEST_SERIALIZABLE_TEST_H_ #define _PLXTEST_SERIALIZABLE_TEST_H_ #include <string> #include <cppunit/TestFixture.h> #include <cppunit/extensions/HelperMacros.h> #include <MySuites.h> #include <vpr/IO/SerializableObject.h> namespace vprTest { class SerializableTest : public CppUnit::TestFixture { public: class Class1 : public vpr::SerializableObject { public: Class1() : charVal(0), shortVal(0), longVal(0), longlongVal(0), floatVal(0), doubleVal(0) {;} virtual void writeObject(vpr::ObjectWriter* writer) throw (vpr::IOException) { writer->writeUint8(charVal); writer->writeUint16(shortVal); writer->writeUint32(longVal); writer->writeUint64(longlongVal); writer->writeUint8(scharVal); writer->writeUint16(sshortVal); writer->writeUint32(slongVal); writer->writeUint64(slonglongVal); writer->writeFloat(floatVal); writer->writeDouble(doubleVal); } virtual void readObject(vpr::ObjectReader* reader) throw (vpr::IOException) { charVal = reader->readUint8(); shortVal = reader->readUint16(); longVal = reader->readUint32(); longlongVal = reader->readUint64(); scharVal = reader->readUint8(); sshortVal = reader->readUint16(); slongVal = reader->readUint32(); slonglongVal = reader->readUint64(); floatVal = reader->readFloat(); doubleVal = reader->readDouble(); } bool operator==(Class1& r) const { return ( (charVal == r.charVal) && (shortVal == r.shortVal) && (longVal == r.longVal) && (longlongVal == r.longlongVal) && (scharVal == r.scharVal) && (sshortVal == r.sshortVal) && (slongVal == r.slongVal) && (slonglongVal == r.slonglongVal) && (floatVal == r.floatVal) && (doubleVal == r.doubleVal) ); } public: vpr::Uint8 charVal; vpr::Uint16 shortVal; vpr::Uint32 longVal; vpr::Uint64 longlongVal; vpr::Int8 scharVal; vpr::Int16 sshortVal; vpr::Int32 slongVal; vpr::Int64 slonglongVal; float floatVal; double doubleVal; }; class Class2 : public vpr::SerializableObject { public: Class2() : mFlag(true) {;} virtual void writeObject(vpr::ObjectWriter* writer) throw (vpr::IOException) { mObj1.writeObject(writer); mObj2.writeObject(writer); writer->writeBool(mFlag); } virtual void readObject(vpr::ObjectReader* reader) throw (vpr::IOException) { mObj1.readObject(reader); mObj2.readObject(reader); mFlag = reader->readBool(); } bool operator==(Class2& r) { return ( (mObj1 == r.mObj1) && (mObj2 == r.mObj2) && (mFlag == r.mFlag) ); } public: Class1 mObj1; Class1 mObj2; bool mFlag; }; public: CPPUNIT_TEST_SUITE(SerializableTest); CPPUNIT_TEST( testReaderWriter ); CPPUNIT_TEST( testDataOffsets ); CPPUNIT_TEST( testReadWriteSimple ); CPPUNIT_TEST( testReadWriteNested ); CPPUNIT_TEST_SUITE_END(); public: virtual void setUp() {;} virtual void tearDown() {;} // Test reading and writing of data void testReaderWriter(); // Test reading and writing data from many memory offsets void testDataOffsets(); void testReadWriteSimple(); void testReadWriteNested(); }; } #endif
This information was provided by SendGrid because some text reminders are not being received by patients using specific carriers. From SendGrid: Domain authentication, formerly known as domain whitelabel, shows email providers that SendGrid has your permission to send emails on your behalf. To give SendGrid permission, you point DNS entries from your DNS provider (like GoDaddy, Rackspace, or Cloudflare) to SendGrid. Your recipients will no longer see the “via sendgrid.net” message on your emails.
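As a rough illustration of what that DNS step can look like in practice, the sketch below uses the dnspython package to check that a few CNAME records resolve to sendgrid.net targets. The hostnames (`em1234.example.com` and the `_domainkey` entries) are made up for the example — the real record names and values come from the settings SendGrid generates for your account.

```python
# Illustrative sketch only: the record names below are placeholders, not the
# actual entries SendGrid will give you for your domain.
import dns.resolver  # pip install dnspython

EXPECTED = {
    "em1234.example.com": "sendgrid.net",         # mail/link-branding subdomain (hypothetical)
    "s1._domainkey.example.com": "sendgrid.net",  # DKIM key 1 (hypothetical)
    "s2._domainkey.example.com": "sendgrid.net",  # DKIM key 2 (hypothetical)
}

for name, expected_suffix in EXPECTED.items():
    try:
        answers = dns.resolver.resolve(name, "CNAME")
    except Exception as exc:
        print(f"{name}: lookup failed ({exc})")
        continue
    for rdata in answers:
        target = rdata.target.to_text().rstrip(".")
        ok = target.endswith(expected_suffix)
        print(f"{name} -> {target} [{'OK' if ok else 'unexpected target'}]")
```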
Make these Sweet Cinnamon Pecan Acorn Squash wedges for your Thanksgiving table! Everyone will love this festive vegan and gluten free side dish! Every month in autumn I buy an acorn squash. It’s autumn and it feels like an acorn squash should be on my menu weekly, no? But, the acorn squash just sits in my little basket on my counter and looks adorable. While it does make for the perfect autumn decor, once you actually decide to cut into it and cook it up, I am forever reminded that procrastination of cooking up my acorn squash is a big mistake. It’s so sweet and hearty and really makes for an amazing side. So something crazy delicious happened. This weekend we were cooking up a storm. We carved up all our Halloween pumpkins, roasted them, roasted the seeds, made some puree, then made pies, then made whip cream to go on top of the pies. I dipped some of these Sweet Cinnamon Pecan Acorn Squash wedges into the homemade whip cream and it was kind of amazing. So, I don’t want to blow your mind tooo much on this Monday, but not only can this recipe be an amazing side for your Thanksgiving table AS IS, but you can also put them in a bowl with a scoop of ice cream, some homemade whip, and a caramel drizzle. It’s autumn amazingness, completely unexpected and really delicious. So let’s talk acorn squash SKIN for a minute. Personally, I like to leave mine on. You can peel it off pre-roasting if you wish, but once you roast it, it’s easy to chew and digest so it’s totally ok to leave on. So let’s make some Sweet Cinnamon Pecan Acorn Squash! First, cut the end with the stem off the acorn squash. This just makes it easier to cut up down the line. Cut the squash in half lengthwise. Then, use a spoon to scoop out the seeds. Turn the squash so the flat side is on the cutting board. Cut the squash into 1/2 inch wide wedges or pieces. Place them in a large bowl along with the coconut oil, coconut sugar, cinnamon, and sea salt. Toss until the acorn squash wedges are completely coated. Place the squash on a baking sheet lined with parchment paper so they are in a single layer. Roast for 40 minutes. In a separate bowl, mix together the pecans, cinnamon, and agave nectar. Transfer the squash to the serving dish and top the sweetened pecans, and thyme leaves. Cut the stem end off the acorn squash. This just makes it easier to cut up down the line. Turn the squash so the flat side is on the cutting board. Cut the squash into 1/2 inch wide wedges or pieces. Place the cut squash in a large bowl along with the coconut oil, coconut sugar, cinnamon, and sea salt. Toss until the acorn squash wedges are completely coated. Place the squash on a baking sheet lined with parchment paper so they are in a single layer. When the squash is done roasting, transfer the wedges to the serving dish and top the sweetened pecans, and thyme leaves.
package structmapper import "errors" var ( // ErrTagNameEmpty designates that the passed tag name is empty ErrTagNameEmpty = errors.New("Tag name is empty") // ErrNotAStruct designates that the passed value is not a struct ErrNotAStruct = errors.New("Not a struct") // ErrInvalidMap designates that the passed value is not a valid map ErrInvalidMap = errors.New("Invalid map") // ErrFieldIsInterface designates that a field is an interface ErrFieldIsInterface = errors.New("Field is interface") // ErrMapIsNil designates that the passed map is nil ErrMapIsNil = errors.New("Map is nil") // ErrNotAStructPointer designates that the passed value is not a pointer to a struct ErrNotAStructPointer = errors.New("Not a struct pointer") )
--- layout: single title: "Spark: Count number of duplicate rows" date: 2018-10-25 12:46 modified: 2018-10-25 12:46 categories: til tags: - spark - til --- To count the number of duplicate rows in a pyspark DataFrame, you want to `groupBy()` all the columns and `count()`, then select the sum of the counts for the rows where the count is greater than 1: ```python import pyspark.sql.functions as funcs df.groupBy(df.columns)\ .count()\ .where(funcs.col('count') > 1)\ .select(funcs.sum('count'))\ .show() ``` Via [SO](https://stackoverflow.com/a/48554666).
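A quick way to sanity-check the snippet is to run it against a tiny, made-up DataFrame (the column names here are only for the example): with two pairs of duplicated rows, the reported total is 4, i.e. the sum of the group counts that exceed 1.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as funcs

spark = SparkSession.builder.master("local[1]").getOrCreate()

# Toy data: ('a', 1) and ('b', 2) each appear twice, ('c', 3) once.
df = spark.createDataFrame(
    [("a", 1), ("a", 1), ("b", 2), ("b", 2), ("c", 3)],
    ["key", "value"],
)

(df.groupBy(df.columns)
   .count()
   .where(funcs.col("count") > 1)
   .select(funcs.sum("count"))
   .show())
# +----------+
# |sum(count)|
# +----------+
# |         4|
# +----------+
```

Note that this counts every row belonging to a duplicated group; if you only want the surplus copies, subtract the number of duplicated groups from that sum.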
The Rod Boss™ retractable fishing rod arm provides improved leverage and comfort for reeling in fish by offering multiple grip positions. Don’t let arm pain or fatigue cut your time short, get an arm up with Rod Boss™! See Rod Boss in Action! All hardware is high grade corrosive resistant stainless steel. Designed to be attached to both the fishing reel and the fishing rod. Easily fits on most all top reel mounted fishing poles. Three different hand positions to relieve wrist strain and improve rod control. Heavy duty strength – tested up to 200LB test line.
Gallstones are ‘stones’ that form in your gallbladder. They are common and can run in families. The risk of developing gallstones increases as you get older and if you eat a diet rich in fat. For some people gallstones can cause severe symptoms, with repeated attacks of abdominal pain being the most common. The operation is performed under a general anaesthetic and usually takes about an hour. Your surgeon will make a cut on your upper abdomen and free up your cystic duct and artery. They will separate your gallbladder from your liver, and remove it.
/* * Copyright 2019 Red Hat, Inc. and/or its affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.kie.workbench.common.stunner.bpmn.forms.model; import org.junit.Assert; import org.junit.Test; public class GenericServiceTaskEditorFieldDefinitionTest { @Test public void getFieldType() { Assert.assertEquals("GenericServiceTaskEditor", new GenericServiceTaskEditorFieldDefinition().getFieldType().getTypeName()); } @Test public void doCopyFrom() { GenericServiceTaskEditorFieldDefinition def = new GenericServiceTaskEditorFieldDefinition(); def.doCopyFrom(null); Assert.assertNotNull(def); } }
A very fortunate coincidence occurred recently. This post popped up in my Google Reader. I was so surprised, and actually demanded to know if Lindsay had broken into my house and peeked in my refrigerator. You see, never in my life have I ever seen fresh figs in the grocery store. If they were there, I didn’t notice them. The past year or so I’ve been intrigued by them after enjoying fresh figs, or fig jam on dishes while dining out. On a recent grocery trip to Harris Teeter I saw them, and on a whim, bought two, not knowing what I was going to do with them. Also, the past month or so I keep meaning to purchase some of the locally produced fresh goat cheese I’ve seen at the farmer’s markets I visit. But, due to me being ME, I either don’t have enough cash, don’t have ANY cash, or have the cash, but forgot my wallet. This past weekend, I made an extra effort to pick some up. This goat cheese is produced on a farm in Franklin, TN. According to this article in the Tennessean the cheesemaking seems to be a fairly new venture for the company, Noble Springs Farm. As you know if you read my blog even sporadically, I’m a member of a CSA. I love supporting local producers, and small businesses. I believe that when you do that, you are helping not just that business but the community as well. That was part of the reason I chose to purchase the goat cheese. But I’ll be honest, the other reason is because it’s so delicious! Even when I knew I didn’t have cash to purchase some, I *might* have wandered over for a sample, a time or two (or three). They produce a couple varieties including this peppercorn, and a cherry berry, just to name a few. I had just so happened to purchase the peppered goat cheese. So THAT was why I thought Lindsay’s post was truly serendipitous! Nashvillians, if you are interested in the goat cheese I know for sure they sell it at the East Nashville Farmer’s Market (Wednesdays), as well as the new West Nashville Farmer’s Market (Saturdays on 46th & Charlotte). Perfect. That’s really the best way to describe this. The only way I think I could have made it better, would be if I was able to use locally produced honey. If you’ve never had fresh figs (and I hadn’t until recently), they are similar to a plum or peach in exterior texture, and they are soft with almost pulp-like sacs similar to an orange. When paired with the goat cheese and honey, you get some sweetness from the fruit, with a touch of the richness of the goat cheese, a different kind of sweetness from the honey, and of course the honey & pepper. I don’t think there’s anything I’ve prepared that felt so simple and yet fancy at the same time. I wasn’t surprised to discover the recipe came from Bon Appetit. The recipe really defines the kinds of food I like to eat; simple with a touch of elegance, in season, and course, utilizing local ingredients. I can’t wait for an event I could serve these at as appetizers! PS. I would like to add, just to be clear, no one paid me to go on & on about how great this goat cheese is, and I didn’t get it for free. I just like to write about food discoveries I love. Starting at stem end, cut each fig into quarters, stopping 1/2 inch from bottom to leave base intact. Gently press figs open. Spoon 1 teaspoon cheese into center of each. Arrange figs on platter; drizzle with honey, then using pepper grinder, crack pepper over fig. This time last year I posted this Layered Pumpkin Loaf, which was moist & delicious! I am SO sad I'm missing fresh fig season this year. 
We had a tree in our front yard in California, and I got incredibly spoiled. Eat lots and lots of these for me, and know that I'm INCREDIBLY jealous of your fridge full of figs and cheese! What I think is truly so crazy about this is that I made this same thing just a couple of weeks ago and didn't mention it to you or Lindsay. I too had never purchased figs until then, and didn't even have a recipe- just threw it all together- JUST LIKE THIS! Great minds think alike. Oh, but my goat cheese was from Kira's Kids Dairies- they sell at the Nashville Farmers' Market. Yummy stuff! I really need to start working with figs and learning more about them. And with goat cheese?! yum! Oh my goodness, this looks sooooo delicious! I love figs!! I've never had them with goat cheese; this is something I def need to try. Mmm that goat cheese sounds delicious and I LOVE figs with goat cheese. Pretty presentation, too! These look divine!!!!Think I've found my appetizer for next week's family get together!
This is a placeholder page for Joeseph Caruso, which means this person is not currently on this site. We do suggest using the tools below to find Joeseph Caruso. You are visiting the placeholder page for Joeseph Caruso. This page is here because someone used our placeholder utility to look for Joeseph Caruso. We created this page automatically in hopes Joeseph Caruso would find it. If you are not Joeseph Caruso, but are an alumni of Merritt Island High School, register on this site for free now.
Winston on Churchill is a new chapter in Sandy Bay luxury living. Clean architectural lines, clever use of space, quality internal finishes, modern and low maintenance, this boutique selection of townhouses epitomises tasteful modern design in one of Hobart's most desirable suburbs. The fluency of the design allows a cascade of maximum light into the homes as you are bathed in natural sunlight throughout the open plan living zone. Form and function combine to make this a luxuriously appointed space to entertain and relax. Set over two storeys, all of the townhouses feature three bedrooms, with a stylish monochrome ensuite off each master, ensuring the perfect retreat to unwind. For the main bathroom, minimal visual clutter creates an effortless elegance while bespoke finishes are used for understated impact. A number of the townhouses are also serviced by a convenient powder room and all homes feature a light-filled study. For ease of living, each home is complemented by its own sunny balcony as well as a double garage and visitor parking onsite. With a build from award-winning Anard Construction, a high level of quality is assured. Combined with the tucked-away seclusion, the proximity to shops, the beach, cafes and more, Winston on Churchill will be a highly desirable and sought-after development. For further information contact Andrew Wells or Real Estate's Rapid Response Unit today.
Brenton Productions offers a full array of production services ranging from commercial spots to training and corporate videos, all with the highest production values possible. Brenton Productions producers will guide any production project through the total process, from conception to delivery. A full menu of 3D graphics and animation services are available from Brenton.
Do you want to look beautiful like those gorgeous Arab women? Have you ever wondered why they always look so radiant? Here are a few tips that you too can try to look as great as they do. Tip 1: Olive oil for hair: Many people use olive oil for cooking because of its health benefits. Applied to the hair, olive oil also strengthens it. Apply the olive oil regularly, leave it in for 5-10 minutes, and it can reduce split ends and get rid of dandruff. It also helps give you nice, shiny hair. Don’t forget to take a bath afterwards to get rid of the odour, which some people don’t like. Tip 2: Meswak: Meswak (or miswak) is not only a beauty aid but also a great health remedy that helps achieve white teeth. It is a natural and traditional way of cleaning the teeth that Arabs have used for centuries. Tip 3: Argan oil: Argan oil is a great aid for shiny skin. Applied regularly, it reduces redness and tanning on the face or body. It also helps get rid of acne scars.
/* Copyright 2021 The CloudEvents Authors SPDX-License-Identifier: Apache-2.0 */ package client import ( "context" "runtime" "strings" "go.opentelemetry.io/otel" "go.opentelemetry.io/otel/attribute" semconv "go.opentelemetry.io/otel/semconv/v1.4.0" "go.opentelemetry.io/otel/trace" cloudevents "github.com/cloudevents/sdk-go/v2" "github.com/cloudevents/sdk-go/v2/binding" "github.com/cloudevents/sdk-go/v2/observability" "github.com/cloudevents/sdk-go/v2/protocol" cehttp "github.com/cloudevents/sdk-go/v2/protocol/http" ) // OTelObservabilityService implements the ObservabilityService interface from cloudevents type OTelObservabilityService struct { tracer trace.Tracer spanAttributesGetter func(cloudevents.Event) []attribute.KeyValue spanNameFormatter func(cloudevents.Event) string } // NewOTelObservabilityService returns an OpenTelemetry-enabled observability service func NewOTelObservabilityService(opts ...OTelObservabilityServiceOption) *OTelObservabilityService { tracerProvider := otel.GetTracerProvider() o := &OTelObservabilityService{ tracer: tracerProvider.Tracer( instrumentationName, // TODO: Can we have the package version here? // trace.WithInstrumentationVersion("1.0.0"), ), spanNameFormatter: defaultSpanNameFormatter, } // apply passed options for _, opt := range opts { opt(o) } return o } // InboundContextDecorators returns a decorator function that allows enriching the context with the incoming parent trace. // This method gets invoked automatically by passing the option 'WithObservabilityService' when creating the cloudevents HTTP client. func (o OTelObservabilityService) InboundContextDecorators() []func(context.Context, binding.Message) context.Context { return []func(context.Context, binding.Message) context.Context{tracePropagatorContextDecorator} } // RecordReceivedMalformedEvent records the error from a malformed event in the span. func (o OTelObservabilityService) RecordReceivedMalformedEvent(ctx context.Context, err error) { spanName := observability.ClientSpanName + ".malformed receive" _, span := o.tracer.Start( ctx, spanName, trace.WithSpanKind(trace.SpanKindConsumer), trace.WithAttributes(attribute.String(string(semconv.CodeFunctionKey), getFuncName()))) recordSpanError(span, err) span.End() } // RecordCallingInvoker starts a new span before calling the invoker upon a received event. // In case the operation fails, the error is recorded and the span is marked as failed. func (o OTelObservabilityService) RecordCallingInvoker(ctx context.Context, event *cloudevents.Event) (context.Context, func(errOrResult error)) { spanName := o.getSpanName(event, "process") ctx, span := o.tracer.Start( ctx, spanName, trace.WithSpanKind(trace.SpanKindConsumer), trace.WithAttributes(GetDefaultSpanAttributes(event, getFuncName())...)) if span.IsRecording() && o.spanAttributesGetter != nil { span.SetAttributes(o.spanAttributesGetter(*event)...) } return ctx, func(errOrResult error) { recordSpanError(span, errOrResult) span.End() } } // RecordSendingEvent starts a new span before sending the event. // In case the operation fails, the error is recorded and the span is marked as failed. 
func (o OTelObservabilityService) RecordSendingEvent(ctx context.Context, event cloudevents.Event) (context.Context, func(errOrResult error)) { spanName := o.getSpanName(&event, "send") ctx, span := o.tracer.Start( ctx, spanName, trace.WithSpanKind(trace.SpanKindProducer), trace.WithAttributes(GetDefaultSpanAttributes(&event, getFuncName())...)) if span.IsRecording() && o.spanAttributesGetter != nil { span.SetAttributes(o.spanAttributesGetter(event)...) } return ctx, func(errOrResult error) { recordSpanError(span, errOrResult) span.End() } } // RecordRequestEvent starts a new span before transmitting the given request. // In case the operation fails, the error is recorded and the span is marked as failed. func (o OTelObservabilityService) RecordRequestEvent(ctx context.Context, event cloudevents.Event) (context.Context, func(errOrResult error, event *cloudevents.Event)) { spanName := o.getSpanName(&event, "send") ctx, span := o.tracer.Start( ctx, spanName, trace.WithSpanKind(trace.SpanKindProducer), trace.WithAttributes(GetDefaultSpanAttributes(&event, getFuncName())...)) if span.IsRecording() && o.spanAttributesGetter != nil { span.SetAttributes(o.spanAttributesGetter(event)...) } return ctx, func(errOrResult error, event *cloudevents.Event) { recordSpanError(span, errOrResult) span.End() } } // GetDefaultSpanAttributes returns the attributes that are always added to the spans // created by the OTelObservabilityService. func GetDefaultSpanAttributes(e *cloudevents.Event, method string) []attribute.KeyValue { attr := []attribute.KeyValue{ attribute.String(string(semconv.CodeFunctionKey), method), attribute.String(observability.SpecversionAttr, e.SpecVersion()), attribute.String(observability.IdAttr, e.ID()), attribute.String(observability.TypeAttr, e.Type()), attribute.String(observability.SourceAttr, e.Source()), } if sub := e.Subject(); sub != "" { attr = append(attr, attribute.String(observability.SubjectAttr, sub)) } if dct := e.DataContentType(); dct != "" { attr = append(attr, attribute.String(observability.DatacontenttypeAttr, dct)) } return attr } // Extracts the traceparent from the msg and enriches the context to enable propagation func tracePropagatorContextDecorator(ctx context.Context, msg binding.Message) context.Context { var messageCtx context.Context if mctx, ok := msg.(binding.MessageContext); ok { messageCtx = mctx.Context() } else if mctx, ok := binding.UnwrapMessage(msg).(binding.MessageContext); ok { messageCtx = mctx.Context() } if messageCtx == nil { return ctx } span := trace.SpanFromContext(messageCtx) if span == nil { return ctx } return trace.ContextWithSpan(ctx, span) } func recordSpanError(span trace.Span, errOrResult error) { if protocol.IsACK(errOrResult) || !span.IsRecording() { return } var httpResult *cehttp.Result if cloudevents.ResultAs(errOrResult, &httpResult) { span.RecordError(httpResult) if httpResult.StatusCode > 0 { code, _ := semconv.SpanStatusFromHTTPStatusCode(httpResult.StatusCode) span.SetStatus(code, httpResult.Error()) } } else { span.RecordError(errOrResult) } } // getSpanName Returns the name of the span. // // When no spanNameFormatter is present in OTelObservabilityService, // the default name will be "cloudevents.client.<eventtype> prefix" e.g. cloudevents.client.get.customers send. // // The prefix is always added at the end of the span name. 
This follows the semantic conventions for // messaging systems as defined in https://github.com/open-telemetry/opentelemetry-specification/blob/v1.6.1/specification/trace/semantic_conventions/messaging.md#operation-names func (o OTelObservabilityService) getSpanName(e *cloudevents.Event, suffix string) string { name := o.spanNameFormatter(*e) // make sure the span name ends with the suffix from the semantic conventions (receive, send, process) if !strings.HasSuffix(name, suffix) { return name + " " + suffix } return name } func getFuncName() string { pc := make([]uintptr, 1) n := runtime.Callers(2, pc) frames := runtime.CallersFrames(pc[:n]) frame, _ := frames.Next() // frame.Function should be github.com/cloudevents/sdk-go/observability/opentelemetry/v2/client.OTelObservabilityService.Func parts := strings.Split(frame.Function, ".") // we are interested in the function name if len(parts) != 4 { return "" } return parts[3] }
Attend Dallas TechFest for only $25! I don't know what the specific topics for some of the speakers are, but I can tell you that I'll be speaking on Building and Deploying CFML on a Free Software Stack, Jeff Lucido is speaking on Open BlueDragon for Google App Engine, and I'm pretty sure Steve Good is speaking on Mura. With this lineup I'm sure all of the CFML topics will be great! I really regret having to spend the time writing this since in a lot of ways it will be airing dirty laundry. But since Adam is grossly misrepresenting a great deal of what went on during the effort of the CFML Advisory Committee and what led to its eventual demise, I think it's only fair that people don't take Adam's version of the story as gospel. It's far from it. What if senior management in an Agency – or anyone in the public – could identify and monitor the performance of IT projects just as easily as they could monitor the stock market or baseball scores? That’s what the IT dashboard does -- and it’s changing the way government does business. Government IT projects all too often cost millions of dollars more than they should, take years longer than necessary to deploy, and deliver technologies that are obsolete by the time they are completed. Colossal failures have contributed to a significant technology gap between the public and private sector which results in dollars wasted and a government that is less responsive to the American people. To close the technology gap, cut waste, and modernize government, the Obama Administration is taking concrete steps to deliver better results for the American people. Two major enhancements to CouchDB make it 1.0-worthy, said Chris Anderson, the chief financial officer and a founder of Couchio. One is the fact that performance of the software has been greatly improved. The other is its ability to work on Microsoft Windows machines. A lot of work was also put into stabilization of the software. Performance-wise, the new version has demonstrated a 300 percent increase in speed in reads and writes, as judged by internal benchmarking tests done by Couchio. The performance improvements were gained by optimizing the code, Anderson said. This is also the first release of CouchDB that can fully run on Windows computers, either the servers or desktops, Anderson said. Previous versions could run on Linux, and there is a version being developed for the Google Android smartphone operating system. I ran into this today while working on ColdTonica, and since it's something I'm still surprised people forget (including myself) I thought I'd share. ColdTonica is a CFML clone of StatusNet (formerly Laconica), which is an open source PHP-based microblogging service similar (although vastly superior) to Twitter. As you might imagine, those simple 140-character notices you spend way too much of your day posting go through a lot of transformations before reaching the final form in which they are displayed, because the notices need to be parsed and manipulated to add things like links to tags, links to @ replies, shortening URLs, and so on. Honestly when I started studying the StatusNet code and saw what all goes on behind the scenes for such a seemingly simple service, I have to admit I was a bit surprised. This is totally random, and my point is probably far less than profound, but yesterday I went to the 7/11 near my house and was practically jumping up and down when I saw Vanilla Moon Pies. I'm a huge fan of anything vanilla, and you just don't see vanilla Moon Pies that often. 
Chocolate, sure. Maybe even banana. But for some reason vanilla is hard to come by. "This is awesome!" I thought to myself as I pondered buying the whole box. Just to, uh, have some around the house for a while. Yeah, that's it. I found the inner strength to only buy two, and I grinned all the way home about my great find, because again, these are pretty darn rare in my experience. Even as I was enjoying the first of my two rare vanilla Moon Pies yesterday, I was thinking, "I better save the other one for a special occasion, or go back and buy more, because I never know when I'll find them again!"
Toronto – January 27, 2016 – At a speech in downtown Toronto, the Ontario Long Term Care Association called for government to make long overdue investments in Ontario’s long-term care homes where 77,000 seniors receive 24/7 care each year. Pointing to the rapidly increasing needs of the seniors being cared for, the Association laid out 4 key solutions that would ensure that seniors get better care. To eliminate the three and four-bed rooms and other outdated designs that date back to 1973, a renewed plan must be implemented to modernize and rebuild older long-term care homes that 35,000 seniors live in today. To provide the best care and treatment to the almost 65,000 seniors living with Alzheimer’s and dementia, each home in Ontario must be provided specialized supports and resources. To keep seniors in their home community and out of hospital, a strategy must be implemented that recognizes the unique needs of homes in small and rural communities. To help manage the growing needs of seniors in today’s long-term care homes, staffing models need to be changed and expanded.
<?php /** * Zend Framework (http://framework.zend.com/) * * @link http://github.com/zendframework/zf2 for the canonical source repository * @copyright Copyright (c) 2005-2013 Zend Technologies USA Inc. (http://www.zend.com) * @license http://framework.zend.com/license/new-bsd New BSD License */ namespace ZendTest\Mvc\Router; use PHPUnit_Framework_TestCase as TestCase; use ArrayIterator; use Zend\Stdlib\Request; use Zend\Mvc\Router\RoutePluginManager; use Zend\Mvc\Router\SimpleRouteStack; use ZendTest\Mvc\Router\FactoryTester; class SimpleRouteStackTest extends TestCase { public function testSetRoutePluginManager() { $routes = new RoutePluginManager(); $stack = new SimpleRouteStack(); $stack->setRoutePluginManager($routes); $this->assertEquals($routes, $stack->getRoutePluginManager()); } public function testAddRoutesWithInvalidArgument() { $this->setExpectedException('Zend\Mvc\Router\Exception\InvalidArgumentException', 'addRoutes expects an array or Traversable set of routes'); $stack = new SimpleRouteStack(); $stack->addRoutes('foo'); } public function testAddRoutesAsArray() { $stack = new SimpleRouteStack(); $stack->addRoutes(array( 'foo' => new TestAsset\DummyRoute() )); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); } public function testAddRoutesAsTraversable() { $stack = new SimpleRouteStack(); $stack->addRoutes(new ArrayIterator(array( 'foo' => new TestAsset\DummyRoute() ))); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); } public function testSetRoutesWithInvalidArgument() { $this->setExpectedException('Zend\Mvc\Router\Exception\InvalidArgumentException', 'addRoutes expects an array or Traversable set of routes'); $stack = new SimpleRouteStack(); $stack->setRoutes('foo'); } public function testSetRoutesAsArray() { $stack = new SimpleRouteStack(); $stack->setRoutes(array( 'foo' => new TestAsset\DummyRoute() )); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); $stack->setRoutes(array()); $this->assertSame(null, $stack->match(new Request())); } public function testSetRoutesAsTraversable() { $stack = new SimpleRouteStack(); $stack->setRoutes(new ArrayIterator(array( 'foo' => new TestAsset\DummyRoute() ))); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); $stack->setRoutes(new ArrayIterator(array())); $this->assertSame(null, $stack->match(new Request())); } public function testremoveRouteAsArray() { $stack = new SimpleRouteStack(); $stack->addRoutes(array( 'foo' => new TestAsset\DummyRoute() )); $this->assertEquals($stack, $stack->removeRoute('foo')); $this->assertNull($stack->match(new Request())); } public function testAddRouteWithInvalidArgument() { $this->setExpectedException('Zend\Mvc\Router\Exception\InvalidArgumentException', 'Route definition must be an array or Traversable object'); $stack = new SimpleRouteStack(); $stack->addRoute('foo', 'bar'); } public function testAddRouteAsArrayWithoutOptions() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRoute' )); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); } public function testAddRouteAsArrayWithOptions() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRoute', 'options' => array() )); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); } public function testAddRouteAsArrayWithoutType() { 
$this->setExpectedException('Zend\Mvc\Router\Exception\InvalidArgumentException', 'Missing "type" option'); $stack = new SimpleRouteStack(); $stack->addRoute('foo', array()); } public function testAddRouteAsArrayWithPriority() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRouteWithParam', 'priority' => 2 ))->addRoute('bar', array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRoute', 'priority' => 1 )); $this->assertEquals('bar', $stack->match(new Request())->getParam('foo')); } public function testAddRouteWithPriority() { $stack = new SimpleRouteStack(); $route = new TestAsset\DummyRouteWithParam(); $route->priority = 2; $stack->addRoute('baz', $route); $stack->addRoute('foo', array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRoute', 'priority' => 1 )); $this->assertEquals('bar', $stack->match(new Request())->getParam('foo')); } public function testAddRouteAsTraversable() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new ArrayIterator(array( 'type' => '\ZendTest\Mvc\Router\TestAsset\DummyRoute' ))); $this->assertInstanceOf('Zend\Mvc\Router\RouteMatch', $stack->match(new Request())); } public function testAssemble() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new TestAsset\DummyRoute()); $this->assertEquals('', $stack->assemble(array(), array('name' => 'foo'))); } public function testAssembleWithoutNameOption() { $this->setExpectedException('Zend\Mvc\Router\Exception\InvalidArgumentException', 'Missing "name" option'); $stack = new SimpleRouteStack(); $stack->assemble(); } public function testAssembleNonExistentRoute() { $this->setExpectedException('Zend\Mvc\Router\Exception\RuntimeException', 'Route with name "foo" not found'); $stack = new SimpleRouteStack(); $stack->assemble(array(), array('name' => 'foo')); } public function testDefaultParamIsAddedToMatch() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new TestAsset\DummyRoute()); $stack->setDefaultParam('foo', 'bar'); $this->assertEquals('bar', $stack->match(new Request())->getParam('foo')); } public function testDefaultParamDoesNotOverrideParam() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new TestAsset\DummyRouteWithParam()); $stack->setDefaultParam('foo', 'baz'); $this->assertEquals('bar', $stack->match(new Request())->getParam('foo')); } public function testDefaultParamIsUsedForAssembling() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new TestAsset\DummyRouteWithParam()); $stack->setDefaultParam('foo', 'bar'); $this->assertEquals('bar', $stack->assemble(array(), array('name' => 'foo'))); } public function testDefaultParamDoesNotOverrideParamForAssembling() { $stack = new SimpleRouteStack(); $stack->addRoute('foo', new TestAsset\DummyRouteWithParam()); $stack->setDefaultParam('foo', 'baz'); $this->assertEquals('bar', $stack->assemble(array('foo' => 'bar'), array('name' => 'foo'))); } public function testFactory() { $tester = new FactoryTester($this); $tester->testFactory( 'Zend\Mvc\Router\SimpleRouteStack', array(), array( 'route_plugins' => new RoutePluginManager(), 'routes' => array(), 'default_params' => array() ) ); } public function testGetRoutes() { $stack = new SimpleRouteStack(); $this->assertInstanceOf('Traversable', $stack->getRoutes()); } public function testGetRouteByName() { $stack = new SimpleRouteStack(); $route = new TestAsset\DummyRoute(); $stack->addRoute('foo', $route); $this->assertEquals($route, $stack->getRoute('foo')); } public function testHasRoute() { $stack = new 
SimpleRouteStack(); $this->assertEquals(false, $stack->hasRoute('foo')); $stack->addRoute('foo', new TestAsset\DummyRoute()); $this->assertEquals(true, $stack->hasRoute('foo')); } }
Movie Paint Cobweb Spray is designed to be used on sets specifically in corners or small areas in which webs might occur. This spray is PERMANENT AND WILL NOT WASH OFF WITH WATER. It may be removed with most solvents that remove adhesive, but this may damage the prop or surface being treated.
import sys import time import os.path from collections import Counter from vial import vfunc, vim, dref from vial.utils import redraw, focus_window from vial.widgets import make_scratch collector = None def get_collector(): global collector if not collector: collector = ResultCollector() return collector def run_test(project_dir, executable=None, match=None, files=None, env=None): from subprocess import Popen from multiprocessing.connection import Client, arbitrary_address addr = arbitrary_address('AF_UNIX') filename = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'pt.py') executable = executable or sys.executable args = [executable, filename, addr, '-q'] if match: args.append('-k %s' % match) environ = None if env: environ = os.environ.copy() environ.update(env) log = open('/tmp/vial-pytest.log', 'w') if files: args.extend(files) proc = Popen(args, cwd=project_dir, env=environ, stdout=log, stderr=log, close_fds=True) start = time.time() while not os.path.exists(addr): if time.time() - start > 5: raise Exception('py.test launching timeout exceed') time.sleep(0.01) conn = Client(addr) return proc, conn def indent(width, lines): return [' ' * width + r for r in lines] @dref def goto_file(): filename, line = vfunc.expand('<cWORD>').split(':')[:2] for win in vim.windows: if vfunc.buflisted(win.buffer.number): focus_window(win) vim.command('e +{} {}'.format(line, filename)) class ResultCollector(object): def init(self, win, buf): vim.command('setlocal syntax=vialpytest') vim.command('nnoremap <buffer> gf :python {}()<cr>'.format(goto_file.ref)) def reset(self): cwin = vim.current.window _, self.buf = make_scratch('__vial_pytest__', self.init, 'pytest') vim.command('normal! ggdG') focus_window(cwin) redraw() def add_test_result(self, rtype, name, result): self.counts[rtype] += 1 lines = ['{} {}'.format(name, rtype)] trace, out = result for k, v in out: lines.append(' ----======= {} =======----'.format(k)) lines.extend(indent(1, v.splitlines())) lines.append('') if trace: lines.extend(indent(1, trace.splitlines())) lines.append('') lines.append('') buflen = len(self.buf) self.buf[buflen-1:] = lines redraw() def collect(self, conn): self.tests = [] self.counts = Counter() self.reset() while True: msg = conn.recv() cmd = msg[0] if cmd == 'END': return elif cmd == 'COLLECTED_TESTS': self.tests[:] = cmd[1] elif cmd in ('PASS', 'ERROR', 'FAIL', 'SKIP', 'FAILED_COLLECT'): self.add_test_result(*msg) def run(*args): project = os.getcwd() files = None if args: files = [vfunc.expand(r) for r in args] try: f = vfunc.VialPythonGetExecutable except vim.error: executable = None else: executable = f() proc, conn = run_test(project, files=files, executable=executable) get_collector().collect(conn)
// Boost.TypeErasure library // // Copyright 2012 Steven Watanabe // // Distributed under the Boost Software License Version 1.0. (See // accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) // // $Id$ #include <boost/type_erasure/any.hpp> #include <boost/type_erasure/builtin.hpp> #include <boost/type_erasure/member.hpp> #include <boost/mpl/vector.hpp> #define BOOST_TEST_MAIN #include <boost/test/unit_test.hpp> using namespace boost::type_erasure; BOOST_TYPE_ERASURE_MEMBER((ns)(ns2)(has_fun), fun, 0); struct model { explicit model(int v) : val(v) {} int f1() { return val; } int f1(int i) { return val + i; } int val; }; BOOST_TYPE_ERASURE_MEMBER((global_has_f1_0), f1, 0); BOOST_AUTO_TEST_CASE(test_global_has_f1_0) { typedef ::boost::mpl::vector< global_has_f1_0<int()>, copy_constructible<> > concept_type; model m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(), 10); } BOOST_TYPE_ERASURE_MEMBER((ns1)(ns2)(ns_has_f1_0), f1, 0); BOOST_AUTO_TEST_CASE(test_ns_has_f1_0) { typedef ::boost::mpl::vector< ns1::ns2::ns_has_f1_0<int()>, copy_constructible<> > concept_type; model m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(), 10); } struct model_const { explicit model_const(int v) : val(v) {} int f1() const { return val; } int f1(int i) const { return val + i; } int val; }; BOOST_AUTO_TEST_CASE(test_global_has_f1_0_const) { typedef ::boost::mpl::vector< ns1::ns2::ns_has_f1_0<int(), const _self>, copy_constructible<> > concept_type; model_const m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(), 10); } BOOST_AUTO_TEST_CASE(test_global_has_f1_0_void) { typedef ::boost::mpl::vector< global_has_f1_0<void()>, copy_constructible<> > concept_type; model m(10); any<concept_type> x(m); x.f1(); } BOOST_TYPE_ERASURE_MEMBER((global_has_f1_1), f1, 1); BOOST_AUTO_TEST_CASE(test_global_has_f1_overload) { typedef ::boost::mpl::vector< global_has_f1_0<int()>, global_has_f1_1<int(int)>, copy_constructible<> > concept_type; model m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(), 10); BOOST_CHECK_EQUAL(x.f1(5), 15); } BOOST_AUTO_TEST_CASE(test_global_has_f1_overload_const) { typedef ::boost::mpl::vector< global_has_f1_0<int(), const _self>, global_has_f1_1<int(int), const _self>, copy_constructible<> > concept_type; model_const m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(), 10); BOOST_CHECK_EQUAL(x.f1(5), 15); } struct model_overload_const_non_const { int f1() { return 1; } int f1() const { return 2; } }; BOOST_AUTO_TEST_CASE(test_global_has_f1_overload_const_non_const) { typedef ::boost::mpl::vector< global_has_f1_0<int(), _self>, global_has_f1_0<int(), const _self>, copy_constructible<> > concept_type; model_overload_const_non_const m; any<concept_type> x1(m); BOOST_CHECK_EQUAL(x1.f1(), 1); const any<concept_type> x2(m); BOOST_CHECK_EQUAL(x2.f1(), 2); } #ifndef BOOST_NO_CXX11_RVALUE_REFERENCES BOOST_AUTO_TEST_CASE(test_global_has_f1_rv) { typedef ::boost::mpl::vector< global_has_f1_1<int(int&&)>, copy_constructible<> > concept_type; model m(10); any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(5), 15); } BOOST_AUTO_TEST_CASE(test_global_has_f1_rv_const) { typedef ::boost::mpl::vector< global_has_f1_1<int(int&&), const _self>, copy_constructible<> > concept_type; model_const m(10); const any<concept_type> x(m); BOOST_CHECK_EQUAL(x.f1(5), 15); } #endif
You can count on Bathroom Shelf Guys to present the very best service for Bathroom Shelves in Kaleva, MI. Our workforce of highly skilled contractors will provide the expert services you'll need with the most innovative solutions around. We work with premium supplies and affordable solutions to be sure that you will enjoy the best services for the best value. Contact us at 888-613-4468 to learn more. Lowering costs is a valuable part of the task. You still need to have top quality results on Bathroom Shelves in Kaleva, MI, so you can rely on our business to save you money while continually giving the finest quality services. Our endeavors to save a little money will never eliminate the high quality of our services. Whenever you choose us, you will receive the advantage of our own practical experience and superior materials to be sure that any project can last even while saving your time and money. That is attainable given that we know how to save your time and cash on materials and labor. Save time and money through calling Bathroom Shelf Guys now. Dial 888-613-4468 to speak with our customer support associates, right now. When you're thinking of Bathroom Shelves in Kaleva, MI, you've got to be well informed to make the best judgments. We ensure that you know what should be expected. This is exactly why we try to make every attempt to ensure you comprehend the project and are not facing any unexpected situations. Start by discussing the project with our customer care representatives once you contact 888-613-4468. We will go over your questions and concerns when you contact us and help you get set up with an appointment. We are going to work together with you throughout the whole project, and our crew is going to arrive promptly and organized. Plenty of reasons exist to choose Bathroom Shelf Guys regarding Bathroom Shelves in Kaleva, MI. We have got the best customer support scores, the highest quality resources, and the most helpful and productive money saving solutions. Our company is here to serve you with the greatest experience and expertise available. Contact 888-613-4468 when you need Bathroom Shelves in Kaleva, and we're going to work with you to systematically accomplish your task.
Acadock Monitoring - Docker container monitoring ================================================ This webservice provides live data on Docker containers. It takes data from the Linux kernel control groups and from the namespace of the container and exposes them through an HTTP API. > The solution is still a work in progress. Configuration ------------- From environment * `PORT`: port to bind (4244 by default) * `DOCKER_URL`: docker endpoint (http://127.0.0.1:4243 by default) * `REFRESH_TIME`: number of seconds between CPU/net refreshes (1 by default) * `PROC_DIR`: mountpoint for procfs (default to /proc) * `RUNNER_DIR`: directory of the runner, a process run in the namespaces of containers (default to /usr/bin) * `CGROUP_DIR`: mountpoint of cgroups (default to /sys/fs/cgroup) * `CGROUP_SOURCE`: "docker" or "systemd" (docker by default) docker: /sys/fs/cgroup/:cgroup/memory/docker systemd: /sys/fs/cgroup/:cgroup/memory/system.slice/docker-#{id}.slice * `DEBUG`: output of debugging information (default "false", switch to "true" to enable) Docker ------ Run from docker: ``` docker run -v /sys/fs/cgroup:/host/cgroup:ro -e CGROUP_DIR=/host/cgroup \ -v /proc:/host/proc:ro -e PROC_DIR=/host/proc \ -v /var/run/docker.sock:/host/docker.sock -e DOCKER_URL=unix:///host/docker.sock \ -p 4244:4244 --privileged --pid=host \ -d scalingo/acadock-monitoring ``` `--pid=host`: The daemon has to find the real /proc/#{pid}/ns directory to enter a namespace `--privileged`: Acadock has to enter the other containers' namespaces API --- * Memory consumption (in bytes) `GET /containers/:id/mem` Return 200 OK Content-Type: text/plain * CPU usage (percentage) Return 200 OK Content-Type: text/plain `GET /containers/:id/cpu` * Network usage (bytes and percentage) Return 200 OK Content-Type: application/json `GET /containers/:id/net` ### Developers > Léo Unbekandt `<[email protected]>`
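To give a feel for how the API above is consumed, here is a minimal client sketch. It assumes the service is reachable on localhost:4244, that `CONTAINER_ID` is replaced with a real container ID, and that the plain-text endpoints return a bare number in the response body.

```python
# Minimal sketch of a client for the endpoints listed above.
# Assumptions: service on localhost:4244, CONTAINER_ID is a real container ID,
# and /mem and /cpu return a bare number as text/plain.
import requests

BASE_URL = "http://localhost:4244"
CONTAINER_ID = "replace-with-a-container-id"  # placeholder

mem_bytes = int(requests.get(f"{BASE_URL}/containers/{CONTAINER_ID}/mem").text.strip())
cpu_percent = float(requests.get(f"{BASE_URL}/containers/{CONTAINER_ID}/cpu").text.strip())
net = requests.get(f"{BASE_URL}/containers/{CONTAINER_ID}/net").json()  # bytes + percentage

print(f"memory: {mem_bytes} bytes, cpu: {cpu_percent}%")
print("network:", net)
```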
---
layout: politician2
title: shankarlal agrawala
profile:
  party: PROUTIST
  constituency: Sambalpur
  state: Orissa
  education:
    level: 12th Pass
    details: 12th from kuchinda college sambalpur university in 1976 77 10th from kuchinda g.high school in 1974
  photo:
  sex:
  caste:
  religion:
  current-office-title:
  crime-accusation-instances: 0
  date-of-birth: 1958
  profession:
  networth:
  assets: 1,64,66,020
  liabilities:
  pan:
  twitter:
  website:
  youtube-interview:
  wikipedia:
candidature:
  - election: Lok Sabha 2014
    myneta-link: http://myneta.info/ls2014/candidate.php?candidate_id=1146
    affidavit-link:
    expenses-link:
    constituency: Sambalpur
    party: PROUTIST
    criminal-cases: 0
    assets: 1,64,66,020
    liabilities:
    result:
crime-record:
date: 2014-01-28
version: 0.0.5
tags:
---

##Summary

##Education
{% include "education.html" %}

##Political Career
{% include "political-career.html" %}

##Criminal Record
{% include "criminal-record.html" %}

##Personal Wealth
{% include "personal-wealth.html" %}

##Public Office Track Record
{% include "track-record.html" %}

##References
{% include "references.html" %}
<?php
/**
 * Zend Framework (http://framework.zend.com/)
 *
 * @link      http://github.com/zendframework/zf2 for the canonical source repository
 * @copyright Copyright (c) 2005-2015 Zend Technologies USA Inc. (http://www.zend.com)
 * @license   http://framework.zend.com/license/new-bsd New BSD License
 */

namespace ZendTest\Tag\Cloud\TestAsset;

class CloudDummy1 extends \Zend\Tag\Cloud\Decorator\HtmlCloud
{
    protected $_foo;

    public function setFoo($value)
    {
        $this->_foo = $value;
    }

    public function getFoo()
    {
        return $this->_foo;
    }
}
Craftfest or Tattoo Expo - decisions, decisions!

So, what do you do when confronted with the following branches in the road – turn right for Craftfest 2009, or go straight on for the Sydney Tattoo and Body Art Expo? That was the dilemma when we went to Olympic Park on the weekend.

Considering we made the trip specifically to visit Craftfest, it would have been wrong to not go in. I am a fan of most craft fairs although I prefer a paper crafts focus rather than, say, quilting. The Craftfest one had a bit of everything, including beads (lots), sewing machines, folky arts and crafts, and stuff I wouldn’t know what to do with. Fortunately, it also had some favourite stamping and scrapbooking places where I stocked up. I particularly like the cupcake stamps and the new Versamark Dazzle ink pad.

There were a couple of people at Craftfest who looked distinctly out of place. Maybe they mistook the pergamano paper piercing stand for ‘body piercing’. We did not have time to check out the Tattoo expo after all, but walking past the hall it was in, you would not find a greater contrast to the crowd at Craftfest. That and the band that was rocking inside (loud!) and the motorbikes parked out the front.

Next on the agenda is the Craft and Quilt Fair at Darling Harbour in August. Perhaps they should have the speedboat and sport fishing enthusiast’s expo next door, to keep things interesting. I wonder if the tattoo people were having as much of a stare at the craft people. Perhaps it was a one way gawk.

Errr, proooobably the Crafts Expo. Haha! Those cupcake stamps are awesome! Great catch!

hi Sally - is there a difference, I didn't really notice, LOL!

hi FFichiban - I can just see you with a tattoo of a burger or something foody!

hi Leona - thanks! I love your restaurant reviews - you do get to a lot of places!

hi Betty - I know the feeling, I'd say 85% of the stuff I buy is not used. See you at the next Craft show?
# Peperomia fragilis Yunck. SPECIES

#### Status
ACCEPTED

#### According to
International Plant Names Index

#### Published in
null

#### Original name
null

### Remarks
null
About Him/Her: I am basically from a science background and switched over to engineering in 2004. I completed an M. Sc. in Physics followed by an M. Tech. in Energy Technology. You may google me to find out more about me. I like to spend most of my time on research work.

How would he/she like to contribute: I have been attached to XOBDO since April 2007 and have been trying to glorify it by sticking to the correctness of its contents. I am usually drawn towards the encyclopedic entries on wildlife and birds.
· Complete syllabus coverage for AIPMT & AIIMS and School/Board (Class 11th and 12th). Topic-wise books perfectly suitable for School and Engineering Aspirants.

· Complete Study Package of 7 Books.

· Total number of pages in the 7 Books is 1690.

· 983 solved and 6194 unsolved questions.

1. Thermodynamics, Chemical Equilibrium, Ionic Equilibrium, Redox Reactions.

2. General Organic Chemistry (Nomenclature, Isomerism, Reaction Mechanism), Hydrocarbons (Alkanes, Alkenes, Alkynes, Arenes, Halogens, Petroleum).

3. Halogen Compounds, Oxygen Compounds (Alcohols, Phenols & Ethers), Carbonyl Compounds (Aldehydes & Ketones), Carboxylic & Benzoic Acids, Nitrogen Compounds (Amines, Aniline, Nitro Compounds, Cyanides & Isocyanides).

4. States of Matter, Electrochemistry, Surface Chemistry, Chemical Kinetics, Solutions.

5. Some Basic Principles of Chemistry, Structure of Atom, States of Matter - Gaseous State, States of Matter - Liquid State, Periodic Table, Chemical Bonding.

6. Hydrogen, s-Block Elements, p-Block Elements, Environmental Chemistry.

7. p-Block Elements (Groups 15, 16, 17 & 18), d & f Block Elements, Polymers, Biomolecules, Chemistry in Everyday Life.
Welcome to The New Ambassador! The Ambassador is the place to host any event within our spacious, state-of-the-art, 3-in-1 venue. We at the Ambassador have hosted many of today's artists from all genres. Our auditorium can hold more than 2,000 people. Our elegant ballrooms can each accommodate 200+ guests with ease. Klub Klymaxx offers a new twist on evening entertainment; bringing to the variety of music, Ala Carte and premium . All in all, The New Ambassador is a unique, local venue with space for any , occasion, or gathering.