---
title: My Chrome Extensions and Themes (continuously updated)
date: 2016-09-11 22:14:50
description: "对自己的 Chrome 进行一个重新整理"
categories: [Software]
tags: [Chrome]
---
<!-- more -->

## Motivation
- To tidy up my own habits. If you know a good extension, please leave a comment and recommend it; anything that improves my efficiency is very welcome!
- I won't go on about each extension below; click through to an extension's store page for its description.
## Extensions
### The list
- **Official store**:<https://chrome.google.com/webstore/category/extensions>
- Everyday use:
- **LastPass**:<https://chrome.google.com/webstore/detail/lastpass-free-password-ma/hdokiejnpimakedhajhdlcegeplioahd>
- **Proxy SwitchyOmega**:<https://chrome.google.com/webstore/detail/proxy-switchyomega/padekgcemlokbadohgkifijomclgjgif>
- **Adblock Plus**:<https://chrome.google.com/webstore/detail/adblock-plus/cfhdojbkjhnklbpkdaibdccddilifddb>
- **Ghostery**:<https://chrome.google.com/webstore/detail/ghostery/mlomiejdfkolichcflejclcbmpeaniij>
- **SimpleUndoClose**:<https://chrome.google.com/webstore/detail/simpleundoclose/emhohdghchmjepmigjojkehidlielknj>
- **Better History**:<https://chrome.google.com/webstore/detail/better-history/obciceimmggglbmelaidpjlmodcebijb>
- **Text Link**:<https://chrome.google.com/webstore/detail/text-link/ikfmghnmgeicocakijcebpkmbfljnogk>
- **Markdown Here**:<https://chrome.google.com/webstore/detail/markdown-here/elifhakcjgalahccnjkneoccemfahfoa>
- **为知笔记网页剪辑器**:<https://chrome.google.com/webstore/detail/wiznote-web-clipper/jfanfpmalehkemdiiebjljddhgojhfab>
- **Chrono下载管理器**:<https://chrome.google.com/webstore/detail/chrono-download-manager/mciiogijehkdemklbdcbfkefimifhecn>
- **二维码生成器**:<https://chrome.google.com/webstore/detail/%E4%BA%8C%E7%BB%B4%E7%A0%81%E7%94%9F%E6%88%90%E5%99%A8/ajaomcmkalmeeahjfdklkcjbljhbokjl>
- **划词翻译**:<https://chrome.google.com/webstore/detail/%E5%88%92%E8%AF%8D%E7%BF%BB%E8%AF%91/ikhdkkncnoglghljlkmcimlnlhkeamad>
- **Google 翻译**:<https://chrome.google.com/webstore/detail/google-translate/aapbdbdomjkkjkaonfhkkikfgjllcleb>
- **右键搜**:<https://chrome.google.com/webstore/detail/context-menus/phlfmkfpmphogkomddckmggcfpmfchpn>
- **捕捉网页截图 - FireShot**:<https://chrome.google.com/webstore/detail/take-webpage-screenshots/mcbpblocgmgfnpjjppndjkmgjaogfceg>
- **Aerys - 窗口标签管理器**:<https://chrome.google.com/webstore/detail/aerys-tab-manager/kclbicheojedbinfjdjjolmciodoihkl>
- Development:
- **Wappalyzer** (see what technologies a site uses):<https://chrome.google.com/webstore/detail/wappalyzer/gppongmhjkpfnbhagpmjfkannfbllamg>
- **JSONView**:<https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc>
- **Octotree**:<https://chrome.google.com/webstore/detail/octotree/bkhaagjahfmjljalopjnoealnfndnagc>
- **Postman**:<https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop>
- **EditThisCookie**:<https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg?utm_source=chrome-ntp-icon>
- Rarely used:
- **惠惠购物助手**:<https://chrome.google.com/webstore/detail/%E6%83%A0%E6%83%A0%E8%B4%AD%E7%89%A9%E5%8A%A9%E6%89%8B/ohjkicjidmohhfcjjlahfppkdblibkkb>
- [购物党全网自动比价工具](https://chrome.google.com/webstore/detail/%E8%B4%AD%E7%89%A9%E5%85%9A%E5%85%A8%E7%BD%91%E8%87%AA%E5%8A%A8%E6%AF%94%E4%BB%B7%E5%B7%A5%E5%85%B7%EF%BC%9A%E6%B7%98%E5%AE%9D%E4%BA%AC%E4%B8%9C%E7%BE%8E%E4%BA%9A%E6%97%A5%E4%BA%9A%E6%AF%94%E4%BB%B7%E3%80%8118/jgphnjokjhjlcnnajmfjlacjnjkhleah)
- **为什么你们就是不能加个空格呢**:<https://chrome.google.com/webstore/detail/%E7%82%BA%E4%BB%80%E9%BA%BC%E4%BD%A0%E5%80%91%E5%B0%B1%E6%98%AF%E4%B8%8D%E8%83%BD%E5%8A%A0%E5%80%8B%E7%A9%BA%E6%A0%BC%E5%91%A2%EF%BC%9F/paphcfdffjnbcgkokihcdjliihicmbpd>
- **Context**:<https://chrome.google.com/webstore/detail/context/aalnjolghjkkogicompabhhbbkljnlka>
- **Spaces**:<https://chrome.google.com/webstore/detail/spaces/cenkmofngpohdnkbjdpilgpmbiiljjim>
- **The Great Suspender**:<https://chrome.google.com/webstore/detail/the-great-suspender/klbibkeccnjlkjkiokjodocebajanakg>
- **阅读模式**:<https://chrome.google.com/webstore/detail/reader-view/iibolhpkjjmoepndefdmdlmbpfhlgjpl>
- **Enable Copy**:<https://chrome.google.com/webstore/detail/enable-copy/lmnganadkecefnhncokdlaohlkneihio>
- **Linkclump**:<https://chrome.google.com/webstore/detail/linkclump/lfpjkncokllnfokkgpkobnkbkmelfefj>
- **Easy Auto Refresh**:<https://chrome.google.com/webstore/detail/easy-auto-refresh/aabcgdmkeabbnleenpncegpcngjpnjkc>
- **Xmarks**:<https://chrome.google.com/webstore/detail/xmarks-bookmark-sync/ajpgkpeckebdhofmmjfgcjjiiejpodla>
//
// ========================================================================
// Copyright (c) 1995-2020 Mort Bay Consulting Pty Ltd and others.
//
// This program and the accompanying materials are made available under
// the terms of the Eclipse Public License 2.0 which is available at
// https://www.eclipse.org/legal/epl-2.0
//
// This Source Code may also be made available under the following
// Secondary Licenses when the conditions for such availability set
// forth in the Eclipse Public License, v. 2.0 are satisfied:
// the Apache License v2.0 which is available at
// https://www.apache.org/licenses/LICENSE-2.0
//
// SPDX-License-Identifier: EPL-2.0 OR Apache-2.0
// ========================================================================
//
import org.eclipse.jetty.alpn.conscrypt.server.ConscryptServerALPNProcessor;
import org.eclipse.jetty.io.ssl.ALPNProcessor;
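// Module descriptor: publishes the Conscrypt-based ALPN processor so the
// Jetty ALPN server module can pick it up via the ServiceLoader mechanism.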
module org.eclipse.jetty.alpn.conscrypt.server
{
requires org.conscrypt;
requires org.slf4j;
requires transitive org.eclipse.jetty.alpn.server;
provides ALPNProcessor.Server with ConscryptServerALPNProcessor;
}
from decimal import Decimal
import pytest
from django_scopes import scopes_disabled
from pretix.base.models import TaxRule
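# Expected API representation of the taxrule fixture used by the tests below.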
TEST_TAXRULE_RES = {
'name': {'en': 'VAT'},
'rate': '19.00',
'price_includes_tax': True,
'eu_reverse_charge': False,
'home_country': ''
}
@pytest.mark.django_db
def test_rule_list(token_client, organizer, event, taxrule):
res = dict(TEST_TAXRULE_RES)
res["id"] = taxrule.pk
resp = token_client.get('/api/v1/organizers/{}/events/{}/taxrules/'.format(organizer.slug, event.slug))
assert resp.status_code == 200
assert [res] == resp.data['results']
@pytest.mark.django_db
def test_rule_detail(token_client, organizer, event, taxrule):
res = dict(TEST_TAXRULE_RES)
res["id"] = taxrule.pk
resp = token_client.get('/api/v1/organizers/{}/events/{}/taxrules/{}/'.format(organizer.slug, event.slug,
taxrule.pk))
assert resp.status_code == 200
assert res == resp.data
@pytest.mark.django_db
def test_rule_create(token_client, organizer, event):
resp = token_client.post(
'/api/v1/organizers/{}/events/{}/taxrules/'.format(organizer.slug, event.slug),
{
"name": {"en": "VAT", "de": "MwSt"},
"rate": "19.00",
"price_includes_tax": True,
"eu_reverse_charge": False,
"home_country": "DE"
},
format='json'
)
assert resp.status_code == 201
rule = TaxRule.objects.get(pk=resp.data['id'])
assert rule.name.data == {"en": "VAT", "de": "MwSt"}
assert rule.rate == Decimal("19.00")
assert rule.price_includes_tax is True
assert rule.eu_reverse_charge is False
assert str(rule.home_country) == "DE"
@pytest.mark.django_db
def test_rule_update(token_client, organizer, event, taxrule):
resp = token_client.patch(
'/api/v1/organizers/{}/events/{}/taxrules/{}/'.format(organizer.slug, event.slug, taxrule.pk),
{
"rate": "20.00",
},
format='json'
)
assert resp.status_code == 200
taxrule.refresh_from_db()
assert taxrule.rate == Decimal("20.00")
assert taxrule.all_logentries().last().action_type == 'pretix.event.taxrule.changed'
@pytest.mark.django_db
def test_rule_delete(token_client, organizer, event, taxrule):
resp = token_client.delete(
'/api/v1/organizers/{}/events/{}/taxrules/{}/'.format(organizer.slug, event.slug, taxrule.pk),
)
assert resp.status_code == 204
assert not event.tax_rules.exists()
@pytest.mark.django_db
def test_rule_delete_forbidden(token_client, organizer, event, taxrule):
with scopes_disabled():
event.items.create(name="Budget Ticket", default_price=23, tax_rule=taxrule)
resp = token_client.delete(
'/api/v1/organizers/{}/events/{}/taxrules/{}/'.format(organizer.slug, event.slug, taxrule.pk),
)
assert resp.status_code == 403
assert event.tax_rules.exists()
/*
* Copyright (C) 2009, 2010 Red Hat Inc, Steven Rostedt <[email protected]>
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation;
* version 2.1 of the License (not later!)
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this program; if not, see <http://www.gnu.org/licenses>
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#ifndef _LARGEFILE64_SOURCE
#define _LARGEFILE64_SOURCE
#endif
#include <stdlib.h>
#include <fcntl.h>
#include <errno.h>
#include <string>
#include <vector>
#include <forward_list>
#include <csetjmp>
#include <unordered_map>
#include <unordered_set>
#include <algorithm>
#include <future>
#ifdef WIN32
#include <io.h>
#define open _open
#define close _close
#define read _read
#define lseek64 _lseeki64
#define dup _dup
#else
#define USE_MMAP
#include <sys/mman.h>
#include <sys/param.h>
#include <unistd.h>
#ifdef __APPLE__
#define lseek64 lseek
#define off64_t off_t
#endif
#endif
extern "C"
{
#include "event-parse.h"
#include "kbuffer.h"
}
#include "../gpuvis_macros.h"
#include "trace-read.h"
enum
{
TRACECMD_OPTION_DONE,
TRACECMD_OPTION_DATE,
TRACECMD_OPTION_CPUSTAT,
TRACECMD_OPTION_BUFFER,
TRACECMD_OPTION_TRACECLOCK,
TRACECMD_OPTION_UNAME,
TRACECMD_OPTION_HOOK,
TRACECMD_OPTION_OFFSET,
TRACECMD_OPTION_CPUCOUNT,
TRACECMD_OPTION_VERSION,
TRACECMD_OPTION_SAVED_TGIDS = 32,
};
enum
{
TRACECMD_FL_BUFFER_INSTANCE = ( 1 << 1 ),
TRACECMD_FL_LATENCY = ( 1 << 2 ),
};
typedef struct tracecmd_input tracecmd_input_t;
typedef struct pevent pevent_t;
typedef struct pevent_record pevent_record_t;
typedef struct kbuffer kbuffer_t;
typedef struct event_format event_format_t;
typedef struct file_info
{
int done;
tracecmd_input_t *handle;
pevent_record_t *record;
} file_info_t;
typedef struct page
{
off64_t offset;
tracecmd_input_t *handle;
void *map;
int ref_count;
} page_t;
typedef struct cpu_data
{
/* the first two never change */
unsigned long long file_offset = 0;
unsigned long long file_size = 0;
unsigned long long offset = 0;
unsigned long long size = 0;
unsigned long long timestamp = 0;
std::forward_list< page_t * > pages;
pevent_record_t *next_record = nullptr;
page_t *page = nullptr;
kbuffer_t *kbuf = nullptr;
pevent_record_t event_record;
} cpu_data_t;
typedef struct input_buffer_instance
{
char *name;
size_t offset;
} input_buffer_instance_t;
typedef struct tracecmd_input
{
pevent_t *pevent = nullptr;
tracecmd_input_t *parent = nullptr;
unsigned long flags = 0;
int fd = -1;
int long_size = 0;
unsigned long page_size = 0;
int cpus = 0;
int ref = 0;
int nr_buffers = 0; /* buffer instances */
bool use_trace_clock = false;
#ifdef USE_MMAP
bool read_page = false;
#endif
cpu_data_t *cpu_data = nullptr;
unsigned long long ts_offset = 0;
input_buffer_instance_t *buffers = nullptr;
std::string file;
std::string uname;
std::string opt_version;
std::vector< std::string > cpustats;
/* file information */
size_t header_files_start = 0;
size_t ftrace_files_start = 0;
size_t event_files_start = 0;
size_t total_file_size = 0;
std::jmp_buf jump_buffer;
} tracecmd_input_t;
void logf( const char *fmt, ... ) ATTRIBUTE_PRINTF( 1, 2 );
[[noreturn]] static void die( tracecmd_input_t *handle, const char *fmt, ... ) ATTRIBUTE_PRINTF( 2, 3 );
[[noreturn]] static void die( tracecmd_input_t *handle, const char *fmt, ... )
{
int ret;
va_list ap;
char *buf = NULL;
va_start( ap, fmt );
ret = vasprintf( &buf, fmt, ap );
va_end( ap );
if ( ret >= 0 )
{
logf( "[Error] %s", buf );
free( buf );
}
std::longjmp( handle->jump_buffer, -1 );
}
static void *trace_malloc( tracecmd_input_t *handle, size_t size )
{
void *ret = malloc( size );
if ( !ret && handle )
die( handle, "%s(%lu) failed\n", __func__, size );
return ret;
}
static void *trace_realloc( tracecmd_input_t *handle, void *ptr, size_t size )
{
void *ret = realloc( ptr, size );
if ( !ret && handle )
die( handle, "%s(%lu) failed\n", __func__, size );
return ret;
}
static size_t do_read( tracecmd_input_t *handle, void *data, size_t size )
{
ssize_t ret;
ret = TEMP_FAILURE_RETRY( read( handle->fd, data, size ) );
if ( ret < 0 )
{
die( handle, "%s(\"%s\") failed: %s (%d)\n", __func__, handle->file.c_str(),
strerror( errno ), errno );
}
return ret;
}
static void do_read_check( tracecmd_input_t *handle, void *data, size_t size )
{
size_t ret;
ret = do_read( handle, data, size );
if ( ret != size )
{
die( handle, "%s(\"%s\") failed: %s (%d)\n", __func__, handle->file.c_str(),
strerror( errno ), errno );
}
}
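/*
* read_string - read a NUL-terminated string from the trace file in BUFSIZ
* chunks, then seek the file descriptor back so it points just past the
* terminating NUL. Returns a malloc'd string, or NULL on failure.
*/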
static char *read_string( tracecmd_input_t *handle )
{
char *str = NULL;
char buf[ BUFSIZ ];
unsigned int i, r;
unsigned int size = 0;
for ( ;; )
{
r = do_read( handle, buf, BUFSIZ );
if ( r <= 0 )
goto fail;
for ( i = 0; i < r; i++ )
{
if ( !buf[ i ] )
break;
}
if ( i < r )
break;
if ( str )
{
size += BUFSIZ;
str = ( char * )trace_realloc( handle, str, size );
memcpy( str + ( size - BUFSIZ ), buf, BUFSIZ );
}
else
{
size = BUFSIZ;
str = ( char * )trace_malloc( handle, size );
memcpy( str, buf, size );
}
}
/* move the file descriptor to the end of the string */
off64_t val;
val = lseek64( handle->fd, -( int )( r - ( i + 1 ) ), SEEK_CUR );
if ( val < 0 )
goto fail;
if ( str )
{
size += i + 1;
str = ( char * )trace_realloc( handle, str, size );
/* append the last chunk after the existing bytes and NUL terminate in bounds */
memcpy( str + ( size - i - 1 ), buf, i );
str[ size - 1 ] = 0;
}
else
{
size = i + 1;
str = ( char * )trace_malloc( handle, size );
memcpy( str, buf, i );
str[ i ] = 0;
}
return str;
fail:
if ( str )
free( str );
return NULL;
}
static unsigned int read4( tracecmd_input_t *handle )
{
unsigned int data;
pevent_t *pevent = handle->pevent;
do_read_check( handle, &data, 4 );
return __data2host4( pevent, data );
}
static unsigned long long read8( tracecmd_input_t *handle )
{
unsigned long long data;
pevent_t *pevent = handle->pevent;
do_read_check( handle, &data, 8 );
return __data2host8( pevent, data );
}
static void read_data_and_size( tracecmd_input_t *handle,
char **data, unsigned long long *size )
{
*size = read8( handle );
*data = ( char * )trace_malloc( handle, *size + 1 );
do_read_check( handle, *data, *size );
}
static void read_header_files( tracecmd_input_t *handle )
{
pevent_t *pevent = handle->pevent;
long long size;
char *header;
char buf[ 64 ];
do_read_check( handle, buf, 12 );
if ( memcmp( buf, "header_page", 12 ) != 0 )
die( handle, "%s: header_page not found.\n", __func__ );
size = read8( handle );
header = ( char * )trace_malloc( handle, size );
do_read_check( handle, header, size );
pevent_parse_header_page( pevent, header, size, handle->long_size );
free( header );
/*
* The size field in the page is of type long,
* use that instead, since it represents the kernel.
*/
handle->long_size = pevent->header_page_size_size;
do_read_check( handle, buf, 13 );
if ( memcmp( buf, "header_event", 13 ) != 0 )
die( handle, "%s: header_event not found.\n", __func__ );
size = read8( handle );
header = ( char * )trace_malloc( handle, size );
do_read_check( handle, header, size );
free( header );
handle->ftrace_files_start = lseek64( handle->fd, 0, SEEK_CUR );
}
static void read_ftrace_file( tracecmd_input_t *handle,
unsigned long long size )
{
char *buf;
pevent_t *pevent = handle->pevent;
buf = ( char * )trace_malloc( handle, size );
do_read_check( handle, buf, size );
if ( pevent_parse_event( pevent, buf, size, "ftrace" ) )
die( handle, "%s: pevent_parse_event failed.\n", __func__ );
free( buf );
}
static void read_event_file( tracecmd_input_t *handle,
char *system, unsigned long long size )
{
pevent_t *pevent = handle->pevent;
char *buf;
buf = ( char * )trace_malloc( handle, size );
do_read_check( handle, buf, size );
if ( pevent_parse_event( pevent, buf, size, system ) )
die( handle, "%s: pevent_parse_event failed.\n", __func__ );
free( buf );
}
static void read_ftrace_files( tracecmd_input_t *handle )
{
unsigned int count;
count = read4( handle );
for ( unsigned int i = 0; i < count; i++ )
{
unsigned long long size;
size = read8( handle );
read_ftrace_file( handle, size );
}
handle->event_files_start = lseek64( handle->fd, 0, SEEK_CUR );
}
static void read_event_files( tracecmd_input_t *handle )
{
unsigned int systems;
systems = read4( handle );
for ( unsigned int i = 0; i < systems; i++ )
{
char *system;
unsigned int count;
system = read_string( handle );
if ( !system )
die( handle, "%s: failed to read system string.\n", __func__ );
count = read4( handle );
for ( unsigned int x = 0; x < count; x++ )
{
unsigned long long size;
size = read8( handle );
read_event_file( handle, system, size );
}
free( system );
}
}
static void parse_proc_kallsyms( pevent_t *pevent, char *file )
{
char *line;
char *next = NULL;
line = strtok_r( file, "\n", &next );
while ( line )
{
char *endptr;
unsigned long long addr;
// Parse lines of this form:
// addr ch func mod
// ffffffffc07ec678 d descriptor.58652\t[bnep]
addr = strtoull( line, &endptr, 16 );
if ( endptr && *endptr == ' ' )
{
char ch;
char *mod;
char *func;
line = endptr + 1;
ch = *line;
// Skip: x86-64 reports per-cpu variable offsets as absolute (A)
if ( ch && ( ch != 'A' ) && ( line[ 1 ] == ' ' ) && line[ 2 ] )
{
func = line + 2;
mod = strchr( func, '\t' );
if ( mod && mod[ 1 ] == '[' )
{
*mod = 0;
mod += 2;
mod[ strlen( mod ) - 1 ] = 0;
}
//$ TODO mikesart PERF: This is adding the item to a func_list
// which gets converted to a sorted array afterwards.
pevent_register_function( pevent, func, addr, mod );
}
}
line = strtok_r( NULL, "\n", &next );
}
}
static void read_proc_kallsyms( tracecmd_input_t *handle )
{
char *buf;
unsigned int size;
pevent_t *pevent = handle->pevent;
size = read4( handle );
if ( !size )
return; /* OK? */
buf = ( char * )trace_malloc( handle, size + 1 );
do_read_check( handle, buf, size );
buf[ size ] = 0;
parse_proc_kallsyms( pevent, buf );
free( buf );
}
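/*
* Parse the saved ftrace printk formats. Each line looks like
* "<addr> : <format string>"; register each format string with pevent.
*/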
static void parse_ftrace_printk( tracecmd_input_t *handle, pevent_t *pevent, char *file )
{
char *line;
char *next = NULL;
line = strtok_r( file, "\n", &next );
while ( line )
{
char *fmt;
char *printk;
char *addr_str;
unsigned long long addr;
addr_str = strtok_r( line, ":", &fmt );
if ( !addr_str )
{
die( handle, "%s: printk format with empty entry.\n", __func__ );
break;
}
addr = strtoull( addr_str, NULL, 16 );
/* fmt still has a space, skip it */
printk = strdup( fmt + 1 );
line = strtok_r( NULL, "\n", &next );
pevent_register_print_string( pevent, printk, addr );
free( printk );
}
}
static void read_ftrace_printk( tracecmd_input_t *handle )
{
char *buf;
unsigned int size;
size = read4( handle );
if ( !size )
return; /* OK? */
buf = ( char * )trace_malloc( handle, size + 1 );
do_read_check( handle, buf, size );
buf[ size ] = 0;
parse_ftrace_printk( handle, handle->pevent, buf );
free( buf );
}
static void parse_cmdlines( pevent_t *pevent, char *file )
{
char *line;
char *next = NULL;
line = strtok_r( file, "\n", &next );
while ( line )
{
int pid;
char *comm;
// Parse "PID CMDLINE"
pid = strtoul( line, &comm, 10 );
if ( comm && *comm == ' ' )
pevent_register_comm( pevent, comm + 1, pid );
line = strtok_r( NULL, "\n", &next );
}
}
static void read_and_parse_cmdlines( tracecmd_input_t *handle )
{
char *cmdlines;
unsigned long long size;
pevent_t *pevent = handle->pevent;
read_data_and_size( handle, &cmdlines, &size );
cmdlines[ size ] = 0;
parse_cmdlines( pevent, cmdlines );
free( cmdlines );
}
static void parse_trace_clock( pevent_t *pevent, char *file )
{
// "[local] global counter uptime perf mono mono_raw x86-tsc\n"
char *clock = strchr( file, '[' );
if ( clock )
{
char *end = strchr( clock, ']' );
if ( end )
{
*end = 0;
pevent_register_trace_clock( pevent, clock + 1 );
}
}
}
/**
* tracecmd_read_headers - read the header information from trace.dat
* @handle: input handle for the trace.dat file
*
* This reads the trace.dat file for various information. Like the
* format of the ring buffer, event formats, ftrace formats, kallsyms
* and printk.
*/
static void tracecmd_read_headers( tracecmd_input_t *handle )
{
read_header_files( handle );
read_ftrace_files( handle );
read_event_files( handle );
read_proc_kallsyms( handle );
read_ftrace_printk( handle );
read_and_parse_cmdlines( handle );
pevent_set_long_size( handle->pevent, handle->long_size );
}
static int read_page( tracecmd_input_t *handle, off64_t offset,
int cpu, void *map )
{
off64_t ret;
off64_t save_seek;
/* other parts of the code may expect the pointer to not move */
save_seek = lseek64( handle->fd, 0, SEEK_CUR );
ret = lseek64( handle->fd, offset, SEEK_SET );
if ( ret < 0 )
return -1;
ret = TEMP_FAILURE_RETRY( read( handle->fd, map, handle->page_size ) );
if ( ret < 0 )
return -1;
/* reset the file pointer back */
lseek64( handle->fd, save_seek, SEEK_SET );
return 0;
}
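/*
* allocate_page - return the page at @offset for @cpu, bumping the ref count
* on a cached page when possible; otherwise read() or mmap() the page from
* the file and add it to this cpu's page list.
*/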
static page_t *allocate_page( tracecmd_input_t *handle, int cpu, off64_t offset )
{
int ret;
cpu_data_t *cpu_data = &handle->cpu_data[ cpu ];
for ( page_t *page : cpu_data->pages )
{
if ( page->offset == offset )
{
page->ref_count++;
return page;
}
}
page_t *page = ( page_t * )trace_malloc( handle, sizeof( *page ) );
memset( page, 0, sizeof( *page ) );
page->offset = offset;
page->handle = handle;
#ifdef USE_MMAP
if ( handle->read_page )
#endif
{
page->map = trace_malloc( handle, handle->page_size );
ret = read_page( handle, offset, cpu, page->map );
if ( ret < 0 )
{
free( page->map );
page->map = NULL;
}
}
#ifdef USE_MMAP
else
{
page->map = mmap( NULL, handle->page_size, PROT_READ, MAP_PRIVATE,
handle->fd, offset );
if ( page->map == MAP_FAILED )
page->map = NULL;
}
#endif
if ( !page->map )
{
free( page );
return NULL;
}
cpu_data->pages.push_front( page );
page->ref_count = 1;
return page;
}
static void __free_page( tracecmd_input_t *handle, int cpu, page_t *page )
{
if ( !page->ref_count )
die( handle, "%s: Page ref count is zero.\n", __func__ );
page->ref_count--;
if ( page->ref_count )
return;
#ifdef USE_MMAP
if ( handle->read_page )
#endif
free( page->map );
#ifdef USE_MMAP
else
munmap( page->map, handle->page_size );
#endif
handle->cpu_data[ cpu ].pages.remove( page );
free( page );
}
static void free_page( tracecmd_input_t *handle, int cpu )
{
if ( !handle->cpu_data ||
cpu >= handle->cpus ||
!handle->cpu_data[ cpu ].page )
return;
__free_page( handle, cpu, handle->cpu_data[ cpu ].page );
handle->cpu_data[ cpu ].page = NULL;
}
static void __free_record( pevent_record_t *record )
{
if ( record->priv )
{
page_t *page = ( page_t * )record->priv;
__free_page( page->handle, record->cpu, page );
}
}
static void free_record( tracecmd_input_t *handle, pevent_record_t *record )
{
if ( !record )
return;
if ( !record->ref_count )
die( handle, "%s: record ref count is zero.\n", __func__ );
record->ref_count--;
if ( record->ref_count )
return;
if ( record->locked )
die( handle, "%s: freeing record when it is locked.\n", __func__ );
record->data = NULL;
__free_record( record );
}
static void free_next( tracecmd_input_t *handle, int cpu )
{
pevent_record_t *record;
if ( !handle->cpu_data || cpu >= handle->cpus )
return;
record = handle->cpu_data[ cpu ].next_record;
if ( !record )
return;
handle->cpu_data[ cpu ].next_record = NULL;
record->locked = 0;
free_record( handle, record );
}
/*
* Page is mapped, now read in the page header info.
*/
static void update_page_info( tracecmd_input_t *handle, int cpu )
{
pevent_t *pevent = handle->pevent;
void *ptr = handle->cpu_data[ cpu ].page->map;
kbuffer_t *kbuf = handle->cpu_data[ cpu ].kbuf;
/* FIXME: handle header page */
if ( pevent->header_page_ts_size != 8 )
die( handle, "%s: expected a long long type for timestamp.\n", __func__ );
kbuffer_load_subbuffer( kbuf, ptr );
if ( ( unsigned long )kbuffer_subbuffer_size( kbuf ) > handle->page_size )
{
die( handle, "%s: bad page read, with size of %d\n", __func__,
kbuffer_subbuffer_size( kbuf ) );
}
handle->cpu_data[ cpu ].timestamp = kbuffer_timestamp( kbuf ) + handle->ts_offset;
}
/*
* get_page maps a page for a given cpu.
*
* Returns 1 if the page was already mapped,
* 0 if it mapped successfully
* -1 on error
*/
static int get_page( tracecmd_input_t *handle, int cpu,
unsigned long long offset )
{
/* Don't map if the page is already where we want */
if ( handle->cpu_data[ cpu ].offset == offset &&
handle->cpu_data[ cpu ].page )
return 1;
/* Don't map anything if this CPU has no data */
if ( !handle->cpu_data[ cpu ].size )
return -1;
if ( offset & ( handle->page_size - 1 ) )
die( handle, "%s: bad page offset %llx\n", __func__, offset );
if ( offset < handle->cpu_data[ cpu ].file_offset ||
offset > handle->cpu_data[ cpu ].file_offset +
handle->cpu_data[ cpu ].file_size )
{
die( handle, "%s: bad page offset %llx\n", __func__, offset );
return -1;
}
handle->cpu_data[ cpu ].offset = offset;
handle->cpu_data[ cpu ].size = ( handle->cpu_data[ cpu ].file_offset +
handle->cpu_data[ cpu ].file_size ) - offset;
free_page( handle, cpu );
handle->cpu_data[ cpu ].page = allocate_page( handle, cpu, offset );
if ( !handle->cpu_data[ cpu ].page )
return -1;
update_page_info( handle, cpu );
return 0;
}
static int get_next_page( tracecmd_input_t *handle, int cpu )
{
unsigned long long offset;
if ( !handle->cpu_data[ cpu ].page )
return 0;
free_page( handle, cpu );
if ( handle->cpu_data[ cpu ].size <= handle->page_size )
{
handle->cpu_data[ cpu ].offset = 0;
return 0;
}
offset = handle->cpu_data[ cpu ].offset + handle->page_size;
return get_page( handle, cpu, offset );
}
/**
* tracecmd_peek_data - return the record at the current location.
* @handle: input handle for the trace.dat file
* @cpu: the CPU to pull from
*
* This returns the record at the current location of the CPU
* iterator. It does not increment the CPU iterator.
*/
static pevent_record_t *tracecmd_peek_data( tracecmd_input_t *handle, int cpu )
{
pevent_record_t *record;
unsigned long long ts;
kbuffer_t *kbuf;
page_t *page;
void *data;
if ( cpu >= handle->cpus )
return NULL;
page = handle->cpu_data[ cpu ].page;
kbuf = handle->cpu_data[ cpu ].kbuf;
if ( handle->cpu_data[ cpu ].next_record )
{
record = handle->cpu_data[ cpu ].next_record;
if ( !record->data )
die( handle, "%s: Something freed the record.\n", __func__ );
if ( handle->cpu_data[ cpu ].timestamp == record->ts )
return record;
/*
* The timestamp changed, which means the cached
* record is no longer valid. Reread a new record.
*/
free_next( handle, cpu );
}
read_again:
if ( !page )
return NULL;
data = kbuffer_read_event( kbuf, &ts );
if ( !data )
{
if ( get_next_page( handle, cpu ) )
return NULL;
page = handle->cpu_data[ cpu ].page;
goto read_again;
}
handle->cpu_data[ cpu ].timestamp = ts + handle->ts_offset;
record = &handle->cpu_data[ cpu ].event_record;
record->ts = handle->cpu_data[ cpu ].timestamp;
record->offset = handle->cpu_data[ cpu ].offset + kbuffer_curr_offset( kbuf );
record->missed_events = 0;
record->record_size = kbuffer_curr_size( kbuf );
record->size = kbuffer_event_size( kbuf );
record->data = data;
record->cpu = cpu;
record->ref_count = 1;
record->locked = 1;
record->priv = page;
handle->cpu_data[ cpu ].next_record = record;
page->ref_count++;
kbuffer_next_event( kbuf, NULL );
return record;
}
/**
* tracecmd_read_data - read the next record and increment
* @handle: input handle for the trace.dat file
* @cpu: the CPU to pull from
*
* This returns the record at the current location of the CPU
* iterator and increments the CPU iterator.
*
* The record returned must be freed.
*/
static pevent_record_t *tracecmd_read_data( tracecmd_input_t *handle, int cpu )
{
pevent_record_t *record;
record = tracecmd_peek_data( handle, cpu );
handle->cpu_data[ cpu ].next_record = NULL;
if ( record )
record->locked = 0;
return record;
}
/**
* tracecmd_peek_next_data - return the next record
* @handle: input handle to the trace.dat file
* @rec_cpu: return pointer to the CPU that the record belongs to
*
* This returns the next record by time. This is different than
* tracecmd_peek_data in that it looks at all CPUs. It does a peek
* at each CPU and the record with the earliest time stamp is
* returned. If @rec_cpu is not NULL it gets the CPU id the record was
* on. It does not increment the CPU iterator.
*/
static pevent_record_t *tracecmd_peek_next_data( tracecmd_input_t *handle, int *rec_cpu )
{
int cpu;
int next_cpu;
unsigned long long ts;
pevent_record_t *record, *next_record = NULL;
if ( rec_cpu )
*rec_cpu = -1;
next_cpu = -1;
ts = 0;
for ( cpu = 0; cpu < handle->cpus; cpu++ )
{
record = tracecmd_peek_data( handle, cpu );
if ( record && ( !next_record || record->ts < ts ) )
{
ts = record->ts;
next_cpu = cpu;
next_record = record;
}
}
if ( next_record )
{
if ( rec_cpu )
*rec_cpu = next_cpu;
return next_record;
}
return NULL;
}
/**
* tracecmd_read_next_data - read the next record
* @handle: input handle to the trace.dat file
* @rec_cpu: return pointer to the CPU that the record belongs to
*
* This returns the next record by time. This is different than
* tracecmd_read_data in that it looks at all CPUs. It does a peek
* at each CPU and the record with the earliest time stamp is
* returned. If @rec_cpu is not NULL it gets the CPU id the record was
* on. The CPU cursor of the returned record is moved to the
* next record.
*
* Multiple reads of this function will return a serialized list
* of all records for all CPUs in order of time stamp.
*
* The record returned must be freed.
*/
static pevent_record_t *tracecmd_read_next_data( tracecmd_input_t *handle, int *rec_cpu )
{
int next_cpu;
pevent_record_t *record;
record = tracecmd_peek_next_data( handle, &next_cpu );
if ( !record )
return NULL;
if ( rec_cpu )
*rec_cpu = next_cpu;
return tracecmd_read_data( handle, next_cpu );
}
static int init_cpu( tracecmd_input_t *handle, int cpu )
{
cpu_data_t *cpu_data = &handle->cpu_data[ cpu ];
cpu_data->offset = cpu_data->file_offset;
cpu_data->size = cpu_data->file_size;
cpu_data->timestamp = 0;
if ( !cpu_data->size )
{
//$ TODO: printf( "CPU %d is empty.\n", cpu );
return 0;
}
cpu_data->page = allocate_page( handle, cpu, cpu_data->offset );
#ifdef USE_MMAP
if ( !cpu_data->page && !handle->read_page )
{
//$ TODO mikesart: This just should never happen, yes?
die( handle, "%s: Can't mmap file, will read instead.\n", __func__ );
if ( cpu )
{
/*
* If the other CPUs had data and were able to
* mmap it, then bail.
*/
for ( int i = 0; i < cpu; i++ )
{
if ( handle->cpu_data[ i ].size )
return -1;
}
}
/* try again without mmapping, just read it directly */
handle->read_page = true;
cpu_data->page = allocate_page( handle, cpu, cpu_data->offset );
if ( !cpu_data->page )
/* Still no luck, bail! */
return -1;
}
#endif
update_page_info( handle, cpu );
return 0;
}
static void tracecmd_parse_tgids(struct pevent *pevent,
char *file, int size __maybe_unused)
{
char *next = NULL;
int pid, tgid;
char *endptr;
char *line;
line = strtok_r(file, "\n", &next);
while (line) {
pid = strtol(line, &endptr, 10);
if (endptr && *endptr == ' ') {
tgid = strtol(endptr + 1, NULL, 10);
pevent_register_tgid(pevent, tgid, pid);
}
line = strtok_r(NULL, "\n", &next);
}
}
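/*
* handle_options - parse the trace.dat "options" section: a sequence of
* records, each a 2-byte option type followed by a 4-byte payload size and
* the payload itself, terminated by TRACECMD_OPTION_DONE.
*/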
static int handle_options( tracecmd_input_t *handle )
{
for ( ;; )
{
char *buf;
unsigned int size;
unsigned short option;
unsigned long long offset;
do_read_check( handle, &option, 2 );
if ( option == TRACECMD_OPTION_DONE )
break;
/* next 4 bytes is the size of the option */
do_read_check( handle, &size, 4 );
size = __data2host4( handle->pevent, size );
buf = ( char * )trace_malloc( handle, size );
do_read_check( handle, buf, size );
switch ( option )
{
case TRACECMD_OPTION_DATE:
/*
* A time has been mapped that is the
* difference between the timestamps and
* gtod. It is stored as ASCII with '0x'
* prepended.
*/
offset = strtoll( buf, NULL, 0 );
/* Convert from micro to nano */
offset *= 1000;
handle->ts_offset += offset;
break;
case TRACECMD_OPTION_OFFSET:
/*
* Similar to date option, but just adds an
* offset to the timestamp.
*/
offset = strtoll( buf, NULL, 0 );
handle->ts_offset += offset;
break;
case TRACECMD_OPTION_CPUSTAT:
handle->cpustats.push_back( buf );
break;
case TRACECMD_OPTION_BUFFER:
{
input_buffer_instance_t *buffer;
/* A buffer instance is saved at the end of the file */
handle->nr_buffers++;
handle->buffers = ( input_buffer_instance_t * )trace_realloc( handle, handle->buffers,
sizeof( *handle->buffers ) * handle->nr_buffers );
buffer = &handle->buffers[ handle->nr_buffers - 1 ];
buffer->name = strdup( buf + 8 );
if ( !buffer->name )
{
free( handle->buffers );
handle->buffers = NULL;
return -ENOMEM;
}
offset = *( unsigned long long * )buf;
buffer->offset = __data2host8( handle->pevent, offset );
break;
}
case TRACECMD_OPTION_TRACECLOCK:
handle->use_trace_clock = true;
break;
case TRACECMD_OPTION_UNAME:
handle->uname = buf;
break;
case TRACECMD_OPTION_HOOK:
// Used by trace-cmd report --profile. We don't need it.
// hook = tracecmd_create_event_hook( buf );
break;
case TRACECMD_OPTION_CPUCOUNT:
if ( size > sizeof( uint64_t ) )
tracecmd_parse_tgids(handle->pevent, buf, size);
break;
case TRACECMD_OPTION_SAVED_TGIDS:
tracecmd_parse_tgids(handle->pevent, buf, size);
break;
case TRACECMD_OPTION_VERSION:
handle->opt_version = std::string( buf, size );
break;
default:
logf( "%s: unknown option %d\n", __func__, option ) ;
break;
}
free( buf );
}
return 0;
}
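/*
* read_cpu_data - read the per-cpu data headers: an optional "options"
* section, then either a "latency" trace or a "flyrecord" section listing
* each cpu's file offset and size.
*/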
static void read_cpu_data( tracecmd_input_t *handle )
{
char buf[ 10 ];
enum kbuffer_endian endian;
enum kbuffer_long_size long_size;
do_read_check( handle, buf, 10 );
// check if this handles options
if ( strncmp( buf, "options", 7 ) == 0 )
{
if ( handle_options( handle ) < 0 )
die( handle, "%s: handle_options failed.\n", __func__ );
do_read_check( handle, buf, 10 );
}
// Check if this is a latency report or not.
if ( strncmp( buf, "latency", 7 ) == 0 )
{
handle->flags |= TRACECMD_FL_LATENCY;
return;
}
/* We expect this to be flyrecord */
if ( strncmp( buf, "flyrecord", 9 ) != 0 )
die( handle, "%s: flyrecord not found.\n", __func__ );
handle->cpu_data = new ( std::nothrow ) cpu_data_t [ handle->cpus ];
if ( !handle->cpu_data )
die( handle, "%s: new cpu_data_t failed.\n", __func__ );
long_size = ( handle->long_size == 8 ) ? KBUFFER_LSIZE_8 : KBUFFER_LSIZE_4;
endian = handle->pevent->file_bigendian ?
KBUFFER_ENDIAN_BIG : KBUFFER_ENDIAN_LITTLE;
for ( int cpu = 0; cpu < handle->cpus; cpu++ )
{
unsigned long long size;
unsigned long long offset;
handle->cpu_data[ cpu ].kbuf = kbuffer_alloc( long_size, endian );
if ( !handle->cpu_data[ cpu ].kbuf )
die( handle, "%s: kbuffer_alloc failed.\n", __func__ );
if ( handle->pevent->old_format )
kbuffer_set_old_format( handle->cpu_data[ cpu ].kbuf );
offset = read8( handle );
size = read8( handle );
handle->cpu_data[ cpu ].file_offset = offset;
handle->cpu_data[ cpu ].file_size = size;
if ( size && ( offset + size > handle->total_file_size ) )
{
/* this happens if the file got truncated */
die( handle, "%s: File possibly truncated. "
"Need at least %llu, but file size is %zu.\n",
__func__, offset + size, handle->total_file_size );
}
if ( init_cpu( handle, cpu ) < 0 )
die( handle, "%s: init_cpu failed.\n", __func__ );
}
}
static int read_and_parse_trace_clock( tracecmd_input_t *handle,
pevent_t *pevent )
{
char *trace_clock;
unsigned long long size;
read_data_and_size( handle, &trace_clock, &size );
trace_clock[ size ] = 0;
parse_trace_clock( pevent, trace_clock );
free( trace_clock );
return 0;
}
/**
* tracecmd_init_data - prepare reading the data from trace.dat
* @handle: input handle for the trace.dat file
*
* This prepares reading the data from trace.dat. This is called
* after tracecmd_read_headers() and before tracecmd_read_data().
*/
void tracecmd_init_data( tracecmd_input_t *handle )
{
pevent_t *pevent = handle->pevent;
handle->cpus = read4( handle );
pevent_set_cpus( pevent, handle->cpus );
read_cpu_data( handle );
if ( handle->use_trace_clock )
{
/*
* There was a bug in the original setting of
* the trace_clock file which let it get
* corrupted. If it fails to read, force local
* clock.
*/
if ( read_and_parse_trace_clock( handle, pevent ) < 0 )
pevent_register_trace_clock( pevent, "local" );
}
}
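/* Returns 1 on a big-endian host: checks how the bytes 01 02 03 04 read back as a 32-bit int. */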
static inline int tracecmd_host_bigendian( void )
{
unsigned char str[] = { 0x1, 0x2, 0x3, 0x4 };
unsigned int *ptr = ( unsigned int * )str;
return *ptr == 0x01020304;
}
/**
* tracecmd_alloc_fd - create a tracecmd_input handle from a file descriptor
* @fd: the file descriptor for the trace.dat file
*
* Allocate a tracecmd_input handle from a file descriptor and open the
* file. This tests if the file is of trace-cmd format and allocates
* a parse event descriptor.
*
* The returned pointer is not ready to be read yet. A tracecmd_read_headers()
* and tracecmd_init_data() still need to be called on the descriptor.
*/
static void tracecmd_alloc_fd( tracecmd_input_t *handle, const char *file, int fd )
{
char buf[ 64 ];
char *version;
char test[] = { 23, 8, 68 };
handle->file = file;
handle->fd = fd;
handle->ref = 1;
do_read_check( handle, buf, 3 );
if ( memcmp( buf, test, 3 ) != 0 )
die( handle, "[Error] %s: header memcheck failed.\n", __func__ );
do_read_check( handle, buf, 7 );
if ( memcmp( buf, "tracing", 7 ) != 0 )
die( handle, "[Error] %s: failed to read tracing string.\n", __func__ );
version = read_string( handle );
if ( !version )
die( handle, "[Error] %s: failed to read version string.\n", __func__ );
free( version );
do_read_check( handle, buf, 1 );
handle->pevent = pevent_alloc();
if ( !handle->pevent )
die( handle, "[Error] %s: pevent_alloc failed.\n", __func__ );
handle->pevent->file_bigendian = buf[ 0 ];
handle->pevent->host_bigendian = tracecmd_host_bigendian();
do_read_check( handle, buf, 1 );
handle->long_size = buf[ 0 ];
handle->page_size = read4( handle );
handle->header_files_start = lseek64( handle->fd, 0, SEEK_CUR );
handle->total_file_size = lseek64( handle->fd, 0, SEEK_END );
handle->header_files_start = lseek64( handle->fd, handle->header_files_start, SEEK_SET );
}
/**
* tracecmd_alloc - create a tracecmd_input handle from a file name
* @file: the file name of the file that is of tracecmd data type.
*
* Allocate a tracecmd_input handle from a given file name and open the
* file. This tests if the file is of trace-cmd format and allocates
* a parse event descriptor.
*
* The returned pointer is not ready to be read yet. A tracecmd_read_headers()
* and tracecmd_init_data() still need to be called on the descriptor.
*/
static void tracecmd_alloc( tracecmd_input_t *handle, const char *file )
{
int fd;
fd = TEMP_FAILURE_RETRY( open( file, O_RDONLY ) );
if ( fd < 0 )
{
logf( "[Error] %s: open(\"%s\") failed: %d\n", __func__, file, errno );
return;
}
tracecmd_alloc_fd( handle, file, fd );
}
/**
* tracecmd_close - close and free the trace.dat handle
* @handle: input handle for the trace.dat file
*
* Close the file descriptor of the handle and frees
* the resources allocated by the handle.
*/
static void tracecmd_close( tracecmd_input_t *handle )
{
int cpu;
if ( !handle )
return;
if ( handle->ref <= 0 )
{
die( handle, "%s: bad ref count on handle.\n", __func__ );
return;
}
if ( --handle->ref )
return;
for ( cpu = 0; cpu < handle->cpus; cpu++ )
{
/* The tracecmd_peek_data may have cached a record */
free_next( handle, cpu );
free_page( handle, cpu );
if ( handle->cpu_data && handle->cpu_data[ cpu ].kbuf )
{
kbuffer_free( handle->cpu_data[ cpu ].kbuf );
// if ( !list_empty( &handle->cpu_data[ cpu ].pages ) )
if ( !handle->cpu_data[ cpu ].pages.empty() )
die( handle, "%s: pages still allocated on cpu %d\n", __func__, cpu );
}
}
if ( handle->fd >= 0 )
{
close( handle->fd );
handle->fd = -1;
}
delete [] handle->cpu_data;
if ( handle->flags & TRACECMD_FL_BUFFER_INSTANCE )
{
tracecmd_close( handle->parent );
}
else
{
/* Only main handle frees pevent */
pevent_free( handle->pevent );
}
delete handle;
}
static tracecmd_input_t *tracecmd_buffer_instance_handle( tracecmd_input_t *handle, int indx )
{
tracecmd_input_t *new_handle;
input_buffer_instance_t *buffer = &handle->buffers[ indx ];
size_t offset;
ssize_t ret;
if ( indx >= handle->nr_buffers )
return NULL;
/*
* We make a copy of the current handle, but we substitute
* the cpu data with the cpu data for this buffer.
*/
new_handle = new tracecmd_input_t;
*new_handle = *handle;
new_handle->cpu_data = NULL;
new_handle->nr_buffers = 0;
new_handle->buffers = NULL;
new_handle->ref = 1;
new_handle->parent = handle;
handle->ref++;
new_handle->fd = dup( handle->fd );
new_handle->flags |= TRACECMD_FL_BUFFER_INSTANCE;
/* Save where we currently are */
offset = lseek64( handle->fd, 0, SEEK_CUR );
ret = lseek64( handle->fd, buffer->offset, SEEK_SET );
if ( ret < 0 )
{
die( handle, "%s: could not seek to buffer %s offset %lu.\n",
__func__, buffer->name, buffer->offset );
}
read_cpu_data( new_handle );
ret = lseek64( handle->fd, offset, SEEK_SET );
if ( ret < 0 )
die( handle, "%s: could not seek to back to offset %ld\n", __func__, offset );
return new_handle;
}
static bool is_timestamp_in_us( char *trace_clock, bool use_trace_clock )
{
if ( !use_trace_clock )
return true;
// trace_clock information:
// https://www.kernel.org/doc/Documentation/trace/ftrace.txt
if ( !strcmp( trace_clock, "local" ) ||
!strcmp( trace_clock, "global" ) ||
!strcmp( trace_clock, "uptime" ) ||
!strcmp( trace_clock, "perf" ) )
return true;
/* trace_clock is set to a tsc or counter mode */
return false;
}
extern "C" void print_str_arg( struct trace_seq *s, void *data, int size,
struct event_format *event, const char *format,
int len_arg, struct print_arg *arg );
class trace_data_t
{
public:
trace_data_t( EventCallback &_cb, trace_info_t &_trace_info, StrPool &_strpool ) :
cb( _cb ), trace_info( _trace_info ), strpool( _strpool )
{
seqno_str = strpool.getstr( "seqno" );
crtc_str = strpool.getstr( "crtc" );
ip_str = strpool.getstr( "ip" );
parent_ip_str = strpool.getstr( "parent_ip" );
buf_str = strpool.getstr( "buf" );
ftrace_print_str = strpool.getstr( "ftrace-print" );
ftrace_function_str = strpool.getstr( "ftrace-function" );
drm_vblank_event_str = strpool.getstr( "drm_vblank_event" );
sched_switch_str = strpool.getstr( "sched_switch" );
time_str = strpool.getstr( "time" );
high_prec_str = strpool.getstr( "high_prec" );
}
public:
EventCallback &cb;
trace_info_t &trace_info;
StrPool &strpool;
const char *seqno_str;
const char *crtc_str;
const char *ip_str;
const char *parent_ip_str;
const char *buf_str;
const char *ftrace_print_str;
const char *ftrace_function_str;
const char *drm_vblank_event_str;
const char *sched_switch_str;
const char *time_str;
const char *high_prec_str;
};
static void init_event_flags( trace_data_t &trace_data, trace_event_t &event )
{
// Make sure our event type bits are cleared
event.flags &= ~( TRACE_FLAG_FENCE_SIGNALED |
TRACE_FLAG_FTRACE_PRINT |
TRACE_FLAG_VBLANK |
TRACE_FLAG_TIMELINE |
TRACE_FLAG_SW_QUEUE |
TRACE_FLAG_HW_QUEUE |
TRACE_FLAG_SCHED_SWITCH |
TRACE_FLAG_SCHED_SWITCH_TASK_RUNNING |
TRACE_FLAG_AUTOGEN_COLOR );
// fence_signaled was renamed to dma_fence_signaled post v4.9
if ( event.system == trace_data.ftrace_print_str )
event.flags |= TRACE_FLAG_FTRACE_PRINT;
else if ( event.name == trace_data.drm_vblank_event_str )
event.flags |= TRACE_FLAG_VBLANK;
else if ( event.name == trace_data.sched_switch_str )
event.flags |= TRACE_FLAG_SCHED_SWITCH;
else if ( strstr( event.name, "fence_signaled" ) )
event.flags |= TRACE_FLAG_FENCE_SIGNALED;
else if ( strstr( event.name, "amdgpu_cs_ioctl" ) )
event.flags |= TRACE_FLAG_SW_QUEUE;
else if ( strstr( event.name, "amdgpu_sched_run_job" ) )
event.flags |= TRACE_FLAG_HW_QUEUE;
}
static int trace_enum_events( trace_data_t &trace_data, tracecmd_input_t *handle, pevent_record_t *record )
{
int ret = 0;
event_format_t *event;
pevent_t *pevent = handle->pevent;
StrPool &strpool = trace_data.strpool;
event = pevent_find_event_by_record( pevent, record );
if ( event )
{
struct trace_seq seq;
trace_event_t trace_event;
struct format_field *format;
int pid = pevent_data_pid( pevent, record );
const char *comm = pevent_data_comm_from_pid( pevent, pid );
bool is_ftrace_function = !strcmp( "ftrace", event->system ) && !strcmp( "function", event->name );
bool is_printk_function = !strcmp( "ftrace", event->system ) && !strcmp( "print", event->name );
trace_seq_init( &seq );
trace_event.pid = pid;
trace_event.cpu = record->cpu;
trace_event.ts = record->ts;
trace_event.comm = strpool.getstrf( "%s-%u", comm, pid );
trace_event.system = strpool.getstr( event->system );
trace_event.name = strpool.getstr( event->name );
trace_event.user_comm = trace_event.comm;
// Get count of fields for this event.
uint32_t field_count = 0;
format = event->format.fields;
for ( ; format; format = format->next )
field_count++;
// Alloc space for our fields array.
trace_event.numfields = 0;
trace_event.fields = new event_field_t[ field_count ];
format = event->format.common_fields;
for ( ; format; format = format->next )
{
if ( !strcmp( format->name, "common_flags" ) )
{
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
// TRACE_FLAG_IRQS_OFF | TRACE_FLAG_HARDIRQ | TRACE_FLAG_SOFTIRQ
trace_event.flags = val;
break;
}
}
format = event->format.fields;
for ( ; format; format = format->next )
{
const char *format_name = strpool.getstr( format->name );
trace_seq_reset( &seq );
if ( is_printk_function && ( format_name == trace_data.buf_str ) )
{
struct print_arg *args = event->print_fmt.args;
// We are assuming print_fmt for ftrace/print function is:
// print fmt: "%ps: %s", (void *)REC->ip, REC->buf
if ( args->type != PRINT_FIELD )
args = args->next;
print_str_arg( &seq, record->data, record->size,
event, "%s", -1, args );
// pretty_print prints IP and print string (buf).
// pretty_print( &seq, record->data, record->size, event );
trace_event.system = trace_data.ftrace_print_str;
// Convert all LFs to spaces.
for ( unsigned int i = 0; i < seq.len; i++ )
{
if ( seq.buffer[ i ] == '\n' )
seq.buffer[ i ] = ' ';
}
}
else
{
pevent_print_field( &seq, record->data, format );
if ( format_name == trace_data.seqno_str )
{
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
trace_event.seqno = val;
}
else if ( format_name == trace_data.crtc_str )
{
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
trace_event.crtc = val;
}
else if ( trace_event.name == trace_data.drm_vblank_event_str &&
format_name == trace_data.time_str &&
!strcmp(handle->pevent->trace_clock, "mono"))
{
// for drm_vblank_event, if "time" field is available,
// and the trace-clock is monotonic, store the timestamp
// passed along with the vblank event
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
trace_event.vblank_ts = val - trace_data.trace_info.min_file_ts;
}
else if ( trace_event.name == trace_data.drm_vblank_event_str &&
format_name == trace_data.high_prec_str &&
!strcmp(handle->pevent->trace_clock, "mono"))
{
// for drm_vblank_event, if "high_prec" field is available,
// and the trace-clock is monotonic, store whether or not
// the passed timestamp is actually from a high-precision source
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
trace_event.vblank_ts_high_prec = val != 0;
}
else if ( is_ftrace_function )
{
bool is_ip = ( format_name == trace_data.ip_str );
if ( is_ip || ( format_name == trace_data.parent_ip_str ) )
{
unsigned long long val = pevent_read_number( pevent,
( char * )record->data + format->offset, format->size );
const char *func = pevent_find_function( pevent, val );
if ( func )
{
trace_seq_printf( &seq, " (%s)", func );
if ( is_ip )
{
// If this is a ftrace:function event, set the name
// to be the function name we just found.
trace_event.system = trace_data.ftrace_function_str;
trace_event.name = strpool.getstr( func );
}
}
}
}
}
// Trim trailing whitespace
while ( ( seq.len > 0 ) &&
isspace( (unsigned char)seq.buffer[ seq.len - 1 ] ) )
{
seq.len--;
}
trace_seq_terminate( &seq );
trace_event.fields[ trace_event.numfields ].key = format_name;
trace_event.fields[ trace_event.numfields ].value = strpool.getstr( seq.buffer, seq.len );
trace_event.numfields++;
}
init_event_flags( trace_data, trace_event );
ret = trace_data.cb( trace_event );
trace_seq_destroy( &seq );
}
return ret;
}
static pevent_record_t *get_next_record( file_info_t *file_info )
{
if ( file_info->record )
return file_info->record;
if ( file_info->done )
return NULL;
file_info->record = tracecmd_read_next_data( file_info->handle, NULL );
if ( !file_info->record )
file_info->done = 1;
return file_info->record;
}
static void add_file( std::vector< file_info_t * > &file_list, tracecmd_input_t *handle, const char *file )
{
file_info_t *item = ( file_info_t * )trace_malloc( handle, sizeof( *item ) );
memset( item, 0, sizeof( *item ) );
item->handle = handle;
handle->file = file;
file_list.push_back( item );
}
void tgid_info_t::add_pid( int pid )
{
auto idx = std::find( pids.begin(), pids.end(), pid );
if ( idx == pids.end() )
{
// Add pid to the tgid array of pids: main thread at start, others at back
if ( pid == tgid )
pids.insert( pids.begin(), pid );
else
pids.push_back( pid );
}
}
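// Helpers to pull "<var> <value>" fields out of a cpustats string
// (e.g. "entries: 1234"); getf64 converts a seconds value to nanoseconds.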
static int64_t geti64( const char *str, const char *var )
{
const char *val = strstr( str, var );
if ( val )
return strtoull( val + strlen( var ), NULL, 10 );
return 0;
}
static int64_t getf64( const char *str, const char *var )
{
const char *val = strstr( str, var );
if ( val )
return strtold( val + strlen( var ), NULL ) * NSECS_PER_SEC;
return 0;
}
int read_trace_file( const char *file, StrPool &strpool, trace_info_t &trace_info, EventCallback &cb )
{
GPUVIS_TRACE_BLOCK( __func__ );
tracecmd_input_t *handle;
std::vector< file_info_t * > file_list;
// Latest ts value at which any cpu's data starts
unsigned long long trim_ts = 0;
handle = new ( std::nothrow ) tracecmd_input_t;
if ( !handle )
{
logf( "[Error] %s: new tracecmd_input_t failed.\n", __func__ );
return -1;
}
if ( setjmp( handle->jump_buffer ) )
{
logf( "[Error] %s: setjmp error called for %s.\n", __func__, file );
tracecmd_close( handle );
return -1;
}
tracecmd_alloc( handle, file );
add_file( file_list, handle, file );
// Read header information from trace.dat file.
tracecmd_read_headers( handle );
// Prepare reading the data from trace.dat.
tracecmd_init_data( handle );
// We don't support reading latency trace files.
if ( handle->flags & TRACECMD_FL_LATENCY )
die( handle, "%s: Latency traces not supported.\n", __func__ );
/* Find the kernel_stacktrace if available */
// pevent = handle->pevent;
// event = pevent_find_event_by_name(pevent, "ftrace", "kernel_stack");
/* If this file has buffer instances, get the file_info for them */
for ( int i = 0; i < handle->nr_buffers; i++ )
{
tracecmd_input_t *new_handle;
const char *name = handle->buffers[ i ].name;
new_handle = tracecmd_buffer_instance_handle( handle, i );
if ( !new_handle )
die( handle, "%s: could not retrieve handle %s.\n", __func__, name );
add_file( file_list, new_handle, name );
}
trace_info.cpus = handle->cpus;
trace_info.file = handle->file;
trace_info.uname = handle->uname;
trace_info.opt_version = handle->opt_version;
trace_info.timestamp_in_us = is_timestamp_in_us( handle->pevent->trace_clock, handle->use_trace_clock );
// Explicitly add idle thread at pid 0
trace_info.pid_comm_map.get_val( 0, strpool.getstr( "<idle>" ) );
// Add comms for other pids
for ( cmdline_list *cmdlist = handle->pevent->cmdlist;
cmdlist;
cmdlist = cmdlist->next )
{
int pid = cmdlist->pid;
const char *comm = cmdlist->comm;
int tgid = pevent_data_tgid_from_pid( handle->pevent, pid );
// Add to our pid --> comm map
trace_info.pid_comm_map.get_val( pid, strpool.getstr( comm ) );
if ( tgid > 0 )
{
tgid_info_t *tgid_info = trace_info.tgid_pids.get_val_create( tgid );
if ( !tgid_info->tgid )
{
tgid_info->tgid = tgid;
tgid_info->hashval += hashstr32( comm );
}
tgid_info->add_pid( pid );
// Pid --> tgid
trace_info.pid_tgid_map.get_val( pid, tgid );
}
}
// Find the lowest ts value in the trace file
for ( size_t cpu = 0; cpu < ( size_t )handle->cpus; cpu++ )
{
pevent_record_t *record = tracecmd_peek_data( handle, cpu );
if ( record )
trace_info.min_file_ts = std::min< int64_t >( trace_info.min_file_ts, record->ts );
}
trace_info.cpu_info.resize( handle->cpus );
for ( size_t cpu = 0; cpu < ( size_t )handle->cpus; cpu++ )
{
cpu_info_t &cpu_info = trace_info.cpu_info[ cpu ];
pevent_record_t *record = tracecmd_peek_data( handle, cpu );
cpu_info.file_offset = handle->cpu_data[ cpu ].file_offset;
cpu_info.file_size = handle->cpu_data[ cpu ].file_size;
if ( cpu < handle->cpustats.size() )
{
const char *stats = handle->cpustats[ cpu ].c_str();
cpu_info.entries = geti64( stats, "entries:" );
cpu_info.overrun = geti64( stats, "overrun:" );
cpu_info.commit_overrun = geti64( stats, "commit overrun:" );
cpu_info.bytes = geti64( stats, "bytes:" );
cpu_info.oldest_event_ts = getf64( stats, "oldest event ts:" );
cpu_info.now_ts = getf64( stats, "now ts:" );
cpu_info.dropped_events = geti64( stats, "dropped events:" );
cpu_info.read_events = geti64( stats, "read events:" );
if ( cpu_info.oldest_event_ts )
cpu_info.oldest_event_ts -= trace_info.min_file_ts;
if ( cpu_info.now_ts )
cpu_info.now_ts -= trace_info.min_file_ts;
}
if ( record )
{
cpu_info.min_ts = record->ts - trace_info.min_file_ts;
if ( cpu_info.overrun && trace_info.trim_trace )
trim_ts = std::max< unsigned long long >( trim_ts, record->ts );
}
}
// Scoot to tracestart time if it was set
trim_ts = std::max< unsigned long long >( trim_ts, trace_info.min_file_ts + trace_info.m_tracestart );
trace_data_t trace_data( cb, trace_info, strpool );
for ( ;; )
{
int ret = 0;
file_info_t *last_file_info = NULL;
pevent_record_t *last_record = NULL;
for ( file_info_t *file_info : file_list )
{
pevent_record_t *record = get_next_record( file_info );
if ( !last_record ||
( record && record->ts < last_record->ts ) )
{
last_record = record;
last_file_info = file_info;
}
}
if ( last_record )
{
cpu_info_t &cpu_info = trace_info.cpu_info[ last_record->cpu ];
// Bump up total event count for this cpu
cpu_info.tot_events++;
// Store the max ts value we've seen for this cpu
cpu_info.max_ts = last_record->ts - trace_info.min_file_ts;
// If this ts is greater than our trim value, add it.
if ( last_record->ts >= trim_ts )
{
cpu_info.events++;
ret = trace_enum_events( trace_data, last_file_info->handle, last_record );
// Bail if user specified read length and we hit it
if ( trace_info.m_tracelen && ( last_record->ts - trim_ts > trace_info.m_tracelen ) )
last_record = NULL;
}
free_record( last_file_info->handle, last_file_info->record );
last_file_info->record = NULL;
}
if ( !last_record || ret )
break;
}
if ( trim_ts )
trace_info.trimmed_ts = trim_ts - trace_info.min_file_ts;
for ( file_info_t *file_info : file_list )
{
tracecmd_close( file_info->handle );
free( file_info );
}
file_list.clear();
return 0;
}
// Messages for Yiddish (ייִדיש)
// Exported from translatewiki.net
// Author: פוילישער
"CFBundleDisplayName" = "וויקיפעדיע";
<a name="1.0.3"></a>
## [1.0.3](https://github.com/darrachequesne/has-binary/compare/1.0.2...1.0.3) (2018-05-14)
### Bug Fixes
* avoid use of global ([#4](https://github.com/darrachequesne/has-binary/issues/4)) ([91aa21e](https://github.com/darrachequesne/has-binary/commit/91aa21e))
<a name="1.0.2"></a>
## [1.0.2](https://github.com/darrachequesne/has-binary/compare/1.0.1...1.0.2) (2017-04-27)
### Bug Fixes
* Fix Blob detection for iOS 8/9 ([2a7b25c](https://github.com/darrachequesne/has-binary/commit/2a7b25c))
<a name="1.0.1"></a>
## [1.0.1](https://github.com/darrachequesne/has-binary/compare/1.0.0...1.0.1) (2017-04-05)
<a name="1.0.0"></a>
# [1.0.0](https://github.com/darrachequesne/has-binary/compare/0.1.7...1.0.0) (2017-04-05)
### Bug Fixes
* do not call toJSON more than once ([#7](https://github.com/darrachequesne/has-binary/issues/7)) ([27165d2](https://github.com/darrachequesne/has-binary/commit/27165d2))
* Ensure globals are functions before running `instanceof` checks against them. ([#4](https://github.com/darrachequesne/has-binary/issues/4)) ([f9be9b3](https://github.com/darrachequesne/has-binary/commit/f9be9b3))
* fix the case when toJSON() returns a Buffer ([#6](https://github.com/darrachequesne/has-binary/issues/6)) ([518747d](https://github.com/darrachequesne/has-binary/commit/518747d))
### Performance Improvements
* Performance improvements ([#3](https://github.com/darrachequesne/has-binary/issues/3)) ([3e88e81](https://github.com/darrachequesne/has-binary/commit/3e88e81))
<a name="0.1.7"></a>
## [0.1.7](https://github.com/darrachequesne/has-binary/compare/0.1.6...0.1.7) (2015-11-19)
<a name="0.1.6"></a>
## [0.1.6](https://github.com/darrachequesne/has-binary/compare/0.1.5...0.1.6) (2015-01-24)
<a name="0.1.5"></a>
## 0.1.5 (2014-09-04)
open Lwt.Infix
open Sexplib
let read_data fd =
Lwt_unix.fstat fd >>= fun stats ->
let size = stats.Lwt_unix.st_size in
let buf = Bytes.create size in
let rec read start =
let len = size - start in
Lwt_unix.read fd buf start len >>= function
| x when x = len -> Lwt.return buf
| x -> read (start + x)
in
read 0
let read dir file =
Lwt.catch (fun () ->
Lwt_unix.access dir [ Unix.F_OK ; Unix.X_OK ] >>= fun () ->
let f = Filename.concat dir file in
Lwt_unix.access f [ Unix.F_OK ; Unix.R_OK ] >>= fun () ->
Lwt_unix.openfile f [Unix.O_RDONLY] 0 >>= fun fd ->
read_data fd >>= fun buf ->
Lwt_unix.close fd >|= fun () ->
Some (String.trim (Bytes.to_string buf)))
(fun _ ->
Lwt.catch
(fun () ->
Lwt_unix.access dir [ Unix.F_OK ] >|= fun () ->
Some dir)
(fun _ -> Lwt.return None) >>= function
| Some f ->
Lwt_unix.stat f >>= fun stat ->
if stat.Lwt_unix.st_kind = Lwt_unix.S_DIR then
Lwt.return None
else
Lwt.fail (Invalid_argument "given path is not a directory")
| None -> Lwt.return None )
let write_data fd data =
let rec write start =
let len = Bytes.length data - start in
Lwt_unix.write fd data start len >>= function
| n when n = len -> Lwt.return ()
| n -> write (start + n)
in
write 0
let rec ensure_create dir =
Lwt.catch
(fun () -> Lwt_unix.access dir [ Unix.F_OK ; Unix.X_OK ])
(fun _ ->
Lwt.catch
(fun () -> Lwt_unix.mkdir dir 0o700)
(fun _ ->
ensure_create (Filename.dirname dir) >>= fun () ->
ensure_create dir))
let open_append dir file =
ensure_create dir >>= fun () ->
let file = Filename.concat dir file in
Lwt_unix.openfile file Unix.([O_WRONLY ; O_APPEND; O_CREAT]) 0o600
let append dir file buf =
open_append dir file >>= fun fd ->
write_data fd buf >>= fun () ->
Lwt_unix.close fd
let delete file =
Lwt.catch (fun () ->
Lwt_unix.access file [ Unix.F_OK ; Unix.W_OK ] >>= fun () ->
Lwt_unix.unlink file)
(fun _ -> Lwt.return ())
let write dir filename buf =
ensure_create dir >>= fun () ->
let f = Filename.concat dir filename in
let file = f ^ ".tmp" in
delete file >>= fun () ->
Lwt_unix.openfile file [Unix.O_WRONLY ; Unix.O_EXCL ; Unix.O_CREAT] 0o600 >>= fun fd ->
write_data fd buf >>= fun () ->
Lwt_unix.close fd >>= fun () ->
Lwt_unix.rename file f >>= fun () ->
Lwt.return ()
let config = "config.sexp"
let colours = "colours.sexp"
let users = "users.sexp"
let maybe_create_dir dir =
Lwt.catch (fun () -> Lwt_unix.access dir [Unix.F_OK ; Unix.R_OK])
(fun _ -> Lwt_unix.mkdir dir 0o700)
let history = "histories"
let message_history_dir dir =
let name = Filename.concat dir history in
maybe_create_dir name >|= fun () ->
name
let user_dir dir =
let name = Filename.concat dir "users" in
maybe_create_dir name >|= fun () ->
name
let dump_config cfgdir cfg =
write cfgdir config (Xconfig.store_config cfg)
let load_config dsa cfg =
read cfg config >|= function
| Some x -> Some (Xconfig.load_config dsa x)
| None -> None
let load_colours cfg =
read cfg colours >|= function
| Some x -> Some (Sexplib.Conv.(list_of_sexp (pair_of_sexp User.chatkind_of_sexp string_of_sexp)) (Sexplib.Sexp.of_string x))
| None -> None
let dump_user cfgdir user =
user_dir cfgdir >>= fun userdir ->
let out = Xjid.bare_jid_to_string (Contact.bare user) in
match Contact.marshal user with
| None ->
let file = Filename.concat userdir out in
delete file
| Some sexp ->
let data = Bytes.of_string (Sexplib.Sexp.to_string_hum sexp) in
write userdir out data
let notify_user cfgdir =
let mvar = Lwt_mvar.create_empty () in
let rec loop () =
Lwt_mvar.take mvar >>= fun user ->
dump_user cfgdir user >>= fun () ->
loop ()
in
Lwt.async loop ;
mvar
let load_user_dir cfgdir users =
user_dir cfgdir >>= fun dir ->
Lwt_unix.opendir dir >>= fun dh ->
let rec loadone () =
Lwt.catch (fun () ->
Lwt_unix.readdir dh >>= fun f ->
if f = "." || f = ".." then
loadone ()
else
(read dir f >>= fun data ->
(match data with
| None -> Printf.printf "couldn't read file %s\n" f
| Some x -> match Contact.load x with
| None -> Printf.printf "something went wrong while loading %s/%s\n" dir f
| Some x -> Contact.replace_contact users x) ;
loadone ()))
(function
| End_of_file -> Lwt_unix.closedir dh
| e -> Printf.printf "problem while loading a user %s\n" (Printexc.to_string e) ; Lwt.return_unit)
in
loadone ()
let dump_history cfgdir buddy =
match Contact.marshal_history buddy with
| None -> Lwt.return_unit (* should remove if user.User.preserve_messages is not set *)
| Some sexp ->
message_history_dir cfgdir >>= fun history_dir ->
append history_dir (Xjid.bare_jid_to_string (Contact.bare buddy)) (Bytes.of_string sexp)
let dump_histories cfgdir users =
let users = Contact.fold (fun _ v acc -> v :: acc) users [] in
Lwt_list.iter_p (dump_history cfgdir) users
let load_users cfg =
let data = Contact.create () in
read cfg users >|= function
| Some x -> (try
let us = User.load_users x in
List.iter (Contact.replace_user data) us ;
data
with _ -> data)
| None -> data
let load_histories cfg contacts =
message_history_dir cfg >|= fun histo ->
let contact_list =
Contact.fold
(fun _ v acc -> Contact.load_history histo v :: acc)
contacts
[]
in
List.iter (Contact.replace_contact contacts) contact_list
let pass_file = "password"
let dump_password cfgdir password =
write cfgdir pass_file password
let load_password cfgdir =
read cfgdir pass_file
let otr_dsa = "otr_dsa.sexp"
let dump_dsa cfgdir dsa =
write cfgdir otr_dsa (Bytes.of_string (Sexp.to_string_hum (Mirage_crypto_pk.Dsa.sexp_of_priv dsa)))
let load_dsa cfgdir =
read cfgdir otr_dsa >|= function
| None -> None
| Some x -> Some (Mirage_crypto_pk.Dsa.priv_of_sexp (Sexp.of_string x))
/*!
* humanize-byte - index.js
* Copyright(c) 2014 dead_horse <[email protected]>
* MIT Licensed
*/
'use strict';
/**
* Module dependencies.
*/
var bytes = require('bytes');
module.exports = function (size) {
if (typeof size === 'number') {
return size;
}
return bytes(size);
};
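// --- Usage sketch (illustrative, not part of the original module) ---
// Assumes the module above is installed as `humanize-byte`: numbers pass
// through unchanged, while strings such as '1kb' are parsed by the `bytes`
// package into a byte count.
var humanize = require('humanize-byte');
console.log(humanize(1024));  // 1024
console.log(humanize('1kb')); // 1024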
/*******************************************************************************
* Copyright 2012-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
* Licensed under the Apache License, Version 2.0 (the "License"). You may not use
* this file except in compliance with the License. A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file.
* This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
* *****************************************************************************
*
* AWS Tools for Windows (TM) PowerShell (TM)
*
*/
using System;
using System.Collections.Generic;
using System.Linq;
using System.Management.Automation;
using System.Text;
using Amazon.PowerShell.Common;
using Amazon.Runtime;
using Amazon.Redshift;
using Amazon.Redshift.Model;
namespace Amazon.PowerShell.Cmdlets.RS
{
/// <summary>
/// Allows you to purchase reserved nodes. Amazon Redshift offers a predefined set of
/// reserved node offerings. You can purchase one or more of the offerings. You can call
/// the <a>DescribeReservedNodeOfferings</a> API to obtain the available reserved node
/// offerings. You can call this API by providing a specific reserved node offering and
/// the number of nodes you want to reserve.
///
///
/// <para>
/// For more information about reserved node offerings, go to <a href="https://docs.aws.amazon.com/redshift/latest/mgmt/purchase-reserved-node-instance.html">Purchasing
/// Reserved Nodes</a> in the <i>Amazon Redshift Cluster Management Guide</i>.
/// </para>
/// </summary>
[Cmdlet("Request", "RSReservedNodeOffering", SupportsShouldProcess = true, ConfirmImpact = ConfirmImpact.Medium)]
[OutputType("Amazon.Redshift.Model.ReservedNode")]
[AWSCmdlet("Calls the Amazon Redshift PurchaseReservedNodeOffering API operation.", Operation = new[] {"PurchaseReservedNodeOffering"}, SelectReturnType = typeof(Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse))]
[AWSCmdletOutput("Amazon.Redshift.Model.ReservedNode or Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse",
"This cmdlet returns an Amazon.Redshift.Model.ReservedNode object.",
"The service call response (type Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse) can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack."
)]
public partial class RequestRSReservedNodeOfferingCmdlet : AmazonRedshiftClientCmdlet, IExecutor
{
#region Parameter NodeCount
/// <summary>
/// <para>
/// <para>The number of reserved nodes that you want to purchase.</para><para>Default: <code>1</code></para>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(Position = 1, ValueFromPipelineByPropertyName = true)]
public System.Int32? NodeCount { get; set; }
#endregion
#region Parameter ReservedNodeOfferingId
/// <summary>
/// <para>
/// <para>The unique identifier of the reserved node offering you want to purchase.</para>
/// </para>
/// </summary>
#if !MODULAR
[System.Management.Automation.Parameter(Position = 0, ValueFromPipelineByPropertyName = true, ValueFromPipeline = true)]
#else
[System.Management.Automation.Parameter(Position = 0, ValueFromPipelineByPropertyName = true, ValueFromPipeline = true, Mandatory = true)]
[System.Management.Automation.AllowEmptyString]
[System.Management.Automation.AllowNull]
#endif
[Amazon.PowerShell.Common.AWSRequiredParameter]
public System.String ReservedNodeOfferingId { get; set; }
#endregion
#region Parameter Select
/// <summary>
/// Use the -Select parameter to control the cmdlet output. The default value is 'ReservedNode'.
/// Specifying -Select '*' will result in the cmdlet returning the whole service response (Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse).
/// Specifying the name of a property of type Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse will result in that property being returned.
/// Specifying -Select '^ParameterName' will result in the cmdlet returning the selected cmdlet parameter value.
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public string Select { get; set; } = "ReservedNode";
#endregion
#region Parameter PassThru
/// <summary>
/// Changes the cmdlet behavior to return the value passed to the ReservedNodeOfferingId parameter.
/// The -PassThru parameter is deprecated, use -Select '^ReservedNodeOfferingId' instead. This parameter will be removed in a future version.
/// </summary>
[System.Obsolete("The -PassThru parameter is deprecated, use -Select '^ReservedNodeOfferingId' instead. This parameter will be removed in a future version.")]
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public SwitchParameter PassThru { get; set; }
#endregion
#region Parameter Force
/// <summary>
/// This parameter overrides confirmation prompts to force
/// the cmdlet to continue its operation. This parameter should always
/// be used with caution.
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
public SwitchParameter Force { get; set; }
#endregion
protected override void ProcessRecord()
{
base.ProcessRecord();
var resourceIdentifiersText = FormatParameterValuesForConfirmationMsg(nameof(this.ReservedNodeOfferingId), MyInvocation.BoundParameters);
if (!ConfirmShouldProceed(this.Force.IsPresent, resourceIdentifiersText, "Request-RSReservedNodeOffering (PurchaseReservedNodeOffering)"))
{
return;
}
var context = new CmdletContext();
// allow for manipulation of parameters prior to loading into context
PreExecutionContextLoad(context);
#pragma warning disable CS0618, CS0612 //A class member was marked with the Obsolete attribute
if (ParameterWasBound(nameof(this.Select)))
{
context.Select = CreateSelectDelegate<Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse, RequestRSReservedNodeOfferingCmdlet>(Select) ??
throw new System.ArgumentException("Invalid value for -Select parameter.", nameof(this.Select));
if (this.PassThru.IsPresent)
{
throw new System.ArgumentException("-PassThru cannot be used when -Select is specified.", nameof(this.Select));
}
}
else if (this.PassThru.IsPresent)
{
context.Select = (response, cmdlet) => this.ReservedNodeOfferingId;
}
#pragma warning restore CS0618, CS0612 //A class member was marked with the Obsolete attribute
context.NodeCount = this.NodeCount;
context.ReservedNodeOfferingId = this.ReservedNodeOfferingId;
#if MODULAR
if (this.ReservedNodeOfferingId == null && ParameterWasBound(nameof(this.ReservedNodeOfferingId)))
{
WriteWarning("You are passing $null as a value for parameter ReservedNodeOfferingId which is marked as required. In case you believe this parameter was incorrectly marked as required, report this by opening an issue at https://github.com/aws/aws-tools-for-powershell/issues.");
}
#endif
// allow further manipulation of loaded context prior to processing
PostExecutionContextLoad(context);
var output = Execute(context) as CmdletOutput;
ProcessOutput(output);
}
#region IExecutor Members
public object Execute(ExecutorContext context)
{
var cmdletContext = context as CmdletContext;
// create request
var request = new Amazon.Redshift.Model.PurchaseReservedNodeOfferingRequest();
if (cmdletContext.NodeCount != null)
{
request.NodeCount = cmdletContext.NodeCount.Value;
}
if (cmdletContext.ReservedNodeOfferingId != null)
{
request.ReservedNodeOfferingId = cmdletContext.ReservedNodeOfferingId;
}
CmdletOutput output;
// issue call
var client = Client ?? CreateClient(_CurrentCredentials, _RegionEndpoint);
try
{
var response = CallAWSServiceOperation(client, request);
object pipelineOutput = null;
pipelineOutput = cmdletContext.Select(response, this);
output = new CmdletOutput
{
PipelineOutput = pipelineOutput,
ServiceResponse = response
};
}
catch (Exception e)
{
output = new CmdletOutput { ErrorResponse = e };
}
return output;
}
public ExecutorContext CreateContext()
{
return new CmdletContext();
}
#endregion
#region AWS Service Operation Call
private Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse CallAWSServiceOperation(IAmazonRedshift client, Amazon.Redshift.Model.PurchaseReservedNodeOfferingRequest request)
{
Utils.Common.WriteVerboseEndpointMessage(this, client.Config, "Amazon Redshift", "PurchaseReservedNodeOffering");
try
{
#if DESKTOP
return client.PurchaseReservedNodeOffering(request);
#elif CORECLR
return client.PurchaseReservedNodeOfferingAsync(request).GetAwaiter().GetResult();
#else
#error "Unknown build edition"
#endif
}
catch (AmazonServiceException exc)
{
var webException = exc.InnerException as System.Net.WebException;
if (webException != null)
{
throw new Exception(Utils.Common.FormatNameResolutionFailureMessage(client.Config, webException.Message), webException);
}
throw;
}
}
#endregion
internal partial class CmdletContext : ExecutorContext
{
public System.Int32? NodeCount { get; set; }
public System.String ReservedNodeOfferingId { get; set; }
public System.Func<Amazon.Redshift.Model.PurchaseReservedNodeOfferingResponse, RequestRSReservedNodeOfferingCmdlet, object> Select { get; set; } =
(response, cmdlet) => response.ReservedNode;
}
}
}
/**
* oEmbed Plugin plugin
* Licensed under the MIT license
* jQuery Embed Plugin: http://code.google.com/p/jquery-oembed/ (MIT License)
* Plugin for: http://ckeditor.com/license (GPL/LGPL/MPL: http://ckeditor.com/license)
*/
(function() {
CKEDITOR.plugins.add('oembed', {
icons: 'oembed',
hidpi: true,
requires: 'widget,dialog',
lang: 'de,en,fr,nl,pl,pt-br,ru,tr', // %REMOVE_LINE_CORE%
version: 1.17,
init: function(editor) {
// Load jquery?
loadjQueryLibaries();
CKEDITOR.tools.extend(CKEDITOR.editor.prototype, {
oEmbed: function(url, maxWidth, maxHeight, responsiveResize) {
if (url.length < 1 || url.indexOf('http') < 0) {
alert(editor.lang.oembed.invalidUrl);
return false;
}
function embed() {
if (maxWidth == null || maxWidth == 'undefined') {
maxWidth = null;
}
if (maxHeight == null || maxHeight == 'undefined') {
maxHeight = null;
}
if (responsiveResize == null || responsiveResize == 'undefined') {
responsiveResize = false;
}
embedCode(url, editor, false, maxWidth, maxHeight, responsiveResize);
}
if (typeof(jQuery.fn.oembed) === 'undefined') {
CKEDITOR.scriptLoader.load(CKEDITOR.getUrl(CKEDITOR.plugins.getPath('oembed') + 'libs/jquery.oembed.min.js'), function() {
embed();
});
} else {
embed();
}
return true;
}
});
editor.widgets.add('oembed', {
draggable: false,
mask: true,
dialog: 'oembed',
allowedContent: {
div: {
styles: 'text-align,float',
attributes: '*',
classes: editor.config.oembed_WrapperClass != null ? editor.config.oembed_WrapperClass : "embeddedContent"
},
'div(embeddedContent,oembed-provider-*) iframe': {
attributes: '*'
},
'div(embeddedContent,oembed-provider-*) blockquote': {
attributes: '*'
},
'div(embeddedContent,oembed-provider-*) script': {
attributes: '*'
}
},
template:
'<div class="' + (editor.config.oembed_WrapperClass != null ? editor.config.oembed_WrapperClass : "embeddedContent") + '">' +
'</div>',
upcast: function(element) {
return element.name == 'div' && element.hasClass(editor.config.oembed_WrapperClass != null ? editor.config.oembed_WrapperClass : "embeddedContent");
},
init: function() {
var data = {
oembed: this.element.data('oembed') || '',
resizeType: this.element.data('resizeType') || 'noresize',
maxWidth: this.element.data('maxWidth') || 560,
maxHeight: this.element.data('maxHeight') || 315,
align: this.element.data('align') || 'none',
oembed_provider: this.element.data('oembed_provider') || ''
};
this.setData(data);
this.element.addClass('oembed-provider-' + data.oembed_provider);
this.on('dialog', function(evt) {
evt.data.widget = this;
}, this);
}
});
editor.ui.addButton('oembed', {
label: editor.lang.oembed.button,
command: 'oembed',
toolbar: 'insert,10',
icon: this.path + "icons/" + (CKEDITOR.env.hidpi ? "hidpi/" : "") + "oembed.png"
});
var resizeTypeChanged = function() {
var dialog = this.getDialog(),
resizetype = this.getValue(),
maxSizeBox = dialog.getContentElement('general', 'maxSizeBox').getElement(),
sizeBox = dialog.getContentElement('general', 'sizeBox').getElement();
if (resizetype == 'noresize') {
maxSizeBox.hide();
sizeBox.hide();
} else if (resizetype == "custom") {
maxSizeBox.hide();
sizeBox.show();
} else {
maxSizeBox.show();
sizeBox.hide();
}
};
String.prototype.beginsWith = function(string) {
return (this.indexOf(string) === 0);
};
function loadjQueryLibaries() {
if (typeof(jQuery) === 'undefined') {
CKEDITOR.scriptLoader.load('//ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js', function() {
jQuery.noConflict();
if (typeof(jQuery.fn.oembed) === 'undefined') {
CKEDITOR.scriptLoader.load(
CKEDITOR.getUrl(CKEDITOR.plugins.getPath('oembed') + 'libs/jquery.oembed.min.js')
);
}
});
} else if (typeof(jQuery.fn.oembed) === 'undefined') {
CKEDITOR.scriptLoader.load(CKEDITOR.getUrl(CKEDITOR.plugins.getPath('oembed') + 'libs/jquery.oembed.min.js'));
}
}
function embedCode(url, instance, maxWidth, maxHeight, responsiveResize, resizeType, align, widget) {
jQuery('body').oembed(url, {
onEmbed: function(e) {
var elementAdded = false,
provider = jQuery.fn.oembed.getOEmbedProvider(url);
widget.element.data('resizeType', resizeType);
if (resizeType == "responsive" || resizeType == "custom") {
widget.element.data('maxWidth', maxWidth);
widget.element.data('maxHeight', maxHeight);
}
widget.element.data('align', align);
// TODO handle align
if (align == 'center') {
if (!widget.inline)
widget.element.setStyle('text-align', 'center');
widget.element.removeStyle('float');
} else {
if (!widget.inline)
widget.element.removeStyle('text-align');
if (align == 'none')
widget.element.removeStyle('float');
else
widget.element.setStyle('float', align);
}
if (typeof e.code === 'string') {
if (widget.element.$.firstChild) {
widget.element.$.removeChild(widget.element.$.firstChild);
}
widget.element.appendHtml(e.code);
widget.element.data('oembed', url);
widget.element.data('oembed_provider', provider.name);
widget.element.addClass('oembed-provider-' + provider.name);
elementAdded = true;
} else if (typeof e.code[0].outerHTML === 'string') {
if (widget.element.$.firstChild) {
widget.element.$.removeChild(widget.element.$.firstChild);
}
widget.element.appendHtml(e.code[0].outerHTML);
widget.element.data('oembed', url);
widget.element.data('oembed_provider', provider.name);
widget.element.addClass('oembed-provider-' + provider.name);
elementAdded = true;
} else {
alert(editor.lang.oembed.noEmbedCode);
}
},
onError: function(externalUrl) {
if (externalUrl.indexOf("vimeo.com") > 0) {
alert(editor.lang.oembed.noVimeo);
} else {
alert(editor.lang.oembed.Error);
}
},
maxHeight: maxHeight,
maxWidth: maxWidth,
useResponsiveResize: responsiveResize,
embedMethod: 'editor'
});
}
CKEDITOR.dialog.add('oembed', function(editor) {
return {
title: editor.lang.oembed.title,
minWidth: CKEDITOR.env.ie && CKEDITOR.env.quirks ? 568 : 550,
minHeight: 155,
onShow: function() {
var data = {
oembed: this.widget.element.data('oembed') || '',
resizeType: this.widget.element.data('resizeType') || 'noresize',
maxWidth: this.widget.element.data('maxWidth'),
maxHeight: this.widget.element.data('maxHeight'),
align: this.widget.element.data('align') || 'none'
};
this.widget.setData(data);
this.getContentElement('general', 'resizeType').setValue(data.resizeType);
this.getContentElement('general', 'align').setValue(data.align);
var resizetype = this.getContentElement('general', 'resizeType').getValue(),
maxSizeBox = this.getContentElement('general', 'maxSizeBox').getElement(),
sizeBox = this.getContentElement('general', 'sizeBox').getElement();
if (resizetype == 'noresize') {
maxSizeBox.hide();
sizeBox.hide();
} else if (resizetype == "custom") {
maxSizeBox.hide();
sizeBox.show();
} else {
maxSizeBox.show();
sizeBox.hide();
}
},
onOk: function() {
},
contents: [
{
label: editor.lang.common.generalTab,
id: 'general',
elements: [
{
type: 'html',
id: 'oembedHeader',
html: '<div style="white-space:normal;width:500px;padding-bottom:10px">' + editor.lang.oembed.pasteUrl + '</div>'
}, {
type: 'text',
id: 'embedCode',
focus: function() {
this.getElement().focus();
},
label: editor.lang.oembed.url,
title: editor.lang.oembed.pasteUrl,
setup: function(widget) {
if (widget.data.oembed) {
this.setValue(widget.data.oembed);
}
},
commit: function(widget) {
var dialog = CKEDITOR.dialog.getCurrent(),
inputCode = dialog.getValueOf('general', 'embedCode').replace(/\s/g, ""),
resizeType = dialog.getContentElement('general', 'resizeType').
getValue(),
align = dialog.getContentElement('general', 'align').
getValue(),
maxWidth = null,
maxHeight = null,
responsiveResize = false,
editorInstance = dialog.getParentEditor();
if (inputCode.length < 1 || inputCode.indexOf('http') < 0) {
alert(editor.lang.oembed.invalidUrl);
return false;
}
if (resizeType == "noresize") {
responsiveResize = false;
maxWidth = null;
maxHeight = null;
} else if (resizeType == "responsive") {
maxWidth = dialog.getContentElement('general', 'maxWidth').
getInputElement().
getValue();
maxHeight = dialog.getContentElement('general', 'maxHeight').
getInputElement().
getValue();
responsiveResize = true;
} else if (resizeType == "custom") {
maxWidth = dialog.getContentElement('general', 'width').
getInputElement().
getValue();
maxHeight = dialog.getContentElement('general', 'height').
getInputElement().
getValue();
responsiveResize = false;
}
embedCode(inputCode, editorInstance, maxWidth, maxHeight, responsiveResize, resizeType, align, widget);
widget.setData('oembed', inputCode);
widget.setData('resizeType', resizeType);
widget.setData('align', align);
widget.setData('maxWidth', maxWidth);
widget.setData('maxHeight', maxHeight);
}
}, {
type: 'hbox',
widths: ['50%', '50%'],
children: [
{
id: 'resizeType',
type: 'select',
label: editor.lang.oembed.resizeType,
'default': 'noresize',
setup: function(widget) {
if (widget.data.resizeType) {
this.setValue(widget.data.resizeType);
}
},
items: [
[editor.lang.oembed.noresize, 'noresize'],
[editor.lang.oembed.responsive, 'responsive'],
[editor.lang.oembed.custom, 'custom']
],
onChange: resizeTypeChanged
}, {
type: 'hbox',
id: 'maxSizeBox',
widths: ['120px', '120px'],
style: 'float:left;position:absolute;left:58%;width:200px',
children: [
{
type: 'text',
width: '100px',
id: 'maxWidth',
'default': editor.config.oembed_maxWidth != null ? editor.config.oembed_maxWidth : '560',
label: editor.lang.oembed.maxWidth,
title: editor.lang.oembed.maxWidthTitle,
setup: function(widget) {
if (widget.data.maxWidth) {
this.setValue(widget.data.maxWidth);
}
}
}, {
type: 'text',
id: 'maxHeight',
width: '120px',
'default': editor.config.oembed_maxHeight != null ? editor.config.oembed_maxHeight : '315',
label: editor.lang.oembed.maxHeight,
title: editor.lang.oembed.maxHeightTitle,
setup: function(widget) {
if (widget.data.maxHeight) {
this.setValue(widget.data.maxHeight);
}
}
}
]
}, {
type: 'hbox',
id: 'sizeBox',
widths: ['120px', '120px'],
style: 'float:left;position:absolute;left:58%;width:200px',
children: [
{
type: 'text',
id: 'width',
width: '100px',
'default': editor.config.oembed_maxWidth != null ? editor.config.oembed_maxWidth : '560',
label: editor.lang.oembed.width,
title: editor.lang.oembed.widthTitle,
setup: function(widget) {
if (widget.data.maxWidth) {
this.setValue(widget.data.maxWidth);
}
}
}, {
type: 'text',
id: 'height',
width: '120px',
'default': editor.config.oembed_maxHeight != null ? editor.config.oembed_maxHeight : '315',
label: editor.lang.oembed.height,
title: editor.lang.oembed.heightTitle,
setup: function(widget) {
if (widget.data.maxHeight) {
this.setValue(widget.data.maxHeight);
}
}
}
]
}
]
}, {
type: 'hbox',
id: 'alignment',
children: [
{
id: 'align',
type: 'radio',
items: [
[editor.lang.oembed.none, 'none'],
[editor.lang.common.alignLeft, 'left'],
[editor.lang.common.alignCenter, 'center'],
[editor.lang.common.alignRight, 'right']
],
label: editor.lang.common.align,
setup: function(widget) {
this.setValue(widget.data.align);
}
}
]
}
]
}
]
};
});
}
});
}
)();
let uid = 0
/**
* A dep is an observable that can have multiple
* directives subscribing to it.
*
* @constructor
*/
export default function Dep () {
this.id = uid++
this.subs = []
}
// the current target watcher being evaluated.
// this is globally unique because there could be only one
// watcher being evaluated at any time.
Dep.target = null
/**
* Add a directive subscriber.
*
* @param {Directive} sub
*/
Dep.prototype.addSub = function (sub) {
this.subs.push(sub)
}
/**
* Remove a directive subscriber.
*
* @param {Directive} sub
*/
Dep.prototype.removeSub = function (sub) {
this.subs.$remove(sub)
}
/**
* Add self as a dependency to the target watcher.
*/
Dep.prototype.depend = function () {
Dep.target.addDep(this)
}
/**
* Notify all subscribers of a new value.
*/
Dep.prototype.notify = function () {
// stabilize the subscriber list first
var subs = this.subs
for (var i = 0, l = subs.length; i < l; i++) {
subs[i].update()
}
}
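// --- Usage sketch (illustrative, not part of the original module) ---
// Assumes a hypothetical watcher object exposing addDep()/update(), which is
// how a watcher would normally talk to Dep: set Dep.target while evaluating,
// call depend() on the deps that are touched, then notify() on change.
import Dep from './dep' // path is an assumption

const dep = new Dep()
const watcher = {
  addDep (d) { d.addSub(this) }, // register this watcher as a subscriber
  update () { console.log('dependency changed, re-evaluate watcher') }
}

Dep.target = watcher // pretend this watcher is currently being evaluated
dep.depend()         // records the current target as a subscriber
Dep.target = null
dep.notify()         // calls update() on every subscriber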
package container
import (
"errors"
"fmt"
"path/filepath"
"github.com/docker/swarmkit/api"
)
func validateMounts(mounts []api.Mount) error {
for _, mount := range mounts {
// Target must always be absolute
if !filepath.IsAbs(mount.Target) {
return fmt.Errorf("invalid mount target, must be an absolute path: %s", mount.Target)
}
switch mount.Type {
// The checks on abs paths are required due to the container API confusing
// volume mounts as bind mounts when the source is absolute (and vice-versa)
// See #25253
// TODO: This is probably not necessary once #22373 is merged
case api.MountTypeBind:
if !filepath.IsAbs(mount.Source) {
return fmt.Errorf("invalid bind mount source, must be an absolute path: %s", mount.Source)
}
case api.MountTypeVolume:
if filepath.IsAbs(mount.Source) {
return fmt.Errorf("invalid volume mount source, must not be an absolute path: %s", mount.Source)
}
case api.MountTypeTmpfs:
if mount.Source != "" {
return errors.New("invalid tmpfs source, source must be empty")
}
default:
return fmt.Errorf("invalid mount type: %s", mount.Type)
}
}
return nil
}
// numeral.js locale configuration
// locale : Polish (pl)
// author : Dominik Bulaj : https://github.com/dominikbulaj
(function (global, factory) {
if (typeof define === 'function' && define.amd) {
define(['../numeral'], factory);
} else if (typeof module === 'object' && module.exports) {
factory(require('../numeral'));
} else {
factory(global.numeral);
}
}(this, function (numeral) {
numeral.register('locale', 'pl', {
delimiters: {
thousands: ' ',
decimal: ','
},
abbreviations: {
thousand: 'tys.',
million: 'mln',
billion: 'mld',
trillion: 'bln'
},
ordinal: function (number) {
return '.';
},
currency: {
symbol: 'PLN'
}
});
}));
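// --- Usage sketch (illustrative, not part of the original locale file) ---
// Assumes numeral.js 2.x with the locale above loaded (for example via
// require('numeral/locales/pl')); switching to it drives the delimiters and
// abbreviations defined above.
var numeral = require('numeral');
require('numeral/locales/pl');
numeral.locale('pl');
console.log(numeral(1500000).format('0.0 a')); // expected: "1,5 mln"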
/**
* PS Move API - An interface for the PS Move Motion Controller
* Copyright (c) 2012 Thomas Perl <[email protected]>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
**/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "psmove.h"
int
main(int argc, char* argv[])
{
PSMove *move;
int i;
int count;
if ((count = psmove_count_connected()) < 1) {
fprintf(stderr, "No controllers connected.\n");
return EXIT_FAILURE;
}
for (i=0; i<count; i++) {
move = psmove_connect_by_id(i);
if (move == NULL) {
fprintf(stderr, "Could not connect to controller #%d.\n", i);
continue;
}
psmove_dump_calibration(move);
psmove_disconnect(move);
}
return EXIT_SUCCESS;
}
(function () {
var mainApp = angular.module("mainApp", ["ui.router", "LocalStorageModule"]);
mainApp.config(["$stateProvider", "$urlRouterProvider", "$locationProvider",
function ($stateProvider, $urlRouterProvider, $locationProvider) {
$urlRouterProvider.otherwise("/unauthorized");
$stateProvider
.state("authorized", {
url: "/authorized",
templateUrl: "/templates/authorized.html",
controller: "AuthorizeController"
})
.state("forbidden", { url: "/forbidden", templateUrl: "/templates/forbidden.html" })
.state("unauthorized", { url: "/unauthorized", templateUrl: "/templates/unauthorized.html" })
.state("logoff", { url: "/logoff", templateUrl: "/templates/unauthorized.html", controller: "LogoffController" })
.state("details", {
url: "/details/:id",
templateUrl: "/templates/details.html",
controller: "DetailsController",
resolve: {
DataEventRecordsService: "DataEventRecordsService",
dataEventRecord: [
"DataEventRecordsService", "$stateParams", function (DataEventRecordsService, $stateParams) {
var id = $stateParams.id;
console.log($stateParams.id);
return DataEventRecordsService.GetDataEventRecord({ id: id });
}
]
}
})
.state("overviewindex", {
url: "/overviewindex",
templateUrl: "/templates/overviewindex.html",
controller: "OverviewController",
resolve: {
DataEventRecordsService: "DataEventRecordsService",
dataEventRecords: [
"DataEventRecordsService", function (DataEventRecordsService) {
return DataEventRecordsService.GetDataEventRecords();
}
]
}
})
.state("create", {
url: "/create",
templateUrl: "/templates/create.html",
controller: "DetailsController",
resolve: {
dataEventRecord: [
function () {
return { Id: "", Name: "", Description: "", Timestamp: "2016-02-15T08:57:32" };
}
]
}
})
.state("reload", {
url: "/reload/:destinationState",
controller: ["$state", "$stateParams", function ($state, $stateParams) {
if ($stateParams.destinationState) {
$state.go($stateParams.destinationState);
}
else {
$state.go("overviewindex");
}
}]
});
$locationProvider.html5Mode(true);
}
]
);
mainApp.run(["$rootScope", function ($rootScope) {
$rootScope.$on('$stateChangeError', function(event, toState, toParams, fromState, fromParams, error) {
console.log(event);
console.log(toState);
console.log(toParams);
console.log(fromState);
console.log(fromParams);
console.log(error);
});
$rootScope.$on('$stateNotFound', function(event, unfoundState, fromState, fromParams) {
console.log(event);
console.log(unfoundState);
console.log(fromState);
console.log(fromParams);
});
}]);
})();
# Locally computed
sha256 dd254beb0bafc695d0f62ae1a222ff85b52dbaa3a16f76e781dce22d0d20a4a6 3.3.4.tar.bz2
sha256 4f877e5ae4672568ef82cfd0023e2cef4a7cf55d867ab249efc9569a7eb9e5b1 COPYING.BSD
sha256 8ceb4b9ee5adedde47b31e975c1d90c73ad27b6b165a1dcd80c7c545eb65b903 COPYING.GPL
sha256 dc626520dcd53a22f727af3ee42c770e56c97a64fe3adb063799d8ab032fe551 COPYING.LGPL
sha256 f5b330efdad110cdd84d585ec61220b0650461fa599e36b13e1726c9346dcfb9 COPYING.MINPACK
sha256 fab3dd6bdab226f1c08630b1dd917e11fcb4ec5e1e020e2c16f83a0a13863e85 COPYING.MPL2
sha256 c83230b770f17ef1386ea1fd3681271dd98aa93646bdbfb5bff3a1b7050fff9d COPYING.README
/*-----------------------------------------------------------------------------+
Copyright (c) 2010-2010: Joachim Faulhaber
+------------------------------------------------------------------------------+
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENCE.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
+-----------------------------------------------------------------------------*/
#ifndef BOOST_ICL_TYPE_TRAITS_PREDICATE_HPP_JOFA_101102
#define BOOST_ICL_TYPE_TRAITS_PREDICATE_HPP_JOFA_101102
#include <functional>
namespace boost{namespace icl
{
// naming convention
// predicate: n-ary predicate
// property: unary predicate
// relation: binary predicate
// Unary predicates
template <class Type>
class property : public std::unary_function<Type,bool>{};
template <class Type>
class member_property : public property<Type>
{
public:
member_property( bool(Type::* pred)()const ): property<Type>(), m_pred(pred){}
bool operator()(const Type& x)const { return (x.*m_pred)(); }
private:
bool(Type::* m_pred)()const;
} ;
// Binary predicates: relations
template <class LeftT, class RightT>
class relation : public std::binary_function<LeftT,RightT,bool>{};
}} // namespace icl boost
#endif
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.deploy.k8s.submit
import io.fabric8.kubernetes.client.KubernetesClient
import org.apache.spark.SparkConf
import org.apache.spark.deploy.k8s._
import org.apache.spark.internal.config.ConfigEntry
class KubernetesDriverBuilderSuite extends PodBuilderSuite {
override protected def templateFileConf: ConfigEntry[_] = {
Config.KUBERNETES_DRIVER_PODTEMPLATE_FILE
}
override protected def buildPod(sparkConf: SparkConf, client: KubernetesClient): SparkPod = {
val conf = KubernetesTestConf.createDriverConf(sparkConf = sparkConf)
new KubernetesDriverBuilder().buildFromFeatures(conf, client).pod
}
}
/**
* Copyright (C) 2014-2017 Xavier Witdouck
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.zavtech.morpheus.util;
/**
* A utility class that can be used to make various assertions to trigger exceptions early.
*
* <p>This is open source software released under the <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache 2.0 License</a></p>
*
* @author Xavier Witdouck
*/
public class Asserts {
/**
* Checks an expression is true otherwise throws an exception
* @param expression the expression to check is true
* @param message the exception message if expression is false
* @throws AssertException if expression is false
*/
public static void check(boolean expression, String message) {
if (!expression) {
throw new AssertException(message);
}
}
/**
* Checks an expression is true otherwise throws an exception
* @param expression the expression to check is true
* @param message the formatted exception message if expression is false
* @param args the arguments for the formatted string
* @throws AssertException if expression is false
*/
public static void check(boolean expression, String message, Object... args) {
if (!expression) {
throw new AssertException(String.format(message, args));
}
}
/**
* Checks that an object reference is not null
* @param object the object reference to check
* @param message the exception message if object is null
* @return the same as the argument
* @throws AssertException if the object is null
*/
public static <T> T notNull(T object, String message) {
if (object == null) {
throw new AssertException(message);
} else {
return object;
}
}
/**
* Checks that a string is not null and its trimmed length > 0
* @param text the text to check
* @param message the exception message if empty
*/
public static void notEmpty(String text, String message) {
if (text == null || text.trim().length() == 0) {
throw new AssertException(message);
}
}
/**
* Asserts some condition is true and raises an AssertException if not
* @param condition the condition outcome
*/
public static void assertTrue(boolean condition) {
if (!condition) {
fail("Boolean assertion failed", null);
}
}
/**
* Asserts some condition is true and raises an AssertException if not
* @param condition the condition outcome
* @param message the message
*/
public static void assertTrue(boolean condition, String message) {
if (!condition) {
fail(message, null);
}
}
/**
* Asserts that the two values are equals
* @param actual the actual value
* @param expected the expected value
* @param message the error message if not equals
*/
public static void assertEquals(int actual, int expected, String message) {
assertEquals(actual, expected, 0d, message);
}
/**
* Asserts that the two values are almost equal based on some threshold
* @param actual the actual value
* @param expected the expected value
* @param delta the delta threshold
*/
public static void assertEquals(int actual, int expected, double delta) {
assertEquals(actual, expected, delta, null);
}
/**
* Asserts that the two values are almost equal based on some threshold
* @param actual the actual value
* @param expected the expected value
* @param message the error message if not equals
* @param delta the delta threshold
*/
public static void assertEquals(int actual, int expected, double delta, String message) {
if (Integer.compare(actual, expected) != 0) {
if (Math.abs(expected - actual) > delta) {
fail(message, "Actual value," + actual + ", not equal to expected: " + expected);
}
}
}
/**
* Asserts that the two values are equals
* @param actual the actual value
* @param expected the expected value
* @param message the error message if not equals
*/
public static void assertEquals(double actual, double expected, String message) {
assertEquals(actual, expected, 0d, message);
}
/**
* Asserts that the two values are almost equal based on some threshold
* @param actual the actual value
* @param expected the expected value
* @param delta the delta threshold
*/
public static void assertEquals(double actual, double expected, double delta) {
assertEquals(actual, expected, delta, null);
}
/**
* Asserts that the two values are almost equal based on some threshold
* @param actual the actual value
* @param expected the expected value
* @param message the error message if not equals
* @param delta the delta threshold
*/
public static void assertEquals(double actual, double expected, double delta, String message) {
if (Double.compare(actual, expected) != 0) {
if (Math.abs(expected - actual) > delta) {
fail(message, "Actual value," + actual + ", not equal to expected: " + expected);
}
}
}
/**
* Asserts that the two values are equals
* @param actual the actual value
* @param expected the expected value
*/
public static void assertEquals(Object actual, Object expected) {
assertEquals(actual, expected, null);
}
/**
* Asserts that the two values are equals
* @param actual the actual value
* @param expected the expected value
* @param message the error message if not equals
*/
public static void assertEquals(Object actual, Object expected, String message) {
if (actual != expected) {
if (actual != null && expected != null) {
final Class<?> type1 = actual.getClass();
final Class<?> type2 = expected.getClass();
if (type1 != type2) {
fail(message, String.format("Type mismatch, %s != %s", type1, type2));
} else if (!actual.equals(expected)) {
fail(message, String.format("Actual and expected value mismatch, %s != %s", type1, type2));
}
} else if (actual == null) {
fail(message, String.format("Actual value is null, expected value = %s", expected));
} else {
fail(message, String.format("Expected value is null, actual value = %s", actual));
}
}
}
/**
* Throws an AssertException with the message specified
* @param message the message string
* @param reason the reason string
*/
private static void fail(String message, String reason) {
if (message != null && reason != null) {
throw new AssertException(message + " - " + reason);
} else if (message != null) {
throw new AssertException(message);
} else {
throw new AssertException(reason);
}
}
}
//===- MCELFStreamer.h - MCStreamer ELF Object File Interface ---*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#ifndef LLVM_MC_MCELFSTREAMER_H
#define LLVM_MC_MCELFSTREAMER_H
#include "llvm/ADT/SmallVector.h"
#include "llvm/MC/MCDirectives.h"
#include "llvm/MC/MCObjectStreamer.h"
namespace llvm {
class MCAsmBackend;
class MCCodeEmitter;
class MCExpr;
class MCInst;
class MCELFStreamer : public MCObjectStreamer {
public:
MCELFStreamer(MCContext &Context, std::unique_ptr<MCAsmBackend> TAB,
std::unique_ptr<MCObjectWriter> OW,
std::unique_ptr<MCCodeEmitter> Emitter);
~MCELFStreamer() override = default;
/// state management
void reset() override {
SeenIdent = false;
BundleGroups.clear();
MCObjectStreamer::reset();
}
/// \name MCStreamer Interface
/// @{
void InitSections(bool NoExecStack) override;
void ChangeSection(MCSection *Section, const MCExpr *Subsection) override;
void EmitLabel(MCSymbol *Symbol, SMLoc Loc = SMLoc()) override;
void EmitLabelAtPos(MCSymbol *Symbol, SMLoc Loc, MCFragment *F,
uint64_t Offset) override;
void EmitAssemblerFlag(MCAssemblerFlag Flag) override;
void EmitThumbFunc(MCSymbol *Func) override;
void EmitWeakReference(MCSymbol *Alias, const MCSymbol *Symbol) override;
bool EmitSymbolAttribute(MCSymbol *Symbol, MCSymbolAttr Attribute) override;
void EmitSymbolDesc(MCSymbol *Symbol, unsigned DescValue) override;
void EmitCommonSymbol(MCSymbol *Symbol, uint64_t Size,
unsigned ByteAlignment) override;
void emitELFSize(MCSymbol *Symbol, const MCExpr *Value) override;
void emitELFSymverDirective(StringRef AliasName,
const MCSymbol *Aliasee) override;
void EmitLocalCommonSymbol(MCSymbol *Symbol, uint64_t Size,
unsigned ByteAlignment) override;
void EmitZerofill(MCSection *Section, MCSymbol *Symbol = nullptr,
uint64_t Size = 0, unsigned ByteAlignment = 0,
SMLoc L = SMLoc()) override;
void EmitTBSSSymbol(MCSection *Section, MCSymbol *Symbol, uint64_t Size,
unsigned ByteAlignment = 0) override;
void EmitValueImpl(const MCExpr *Value, unsigned Size,
SMLoc Loc = SMLoc()) override;
void EmitIdent(StringRef IdentString) override;
void EmitValueToAlignment(unsigned, int64_t, unsigned, unsigned) override;
void emitCGProfileEntry(const MCSymbolRefExpr *From,
const MCSymbolRefExpr *To, uint64_t Count) override;
void FinishImpl() override;
void EmitBundleAlignMode(unsigned AlignPow2) override;
void EmitBundleLock(bool AlignToEnd) override;
void EmitBundleUnlock() override;
private:
bool isBundleLocked() const;
void EmitInstToFragment(const MCInst &Inst, const MCSubtargetInfo &) override;
void EmitInstToData(const MCInst &Inst, const MCSubtargetInfo &) override;
void fixSymbolsInTLSFixups(const MCExpr *expr);
void finalizeCGProfileEntry(const MCSymbolRefExpr *&S);
void finalizeCGProfile();
/// Merge the content of the fragment \p EF into the fragment \p DF.
void mergeFragment(MCDataFragment *, MCDataFragment *);
bool SeenIdent = false;
/// BundleGroups - The stack of fragments holding the bundle-locked
/// instructions.
SmallVector<MCDataFragment *, 4> BundleGroups;
};
MCELFStreamer *createARMELFStreamer(MCContext &Context,
std::unique_ptr<MCAsmBackend> TAB,
std::unique_ptr<MCObjectWriter> OW,
std::unique_ptr<MCCodeEmitter> Emitter,
bool RelaxAll, bool IsThumb, bool IsAndroid);
} // end namespace llvm
#endif // LLVM_MC_MCELFSTREAMER_H
/**
* An interface to define a replacement rule for CustomReplace plugin
*/
export default interface Replacement {
/**
* Source string to replace from
*/
sourceString: string;
/**
* HTML string to replace to
*/
replacementHTML: string;
/**
* Whether the matching should be case sensitive
*/
matchSourceCaseSensitive: boolean;
}
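// A minimal illustrative rule (not part of the original file): hypothetical
// values showing how the three fields combine in a CustomReplace configuration,
// e.g. turning a typed emoticon into markup.
const smileyRule: Replacement = {
    sourceString: ':)',
    replacementHTML: '<span>🙂</span>',
    matchSourceCaseSensitive: false,
};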
800 -1 0 1.495 0.0
801 -1 0 1.495 0.0
802 -1 0 1.5 0.0
803 -1 0 1.5 0.0
804 -1 0 1.505 0.0
805 -1 0 1.505 0.0
806 -1 0 1.51 0.0
807 -1 0 1.51 0.0
808 -1 0 1.515 0.0
809 -1 0 1.515 0.0
810 -1 0 1.52 0.0
811 -1 0 1.52 0.0
812 -1 0 1.525 0.0
813 -1 0 1.525 0.0
814 -1 0 1.53 0.0
815 -1 0 1.53 0.0
816 -1 0 1.535 0.0
817 -1 0 1.535 0.0
818 -1 0 1.54 0.0
819 -1 0 1.54 0.0
820 -1 0 1.545 0.0
821 -1 0 1.545 0.0
822 -1 0 1.55 0.0
823 -1 0 1.55 0.0
824 -1 0 1.555 0.0
825 -1 0 1.555 0.0
826 -1 0 1.56 0.0
827 -1 0 1.56 0.0
828 -1 0 1.565 0.0
829 -1 0 1.565 0.0
830 -1 0 1.57 0.0
831 -1 0 1.57 0.0
832 -1 0 1.575 0.0
833 -1 0 1.575 0.0
834 -1 0 1.58 0.0
835 -1 0 1.58 0.0
836 -1 0 1.585 0.0
837 -1 0 1.585 0.0
838 -1 0 1.59 0.0
839 -1 0 1.59 0.0
840 -1 0 1.595 0.0
841 -1 0 1.595 0.0
842 -1 0 1.6 0.0
843 -1 0 1.6 0.0
844 -1 0 1.605 0.0
845 -1 0 1.605 0.0
846 -1 0 1.61 0.0
847 -1 0 1.61 0.0
848 -1 0 1.615 0.0
849 -1 0 1.615 0.0
850 -1 0 1.62 0.0
851 -1 0 1.62 0.0
852 -1 0 1.625 0.0
853 -1 0 1.625 0.0
854 -1 0 1.63 0.0
855 -1 0 1.63 0.0
856 -1 0 1.635 0.0
857 -1 0 1.635 0.0
858 -1 0 1.64 0.0
859 -1 0 1.64 0.0
860 -1 0 1.645 0.0
861 -1 0 1.645 0.0
862 -1 0 1.65 0.0
863 -1 0 1.65 0.0
864 -1 0 1.655 0.0
865 -1 0 1.655 0.0
866 -1 0 1.66 0.0
867 -1 0 1.66 0.0
868 -1 0 1.665 0.0
869 -1 0 1.665 0.0
870 -1 0 1.67 0.0
871 -1 0 1.67 0.0
872 -1 0 1.675 0.0
873 -1 0 1.675 0.0
874 -1 0 1.68 0.0
875 -1 0 1.68 0.0
876 -1 0 1.685 0.0
877 -1 0 1.685 0.0
878 -1 0 1.69 0.0
879 -1 0 1.69 0.0
880 -1 0 1.695 0.0
881 -1 0 1.695 0.0
882 -1 0 1.7 0.0
883 -1 0 1.7 0.0
884 -1 0 1.705 0.0
885 -1 0 1.705 0.0
886 -1 0 1.71 0.0
887 -1 0 1.71 0.0
888 -1 0 1.715 0.0
889 -1 0 1.715 0.0
890 -1 0 1.72 0.0
891 -1 0 1.72 0.0
892 -1 0 1.725 0.0
893 -1 0 1.725 0.0
894 -1 0 1.73 0.0
895 -1 0 1.73 0.0
896 -1 0 1.735 0.0
897 -1 0 1.735 0.0
898 -1 0 1.74 0.0
899 -1 0 1.74 0.0
900 -1 0 1.745 0.0
901 -1 0 1.745 0.0
902 -1 0 1.75 0.0
903 -1 0 1.75 0.0
904 -1 0 1.755 0.0
905 -1 0 1.755 0.0
906 -1 0 1.76 0.0
907 -1 0 1.76 0.0
908 -1 0 1.765 0.0
909 -1 0 1.765 0.0
910 -1 0 1.77 0.0
911 -1 0 1.77 0.0
912 -1 0 1.775 0.0
913 -1 0 1.775 0.0
914 -1 0 1.78 0.0
915 -1 0 1.78 0.0
916 -1 0 1.785 0.0
917 -1 0 1.785 0.0
918 -1 0 1.79 0.0
919 -1 0 1.79 0.0
920 -1 0 1.795 0.0
921 -1 0 1.795 0.0
922 -1 0 1.8 0.0
923 -1 0 1.8 0.0
924 -1 0 1.805 0.0
925 -1 0 1.805 0.0
926 -1 0 1.81 0.0
927 -1 0 1.81 0.0
928 -1 0 1.815 0.0
929 -1 0 1.815 0.0
930 -1 0 1.82 0.0
931 -1 0 1.82 0.0
932 -1 0 1.825 0.0
933 -1 0 1.825 0.0
934 -1 0 1.83 0.0
935 -1 0 1.83 0.0
936 -1 0 1.835 0.0
937 -1 0 1.835 0.0
938 -1 0 1.84 0.0
939 -1 0 1.84 0.0
940 -1 0 1.845 0.0
941 -1 0 1.845 0.0
942 -1 0 1.85 0.0
943 -1 0 1.85 0.0
944 -1 0 1.855 0.0
945 -1 0 1.855 0.0
946 -1 0 1.86 0.0
947 -1 0 1.86 0.0
948 -1 0 1.865 0.0
949 -1 0 1.865 0.0
950 -1 0 1.87 0.0
951 -1 0 1.87 0.0
952 -1 0 1.875 0.0
953 -1 0 1.875 0.0
954 -1 0 1.88 0.0
955 -1 0 1.88 0.0
956 -1 0 1.885 0.0
957 -1 0 1.885 0.0
958 -1 0 1.89 0.0
959 -1 0 1.89 0.0
960 -1 0 1.895 0.0
961 -1 0 1.895 0.0
962 -1 0 1.9 0.0
963 -1 0 1.9 0.0
964 -1 0 1.905 0.0
965 -1 0 1.905 0.0
966 -1 0 1.91 0.0
967 -1 0 1.91 0.0
968 -1 0 1.915 0.0
969 -1 0 1.915 0.0
970 -1 0 1.92 0.0
971 -1 0 1.92 0.0
972 -1 0 1.925 0.0
973 -1 0 1.925 0.0
974 -1 0 1.93 0.0
975 -1 0 1.93 0.0
976 -1 0 1.935 0.0
977 -1 0 1.935 0.0
978 -1 0 1.94 0.0
979 -1 0 1.94 0.0
980 -1 0 1.945 0.0
981 -1 0 1.945 0.0
982 -1 0 1.95 0.0
983 -1 0 1.95 0.0
984 -1 0 1.955 0.0
985 -1 0 1.955 0.0
986 -1 0 1.96 0.0
987 -1 0 1.96 0.0
988 -1 0 1.965 0.0
989 -1 0 1.965 0.0
990 -1 0 1.97 0.0
991 -1 0 1.97 0.0
992 -1 0 1.975 0.0
993 -1 0 1.975 0.0
994 -1 0 1.98 0.0
995 -1 0 1.98 0.0
996 -1 0 1.985 0.0
997 -1 0 1.985 0.0
998 -1 0 1.99 0.0
999 -1 0 1.99 0.0
1000 -1 0 1.995 0.0
1001 -1 0 1.995 0.0
1002 -1 0 2 0.0
1003 -1 0 2 0.0
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: peer/proposal.proto
package peer
import (
fmt "fmt"
proto "github.com/golang/protobuf/proto"
math "math"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// This structure is necessary to sign the proposal which contains the header
// and the payload. Without this structure, we would have to concatenate the
// header and the payload to verify the signature, which could be expensive
// with large payload
//
// When an endorser receives a SignedProposal message, it should verify the
// signature over the proposal bytes. This verification requires the following
// steps:
// 1. Verification of the validity of the certificate that was used to produce
// the signature. The certificate will be available once proposalBytes has
// been unmarshalled to a Proposal message, and Proposal.header has been
// unmarshalled to a Header message. While this unmarshalling-before-verifying
// might not be ideal, it is unavoidable because i) the signature needs to also
// protect the signing certificate; ii) it is desirable that Header is created
// once by the client and never changed (for the sake of accountability and
// non-repudiation). Note also that it is actually impossible to conclusively
// verify the validity of the certificate included in a Proposal, because the
// proposal needs to first be endorsed and ordered with respect to certificate
// expiration transactions. Still, it is useful to pre-filter expired
// certificates at this stage.
// 2. Verification that the certificate is trusted (signed by a trusted CA) and
// that it is allowed to transact with us (with respect to some ACLs);
// 3. Verification that the signature on proposalBytes is valid;
// 4. Detect replay attacks;
type SignedProposal struct {
// The bytes of Proposal
ProposalBytes []byte `protobuf:"bytes,1,opt,name=proposal_bytes,json=proposalBytes,proto3" json:"proposal_bytes,omitempty"`
// Signature over proposalBytes; this signature is to be verified against
// the creator identity contained in the header of the Proposal message
// marshaled as proposalBytes
Signature []byte `protobuf:"bytes,2,opt,name=signature,proto3" json:"signature,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *SignedProposal) Reset() { *m = SignedProposal{} }
func (m *SignedProposal) String() string { return proto.CompactTextString(m) }
func (*SignedProposal) ProtoMessage() {}
func (*SignedProposal) Descriptor() ([]byte, []int) {
return fileDescriptor_c4dbb4372a94bd5b, []int{0}
}
func (m *SignedProposal) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_SignedProposal.Unmarshal(m, b)
}
func (m *SignedProposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_SignedProposal.Marshal(b, m, deterministic)
}
func (m *SignedProposal) XXX_Merge(src proto.Message) {
xxx_messageInfo_SignedProposal.Merge(m, src)
}
func (m *SignedProposal) XXX_Size() int {
return xxx_messageInfo_SignedProposal.Size(m)
}
func (m *SignedProposal) XXX_DiscardUnknown() {
xxx_messageInfo_SignedProposal.DiscardUnknown(m)
}
var xxx_messageInfo_SignedProposal proto.InternalMessageInfo
func (m *SignedProposal) GetProposalBytes() []byte {
if m != nil {
return m.ProposalBytes
}
return nil
}
func (m *SignedProposal) GetSignature() []byte {
if m != nil {
return m.Signature
}
return nil
}
// A Proposal is sent to an endorser for endorsement. The proposal contains:
// 1. A header which should be unmarshaled to a Header message. Note that
// Header is both the header of a Proposal and of a Transaction, in that i)
// both headers should be unmarshaled to this message; and ii) it is used to
// compute cryptographic hashes and signatures. The header has fields common
// to all proposals/transactions. In addition it has a type field for
// additional customization. An example of this is the ChaincodeHeaderExtension
// message used to extend the Header for type CHAINCODE.
// 2. A payload whose type depends on the header's type field.
// 3. An extension whose type depends on the header's type field.
//
// Let us see an example. For type CHAINCODE (see the Header message),
// we have the following:
// 1. The header is a Header message whose extensions field is a
// ChaincodeHeaderExtension message.
// 2. The payload is a ChaincodeProposalPayload message.
// 3. The extension is a ChaincodeAction that might be used to ask the
// endorsers to endorse a specific ChaincodeAction, thus emulating the
// submitting peer model.
type Proposal struct {
// The header of the proposal. It is the bytes of the Header
Header []byte `protobuf:"bytes,1,opt,name=header,proto3" json:"header,omitempty"`
// The payload of the proposal as defined by the type in the proposal
// header.
Payload []byte `protobuf:"bytes,2,opt,name=payload,proto3" json:"payload,omitempty"`
// Optional extensions to the proposal. Its content depends on the Header's
// type field. For the type CHAINCODE, it might be the bytes of a
// ChaincodeAction message.
Extension []byte `protobuf:"bytes,3,opt,name=extension,proto3" json:"extension,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Proposal) Reset() { *m = Proposal{} }
func (m *Proposal) String() string { return proto.CompactTextString(m) }
func (*Proposal) ProtoMessage() {}
func (*Proposal) Descriptor() ([]byte, []int) {
return fileDescriptor_c4dbb4372a94bd5b, []int{1}
}
func (m *Proposal) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Proposal.Unmarshal(m, b)
}
func (m *Proposal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Proposal.Marshal(b, m, deterministic)
}
func (m *Proposal) XXX_Merge(src proto.Message) {
xxx_messageInfo_Proposal.Merge(m, src)
}
func (m *Proposal) XXX_Size() int {
return xxx_messageInfo_Proposal.Size(m)
}
func (m *Proposal) XXX_DiscardUnknown() {
xxx_messageInfo_Proposal.DiscardUnknown(m)
}
var xxx_messageInfo_Proposal proto.InternalMessageInfo
func (m *Proposal) GetHeader() []byte {
if m != nil {
return m.Header
}
return nil
}
func (m *Proposal) GetPayload() []byte {
if m != nil {
return m.Payload
}
return nil
}
func (m *Proposal) GetExtension() []byte {
if m != nil {
return m.Extension
}
return nil
}
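// Illustrative sketch (not part of the generated code): a client typically
// marshals a Proposal and then signs those bytes to build a SignedProposal,
// matching the verification steps described above. headerBytes, payloadBytes
// and sign() below are assumptions used only for illustration.
//
//	prop := &Proposal{Header: headerBytes, Payload: payloadBytes}
//	propBytes, err := proto.Marshal(prop)
//	if err != nil {
//		// handle the marshaling error
//	}
//	signed := &SignedProposal{
//		ProposalBytes: propBytes,
//		Signature:     sign(propBytes), // hypothetical signing helper
//	}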
// ChaincodeHeaderExtension is the Header's extension message to be used when
// the Header's type is CHAINCODE. This extension is used to specify which
// chaincode to invoke and what should appear on the ledger.
type ChaincodeHeaderExtension struct {
// The ID of the chaincode to target.
ChaincodeId *ChaincodeID `protobuf:"bytes,2,opt,name=chaincode_id,json=chaincodeId,proto3" json:"chaincode_id,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ChaincodeHeaderExtension) Reset() { *m = ChaincodeHeaderExtension{} }
func (m *ChaincodeHeaderExtension) String() string { return proto.CompactTextString(m) }
func (*ChaincodeHeaderExtension) ProtoMessage() {}
func (*ChaincodeHeaderExtension) Descriptor() ([]byte, []int) {
return fileDescriptor_c4dbb4372a94bd5b, []int{2}
}
func (m *ChaincodeHeaderExtension) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ChaincodeHeaderExtension.Unmarshal(m, b)
}
func (m *ChaincodeHeaderExtension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ChaincodeHeaderExtension.Marshal(b, m, deterministic)
}
func (m *ChaincodeHeaderExtension) XXX_Merge(src proto.Message) {
xxx_messageInfo_ChaincodeHeaderExtension.Merge(m, src)
}
func (m *ChaincodeHeaderExtension) XXX_Size() int {
return xxx_messageInfo_ChaincodeHeaderExtension.Size(m)
}
func (m *ChaincodeHeaderExtension) XXX_DiscardUnknown() {
xxx_messageInfo_ChaincodeHeaderExtension.DiscardUnknown(m)
}
var xxx_messageInfo_ChaincodeHeaderExtension proto.InternalMessageInfo
func (m *ChaincodeHeaderExtension) GetChaincodeId() *ChaincodeID {
if m != nil {
return m.ChaincodeId
}
return nil
}
// ChaincodeProposalPayload is the Proposal's payload message to be used when
// the Header's type is CHAINCODE. It contains the arguments for this
// invocation.
type ChaincodeProposalPayload struct {
// Input contains the arguments for this invocation. If this invocation
// deploys a new chaincode, ESCC/VSCC are part of this field.
// This is usually a marshaled ChaincodeInvocationSpec
Input []byte `protobuf:"bytes,1,opt,name=input,proto3" json:"input,omitempty"`
// TransientMap contains data (e.g. cryptographic material) that might be used
// to implement some form of application-level confidentiality. The contents
// of this field are supposed to always be omitted from the transaction and
// excluded from the ledger.
TransientMap map[string][]byte `protobuf:"bytes,2,rep,name=TransientMap,proto3" json:"TransientMap,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ChaincodeProposalPayload) Reset() { *m = ChaincodeProposalPayload{} }
func (m *ChaincodeProposalPayload) String() string { return proto.CompactTextString(m) }
func (*ChaincodeProposalPayload) ProtoMessage() {}
func (*ChaincodeProposalPayload) Descriptor() ([]byte, []int) {
return fileDescriptor_c4dbb4372a94bd5b, []int{3}
}
func (m *ChaincodeProposalPayload) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ChaincodeProposalPayload.Unmarshal(m, b)
}
func (m *ChaincodeProposalPayload) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ChaincodeProposalPayload.Marshal(b, m, deterministic)
}
func (m *ChaincodeProposalPayload) XXX_Merge(src proto.Message) {
xxx_messageInfo_ChaincodeProposalPayload.Merge(m, src)
}
func (m *ChaincodeProposalPayload) XXX_Size() int {
return xxx_messageInfo_ChaincodeProposalPayload.Size(m)
}
func (m *ChaincodeProposalPayload) XXX_DiscardUnknown() {
xxx_messageInfo_ChaincodeProposalPayload.DiscardUnknown(m)
}
var xxx_messageInfo_ChaincodeProposalPayload proto.InternalMessageInfo
func (m *ChaincodeProposalPayload) GetInput() []byte {
if m != nil {
return m.Input
}
return nil
}
func (m *ChaincodeProposalPayload) GetTransientMap() map[string][]byte {
if m != nil {
return m.TransientMap
}
return nil
}
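// Illustrative sketch (not part of the generated code): the transient map is
// sent alongside the public input but, as noted in the comment on TransientMap
// above, is meant to stay off the transaction and the ledger. inputBytes and
// privateBytes are assumptions used only for illustration.
//
//	payload := &ChaincodeProposalPayload{
//		Input:        inputBytes, // e.g. a marshaled ChaincodeInvocationSpec
//		TransientMap: map[string][]byte{"secret": privateBytes},
//	}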
// ChaincodeAction contains the actions and events generated by the execution
// of the chaincode.
type ChaincodeAction struct {
// This field contains the read set and the write set produced by the
// chaincode executing this invocation.
Results []byte `protobuf:"bytes,1,opt,name=results,proto3" json:"results,omitempty"`
// This field contains the events generated by the chaincode executing this
// invocation.
Events []byte `protobuf:"bytes,2,opt,name=events,proto3" json:"events,omitempty"`
// This field contains the result of executing this invocation.
Response *Response `protobuf:"bytes,3,opt,name=response,proto3" json:"response,omitempty"`
// This field contains the ChaincodeID of the chaincode that executed this
// invocation. The endorser sets it to the ChaincodeID it called while
// simulating the proposal. The committer validates that the version matches
// the latest chaincode version. Keeping the ChaincodeID (and therefore the
// version) here opens up the possibility of multiple ChaincodeActions per
// transaction.
ChaincodeId *ChaincodeID `protobuf:"bytes,4,opt,name=chaincode_id,json=chaincodeId,proto3" json:"chaincode_id,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ChaincodeAction) Reset() { *m = ChaincodeAction{} }
func (m *ChaincodeAction) String() string { return proto.CompactTextString(m) }
func (*ChaincodeAction) ProtoMessage() {}
func (*ChaincodeAction) Descriptor() ([]byte, []int) {
return fileDescriptor_c4dbb4372a94bd5b, []int{4}
}
func (m *ChaincodeAction) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ChaincodeAction.Unmarshal(m, b)
}
func (m *ChaincodeAction) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ChaincodeAction.Marshal(b, m, deterministic)
}
func (m *ChaincodeAction) XXX_Merge(src proto.Message) {
xxx_messageInfo_ChaincodeAction.Merge(m, src)
}
func (m *ChaincodeAction) XXX_Size() int {
return xxx_messageInfo_ChaincodeAction.Size(m)
}
func (m *ChaincodeAction) XXX_DiscardUnknown() {
xxx_messageInfo_ChaincodeAction.DiscardUnknown(m)
}
var xxx_messageInfo_ChaincodeAction proto.InternalMessageInfo
func (m *ChaincodeAction) GetResults() []byte {
if m != nil {
return m.Results
}
return nil
}
func (m *ChaincodeAction) GetEvents() []byte {
if m != nil {
return m.Events
}
return nil
}
func (m *ChaincodeAction) GetResponse() *Response {
if m != nil {
return m.Response
}
return nil
}
func (m *ChaincodeAction) GetChaincodeId() *ChaincodeID {
if m != nil {
return m.ChaincodeId
}
return nil
}
func init() {
proto.RegisterType((*SignedProposal)(nil), "protos.SignedProposal")
proto.RegisterType((*Proposal)(nil), "protos.Proposal")
proto.RegisterType((*ChaincodeHeaderExtension)(nil), "protos.ChaincodeHeaderExtension")
proto.RegisterType((*ChaincodeProposalPayload)(nil), "protos.ChaincodeProposalPayload")
proto.RegisterMapType((map[string][]byte)(nil), "protos.ChaincodeProposalPayload.TransientMapEntry")
proto.RegisterType((*ChaincodeAction)(nil), "protos.ChaincodeAction")
}
func init() { proto.RegisterFile("peer/proposal.proto", fileDescriptor_c4dbb4372a94bd5b) }
var fileDescriptor_c4dbb4372a94bd5b = []byte{
// 462 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x53, 0xcf, 0x6b, 0xdb, 0x30,
0x18, 0xc5, 0x69, 0x9b, 0xa6, 0x5f, 0xb2, 0xd6, 0x75, 0xcb, 0x30, 0xa1, 0x87, 0x62, 0x18, 0xf4,
0xd0, 0x3a, 0x90, 0xc1, 0x18, 0xbb, 0x8c, 0x65, 0x2b, 0xac, 0x83, 0x41, 0xf1, 0x7e, 0x1c, 0x7a,
0x09, 0xb2, 0xfd, 0xcd, 0x11, 0xf1, 0x24, 0x21, 0xc9, 0x61, 0xfe, 0xf3, 0x76, 0xdc, 0x7f, 0x35,
0x64, 0x49, 0x6e, 0xba, 0x5c, 0x76, 0x4a, 0xbe, 0x1f, 0xef, 0xe9, 0x3d, 0x3d, 0x19, 0xce, 0x04,
0xa2, 0x9c, 0x09, 0xc9, 0x05, 0x57, 0xa4, 0x4e, 0x85, 0xe4, 0x9a, 0x47, 0xc3, 0xee, 0x47, 0x4d,
0xcf, 0xbb, 0x61, 0xb1, 0x22, 0x94, 0x15, 0xbc, 0x44, 0x3b, 0x9d, 0x5e, 0x3c, 0x81, 0x2c, 0x25,
0x2a, 0xc1, 0x99, 0x72, 0xd3, 0xe4, 0x1b, 0x1c, 0x7f, 0xa1, 0x15, 0xc3, 0xf2, 0xde, 0x2d, 0x44,
0x2f, 0xe0, 0xb8, 0x5f, 0xce, 0x5b, 0x8d, 0x2a, 0x0e, 0x2e, 0x83, 0xab, 0x49, 0xf6, 0xcc, 0x77,
0x17, 0xa6, 0x19, 0x5d, 0xc0, 0x91, 0xa2, 0x15, 0x23, 0xba, 0x91, 0x18, 0x0f, 0xba, 0x8d, 0xc7,
0x46, 0xf2, 0x00, 0xa3, 0x9e, 0xf0, 0x39, 0x0c, 0x57, 0x48, 0x4a, 0x94, 0x8e, 0xc8, 0x55, 0x51,
0x0c, 0x87, 0x82, 0xb4, 0x35, 0x27, 0xa5, 0xc3, 0xfb, 0xd2, 0x70, 0xe3, 0x2f, 0x8d, 0x4c, 0x51,
0xce, 0xe2, 0x3d, 0xcb, 0xdd, 0x37, 0x92, 0x35, 0xc4, 0xef, 0xbd, 0xc7, 0x8f, 0x1d, 0xd5, 0xad,
0x9f, 0x45, 0xaf, 0x60, 0xd2, 0xfb, 0x5f, 0x52, 0x4b, 0x3c, 0x9e, 0x9f, 0x59, 0xb3, 0x2a, 0xed,
0x71, 0x77, 0x1f, 0xb2, 0x71, 0xbf, 0x78, 0x57, 0x7e, 0xda, 0x1f, 0x05, 0xe1, 0x20, 0x3b, 0x75,
0x02, 0x96, 0x1b, 0xaa, 0x72, 0x5a, 0x53, 0xdd, 0x26, 0x7f, 0x82, 0xad, 0xd3, 0xbc, 0xa5, 0x7b,
0xa7, 0xf3, 0x1c, 0x0e, 0x28, 0x13, 0x8d, 0x76, 0xc6, 0x6c, 0x11, 0x7d, 0x87, 0xc9, 0x57, 0x49,
0x98, 0xa2, 0xc8, 0xf4, 0x67, 0x22, 0xe2, 0xc1, 0xe5, 0xde, 0xd5, 0x78, 0x3e, 0xdf, 0xd1, 0xf0,
0x0f, 0x5b, 0xba, 0x0d, 0xba, 0x65, 0x5a, 0xb6, 0xd9, 0x13, 0x9e, 0xe9, 0x5b, 0x38, 0xdd, 0x59,
0x89, 0x42, 0xd8, 0x5b, 0x63, 0xdb, 0x09, 0x38, 0xca, 0xcc, 0x5f, 0x23, 0x6a, 0x43, 0xea, 0xc6,
0x87, 0x62, 0x8b, 0x37, 0x83, 0xd7, 0x41, 0xf2, 0x3b, 0x80, 0x93, 0xfe, 0xf4, 0x77, 0x85, 0x36,
0x17, 0x16, 0xc3, 0xa1, 0x44, 0xd5, 0xd4, 0xda, 0xc7, 0xec, 0x4b, 0x13, 0x1b, 0x6e, 0x90, 0x69,
0xe5, 0x88, 0x5c, 0x15, 0x5d, 0xc3, 0xc8, 0xbf, 0xa1, 0x2e, 0x9b, 0xf1, 0x3c, 0xf4, 0xd6, 0x32,
0xd7, 0xcf, 0xfa, 0x8d, 0x9d, 0x40, 0xf6, 0xff, 0x3b, 0x90, 0x83, 0x70, 0x98, 0x85, 0x9a, 0xaf,
0x91, 0x2d, 0xb9, 0x40, 0x49, 0x8c, 0x5c, 0xb5, 0x28, 0x20, 0xe1, 0xb2, 0x4a, 0x57, 0xad, 0x40,
0x59, 0x63, 0x59, 0xa1, 0x4c, 0x7f, 0x90, 0x5c, 0xd2, 0xc2, 0x33, 0x9a, 0xd7, 0xbe, 0x38, 0x79,
0xbc, 0xdb, 0x62, 0x4d, 0x2a, 0x7c, 0xb8, 0xae, 0xa8, 0x5e, 0x35, 0x79, 0x5a, 0xf0, 0x9f, 0xb3,
0x2d, 0xec, 0xcc, 0x62, 0x6f, 0x2c, 0xf6, 0xa6, 0xe2, 0x33, 0x03, 0xcf, 0xed, 0x07, 0xf5, 0xf2,
0x6f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xe5, 0xf4, 0xc9, 0x9a, 0x6e, 0x03, 0x00, 0x00,
}
| {
"pile_set_name": "Github"
} |
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build aix
package terminal
import "golang.org/x/sys/unix"
const ioctlReadTermios = unix.TCGETS
const ioctlWriteTermios = unix.TCSETS
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="utf-8"?>
<resources>
<string name="copyright">Авторські права © 2015 – 2019 Міхаель Фарел</string>
<string name="license">Ліцензія</string>
<string name="title_activity_license">Ліцензії з відкритим сирцевим кодом</string>
<string name="directions">Прикладіть картку.</string>
<string name="reading_card">Карта зчитується…</string>
<string name="dont_move">Не рухайте карткою</string>
<string name="advanced_info">Докладні відомості</string>
<string name="delete_card">Видалити картку</string>
<string name="unknown_card">Невідома картка</string>
<string name="locked_mfc_card">Заблокована картка MIFARE Classic</string>
<string name="locked_mfu_card">Заблокована картка MIFARE Ultralight</string>
</resources> | {
"pile_set_name": "Github"
} |
/*
* Copyright (C) 2013-2014 Synopsys, Inc. All rights reserved.
*
* SPDX-License-Identifier: GPL-2.0+
*/
#include <config.h>
#include <common.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/log2.h>
#include <asm/arcregs.h>
#include <asm/cache.h>
/* Bit values in IC_CTRL */
#define IC_CTRL_CACHE_DISABLE (1 << 0)
/* Bit values in DC_CTRL */
#define DC_CTRL_CACHE_DISABLE (1 << 0)
#define DC_CTRL_INV_MODE_FLUSH (1 << 6)
#define DC_CTRL_FLUSH_STATUS (1 << 8)
#define CACHE_VER_NUM_MASK 0xF
#define SLC_CTRL_SB (1 << 2)
#define OP_INV 0x1
#define OP_FLUSH 0x2
#define OP_INV_IC 0x3
/*
* By default that variable will fall into .bss section.
* But .bss section is not relocated and so it will be initialized before
* relocation but will be used after being zeroed.
*/
int l1_line_sz __section(".data");
int dcache_exists __section(".data");
int icache_exists __section(".data");
#define CACHE_LINE_MASK (~(l1_line_sz - 1))
#ifdef CONFIG_ISA_ARCV2
int slc_line_sz __section(".data");
int slc_exists __section(".data");
int ioc_exists __section(".data");
static unsigned int __before_slc_op(const int op)
{
unsigned int reg = reg;
if (op == OP_INV) {
/*
* IM is set by default and implies Flush-n-inv
* Clear it here for vanilla inv
*/
reg = read_aux_reg(ARC_AUX_SLC_CTRL);
write_aux_reg(ARC_AUX_SLC_CTRL, reg & ~DC_CTRL_INV_MODE_FLUSH);
}
return reg;
}
static void __after_slc_op(const int op, unsigned int reg)
{
if (op & OP_FLUSH) { /* flush / flush-n-inv both wait */
/*
* Make sure "busy" bit reports correct status,
* see STAR 9001165532
*/
read_aux_reg(ARC_AUX_SLC_CTRL);
while (read_aux_reg(ARC_AUX_SLC_CTRL) &
DC_CTRL_FLUSH_STATUS)
;
}
/* Switch back to default Invalidate mode */
if (op == OP_INV)
write_aux_reg(ARC_AUX_SLC_CTRL, reg | DC_CTRL_INV_MODE_FLUSH);
}
static inline void __slc_line_loop(unsigned long paddr, unsigned long sz,
const int op)
{
unsigned int aux_cmd;
int num_lines;
#define SLC_LINE_MASK (~(slc_line_sz - 1))
aux_cmd = op & OP_INV ? ARC_AUX_SLC_IVDL : ARC_AUX_SLC_FLDL;
sz += paddr & ~SLC_LINE_MASK;
paddr &= SLC_LINE_MASK;
num_lines = DIV_ROUND_UP(sz, slc_line_sz);
while (num_lines-- > 0) {
write_aux_reg(aux_cmd, paddr);
paddr += slc_line_sz;
}
}
static inline void __slc_entire_op(const int cacheop)
{
int aux;
unsigned int ctrl_reg = __before_slc_op(cacheop);
if (cacheop & OP_INV) /* Inv or flush-n-inv use same cmd reg */
aux = ARC_AUX_SLC_INVALIDATE;
else
aux = ARC_AUX_SLC_FLUSH;
write_aux_reg(aux, 0x1);
__after_slc_op(cacheop, ctrl_reg);
}
static inline void __slc_line_op(unsigned long paddr, unsigned long sz,
const int cacheop)
{
unsigned int ctrl_reg = __before_slc_op(cacheop);
__slc_line_loop(paddr, sz, cacheop);
__after_slc_op(cacheop, ctrl_reg);
}
#else
#define __slc_entire_op(cacheop)
#define __slc_line_op(paddr, sz, cacheop)
#endif
#ifdef CONFIG_ISA_ARCV2
static void read_decode_cache_bcr_arcv2(void)
{
union {
struct {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:24, way:2, lsz:2, sz:4;
#else
unsigned int sz:4, lsz:2, way:2, pad:24;
#endif
} fields;
unsigned int word;
} slc_cfg;
union {
struct {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:24, ver:8;
#else
unsigned int ver:8, pad:24;
#endif
} fields;
unsigned int word;
} sbcr;
sbcr.word = read_aux_reg(ARC_BCR_SLC);
if (sbcr.fields.ver) {
slc_cfg.word = read_aux_reg(ARC_AUX_SLC_CONFIG);
slc_exists = 1;
slc_line_sz = (slc_cfg.fields.lsz == 0) ? 128 : 64;
}
union {
struct bcr_clust_cfg {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:7, c:1, num_entries:8, num_cores:8, ver:8;
#else
unsigned int ver:8, num_cores:8, num_entries:8, c:1, pad:7;
#endif
} fields;
unsigned int word;
} cbcr;
cbcr.word = read_aux_reg(ARC_BCR_CLUSTER);
if (cbcr.fields.c)
ioc_exists = 1;
}
#endif
void read_decode_cache_bcr(void)
{
int dc_line_sz = 0, ic_line_sz = 0;
union {
struct {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad:12, line_len:4, sz:4, config:4, ver:8;
#else
unsigned int ver:8, config:4, sz:4, line_len:4, pad:12;
#endif
} fields;
unsigned int word;
} ibcr, dbcr;
ibcr.word = read_aux_reg(ARC_BCR_IC_BUILD);
if (ibcr.fields.ver) {
icache_exists = 1;
l1_line_sz = ic_line_sz = 8 << ibcr.fields.line_len;
if (!ic_line_sz)
panic("Instruction cache exists but line length is 0\n");
}
dbcr.word = read_aux_reg(ARC_BCR_DC_BUILD);
if (dbcr.fields.ver) {
dcache_exists = 1;
l1_line_sz = dc_line_sz = 16 << dbcr.fields.line_len;
if (!dc_line_sz)
panic("Data cache exists but line length is 0\n");
}
if (ic_line_sz && dc_line_sz && (ic_line_sz != dc_line_sz))
panic("Instruction and data cache line lengths differ\n");
}
void cache_init(void)
{
read_decode_cache_bcr();
#ifdef CONFIG_ISA_ARCV2
read_decode_cache_bcr_arcv2();
if (ioc_exists) {
/* IOC Aperture start is equal to DDR start */
unsigned int ap_base = CONFIG_SYS_SDRAM_BASE;
/* IOC Aperture size is equal to DDR size */
long ap_size = CONFIG_SYS_SDRAM_SIZE;
flush_dcache_all();
invalidate_dcache_all();
if (!is_power_of_2(ap_size) || ap_size < 4096)
panic("IOC Aperture size must be a power of 2 and bigger than 4 KiB");
/*
* IOC Aperture size decoded as 2 ^ (SIZE + 2) KB,
* so setting 0x11 implies 512M, 0x12 implies 1G...
*/
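/*
 * Worked example of the encoding above (illustrative only): for
 * ap_size = 512 MiB, ap_size/1024 = 524288 KiB and order_base_2(524288) = 19,
 * so the value written is 19 - 2 = 17 (0x11), and 2 ^ (17 + 2) KiB = 512 MiB.
 */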
write_aux_reg(ARC_AUX_IO_COH_AP0_SIZE,
order_base_2(ap_size/1024) - 2);
/* IOC Aperture start must be aligned to the size of the aperture */
if (ap_base % ap_size != 0)
panic("IOC Aperture start must be aligned to the size of the aperture");
write_aux_reg(ARC_AUX_IO_COH_AP0_BASE, ap_base >> 12);
write_aux_reg(ARC_AUX_IO_COH_PARTIAL, 1);
write_aux_reg(ARC_AUX_IO_COH_ENABLE, 1);
}
#endif
}
int icache_status(void)
{
if (!icache_exists)
return 0;
if (read_aux_reg(ARC_AUX_IC_CTRL) & IC_CTRL_CACHE_DISABLE)
return 0;
else
return 1;
}
void icache_enable(void)
{
if (icache_exists)
write_aux_reg(ARC_AUX_IC_CTRL, read_aux_reg(ARC_AUX_IC_CTRL) &
~IC_CTRL_CACHE_DISABLE);
}
void icache_disable(void)
{
if (icache_exists)
write_aux_reg(ARC_AUX_IC_CTRL, read_aux_reg(ARC_AUX_IC_CTRL) |
IC_CTRL_CACHE_DISABLE);
}
#ifndef CONFIG_SYS_DCACHE_OFF
void invalidate_icache_all(void)
{
/* Any write to IC_IVIC register triggers invalidation of entire I$ */
if (icache_status()) {
write_aux_reg(ARC_AUX_IC_IVIC, 1);
read_aux_reg(ARC_AUX_IC_CTRL); /* blocks */
}
}
#else
void invalidate_icache_all(void)
{
}
#endif
int dcache_status(void)
{
if (!dcache_exists)
return 0;
if (read_aux_reg(ARC_AUX_DC_CTRL) & DC_CTRL_CACHE_DISABLE)
return 0;
else
return 1;
}
void dcache_enable(void)
{
if (!dcache_exists)
return;
write_aux_reg(ARC_AUX_DC_CTRL, read_aux_reg(ARC_AUX_DC_CTRL) &
~(DC_CTRL_INV_MODE_FLUSH | DC_CTRL_CACHE_DISABLE));
}
void dcache_disable(void)
{
if (!dcache_exists)
return;
write_aux_reg(ARC_AUX_DC_CTRL, read_aux_reg(ARC_AUX_DC_CTRL) |
DC_CTRL_CACHE_DISABLE);
}
#ifndef CONFIG_SYS_DCACHE_OFF
/*
* Common Helper for Line Operations on {I,D}-Cache
*/
static inline void __cache_line_loop(unsigned long paddr, unsigned long sz,
const int cacheop)
{
unsigned int aux_cmd;
#if (CONFIG_ARC_MMU_VER == 3)
unsigned int aux_tag;
#endif
int num_lines;
if (cacheop == OP_INV_IC) {
aux_cmd = ARC_AUX_IC_IVIL;
#if (CONFIG_ARC_MMU_VER == 3)
aux_tag = ARC_AUX_IC_PTAG;
#endif
} else {
/* d$ cmd: INV (discard or wback-n-discard) OR FLUSH (wback) */
aux_cmd = cacheop & OP_INV ? ARC_AUX_DC_IVDL : ARC_AUX_DC_FLDL;
#if (CONFIG_ARC_MMU_VER == 3)
aux_tag = ARC_AUX_DC_PTAG;
#endif
}
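/*
 * Align the start address down to a cache-line boundary, grow the size by
 * the amount the start moved back, then issue one cache op per line below.
 */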
sz += paddr & ~CACHE_LINE_MASK;
paddr &= CACHE_LINE_MASK;
num_lines = DIV_ROUND_UP(sz, l1_line_sz);
while (num_lines-- > 0) {
#if (CONFIG_ARC_MMU_VER == 3)
write_aux_reg(aux_tag, paddr);
#endif
write_aux_reg(aux_cmd, paddr);
paddr += l1_line_sz;
}
}
static unsigned int __before_dc_op(const int op)
{
unsigned int reg;
if (op == OP_INV) {
/*
* IM is set by default and implies Flush-n-inv
* Clear it here for vanilla inv
*/
reg = read_aux_reg(ARC_AUX_DC_CTRL);
write_aux_reg(ARC_AUX_DC_CTRL, reg & ~DC_CTRL_INV_MODE_FLUSH);
}
return reg;
}
static void __after_dc_op(const int op, unsigned int reg)
{
if (op & OP_FLUSH) /* flush / flush-n-inv both wait */
while (read_aux_reg(ARC_AUX_DC_CTRL) & DC_CTRL_FLUSH_STATUS)
;
/* Switch back to default Invalidate mode */
if (op == OP_INV)
write_aux_reg(ARC_AUX_DC_CTRL, reg | DC_CTRL_INV_MODE_FLUSH);
}
static inline void __dc_entire_op(const int cacheop)
{
int aux;
unsigned int ctrl_reg = __before_dc_op(cacheop);
if (cacheop & OP_INV) /* Inv or flush-n-inv use same cmd reg */
aux = ARC_AUX_DC_IVDC;
else
aux = ARC_AUX_DC_FLSH;
write_aux_reg(aux, 0x1);
__after_dc_op(cacheop, ctrl_reg);
}
static inline void __dc_line_op(unsigned long paddr, unsigned long sz,
const int cacheop)
{
unsigned int ctrl_reg = __before_dc_op(cacheop);
__cache_line_loop(paddr, sz, cacheop);
__after_dc_op(cacheop, ctrl_reg);
}
#else
#define __dc_entire_op(cacheop)
#define __dc_line_op(paddr, sz, cacheop)
#endif /* !CONFIG_SYS_DCACHE_OFF */
void invalidate_dcache_range(unsigned long start, unsigned long end)
{
#ifdef CONFIG_ISA_ARCV2
if (!ioc_exists)
#endif
__dc_line_op(start, end - start, OP_INV);
#ifdef CONFIG_ISA_ARCV2
if (slc_exists && !ioc_exists)
__slc_line_op(start, end - start, OP_INV);
#endif
}
void flush_dcache_range(unsigned long start, unsigned long end)
{
#ifdef CONFIG_ISA_ARCV2
if (!ioc_exists)
#endif
__dc_line_op(start, end - start, OP_FLUSH);
#ifdef CONFIG_ISA_ARCV2
if (slc_exists && !ioc_exists)
__slc_line_op(start, end - start, OP_FLUSH);
#endif
}
void flush_cache(unsigned long start, unsigned long size)
{
flush_dcache_range(start, start + size);
}
void invalidate_dcache_all(void)
{
__dc_entire_op(OP_INV);
#ifdef CONFIG_ISA_ARCV2
if (slc_exists)
__slc_entire_op(OP_INV);
#endif
}
void flush_dcache_all(void)
{
__dc_entire_op(OP_FLUSH);
#ifdef CONFIG_ISA_ARCV2
if (slc_exists)
__slc_entire_op(OP_FLUSH);
#endif
}
| {
"pile_set_name": "Github"
} |
<h1><%= t(".header") %></h1>
<%= monologue_admin_form_for [:admin, @user], method: :put do |user| %>
<%= render :partial => 'form', :locals => { :user => user } %>
<%= user.submit t(".save"), class: "btn btn-large btn-primary" %>
<% end %> | {
"pile_set_name": "Github"
} |
---
tags: [ecosystem]
---
# Systems Programming
## Number Types
* [ocplib-endian](https://github.com/OCamlPro/ocplib-endian):
Read and write all sizes of integers, both big and little endian, from [Bigarrays](bigarray.md), strings and bytes.
* [Bytes](https://caml.inria.fr/pub/docs/manual-ocaml/libref/Bytes.html):
The standard library has functions to read little endian and big endian numbers of different sizes,
both signed and unsigned, from bytes (see the short sketch after this list).
* [Integers](https://github.com/ocamllabs/ocaml-integers):
A library for support of unsigned and signed types of different sizes.
* [StdInt](https://github.com/andrenth/ocaml-stdint):
More comprehensive library for manipulation of unsigned and signed types of different sizes.
Includes the ability to read big endian and little endian numbers.
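A minimal sketch of the standard-library option mentioned above (assumes OCaml >= 4.08, where `Bytes.get_int32_le` and friends were added):
```ocaml
(* Read fixed-size integers out of a byte buffer with the Stdlib Bytes module. *)
let () =
  let b = Bytes.of_string "\x01\x00\x00\x00\xff\xff" in
  let n = Bytes.get_int32_le b 0 in  (* little-endian 32-bit read -> 1l *)
  let u = Bytes.get_uint16_be b 4 in (* big-endian unsigned 16-bit read -> 65535 *)
  Printf.printf "%ld %d\n" n u
```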
## Mirage OS
* [Mirage OS](https://github.com/mirage/mirage):
Unikernel-based operating system written in OCaml.
It serves as a programming framework for constructing secure,
high-performance network applications across a variety of cloud computing and mobile platforms.
* [arXiv paper on Mirage OS](https://arxiv.org/pdf/1905.02529.pdf)
* [mirage-skeleton](https://github.com/mirage/mirage-skeleton):
Examples of simple Mirage apps.
* [ICFP Presentation on Mirage and Irmin](https://www.youtube.com/watch?v=nUJYGFJDVVo)
* [mirage-tcpip](https://github.com/mirage/mirage-tcpip):
Pure OCaml TCPIP stack.
* [Irmin](https://github.com/mirage/irmin):
Git-like distributed database written in OCaml.
* [Irmin](https://ani003.github.io/irmin/2020/08/13/irmin-containers-intro.html):
Irmin containers.
* [ocaml-fat](https://github.com/mirage/ocaml-fat):
Read and write FAT format filesystems from OCaml.
* [ocaml-git](https://github.com/mirage/ocaml-git):
Pure OCaml low-level git bindings.
* [ocaml-vchan](https://github.com/mirage/ocaml-vchan):
Pure OCaml implementation of the "vchan" shared-memory communication protocol.
## Mirage OS Applications
* [Canopy](https://github.com/Engil/Canopy): A blogging MirageOS unikernel based on git.
Can be compiled to Unix as well.
## Embedded Programming
* [OMicroB](https://github.com/stevenvar/OMicroB):
OCaml virtual machine for devices with limited resources such as microcontrollers.
Based on [this paper](http://hal.upmc.fr/hal-01705825/document).
* [OCaPIC](https://github.com/bvaugon/ocapic):
OCaml for PIC microcontrollers.
## Lower-level OS Libraries
* [kqueue](https://github.com/anuragsoni/kqueue-ml/):
Bindings to the [kqueue](https://en.wikipedia.org/wiki/Kqueue) event notification interface in BSD-style systems.
## Articles
* [ocamlunix](https://ocaml.github.io/ocamlunix/ocamlunix.html):
Great introduction to systems programming in OCaml
| {
"pile_set_name": "Github"
} |
<!doctype html>
<title>CodeMirror: Shell mode</title>
<meta charset="utf-8"/>
<link rel=stylesheet href="../../doc/docs.css">
<link rel=stylesheet href=../../lib/codemirror.css>
<script src=../../lib/codemirror.js></script>
<script src="../../addon/edit/matchbrackets.js"></script>
<script src=shell.js></script>
<style type=text/css>
.CodeMirror {border-top: 1px solid black; border-bottom: 1px solid black;}
</style>
<div id=nav>
<a href="https://codemirror.net"><h1>CodeMirror</h1><img id=logo src="../../doc/logo.png"></a>
<ul>
<li><a href="../../index.html">Home</a>
<li><a href="../../doc/manual.html">Manual</a>
<li><a href="https://github.com/codemirror/codemirror">Code</a>
</ul>
<ul>
<li><a href="../index.html">Language modes</a>
<li><a class=active href="#">Shell</a>
</ul>
</div>
<article>
<h2>Shell mode</h2>
<textarea id=code>
#!/bin/bash
# clone the repository
git clone http://github.com/garden/tree
# generate HTTPS credentials
cd tree
openssl genrsa -aes256 -out https.key 1024
openssl req -new -nodes -key https.key -out https.csr
openssl x509 -req -days 365 -in https.csr -signkey https.key -out https.crt
cp https.key{,.orig}
openssl rsa -in https.key.orig -out https.key
# start the server in HTTPS mode
cd web
sudo node ../server.js 443 'yes' >> ../node.log &
# here is how to stop the server
for pid in `ps aux | grep 'node ../server.js' | awk '{print $2}'` ; do
sudo kill -9 $pid 2> /dev/null
done
exit 0</textarea>
<script>
var editor = CodeMirror.fromTextArea(document.getElementById('code'), {
mode: 'shell',
lineNumbers: true,
matchBrackets: true
});
</script>
<p><strong>MIME types defined:</strong> <code>text/x-sh</code>, <code>application/x-sh</code>.</p>
</article>
| {
"pile_set_name": "Github"
} |
#include <algorithm>
#include <vector>
#include "test_framework/generic_test.h"
#include "test_framework/test_failure.h"
#include "test_framework/timed_executor.h"
using std::vector;
vector<int> GrayCode(int num_bits) {
// TODO - you fill in here.
return {};
}
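// Returns true iff x and y differ in exactly one bit: the XOR isolates the
// differing bits, and a nonzero value v has exactly one set bit (i.e. is a
// power of two) precisely when (v & (v - 1)) == 0.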
bool DiffersByOneBit(int x, int y) {
int bit_difference = x ^ y;
return bit_difference && !(bit_difference & (bit_difference - 1));
}
void GrayCodeWrapper(TimedExecutor& executor, int num_bits) {
vector<int> result = executor.Run([&] { return GrayCode(num_bits); });
int expected_size = (1 << num_bits);
if (result.size() != expected_size) {
throw TestFailure("Length mismatch: expected " +
std::to_string(expected_size) + ", got " +
std::to_string(result.size()));
}
for (int i = 1; i < result.size(); i++)
if (!DiffersByOneBit(result[i - 1], result[i])) {
if (result[i - 1] == result[i]) {
throw TestFailure("Two adjacent entries are equal");
} else {
throw TestFailure("Two adjacent entries differ by more than 1 bit");
}
}
std::sort(begin(result), end(result));
auto uniq = std::unique(begin(result), end(result));
if (uniq != end(result)) {
throw TestFailure("Not all entries are distinct: found " +
std::to_string(std::distance(uniq, end(result))) +
" duplicates");
}
}
int main(int argc, char* argv[]) {
std::vector<std::string> args{argv + 1, argv + argc};
std::vector<std::string> param_names{"executor", "num_bits"};
return GenericTestMain(args, "gray_code.cc", "gray_code.tsv",
&GrayCodeWrapper, DefaultComparator{}, param_names);
}
| {
"pile_set_name": "Github"
} |
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &100000
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
serializedVersion: 3
m_Component:
- 4: {fileID: 400000}
- 33: {fileID: 3300000}
- 23: {fileID: 2300000}
m_Layer: 0
m_Name: Orc Bracers
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 0
--- !u!1002 &100001
EditorExtensionImpl:
serializedVersion: 6
--- !u!4 &400000
Transform:
m_ObjectHideFlags: 1
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
m_GameObject: {fileID: 100000}
m_LocalRotation: {x: 0, y: 0, z: 0, w: 1}
m_LocalPosition: {x: 0, y: 0, z: 0}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
--- !u!1002 &400001
EditorExtensionImpl:
serializedVersion: 6
--- !u!23 &2300000
Renderer:
m_ObjectHideFlags: 1
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
m_GameObject: {fileID: 100000}
m_Enabled: 1
m_CastShadows: 1
m_ReceiveShadows: 1
m_LightmapIndex: 255
m_LightmapTilingOffset: {x: 1, y: 1, z: 0, w: 0}
m_Materials:
- {fileID: 2100000, guid: 67ee969cd76de204fabb3af9227ad2f3, type: 2}
m_SubsetIndices:
m_StaticBatchRoot: {fileID: 0}
m_UseLightProbes: 0
m_LightProbeAnchor: {fileID: 0}
m_ScaleInLightmap: 1
--- !u!1002 &2300001
EditorExtensionImpl:
serializedVersion: 6
--- !u!33 &3300000
MeshFilter:
m_ObjectHideFlags: 1
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
m_GameObject: {fileID: 100000}
m_Mesh: {fileID: 4300000, guid: 04c500d6ebb8403459ad4e44d4a35328, type: 3}
--- !u!1002 &3300001
EditorExtensionImpl:
serializedVersion: 6
--- !u!1001 &100100000
Prefab:
m_ObjectHideFlags: 1
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications: []
m_RemovedComponents: []
m_ParentPrefab: {fileID: 0}
m_RootGameObject: {fileID: 100000}
m_IsPrefabParent: 1
m_IsExploded: 1
--- !u!1002 &100100001
EditorExtensionImpl:
serializedVersion: 6
| {
"pile_set_name": "Github"
} |
# CONFIG_32BIT is not set
CONFIG_64BIT=y
CONFIG_64BIT_PHYS_ADDR=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_ARCH_HAVE_CUSTOM_GPIO_H=y
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_BINFMT_ELF32=y
CONFIG_BINFMT_MISC=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_BLOCK_COMPAT=y
CONFIG_BOOT_ELF32=y
CONFIG_CEVT_R4K=y
CONFIG_CLONE_BACKWARDS=y
CONFIG_COMPAT=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_CPU_BIG_ENDIAN=y
CONFIG_CPU_GENERIC_DUMP_TLB=y
CONFIG_CPU_HAS_PREFETCH=y
CONFIG_CPU_HAS_SYNC=y
# CONFIG_CPU_LITTLE_ENDIAN is not set
CONFIG_CPU_MIPSR2=y
CONFIG_CPU_R4K_CACHE_TLB=y
CONFIG_CPU_R4K_FPU=y
CONFIG_CPU_RMAP=y
CONFIG_CPU_SUPPORTS_32BIT_KERNEL=y
CONFIG_CPU_SUPPORTS_64BIT_KERNEL=y
CONFIG_CPU_SUPPORTS_HIGHMEM=y
CONFIG_CSRC_R4K=y
# CONFIG_CYCLADES is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_DMA_COHERENT=y
CONFIG_DTC=y
CONFIG_E1000E=y
CONFIG_EARLY_PRINTK=y
CONFIG_FS_POSIX_ACL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CMOS_UPDATE=y
CONFIG_GENERIC_IO=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HARDWARE_WATCHPOINTS=y
CONFIG_HAS_DMA=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAVE_64BIT_ALIGNED_ACCESS=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
CONFIG_HAVE_CC_STACKPROTECTOR=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_GENERIC_DMA_COHERENT=y
CONFIG_HAVE_IDE=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_HAVE_NET_DSA=y
CONFIG_HAVE_OPROFILE=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HW_HAS_PCI=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_OCORES=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_IOMMU_HELPER=y
CONFIG_IRQCHIP=y
CONFIG_IRQ_CPU=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_IRQ_WORK=y
# CONFIG_ISI is not set
CONFIG_MIPS=y
CONFIG_MIPS32_COMPAT=y
CONFIG_MIPS32_N32=y
CONFIG_MIPS32_O32=y
# CONFIG_MIPS_HUGE_TLB_SUPPORT is not set
CONFIG_MIPS_L1_CACHE_SHIFT=5
# CONFIG_MIPS_MACHINE is not set
CONFIG_MIPS_MT_DISABLED=y
CONFIG_MODULES_USE_ELF_REL=y
CONFIG_MODULES_USE_ELF_RELA=y
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_MTD_CFI_ADV_OPTIONS=y
CONFIG_MTD_CFI_GEOMETRY=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_PHYSMAP=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NET_FLOW_LIMIT=y
CONFIG_NLM_COMMON=y
# CONFIG_NLM_MULTINODE is not set
CONFIG_NO_GENERIC_PCI_IOPORT_MAP=y
CONFIG_NO_HZ=y
CONFIG_NO_HZ_COMMON=y
CONFIG_NO_HZ_IDLE=y
CONFIG_NR_CPUS=32
CONFIG_NR_CPUS_DEFAULT_32=y
# CONFIG_N_HDLC is not set
CONFIG_OF=y
CONFIG_OF_ADDRESS=y
CONFIG_OF_EARLY_FLATTREE=y
CONFIG_OF_FLATTREE=y
CONFIG_OF_IRQ=y
CONFIG_OF_MTD=y
CONFIG_OF_NET=y
CONFIG_OF_PCI=y
CONFIG_OF_PCI_IRQ=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_PCI=y
CONFIG_PCI_DEBUG=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCI_REALLOC_ENABLE_AUTO=y
CONFIG_PCI_STUB=y
CONFIG_PERF_USE_VMALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_PM=y
# CONFIG_PM_ADVANCED_DEBUG is not set
CONFIG_PM_DEBUG=y
CONFIG_PM_RUNTIME=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_PPS=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_PROC_KCORE=y
CONFIG_PTP_1588_CLOCK=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RFS_ACCEL=y
# CONFIG_ROCKETPORT is not set
CONFIG_RPS=y
# CONFIG_SCSI_DMA is not set
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_NR_UARTS=48
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_SMP=y
CONFIG_STOP_MACHINE=y
CONFIG_SWIOTLB=y
# CONFIG_SYNCLINKMP is not set
# CONFIG_SYNCLINK_GT is not set
CONFIG_SYNC_R4K=y
CONFIG_SYSV68_PARTITION=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_SYS_HAS_EARLY_PRINTK=y
CONFIG_SYS_SUPPORTS_32BIT_KERNEL=y
CONFIG_SYS_SUPPORTS_64BIT_KERNEL=y
CONFIG_SYS_SUPPORTS_ARBIT_HZ=y
CONFIG_SYS_SUPPORTS_BIG_ENDIAN=y
CONFIG_SYS_SUPPORTS_HIGHMEM=y
CONFIG_SYS_SUPPORTS_LITTLE_ENDIAN=y
CONFIG_SYS_SUPPORTS_SMP=y
CONFIG_SYS_SUPPORTS_ZBOOT=y
CONFIG_SYS_SUPPORTS_ZBOOT_UART16550=y
CONFIG_THERMAL=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
# CONFIG_THERMAL_GOV_USER_SPACE is not set
CONFIG_THERMAL_OF=y
CONFIG_TICK_CPU_ACCOUNTING=y
CONFIG_TIMER_STATS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TREE_RCU=y
CONFIG_USB_SUPPORT=y
CONFIG_USE_OF=y
CONFIG_WEAK_ORDERING=y
CONFIG_WEAK_REORDERING_BEYOND_LLSC=y
CONFIG_XPS=y
CONFIG_ZONE_DMA32=y
CONFIG_ZONE_DMA_FLAG=0
| {
"pile_set_name": "Github"
} |
/*
****************************************************************************
* (C) 2003 - Rolf Neugebauer - Intel Research Cambridge
* (C) 2005 - Grzegorz Milos - Intel Research Cambridge
****************************************************************************
*
* File: mm.c
* Author: Rolf Neugebauer ([email protected])
* Changes: Grzegorz Milos
*
* Date: Aug 2003, changes Aug 2005
*
* Environment: Xen Minimal OS
* Description: memory management related functions
* contains buddy page allocator from Xen.
*
****************************************************************************
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <mini-os/os.h>
#include <mini-os/hypervisor.h>
#include <xen/memory.h>
#include <mini-os/mm.h>
#include <mini-os/types.h>
#include <mini-os/lib.h>
#include <bmk-core/platform.h>
#include <bmk-core/pgalloc.h>
#include <bmk-core/string.h>
#ifdef MM_DEBUG
#define DEBUG(_f, _a...) \
minios_printk("MINI_OS(file=mm.c, line=%d) " _f "\n", __LINE__, ## _a)
#else
#define DEBUG(_f, _a...) ((void)0)
#endif
int free_physical_pages(xen_pfn_t *mfns, int n)
{
struct xen_memory_reservation reservation;
set_xen_guest_handle(reservation.extent_start, mfns);
reservation.nr_extents = n;
reservation.extent_order = 0;
reservation.domid = DOMID_SELF;
return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
}
void init_mm(void)
{
unsigned long start_pfn, max_pfn;
void *vastartpage, *vaendpage;
minios_printk("MM: Init\n");
arch_init_mm(&start_pfn, &max_pfn);
/*
* now we can initialise the page allocator
*/
vastartpage = to_virt(PFN_PHYS(start_pfn));
vaendpage = to_virt(PFN_PHYS(max_pfn));
minios_printk("MM: Initialise page allocator for %lx(%lx)-%lx(%lx)\n",
(u_long)vastartpage, PFN_PHYS(start_pfn),
(u_long)vaendpage, PFN_PHYS(max_pfn));
bmk_pgalloc_loadmem((u_long)vastartpage, (u_long)vaendpage);
minios_printk("MM: done\n");
bmk_memsize = PFN_PHYS(max_pfn) - PFN_PHYS(start_pfn);
arch_init_p2m(max_pfn);
arch_init_demand_mapping_area(max_pfn);
}
void fini_mm(void)
{
}
unsigned long long minios_get_l1prot(void)
{
return L1_PROT;
}
| {
"pile_set_name": "Github"
} |
// Windows/ErrorMsg.h
#ifndef __WINDOWS_ERROR_MSG_H
#define __WINDOWS_ERROR_MSG_H
#include "../Common/MyString.h"
namespace NWindows {
namespace NError {
UString MyFormatMessage(DWORD errorCode);
}}
#endif
| {
"pile_set_name": "Github"
} |
class AppDelegate: UIResponder, UIApplicationDelegate {
var managedObjectModel: NSManagedObjectModel {
if !_managedObjectModel {
let modelURL = NSBundle.mainBundle().URLForResource("AstronomyPicture", withExtension: "momd")
_managedObjectModel = NSManagedObjectModel(contentsOfURL: modelURL)
}
return _managedObjectModel!
}
}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
class AppDelegate: UIResponder, UIApplicationDelegate {
var managedObjectModel: NSManagedObjectModel {
if !_managedObjectModel {
let modelURL = NSBundle.mainBundle().URLForResource("AstronomyPicture", withExtension: "momd")
_managedObjectModel = NSManagedObjectModel(contentsOfURL: modelURL)
}
return _managedObjectModel!
}
} | {
"pile_set_name": "Github"
} |
#
# Module providing the `Pool` class for managing a process pool
#
# multiprocessing/pool.py
#
# Copyright (c) 2006-2008, R Oudkerk
# Licensed to PSF under a Contributor Agreement.
#
__all__ = ['Pool', 'ThreadPool']
#
# Imports
#
import threading
import queue
import itertools
import collections
import os
import time
import traceback
# If threading is available then ThreadPool should be provided. Therefore
# we avoid top-level imports which are liable to fail on some systems.
from . import util
from . import get_context, TimeoutError
#
# Constants representing the state of a pool
#
RUN = 0
CLOSE = 1
TERMINATE = 2
#
# Miscellaneous
#
job_counter = itertools.count()
def mapstar(args):
return list(map(*args))
def starmapstar(args):
return list(itertools.starmap(args[0], args[1]))
#
# Hack to embed stringification of remote traceback in local traceback
#
class RemoteTraceback(Exception):
def __init__(self, tb):
self.tb = tb
def __str__(self):
return self.tb
class ExceptionWithTraceback:
def __init__(self, exc, tb):
tb = traceback.format_exception(type(exc), exc, tb)
tb = ''.join(tb)
self.exc = exc
self.tb = '\n"""\n%s"""' % tb
def __reduce__(self):
return rebuild_exc, (self.exc, self.tb)
def rebuild_exc(exc, tb):
exc.__cause__ = RemoteTraceback(tb)
return exc
#
# Code run by worker processes
#
class MaybeEncodingError(Exception):
"""Wraps possible unpickleable errors, so they can be
safely sent through the socket."""
def __init__(self, exc, value):
self.exc = repr(exc)
self.value = repr(value)
super(MaybeEncodingError, self).__init__(self.exc, self.value)
def __str__(self):
return "Error sending result: '%s'. Reason: '%s'" % (self.value,
self.exc)
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self)
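# worker() is the loop executed inside each pool process: it pulls
# (job, index, func, args, kwds) tuples from inqueue, runs func, and puts a
# (success, value) pair back on outqueue; with maxtasks set it exits after
# that many tasks so the parent can recycle the process (maxtasksperchild).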
def worker(inqueue, outqueue, initializer=None, initargs=(), maxtasks=None,
wrap_exception=False):
if (maxtasks is not None) and not (isinstance(maxtasks, int)
and maxtasks >= 1):
raise AssertionError("Maxtasks {!r} is not valid".format(maxtasks))
put = outqueue.put
get = inqueue.get
if hasattr(inqueue, '_writer'):
inqueue._writer.close()
outqueue._reader.close()
if initializer is not None:
initializer(*initargs)
completed = 0
while maxtasks is None or (maxtasks and completed < maxtasks):
try:
task = get()
except (EOFError, OSError):
util.debug('worker got EOFError or OSError -- exiting')
break
if task is None:
util.debug('worker got sentinel -- exiting')
break
job, i, func, args, kwds = task
try:
result = (True, func(*args, **kwds))
except Exception as e:
if wrap_exception and func is not _helper_reraises_exception:
e = ExceptionWithTraceback(e, e.__traceback__)
result = (False, e)
try:
put((job, i, result))
except Exception as e:
wrapped = MaybeEncodingError(e, result[1])
util.debug("Possible encoding error while sending result: %s" % (
wrapped))
put((job, i, (False, wrapped)))
task = job = result = func = args = kwds = None
completed += 1
util.debug('worker exiting after %d tasks' % completed)
def _helper_reraises_exception(ex):
'Pickle-able helper function for use by _guarded_task_generation.'
raise ex
#
# Class representing a process pool
#
class Pool(object):
'''
Class which supports an async version of applying functions to arguments.
'''
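# Illustrative usage sketch (not part of the implementation): the mapped
# function must be a picklable, module-level callable, and on start methods
# such as "spawn" the call should sit under an `if __name__ == '__main__':`
# guard.
#
#     from multiprocessing import Pool
#
#     def square(x):
#         return x * x
#
#     if __name__ == '__main__':
#         with Pool(processes=4) as pool:
#             print(pool.map(square, range(10)))  # [0, 1, 4, ..., 81]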
_wrap_exception = True
def Process(self, *args, **kwds):
return self._ctx.Process(*args, **kwds)
def __init__(self, processes=None, initializer=None, initargs=(),
maxtasksperchild=None, context=None):
self._ctx = context or get_context()
self._setup_queues()
self._taskqueue = queue.SimpleQueue()
self._cache = {}
self._state = RUN
self._maxtasksperchild = maxtasksperchild
self._initializer = initializer
self._initargs = initargs
if processes is None:
processes = os.cpu_count() or 1
if processes < 1:
raise ValueError("Number of processes must be at least 1")
if initializer is not None and not callable(initializer):
raise TypeError('initializer must be a callable')
self._processes = processes
self._pool = []
self._repopulate_pool()
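# Three daemon threads coordinate the pool: _handle_workers keeps the worker
# processes topped up, _handle_tasks feeds tasks from _taskqueue into the
# shared inqueue, and _handle_results routes results from outqueue back to
# the matching ApplyResult/MapResult objects held in self._cache.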
self._worker_handler = threading.Thread(
target=Pool._handle_workers,
args=(self, )
)
self._worker_handler.daemon = True
self._worker_handler._state = RUN
self._worker_handler.start()
self._task_handler = threading.Thread(
target=Pool._handle_tasks,
args=(self._taskqueue, self._quick_put, self._outqueue,
self._pool, self._cache)
)
self._task_handler.daemon = True
self._task_handler._state = RUN
self._task_handler.start()
self._result_handler = threading.Thread(
target=Pool._handle_results,
args=(self._outqueue, self._quick_get, self._cache)
)
self._result_handler.daemon = True
self._result_handler._state = RUN
self._result_handler.start()
self._terminate = util.Finalize(
self, self._terminate_pool,
args=(self._taskqueue, self._inqueue, self._outqueue, self._pool,
self._worker_handler, self._task_handler,
self._result_handler, self._cache),
exitpriority=15
)
def _join_exited_workers(self):
"""Cleanup after any worker processes which have exited due to reaching
their specified lifetime. Returns True if any workers were cleaned up.
"""
cleaned = False
for i in reversed(range(len(self._pool))):
worker = self._pool[i]
if worker.exitcode is not None:
# worker exited
util.debug('cleaning up worker %d' % i)
worker.join()
cleaned = True
del self._pool[i]
return cleaned
def _repopulate_pool(self):
"""Bring the number of pool processes up to the specified number,
for use after reaping workers which have exited.
"""
for i in range(self._processes - len(self._pool)):
w = self.Process(target=worker,
args=(self._inqueue, self._outqueue,
self._initializer,
self._initargs, self._maxtasksperchild,
self._wrap_exception)
)
self._pool.append(w)
w.name = w.name.replace('Process', 'PoolWorker')
w.daemon = True
w.start()
util.debug('added worker')
def _maintain_pool(self):
"""Clean up any exited workers and start replacements for them.
"""
if self._join_exited_workers():
self._repopulate_pool()
def _setup_queues(self):
self._inqueue = self._ctx.SimpleQueue()
self._outqueue = self._ctx.SimpleQueue()
self._quick_put = self._inqueue._writer.send
self._quick_get = self._outqueue._reader.recv
def apply(self, func, args=(), kwds={}):
'''
Equivalent of `func(*args, **kwds)`.
Pool must be running.
'''
return self.apply_async(func, args, kwds).get()
def map(self, func, iterable, chunksize=None):
'''
Apply `func` to each element in `iterable`, collecting the results
in a list that is returned.
'''
return self._map_async(func, iterable, mapstar, chunksize).get()
def starmap(self, func, iterable, chunksize=None):
'''
Like `map()` method but the elements of the `iterable` are expected to
be iterables as well and will be unpacked as arguments. Hence
`func` and (a, b) becomes func(a, b).
'''
return self._map_async(func, iterable, starmapstar, chunksize).get()
def starmap_async(self, func, iterable, chunksize=None, callback=None,
error_callback=None):
'''
Asynchronous version of `starmap()` method.
'''
return self._map_async(func, iterable, starmapstar, chunksize,
callback, error_callback)
def _guarded_task_generation(self, result_job, func, iterable):
'''Provides a generator of tasks for imap and imap_unordered with
appropriate handling for iterables which throw exceptions during
iteration.'''
try:
i = -1
for i, x in enumerate(iterable):
yield (result_job, i, func, (x,), {})
except Exception as e:
yield (result_job, i+1, _helper_reraises_exception, (e,), {})
def imap(self, func, iterable, chunksize=1):
'''
Equivalent of `map()` -- can be MUCH slower than `Pool.map()`.
'''
if self._state != RUN:
raise ValueError("Pool not running")
if chunksize == 1:
result = IMapIterator(self._cache)
self._taskqueue.put(
(
self._guarded_task_generation(result._job, func, iterable),
result._set_length
))
return result
else:
if chunksize < 1:
raise ValueError(
"Chunksize must be 1+, not {0:n}".format(
chunksize))
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = IMapIterator(self._cache)
self._taskqueue.put(
(
self._guarded_task_generation(result._job,
mapstar,
task_batches),
result._set_length
))
return (item for chunk in result for item in chunk)
def imap_unordered(self, func, iterable, chunksize=1):
'''
Like `imap()` method but ordering of results is arbitrary.
'''
if self._state != RUN:
raise ValueError("Pool not running")
if chunksize == 1:
result = IMapUnorderedIterator(self._cache)
self._taskqueue.put(
(
self._guarded_task_generation(result._job, func, iterable),
result._set_length
))
return result
else:
if chunksize < 1:
raise ValueError(
"Chunksize must be 1+, not {0!r}".format(chunksize))
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = IMapUnorderedIterator(self._cache)
self._taskqueue.put(
(
self._guarded_task_generation(result._job,
mapstar,
task_batches),
result._set_length
))
return (item for chunk in result for item in chunk)
def apply_async(self, func, args=(), kwds={}, callback=None,
error_callback=None):
'''
Asynchronous version of `apply()` method.
'''
if self._state != RUN:
raise ValueError("Pool not running")
result = ApplyResult(self._cache, callback, error_callback)
self._taskqueue.put(([(result._job, 0, func, args, kwds)], None))
return result
def map_async(self, func, iterable, chunksize=None, callback=None,
error_callback=None):
'''
Asynchronous version of `map()` method.
'''
return self._map_async(func, iterable, mapstar, chunksize, callback,
error_callback)
def _map_async(self, func, iterable, mapper, chunksize=None, callback=None,
error_callback=None):
'''
Helper function to implement map, starmap and their async counterparts.
'''
if self._state != RUN:
raise ValueError("Pool not running")
if not hasattr(iterable, '__len__'):
iterable = list(iterable)
if chunksize is None:
chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
if extra:
chunksize += 1
if len(iterable) == 0:
chunksize = 0
task_batches = Pool._get_tasks(func, iterable, chunksize)
result = MapResult(self._cache, chunksize, len(iterable), callback,
error_callback=error_callback)
self._taskqueue.put(
(
self._guarded_task_generation(result._job,
mapper,
task_batches),
None
)
)
return result
@staticmethod
def _handle_workers(pool):
thread = threading.current_thread()
# Keep maintaining workers until the cache gets drained, unless the pool
# is terminated.
while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
pool._maintain_pool()
time.sleep(0.1)
# send sentinel to stop workers
pool._taskqueue.put(None)
util.debug('worker handler exiting')
@staticmethod
def _handle_tasks(taskqueue, put, outqueue, pool, cache):
thread = threading.current_thread()
for taskseq, set_length in iter(taskqueue.get, None):
task = None
try:
# iterating taskseq cannot fail
for task in taskseq:
if thread._state:
util.debug('task handler found thread._state != RUN')
break
try:
put(task)
except Exception as e:
job, idx = task[:2]
try:
cache[job]._set(idx, (False, e))
except KeyError:
pass
else:
if set_length:
util.debug('doing set_length()')
idx = task[1] if task else -1
set_length(idx + 1)
continue
break
finally:
task = taskseq = job = None
else:
util.debug('task handler got sentinel')
try:
# tell result handler to finish when cache is empty
util.debug('task handler sending sentinel to result handler')
outqueue.put(None)
# tell workers there is no more work
util.debug('task handler sending sentinel to workers')
for p in pool:
put(None)
except OSError:
util.debug('task handler got OSError when sending sentinels')
util.debug('task handler exiting')
@staticmethod
def _handle_results(outqueue, get, cache):
thread = threading.current_thread()
while 1:
try:
task = get()
except (OSError, EOFError):
util.debug('result handler got EOFError/OSError -- exiting')
return
if thread._state:
assert thread._state == TERMINATE, "Thread not in TERMINATE"
util.debug('result handler found thread._state=TERMINATE')
break
if task is None:
util.debug('result handler got sentinel')
break
job, i, obj = task
try:
cache[job]._set(i, obj)
except KeyError:
pass
task = job = obj = None
while cache and thread._state != TERMINATE:
try:
task = get()
except (OSError, EOFError):
util.debug('result handler got EOFError/OSError -- exiting')
return
if task is None:
util.debug('result handler ignoring extra sentinel')
continue
job, i, obj = task
try:
cache[job]._set(i, obj)
except KeyError:
pass
task = job = obj = None
if hasattr(outqueue, '_reader'):
util.debug('ensuring that outqueue is not full')
# If we don't make room available in outqueue then
# attempts to add the sentinel (None) to outqueue may
# block. There is guaranteed to be no more than 2 sentinels.
try:
for i in range(10):
if not outqueue._reader.poll():
break
get()
except (OSError, EOFError):
pass
util.debug('result handler exiting: len(cache)=%s, thread._state=%s',
len(cache), thread._state)
@staticmethod
def _get_tasks(func, it, size):
it = iter(it)
while 1:
x = tuple(itertools.islice(it, size))
if not x:
return
yield (func, x)
def __reduce__(self):
raise NotImplementedError(
'pool objects cannot be passed between processes or pickled'
)
def close(self):
util.debug('closing pool')
if self._state == RUN:
self._state = CLOSE
self._worker_handler._state = CLOSE
def terminate(self):
util.debug('terminating pool')
self._state = TERMINATE
self._worker_handler._state = TERMINATE
self._terminate()
def join(self):
util.debug('joining pool')
if self._state == RUN:
raise ValueError("Pool is still running")
elif self._state not in (CLOSE, TERMINATE):
raise ValueError("In unknown state")
self._worker_handler.join()
self._task_handler.join()
self._result_handler.join()
for p in self._pool:
p.join()
@staticmethod
def _help_stuff_finish(inqueue, task_handler, size):
# task_handler may be blocked trying to put items on inqueue
util.debug('removing tasks from inqueue until task handler finished')
inqueue._rlock.acquire()
while task_handler.is_alive() and inqueue._reader.poll():
inqueue._reader.recv()
time.sleep(0)
@classmethod
def _terminate_pool(cls, taskqueue, inqueue, outqueue, pool,
worker_handler, task_handler, result_handler, cache):
# this is guaranteed to only be called once
util.debug('finalizing pool')
worker_handler._state = TERMINATE
task_handler._state = TERMINATE
util.debug('helping task handler/workers to finish')
cls._help_stuff_finish(inqueue, task_handler, len(pool))
if (not result_handler.is_alive()) and (len(cache) != 0):
raise AssertionError(
"Cannot have cache with result_hander not alive")
result_handler._state = TERMINATE
outqueue.put(None) # sentinel
# We must wait for the worker handler to exit before terminating
# workers because we don't want workers to be restarted behind our back.
util.debug('joining worker handler')
if threading.current_thread() is not worker_handler:
worker_handler.join()
# Terminate workers which haven't already finished.
if pool and hasattr(pool[0], 'terminate'):
util.debug('terminating workers')
for p in pool:
if p.exitcode is None:
p.terminate()
util.debug('joining task handler')
if threading.current_thread() is not task_handler:
task_handler.join()
util.debug('joining result handler')
if threading.current_thread() is not result_handler:
result_handler.join()
if pool and hasattr(pool[0], 'terminate'):
util.debug('joining pool workers')
for p in pool:
if p.is_alive():
# worker has not yet exited
util.debug('cleaning up worker %d' % p.pid)
p.join()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.terminate()
#
# Class whose instances are returned by `Pool.apply_async()`
#
class ApplyResult(object):
def __init__(self, cache, callback, error_callback):
self._event = threading.Event()
self._job = next(job_counter)
self._cache = cache
self._callback = callback
self._error_callback = error_callback
cache[self._job] = self
def ready(self):
return self._event.is_set()
def successful(self):
if not self.ready():
raise ValueError("{0!r} not ready".format(self))
return self._success
def wait(self, timeout=None):
self._event.wait(timeout)
def get(self, timeout=None):
self.wait(timeout)
if not self.ready():
raise TimeoutError
if self._success:
return self._value
else:
raise self._value
def _set(self, i, obj):
self._success, self._value = obj
if self._callback and self._success:
self._callback(self._value)
if self._error_callback and not self._success:
self._error_callback(self._value)
self._event.set()
del self._cache[self._job]
AsyncResult = ApplyResult # create alias -- see #17805
#
# Class whose instances are returned by `Pool.map_async()`
#
class MapResult(ApplyResult):
def __init__(self, cache, chunksize, length, callback, error_callback):
ApplyResult.__init__(self, cache, callback,
error_callback=error_callback)
self._success = True
self._value = [None] * length
self._chunksize = chunksize
if chunksize <= 0:
self._number_left = 0
self._event.set()
del cache[self._job]
else:
self._number_left = length//chunksize + bool(length % chunksize)
def _set(self, i, success_result):
self._number_left -= 1
success, result = success_result
if success and self._success:
self._value[i*self._chunksize:(i+1)*self._chunksize] = result
if self._number_left == 0:
if self._callback:
self._callback(self._value)
del self._cache[self._job]
self._event.set()
else:
if not success and self._success:
# only store first exception
self._success = False
self._value = result
if self._number_left == 0:
# only consider the result ready once all jobs are done
if self._error_callback:
self._error_callback(self._value)
del self._cache[self._job]
self._event.set()
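A single `MapResult` collects every chunk produced by `map_async()`, storing only the first exception if any chunk fails. A sketch of the caller-visible behaviour (public API, illustrative `square` helper):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        res = pool.map_async(square, range(10), chunksize=3)
        print(res.get(timeout=10))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```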
#
# Class whose instances are returned by `Pool.imap()`
#
class IMapIterator(object):
def __init__(self, cache):
self._cond = threading.Condition(threading.Lock())
self._job = next(job_counter)
self._cache = cache
self._items = collections.deque()
self._index = 0
self._length = None
self._unsorted = {}
cache[self._job] = self
def __iter__(self):
return self
def next(self, timeout=None):
with self._cond:
try:
item = self._items.popleft()
except IndexError:
if self._index == self._length:
raise StopIteration from None
self._cond.wait(timeout)
try:
item = self._items.popleft()
except IndexError:
if self._index == self._length:
raise StopIteration from None
raise TimeoutError from None
success, value = item
if success:
return value
raise value
__next__ = next # XXX
def _set(self, i, obj):
with self._cond:
if self._index == i:
self._items.append(obj)
self._index += 1
while self._index in self._unsorted:
obj = self._unsorted.pop(self._index)
self._items.append(obj)
self._index += 1
self._cond.notify()
else:
self._unsorted[i] = obj
if self._index == self._length:
del self._cache[self._job]
def _set_length(self, length):
with self._cond:
self._length = length
if self._index == self._length:
self._cond.notify()
del self._cache[self._job]
#
# Class whose instances are returned by `Pool.imap_unordered()`
#
class IMapUnorderedIterator(IMapIterator):
def _set(self, i, obj):
with self._cond:
self._items.append(obj)
self._index += 1
self._cond.notify()
if self._index == self._length:
del self._cache[self._job]
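The only difference between the two iterators is whether `_set` preserves submission order. A sketch of the visible effect, assuming the public `imap`/`imap_unordered` API (`slow_identity` is illustrative):

```python
from multiprocessing import Pool
import time

def slow_identity(x):
    time.sleep(0.01 * (5 - x))   # later-submitted items tend to finish first
    return x

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(list(pool.imap(slow_identity, range(5))))            # always [0, 1, 2, 3, 4]
        print(list(pool.imap_unordered(slow_identity, range(5))))  # completion order, not necessarily sorted
```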
#
#
#
class ThreadPool(Pool):
_wrap_exception = False
@staticmethod
def Process(*args, **kwds):
from .dummy import Process
return Process(*args, **kwds)
def __init__(self, processes=None, initializer=None, initargs=()):
Pool.__init__(self, processes, initializer, initargs)
def _setup_queues(self):
self._inqueue = queue.SimpleQueue()
self._outqueue = queue.SimpleQueue()
self._quick_put = self._inqueue.put
self._quick_get = self._outqueue.get
@staticmethod
def _help_stuff_finish(inqueue, task_handler, size):
# drain inqueue, and put sentinels at its head to make workers finish
try:
while True:
inqueue.get(block=False)
except queue.Empty:
pass
for i in range(size):
inqueue.put(None)
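`ThreadPool` reuses all of the machinery above but swaps worker processes for `multiprocessing.dummy.Process` threads and plain `queue.SimpleQueue` objects, which makes it convenient for I/O-bound work. A small sketch, assuming the public `multiprocessing.pool.ThreadPool` API (`fake_io` is a stand-in for a blocking call):

```python
from multiprocessing.pool import ThreadPool
import time

def fake_io(task_id):
    time.sleep(0.1)   # stand-in for a blocking network or disk call
    return task_id

with ThreadPool(processes=4) as pool:
    print(pool.map(fake_io, range(8)))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the workers are threads rather than spawned processes, no `if __name__ == "__main__"` guard is needed here.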
#ifndef ULTRALCD_H
#define ULTRALCD_H
#include "Marlin.h"
#ifdef ULTRA_LCD
void lcd_update();
void lcd_init();
void lcd_setstatus(const char* message);
void lcd_setstatuspgm(const char* message);
void lcd_setalertstatuspgm(const char* message);
void lcd_reset_alert_level();
#ifdef DOGLCD
extern int lcd_contrast;
void lcd_setcontrast(uint8_t value);
#endif
static unsigned char blink = 0; // Variable for visualisation of fan rotation in GLCD
#define LCD_MESSAGEPGM(x) lcd_setstatuspgm(PSTR(x))
#define LCD_ALERTMESSAGEPGM(x) lcd_setalertstatuspgm(PSTR(x))
#define LCD_UPDATE_INTERVAL 100
#define LCD_TIMEOUT_TO_STATUS 15000
#ifdef ULTIPANEL
void lcd_buttons_update();
extern volatile uint8_t buttons; //the last checked buttons in a bit array.
#ifdef REPRAPWORLD_KEYPAD
extern volatile uint8_t buttons_reprapworld_keypad; // to store the keypad shiftregister values
#endif
#else
FORCE_INLINE void lcd_buttons_update() {}
#endif
extern int plaPreheatHotendTemp;
extern int plaPreheatHPBTemp;
extern int plaPreheatFanSpeed;
extern int absPreheatHotendTemp;
extern int absPreheatHPBTemp;
extern int absPreheatFanSpeed;
void lcd_buzz(long duration,uint16_t freq);
bool lcd_clicked();
#ifdef NEWPANEL
#define EN_C (1<<BLEN_C)
#define EN_B (1<<BLEN_B)
#define EN_A (1<<BLEN_A)
#define LCD_CLICKED (buttons&EN_C)
#ifdef REPRAPWORLD_KEYPAD
#define EN_REPRAPWORLD_KEYPAD_F3 (1<<BLEN_REPRAPWORLD_KEYPAD_F3)
#define EN_REPRAPWORLD_KEYPAD_F2 (1<<BLEN_REPRAPWORLD_KEYPAD_F2)
#define EN_REPRAPWORLD_KEYPAD_F1 (1<<BLEN_REPRAPWORLD_KEYPAD_F1)
#define EN_REPRAPWORLD_KEYPAD_UP (1<<BLEN_REPRAPWORLD_KEYPAD_UP)
#define EN_REPRAPWORLD_KEYPAD_RIGHT (1<<BLEN_REPRAPWORLD_KEYPAD_RIGHT)
#define EN_REPRAPWORLD_KEYPAD_MIDDLE (1<<BLEN_REPRAPWORLD_KEYPAD_MIDDLE)
#define EN_REPRAPWORLD_KEYPAD_DOWN (1<<BLEN_REPRAPWORLD_KEYPAD_DOWN)
#define EN_REPRAPWORLD_KEYPAD_LEFT (1<<BLEN_REPRAPWORLD_KEYPAD_LEFT)
#define LCD_CLICKED ((buttons&EN_C) || (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_F1))
#define REPRAPWORLD_KEYPAD_MOVE_Z_UP (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_F2)
#define REPRAPWORLD_KEYPAD_MOVE_Z_DOWN (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_F3)
#define REPRAPWORLD_KEYPAD_MOVE_X_LEFT (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_LEFT)
#define REPRAPWORLD_KEYPAD_MOVE_X_RIGHT (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_RIGHT)
#define REPRAPWORLD_KEYPAD_MOVE_Y_DOWN (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_DOWN)
#define REPRAPWORLD_KEYPAD_MOVE_Y_UP (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_UP)
#define REPRAPWORLD_KEYPAD_MOVE_HOME (buttons_reprapworld_keypad&EN_REPRAPWORLD_KEYPAD_MIDDLE)
#endif //REPRAPWORLD_KEYPAD
#else
//automatic, do not change
#define B_LE (1<<BL_LE)
#define B_UP (1<<BL_UP)
#define B_MI (1<<BL_MI)
#define B_DW (1<<BL_DW)
#define B_RI (1<<BL_RI)
#define B_ST (1<<BL_ST)
#define EN_B (1<<BLEN_B)
#define EN_A (1<<BLEN_A)
#define LCD_CLICKED ((buttons&B_MI)||(buttons&B_ST))
#endif//NEWPANEL
#else //no lcd
FORCE_INLINE void lcd_update() {}
FORCE_INLINE void lcd_init() {}
FORCE_INLINE void lcd_setstatus(const char* message) {}
FORCE_INLINE void lcd_buttons_update() {}
FORCE_INLINE void lcd_reset_alert_level() {}
FORCE_INLINE void lcd_buzz(long duration,uint16_t freq) {}
#define LCD_MESSAGEPGM(x)
#define LCD_ALERTMESSAGEPGM(x)
#endif
char *itostr2(const uint8_t &x);
char *itostr31(const int &xx);
char *itostr3(const int &xx);
char *itostr3left(const int &xx);
char *itostr4(const int &xx);
char *ftostr3(const float &x);
char *ftostr31ns(const float &x); // float to string without sign character
char *ftostr31(const float &x);
char *ftostr32(const float &x);
char *ftostr5(const float &x);
char *ftostr51(const float &x);
char *ftostr52(const float &x);
#endif //ULTRALCD
#pragma once
#include "../../ThirdParty/OpenSource/tinyimageformat/tinyimageformat_base.h"
#include "../../ThirdParty/OpenSource/tinyimageformat/tinyimageformat_apis.h"
inline void utils_caps_builder(Renderer* pRenderer)
{
pRenderer->pCapBits = (GPUCapBits*)tf_calloc(1, sizeof(GPUCapBits));
for (uint32_t i = 0; i < TinyImageFormat_Count; ++i)
{
DXGI_FORMAT fmt = (DXGI_FORMAT) TinyImageFormat_ToDXGI_FORMAT((TinyImageFormat)i);
if(fmt == DXGI_FORMAT_UNKNOWN) continue;
UINT formatSupport = 0;
pRenderer->pDxDevice->CheckFormatSupport(fmt, &formatSupport);
pRenderer->pCapBits->canShaderReadFrom[i] = (formatSupport & D3D11_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;
pRenderer->pCapBits->canShaderWriteTo[i] = (formatSupport & D3D11_FORMAT_SUPPORT_TYPED_UNORDERED_ACCESS_VIEW) != 0;
pRenderer->pCapBits->canRenderTargetWriteTo[i] = (formatSupport & D3D11_FORMAT_SUPPORT_RENDER_TARGET) != 0;
}
}
---
title: "Flashing ntrboot (Custom Firmware)"
---
Before proceeding, ensure you have read all of the information on [ntrboot](ntrboot)
Note that the flashcart cannot be used for its standard functions while the ntrboot exploit is installed on it (except for the Acekard 2i, which remains functional *only on 3DS systems with custom firmware installed*). There are optional steps at the end of the ntrboot flashing instructions to remove it from your flashcart when you are done.
Note that in some rare circumstances, the flashing process may **brick** a counterfeit flashcart and render it permanently unusable. This is unlikely, but nevertheless only genuine versions of the listed flashcarts are supported. To reduce the chance of receiving a counterfeit card, it is recommended that you buy your flashcart from a reputable site (such as [NDS Card](http://www.nds-card.com/))
{: .notice--danger}
#### What You Need
* Your ntrboot compatible DS / DSi flashcart:
+ Either the Acekard 2i or R4i Gold 3DS RTS
* Two 3DS family devices
+ **The source 3DS**: the 3DS family device that is already running some kind of custom firmware (such as boot9strap or arm9loaderhax)
+ **The target 3DS**: the device on stock firmware
* The latest release of [boot9strap](https://github.com/SciresM/boot9strap/releases/latest) *(`ntr` boot9strap; not the `devkit` file)*
* The latest release of [ntrboot_flasher](https://github.com/kitling/ntrboot_flasher/releases/latest)
#### Instructions
##### Section I - Prep Work
1. Power off **the source 3DS**
1. Insert **the source 3DS**'s SD card into your computer
1. Create a folder named `ntrboot` on the root of your SD card
1. Copy `boot9strap_ntr.firm` and `boot9strap_ntr.firm.sha` from the boot9strap ntr `.zip` to the `/ntrboot/` folder on your SD card
1. Copy `ntrboot_flasher.firm` from the ntrboot_flasher `.zip` to the `/luma/payloads` folder on **the source 3DS**'s SD card
1. Reinsert **the source 3DS**'s SD card back into **the source 3DS**
1. Insert your ntrboot compatible DS / DSi flashcart into **the source 3DS**
##### Section II - Flashing ntrboot
1. Launch the Luma3DS chainloader by holding (Start) during boot on **the source 3DS**
1. Select "ntrboot_flasher"
1. Select "Dump Flash"
1. Wait for the process to complete
1. Press (B) to return to the main menu
1. Select "Inject Ntrboot"
1. Press (Y) to proceed
1. Press (R) for retail unit ntrboot
1. Wait for the process to complete
1. Press (B) to return to the main menu
1. Select "EXIT" to power off **the source 3DS**
___
Continue to [Installing boot9strap (ntrboot)](installing-boot9strap-(ntrboot))
{: .notice--primary}
class BrewPip < Formula
desc "Install pip packages as homebrew formulas"
homepage "https://github.com/hanxue/brew-pip"
url "https://github.com/hanxue/brew-pip/archive/0.4.1.tar.gz"
sha256 "9049a6db97188560404d8ecad2a7ade72a4be4338d5241097d3e3e8e215cda28"
def install
bin.install "bin/brew-pip"
end
end
import { IDataSet } from '../../src/base/engine';
import { pivot_dataset, pivot_nodata } from '../base/datasource.spec';
import { PivotView } from '../../src/pivotview/base/pivotview';
import { createElement, remove, EmitType, EventHandler, extend } from '@syncfusion/ej2-base';
import { GroupingBar } from '../../src/common/grouping-bar/grouping-bar';
import { FieldList } from '../../src/common/actions/field-list';
import { TreeView } from '@syncfusion/ej2-navigations';
import { Dialog } from '@syncfusion/ej2-popups';
import {
FieldDroppedEventArgs, ColumnRenderEventArgs
} from '../../src/common/base/interface';
import * as util from '../utils.spec';
import { profile, inMB, getMemoryProfile } from '../common.spec';
// Field based sorting option
describe('- PivotGrid with Field based sorting option - ', () => {
beforeAll(() => {
const isDef = (o: any) => o !== undefined && o !== null;
if (!isDef(window.performance)) {
console.log("Unsupported environment, window.performance.memory is unavailable");
this.skip(); //Skips test (in Chai)
return;
}
});
describe('Grouping Bar with injected Module - ', () => {
let pivotGridObj: PivotView;
let elem: HTMLElement = createElement('div', { id: 'PivotGrid', styles: 'height:200px; width:500px' });
afterAll(() => {
if (pivotGridObj) {
pivotGridObj.destroy();
}
remove(elem);
});
beforeAll((done: Function) => {
if (!document.getElementById(elem.id)) {
document.body.appendChild(elem);
}
let dataBound: EmitType<Object> = () => { done(); };
PivotView.Inject(GroupingBar);
pivotGridObj = new PivotView({
dataSourceSettings: {
dataSource: pivot_dataset as IDataSet[],
expandAll: false,
enableSorting: true,
filterSettings: [{ name: 'state', type: 'Exclude', items: ['Delhi'] }],
sortSettings: [{ name: 'product', order: 'None' },
{ name: 'eyeColor', order: 'Descending' },
{ name: 'date', order: 'None' }],
formatSettings: [{ name: 'balance', format: 'C' }, { name: 'date', format: 'dd/MM/yyyy-hh:mm', type: 'date' }],
rows: [{ name: 'product', caption: 'Items' }, { name: 'eyeColor' }],
columns: [{ name: 'gender', caption: 'Population' }, { name: 'isActive' }],
values: [{ name: 'balance' }, { name: 'quantity' }],
filters: [{ name: 'state' }],
},
showGroupingBar: true,
groupingBarSettings: { showFilterIcon: false, showRemoveIcon: false, showSortIcon: false, showValueTypeIcon: false },
dataBound: dataBound,
gridSettings: {
columnRender: (args: ColumnRenderEventArgs) => {
args.columns[0].width = 200;
args.columns[1].allowReordering = true;
args.columns[1].allowResizing = true;
}
},
});
pivotGridObj.appendTo('#PivotGrid');
});
let persistdata: string;
it('check window resize with grouping bar', () => {
pivotGridObj.onWindowResize();
pivotGridObj.renderModule.updateGridSettings();
expect(true).toBeTruthy();
});
it('grouping bar render testing', () => {
expect(pivotGridObj.element.children[0].classList.contains('e-grouping-bar')).toBeTruthy;
pivotGridObj.dataBind();
pivotGridObj.groupingBarSettings = { showFilterIcon: true, showRemoveIcon: true, showSortIcon: true };
expect(pivotGridObj.element.children[0].classList.contains('e-grouping-bar')).toBeTruthy;
});
it('check sorting order field', () => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-columns').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
((pivotButtons[0]).querySelector('.e-sort') as HTMLElement).click();
expect(true).toBe(true);
});
it('sorting order after update', () => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-columns').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
expect((pivotButtons[0]).querySelector('.e-descend')).toBeTruthy;
});
it('check filtering field', (done: Function) => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-filters').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
((pivotButtons[0]).querySelector('.e-btn-filter') as HTMLElement).click();
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
expect(filterDialog.element.classList.contains('e-popup-open')).toBe(true);
done();
}, 1000);
});
it('check all nodes on filter popup', () => {
let treeObj: TreeView = pivotGridObj.pivotCommon.filterDialog.allMemberSelect;
let memberTreeObj: TreeView = pivotGridObj.pivotCommon.filterDialog.memberTreeView;
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
let allNode: HTMLElement = treeObj.element.querySelector('.e-checkbox-wrapper');
let checkEle: Element[] = <Element[] & NodeListOf<Element>>memberTreeObj.element.querySelectorAll('.e-checkbox-wrapper');
expect(checkEle.length).toBeGreaterThan(0);
expect(allNode.classList.contains('e-small')).toBe(false);
let args: MouseEvent = new MouseEvent("mousedown", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
args = new MouseEvent("mouseup", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
args = new MouseEvent("click", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
let checkedEle: Element[] = <Element[] & NodeListOf<Element>>memberTreeObj.element.querySelectorAll('.e-check');
expect(checkEle.length).toEqual(checkedEle.length);
expect(filterDialog.element.querySelector('.e-ok-btn').getAttribute('disabled')).toBe(null);
(filterDialog.element.querySelector('.e-ok-btn') as HTMLElement).click();
});
it('check filter state after update', () => {
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
expect(filterDialog).toBeUndefined;
});
it('check remove pivot button', (done: Function) => {
let pivotButton: HTMLElement =
(pivotGridObj.element.querySelector('.e-filters').querySelector('.e-pivot-button') as HTMLElement);
expect(pivotButton.id).toBe('state');
(pivotButton.querySelector('.e-remove') as HTMLElement).click();
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
pivotButton = (pivotGridObj.element.querySelector('.e-filters').querySelector('.e-pivot-button') as HTMLElement);
expect(pivotButton).toBeNull();
done();
}, 1000);
});
it('check drag and drop pivot button', (done: Function) => {
pivotGridObj.onFieldDropped = function (args: FieldDroppedEventArgs) {
args.droppedField.caption = "droppedButton"
};
let rowAxiscontent: HTMLElement = pivotGridObj.element.querySelector('.e-rows');
let valueAxiscontent: HTMLElement = pivotGridObj.element.querySelector('.e-values');
let pivotButton: HTMLElement[] = [].slice.call((valueAxiscontent).querySelectorAll('.e-pivot-button'));
expect(pivotButton.length).toEqual(2);
let dragElement: HTMLElement = pivotButton[0].querySelector('.e-content');
let mousedown: any =
util.getEventObject('MouseEvents', 'mousedown', dragElement, dragElement, 15, 10);
EventHandler.trigger(dragElement, 'mousedown', mousedown);
let mousemove: any =
util.getEventObject('MouseEvents', 'mousemove', dragElement, rowAxiscontent, 15, 70);
mousemove.srcElement = mousemove.target = mousemove.toElement = rowAxiscontent;
EventHandler.trigger(<any>(document), 'mousemove', mousemove);
mousemove = util.setMouseCordinates(mousemove, 15, 75);
EventHandler.trigger(<any>(document), 'mousemove', mousemove);
let mouseOverEventArgs: any = extend({}, mousemove, null, true);
mouseOverEventArgs.type = 'mouseover';
(pivotGridObj.groupingBarModule as any).dropIndicatorUpdate(mouseOverEventArgs);
let mouseLeaveEventArgs: any = extend({}, mousemove, null, true);
mouseLeaveEventArgs.type = 'mouseleave';
(pivotGridObj.groupingBarModule as any).dropIndicatorUpdate(mouseLeaveEventArgs);
let mouseUp: any = util.getEventObject('MouseEvents', 'mouseup', dragElement, rowAxiscontent);
mouseUp.type = 'mouseup';
mouseUp.srcElement = mouseUp.target = mouseUp.toElement = rowAxiscontent;
EventHandler.trigger(<any>(document), 'mouseup', mouseUp);
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
pivotButton = [].slice.call((rowAxiscontent).querySelectorAll('.e-pivot-button'));
expect(pivotButton.length).toEqual(3);
expect((pivotButton[2].querySelector('.e-content') as HTMLElement).innerText).toEqual("droppedButton");
done();
}, 1000);
});
it('destroy common event handlers', () => {
pivotGridObj.commonModule.destroy();
expect(true).toBeTruthy();
});
it('pivotgrid destroy', () => {
pivotGridObj.destroy();
expect(true).toBeTruthy();
});
it('pivotgrid destroy expect', () => {
expect(pivotGridObj.element.innerHTML).toBe('');
});
});
describe('- Field List with injected Module - ', () => {
let pivotGridObj: PivotView;
let elem: HTMLElement = createElement('div', { id: 'PivotGrid', styles: 'height:200px; width:500px' });
afterAll(() => {
if (pivotGridObj) {
pivotGridObj.destroy();
}
remove(elem);
});
beforeAll((done: Function) => {
if (!document.getElementById(elem.id)) {
document.body.appendChild(elem);
}
let dataBound: EmitType<Object> = () => { done(); };
PivotView.Inject(GroupingBar, FieldList);
pivotGridObj = new PivotView({
dataSourceSettings: {
dataSource: pivot_dataset as IDataSet[],
expandAll: false,
enableSorting: true,
filterSettings: [{ name: 'state', type: 'Exclude', items: ['Delhi'] }],
sortSettings: [{ name: 'product', order: 'None' },
{ name: 'eyeColor', order: 'Descending' },
{ name: 'date', order: 'None' }],
formatSettings: [{ name: 'balance', format: 'C' }, { name: 'date', format: 'dd/MM/yyyy-hh:mm', type: 'date' }],
rows: [{ name: 'date', caption: 'Date' }, { name: 'eyeColor' }],
columns: [{ name: 'gender', caption: 'Population' }, { name: 'isActive' }],
values: [{ name: 'balance' }, { name: 'quantity' }],
filters: [{ name: 'state' }],
},
showGroupingBar: true,
showFieldList: true,
dataBound: dataBound
});
pivotGridObj.appendTo('#PivotGrid');
util.disableDialogAnimation(pivotGridObj.pivotFieldListModule.dialogRenderer.fieldListDialog);
});
let persistdata: string;
it('check window resize with grouping bar', () => {
pivotGridObj.onWindowResize();
pivotGridObj.renderModule.updateGridSettings();
expect(true).toBeTruthy();
});
it('grouping bar render testing', () => {
pivotGridObj.dataBind();
expect(pivotGridObj.element.querySelector('.e-grouping-bar')).toBeTruthy;
});
it('field list render testing', () => {
pivotGridObj.dataBind();
expect(pivotGridObj.pivotFieldListModule).not.toBeUndefined;
});
it('check open field list popup', () => {
(pivotGridObj.pivotFieldListModule.element.querySelector('.e-toggle-field-list') as HTMLElement).click();
expect(true).toBe(true);
});
it('check sorting order field', () => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-columns').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
((pivotButtons[0]).querySelector('.e-sort') as HTMLElement).click();
expect(true).toBe(true);
});
it('sorting order after update', () => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-columns').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
expect((pivotButtons[0]).querySelector('.e-descend')).toBeTruthy;
});
it('check filtering field', (done: Function) => {
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-filters').querySelectorAll('.e-pivot-button'));
expect(pivotButtons.length).toBeGreaterThan(0);
((pivotButtons[0]).querySelector('.e-btn-filter') as HTMLElement).click();
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
expect(filterDialog.element.classList.contains('e-popup-open')).toBe(true);
done();
}, 1000);
});
it('check all nodes on filter popup', () => {
let treeObj: TreeView = pivotGridObj.pivotCommon.filterDialog.allMemberSelect;
let memberTreeObj: TreeView = pivotGridObj.pivotCommon.filterDialog.memberTreeView;
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
let allNode: HTMLElement = treeObj.element.querySelector('.e-checkbox-wrapper');
let checkEle: Element[] = <Element[] & NodeListOf<Element>>memberTreeObj.element.querySelectorAll('.e-checkbox-wrapper');
expect(checkEle.length).toBeGreaterThan(0);
expect(allNode.classList.contains('e-small')).toBe(false);
let args: MouseEvent = new MouseEvent("mousedown", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
args = new MouseEvent("mouseup", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
args = new MouseEvent("click", { view: window, bubbles: true, cancelable: true });
allNode.querySelector('.e-frame').dispatchEvent(args);
let checkedEle: Element[] = <Element[] & NodeListOf<Element>>memberTreeObj.element.querySelectorAll('.e-check');
expect(checkEle.length).toEqual(checkedEle.length);
expect(filterDialog.element.querySelector('.e-ok-btn').getAttribute('disabled')).toBe(null);
(filterDialog.element.querySelector('.e-ok-btn') as HTMLElement).click();
});
it('check filter state after update', () => {
let filterDialog: Dialog = pivotGridObj.pivotCommon.filterDialog.dialogPopUp;
expect(filterDialog).toBeUndefined;
});
it('check remove pivot button', (done: Function) => {
let pivotButton: HTMLElement =
(pivotGridObj.element.querySelector('.e-filters').querySelector('.e-pivot-button') as HTMLElement);
expect(pivotButton.id).toBe('state');
(pivotButton.querySelector('.e-remove') as HTMLElement).click();
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
pivotButton = (pivotGridObj.element.querySelector('.e-filters').querySelector('.e-pivot-button') as HTMLElement);
expect(pivotButton).toBeNull();
done();
}, 1000);
});
it('check drag and drop pivot button', (done: Function) => {
pivotGridObj.onFieldDropped = function (args: FieldDroppedEventArgs) {
args.droppedField.caption = "droppedButton"
};
let rowAxiscontent: HTMLElement = pivotGridObj.element.querySelector('.e-rows');
let valueAxiscontent: HTMLElement = pivotGridObj.element.querySelector('.e-values');
let pivotButton: HTMLElement[] = [].slice.call((valueAxiscontent).querySelectorAll('.e-pivot-button'));
expect(pivotButton.length).toEqual(2);
let dragElement: HTMLElement = pivotButton[0].querySelector('.e-draggable');
let mousedown: any =
util.getEventObject('MouseEvents', 'mousedown', dragElement, dragElement, 15, 10);
EventHandler.trigger(dragElement, 'mousedown', mousedown);
let mousemove: any =
util.getEventObject('MouseEvents', 'mousemove', dragElement, rowAxiscontent, 15, 70);
mousemove.srcElement = mousemove.target = mousemove.toElement = rowAxiscontent;
EventHandler.trigger(<any>(document), 'mousemove', mousemove);
mousemove = util.setMouseCordinates(mousemove, 15, 75);
EventHandler.trigger(<any>(document), 'mousemove', mousemove);
let mouseUp: any = util.getEventObject('MouseEvents', 'mouseup', dragElement, rowAxiscontent);
mouseUp.type = 'mouseup';
mouseUp.srcElement = mouseUp.target = mouseUp.toElement = rowAxiscontent;
EventHandler.trigger(<any>(document), 'mouseup', mouseUp);
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
pivotButton = [].slice.call((rowAxiscontent).querySelectorAll('.e-pivot-button'));
expect(pivotButton.length).toEqual(3);
expect((pivotButton[2].querySelector('.e-content') as HTMLElement).innerText).toEqual("droppedButton");
done();
}, 1000);
});
it('set rtl property', (done: Function) => {
pivotGridObj.enableRtl = true;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
expect(pivotGridObj.element.classList.contains('e-rtl')).toBeTruthy;
done();
}, 1000);
});
it('remove rtl property', (done: Function) => {
pivotGridObj.enableRtl = false;
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
expect(pivotGridObj.element.classList.contains('e-rtl')).not.toBeTruthy;
done();
}, 1000);
});
it('destroy common event handlers', () => {
pivotGridObj.commonModule.destroy();
expect(true).toBeTruthy();
});
it('pivotgrid destroy', () => {
pivotGridObj.destroy();
expect(true).toBeTruthy();
});
it('pivotgrid destroy expect', () => {
expect(pivotGridObj.element.innerHTML).toBe('');
});
});
});
describe('Grouping Bar sorting none ', () => {
let pivotGridObj: PivotView;
let elem: HTMLElement = createElement('div', { id: 'PivotGrid', styles: 'height:200px; width:500px' });
afterAll(() => {
if (pivotGridObj) {
pivotGridObj.destroy();
}
remove(elem);
});
beforeAll((done: Function) => {
if (!document.getElementById(elem.id)) {
document.body.appendChild(elem);
}
let dataBound: EmitType<Object> = () => { done(); };
PivotView.Inject(GroupingBar, FieldList);
pivotGridObj = new PivotView({
dataSourceSettings: {
dataSource: pivot_dataset as IDataSet[],
expandAll: false,
enableSorting: true,
sortSettings: [{ name: 'company', order: 'None' }],
filterSettings: [{ name: 'name', type: 'Include', items: ['Knight Wooten'] },
{ name: 'company', type: 'Include', items: ['NIPAZ'] },
{ name: 'gender', type: 'Include', items: ['male'] }],
rows: [{ name: 'company' }, { name: 'state' }],
columns: [{ name: 'name' }],
values: [{ name: 'balance' }, { name: 'quantity' }], filters: [{ name: 'gender' }]
},
showGroupingBar: true,
showFieldList: false,
gridSettings: {
contextMenuItems: ['Aggregate', 'Csv Export', 'Drillthrough', 'Expand', 'Collapse']
},
dataBound: dataBound
});
pivotGridObj.appendTo('#PivotGrid');
});
let persistdata: string;
let click: MouseEvent = new MouseEvent('click', {
'view': window,
'bubbles': true,
'cancelable': true
});
it('grouping bar sort asc', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
setTimeout(() => {
pivotGridObj.dataSourceSettings.sortSettings = [{ name: 'company', order: 'Ascending' }];
expect((pivotButtons[0]).querySelector('.e-sort')).toBeTruthy;
done();
}, 1000);
});
it('grouping bar sort desc', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
setTimeout(() => {
pivotGridObj.dataSourceSettings.sortSettings = [{ name: 'company', order: 'Descending' }];
expect((pivotButtons[0]).querySelector('.e-descend')).toBeTruthy;
done();
}, 1000);
});
it('grouping bar sort none', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
setTimeout(() => {
pivotGridObj.dataSourceSettings.sortSettings = [{ name: 'company', order: 'None' }]
expect((pivotButtons[0]).querySelector('.e-sort')).not.toBeTruthy;
done();
}, 1000);
});
it('grouping bar sort none icon click', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
pivotGridObj.dataSourceSettings.sortSettings = [{ name: 'company', order: 'Ascending' }];
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
expect((pivotButtons[1]).querySelector('.e-sort')).toBeTruthy;
setTimeout(() => {
// pivotGridObj.dataSource.sortSettings = [{ name: 'company', order: 'None' }]
document.querySelectorAll('.e-group-rows .e-sort')[1].dispatchEvent(click);
done();
}, 1000);
});
it('grouping bar sort asc icon click', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
expect((pivotButtons[1]).querySelector('.e-descend')).toBeTruthy;
setTimeout(() => {
// pivotGridObj.dataSource.sortSettings = [{ name: 'company', order: 'None' }]
document.querySelectorAll('.e-group-rows .e-sort')[1].dispatchEvent(click);
expect((pivotButtons[0]).querySelector('.e-sort')).toBeTruthy;
done();
}, 1000);
});
it('grouping bar sort desc icon click', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
let pivotButtons: HTMLElement[] =
[].slice.call(pivotGridObj.element.querySelector('.e-rows').querySelectorAll('.e-pivot-button'));
expect((pivotButtons[1]).querySelector('.e-sort')).toBeTruthy;
setTimeout(() => {
// pivotGridObj.dataSource.sortSettings = [{ name: 'company', order: 'None' }]
document.querySelectorAll('.e-group-rows .e-sort')[1].dispatchEvent(click);
done();
}, 1000);
});
it('grouping bar sort desc icon click', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
pivotGridObj.showGroupingBar = false;
pivotGridObj.setProperties({
gridSettings: {
contextMenuItems: ['Aggregate', 'Expand', 'Collapse']
}
}, true);
done();
}, 1000);
});
});
describe('Grouping bar sort icon deferupdate', () => {
let pivotGridObj: PivotView;
let elem: HTMLElement = createElement('div', { id: 'PivotGrid' });
let cf: any;
beforeAll(() => {
document.body.appendChild(elem);
PivotView.Inject(GroupingBar, FieldList);
pivotGridObj = new PivotView(
{
dataSourceSettings: {
dataSource: pivot_nodata as IDataSet[],
enableSorting: true,
expandAll: true,
rows: [{ name: 'Country' }],
columns: [{ name: 'Product' }],
values: [{ name: 'Amount' }],
},
allowDeferLayoutUpdate: true,
showFieldList: true,
showGroupingBar: true,
width: 600,
height: 300
});
pivotGridObj.appendTo('#PivotGrid');
});
let mouseup: MouseEvent = new MouseEvent('mouseup', {
'view': window,
'bubbles': true,
'cancelable': true
});
let mousedown: MouseEvent = new MouseEvent('mousedown', {
'view': window,
'bubbles': true,
'cancelable': true
});
let click: MouseEvent = new MouseEvent('click', {
'view': window,
'bubbles': true,
'cancelable': true
});
it('Country -> descending _using grouping bar sort icon', (done: Function) => {
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000;
setTimeout(() => {
document.querySelectorAll('.e-group-rows .e-sort')[0].dispatchEvent(click);
done();
}, 1000);
});
it('memory leak', () => {
profile.sample();
let average: any = inMB(profile.averageChange);
//Check average change in memory samples to not be over 10MB
//expect(average).toBeLessThan(10);
let memory: any = inMB(getMemoryProfile());
//Check the final memory usage against the first usage, there should be little change if everything was properly deallocated
expect(memory).toBeLessThan(profile.samples[0] + 0.25);
});
afterAll(() => {
if (pivotGridObj) {
pivotGridObj.destroy();
}
remove(elem);
});
});
package org.netbeans.gradle.project.api.task;
import java.util.Set;
import javax.annotation.CheckForNull;
import javax.annotation.Nonnull;
import javax.annotation.Nullable;
import org.netbeans.gradle.project.api.config.ProfileDef;
/**
* Defines a query which defines the commands for the
* {@link org.netbeans.spi.project.ActionProvider} of the project. A built-in
* command for Gradle projects are made of a Gradle command to be executed
* and a custom task to be executed after the command completes.
* <P>
* Instances of this interface are expected to be found on the lookup of the extension
* {@link org.netbeans.gradle.project.api.entry.GradleProjectExtension2#getExtensionLookup() (getExtensionLookup)}.
* <P>
* Instances of this interface are required to be safe to be accessed by
* multiple threads concurrently.
*
* @see org.netbeans.gradle.project.api.entry.GradleProjectExtension2#getExtensionLookup()
*/
public interface BuiltInGradleCommandQuery {
/**
* Returns the set of commands supported by the associated extension. This
* method needs to return the same strings as defined by the
* {@link org.netbeans.spi.project.ActionProvider#getSupportedActions()}
* method.
*
* @return the set of commands supported by the associated extension. This
* method may never return {@code null}. Although it is allowed to return
* an empty set, it is recommended not to implement
* {@code BuiltInGradleCommandQuery} in this case.
*/
@Nonnull
public Set<String> getSupportedCommands();
/**
* Returns the display name of the command, if known by the extension.
* The following commands are known by Gradle Support and the extension may
* choose to return {@code null} for them even if it
* {@link #getSupportedCommands() supports} the command:
* <ul>
* <li>{@code ActionProvider.COMMAND_BUILD}</li>
* <li>{@code ActionProvider.COMMAND_TEST}</li>
* <li>{@code ActionProvider.COMMAND_CLEAN}</li>
* <li>{@code ActionProvider.COMMAND_RUN}</li>
* <li>{@code ActionProvider.COMMAND_DEBUG}</li>
* <li>{@code ActionProvider.COMMAND_REBUILD}</li>
* <li>{@code ActionProvider.COMMAND_TEST_SINGLE}</li>
* <li>{@code ActionProvider.COMMAND_DEBUG_TEST_SINGLE}</li>
* <li>{@code ActionProvider.COMMAND_RUN_SINGLE}</li>
* <li>{@code ActionProvider.COMMAND_DEBUG_SINGLE}</li>
* <li>{@code JavaProjectConstants.COMMAND_JAVADOC}</li>
* <li>{@code JavaProjectConstants.COMMAND_DEBUG_FIX}</li>
* <li>{@code SingleMethod.COMMAND_RUN_SINGLE_METHOD}</li>
* <li>{@code SingleMethod.COMMAND_DEBUG_SINGLE_METHOD}</li>
* </ul>
*
* @param command the command whose display name is to be returned. This
* argument cannot be {@code null}.
* @return the display name of the command or {@code null} if this query
* does not know the display name for the specified command
*/
@Nullable
@CheckForNull
public String tryGetDisplayNameOfCommand(@Nonnull String command);
/**
* Defines the default Gradle command to run when the built-in command is to
* be executed. This command might be reconfigured by users, so there is no
* guarantee that this command will actually be executed.
* <P>
* This method must accept any profile or command even if
* {@link #getSupportedCommands()} does not contain the specified command.
* This method should return {@code null} for such case.
*
* @param profileDef the profile to which the command is requested. This
* argument can be {@code null} if the command is requested for the
* default profile.
* @param command the command as defined by the
* {@link org.netbeans.spi.project.ActionProvider} interface. This
* argument cannot be {@code null}.
*
* @return the default Gradle command to run when the built-in command is to
* be executed. This method may return {@code null} in case there is
* no Gradle command associated with the specified command or
* profile.
*/
@Nullable
@CheckForNull
public GradleCommandTemplate tryGetDefaultGradleCommand(
@Nullable ProfileDef profileDef,
@Nonnull String command);
/**
* Returns the custom tasks associated with the specified built-in command.
* The tasks returned by this method cannot be reconfigured by users.
* <P>
* This method must accept any profile or command even if
* {@link #getSupportedCommands()} does not contain the specified command.
* This method should return {@code null} for such case.
*
* @param profileDef the profile to which the command is requested. This
* argument can be {@code null} if the command is requested for the
* default profile.
* @param command the command as defined by the
* {@link org.netbeans.spi.project.ActionProvider} interface. This
* argument cannot be {@code null}.
* @return the custom tasks associated with the specified built-in command.
* This method may return {@code null} if the associated extension does
* not define a Gradle command for the specified profile and command.
*/
@Nullable
@CheckForNull
public CustomCommandActions tryGetCommandDefs(
@Nullable ProfileDef profileDef,
@Nonnull String command);
}
/*jslint onevar: false*/
/*globals sinon buster*/
/**
* @author Christian Johansen ([email protected])
* @license BSD
*
* Copyright (c) 2010-2012 Christian Johansen
*/
"use strict";
if (typeof require == "function" && typeof module == "object") {
var buster = require("../runner");
var sinon = require("../../lib/sinon");
}
buster.testCase("sinon.assert", {
setUp: function () {
this.global = typeof window !== "undefined" ? window : global;
this.setUpStubs = function () {
this.stub = sinon.stub.create();
sinon.stub(sinon.assert, "fail").throws();
sinon.stub(sinon.assert, "pass");
};
this.tearDownStubs = function () {
sinon.assert.fail.restore();
sinon.assert.pass.restore();
};
},
"is object": function () {
assert.isObject(sinon.assert);
},
"fail": {
setUp: function () {
this.exceptionName = sinon.assert.failException;
},
tearDown: function () {
sinon.assert.failException = this.exceptionName;
},
"throws exception": function () {
var failed = false;
var exception;
try {
sinon.assert.fail("Some message");
failed = true;
} catch (e) {
exception = e;
}
assert.isFalse(failed);
assert.equals(exception.name, "AssertError");
},
"throws configured exception type": function () {
sinon.assert.failException = "RetardError";
assert.exception(function () {
sinon.assert.fail("Some message");
}, "RetardError");
}
},
"match": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when arguments to not match": function () {
assert.exception(function () {
sinon.assert.match("foo", "bar");
});
assert(sinon.assert.fail.calledOnce);
},
"passes when argumens match": function () {
sinon.assert.match("foo", "foo");
assert(sinon.assert.pass.calledOnce);
}
},
"called": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
assert.exception(function () {
sinon.assert.called();
});
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
assert.exception(function () {
sinon.assert.called(function () {});
});
assert(sinon.assert.fail.called);
},
"fails when method was not called": function () {
var stub = this.stub;
assert.exception(function () {
sinon.assert.called(stub);
});
assert(sinon.assert.fail.called);
},
"does not fail when method was called": function () {
var stub = this.stub;
stub();
refute.exception(function () {
sinon.assert.called(stub);
});
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var stub = this.stub;
stub();
refute.exception(function () {
sinon.assert.called(stub);
});
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("called"));
}
},
"notCalled": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
assert.exception(function () {
sinon.assert.notCalled();
});
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
assert.exception(function () {
sinon.assert.notCalled(function () {});
});
assert(sinon.assert.fail.called);
},
"fails when method was called": function () {
var stub = this.stub;
stub();
assert.exception(function () {
sinon.assert.notCalled(stub);
});
assert(sinon.assert.fail.called);
},
"passes when method was not called": function () {
var stub = this.stub;
refute.exception(function () {
sinon.assert.notCalled(stub);
});
assert.isFalse(sinon.assert.fail.called);
},
"should call pass callback": function () {
var stub = this.stub;
sinon.assert.notCalled(stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("notCalled"));
}
},
"calledOnce": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
assert.exception(function () {
sinon.assert.calledOnce();
});
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
assert.exception(function () {
sinon.assert.calledOnce(function () {});
});
assert(sinon.assert.fail.called);
},
"fails when method was not called": function () {
var stub = this.stub;
assert.exception(function () {
sinon.assert.calledOnce(stub);
});
assert(sinon.assert.fail.called);
},
"passes when method was called": function () {
var stub = this.stub;
stub();
refute.exception(function () {
sinon.assert.calledOnce(stub);
});
assert.isFalse(sinon.assert.fail.called);
},
"fails when method was called more than once": function () {
var stub = this.stub;
stub();
stub();
assert.exception(function () {
sinon.assert.calledOnce(stub);
});
assert(sinon.assert.fail.called);
},
"calls pass callback": function () {
var stub = this.stub;
stub();
sinon.assert.calledOnce(stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledOnce"));
}
},
"calledTwice": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails if called once": function () {
var stub = this.stub;
this.stub();
assert.exception(function () {
sinon.assert.calledTwice(stub);
});
},
"passes if called twice": function () {
var stub = this.stub;
this.stub();
this.stub();
refute.exception(function () {
sinon.assert.calledTwice(stub);
});
},
"calls pass callback": function () {
var stub = this.stub;
stub();
stub();
sinon.assert.calledTwice(stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledTwice"));
}
},
"calledThrice": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails if called once": function () {
var stub = this.stub;
this.stub();
assert.exception(function () {
sinon.assert.calledThrice(stub);
});
},
"passes if called thrice": function () {
var stub = this.stub;
this.stub();
this.stub();
this.stub();
refute.exception(function () {
sinon.assert.calledThrice(stub);
});
},
"calls pass callback": function () {
var stub = this.stub;
stub();
stub();
stub();
sinon.assert.calledThrice(stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledThrice"));
}
},
"callOrder": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"passes when calls where done in right order": function () {
var spy1 = sinon.spy();
var spy2 = sinon.spy();
spy1();
spy2();
refute.exception(function () {
sinon.assert.callOrder(spy1, spy2);
});
},
"fails when calls where done in wrong order": function () {
var spy1 = sinon.spy();
var spy2 = sinon.spy();
spy2();
spy1();
assert.exception(function () {
sinon.assert.callOrder(spy1, spy2);
});
assert(sinon.assert.fail.called);
},
"passes when many calls where done in right order": function () {
var spy1 = sinon.spy();
var spy2 = sinon.spy();
var spy3 = sinon.spy();
var spy4 = sinon.spy();
spy1();
spy2();
spy3();
spy4();
refute.exception(function () {
sinon.assert.callOrder(spy1, spy2, spy3, spy4);
});
},
"fails when one of many calls where done in wrong order": function () {
var spy1 = sinon.spy();
var spy2 = sinon.spy();
var spy3 = sinon.spy();
var spy4 = sinon.spy();
spy1();
spy2();
spy4();
spy3();
assert.exception(function () {
sinon.assert.callOrder(spy1, spy2, spy3, spy4);
});
assert(sinon.assert.fail.called);
},
"calls pass callback": function () {
var stubs = [sinon.spy(), sinon.spy()];
stubs[0]();
stubs[1]();
sinon.assert.callOrder(stubs[0], stubs[1]);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("callOrder"));
},
"passes for multiple calls to same spy": function () {
var first = sinon.spy();
var second = sinon.spy();
first();
second();
first();
refute.exception(function () {
sinon.assert.callOrder(first, second, first);
});
},
"fails if first spy was not called": function () {
var first = sinon.spy();
var second = sinon.spy();
second();
assert.exception(function () {
sinon.assert.callOrder(first, second);
});
},
"fails if second spy was not called": function () {
var first = sinon.spy();
var second = sinon.spy();
first();
assert.exception(function () {
sinon.assert.callOrder(first, second);
});
}
},
"calledOn": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
var object = {};
sinon.stub(this.stub, "calledOn");
assert.exception(function () {
sinon.assert.calledOn(null, object);
});
assert.isFalse(this.stub.calledOn.calledWith(object));
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
var object = {};
sinon.stub(this.stub, "calledOn");
assert.exception(function () {
sinon.assert.calledOn(function () {}, object);
});
assert.isFalse(this.stub.calledOn.calledWith(object));
assert(sinon.assert.fail.called);
},
"fails when method fails": function () {
var object = {};
sinon.stub(this.stub, "calledOn").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.calledOn(stub, object);
});
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
var object = {};
sinon.stub(this.stub, "calledOn").returns(true);
var stub = this.stub;
sinon.assert.calledOn(stub, object);
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var obj = {};
this.stub.call(obj);
sinon.assert.calledOn(this.stub, obj);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledOn"));
}
},
"calledWithNew": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
sinon.stub(this.stub, "calledWithNew");
assert.exception(function () {
sinon.assert.calledWithNew(null);
});
assert.isFalse(this.stub.calledWithNew.called);
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
sinon.stub(this.stub, "calledWithNew");
assert.exception(function () {
sinon.assert.calledWithNew(function () {});
});
assert.isFalse(this.stub.calledWithNew.called);
assert(sinon.assert.fail.called);
},
"fails when method fails": function () {
sinon.stub(this.stub, "calledWithNew").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.calledWithNew(stub);
});
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
sinon.stub(this.stub, "calledWithNew").returns(true);
var stub = this.stub;
sinon.assert.calledWithNew(stub);
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var a = new this.stub();
sinon.assert.calledWithNew(this.stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledWithNew"));
}
},
"alwaysCalledWithNew": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method does not exist": function () {
sinon.stub(this.stub, "alwaysCalledWithNew");
assert.exception(function () {
sinon.assert.alwaysCalledWithNew(null);
});
assert.isFalse(this.stub.alwaysCalledWithNew.called);
assert(sinon.assert.fail.called);
},
"fails when method is not stub": function () {
sinon.stub(this.stub, "alwaysCalledWithNew");
assert.exception(function () {
sinon.assert.alwaysCalledWithNew(function () {});
});
assert.isFalse(this.stub.alwaysCalledWithNew.called);
assert(sinon.assert.fail.called);
},
"fails when method fails": function () {
sinon.stub(this.stub, "alwaysCalledWithNew").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.alwaysCalledWithNew(stub);
});
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
sinon.stub(this.stub, "alwaysCalledWithNew").returns(true);
var stub = this.stub;
sinon.assert.alwaysCalledWithNew(stub);
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var a = new this.stub();
sinon.assert.alwaysCalledWithNew(this.stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("alwaysCalledWithNew"));
}
},
"calledWith": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method fails": function () {
var object = {};
sinon.stub(this.stub, "calledWith").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.calledWith(stub, object, 1);
});
assert(this.stub.calledWith.calledWith(object, 1));
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
var object = {};
sinon.stub(this.stub, "calledWith").returns(true);
var stub = this.stub;
refute.exception(function () {
sinon.assert.calledWith(stub, object, 1);
});
assert(this.stub.calledWith.calledWith(object, 1));
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
this.stub("yeah");
sinon.assert.calledWith(this.stub, "yeah");
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledWith"));
}
},
"calledWithExactly": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method fails": function () {
var object = {};
sinon.stub(this.stub, "calledWithExactly").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.calledWithExactly(stub, object, 1);
});
assert(this.stub.calledWithExactly.calledWithExactly(object, 1));
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
var object = {};
sinon.stub(this.stub, "calledWithExactly").returns(true);
var stub = this.stub;
refute.exception(function () {
sinon.assert.calledWithExactly(stub, object, 1);
});
assert(this.stub.calledWithExactly.calledWithExactly(object, 1));
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
this.stub("yeah");
sinon.assert.calledWithExactly(this.stub, "yeah");
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("calledWithExactly"));
}
},
"neverCalledWith": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method fails": function () {
var object = {};
sinon.stub(this.stub, "neverCalledWith").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.neverCalledWith(stub, object, 1);
});
assert(this.stub.neverCalledWith.calledWith(object, 1));
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
var object = {};
sinon.stub(this.stub, "neverCalledWith").returns(true);
var stub = this.stub;
refute.exception(function () {
sinon.assert.neverCalledWith(stub, object, 1);
});
assert(this.stub.neverCalledWith.calledWith(object, 1));
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
this.stub("yeah");
sinon.assert.neverCalledWith(this.stub, "nah!");
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("neverCalledWith"));
}
},
"threwTest": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method fails": function () {
sinon.stub(this.stub, "threw").returns(false);
var stub = this.stub;
assert.exception(function () {
sinon.assert.threw(stub, 1, 2);
});
assert(this.stub.threw.calledWithExactly(1, 2));
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
sinon.stub(this.stub, "threw").returns(true);
var stub = this.stub;
refute.exception(function () {
sinon.assert.threw(stub, 1, 2);
});
assert(this.stub.threw.calledWithExactly(1, 2));
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
sinon.stub(this.stub, "threw").returns(true);
this.stub();
sinon.assert.threw(this.stub);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("threw"));
}
},
"callCount": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails when method fails": function () {
this.stub();
this.stub();
var stub = this.stub;
assert.exception(function () {
sinon.assert.callCount(stub, 3);
});
assert(sinon.assert.fail.called);
},
"passes when method doesn't fail": function () {
var stub = this.stub;
this.stub.callCount = 3;
refute.exception(function () {
sinon.assert.callCount(stub, 3);
});
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
this.stub();
sinon.assert.callCount(this.stub, 1);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("callCount"));
}
},
"alwaysCalledOn": {
setUp: function () { this.setUpStubs(); },
tearDown: function () { this.tearDownStubs(); },
"fails if method is missing": function () {
assert.exception(function () {
sinon.assert.alwaysCalledOn();
});
},
"fails if method is not fake": function () {
assert.exception(function () {
sinon.assert.alwaysCalledOn(function () {}, {});
});
},
"fails if stub returns false": function () {
var stub = sinon.stub();
sinon.stub(stub, "alwaysCalledOn").returns(false);
assert.exception(function () {
sinon.assert.alwaysCalledOn(stub, {});
});
assert(sinon.assert.fail.called);
},
"passes if stub returns true": function () {
var stub = sinon.stub.create();
sinon.stub(stub, "alwaysCalledOn").returns(true);
sinon.assert.alwaysCalledOn(stub, {});
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
this.stub();
sinon.assert.alwaysCalledOn(this.stub, this);
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("alwaysCalledOn"));
}
},
"alwaysCalledWith": {
setUp: function () {
sinon.stub(sinon.assert, "fail").throws();
sinon.stub(sinon.assert, "pass");
},
tearDown: function () {
sinon.assert.fail.restore();
sinon.assert.pass.restore();
},
"fails if method is missing": function () {
assert.exception(function () {
sinon.assert.alwaysCalledWith();
});
},
"fails if method is not fake": function () {
assert.exception(function () {
sinon.assert.alwaysCalledWith(function () {});
});
},
"fails if stub returns false": function () {
var stub = sinon.stub.create();
sinon.stub(stub, "alwaysCalledWith").returns(false);
assert.exception(function () {
sinon.assert.alwaysCalledWith(stub, {}, []);
});
assert(sinon.assert.fail.called);
},
"passes if stub returns true": function () {
var stub = sinon.stub.create();
sinon.stub(stub, "alwaysCalledWith").returns(true);
sinon.assert.alwaysCalledWith(stub, {}, []);
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var spy = sinon.spy();
spy("Hello");
sinon.assert.alwaysCalledWith(spy, "Hello");
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("alwaysCalledWith"));
}
},
"alwaysCalledWithExactly": {
setUp: function () {
sinon.stub(sinon.assert, "fail");
sinon.stub(sinon.assert, "pass");
},
tearDown: function () {
sinon.assert.fail.restore();
sinon.assert.pass.restore();
},
"fails if stub returns false": function () {
var stub = sinon.stub.create();
sinon.stub(stub, "alwaysCalledWithExactly").returns(false);
sinon.assert.alwaysCalledWithExactly(stub, {}, []);
assert(sinon.assert.fail.called);
},
"passes if stub returns true": function () {
var stub = sinon.stub.create();
sinon.stub(stub, "alwaysCalledWithExactly").returns(true);
sinon.assert.alwaysCalledWithExactly(stub, {}, []);
assert.isFalse(sinon.assert.fail.called);
},
"calls pass callback": function () {
var spy = sinon.spy();
spy("Hello");
sinon.assert.alwaysCalledWithExactly(spy, "Hello");
assert(sinon.assert.pass.calledOnce);
assert(sinon.assert.pass.calledWith("alwaysCalledWithExactly"));
}
},
"expose": {
"exposes asserts into object": function () {
var test = {};
sinon.assert.expose(test);
assert.isFunction(test.fail);
assert.isString(test.failException);
assert.isFunction(test.assertCalled);
assert.isFunction(test.assertCalledOn);
assert.isFunction(test.assertCalledWith);
assert.isFunction(test.assertCalledWithExactly);
assert.isFunction(test.assertThrew);
assert.isFunction(test.assertCallCount);
},
"exposes asserts into global": function () {
sinon.assert.expose(this.global, {
includeFail: false
});
assert.equals(typeof failException, "undefined");
assert.isFunction(assertCalled);
assert.isFunction(assertCalledOn);
assert.isFunction(assertCalledWith);
assert.isFunction(assertCalledWithExactly);
assert.isFunction(assertThrew);
assert.isFunction(assertCallCount);
},
"fails exposed asserts without errors": function () {
sinon.assert.expose(this.global, {
includeFail: false
});
try {
assertCalled(sinon.spy());
} catch (e) {
assert.equals(e.message, "expected spy to have been called at least once but was never called");
}
},
"exposes asserts into object without prefixes": function () {
var test = {};
sinon.assert.expose(test, { prefix: "" });
assert.isFunction(test.fail);
assert.isString(test.failException);
assert.isFunction(test.called);
assert.isFunction(test.calledOn);
assert.isFunction(test.calledWith);
assert.isFunction(test.calledWithExactly);
assert.isFunction(test.threw);
assert.isFunction(test.callCount);
},
"throws if target is undefined": function () {
assert.exception(function () {
sinon.assert.expose();
}, "TypeError");
},
"throws if target is null": function () {
assert.exception(function () {
sinon.assert.expose(null);
}, "TypeError");
}
},
"message": {
setUp: function () {
this.obj = {
doSomething: function () {}
};
sinon.spy(this.obj, "doSomething");
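// Helper: invokes the named sinon.assert method with the remaining arguments
// and returns the message of the exception it throws (if any).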
this.message = function (method) {
try {
sinon.assert[method].apply(sinon.assert, [].slice.call(arguments, 1));
} catch (e) {
return e.message;
}
};
},
"assert.called exception message": function () {
assert.equals(this.message("called", this.obj.doSomething),
"expected doSomething to have been called at " +
"least once but was never called");
},
"assert.notCalled exception message one call": function () {
this.obj.doSomething();
assert.equals(this.message("notCalled", this.obj.doSomething),
"expected doSomething to not have been called " +
"but was called once\n doSomething()");
},
"assert.notCalled exception message four calls": function () {
this.obj.doSomething();
this.obj.doSomething();
this.obj.doSomething();
this.obj.doSomething();
assert.equals(this.message("notCalled", this.obj.doSomething),
"expected doSomething to not have been called " +
"but was called 4 times\n doSomething()\n " +
"doSomething()\n doSomething()\n doSomething()");
},
"assert.notCalled exception message with calls with arguments": function () {
this.obj.doSomething();
this.obj.doSomething(3);
this.obj.doSomething(42, 1);
this.obj.doSomething();
assert.equals(this.message("notCalled", this.obj.doSomething),
"expected doSomething to not have been called " +
"but was called 4 times\n doSomething()\n " +
"doSomething(3)\n doSomething(42, 1)\n doSomething()");
},
"assert.callOrder exception message": function () {
var obj = { doop: function () {}, foo: function () {} };
sinon.spy(obj, "doop");
sinon.spy(obj, "foo");
obj.doop();
this.obj.doSomething();
obj.foo();
var message = this.message("callOrder", this.obj.doSomething, obj.doop, obj.foo);
assert.equals(message,
"expected doSomething, doop, foo to be called in " +
"order but were called as doop, doSomething, foo");
},
"assert.callOrder with missing first call exception message": function () {
var obj = { doop: function () {}, foo: function () {} };
sinon.spy(obj, "doop");
sinon.spy(obj, "foo");
obj.foo();
var message = this.message("callOrder", obj.doop, obj.foo);
assert.equals(message,
"expected doop, foo to be called in " +
"order but were called as foo");
},
"assert.callOrder with missing last call exception message": function () {
var obj = { doop: function () {}, foo: function () {} };
sinon.spy(obj, "doop");
sinon.spy(obj, "foo");
obj.doop();
var message = this.message("callOrder", obj.doop, obj.foo);
assert.equals(message,
"expected doop, foo to be called in " +
"order but were called as doop");
},
"assert.callCount exception message": function () {
this.obj.doSomething();
assert.equals(this.message("callCount", this.obj.doSomething, 3),
"expected doSomething to be called thrice but was called " +
"once\n doSomething()");
},
"assert.calledOnce exception message": function () {
this.obj.doSomething();
this.obj.doSomething();
assert.equals(this.message("calledOnce", this.obj.doSomething),
"expected doSomething to be called once but was called " +
"twice\n doSomething()\n doSomething()");
this.obj.doSomething();
assert.equals(this.message("calledOnce", this.obj.doSomething),
"expected doSomething to be called once but was called " +
"thrice\n doSomething()\n doSomething()\n doSomething()");
},
"assert.calledTwice exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledTwice", this.obj.doSomething),
"expected doSomething to be called twice but was called " +
"once\n doSomething()");
},
"assert.calledThrice exception message": function () {
this.obj.doSomething();
this.obj.doSomething();
this.obj.doSomething();
this.obj.doSomething();
assert.equals(this.message("calledThrice", this.obj.doSomething),
"expected doSomething to be called thrice but was called 4 times\n doSomething()\n doSomething()\n doSomething()\n doSomething()");
},
"assert.calledOn exception message": function () {
this.obj.toString = function () {
return "[Oh yeah]";
};
var obj = { toString: function () { return "[Oh no]"; } };
var obj2 = { toString: function () { return "[Oh well]"; } };
this.obj.doSomething.call(obj);
this.obj.doSomething.call(obj2);
assert.equals(this.message("calledOn", this.obj.doSomething, this.obj),
"expected doSomething to be called with [Oh yeah] as this but was called with [Oh no], [Oh well]");
},
"assert.alwaysCalledOn exception message": function () {
this.obj.toString = function () {
return "[Oh yeah]";
};
var obj = { toString: function () { return "[Oh no]"; } };
var obj2 = { toString: function () { return "[Oh well]"; } };
this.obj.doSomething.call(obj);
this.obj.doSomething.call(obj2);
this.obj.doSomething();
assert.equals(this.message("alwaysCalledOn", this.obj.doSomething, this.obj),
"expected doSomething to always be called with [Oh yeah] as this but was called with [Oh no], [Oh well], [Oh yeah]");
},
"assert.calledWithNew exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWithNew", this.obj.doSomething),
"expected doSomething to be called with new");
},
"assert.alwaysCalledWithNew exception message": function () {
var a = new this.obj.doSomething();
this.obj.doSomething();
assert.equals(this.message("alwaysCalledWithNew", this.obj.doSomething),
"expected doSomething to always be called with new");
},
"assert.calledWith exception message": function () {
this.obj.doSomething(1, 3, "hey");
assert.equals(this.message("calledWith", this.obj.doSomething, 4, 3, "hey"),
"expected doSomething to be called with arguments 4, 3, " +
"hey\n doSomething(1, 3, hey)");
},
"assert.calledWith match.any exception message": function () {
this.obj.doSomething(true, true);
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.any, false),
"expected doSomething to be called with arguments " +
"any, false\n doSomething(true, true)");
},
"assert.calledWith match.defined exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.defined),
"expected doSomething to be called with arguments " +
"defined\n doSomething()");
},
"assert.calledWith match.truthy exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.truthy),
"expected doSomething to be called with arguments " +
"truthy\n doSomething()");
},
"assert.calledWith match.falsy exception message": function () {
this.obj.doSomething(true);
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.falsy),
"expected doSomething to be called with arguments " +
"falsy\n doSomething(true)");
},
"assert.calledWith match.same exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.same(1)),
"expected doSomething to be called with arguments " +
"same(1)\n doSomething()");
},
"assert.calledWith match.typeOf exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.typeOf("string")),
"expected doSomething to be called with arguments " +
"typeOf(\"string\")\n doSomething()");
},
"assert.calledWith match.instanceOf exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match.instanceOf(function CustomType() {})),
"expected doSomething to be called with arguments " +
"instanceOf(CustomType)\n doSomething()");
},
"assert.calledWith match object exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match({ some: "value", and: 123 })),
"expected doSomething to be called with arguments " +
"match(some: value, and: 123)\n doSomething()");
},
"assert.calledWith match boolean exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match(true)),
"expected doSomething to be called with arguments " +
"match(true)\n doSomething()");
},
"assert.calledWith match number exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match(123)),
"expected doSomething to be called with arguments " +
"match(123)\n doSomething()");
},
"assert.calledWith match string exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match("Sinon")),
"expected doSomething to be called with arguments " +
"match(\"Sinon\")\n doSomething()");
},
"assert.calledWith match regexp exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match(/[a-z]+/)),
"expected doSomething to be called with arguments " +
"match(/[a-z]+/)\n doSomething()");
},
"assert.calledWith match test function exception message": function () {
this.obj.doSomething();
assert.equals(this.message("calledWith", this.obj.doSomething, sinon.match({ test: function custom() {} })),
"expected doSomething to be called with arguments " +
"match(custom)\n doSomething()");
},
"assert.calledWithMatch exception message": function () {
this.obj.doSomething(1, 3, "hey");
assert.equals(this.message("calledWithMatch", this.obj.doSomething, 4, 3, "hey"),
"expected doSomething to be called with match 4, 3, " +
"hey\n doSomething(1, 3, hey)");
},
"assert.alwaysCalledWith exception message": function () {
this.obj.doSomething(1, 3, "hey");
this.obj.doSomething(1, "hey");
assert.equals(this.message("alwaysCalledWith", this.obj.doSomething, 1, "hey"),
"expected doSomething to always be called with arguments 1" +
", hey\n doSomething(1, 3, hey)\n doSomething(1, hey)");
},
"assert.alwaysCalledWithMatch exception message": function () {
this.obj.doSomething(1, 3, "hey");
this.obj.doSomething(1, "hey");
assert.equals(this.message("alwaysCalledWithMatch", this.obj.doSomething, 1, "hey"),
"expected doSomething to always be called with match 1" +
", hey\n doSomething(1, 3, hey)\n doSomething(1, hey)");
},
"assert.calledWithExactly exception message": function () {
this.obj.doSomething(1, 3, "hey");
assert.equals(this.message("calledWithExactly", this.obj.doSomething, 1, 3),
"expected doSomething to be called with exact arguments 1" +
", 3\n doSomething(1, 3, hey)");
},
"assert.alwaysCalledWithExactly exception message": function () {
this.obj.doSomething(1, 3, "hey");
this.obj.doSomething(1, 3);
assert.equals(this.message("alwaysCalledWithExactly", this.obj.doSomething, 1, 3),
"expected doSomething to always be called with exact " +
"arguments 1, 3\n doSomething(1, 3, hey)\n " +
"doSomething(1, 3)");
},
"assert.neverCalledWith exception message": function () {
this.obj.doSomething(1, 2, 3);
assert.equals(this.message("neverCalledWith", this.obj.doSomething, 1, 2),
"expected doSomething to never be called with " +
"arguments 1, 2\n doSomething(1, 2, 3)");
},
"assert.neverCalledWithMatch exception message": function () {
this.obj.doSomething(1, 2, 3);
assert.equals(this.message("neverCalledWithMatch", this.obj.doSomething, 1, 2),
"expected doSomething to never be called with match " +
"1, 2\n doSomething(1, 2, 3)");
},
"assert.threw exception message": function () {
this.obj.doSomething(1, 3, "hey");
this.obj.doSomething(1, 3);
assert.equals(this.message("threw", this.obj.doSomething),
"doSomething did not throw exception\n" +
" doSomething(1, 3, hey)\n doSomething(1, 3)");
},
"assert.alwaysThrew exception message": function () {
this.obj.doSomething(1, 3, "hey");
this.obj.doSomething(1, 3);
assert.equals(this.message("alwaysThrew", this.obj.doSomething),
"doSomething did not always throw exception\n" +
" doSomething(1, 3, hey)\n doSomething(1, 3)");
},
"assert.match exception message": function () {
assert.equals(this.message("match", { foo: 1 }, [1, 3]),
"expected value to match\n" +
" expected = [1, 3]\n" +
" actual = { foo: 1 }");
}
}
});
--disable_warnings
DROP TABLE IF EXISTS t1;
--enable_warnings
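# For each supported column type below: create t1, verify it exists, mask the
# default storage engine name (and any PAGE_CHECKSUM attribute) in the
# SHOW CREATE TABLE output, then drop the table again.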
CREATE TABLE t1(c1 BIT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TINYINT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 SMALLINT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 MEDIUMINT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 INT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 INTEGER NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 BIGINT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DECIMAL NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DEC NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 FIXED NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 NUMERIC NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DOUBLE NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 REAL NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DOUBLE PRECISION NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 FLOAT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DATE NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TIME NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TIMESTAMP NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 DATETIME NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 YEAR NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TINYBLOB NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 BLOB NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 MEDIUMBLOB NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 LONGBLOB NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TINYTEXT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 TEXT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 MEDIUMTEXT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
CREATE TABLE t1(c1 LONGTEXT NULL);
SHOW TABLES;
let $ENGINE=`select variable_value from information_schema.global_variables where variable_name='STORAGE_ENGINE'`;
--replace_result $ENGINE ENGINE " PAGE_CHECKSUM=0" ""
SHOW CREATE TABLE t1; DROP TABLE t1; SHOW TABLES;
<?xml version='1.0' encoding='utf-8'?>
<document xmlns="https://code.dccouncil.us/schemas/dc-library" xmlns:codified="https://code.dccouncil.us/schemas/codified" xmlns:codify="https://code.dccouncil.us/schemas/codify" xmlns:xi="http://www.w3.org/2001/XInclude" id="D.C. Law 19-112">
<num type="law">19-112</num>
<heading type="short">Human Rights Service of Process Amendment Act of 2012</heading>
<meta>
<effective>2012-03-14</effective>
<citations>
<citation type="law" url="http://lims.dccouncil.us/Download/26133/B19-0377-SignedAct.pdf">D.C. Law 19-112</citation>
<citation type="register">59 DCR 453</citation>
</citations>
<history url="http://lims.dccouncil.us/Legislation/B19-0377">
<narrative>Law 19-112, the “Human Rights Service of Process Amendment Act of 2012”, was introduced in Council and assigned Bill No. 19-377, which was referred to the Committee on Aging and Community Affairs. The Bill was adopted on first and second readings on December 6, 2011, and January 4, 2012, respectively. Signed by the Mayor on January 20, 2012, it was assigned Act No. 19-287 and transmitted to both Houses of Congress for its review. D.C. Law 19-112 became effective on March 14, 2012.</narrative>
</history>
</meta>
</document>
require "fileutils"
module PolicyManager
class JsonExporterView
attr_accessor :template, :folder, :assigns
def initialize(vars={}, options)
self.folder = options[:folder]
self.assigns = options[:assigns]
@template = options.fetch(:template) #, self.class.template)
return self
end
def save
render_json
end
def save_json(file, data)
File.open(file, "w") do |f|
f.write(data)
end
end
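# Renders the configured template through a fresh ExporterController instance
# and writes the resulting JSON payload to "<folder>/data.json".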
def render_json
ac = PolicyManager::ExporterController.new()
options = handled_template.merge!({assigns: self.assigns })
content = ac.render_to_string(options)
save_json("#{folder}/data.json", content)
end
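# Normalizes @template into the options hash expected by render_to_string:
# a value that parses as a URI is passed as :template, any other string as
# :inline, and a Pathname as :file.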
def handled_template
begin
if URI.parse(@template)
return {template: @template}
end
rescue URI::InvalidURIError
end
if @template.is_a?(String)
return {inline: @template}
elsif @template.is_a?(Pathname)
return {file: @template }
end
end
end
end
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# 'install' is for this IT to work with Maven 2.2.1. Not required for Maven 3.0+.
invoker.goals = clean install ${project.groupId}:${project.artifactId}:${project.version}:analyze
:toc:
:toclevels: 2
= Buhtrap Indicators of Compromise
== 0-day campaign
The blog post about this campaign is available on WeLiveSecurity at
https://www.welivesecurity.com/2019/07/11/buhtrap-zero-day-espionage-campaigns/
=== ESET detection names
- VBA/TrojanDropper.Agent.ABM
- VBA/TrojanDropper.Agent.AGK
- Win32/Spy.Buhtrap.W
- Win32/Spy.Buhtrap.AK
- Win32/RiskWare.Meterpreter.G
=== Network indicators
==== C&C servers
* `++https://hdfilm-seyret.com/help/index.php++`
* `++https://redmond.corp-microsoft.com/help/index.php++`
* `++dns://win10.ipv6-microsoft.org++`
* `++https://services-glbdns2.com/FIGm6uJx0MhjJ2ImOVurJQTs0rRv5Ef2UGoSc++`
* `++https://secure-telemetry.net/wp-login.php++`
=== Samples
All hashes are SHA-1
==== Main packages
----
2f2640720cce2f83ca2f0633330f13651384dd6a
e0f3557ea9f2ba4f7074caa0d0cf3b187c4472ff
c17c335b7ddb5c8979444ec36ab668ae8e4e0a72
----
=== Certificates
[options="header"]
|========================================
|Company name|Fingerprint
|YUVA-TRAVEL|`5e662e84b62ca6bdf6d050a1a4f5db6b28fbb7c5`
|SET&CO LIMITED|`b25def9ac34f31b84062a8e8626b2f0ef589921f`
|========================================
#' kpArrows
#'
#' @description
#'
#' Plots arrows at the specified genomic positions.
#'
#' @details
#'
#' This is one of the functions from karyoploteR implementing the adaptation to the genome
#' context of basic plot functions from R base graphics.
#' Given a set of positions on the genome (chromosome, x0 and x1) and values
#' (y0 and y1) for each of them, it plots arrows going from (x0, y0) to (x1, y1). Data can be
#' provided via a \code{GRanges} object (\code{data}), independent parameters for chr,
#' x0, x1, y0 and y1, or a combination of both.
#' A number of parameters can be used to define exactly where and how the arrows are drawn.
#' In addition, via the ellipsis operator (\code{...}), \code{kpArrows} accepts any parameter
#' valid for \code{arrows} (e.g. \code{code}, \code{lwd}, \code{lty}, \code{col}, ...)
#'
#' @usage kpArrows(karyoplot, data=NULL, chr=NULL, x0=NULL, x1=NULL, y0=NULL, y1=NULL, ymin=NULL, ymax=NULL, data.panel=1, r0=NULL, r1=NULL, clipping=TRUE, ...)
#'
#' @inheritParams kpRect
#'
#' @return
#'
#' Returns the original karyoplot object, unchanged.
#'
#' @seealso \code{\link{plotKaryotype}}, \code{\link{kpRect}}, \code{\link{kpPoints}},
#' @seealso \code{\link{kpPlotRegions}}
#'
#' @examples
#'
#' set.seed(1000)
#' data.points <- sort(createRandomRegions(nregions=500, length.mean=2000000, mask=NA))
#' y <- runif(500, min=0, max=0.8)
#' mcols(data.points) <- data.frame(y0=y, y1=y+0.2)
#'
#' kp <- plotKaryotype("hg19", plot.type=2, chromosomes=c("chr1", "chr2"))
#' kpDataBackground(kp, data.panel=1)
#' kpDataBackground(kp, data.panel=2)
#'
#' kpArrows(kp, data=data.points, col="black", lwd=2, length=0.04)
#'
#' kpArrows(kp, data=data.points, y0=0, y1=1, r0=0.2, r1=0.8, col="lightblue", data.panel=2)
#'
#'
#'
#' @export kpArrows
#'
kpArrows <- function(karyoplot, data=NULL,
chr=NULL, x0=NULL, x1=NULL, y0=NULL, y1=NULL,
ymin=NULL, ymax=NULL, data.panel=1, r0=NULL, r1=NULL,
clipping=TRUE, ...) {
if(!methods::is(karyoplot, "KaryoPlot")) stop("'karyoplot' must be a valid 'KaryoPlot' object")
karyoplot$beginKpPlot()
on.exit(karyoplot$endKpPlot())
pp <- prepareParameters4("kpArrows", karyoplot=karyoplot, data=data, chr=chr, x0=x0, x1=x1,
y0=y0, y1=y1, ymin=ymin, ymax=ymax, r0=r0, r1=r1,
data.panel=data.panel, ...)
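#Map the genomic (chr, x) positions and the y values onto plot coordinates using the karyoplot coordinate change function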
ccf <- karyoplot$coord.change.function
x0plot <- ccf(chr=pp$chr, x=pp$x0, data.panel=data.panel)$x
x1plot <- ccf(chr=pp$chr, x=pp$x1, data.panel=data.panel)$x
y0plot <- ccf(chr=pp$chr, y=pp$y0, data.panel=data.panel)$y
y1plot <- ccf(chr=pp$chr, y=pp$y1, data.panel=data.panel)$y
processClipping(karyoplot=karyoplot, clipping=clipping, data.panel=data.panel)
#Filter the additional parameters using the 'filter' vector returned by prepareParameters4
dots <- filterParams(list(...), pp$filter, pp$original.length)
#And call the base plotting function with both the standard parameters and the modified dots parameters
params <- c(list(x0=x0plot, x1=x1plot, y0=y0plot, y1=y1plot), dots)
do.call(graphics::arrows, params)
#graphics::arrows(x0=x0plot, x1=x1plot, y0=y0plot, y1=y1plot, ...)
invisible(karyoplot)
}
{{fbdoc item="title" value="STDCALL"}}----
Specifies a //stdcall//-style calling convention in a procedure declaration
{{fbdoc item="syntax"}}##
[[KeyPgSub|Sub]] //name// **Stdcall** [[[KeyPgOverload|Overload]]] [[[KeyPgAlias|Alias]] //"""alias"""//] ( //parameters// )
[[KeyPgFunction|Function]] //name// **Stdcall** [[[KeyPgOverload|Overload]]] [[[KeyPgAlias|Alias]] //"""alias"""//] ( //parameters// ) [ [[KeyPgByrefFunction|Byref]] ] [[KeyPgAs|as]] //return_type//
##
{{fbdoc item="desc"}}
In procedure declarations, ##**Stdcall**## specifies that a procedure will use the ##**Stdcall**## calling convention. In the ##**Stdcall**## calling convention, parameters are passed (pushed onto the stack) in the reverse order in which they are listed, that is, from right to left. The procedure need not preserve the ##EAX##, ##ECX## or ##EDX## registers, but must clean up the stack (pop any parameters) before it returns.
##**Stdcall**## is not allowed to be used with variadic procedure declarations (those with the last parameter listed as "##[[KeyPgDots|...]]##").
##**Stdcall**## is the default calling convention on Windows, unless another calling convention is explicitly specified or implied by one of the ##[[KeyPgExternBlock|EXTERN blocks]]##. ##**Stdcall**## is also the standard (or most common) calling convention used in BASIC languages, and the Windows API.
{{fbdoc item="ex"}}
{{fbdoc item="filename" value="examples/manual/procs/stdcall.bas"}}%%(freebasic)
Declare Function Example stdcall (param1 As Integer, param2 As Integer) As Integer
Declare Function Example2 cdecl (param1 As Integer, param2 As Integer) As Integer
Function Example stdcall (param1 As Integer, param2 As Integer) As Integer
' This is an STDCALL function, the first parameter on the stack is param2, since it was pushed last.
Print param1, param2
Return param1 Mod param2
End Function
Function Example2 cdecl (param1 As Integer, param2 As Integer) As Integer
' This is a CDECL function, the first parameter on the stack is param1, since it was pushed last.
Print param1, param2
Return param1 Mod param2
End Function
%%
{{fbdoc item="target"}}
- On Windows systems, ##**Stdcall**## procedures have an ##"@//N//"## decoration added to their internal/external name, where ##//N//## is the size of the parameter list, in bytes.
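For illustration, here is a hypothetical declaration (##DoWork## is not a real routine) whose two 4-byte ##[[KeyPgLong|Long]]## parameters make an 8-byte parameter list, so on 32-bit Windows the emitted symbol gets an ##"@8"## suffix:
%%(freebasic)
' Hypothetical declaration, shown only to illustrate the "@N" decoration:
' two 4-byte Long parameters give an 8-byte parameter list, so on 32-bit
' Windows the generated symbol name carries an "@8" suffix.
Declare Function DoWork Stdcall (ByVal a As Long, ByVal b As Long) As Long
%%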
{{fbdoc item="diff"}}
- New to ""FreeBASIC""
{{fbdoc item="see"}}
- ##[[KeyPgPascal|Pascal]]##, ##[[KeyPgCdecl|Cdecl]]##
- ##[[KeyPgDeclare|Declare]]##
- ##[[KeyPgSub|Sub]]##, ##[[KeyPgFunction|Function]]##
{{fbdoc item="back" value="CatPgProcedures|Procedures"}}
export {
"PolyhedralObject",
-- Cone object with associated methods:
"Cone",
"coneFromVData",
"coneFromRays" => "coneFromVData",
"coneFromHData",
"coneFromInequalities" => "coneFromHData",
-- Polyhedron object with associated methods:
"Polyhedron",
"convexHull",
"polyhedron",
"polyhedronFromHData",
"polyhedronFromInequalities" => "polyhedronFromHData",
"dualFaceRepresentationMap",
"facets",
"Fan",
"PolyhedralComplex",
"intersection",
"fan",
"fanFromGfan",
"addCone",
"polyhedralComplex",
"addPolyhedron",
"ambDim",
"cones",
"maxCones",
"maxPolyhedra",
"halfspaces",
"hyperplanes",
"linSpace",
"linealitySpace",
"linearTransform",
"polyhedra",
"vertices",
"areCompatible",
"commonFace",
"contains",
"isCompact",
"isComplete",
"isEmpty",
"isFace",
"isFullDimensional",
"isLatticePolytope",
"isPointed",
"isPolytopal",
"isPure",
"isReflexive",
"isSimplicial",
"isSmooth",
"isVeryAmple",
"faces",
"facesAsPolyhedra",
"facesAsCones",
"fVector",
"incompCones",
"incompPolyhedra",
"inInterior",
"interiorPoint",
"interiorVector",
"interiorLatticePoints",
"latticePoints",
"latticeVolume",
"maxFace",
"minFace",
"objectiveVector",
"minkSummandCone",
"mixedVolume",
"polytope",
"posHull" => "coneFromVData",
"proximum",
"skeleton",
"smallestFace",
"smoothSubfan",
"stellarSubdivision",
"tailCone",
"triangulate",
"volume",
"vertexEdgeMatrix",
"vertexFacetMatrix",
"affineHull",
"affineImage",
"affinePreimage",
"bipyramid",
"ccRefinement",
"directProduct",
"dualCone",
"faceFan",
"imageFan",
"minkowskiSum",
"normalFan",
"nVertices",
"polar",
"polarFace",
"pyramid",
"sublatticeBasis",
"toSublattice",
"crossPolytope",
"cellDecompose",
"cyclicPolytope",
"ehrhart",
"emptyPolyhedron",
"hirzebruch",
"hypercube",
"newtonPolytope",
"posOrthant",
"secondaryPolytope",
"statePolytope",
"stdSimplex",
"simplex",
"saveSession",
"regularTriangulation",
"barycentricTriangulation",
"regularSubdivision",
"minimalNonFaces",
"stanleyReisnerRing"
}
// +build windows
package main
import (
"fmt"
"log"
"os"
ole "github.com/go-ole/go-ole"
"github.com/go-ole/go-ole/oleutil"
)
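// writeExample creates a new workbook, writes 12345 into cell A1 of the first
// worksheet and saves it to filepath in the Excel 97-2003 (.xls) format,
// removing any existing file at that path first.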
func writeExample(excel, workbooks *ole.IDispatch, filepath string) {
// ref: https://msdn.microsoft.com/zh-tw/library/office/ff198017.aspx
// http://stackoverflow.com/questions/12159513/what-is-the-correct-xlfileformat-enumeration-for-excel-97-2003
const xlExcel8 = 56
workbook := oleutil.MustCallMethod(workbooks, "Add", nil).ToIDispatch()
defer workbook.Release()
worksheet := oleutil.MustGetProperty(workbook, "Worksheets", 1).ToIDispatch()
defer worksheet.Release()
cell := oleutil.MustGetProperty(worksheet, "Cells", 1, 1).ToIDispatch()
oleutil.PutProperty(cell, "Value", 12345)
cell.Release()
activeWorkBook := oleutil.MustGetProperty(excel, "ActiveWorkBook").ToIDispatch()
defer activeWorkBook.Release()
os.Remove(filepath)
// ref: https://msdn.microsoft.com/zh-tw/library/microsoft.office.tools.excel.workbook.saveas.aspx
oleutil.MustCallMethod(activeWorkBook, "SaveAs", filepath, xlExcel8, nil, nil).ToIDispatch()
//time.Sleep(2 * time.Second)
// let excel could close without asking
// oleutil.PutProperty(workbook, "Saved", true)
// oleutil.CallMethod(workbook, "Close", false)
}
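// readExample opens the given workbook, prints the number of sheets and dumps
// the values of the first five columns of the first two rows of worksheet 1,
// stopping a row early if a cell value cannot be read.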
func readExample(fileName string, excel, workbooks *ole.IDispatch) {
workbook, err := oleutil.CallMethod(workbooks, "Open", fileName)
if err != nil {
log.Fatalln(err)
}
defer workbook.ToIDispatch().Release()
sheets := oleutil.MustGetProperty(excel, "Sheets").ToIDispatch()
sheetCount := (int)(oleutil.MustGetProperty(sheets, "Count").Val)
fmt.Println("sheet count=", sheetCount)
sheets.Release()
worksheet := oleutil.MustGetProperty(workbook.ToIDispatch(), "Worksheets", 1).ToIDispatch()
defer worksheet.Release()
for row := 1; row <= 2; row++ {
for col := 1; col <= 5; col++ {
cell := oleutil.MustGetProperty(worksheet, "Cells", row, col).ToIDispatch()
val, err := oleutil.GetProperty(cell, "Value")
if err != nil {
break
}
fmt.Printf("(%d,%d)=%+v toString=%s\n", col, row, val.Value(), val.ToString())
cell.Release()
}
}
}
func showMethodsAndProperties(i *ole.IDispatch) {
n, err := i.GetTypeInfoCount()
if err != nil {
log.Fatalln(err)
}
tinfo, err := i.GetTypeInfo()
if err != nil {
log.Fatalln(err)
}
fmt.Println("n=", n, "tinfo=", tinfo)
}
func main() {
log.SetFlags(log.Flags() | log.Lshortfile)
ole.CoInitialize(0)
unknown, _ := oleutil.CreateObject("Excel.Application")
excel, _ := unknown.QueryInterface(ole.IID_IDispatch)
oleutil.PutProperty(excel, "Visible", true)
workbooks := oleutil.MustGetProperty(excel, "Workbooks").ToIDispatch()
cwd, _ := os.Getwd()
writeExample(excel, workbooks, cwd+"\\write.xls")
readExample(cwd+"\\excel97-2003.xls", excel, workbooks)
showMethodsAndProperties(workbooks)
workbooks.Release()
// oleutil.CallMethod(excel, "Quit")
excel.Release()
ole.CoUninitialize()
}
{
serverName: 'Hexagon Tests',
bindPort: 0,
bindAddress: '0.0.0.0'
}
// Copyright 2009 the Sputnik authors. All rights reserved.
// This code is governed by the BSD license found in the LICENSE file.
/**
* @name: S11.9.5_A8_T2;
* @section: 11.9.5, 11.9.6;
* @assertion: If Type(x) is different from Type(y), return true;
* @description: x or y is primitive number;
*/
//CHECK#1
if (!(1 !== new Number(1))) {
$ERROR('#1: 1 !== new Number(1)');
}
//CHECK#2
if (!(1 !== true)) {
$ERROR('#2: 1 !== true');
}
//CHECK#3
if (!(1 !== new Boolean(1))) {
$ERROR('#3: 1 !== new Boolean(1)');
}
//CHECK#4
if (!(1 !== "1")) {
$ERROR('#4: 1 !== "1"');
}
//CHECK#5
if (!(1 !== new String(1))) {
$ERROR('#5: 1 !== new String(1)');
}
//CHECK#6
if (!(new Number(0) !== 0)) {
$ERROR('#6: new Number(0) !== 0');
}
//CHECK#7
if (!(false !== 0)) {
$ERROR('#7: false !== 0');
}
//CHECK#8
if (!(new Boolean(0) !== 0)) {
$ERROR('#8: new Boolean(0) !== 0');
}
//CHECK#9
if (!("0" !== 0)) {
$ERROR('#9: "0" !== 0');
}
//CHECK#10
if (!(new String(0) !== 0)) {
$ERROR('#10: new String(0) !== 0');
}
//CHECK#11
if (!(1 !== {valueOf: function () {return 1}})) {
$ERROR('#11: 1 !== {valueOf: function () {return 1}}');
}
package com.example.sample_android.reducer;
import com.example.sample_android.action.Check;
import com.example.sample_android.state.TodoItem;
import me.tatarka.redux.Reducer;
public class CheckReducer implements Reducer<Check, TodoItem> {
@Override
public TodoItem reduce(Check action, TodoItem item) {
return TodoItem.create(item.id(), item.text(), action.checked());
}
}
#include <cstdint>
#include <limits>
#include <immintrin.h>
#ifdef _OPENMP
#include <omp.h>
#endif
#include <fbgemm/QuantUtils.h>
namespace caffe2 {
namespace internal {
template <typename T>
void SpatialBNNHWCAVX2(
const int N,
const int C,
const int HxW,
const int in_zero_point,
const int out_zero_point,
const T* X,
const float* alpha,
const float* beta,
T* Y,
bool relu_fused);
template <bool ReluFused>
void SpatialBNNHWCAVX2_uint8(
const int N,
const int C,
const int HxW,
const int in_zero_point,
const int out_zero_point,
const uint8_t* X,
const float* alpha,
const float* beta,
uint8_t* Y) {
constexpr int kVLen = 8;
const int outer_size = N * HxW;
const __m256i min_v = _mm256_set1_epi32(std::numeric_limits<uint8_t>::min());
const __m256i max_v = _mm256_set1_epi32(std::numeric_limits<uint8_t>::max());
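// shuffle_mask_v and permute_mask_v below compact the low byte of each of the
// eight 32-bit lanes into the low 8 bytes of the vector, so the eight uint8
// results can be written back with a single 64-bit store.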
const __m256i shuffle_mask_v = _mm256_set_epi8(
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0x0c,
0x08,
0x04,
0x00,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0xff,
0x0c,
0x08,
0x04,
0x00);
const __m256i permute_mask_v =
_mm256_set_epi32(0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00);
const __m256i in_zero_point_v = _mm256_set1_epi32(in_zero_point);
const __m256i out_zero_point_v = _mm256_set1_epi32(out_zero_point);
#ifdef _OPENMP
#pragma omp parallel for
#endif
for (int i = 0; i < outer_size; ++i) {
int n = C / kVLen * kVLen;
int r = C % kVLen;
const uint8_t* X_ptr = X + i * C;
uint8_t* Y_ptr = Y + i * C;
for (int j = 0; j < n; j += kVLen) {
const __m256i cur_v = _mm256_cvtepu8_epi32(
_mm_loadl_epi64(reinterpret_cast<const __m128i*>(X_ptr + j)));
const __m256 cur_v_float =
_mm256_cvtepi32_ps(_mm256_sub_epi32(cur_v, in_zero_point_v));
const __m256 alpha_v = _mm256_loadu_ps(alpha + j);
const __m256 beta_v = _mm256_loadu_ps(beta + j);
const __m256 result_float_v =
_mm256_fmadd_ps(alpha_v, cur_v_float, beta_v);
const __m256i result_rounded_v = _mm256_cvtps_epi32(result_float_v);
__m256i result_v = _mm256_add_epi32(result_rounded_v, out_zero_point_v);
if (ReluFused) {
result_v = _mm256_max_epi32(result_v, out_zero_point_v);
}
__m256i clipped_v =
_mm256_max_epi32(min_v, _mm256_min_epi32(max_v, result_v));
clipped_v = _mm256_shuffle_epi8(clipped_v, shuffle_mask_v);
clipped_v = _mm256_permutevar8x32_epi32(clipped_v, permute_mask_v);
*reinterpret_cast<int64_t*>(Y_ptr + j) =
_mm256_extract_epi64(clipped_v, 0);
}
for (int j = 0; j < r; ++j) {
long quantized_down = out_zero_point +
std::lrintf(alpha[n + j] * (X_ptr[n + j] - in_zero_point) +
beta[n + j]);
if (ReluFused) { // static if
quantized_down = std::max<long>(quantized_down, out_zero_point);
}
Y_ptr[n + j] = fbgemm::clamp<long, uint8_t>(quantized_down, 8);
}
}
}
template <>
void SpatialBNNHWCAVX2<uint8_t>(
const int N,
const int C,
const int HxW,
const int in_zero_point,
const int out_zero_point,
const uint8_t* X,
const float* alpha,
const float* beta,
uint8_t* Y,
bool relu_fused) {
if (relu_fused) {
SpatialBNNHWCAVX2_uint8<true>(
N, C, HxW, in_zero_point, out_zero_point, X, alpha, beta, Y);
} else {
SpatialBNNHWCAVX2_uint8<false>(
N, C, HxW, in_zero_point, out_zero_point, X, alpha, beta, Y);
}
}
} // namespace internal
} // namespace caffe2
###### What is a brain?
# Cerebral organoids are becoming more brainlike

> Print edition | Science and technology | Aug 31st 2019
AT WHAT POINT does a mass of nerve cells growing in a laboratory Petri dish become a brain? That question was first raised seriously in 2013 by the work of Madeline Lancaster, a developmental biologist at the Medical Research Council’s Laboratory of Molecular Biology, in Cambridge, Britain. That year Dr Lancaster and her colleagues grew the first human-derived “cerebral organoid”. They did so using pluripotent human stem cells, which are cells that have the potential to develop into any type of tissue in the human body. The researchers coaxed these cells into becoming nervous tissue that organised itself, albeit crudely, as structures which had some of the cell types and anatomical features of embryonic human brains.
Since then, Dr Lancaster’s work has advanced by leaps and bounds. In March, for example, she announced that her organoids, when they are connected to the spinal cord and back-muscle of a mouse, could make that muscle twitch. This means cerebral organoids are generating electrical impulses. And other scientists are joining the fray. One such, Alysson Muotri of the University of California, San Diego, has published this week, in Cell Stem Cell, a study that looks in more detail at cerebral-organoid electrical activity.
To carry out their study, Dr Muotri and his colleagues grew and examined hundreds of organoids, each a mere half-millimetre in diameter, over the course of ten months. To probe individual neurons within these they used tiny, fluid-filled pipettes that acted as electrodes small enough to maintain contact with the surface of an individual cell.
Neurons probed in this way proved electrically active, so the researchers went on to employ arrays of electrodes inserted simultaneously into different parts of an organoid to study its overall activity. They looked in detail, once a week, at each of the organoids that were chosen for examination. This revealed that, by six months of age, the electrical activity in different parts of an individual organoid had become synchronised.
Such synchronicity is also a feature of real brains, including those of preterm human infants of about the same age as Dr Muotri’s organoids. It is regarded as an important part of healthy brain function. So, to check how similar natural and organoid brain waves actually are, the research team ran those waves obtained from their organoids through a computer program that had previously been trained to recognise the electrical activity generated by the brains of premature babies. This algorithm proved able to predict to within a week the ages of laboratory-grown organoids 28 or more weeks old. That suggests those organoids are indeed growing in a manner similar to natural human brains.
If further research confirms this opinion, then for medical science that conformity with natural development could be a boon. Neuroscientists have long been held back by the differences between human brains and those of other animals—particularly the brains of rodents, the analogue most commonly employed in medical research. The purpose of the work that Dr Lancaster, Dr Muotri and others involved in the field are engaged in has always been to produce better laboratory models of neurological and psychiatric diseases, so that treatments may be developed.
And, although it may be some time in the future, there is also the possibility that organoids might one day be used as transplant material in people who have had part of their brains destroyed by strokes.
For ethicists, however, work like this raises important issues. A sub-millimetre piece of tissue, even one that displays synchronised electrical pulsing, is unlikely to have anything which a full-grown human being would recognise as consciousness. But if organoids grown from human stem cells start to get bigger than that, then the question that was posed back in 2013 becomes pressing.■
--
单词注释:
1.cerebral['seribrәl]:a. 脑的, 大脑的 [医] 小脑的
2.organoids[]: 类器官(organoid的复数)
3.brainlike[]:[网络] 分子能展现出类脑
4.Aug[]:abbr. 八月(August)
5.petri[pɛtrə]:abbr. petroleum 石油
6.Madeline[]:n. 玛德琳(女子名)
7.lancaster['læŋkәstә]:n. 兰开斯特(美国成市)
8.developmental[di.velәp'mentәl]:a. 发展的, 进化的, 启发的 [医] 发育的
9.biologist[bai'ɒlәdʒist]:n. 生物学家 [医] 生物学家
10.molecular[mә'lekjulә]:a. 分子的, 由分子组成的 [医] 分子的
11.Cambridge['keimbridʒ]:n. 剑桥
12.cerebral['seribrәl]:a. 脑的, 大脑的 [医] 小脑的
13.organoid['ɔ:^әnɔid]:n. 类器官 a. 器官状的
14.pluripotent[pljә'ripәtәnt]:a. [生]多能(性)的
15.coax[kәuks]:vt. 哄, 诱骗, 耐心地摆弄 vi. 哄骗 [计] 同轴电缆
16.albeit[ɔ:l'bi:it]:conj. 尽管, 虽然
17.crudely['kru:dli]:adv. 照自然状态, 未成熟地, 粗杂地
18.anatomical[.ænә'tɒmikl]:a. 解剖的, 解剖学的, 构造上的 [医] 解剖学的
19.embryonic[.embri'ɒnik]:a. 萌芽的, 初期的, 未成熟 [医] 胚胎的
20.spinal['spainl]:a. 针的, 尖刺的, 尖刺状突起的, 脊骨的 [医] 棘折; 脊柱的
21.twitch[twitʃ]:vi. 急拉, 抽搐, 阵痛 vt. 急拉, 攫取, 抽动 n. 急拉, 抽搐, 阵痛
22.impulse['impʌls]:n. 冲动, 驱使, 刺激, 推动, 冲力, 建议, 脉冲 vt. 推动
23.fray[frei]:n. 磨损, 打架, 争论 vt. 使磨损 vi. 被磨损
24.California[.kæli'fɒ:njә]:n. 加利福尼亚
25.san[sɑ:n]:abbr. 存储区域网(Storage Area Networking)
26.diego[]:n. 迭戈(男子名)
27.probe[prәub]:n. 探索, 调查, 探针, 探测器 v. 用探针探测, 调查, 探索
28.neuron['njuәrɔn]:n. 神经原, 轴索, 神经细胞 [计] 神经元
29.pipette[pi'pet]:n. 吸量管, 吸移管 [化] 吸移管
30.electrode[i'lektrәud]:n. 电极 [化] 电极; 焊条; 电焊条
31.electrically[i'lektrikәli]:adv. 电力地;有关电地
32.array[ә'rei]:n. 排列, 衣服, 大批, 军队 vt. 布署, 打扮, 排列 [计] 数组; 阵列
33.simultaneously[simәl'teiniәsly; (?@) saim-]:adv. 同时发生, 一齐, 同时, 同时存在
34.synchronise['siŋkrәnaiz, 'sin-]:vi. (使)同时发生, (使)整步, (使)同步, (使)同速进行 vt. 使在时间上一致, 校准, 把钟表拨至相同的时间, 把...并列对照
35.synchronicity[,siŋkrә'nisәti,,sin-]:n. [心]同步性,同时发生
36.preterm[ˌpri:'tɜ:m]:a. 早产的
37.premature[.premә'tjuә]:a. 早产的, 过早的, 不成熟的 n. 早产儿, 过早发生的事物
38.algorithm['ælgәriðm]:n. 算法 [计] 算法
39.conformity[kәn'fɒ:miti]:n. 遵照, 适合, 一致, 相似 [计] 符合度
40.boon[bu:n]:n. 恩惠
41.neuroscientist['njʊərəʊsaɪəntɪst]: [医]神经科学家:神经科学各分支的专家
42.rodent['rәudәnt]:a. 咬的, 啮齿类的 n. 啮齿动物
43.analogue['ænәlɒg]:n. 类似物, 相似情况 [医] 类似物, 同型物, 相似器官, 同功异质体
44.alway['ɔ:lwei]:adv. 永远;总是(等于always)
45.neurological[.njurә'lɒdʒikl]:a. 神经病学上的
46.psychiatric[saiki'ætrik; (?@) si-]:a. 精神病学的, 医精神病的 [医] 精神病学的
47.ethicist['eθisist]:伦理学家
package com.clarkparsia.pellet.datatypes;
import static java.lang.String.format;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.mindswap.pellet.Literal;
import org.mindswap.pellet.PelletOptions;
import org.mindswap.pellet.utils.ATermUtils;
import aterm.ATermAppl;
import aterm.ATermList;
import com.clarkparsia.pellet.datatypes.exceptions.InvalidConstrainingFacetException;
import com.clarkparsia.pellet.datatypes.exceptions.InvalidLiteralException;
import com.clarkparsia.pellet.datatypes.exceptions.UnrecognizedDatatypeException;
import com.clarkparsia.pellet.datatypes.types.bool.XSDBoolean;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDDate;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDDateTime;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDDateTimeStamp;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDGDay;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDGMonth;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDGMonthDay;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDGYear;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDGYearMonth;
import com.clarkparsia.pellet.datatypes.types.datetime.XSDTime;
import com.clarkparsia.pellet.datatypes.types.duration.XSDDuration;
import com.clarkparsia.pellet.datatypes.types.floating.XSDDouble;
import com.clarkparsia.pellet.datatypes.types.floating.XSDFloat;
import com.clarkparsia.pellet.datatypes.types.real.OWLRational;
import com.clarkparsia.pellet.datatypes.types.real.OWLReal;
import com.clarkparsia.pellet.datatypes.types.real.XSDByte;
import com.clarkparsia.pellet.datatypes.types.real.XSDDecimal;
import com.clarkparsia.pellet.datatypes.types.real.XSDInt;
import com.clarkparsia.pellet.datatypes.types.real.XSDInteger;
import com.clarkparsia.pellet.datatypes.types.real.XSDLong;
import com.clarkparsia.pellet.datatypes.types.real.XSDNegativeInteger;
import com.clarkparsia.pellet.datatypes.types.real.XSDNonNegativeInteger;
import com.clarkparsia.pellet.datatypes.types.real.XSDNonPositiveInteger;
import com.clarkparsia.pellet.datatypes.types.real.XSDPositiveInteger;
import com.clarkparsia.pellet.datatypes.types.real.XSDShort;
import com.clarkparsia.pellet.datatypes.types.real.XSDUnsignedByte;
import com.clarkparsia.pellet.datatypes.types.real.XSDUnsignedInt;
import com.clarkparsia.pellet.datatypes.types.real.XSDUnsignedLong;
import com.clarkparsia.pellet.datatypes.types.real.XSDUnsignedShort;
import com.clarkparsia.pellet.datatypes.types.text.RDFPlainLiteral;
import com.clarkparsia.pellet.datatypes.types.text.XSDLanguage;
import com.clarkparsia.pellet.datatypes.types.text.XSDNCName;
import com.clarkparsia.pellet.datatypes.types.text.XSDNMToken;
import com.clarkparsia.pellet.datatypes.types.text.XSDName;
import com.clarkparsia.pellet.datatypes.types.text.XSDNormalizedString;
import com.clarkparsia.pellet.datatypes.types.text.XSDString;
import com.clarkparsia.pellet.datatypes.types.text.XSDToken;
import com.clarkparsia.pellet.datatypes.types.uri.XSDAnyURI;
/**
* <p>
* Title: Datatype Reasoner Implementation
* </p>
* <p>
* Description: Default implementation of interface {@link DatatypeReasoner}
* </p>
* <p>
* Copyright: Copyright (c) 2009
* </p>
* <p>
* Company: Clark & Parsia, LLC. <http://www.clarkparsia.com>
* </p>
*
* @author Mike Smith
*/
public class DatatypeReasonerImpl implements DatatypeReasoner {
private static final Map<ATermAppl, Datatype<?>> coreDatatypes;
private static final DataRange<?> EMPTY_RANGE;
private static final Logger log;
private static final DataRange<?> TRIVIALLY_SATISFIABLE;
static {
log = Logger.getLogger(DatatypeReasonerImpl.class.getCanonicalName());
coreDatatypes = new HashMap<ATermAppl, Datatype<?>>();
{
coreDatatypes.put(RDFPlainLiteral.getInstance().getName(), RDFPlainLiteral.getInstance());
coreDatatypes.put(XSDString.getInstance().getName(), XSDString.getInstance());
coreDatatypes.put(XSDNormalizedString.getInstance().getName(), XSDNormalizedString.getInstance());
coreDatatypes.put(XSDToken.getInstance().getName(), XSDToken.getInstance());
coreDatatypes.put(XSDLanguage.getInstance().getName(), XSDLanguage.getInstance());
coreDatatypes.put(XSDNMToken.getInstance().getName(), XSDNMToken.getInstance());
coreDatatypes.put(XSDName.getInstance().getName(), XSDName.getInstance());
coreDatatypes.put(XSDNCName.getInstance().getName(), XSDNCName.getInstance());
}
coreDatatypes.put(XSDBoolean.getInstance().getName(), XSDBoolean.getInstance());
{
coreDatatypes.put(OWLReal.getInstance().getName(), OWLReal.getInstance());
coreDatatypes.put(OWLRational.getInstance().getName(), OWLRational.getInstance());
coreDatatypes.put(XSDDecimal.getInstance().getName(), XSDDecimal.getInstance());
coreDatatypes.put(XSDInteger.getInstance().getName(), XSDInteger.getInstance());
coreDatatypes.put(XSDLong.getInstance().getName(), XSDLong.getInstance());
coreDatatypes.put(XSDInt.getInstance().getName(), XSDInt.getInstance());
coreDatatypes.put(XSDShort.getInstance().getName(), XSDShort.getInstance());
coreDatatypes.put(XSDByte.getInstance().getName(), XSDByte.getInstance());
coreDatatypes.put(XSDNonNegativeInteger.getInstance().getName(), XSDNonNegativeInteger.getInstance());
coreDatatypes.put(XSDNonPositiveInteger.getInstance().getName(), XSDNonPositiveInteger.getInstance());
coreDatatypes.put(XSDNegativeInteger.getInstance().getName(), XSDNegativeInteger.getInstance());
coreDatatypes.put(XSDPositiveInteger.getInstance().getName(), XSDPositiveInteger.getInstance());
coreDatatypes.put(XSDUnsignedLong.getInstance().getName(), XSDUnsignedLong.getInstance());
coreDatatypes.put(XSDUnsignedInt.getInstance().getName(), XSDUnsignedInt.getInstance());
coreDatatypes.put(XSDUnsignedShort.getInstance().getName(), XSDUnsignedShort.getInstance());
coreDatatypes.put(XSDUnsignedByte.getInstance().getName(), XSDUnsignedByte.getInstance());
}
coreDatatypes.put(XSDDouble.getInstance().getName(), XSDDouble.getInstance());
coreDatatypes.put(XSDFloat.getInstance().getName(), XSDFloat.getInstance());
{
coreDatatypes.put(XSDDateTime.getInstance().getName(), XSDDateTime.getInstance());
coreDatatypes.put(XSDDateTimeStamp.getInstance().getName(), XSDDateTimeStamp.getInstance());
}
{
coreDatatypes.put(XSDDate.getInstance().getName(), XSDDate.getInstance());
coreDatatypes.put(XSDGYearMonth.getInstance().getName(), XSDGYearMonth.getInstance());
coreDatatypes.put(XSDGMonthDay.getInstance().getName(), XSDGMonthDay.getInstance());
coreDatatypes.put(XSDGYear.getInstance().getName(), XSDGYear.getInstance());
coreDatatypes.put(XSDGMonth.getInstance().getName(), XSDGMonth.getInstance());
coreDatatypes.put(XSDGDay.getInstance().getName(), XSDGDay.getInstance());
coreDatatypes.put(XSDTime.getInstance().getName(), XSDTime.getInstance());
}
coreDatatypes.put(XSDDuration.getInstance().getName(), XSDDuration.getInstance());
coreDatatypes.put(XSDAnyURI.getInstance().getName(), XSDAnyURI.getInstance());
}
static {
EMPTY_RANGE = new EmptyDataRange<Object>();
TRIVIALLY_SATISFIABLE = new DataRange<Object>() {
public boolean contains(Object value) {
return true;
}
public boolean containsAtLeast(int n) {
return true;
}
@Override
public boolean equals(Object obj) {
return this == obj;
}
public Object getValue(int i) {
throw new UnsupportedOperationException();
}
@Override
public int hashCode() {
return super.hashCode();
}
public boolean isEmpty() {
return false;
}
public boolean isEnumerable() {
return false;
}
public boolean isFinite() {
return false;
}
public int size() {
throw new UnsupportedOperationException();
}
public Iterator<Object> valueIterator() {
throw new UnsupportedOperationException();
}
};
}
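// Returns the value enumeration containing the fewest values in the given collection.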
private static <T> DataValueEnumeration<? extends T> findSmallestEnumeration(
Collection<DataValueEnumeration<? extends T>> ranges) {
DataValueEnumeration<? extends T> ret = null;
int best = Integer.MAX_VALUE;
for (DataValueEnumeration<? extends T> r : ranges) {
final DataValueEnumeration<? extends T> e = r;
final int s = e.size();
if (s < best) {
ret = e;
best = s;
}
}
return ret;
}
private static final ATermAppl getDatatypeName(ATermAppl literal) {
if (!ATermUtils.isLiteral(literal)) {
final String msg = "Method expected an ATermAppl literal as an argument";
log.severe(msg);
throw new IllegalArgumentException(msg);
}
final ATermAppl dtName = (ATermAppl) literal.getArgument(ATermUtils.LIT_URI_INDEX);
if (ATermUtils.EMPTY.equals(dtName)) {
final String msg = "Untyped literals not supported by this datatype reasoner";
log.severe(msg);
throw new IllegalArgumentException(msg);
}
return dtName;
}
private static int inequalityCount(Set<Integer>[] nes, int xIndex) {
final Set<Integer> others = nes[xIndex];
return others == null ? 0 : others.size();
}
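// Splits a conjunction of data ranges into positive and negated value enumerations
// and positive and negated datatype restrictions; anything else (other than the
// trivially satisfiable range) is logged as an unknown datatype.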
private static <T> void partitionDConjunction(Collection<DataRange<? extends T>> dconjunction,
Set<DataValueEnumeration<? extends T>> positiveEnumerations,
Set<DataValueEnumeration<? extends T>> negativeEnumerations,
Set<RestrictedDatatype<? extends T>> positiveRestrictions,
Set<RestrictedDatatype<? extends T>> negativeRestrictions) {
for (DataRange<? extends T> dr : dconjunction) {
if (dr instanceof DataValueEnumeration) {
positiveEnumerations.add((DataValueEnumeration<? extends T>) dr);
}
else if (dr instanceof RestrictedDatatype) {
positiveRestrictions.add((RestrictedDatatype<? extends T>) dr);
}
else if (dr instanceof NegatedDataRange) {
DataRange<? extends T> ndr = ((NegatedDataRange<? extends T>) dr).getDataRange();
if (ndr instanceof DataValueEnumeration) {
negativeEnumerations.add((DataValueEnumeration<? extends T>) ndr);
}
else if (ndr instanceof RestrictedDatatype) {
negativeRestrictions.add((RestrictedDatatype<? extends T>) ndr);
}
else if (dr != TRIVIALLY_SATISFIABLE) {
log.warning("Unknown datatype: " + dr);
}
}
else if (dr != TRIVIALLY_SATISFIABLE) {
log.warning("Unknown datatype: " + dr);
}
}
}
private static boolean removeInequalities(Set<Integer>[] nes, int xIndex) {
final Set<Integer> others = nes[xIndex];
if (others == null) {
return false;
}
else {
for (Integer yIndex : others) {
Set<Integer> s = nes[yIndex];
if (s == null) {
throw new IllegalStateException();
}
if (!s.remove(xIndex)) {
throw new IllegalStateException();
}
}
return true;
}
}
private final Set<ATermAppl> declaredUndefined;
private final NamedDataRangeExpander expander;
private final Map<ATermAppl, ATermAppl> namedDataRanges;
public DatatypeReasonerImpl() {
declaredUndefined = new HashSet<ATermAppl>();
expander = new NamedDataRangeExpander();
namedDataRanges = new HashMap<ATermAppl, ATermAppl>();
}
private boolean containedIn(Object value, ATermAppl dconjunction) throws InvalidConstrainingFacetException,
InvalidLiteralException, UnrecognizedDatatypeException {
if (ATermUtils.isAnd(dconjunction)) {
for (ATermList l = (ATermList) dconjunction.getArgument(0); !l.isEmpty(); l = l.getNext()) {
if (!getDataRange((ATermAppl) l.getFirst()).contains(value)) {
return false;
}
}
return true;
}
else {
return getDataRange(dconjunction).contains(value);
}
}
public boolean containsAtLeast(int n, Collection<ATermAppl> ranges) throws UnrecognizedDatatypeException,
InvalidConstrainingFacetException, InvalidLiteralException {
ATermAppl and = ATermUtils.makeAnd(ATermUtils.makeList(ranges));
ATermAppl dnf = DNF.dnf(expander.expand(and, namedDataRanges));
if (ATermUtils.isOr(dnf)) {
List<DataRange<?>> disjuncts = new ArrayList<DataRange<?>>();
for (ATermList l = (ATermList) dnf.getArgument(0); !l.isEmpty(); l = l.getNext()) {
final DataRange<?> dr = normalizeVarRanges((ATermAppl) l.getFirst());
if (!dr.isEmpty()) {
disjuncts.add(dr);
}
}
final DataRange<?> disjunction = getDisjunction(disjuncts);
return disjunction.containsAtLeast(n);
}
else {
final DataRange<?> dr = normalizeVarRanges(dnf);
return dr.containsAtLeast(n);
}
}
public boolean declare(ATermAppl name) {
if (isDeclared(name)) {
return false;
}
else {
declaredUndefined.add(name);
return true;
}
}
public ATermAppl getCanonicalRepresentation(ATermAppl literal) throws InvalidLiteralException,
UnrecognizedDatatypeException {
final ATermAppl dtName = getDatatypeName(literal);
final Datatype<?> dt = getDatatype(dtName);
if (dt == null) {
switch (PelletOptions.UNDEFINED_DATATYPE_HANDLING) {
case INFINITE_STRING:
return literal;
case EMPTY:
throw new InvalidLiteralException(dtName, ATermUtils.getLiteralValue(literal));
case EXCEPTION:
throw new UnrecognizedDatatypeException(dtName);
default:
throw new IllegalStateException();
}
}
else {
return dt.getCanonicalRepresentation(literal);
}
}
private DataRange<?> getDataRange(ATermAppl a) throws InvalidConstrainingFacetException, InvalidLiteralException,
UnrecognizedDatatypeException {
// TODO: Investigate the impact of keeping a results cache here
/*
* rdfs:Literal
*/
if (a.equals(ATermUtils.TOP_LIT)) {
return TRIVIALLY_SATISFIABLE;
}
/*
* Negation of rdfs:Literal
*/
if (a.equals(ATermUtils.BOTTOM_LIT)) {
return EMPTY_RANGE;
}
/*
* Named datatype
*/
if (ATermUtils.isPrimitive(a)) {
Datatype<?> dt = getDatatype(a);
if (dt == null) {
switch (PelletOptions.UNDEFINED_DATATYPE_HANDLING) {
case INFINITE_STRING:
dt = InfiniteNamedDatatype.get(a);
break;
case EMPTY:
return EMPTY_RANGE;
case EXCEPTION:
throw new UnrecognizedDatatypeException(a);
default:
throw new IllegalStateException();
}
}
return dt.asDataRange();
}
/*
* Datatype restriction
*/
if (ATermUtils.isRestrictedDatatype(a)) {
/*
* Start with the full data range for the datatype
*/
final ATermAppl dtTerm = (ATermAppl) a.getArgument(0);
final DataRange<?> dt = getDataRange(dtTerm);
if (!(dt instanceof RestrictedDatatype<?>)) {
throw new InvalidConstrainingFacetException(dtTerm, dt);
}
RestrictedDatatype<?> dr = (RestrictedDatatype<?>) dt;
/*
* Apply each constraining facet value pair in turn
*/
final ATermList facetValues = (ATermList) a.getArgument(1);
for (ATermList l = facetValues; !l.isEmpty(); l = l.getNext()) {
final ATermAppl fv = (ATermAppl) l.getFirst();
final ATermAppl facet = (ATermAppl) fv.getArgument(0);
final ATermAppl valueTerm = (ATermAppl) fv.getArgument(1);
Object value;
try {
value = getValue(valueTerm);
}
catch (InvalidLiteralException e) {
throw new InvalidConstrainingFacetException(facet, valueTerm, e);
}
dr = dr.applyConstrainingFacet(facet, value);
}
return dr;
}
/*
* Negated datarange
*/
if (ATermUtils.isNot(a)) {
final ATermAppl n = (ATermAppl) a.getArgument(0);
final DataRange<?> ndr = getDataRange(n);
final DataRange<?> dr = new NegatedDataRange<Object>(ndr);
return dr;
}
/*
* TODO: Consider if work before this point to group enumerations (i.e., treat them differently than
* disjunctions of singleton enumerations) is worthwhile.
*/
/*
* Data value enumeration
*/
if (ATermUtils.isNominal(a)) {
final ATermAppl literal = (ATermAppl) a.getArgument(0);
final DataRange<?> dr = new DataValueEnumeration<Object>(Collections.singleton(getValue(literal)));
return dr;
}
final String msg = format("Unrecognized input term (%s) for datarange conversion", a);
log.severe(msg);
throw new IllegalArgumentException(msg);
}
public Datatype<?> getDatatype(ATermAppl uri) {
try {
Datatype<?> dt = coreDatatypes.get(uri);
if (dt == null) {
ATermAppl definition = namedDataRanges.get(uri);
if (definition != null) {
if (ATermUtils.isRestrictedDatatype(definition)) {
RestrictedDatatype<?> dataRange = (RestrictedDatatype<?>) getDataRange(definition);
@SuppressWarnings("unchecked")
NamedDatatype namedDatatype = new NamedDatatype(uri, dataRange);
dt = namedDatatype;
}
}
}
return dt;
}
catch (Exception e) {
throw new RuntimeException(e);
}
}
private DataRange<?> getDisjunction(Collection<DataRange<?>> ranges) {
if (ranges.size() == 1) {
return ranges.iterator().next();
}
for (DataRange<?> r : ranges) {
if (r == TRIVIALLY_SATISFIABLE) {
return r;
}
}
Set<Object> oneOf = Collections.emptySet();
Map<Datatype<?>, Set<RestrictedDatatype<?>>> byPrimitive = new HashMap<Datatype<?>, Set<RestrictedDatatype<?>>>();
/*
* Organize the input data ranges into restrictions partitioned by data and a merged value enumeration.
*/
for (DataRange<?> dr : ranges) {
if (dr instanceof RestrictedDatatype) {
final RestrictedDatatype<?> rd = (RestrictedDatatype<?>) dr;
final Datatype<?> pd = rd.getDatatype().getPrimitiveDatatype();
Set<RestrictedDatatype<?>> others = byPrimitive.get(pd);
if (others == null) {
others = new HashSet<RestrictedDatatype<?>>();
byPrimitive.put(pd, others);
}
others.add(rd);
}
else if (dr instanceof DataValueEnumeration) {
final DataValueEnumeration<?> enm = (DataValueEnumeration<?>) dr;
if (oneOf.isEmpty()) {
oneOf = new HashSet<Object>();
}
for (Iterator<?> it = enm.valueIterator(); it.hasNext();) {
oneOf.add(it.next());
}
}
}
/*
* Merge data ranges that have the same primitive datatype
*/
Set<RestrictedDatatype<?>> disjointRanges = new HashSet<RestrictedDatatype<?>>();
for (Set<RestrictedDatatype<?>> s : byPrimitive.values()) {
Iterator<RestrictedDatatype<?>> it = s.iterator();
RestrictedDatatype<?> merge = it.next();
while (it.hasNext()) {
merge = merge.union(it.next());
}
disjointRanges.add(merge);
}
/*
* Discard any enum elements that are included in other disjuncts
*/
for (Iterator<Object> it = oneOf.iterator(); it.hasNext();) {
final Object o = it.next();
for (RestrictedDatatype<?> rd : disjointRanges) {
if (rd.contains(o)) {
it.remove();
}
}
}
return new UnionDataRange<Object>(disjointRanges, oneOf);
}
public ATermAppl getLiteral(Object value) {
for (Datatype<?> dt : coreDatatypes.values()) {
if (dt.isPrimitive()) {
if (dt.asDataRange().contains(value)) {
return dt.getLiteral(value);
}
}
}
final String msg = "Value is not in the value space of any recognized datatypes: " + value.toString();
log.severe(msg);
throw new IllegalArgumentException(msg);
}
public Object getValue(ATermAppl literal) throws InvalidLiteralException, UnrecognizedDatatypeException {
final ATermAppl dtName = getDatatypeName(literal);
final Datatype<?> dt = getDatatype(dtName);
if (dt == null) {
switch (PelletOptions.UNDEFINED_DATATYPE_HANDLING) {
case INFINITE_STRING:
return literal;
case EMPTY:
throw new InvalidLiteralException(dtName, ATermUtils.getLiteralValue(literal));
case EXCEPTION:
throw new UnrecognizedDatatypeException(dtName);
default:
throw new IllegalStateException();
}
}
else {
return dt.getValue(literal);
}
}
public boolean isDeclared(ATermAppl name) {
return ATermUtils.TOP_LIT.equals(name) || coreDatatypes.containsKey(name) || namedDataRanges.containsKey(name)
|| declaredUndefined.contains(name);
}
public boolean isDefined(ATermAppl name) {
if (ATermUtils.TOP_LIT.equals(name)) {
return true;
}
if (coreDatatypes.containsKey(name)) {
return true;
}
if (namedDataRanges.containsKey(name)) {
return true;
}
return false;
}
public ATermAppl getDefinition(ATermAppl name) {
return namedDataRanges.get(name);
}
public boolean isSatisfiable(Collection<ATermAppl> dataranges) throws InvalidConstrainingFacetException,
InvalidLiteralException, UnrecognizedDatatypeException {
return isSatisfiable(dataranges, null);
}
public boolean isSatisfiable(Collection<ATermAppl> dataranges, Object value)
throws InvalidConstrainingFacetException, InvalidLiteralException, UnrecognizedDatatypeException {
Set<Integer> consts, vars;
if (value == null) {
/*
* TODO: See if code in next method can be restructured to avoid this allocation.
*/
consts = new HashSet<Integer>();
vars = new HashSet<Integer>(Collections.singleton(0));
}
else {
consts = Collections.singleton(0);
vars = Collections.emptySet();
}
ATermAppl and = ATermUtils.makeAnd(ATermUtils.makeList(dataranges));
ATermAppl dnf = DNF.dnf(expander.expand(and, namedDataRanges));
Collection<ATermAppl> dnfDisjuncts;
if (ATermUtils.isOr(dnf)) {
List<ATermAppl> disjuncts = new ArrayList<ATermAppl>();
for (ATermList l = (ATermList) dnf.getArgument(0); !l.isEmpty(); l = l.getNext()) {
disjuncts.add((ATermAppl) l.getFirst());
}
dnfDisjuncts = disjuncts;
}
else {
dnfDisjuncts = Collections.singleton(dnf);
}
@SuppressWarnings("unchecked")
final Collection<ATermAppl>[] dnfTypes = new Collection[] { dnfDisjuncts };
@SuppressWarnings("unchecked")
final Set<Integer>[] ne = new Set[] { Collections.<Integer> emptySet() };
return isSatisfiable(consts, vars, dnfTypes, new Object[] { value }, ne);
}
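/*
 * Overview of the satisfiability procedure below:
 *   1. Drop bottom (empty) data ranges from each node's disjunction; fail if a disjunction becomes empty.
 *   2. Normalize: constants are checked directly against their ranges, variables get a normalized
 *      (possibly disjunctive) data range.
 *   3. Eliminate variables whose ranges are trivially satisfiable or large enough to cover all of their
 *      inequalities; singleton ranges are converted to constants.
 *   4. Check constant-constant inequalities.
 *   5. Partition the remaining finite, enumerable variables into connected components along inequality
 *      edges and search each component for a solution by enumeration.
 */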
private boolean isSatisfiable(Set<Integer> consts, Set<Integer> vars, Collection<ATermAppl>[] dnfTypes,
Object[] constValues, Set<Integer>[] ne) throws InvalidConstrainingFacetException,
InvalidLiteralException, UnrecognizedDatatypeException {
/*
* TODO: Remove need for consts and vars sets by using null in constValues array
*/
final int n = dnfTypes.length;
/*
* 1. Loop and eliminate any easy, obvious unsats
*/
for (int i = 0; i < n; i++) {
final Collection<ATermAppl> drs = dnfTypes[i];
for (Iterator<ATermAppl> it = drs.iterator(); it.hasNext();) {
ATermAppl dr = it.next();
if (ATermUtils.BOTTOM_LIT.equals(dr)) {
it.remove();
}
}
if (drs.isEmpty()) {
return false;
}
}
/*
* 2. Get normalized form of data ranges
*/
DataRange<?>[] normalized = new DataRange[n];
for (int i = 0; i < n; i++) {
if (consts.contains(i)) {
boolean satisfied = false;
for (ATermAppl a : dnfTypes[i]) {
if (containedIn(constValues[i], a)) {
satisfied = true;
break;
}
}
if (satisfied) {
normalized[i] = TRIVIALLY_SATISFIABLE;
}
else {
return false;
}
}
else {
List<DataRange<?>> drs = new ArrayList<DataRange<?>>();
for (ATermAppl a : dnfTypes[i]) {
DataRange<?> dr = normalizeVarRanges(a);
if (dr == TRIVIALLY_SATISFIABLE) {
drs = Collections.<DataRange<?>> singletonList(TRIVIALLY_SATISFIABLE);
break;
}
else if (!dr.isEmpty()) {
drs.add(dr);
}
}
if (drs.isEmpty()) {
return false;
}
else {
normalized[i] = getDisjunction(drs);
}
}
}
/*
* Alg lines 7 - 22 (without the 12-13 or 19-20 blocks)
*/
for (Iterator<Integer> it = vars.iterator(); it.hasNext();) {
Integer i = it.next();
final DataRange<?> dr = normalized[i];
/*
* First half of condition 9 - 11 block
*/
if (TRIVIALLY_SATISFIABLE == dr) {
it.remove();
removeInequalities(ne, i);
continue;
}
/*
* Line 15
*/
if (dr.isEmpty()) {
return false;
}
/*
* Second half of condition 9 - 11 block
*/
if (dr.containsAtLeast(inequalityCount(ne, i) + 1)) {
it.remove();
removeInequalities(ne, i);
continue;
}
/*
* Data range is a singleton, replace variable with constant (lines 17 - 18)
*/
if (dr.isFinite() && dr.isEnumerable() && !dr.containsAtLeast(2)) {
final Object c = dr.valueIterator().next();
it.remove();
consts.add(i);
constValues[i] = c;
normalized[i] = TRIVIALLY_SATISFIABLE;
continue;
}
}
if (log.isLoggable(Level.FINEST)) {
log.finest(format("After variable data range normalization %d variables and %d constants", vars.size(),
consts.size()));
}
/*
* Constant checks (alg lines 23 - 30)
*/
for (Integer i : consts) {
/*
* Check that any constant,constant inequalities are satisfied
*/
Set<Integer> diffs = ne[i];
if (diffs != null) {
for (Iterator<Integer> it = diffs.iterator(); it.hasNext();) {
final int j = it.next();
if (consts.contains(j)) {
if (constValues[i].equals(constValues[j])) {
return false;
}
it.remove();
ne[j].remove(i);
}
}
}
}
/*
* Try to eliminate any more variables that can be removed
*/
for (Iterator<Integer> it = vars.iterator(); it.hasNext();) {
final int i = it.next();
final DataRange<?> dr = normalized[i];
final Set<Integer> diffs = ne[i];
final int min = (diffs == null) ? 1 : diffs.size() + 1;
if (dr.containsAtLeast(min)) {
it.remove();
for (int j : diffs) {
if (ne[j] != null) {
ne[j].remove(i);
}
}
ne[i] = null;
vars.remove(i);
}
}
if (log.isLoggable(Level.FINEST)) {
log.finest(format("After size check on variable data ranges %d variables", vars.size()));
}
if (vars.isEmpty()) {
return true;
}
/*
* Assertion: at this point, all remaining variables are from finite and enumerable data ranges.
*/
/*
* Partition remaining variables into disjoint collections
*/
Set<Integer> remaining = new HashSet<Integer>(vars);
List<Set<Integer>> partitions = new ArrayList<Set<Integer>>();
while (!remaining.isEmpty()) {
Set<Integer> p = new HashSet<Integer>();
Iterator<Integer> it = remaining.iterator();
int i = it.next();
it.remove();
p.add(i);
if (ne[i] != null) {
Set<Integer> others = new HashSet<Integer>();
others.addAll(ne[i]);
while (!others.isEmpty()) {
Iterator<Integer> jt = others.iterator();
int j = jt.next();
jt.remove();
if (remaining.contains(j)) {
p.add(j);
remaining.remove(j);
if (ne[j] != null) {
others.addAll(ne[j]);
}
}
}
}
partitions.add(p);
}
if (log.isLoggable(Level.FINEST)) {
log.finest(format("Enumerating to find solutions for %d partitions", partitions.size()));
}
/*
* Enumerate until a solution is found
*/
for (Set<Integer> p : partitions) {
final int nPart = p.size();
int[] indices = new int[nPart];
Map<Integer, Integer> revInd = new HashMap<Integer, Integer>();
DataRange<?>[] drs = new DataRange[nPart];
int i = 0;
for (int j : p) {
drs[i] = normalized[j];
indices[i] = j;
revInd.put(j, i);
i++;
}
Iterator<?>[] its = new Iterator[nPart];
for (i = 0; i < nPart; i++) {
its[i] = drs[i].valueIterator();
}
Object[] values = new Object[nPart];
/*
* Assign a value to each
*/
for (i = 0; i < nPart; i++) {
values[i] = its[i].next();
}
boolean solutionFound = false;
while (!solutionFound) {
/*
* Check solution
*/
solutionFound = true;
for (i = 0; i < nPart && solutionFound; i++) {
Set<Integer> diffs = ne[indices[i]];
if (diffs != null) {
final Object a = values[i];
for (int j : diffs) {
Object b;
if (p.contains(j)) {
b = values[revInd.get(j)];
}
else {
b = constValues[j];
}
if (a.equals(b)) {
solutionFound = false;
break;
}
}
}
}
/*
* If the current values are not a solution, try the next combination. If no more combinations are available, fail.
*/
if (!solutionFound) {
i = nPart - 1;
while (!its[i].hasNext()) {
if (i == 0) {
return false;
}
its[i] = drs[i].valueIterator();
values[i] = its[i].next();
i--;
}
values[i] = its[i].next();
}
}
}
return true;
}
public boolean isSatisfiable(Set<Literal> nodes, Map<Literal, Set<Literal>> neqs)
throws InvalidConstrainingFacetException, InvalidLiteralException, UnrecognizedDatatypeException {
Literal[] literals = nodes.toArray(new Literal[0]);
// TODO: Evaluate replacing with intset or just int arrays.
Set<Integer> vars = new HashSet<Integer>();
Set<Integer> consts = new HashSet<Integer>();
Object[] constValues = new Object[literals.length];
Map<Literal, Integer> rev = new HashMap<Literal, Integer>();
for (int i = 0; i < literals.length; i++) {
rev.put(literals[i], i);
if (literals[i].isNominal()) {
consts.add(i);
constValues[i] = literals[i].getValue();
}
else {
vars.add(i);
}
}
@SuppressWarnings("unchecked")
Set<Integer>[] ne = new Set[literals.length];
for (Map.Entry<Literal, Set<Literal>> e : neqs.entrySet()) {
int index = rev.get(e.getKey());
ne[index] = new HashSet<Integer>();
for (Literal l : e.getValue()) {
ne[index].add(rev.get(l));
}
}
if (log.isLoggable(Level.FINEST)) {
log.finest(format("Checking satisfiability for %d variables and %d constants", vars.size(), consts.size()));
}
/*
* 1. Get to DNF. After this step the <code>dnfs</code> array associates each literal with a collection of D-conjunctions,
* of which it must satisfy at least one to be generally satisfied.
*/
@SuppressWarnings("unchecked")
Collection<ATermAppl>[] dnfs = new Collection[literals.length];
for (int i = 0; i < literals.length; i++) {
ATermAppl and = ATermUtils.makeAnd(ATermUtils.makeList(literals[i].getTypes()));
ATermAppl dnf = DNF.dnf(expander.expand(and, namedDataRanges));
if (ATermUtils.isOr(dnf)) {
List<ATermAppl> disjuncts = new ArrayList<ATermAppl>();
for (ATermList l = (ATermList) dnf.getArgument(0); !l.isEmpty(); l = l.getNext()) {
disjuncts.add((ATermAppl) l.getFirst());
}
dnfs[i] = disjuncts;
}
else {
dnfs[i] = Collections.singleton(dnf);
}
}
return isSatisfiable(consts, vars, dnfs, constValues, ne);
}
public boolean define(ATermAppl name, ATermAppl datarange) {
if (name.equals(datarange)) {
throw new IllegalArgumentException();
}
if (namedDataRanges.containsKey(name)) {
return false;
}
namedDataRanges.put(name, datarange);
declaredUndefined.remove(name);
return true;
}
private DataRange<?> normalizeVarRanges(ATermAppl dconjunction) throws InvalidConstrainingFacetException,
InvalidLiteralException, UnrecognizedDatatypeException {
DataRange<?> ret;
if (ATermUtils.isAnd(dconjunction)) {
Collection<DataRange<?>> ranges = new LinkedHashSet<DataRange<?>>();
for (ATermList l = (ATermList) dconjunction.getArgument(0); !l.isEmpty(); l = l.getNext()) {
DataRange<?> dr = getDataRange((ATermAppl) l.getFirst());
if (dr.isEmpty()) {
return EMPTY_RANGE;
}
ranges.add(dr);
}
Set<DataValueEnumeration<?>> positiveEnumerations = new HashSet<DataValueEnumeration<?>>();
Set<DataValueEnumeration<?>> negativeEnumerations = new HashSet<DataValueEnumeration<?>>();
Set<RestrictedDatatype<?>> positiveRestrictions = new HashSet<RestrictedDatatype<?>>();
Set<RestrictedDatatype<?>> negativeRestrictions = new HashSet<RestrictedDatatype<?>>();
partitionDConjunction(ranges, positiveEnumerations, negativeEnumerations, positiveRestrictions,
negativeRestrictions);
/*
* 1. If an enumeration is present, test each element in it against other conjuncts
*/
if (!positiveEnumerations.isEmpty()) {
DataRange<?> enumeration = findSmallestEnumeration(positiveEnumerations);
Set<Object> remainingValues = new HashSet<Object>();
Iterator<?> it = enumeration.valueIterator();
boolean same = true;
while (it.hasNext()) {
Object value = it.next();
boolean permit = true;
for (DataRange<?> dr : ranges) {
if ((dr != enumeration) && !dr.contains(value)) {
permit = false;
same = false;
break;
}
}
if (permit) {
remainingValues.add(value);
}
}
if (same) {
return enumeration;
}
else if (remainingValues.isEmpty()) {
return EMPTY_RANGE;
}
else {
return new DataValueEnumeration<Object>(remainingValues);
}
}
/*
* If there are only negative restrictions, the conjunction is trivially satisfiable (because the
* interpretation domain is infinite).
*/
if (positiveRestrictions.isEmpty()) {
return TRIVIALLY_SATISFIABLE;
}
/*
* Verify that all positive restrictions are on the same primitive type. If not, the data range is empty
* because the primitives are disjoint.
*/
Datatype<?> rootDt = null;
for (RestrictedDatatype<?> pr : positiveRestrictions) {
final Datatype<?> dt = pr.getDatatype().getPrimitiveDatatype();
if (rootDt == null) {
rootDt = dt;
}
else if (!rootDt.equals(dt)) {
return EMPTY_RANGE;
}
}
Iterator<RestrictedDatatype<?>> it = positiveRestrictions.iterator();
RestrictedDatatype<?> rd = it.next();
while (it.hasNext()) {
RestrictedDatatype<?> other = it.next();
rd = rd.intersect(other, false);
}
for (RestrictedDatatype<?> other : negativeRestrictions) {
if (other.isEmpty()) {
continue;
}
final Datatype<?> dt = other.getDatatype().getPrimitiveDatatype();
if (!rootDt.equals(dt)) {
continue;
}
rd = rd.intersect(other, true);
}
if (!negativeEnumerations.isEmpty()) {
Set<Object> notOneOf = new HashSet<Object>();
for (DataValueEnumeration<?> enm : negativeEnumerations) {
for (Iterator<?> oi = enm.valueIterator(); oi.hasNext();) {
notOneOf.add(oi.next());
}
}
rd = rd.exclude(notOneOf);
}
ret = rd;
}
else {
ret = getDataRange(dconjunction);
}
if (!ret.isFinite()) {
return TRIVIALLY_SATISFIABLE;
}
return ret;
}
public Collection<ATermAppl> listDataRanges() {
Collection<ATermAppl> dataRanges = new HashSet<ATermAppl>(coreDatatypes.keySet());
dataRanges.addAll(declaredUndefined);
dataRanges.addAll(namedDataRanges.keySet());
return dataRanges;
}
public boolean validLiteral(ATermAppl typedLiteral) throws UnrecognizedDatatypeException {
if (!ATermUtils.isLiteral(typedLiteral)) {
throw new IllegalArgumentException();
}
final ATermAppl dtTerm = (ATermAppl) typedLiteral.getArgument(ATermUtils.LIT_URI_INDEX);
if (dtTerm == null) {
throw new IllegalArgumentException();
}
final Datatype<?> dt = getDatatype(dtTerm);
if (dt == null) {
throw new UnrecognizedDatatypeException(dtTerm);
}
try {
dt.getValue(typedLiteral);
}
catch (InvalidLiteralException e) {
return false;
}
return true;
}
public Iterator<?> valueIterator(Collection<ATermAppl> dataranges) throws InvalidConstrainingFacetException,
InvalidLiteralException, UnrecognizedDatatypeException {
ATermAppl and = ATermUtils.makeAnd(ATermUtils.makeList(dataranges));
ATermAppl dnf = DNF.dnf(expander.expand(and, namedDataRanges));
if (ATermUtils.isOr(dnf)) {
List<DataRange<?>> disjuncts = new ArrayList<DataRange<?>>();
for (ATermList l = (ATermList) dnf.getArgument(0); !l.isEmpty(); l = l.getNext()) {
final DataRange<?> dr = normalizeVarRanges((ATermAppl) l.getFirst());
disjuncts.add(dr);
}
final DataRange<?> disjunction = getDisjunction(disjuncts);
if (!disjunction.isEnumerable()) {
throw new IllegalArgumentException();
}
else {
return disjunction.valueIterator();
}
}
else {
final DataRange<?> dr = normalizeVarRanges(dnf);
if (!dr.isEnumerable()) {
throw new IllegalArgumentException();
}
else {
return dr.valueIterator();
}
}
}
}
| {
"pile_set_name": "Github"
} |
Swag.addHelper 'add', (value, addition) ->
value = parseFloat value
addition = parseFloat addition
value + addition
, ['number', 'number']
Swag.addHelper 'subtract', (value, subtraction) ->
value = parseFloat value
subtraction = parseFloat subtraction
value - subtraction
, ['number', 'number']
Swag.addHelper 'divide', (value, divisor) ->
value = parseFloat value
divisor = parseFloat divisor
value / divisor
, ['number', 'number']
Swag.addHelper 'multiply', (value, multiplier) ->
value = parseFloat value
multiplier = parseFloat multiplier
value * multiplier
, ['number', 'number']
Swag.addHelper 'floor', (value) ->
value = parseFloat value
Math.floor value
, 'number'
Swag.addHelper 'ceil', (value) ->
value = parseFloat value
Math.ceil value
, 'number'
Swag.addHelper 'round', (value) ->
value = parseFloat value
Math.round value
, 'number'
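# Example template usage (illustrative; the variable names are placeholders,
# helper invocation follows standard Handlebars syntax):
#   {{add price tax}}    {{multiply quantity unitPrice}}    {{round average}}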
| {
"pile_set_name": "Github"
} |
Test-SHA1.C
EXE = $(FOAM_USER_APPBIN)/Test-SHA1
| {
"pile_set_name": "Github"
} |
#!/bin/expect
spawn /opt/fsayskeyboard
expect "background handler\r"
send -- "q"
expect -exact "\[q\]\r"
send -- "q"
expect -exact "\[q\]\r" | {
"pile_set_name": "Github"
} |
# atomic [![GoDoc][doc-img]][doc] [![Build Status][ci-img]][ci] [![Coverage Status][cov-img]][cov] [![Go Report Card][reportcard-img]][reportcard]
Simple wrappers for primitive types to enforce atomic access.
## Installation
`go get -u go.uber.org/atomic`
## Usage
The standard library's `sync/atomic` is powerful, but it's easy to forget which
variables must be accessed atomically. `go.uber.org/atomic` preserves all the
functionality of the standard library, but wraps the primitive types to
provide a safer, more convenient API.
```go
var atom atomic.Uint32
atom.Store(42)
atom.Sub(2)
atom.CAS(40, 11)
```
See the [documentation][doc] for a complete API specification.
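The same pattern extends to the package's other typed wrappers. A brief sketch (the constructors and methods below — `NewBool`, `Toggle`, `NewString` — are assumed from the package's typed-wrapper API):

```go
flag := atomic.NewBool(true)
flag.Toggle()   // atomically flips the stored value
_ = flag.Load() // atomic read

name := atomic.NewString("initial")
name.Store("updated")
```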
## Development Status
Stable.
<hr>
Released under the [MIT License](LICENSE.txt).
[doc-img]: https://godoc.org/github.com/uber-go/atomic?status.svg
[doc]: https://godoc.org/go.uber.org/atomic
[ci-img]: https://travis-ci.org/uber-go/atomic.svg?branch=master
[ci]: https://travis-ci.org/uber-go/atomic
[cov-img]: https://codecov.io/gh/uber-go/atomic/branch/master/graph/badge.svg
[cov]: https://codecov.io/gh/uber-go/atomic
[reportcard-img]: https://goreportcard.com/badge/go.uber.org/atomic
[reportcard]: https://goreportcard.com/report/go.uber.org/atomic
| {
"pile_set_name": "Github"
} |
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html lang="en">
<head>
<title>RPT Test Suite</title>
</head>
<body>
<a href="#navskip">skip to banner content</a>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<h3>WAI-ARIA widgets where accessible name may be provided by inner text of element or alt text</h3>
<ul role="menubar">
<li role="menuitem">
<ul role="menu">
<li role="menuitemradio"></li>
<li role="menuitemradio"> </li>
</ul>
</li>
</ul>
<a name="navskip"></a>
<script type="text/javascript">
//<![CDATA[
if (typeof(OpenAjax) == 'undefined') OpenAjax = {}
if (typeof(OpenAjax.a11y) == 'undefined') OpenAjax.a11y = {}
OpenAjax.a11y.ruleCoverage = [{
ruleId: "1156",
passedXpaths: [],
failedXpaths: [
"/html/body/ul/li",
"/html/body/ul/li/ul/li[1]",
"/html/body/ul/li/ul/li[2]"
]
}];
//]]>
</script>
</body>
</html> | {
"pile_set_name": "Github"
} |
fileFormatVersion: 2
guid: a05da8718a0b346c294864926755369d
NativeFormatImporter:
userData:
assetBundleName:
| {
"pile_set_name": "Github"
} |
namespace TestStack.BDDfy
{
public class AndWhenAttribute : ExecutableAttribute
{
public AndWhenAttribute() : this(null) { }
public AndWhenAttribute(string stepTitle) : base(ExecutionOrder.ConsecutiveTransition, stepTitle) { }
}
} | {
"pile_set_name": "Github"
} |
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
creationTimestamp: null
name: hiveconfigs.hive.openshift.io
spec:
group: hive.openshift.io
names:
kind: HiveConfig
listKind: HiveConfigList
plural: hiveconfigs
singular: hiveconfig
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
description: HiveConfig is the Schema for the hives API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HiveConfigSpec defines the desired state of Hive
properties:
additionalCertificateAuthoritiesSecretRef:
description: AdditionalCertificateAuthoritiesSecretRef is a list of
references to secrets in the TargetNamespace that contain an additional
Certificate Authority to use when communicating with target clusters.
These certificate authorities will be used in addition to any self-signed
CA generated by each cluster on installation.
items:
description: LocalObjectReference contains enough information to let
you locate the referenced object inside the same namespace.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
type: array
backup:
description: Backup specifies configuration for backup integration.
If absent, backup integration will be disabled.
properties:
minBackupPeriodSeconds:
description: MinBackupPeriodSeconds specifies the minimum number of
seconds that must elapse between consecutive backups. This
is used to rate limit backups. This potentially batches together
multiple changes into 1 backup. No backups will be lost as changes
that happen during this interval are queued up and will result
in a backup happening once the interval has been completed.
type: integer
velero:
description: Velero specifies configuration for the Velero backup
integration.
properties:
enabled:
description: Enabled dictates if Velero backup integration is
enabled. If not specified, the default is disabled.
type: boolean
type: object
type: object
deleteProtection:
description: DeleteProtection can be set to "enabled" to turn on automatic
delete protection for ClusterDeployments. When enabled, Hive will
add the "hive.openshift.io/protected-delete" annotation to new ClusterDeployments.
Once a ClusterDeployment has been installed, a user must remove the
annotation from a ClusterDeployment prior to deleting it.
enum:
- enabled
type: string
deprovisionsDisabled:
description: DeprovisionsDisabled can be set to true to block deprovision
jobs from running.
type: boolean
disabledControllers:
description: DisabledControllers allows selectively disabling Hive controllers
by name. The name of an individual controller matches the name of
the controller as seen in the Hive logging output.
items:
type: string
type: array
failedProvisionConfig:
description: FailedProvisionConfig is used to configure settings related
to handling provision failures.
properties:
skipGatherLogs:
description: SkipGatherLogs disables functionality that attempts
to gather full logs from the cluster if an installation fails
for any reason. The logs will be stored in a persistent volume
for up to 7 days.
type: boolean
type: object
globalPullSecretRef:
description: GlobalPullSecretRef is used to specify a pull secret that
will be used globally by all of the cluster deployments. For each
cluster deployment, the contents of GlobalPullSecret will be merged
with the specific pull secret for a cluster deployment (if specified),
with precedence given to the contents of the pull secret for the cluster
deployment. The global pull secret is assumed to be in the TargetNamespace.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
hiveAPIEnabled:
description: HiveAPIEnabled is a boolean controlling whether or not
the Hive operator will start up the v1alpha1 aggregated API server.
type: boolean
logLevel:
description: LogLevel is the level of logging to use for the Hive controllers.
Acceptable levels, from coarsest to finest, are panic, fatal, error,
warn, info, debug, and trace. The default level is info.
type: string
maintenanceMode:
description: MaintenanceMode can be set to true to disable the hive
controllers in situations where we need to ensure nothing is running
that will add or act upon finalizers on Hive types. This should rarely
be needed. Sets replicas to 0 for the hive-controllers deployment
to accomplish this.
type: boolean
managedDomains:
description: 'ManagedDomains is the list of DNS domains that are managed
by the Hive cluster. When specifying ''manageDNS: true'' in a ClusterDeployment,
the ClusterDeployment''s baseDomain should be a direct child of one
of these domains, otherwise the ClusterDeployment creation will result
in a validation error.'
items:
description: ManageDNSConfig contains the domain being managed, and
the cloud-specific details for accessing/managing the domain.
properties:
aws:
description: AWS contains AWS-specific settings for external DNS
properties:
credentialsSecretRef:
description: CredentialsSecretRef references a secret in the
TargetNamespace that will be used to authenticate with AWS
Route53. It will need permission to manage entries for the
domain listed in the parent ManageDNSConfig object. Secret
should have AWS keys named 'aws_access_key_id' and 'aws_secret_access_key'.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
region:
description: Region is the AWS region to use for route53 operations.
This defaults to us-east-1. For AWS China, use cn-northwest-1.
type: string
required:
- credentialsSecretRef
type: object
azure:
description: Azure contains Azure-specific settings for external
DNS
properties:
credentialsSecretRef:
description: CredentialsSecretRef references a secret in the
TargetNamespace that will be used to authenticate with Azure
DNS. It will need permission to manage entries in each of
the managed domains listed in the parent ManageDNSConfig
object. Secret should have a key named 'osServicePrincipal.json'
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
resourceGroupName:
description: ResourceGroupName specifies the Azure resource
group containing the DNS zones for the domains being managed.
type: string
required:
- credentialsSecretRef
- resourceGroupName
type: object
domains:
description: Domains is the list of domains that hive will be
managing entries for with the provided credentials.
items:
type: string
type: array
gcp:
description: GCP contains GCP-specific settings for external DNS
properties:
credentialsSecretRef:
description: CredentialsSecretRef references a secret in the
TargetNamespace that will be used to authenticate with GCP
DNS. It will need permission to manage entries in each of
the managed domains for this cluster listed in the parent
ManageDNSConfig object. Secret should have a key named 'osServiceAccount.json'.
The credentials must specify the project to use.
properties:
name:
description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?'
type: string
type: object
required:
- credentialsSecretRef
type: object
required:
- domains
type: object
type: array
syncSetReapplyInterval:
description: SyncSetReapplyInterval is a string duration indicating
how much time must pass before SyncSet resources will be reapplied.
The default reapply interval is two hours.
type: string
targetNamespace:
description: TargetNamespace is the namespace where the core Hive components
should be run. Defaults to "hive". Will be created if it does not
already exist. All resource references in HiveConfig can be assumed
to be in the TargetNamespace.
type: string
type: object
status:
description: HiveConfigStatus defines the observed state of Hive
properties:
aggregatorClientCAHash:
description: AggregatorClientCAHash keeps an md5 hash of the aggregator
client CA configmap data from the openshift-config-managed namespace.
When the configmap changes, admission is redeployed.
type: string
configApplied:
description: ConfigApplied will be set by the hive operator to indicate
whether or not the LastGenerationObserved was successfully reconciled.
type: boolean
observedGeneration:
description: ObservedGeneration will record the most recently processed
HiveConfig object's generation.
format: int64
type: integer
type: object
version: v1
versions:
- name: v1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
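# Example HiveConfig instance (illustrative only; the resource name and field
# values are placeholders derived from the schema above, not part of the CRD manifest):
#
#   apiVersion: hive.openshift.io/v1
#   kind: HiveConfig
#   metadata:
#     name: hive
#   spec:
#     targetNamespace: hive
#     logLevel: debug
#     managedDomains:
#     - domains:
#       - clusters.example.com
#       aws:
#         credentialsSecretRef:
#           name: route53-aws-creds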
| {
"pile_set_name": "Github"
} |
% Test file for operator .^
% Matlab version: 7.9.0.529 (R2009b)
% TEST 1
res1 = [].^'a';
% TEST 2
res2 = m2sciUnknownType([]).^'a';
% TEST 3
res3 = m2sciUnknownDims([]).^'a';
% TEST 4
res4 = [1].^'a';
% TEST 5
res5 = [1,2,3;4,5,6;7,8,0].^'a';
% TEST 6
res6 = m2sciUnknownType([1]).^'a';
% TEST 7
res7 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^'a';
% TEST 8
res8 = m2sciUnknownDims([1]).^'a';
% TEST 9
res9 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^'a';
% TEST 10
res10 = [i].^'a';
% TEST 11
res11 = [i,2i,3i;4i,5i,6i;7i,8i,0].^'a';
% TEST 12
res12 = m2sciUnknownType([i]).^'a';
% TEST 13
res13 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^'a';
% TEST 14
res14 = m2sciUnknownDims([i]).^'a';
% TEST 15
res15 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^'a';
% TEST 16
res16 = ['a'].^'a';
% TEST 17
res17 = ['a','0','0';'d','0','f';'g','h','0'].^'a';
% TEST 18
res18 = m2sciUnknownType(['a']).^'a';
% TEST 19
res19 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^'a';
% TEST 20
res20 = m2sciUnknownDims(['a']).^'a';
% TEST 21
res21 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^'a';
% TEST 22
res22 = [[1]==[1]].^'a';
% TEST 23
res23 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^'a';
% TEST 24
res24 = m2sciUnknownType([[1]==[1]]).^'a';
% TEST 25
res25 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^'a';
% TEST 26
res26 = m2sciUnknownDims([[1]==[1]]).^'a';
% TEST 27
res27 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^'a';
% TEST 28
res28 = [].^1;
% TEST 29
res29 = m2sciUnknownType([]).^1;
% TEST 30
res30 = m2sciUnknownDims([]).^1;
% TEST 31
res31 = [1].^1;
% TEST 32
res32 = [1,2,3;4,5,6;7,8,0].^1;
% TEST 33
res33 = m2sciUnknownType([1]).^1;
% TEST 34
res34 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^1;
% TEST 35
res35 = m2sciUnknownDims([1]).^1;
% TEST 36
res36 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^1;
% TEST 37
res37 = [i].^1;
% TEST 38
res38 = [i,2i,3i;4i,5i,6i;7i,8i,0].^1;
% TEST 39
res39 = m2sciUnknownType([i]).^1;
% TEST 40
res40 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^1;
% TEST 41
res41 = m2sciUnknownDims([i]).^1;
% TEST 42
res42 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^1;
% TEST 43
res43 = ['a'].^1;
% TEST 44
res44 = ['a','0','0';'d','0','f';'g','h','0'].^1;
% TEST 45
res45 = m2sciUnknownType(['a']).^1;
% TEST 46
res46 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^1;
% TEST 47
res47 = m2sciUnknownDims(['a']).^1;
% TEST 48
res48 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^1;
% TEST 49
res49 = [[1]==[1]].^1;
% TEST 50
res50 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^1;
% TEST 51
res51 = m2sciUnknownType([[1]==[1]]).^1;
% TEST 52
res52 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^1;
% TEST 53
res53 = m2sciUnknownDims([[1]==[1]]).^1;
% TEST 54
res54 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^1;
% TEST 55
res55 = [].^2;
% TEST 56
res56 = m2sciUnknownType([]).^2;
% TEST 57
res57 = m2sciUnknownDims([]).^2;
% TEST 58
res58 = [1].^2;
% TEST 59
res59 = [1,2,3;4,5,6;7,8,0].^2;
% TEST 60
res60 = m2sciUnknownType([1]).^2;
% TEST 61
res61 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^2;
% TEST 62
res62 = m2sciUnknownDims([1]).^2;
% TEST 63
res63 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^2;
% TEST 64
res64 = [i].^2;
% TEST 65
res65 = [i,2i,3i;4i,5i,6i;7i,8i,0].^2;
% TEST 66
res66 = m2sciUnknownType([i]).^2;
% TEST 67
res67 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^2;
% TEST 68
res68 = m2sciUnknownDims([i]).^2;
% TEST 69
res69 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^2;
% TEST 70
res70 = ['a'].^2;
% TEST 71
res71 = ['a','0','0';'d','0','f';'g','h','0'].^2;
% TEST 72
res72 = m2sciUnknownType(['a']).^2;
% TEST 73
res73 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^2;
% TEST 74
res74 = m2sciUnknownDims(['a']).^2;
% TEST 75
res75 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^2;
% TEST 76
res76 = [[1]==[1]].^2;
% TEST 77
res77 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^2;
% TEST 78
res78 = m2sciUnknownType([[1]==[1]]).^2;
% TEST 79
res79 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^2;
% TEST 80
res80 = m2sciUnknownDims([[1]==[1]]).^2;
% TEST 81
res81 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^2;
% TEST 82
res82 = [].^-1;
% TEST 83
res83 = m2sciUnknownType([]).^-1;
% TEST 84
res84 = m2sciUnknownDims([]).^-1;
% TEST 85
res85 = [1].^-1;
% TEST 86
res86 = [1,2,3;4,5,6;7,8,0].^-1;
% TEST 87
res87 = m2sciUnknownType([1]).^-1;
% TEST 88
res88 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^-1;
% TEST 89
res89 = m2sciUnknownDims([1]).^-1;
% TEST 90
res90 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^-1;
% TEST 91
res91 = [i].^-1;
% TEST 92
res92 = [i,2i,3i;4i,5i,6i;7i,8i,0].^-1;
% TEST 93
res93 = m2sciUnknownType([i]).^-1;
% TEST 94
res94 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^-1;
% TEST 95
res95 = m2sciUnknownDims([i]).^-1;
% TEST 96
res96 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^-1;
% TEST 97
res97 = ['a'].^-1;
% TEST 98
res98 = ['a','0','0';'d','0','f';'g','h','0'].^-1;
% TEST 99
res99 = m2sciUnknownType(['a']).^-1;
% TEST 100
res100 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^-1;
% TEST 101
res101 = m2sciUnknownDims(['a']).^-1;
% TEST 102
res102 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^-1;
% TEST 103
res103 = [[1]==[1]].^-1;
% TEST 104
res104 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^-1;
% TEST 105
res105 = m2sciUnknownType([[1]==[1]]).^-1;
% TEST 106
res106 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^-1;
% TEST 107
res107 = m2sciUnknownDims([[1]==[1]]).^-1;
% TEST 108
res108 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^-1;
% TEST 109
res109 = [].^-2;
% TEST 110
res110 = m2sciUnknownType([]).^-2;
% TEST 111
res111 = m2sciUnknownDims([]).^-2;
% TEST 112
res112 = [1].^-2;
% TEST 113
res113 = [1,2,3;4,5,6;7,8,0].^-2;
% TEST 114
res114 = m2sciUnknownType([1]).^-2;
% TEST 115
res115 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^-2;
% TEST 116
res116 = m2sciUnknownDims([1]).^-2;
% TEST 117
res117 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^-2;
% TEST 118
res118 = [i].^-2;
% TEST 119
res119 = [i,2i,3i;4i,5i,6i;7i,8i,0].^-2;
% TEST 120
res120 = m2sciUnknownType([i]).^-2;
% TEST 121
res121 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^-2;
% TEST 122
res122 = m2sciUnknownDims([i]).^-2;
% TEST 123
res123 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^-2;
% TEST 124
res124 = ['a'].^-2;
% TEST 125
res125 = ['a','0','0';'d','0','f';'g','h','0'].^-2;
% TEST 126
res126 = m2sciUnknownType(['a']).^-2;
% TEST 127
res127 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^-2;
% TEST 128
res128 = m2sciUnknownDims(['a']).^-2;
% TEST 129
res129 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^-2;
% TEST 130
res130 = [[1]==[1]].^-2;
% TEST 131
res131 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^-2;
% TEST 132
res132 = m2sciUnknownType([[1]==[1]]).^-2;
% TEST 133
res133 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^-2;
% TEST 134
res134 = m2sciUnknownDims([[1]==[1]]).^-2;
% TEST 135
res135 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^-2;
% TEST 136
res136 = [].^inf;
% TEST 137
res137 = m2sciUnknownType([]).^inf;
% TEST 138
res138 = m2sciUnknownDims([]).^inf;
% TEST 139
res139 = [1].^inf;
% TEST 140
res140 = [1,2,3;4,5,6;7,8,0].^inf;
% TEST 141
res141 = m2sciUnknownType([1]).^inf;
% TEST 142
res142 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^inf;
% TEST 143
res143 = m2sciUnknownDims([1]).^inf;
% TEST 144
res144 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^inf;
% TEST 145
res145 = [i].^inf;
% TEST 146
res146 = [i,2i,3i;4i,5i,6i;7i,8i,0].^inf;
% TEST 147
res147 = m2sciUnknownType([i]).^inf;
% TEST 148
res148 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^inf;
% TEST 149
res149 = m2sciUnknownDims([i]).^inf;
% TEST 150
res150 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^inf;
% TEST 151
res151 = ['a'].^inf;
% TEST 152
res152 = ['a','0','0';'d','0','f';'g','h','0'].^inf;
% TEST 153
res153 = m2sciUnknownType(['a']).^inf;
% TEST 154
res154 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^inf;
% TEST 155
res155 = m2sciUnknownDims(['a']).^inf;
% TEST 156
res156 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^inf;
% TEST 157
res157 = [[1]==[1]].^inf;
% TEST 158
res158 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^inf;
% TEST 159
res159 = m2sciUnknownType([[1]==[1]]).^inf;
% TEST 160
res160 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^inf;
% TEST 161
res161 = m2sciUnknownDims([[1]==[1]]).^inf;
% TEST 162
res162 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^inf;
% TEST 163
res163 = [].^nan;
% TEST 164
res164 = m2sciUnknownType([]).^nan;
% TEST 165
res165 = m2sciUnknownDims([]).^nan;
% TEST 166
res166 = [1].^nan;
% TEST 167
res167 = [1,2,3;4,5,6;7,8,0].^nan;
% TEST 168
res168 = m2sciUnknownType([1]).^nan;
% TEST 169
res169 = m2sciUnknownType([1,2,3;4,5,6;7,8,0]).^nan;
% TEST 170
res170 = m2sciUnknownDims([1]).^nan;
% TEST 171
res171 = m2sciUnknownDims([1,2,3;4,5,6;7,8,0]).^nan;
% TEST 172
res172 = [i].^nan;
% TEST 173
res173 = [i,2i,3i;4i,5i,6i;7i,8i,0].^nan;
% TEST 174
res174 = m2sciUnknownType([i]).^nan;
% TEST 175
res175 = m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]).^nan;
% TEST 176
res176 = m2sciUnknownDims([i]).^nan;
% TEST 177
res177 = m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]).^nan;
% TEST 178
res178 = ['a'].^nan;
% TEST 179
res179 = ['a','0','0';'d','0','f';'g','h','0'].^nan;
% TEST 180
res180 = m2sciUnknownType(['a']).^nan;
% TEST 181
res181 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^nan;
% TEST 182
res182 = m2sciUnknownDims(['a']).^nan;
% TEST 183
res183 = m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']).^nan;
% TEST 184
res184 = [[1]==[1]].^nan;
% TEST 185
res185 = [[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]].^nan;
% TEST 186
res186 = m2sciUnknownType([[1]==[1]]).^nan;
% TEST 187
res187 = m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^nan;
% TEST 188
res188 = m2sciUnknownDims([[1]==[1]]).^nan;
% TEST 189
res189 = m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]).^nan;
% TEST 190
res190 = 1.^[];
% TEST 191
res191 = 1.^m2sciUnknownType([]);
% TEST 192
res192 = 1.^m2sciUnknownDims([]);
% TEST 193
res193 = 1.^[1];
% TEST 194
res194 = 1.^[1,2,3;4,5,6;7,8,0];
% TEST 195
res195 = 1.^m2sciUnknownType([1]);
% TEST 196
res196 = 1.^m2sciUnknownType([1,2,3;4,5,6;7,8,0]);
% TEST 197
res197 = 1.^m2sciUnknownDims([1]);
% TEST 198
res198 = 1.^m2sciUnknownDims([1,2,3;4,5,6;7,8,0]);
% TEST 199
res199 = 1.^[i];
% TEST 200
res200 = 1.^[i,2i,3i;4i,5i,6i;7i,8i,0];
% TEST 201
res201 = 1.^m2sciUnknownType([i]);
% TEST 202
res202 = 1.^m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 203
res203 = 1.^m2sciUnknownDims([i]);
% TEST 204
res204 = 1.^m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 205
res205 = 1.^['a'];
% TEST 206
res206 = 1.^['a','0','0';'d','0','f';'g','h','0'];
% TEST 207
res207 = 1.^m2sciUnknownType(['a']);
% TEST 208
res208 = 1.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 209
res209 = 1.^m2sciUnknownDims(['a']);
% TEST 210
res210 = 1.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 211
res211 = 1.^[[1]==[1]];
% TEST 212
res212 = 1.^[[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]];
% TEST 213
res213 = 1.^m2sciUnknownType([[1]==[1]]);
% TEST 214
res214 = 1.^m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 215
res215 = 1.^m2sciUnknownDims([[1]==[1]]);
% TEST 216
res216 = 1.^m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 217
res217 = 'a'.^[];
% TEST 218
res218 = 'a'.^m2sciUnknownType([]);
% TEST 219
res219 = 'a'.^m2sciUnknownDims([]);
% TEST 220
res220 = 'a'.^[1];
% TEST 221
res221 = 'a'.^[1,2,3;4,5,6;7,8,0];
% TEST 222
res222 = 'a'.^m2sciUnknownType([1]);
% TEST 223
res223 = 'a'.^m2sciUnknownType([1,2,3;4,5,6;7,8,0]);
% TEST 224
res224 = 'a'.^m2sciUnknownDims([1]);
% TEST 225
res225 = 'a'.^m2sciUnknownDims([1,2,3;4,5,6;7,8,0]);
% TEST 226
res226 = 'a'.^[i];
% TEST 227
res227 = 'a'.^[i,2i,3i;4i,5i,6i;7i,8i,0];
% TEST 228
res228 = 'a'.^m2sciUnknownType([i]);
% TEST 229
res229 = 'a'.^m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 230
res230 = 'a'.^m2sciUnknownDims([i]);
% TEST 231
res231 = 'a'.^m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 232
res232 = 'a'.^['a'];
% TEST 233
res233 = 'a'.^['a','0','0';'d','0','f';'g','h','0'];
% TEST 234
res234 = 'a'.^m2sciUnknownType(['a']);
% TEST 235
res235 = 'a'.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 236
res236 = 'a'.^m2sciUnknownDims(['a']);
% TEST 237
res237 = 'a'.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 238
res238 = 'a'.^[[1]==[1]];
% TEST 239
res239 = 'a'.^[[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]];
% TEST 240
res240 = 'a'.^m2sciUnknownType([[1]==[1]]);
% TEST 241
res241 = 'a'.^m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 242
res242 = 'a'.^m2sciUnknownDims([[1]==[1]]);
% TEST 243
res243 = 'a'.^m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 244
res244 = 2.^[];
% TEST 245
res245 = 2.^m2sciUnknownType([]);
% TEST 246
res246 = 2.^m2sciUnknownDims([]);
% TEST 247
res247 = 2.^[1];
% TEST 248
res248 = 2.^[1,2,3;4,5,6;7,8,0];
% TEST 249
res249 = 2.^m2sciUnknownType([1]);
% TEST 250
res250 = 2.^m2sciUnknownType([1,2,3;4,5,6;7,8,0]);
% TEST 251
res251 = 2.^m2sciUnknownDims([1]);
% TEST 252
res252 = 2.^m2sciUnknownDims([1,2,3;4,5,6;7,8,0]);
% TEST 253
res253 = 2.^[i];
% TEST 254
res254 = 2.^[i,2i,3i;4i,5i,6i;7i,8i,0];
% TEST 255
res255 = 2.^m2sciUnknownType([i]);
% TEST 256
res256 = 2.^m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 257
res257 = 2.^m2sciUnknownDims([i]);
% TEST 258
res258 = 2.^m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 259
res259 = 2.^['a'];
% TEST 260
res260 = 2.^['a','0','0';'d','0','f';'g','h','0'];
% TEST 261
res261 = 2.^m2sciUnknownType(['a']);
% TEST 262
res262 = 2.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 263
res263 = 2.^m2sciUnknownDims(['a']);
% TEST 264
res264 = 2.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 265
res265 = 2.^[[1]==[1]];
% TEST 266
res266 = 2.^[[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]];
% TEST 267
res267 = 2.^m2sciUnknownType([[1]==[1]]);
% TEST 268
res268 = 2.^m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 269
res269 = 2.^m2sciUnknownDims([[1]==[1]]);
% TEST 270
res270 = 2.^m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 271
res271 = inf.^[];
% TEST 272
res272 = inf.^m2sciUnknownType([]);
% TEST 273
res273 = inf.^m2sciUnknownDims([]);
% TEST 274
res274 = inf.^[1];
% TEST 275
res275 = inf.^[1,2,3;4,5,6;7,8,0];
% TEST 276
res276 = inf.^m2sciUnknownType([1]);
% TEST 277
res277 = inf.^m2sciUnknownType([1,2,3;4,5,6;7,8,0]);
% TEST 278
res278 = inf.^m2sciUnknownDims([1]);
% TEST 279
res279 = inf.^m2sciUnknownDims([1,2,3;4,5,6;7,8,0]);
% TEST 280
res280 = inf.^[i];
% TEST 281
res281 = inf.^[i,2i,3i;4i,5i,6i;7i,8i,0];
% TEST 282
res282 = inf.^m2sciUnknownType([i]);
% TEST 283
res283 = inf.^m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 284
res284 = inf.^m2sciUnknownDims([i]);
% TEST 285
res285 = inf.^m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 286
res286 = inf.^['a'];
% TEST 287
res287 = inf.^['a','0','0';'d','0','f';'g','h','0'];
% TEST 288
res288 = inf.^m2sciUnknownType(['a']);
% TEST 289
res289 = inf.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 290
res290 = inf.^m2sciUnknownDims(['a']);
% TEST 291
res291 = inf.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 292
res292 = inf.^[[1]==[1]];
% TEST 293
res293 = inf.^[[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]];
% TEST 294
res294 = inf.^m2sciUnknownType([[1]==[1]]);
% TEST 295
res295 = inf.^m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 296
res296 = inf.^m2sciUnknownDims([[1]==[1]]);
% TEST 297
res297 = inf.^m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 298
res298 = nan.^[];
% TEST 299
res299 = nan.^m2sciUnknownType([]);
% TEST 300
res300 = nan.^m2sciUnknownDims([]);
% TEST 301
res301 = nan.^[1];
% TEST 302
res302 = nan.^[1,2,3;4,5,6;7,8,0];
% TEST 303
res303 = nan.^m2sciUnknownType([1]);
% TEST 304
res304 = nan.^m2sciUnknownType([1,2,3;4,5,6;7,8,0]);
% TEST 305
res305 = nan.^m2sciUnknownDims([1]);
% TEST 306
res306 = nan.^m2sciUnknownDims([1,2,3;4,5,6;7,8,0]);
% TEST 307
res307 = nan.^[i];
% TEST 308
res308 = nan.^[i,2i,3i;4i,5i,6i;7i,8i,0];
% TEST 309
res309 = nan.^m2sciUnknownType([i]);
% TEST 310
res310 = nan.^m2sciUnknownType([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 311
res311 = nan.^m2sciUnknownDims([i]);
% TEST 312
res312 = nan.^m2sciUnknownDims([i,2i,3i;4i,5i,6i;7i,8i,0]);
% TEST 313
res313 = nan.^['a'];
% TEST 314
res314 = nan.^['a','0','0';'d','0','f';'g','h','0'];
% TEST 315
res315 = nan.^m2sciUnknownType(['a']);
% TEST 316
res316 = nan.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 317
res317 = nan.^m2sciUnknownDims(['a']);
% TEST 318
res318 = nan.^m2sciUnknownDims(['a','0','0';'d','0','f';'g','h','0']);
% TEST 319
res319 = nan.^[[1]==[1]];
% TEST 320
res320 = nan.^[[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]];
% TEST 321
res321 = nan.^m2sciUnknownType([[1]==[1]]);
% TEST 322
res322 = nan.^m2sciUnknownType([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 323
res323 = nan.^m2sciUnknownDims([[1]==[1]]);
% TEST 324
res324 = nan.^m2sciUnknownDims([[1,2,3;4,5,6;7,8,9]==[1,0,3;4,5,0;0,8,9]]);
% TEST 325
res325 = 'a'.^'a';
% TEST 326
res326 = 1.^'a';
% TEST 327
res327 = 2.^'a';
% TEST 328
res328 = -1.^'a';
% TEST 329
res329 = -2.^'a';
% TEST 330
res330 = inf.^'a';
% TEST 331
res331 = nan.^'a';
% TEST 332
res332 = 'a'.^1;
% TEST 333
res333 = 1.^1;
% TEST 334
res334 = 2.^1;
% TEST 335
res335 = -1.^1;
% TEST 336
res336 = -2.^1;
% TEST 337
res337 = inf.^1;
% TEST 338
res338 = nan.^1;
% TEST 339
res339 = 'a'.^2;
% TEST 340
res340 = 1.^2;
% TEST 341
res341 = 2.^2;
% TEST 342
res342 = -1.^2;
% TEST 343
res343 = -2.^2;
% TEST 344
res344 = inf.^2;
% TEST 345
res345 = nan.^2;
% TEST 346
res346 = 'a'.^-1;
% TEST 347
res347 = 1.^-1;
% TEST 348
res348 = 2.^-1;
% TEST 349
res349 = -1.^-1;
% TEST 350
res350 = -2.^-1;
% TEST 351
res351 = inf.^-1;
% TEST 352
res352 = nan.^-1;
% TEST 353
res353 = 'a'.^-2;
% TEST 354
res354 = 1.^-2;
% TEST 355
res355 = 2.^-2;
% TEST 356
res356 = -1.^-2;
% TEST 357
res357 = -2.^-2;
% TEST 358
res358 = inf.^-2;
% TEST 359
res359 = nan.^-2;
% TEST 360
res360 = 'a'.^inf;
% TEST 361
res361 = 1.^inf;
% TEST 362
res362 = 2.^inf;
% TEST 363
res363 = -1.^inf;
% TEST 364
res364 = -2.^inf;
% TEST 365
res365 = inf.^inf;
% TEST 366
res366 = nan.^inf;
% TEST 367
res367 = 'a'.^nan;
% TEST 368
res368 = 1.^nan;
% TEST 369
res369 = 2.^nan;
% TEST 370
res370 = -1.^nan;
% TEST 371
res371 = -2.^nan;
% TEST 372
res372 = inf.^nan;
% TEST 373
res373 = nan.^nan;
| {
"pile_set_name": "Github"
} |
package cc.mallet.classify;
import java.util.logging.Logger;
import cc.mallet.fst.TransducerEvaluator;
import cc.mallet.fst.TransducerTrainer;
import cc.mallet.optimize.Optimizable;
import cc.mallet.types.InstanceList;
import cc.mallet.util.MalletLogger;
public abstract class ClassifierEvaluator
{
private static Logger logger = MalletLogger.getLogger(ClassifierEvaluator.class.getName());
InstanceList[] instanceLists;
String[] instanceListDescriptions;
public ClassifierEvaluator (InstanceList[] instanceLists, String[] instanceListDescriptions) {
this.instanceLists = instanceLists;
this.instanceListDescriptions = instanceListDescriptions;
}
public ClassifierEvaluator (InstanceList instanceList1, String instanceListDescription1) {
this(new InstanceList[] {instanceList1}, new String[] {instanceListDescription1});
}
public ClassifierEvaluator (InstanceList instanceList1, String instanceListDescription1,
InstanceList instanceList2, String instanceListDescription2) {
this(new InstanceList[] {instanceList1, instanceList2}, new String[] {instanceListDescription1, instanceListDescription2});
}
public ClassifierEvaluator (InstanceList instanceList1, String instanceListDescription1,
InstanceList instanceList2, String instanceListDescription2,
InstanceList instanceList3, String instanceListDescription3) {
this(new InstanceList[] {instanceList1, instanceList2, instanceList3},
new String[] {instanceListDescription1, instanceListDescription2, instanceListDescription3});
}
/**
	 * Evaluates a ClassifierTrainer and its Classifier on the instance lists specified in the constructor.
* <P>
* The default implementation calls the evaluator's <TT>evaluateInstanceList</TT> on each instance list.
*
	 * @param ct The ClassifierTrainer to evaluate.
*/
public void evaluate (ClassifierTrainer ct) {
this.preamble(ct);
for (int k = 0; k < instanceLists.length; k++)
if (instanceLists[k] != null)
evaluateInstanceList (ct, instanceLists[k], instanceListDescriptions[k]);
}
protected void preamble (ClassifierTrainer ct) {
if (ct instanceof ClassifierTrainer.ByOptimization) {
Optimizable opt;
int iteration = ((ClassifierTrainer.ByOptimization)ct).getIteration();
if ((opt = ((ClassifierTrainer.ByOptimization)ct).getOptimizer().getOptimizable()) instanceof Optimizable.ByValue)
logger.info ("Evaluator iteration="+iteration+" cost="+((Optimizable.ByValue)opt).getValue());
else
logger.info ("Evaluator iteration="+iteration+" cost=NA (not Optimizable.ByValue)");
}
}
public abstract void evaluateInstanceList (ClassifierTrainer trainer, InstanceList instances, String description);
}
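As a concrete illustration, a minimal ClassifierEvaluator subclass could look like the sketch below. It is not part of MALLET itself; it assumes the usual MALLET API (ClassifierTrainer.getClassifier() and Classifier.getAccuracy(InstanceList)) and simply logs accuracy on each held-out list.

package cc.mallet.classify;

import cc.mallet.types.InstanceList;

/** Minimal sketch of a concrete evaluator: logs accuracy on each instance list. */
public class AccuracyLoggingEvaluator extends ClassifierEvaluator
{
	public AccuracyLoggingEvaluator (InstanceList testing, String testingDescription) {
		super (testing, testingDescription);
	}

	public void evaluateInstanceList (ClassifierTrainer trainer, InstanceList instances, String description) {
		// Ask the trainer for its current classifier and measure accuracy on this list.
		Classifier classifier = trainer.getClassifier();
		double accuracy = classifier.getAccuracy (instances);
		System.out.println (description + " accuracy=" + accuracy);
	}
}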
--TEST--
DOMDocument::createEntityReference() should create a new entity reference node
--CREDITS--
Knut Urdalen <[email protected]>
#PHPTestFest2009 Norway 2009-06-09 \o/
--SKIPIF--
<?php
require_once dirname(__FILE__) .'/skipif.inc';
?>
--FILE--
<?php
$dom = new DOMDocument('1.0');
$ref = $dom->createEntityReference('nbsp');
$dom->appendChild($ref);
echo $dom->saveXML();
?>
--EXPECTF--
<?xml version="1.0"?>
&nbsp;
using DigitalRune.Graphics;
using DigitalRune.Mathematics.Algebra;
using DigitalRune.Mathematics.Statistics;
using DigitalRune.Particles;
using DigitalRune.Particles.Effectors;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
namespace Samples.Particles
{
// A particle system that draws one ribbon by connecting all particles.
public static class RibbonEffect
{
public static ParticleSystem Create(ContentManager contentManager)
{
var ps = new ParticleSystem
{
Name = "Ribbon",
MaxNumberOfParticles = 50,
};
// Ribbons are enabled by setting the "Type" to ParticleType.Ribbon. Consecutive
// living particles are connected and rendered as ribbons (quad strips). At least
// two living particles are required to create a ribbon. Dead particles
// ("NormalizedAge" ≥ 1) can be used as delimiters to terminate one ribbon and
// start the next ribbon.
ps.Parameters.AddUniform<ParticleType>(ParticleParameterNames.Type).DefaultValue =
ParticleType.Ribbon;
ps.Parameters.AddUniform<float>(ParticleParameterNames.Lifetime).DefaultValue = 1;
ps.Parameters.AddVarying<Vector3F>(ParticleParameterNames.Position);
ps.Effectors.Add(new StartPositionEffector());
// The parameter "Axis" determines the orientation of the ribbon.
// We could use a fixed orientation. It is also possible to "twist" the ribbon
// by using a varying parameter.
//ps.Parameters.AddUniform<Vector3F>(ParticleParameterNames.Axis).DefaultValue =
// Vector3F.Up;
ps.Effectors.Add(new RibbonEffector());
ps.Effectors.Add(new ReserveParticleEffector { Reserve = 1 });
ps.Parameters.AddUniform<float>(ParticleParameterNames.Size).DefaultValue = 1;
ps.Parameters.AddVarying<Vector3F>(ParticleParameterNames.Color);
ps.Effectors.Add(new StartValueEffector<Vector3F>
{
Parameter = ParticleParameterNames.Color,
Distribution = new BoxDistribution { MinValue = new Vector3F(0.5f, 0.5f, 0.5f), MaxValue = new Vector3F(1, 1, 1) }
});
ps.Parameters.AddVarying<float>(ParticleParameterNames.Alpha);
ps.Effectors.Add(new FuncEffector<float, float>
{
InputParameter = ParticleParameterNames.NormalizedAge,
OutputParameter = ParticleParameterNames.Alpha,
Func = age => 6.7f * age * (1 - age) * (1 - age),
});
ps.Parameters.AddUniform<Texture2D>(ParticleParameterNames.Texture).DefaultValue =
contentManager.Load<Texture2D>("Particles/Ribbon");
// The parameter "TextureTiling" defines how the texture spreads across the ribbon.
// 0 ... no tiling,
// 1 ... repeat every particle,
// n ... repeat every n-th particle
ps.Parameters.AddUniform<int>(ParticleParameterNames.TextureTiling).DefaultValue =
1;
ps.Parameters.AddUniform<float>(ParticleParameterNames.BlendMode).DefaultValue = 0;
ParticleSystemValidator.Validate(ps);
return ps;
}
}
}
<!--
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
*This file is automatically generated. Local changes risk being overwritten by the export process.*
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<!--
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<ItemDefinitionGroup Condition="'$(UseDebugLibraries)' == 'true'">
<ClCompile>
<OmitFramePointers Condition="'%(ClCompile.OmitFramePointers)' == ''">false</OmitFramePointers>
<StrictAliasing Condition="'%(ClCompile.StrictAliasing)' == ''">false</StrictAliasing>
<UnswitchLoops Condition="'%(ClCompile.UnswitchLoops)' == ''">false</UnswitchLoops>
</ClCompile>
</ItemDefinitionGroup>
<!--
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<ItemDefinitionGroup>
<ClCompile>
<PreprocessorDefinitions>__aarch64__;__ARM_ARCH_8__;__ARM_ARCH_8A__;__ARM_EABI__;__LP64__;%(ClCompile.PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalOptions>%(ClCompile.AdditionalOptions)</AdditionalOptions>
<ArmFloatingPointAbi Condition="'%(ClCompile.ArmFloatingPointAbi)' == ''"></ArmFloatingPointAbi>
<ArmFloatingPointHardware Condition="'%(ClCompile.ArmFloatingPointHardware)' == ''"></ArmFloatingPointHardware>
<SysRoot Condition="'%(ClCompile.SysRoot)' == ''">$(PlatformToolsetSysRoot)</SysRoot>
<FunctionSections Condition="'%(ClCompile.FunctionSections)' == ''">true</FunctionSections>
<UnwindTables Condition="'%(ClCompile.UnwindTables)' == ''">true</UnwindTables>
<StackProtection Condition="'%(ClCompile.StackProtection)' == ''">StringProtection</StackProtection>
<OmitFramePointers Condition="'%(ClCompile.OmitFramePointers)' == ''">true</OmitFramePointers>
<StrictAliasing Condition="'%(ClCompile.StrictAliasing)' == ''">true</StrictAliasing>
<InlineLimitSize Condition="'%(ClCompile.InlineLimitSize)' == ''">300</InlineLimitSize>
<UnswitchLoops Condition="'%(ClCompile.UnswitchLoops)' == ''">true</UnswitchLoops>
<PositionIndependentCode Condition="'%(ClCompile.PositionIndependentCode)' == ''">BigPic</PositionIndependentCode>
<PositionIndependentExecutable Condition="'%(ClCompile.PositionIndependentExecutable)' == ''">BigPie</PositionIndependentExecutable>
</ClCompile>
</ItemDefinitionGroup>
<!--
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
<ItemDefinitionGroup>
<Link>
<AdditionalOptions>%(Link.AdditionalOptions)</AdditionalOptions>
<SysRoot Condition="'%(Link.SysRoot)' == ''">$(PlatformToolsetSysRoot)</SysRoot>
<PositionIndependentExecutable Condition="'%(ClCompile.PositionIndependentExecutable)' == ''">Both</PositionIndependentExecutable>
</Link>
</ItemDefinitionGroup>
<!--
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
-->
</Project>
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package pragma provides types that can be embedded into a struct to
// statically enforce or prevent certain language properties.
package pragma
import "sync"
// NoUnkeyedLiterals can be embedded in a struct to prevent unkeyed literals.
type NoUnkeyedLiterals struct{}
// DoNotImplement can be embedded in an interface to prevent trivial
// implementations of the interface.
//
// This is useful to prevent unauthorized implementations of an interface
// so that it can be extended in the future for any protobuf language changes.
type DoNotImplement interface{ ProtoInternal(DoNotImplement) }
// DoNotCompare can be embedded in a struct to prevent comparability.
type DoNotCompare [0]func()
// DoNotCopy can be embedded in a struct to help prevent shallow copies.
// This does not rely on a Go language feature, but rather a special case
// within the vet checker.
//
// See https://golang.org/issues/8005.
type DoNotCopy [0]sync.Mutex
{
"texture": {
"blur": true
}
}
<!DOCTYPE html>
<!-- Any copyright is dedicated to the Public Domain.
- http://creativecommons.org/publicdomain/zero/1.0/ -->
<html>
<body>
<div style="display: flex;">
<div style="display: table-column;"></div>
abc
</div>
</body>
</html>
#include <stdlib.h>

struct tmp
{
long long int pad : 12;
long long int field : 52;
};
struct tmp2
{
long long int field : 52;
long long int pad : 12;
};
struct tmp3
{
long long int pad : 11;
long long int field : 53;
};
struct tmp4
{
long long int field : 53;
long long int pad : 11;
};
struct tmp
sub (struct tmp tmp)
{
tmp.field ^= 0x0008765412345678LL;
return tmp;
}
struct tmp2
sub2 (struct tmp2 tmp2)
{
tmp2.field ^= 0x0008765412345678LL;
return tmp2;
}
struct tmp3
sub3 (struct tmp3 tmp3)
{
tmp3.field ^= 0x0018765412345678LL;
return tmp3;
}
struct tmp4
sub4 (struct tmp4 tmp4)
{
tmp4.field ^= 0x0018765412345678LL;
return tmp4;
}
struct tmp tmp = {0x123, 0x123456789ABCDLL};
struct tmp2 tmp2 = {0x123456789ABCDLL, 0x123};
struct tmp3 tmp3 = {0x123, 0x1FFFF00000000LL};
struct tmp4 tmp4 = {0x1FFFF00000000LL, 0x123};
int main (void)
{
if (sizeof (long long) != 8)
exit (0);
tmp = sub (tmp);
tmp2 = sub2 (tmp2);
if (tmp.pad != 0x123 || tmp.field != 0xFFF9551175BDFDB5LL)
abort ();
if (tmp2.pad != 0x123 || tmp2.field != 0xFFF9551175BDFDB5LL)
abort ();
tmp3 = sub3 (tmp3);
tmp4 = sub4 (tmp4);
if (tmp3.pad != 0x123 || tmp3.field != 0xFFF989AB12345678LL)
abort ();
if (tmp4.pad != 0x123 || tmp4.field != 0xFFF989AB12345678LL)
abort ();
exit (0);
}
<?php
/*
* This file is part of PHPUnit.
*
* (c) Sebastian Bergmann <[email protected]>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
class PHPUnit_Framework_InvalidCoversTargetException extends PHPUnit_Framework_CodeCoverageException
{
}
#!/usr/bin/python
"""
Author : Ms.ambari
contact : [email protected]
Github : https://github.com/Ranginang67
my youtube channel : Ms.ambari
subscribe to my youtube Channel to learn ethical Hacking ^_^
"""
import sys
import os.path
import subprocess
ntfile = ['.module', 'lib']
ubuntu = '' + str(open('.module/jalurU.ms').read())
termux = '' + str(open('.module/jalurT.ms').read())
termlb = '' + str(open('.module/ngentot.ms').read())
instal = '' + str(open('.module/IN.ms').read())
def _main_():
if os.path.isdir('' + ubuntu.strip()):
if os.getuid() != 0:
print '[x] Failed: your must be root'
sys.exit()
if not os.path.isdir(ntfile[0]):
print '[x] Failed: no directory module'
sys.exit()
if not os.path.isdir(ntfile[1]):
print '[x] Failed: no directory lib'
sys.exit()
else:
print '' + instal.strip()
# ==================================================================#
os.system('python .module/files/' + str(open('.module/A.ms'
).read()))
os.system('python .module/files/' + str(open('.module/B.ms'
).read()))
os.system('python .module/files/' + str(open('.module/C.ms'
).read()))
os.system('python .module/files/' + str(open('.module/D.ms'
).read()))
os.system('python .module/files/' + str(open('.module/E.ms'
).read()))
os.system('python .module/files/' + str(open('.module/F.ms'
).read()))
os.system('python .module/files/' + str(open('.module/G.ms'
).read()))
os.system('python .module/files/' + str(open('.module/H.ms'
).read()))
# ==================================================================#
if os.path.isdir('/usr/bin/lib'):
os.system('rm -rf /usr/bin/lib')
os.system('' + str(open('.module/pindahU.ms').read()))
os.system('' + str(open('.module/PindahU.ms').read()))
if not os.path.isdir('/usr/bin/lib'):
os.system('' + str(open('.module/pindahU.ms').read()))
os.system('' + str(open('.module/PindahU.ms').read()))
# ==================================================================#
print '' + str(open('.module/DU.la').read())
print '' + str(open('.module/Du').read())
os.system('python .JM.xn')
# ==================================================================#
if os.path.isdir('' + termux.strip()):
if not os.path.isdir('' + termux.strip()):
sys.exit()
if not os.path.isdir(ntfile[0]):
print '[x] Failed: no directory module'
sys.exit()
if not os.path.isdir(ntfile[1]):
print '[x] Failed: no directory lib'
sys.exit()
else:
print '' + instal.strip()
# ==============================================================#
os.system('python2 .module/' + str(open('.module/A.ms'
).read()))
os.system('python2 .module/' + str(open('.module/B.ms'
).read()))
os.system('python2 .module/' + str(open('.module/C.ms'
).read()))
os.system('python2 .module/' + str(open('.module/D.ms'
).read()))
os.system('python2 .module/' + str(open('.module/E.ms'
).read()))
os.system('python2 .module/' + str(open('.module/F.ms'
).read()))
os.system('python2 .module/' + str(open('.module/G.ms'
).read()))
os.system('python2 .module/' + str(open('.module/H.ms'
).read()))
# ==============================================================#
if os.path.isdir('' + termlb.strip()):
os.system('' + str(open('.module/pacar.ms').read()))
os.system('' + str(open('.module/pindahT.ms').read()))
os.system('' + str(open('.module/PINDAHT.txt').read()))
if not os.path.isdir('' + termlb.strip()):
os.system('' + str(open('.module/pindahT.ms').read()))
os.system('' + str(open('.module/PINDAHT.txt').read()))
# ==============================================================#
print '' + str(open('.module/DU.la').read())
print '' + str(open('.module/Du').read())
os.system('python2 .JM.xn && cd')
# ==============================================================#
if __name__ == '__main__':
_main_()
.class public Landroid/support/design/widget/FloatingActionButton$Behavior;
.super Landroid/support/design/widget/CoordinatorLayout$Behavior;
# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
value = Landroid/support/design/widget/FloatingActionButton;
.end annotation
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0x9
name = "Behavior"
.end annotation
.annotation system Ldalvik/annotation/Signature;
value = {
"Landroid/support/design/widget/CoordinatorLayout$Behavior",
"<",
"Landroid/support/design/widget/FloatingActionButton;",
">;"
}
.end annotation
# static fields
.field private static final AUTO_HIDE_DEFAULT:Z = true
# instance fields
.field private mAutoHideEnabled:Z
.field private mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
.field private mTmpRect:Landroid/graphics/Rect;
# direct methods
.method public constructor <init>()V
.locals 1
invoke-direct {p0}, Landroid/support/design/widget/CoordinatorLayout$Behavior;-><init>()V
const/4 v0, 0x1
iput-boolean v0, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mAutoHideEnabled:Z
return-void
.end method
.method public constructor <init>(Landroid/content/Context;Landroid/util/AttributeSet;)V
.locals 3
invoke-direct {p0, p1, p2}, Landroid/support/design/widget/CoordinatorLayout$Behavior;-><init>(Landroid/content/Context;Landroid/util/AttributeSet;)V
sget-object v0, Landroid/support/design/R$styleable;->FloatingActionButton_Behavior_Layout:[I
invoke-virtual {p1, p2, v0}, Landroid/content/Context;->obtainStyledAttributes(Landroid/util/AttributeSet;[I)Landroid/content/res/TypedArray;
move-result-object v0
sget v1, Landroid/support/design/R$styleable;->FloatingActionButton_Behavior_Layout_behavior_autoHide:I
const/4 v2, 0x1
invoke-virtual {v0, v1, v2}, Landroid/content/res/TypedArray;->getBoolean(IZ)Z
move-result v1
iput-boolean v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mAutoHideEnabled:Z
invoke-virtual {v0}, Landroid/content/res/TypedArray;->recycle()V
return-void
.end method
.method private static isBottomSheet(Landroid/view/View;)Z
.locals 2
.param p0 # Landroid/view/View;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
invoke-virtual {p0}, Landroid/view/View;->getLayoutParams()Landroid/view/ViewGroup$LayoutParams;
move-result-object v0
instance-of v1, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
if-eqz v1, :cond_0
check-cast v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
invoke-virtual {v0}, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->getBehavior()Landroid/support/design/widget/CoordinatorLayout$Behavior;
move-result-object v0
instance-of v0, v0, Landroid/support/design/widget/BottomSheetBehavior;
:goto_0
return v0
:cond_0
const/4 v0, 0x0
goto :goto_0
.end method
.method private offsetIfNeeded(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;)V
.locals 7
const/4 v2, 0x0
iget-object v3, p2, Landroid/support/design/widget/FloatingActionButton;->mShadowPadding:Landroid/graphics/Rect;
if-eqz v3, :cond_2
invoke-virtual {v3}, Landroid/graphics/Rect;->centerX()I
move-result v0
if-lez v0, :cond_2
invoke-virtual {v3}, Landroid/graphics/Rect;->centerY()I
move-result v0
if-lez v0, :cond_2
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getLayoutParams()Landroid/view/ViewGroup$LayoutParams;
move-result-object v0
check-cast v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getRight()I
move-result v1
invoke-virtual {p1}, Landroid/support/design/widget/CoordinatorLayout;->getWidth()I
move-result v4
iget v5, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->rightMargin:I
sub-int/2addr v4, v5
if-lt v1, v4, :cond_3
iget v1, v3, Landroid/graphics/Rect;->right:I
:goto_0
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getBottom()I
move-result v4
invoke-virtual {p1}, Landroid/support/design/widget/CoordinatorLayout;->getHeight()I
move-result v5
iget v6, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->bottomMargin:I
sub-int/2addr v5, v6
if-lt v4, v5, :cond_4
iget v2, v3, Landroid/graphics/Rect;->bottom:I
:cond_0
:goto_1
if-eqz v2, :cond_1
invoke-static {p2, v2}, Landroid/support/v4/view/ViewCompat;->offsetTopAndBottom(Landroid/view/View;I)V
:cond_1
if-eqz v1, :cond_2
invoke-static {p2, v1}, Landroid/support/v4/view/ViewCompat;->offsetLeftAndRight(Landroid/view/View;I)V
:cond_2
return-void
:cond_3
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getLeft()I
move-result v1
iget v4, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->leftMargin:I
if-gt v1, v4, :cond_5
iget v1, v3, Landroid/graphics/Rect;->left:I
neg-int v1, v1
goto :goto_0
:cond_4
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getTop()I
move-result v4
iget v0, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->topMargin:I
if-gt v4, v0, :cond_0
iget v0, v3, Landroid/graphics/Rect;->top:I
neg-int v2, v0
goto :goto_1
:cond_5
move v1, v2
goto :goto_0
.end method
.method private shouldUpdateVisibility(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
.locals 3
const/4 v1, 0x0
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getLayoutParams()Landroid/view/ViewGroup$LayoutParams;
move-result-object v0
check-cast v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
iget-boolean v2, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mAutoHideEnabled:Z
if-nez v2, :cond_0
move v0, v1
:goto_0
return v0
:cond_0
invoke-virtual {v0}, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->getAnchorId()I
move-result v0
invoke-virtual {p1}, Landroid/view/View;->getId()I
move-result v2
if-eq v0, v2, :cond_1
move v0, v1
goto :goto_0
:cond_1
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getUserSetVisibility()I
move-result v0
if-eqz v0, :cond_2
move v0, v1
goto :goto_0
:cond_2
const/4 v0, 0x1
goto :goto_0
.end method
.method private updateFabVisibilityForAppBarLayout(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/AppBarLayout;Landroid/support/design/widget/FloatingActionButton;)Z
.locals 3
const/4 v0, 0x0
invoke-direct {p0, p2, p3}, Landroid/support/design/widget/FloatingActionButton$Behavior;->shouldUpdateVisibility(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
move-result v1
if-nez v1, :cond_0
:goto_0
return v0
:cond_0
iget-object v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mTmpRect:Landroid/graphics/Rect;
if-nez v1, :cond_1
new-instance v1, Landroid/graphics/Rect;
invoke-direct {v1}, Landroid/graphics/Rect;-><init>()V
iput-object v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mTmpRect:Landroid/graphics/Rect;
:cond_1
iget-object v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mTmpRect:Landroid/graphics/Rect;
invoke-static {p1, p2, v1}, Landroid/support/design/widget/ViewGroupUtils;->getDescendantRect(Landroid/view/ViewGroup;Landroid/view/View;Landroid/graphics/Rect;)V
iget v1, v1, Landroid/graphics/Rect;->bottom:I
invoke-virtual {p2}, Landroid/support/design/widget/AppBarLayout;->getMinimumHeightForVisibleOverlappingContent()I
move-result v2
if-gt v1, v2, :cond_2
iget-object v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
invoke-virtual {p3, v1, v0}, Landroid/support/design/widget/FloatingActionButton;->hide(Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;Z)V
:goto_1
const/4 v0, 0x1
goto :goto_0
:cond_2
iget-object v1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
invoke-virtual {p3, v1, v0}, Landroid/support/design/widget/FloatingActionButton;->show(Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;Z)V
goto :goto_1
.end method
.method private updateFabVisibilityForBottomSheet(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
.locals 4
const/4 v1, 0x0
invoke-direct {p0, p1, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->shouldUpdateVisibility(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
move-result v0
if-nez v0, :cond_0
move v0, v1
:goto_0
return v0
:cond_0
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getLayoutParams()Landroid/view/ViewGroup$LayoutParams;
move-result-object v0
check-cast v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
invoke-virtual {p1}, Landroid/view/View;->getTop()I
move-result v2
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getHeight()I
move-result v3
div-int/lit8 v3, v3, 0x2
iget v0, v0, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->topMargin:I
add-int/2addr v0, v3
if-ge v2, v0, :cond_1
iget-object v0, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
invoke-virtual {p2, v0, v1}, Landroid/support/design/widget/FloatingActionButton;->hide(Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;Z)V
:goto_1
const/4 v0, 0x1
goto :goto_0
:cond_1
iget-object v0, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
invoke-virtual {p2, v0, v1}, Landroid/support/design/widget/FloatingActionButton;->show(Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;Z)V
goto :goto_1
.end method
# virtual methods
.method public getInsetDodgeRect(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;Landroid/graphics/Rect;)Z
.locals 5
.param p1 # Landroid/support/design/widget/CoordinatorLayout;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
.param p2 # Landroid/support/design/widget/FloatingActionButton;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
.param p3 # Landroid/graphics/Rect;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
iget-object v0, p2, Landroid/support/design/widget/FloatingActionButton;->mShadowPadding:Landroid/graphics/Rect;
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getLeft()I
move-result v1
iget v2, v0, Landroid/graphics/Rect;->left:I
add-int/2addr v1, v2
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getTop()I
move-result v2
iget v3, v0, Landroid/graphics/Rect;->top:I
add-int/2addr v2, v3
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getRight()I
move-result v3
iget v4, v0, Landroid/graphics/Rect;->right:I
sub-int/2addr v3, v4
invoke-virtual {p2}, Landroid/support/design/widget/FloatingActionButton;->getBottom()I
move-result v4
iget v0, v0, Landroid/graphics/Rect;->bottom:I
sub-int v0, v4, v0
invoke-virtual {p3, v1, v2, v3, v0}, Landroid/graphics/Rect;->set(IIII)V
const/4 v0, 0x1
return v0
.end method
.method public bridge synthetic getInsetDodgeRect(Landroid/support/design/widget/CoordinatorLayout;Landroid/view/View;Landroid/graphics/Rect;)Z
.locals 1
.param p1 # Landroid/support/design/widget/CoordinatorLayout;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
.param p2 # Landroid/view/View;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
.param p3 # Landroid/graphics/Rect;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
check-cast p2, Landroid/support/design/widget/FloatingActionButton;
invoke-virtual {p0, p1, p2, p3}, Landroid/support/design/widget/FloatingActionButton$Behavior;->getInsetDodgeRect(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;Landroid/graphics/Rect;)Z
move-result v0
return v0
.end method
.method public isAutoHideEnabled()Z
.locals 1
iget-boolean v0, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mAutoHideEnabled:Z
return v0
.end method
.method public onAttachedToLayoutParams(Landroid/support/design/widget/CoordinatorLayout$LayoutParams;)V
.locals 1
.param p1 # Landroid/support/design/widget/CoordinatorLayout$LayoutParams;
.annotation build Landroid/support/annotation/NonNull;
.end annotation
.end param
iget v0, p1, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->dodgeInsetEdges:I
if-nez v0, :cond_0
const/16 v0, 0x50
iput v0, p1, Landroid/support/design/widget/CoordinatorLayout$LayoutParams;->dodgeInsetEdges:I
:cond_0
return-void
.end method
.method public onDependentViewChanged(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;Landroid/view/View;)Z
.locals 1
instance-of v0, p3, Landroid/support/design/widget/AppBarLayout;
if-eqz v0, :cond_1
check-cast p3, Landroid/support/design/widget/AppBarLayout;
invoke-direct {p0, p1, p3, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->updateFabVisibilityForAppBarLayout(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/AppBarLayout;Landroid/support/design/widget/FloatingActionButton;)Z
:cond_0
:goto_0
const/4 v0, 0x0
return v0
:cond_1
invoke-static {p3}, Landroid/support/design/widget/FloatingActionButton$Behavior;->isBottomSheet(Landroid/view/View;)Z
move-result v0
if-eqz v0, :cond_0
invoke-direct {p0, p3, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->updateFabVisibilityForBottomSheet(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
goto :goto_0
.end method
.method public bridge synthetic onDependentViewChanged(Landroid/support/design/widget/CoordinatorLayout;Landroid/view/View;Landroid/view/View;)Z
.locals 1
check-cast p2, Landroid/support/design/widget/FloatingActionButton;
invoke-virtual {p0, p1, p2, p3}, Landroid/support/design/widget/FloatingActionButton$Behavior;->onDependentViewChanged(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;Landroid/view/View;)Z
move-result v0
return v0
.end method
.method public onLayoutChild(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;I)Z
.locals 5
invoke-virtual {p1, p2}, Landroid/support/design/widget/CoordinatorLayout;->getDependencies(Landroid/view/View;)Ljava/util/List;
move-result-object v2
const/4 v0, 0x0
invoke-interface {v2}, Ljava/util/List;->size()I
move-result v3
move v1, v0
:goto_0
if-ge v1, v3, :cond_0
invoke-interface {v2, v1}, Ljava/util/List;->get(I)Ljava/lang/Object;
move-result-object v0
check-cast v0, Landroid/view/View;
instance-of v4, v0, Landroid/support/design/widget/AppBarLayout;
if-eqz v4, :cond_1
check-cast v0, Landroid/support/design/widget/AppBarLayout;
invoke-direct {p0, p1, v0, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->updateFabVisibilityForAppBarLayout(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/AppBarLayout;Landroid/support/design/widget/FloatingActionButton;)Z
move-result v0
if-eqz v0, :cond_2
:cond_0
invoke-virtual {p1, p2, p3}, Landroid/support/design/widget/CoordinatorLayout;->onLayoutChild(Landroid/view/View;I)V
invoke-direct {p0, p1, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->offsetIfNeeded(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;)V
const/4 v0, 0x1
return v0
:cond_1
invoke-static {v0}, Landroid/support/design/widget/FloatingActionButton$Behavior;->isBottomSheet(Landroid/view/View;)Z
move-result v4
if-eqz v4, :cond_2
invoke-direct {p0, v0, p2}, Landroid/support/design/widget/FloatingActionButton$Behavior;->updateFabVisibilityForBottomSheet(Landroid/view/View;Landroid/support/design/widget/FloatingActionButton;)Z
move-result v0
if-nez v0, :cond_0
:cond_2
add-int/lit8 v0, v1, 0x1
move v1, v0
goto :goto_0
.end method
.method public bridge synthetic onLayoutChild(Landroid/support/design/widget/CoordinatorLayout;Landroid/view/View;I)Z
.locals 1
check-cast p2, Landroid/support/design/widget/FloatingActionButton;
invoke-virtual {p0, p1, p2, p3}, Landroid/support/design/widget/FloatingActionButton$Behavior;->onLayoutChild(Landroid/support/design/widget/CoordinatorLayout;Landroid/support/design/widget/FloatingActionButton;I)Z
move-result v0
return v0
.end method
.method public setAutoHideEnabled(Z)V
.locals 0
iput-boolean p1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mAutoHideEnabled:Z
return-void
.end method
.method setInternalAutoHideListener(Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;)V
.locals 0
.annotation build Landroid/support/annotation/VisibleForTesting;
.end annotation
iput-object p1, p0, Landroid/support/design/widget/FloatingActionButton$Behavior;->mInternalAutoHideListener:Landroid/support/design/widget/FloatingActionButton$OnVisibilityChangedListener;
return-void
.end method
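For readability, the auto-hide decision encoded in the smali above corresponds roughly to the following Java. This is an illustrative reconstruction, not the verbatim support-library source; getUserSetVisibility() is package-private, so the sketch uses getVisibility() instead.

import android.support.design.widget.BottomSheetBehavior;
import android.support.design.widget.CoordinatorLayout;
import android.support.design.widget.FloatingActionButton;
import android.view.View;
import android.view.ViewGroup;

class FabAutoHideSketch {

    private boolean mAutoHideEnabled = true;   // AUTO_HIDE_DEFAULT

    // Mirrors isBottomSheet(): true when the dependency is governed by a BottomSheetBehavior.
    static boolean isBottomSheet(View view) {
        ViewGroup.LayoutParams lp = view.getLayoutParams();
        return lp instanceof CoordinatorLayout.LayoutParams
                && ((CoordinatorLayout.LayoutParams) lp).getBehavior() instanceof BottomSheetBehavior;
    }

    // Mirrors shouldUpdateVisibility(): decide whether auto-hide applies to this FAB.
    boolean shouldUpdateVisibility(View dependency, FloatingActionButton child) {
        CoordinatorLayout.LayoutParams lp =
                (CoordinatorLayout.LayoutParams) child.getLayoutParams();
        if (!mAutoHideEnabled) {
            return false;                              // app:behavior_autoHide="false"
        }
        if (lp.getAnchorId() != dependency.getId()) {
            return false;                              // FAB is not anchored to this dependency
        }
        if (child.getVisibility() != View.VISIBLE) {
            return false;                              // the FAB was hidden explicitly
        }
        return true;
    }
}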
/*
* (c) Copyright 2018 Palantir Technologies Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.palantir.atlasdb.sweep.priority;
import java.util.List;
import java.util.Set;
import java.util.stream.Stream;
import org.immutables.value.Value;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.google.common.base.Preconditions;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;
import com.palantir.atlasdb.keyvalue.api.TableReference;
@JsonSerialize(as = ImmutableSweepPriorityOverrideConfig.class)
@JsonDeserialize(as = ImmutableSweepPriorityOverrideConfig.class)
@Value.Immutable
public abstract class SweepPriorityOverrideConfig {
/**
* Fully qualified table references (of the form namespace.tablename) that the background sweeper should
* prioritise for sweeping. Tables that do not exist in the key-value service will be ignored.
*
* In a steady state, if any tables are configured in priorityTables, then the background sweeper will select
* randomly from these tables. Otherwise, the background sweeper will rely on existing heuristics for determining
* which tables might be attractive candidates for sweeping.
*
* Live reloading: In all cases, consideration of the priority tables only takes place between iterations of
* sweep (so an existing iteration of sweep on a non-priority table will run to completion or failure before we
* shift to a table on the priority list).
*/
@Value.Default
public Set<String> priorityTables() {
return ImmutableSet.of();
}
/**
     * Derived from {@link SweepPriorityOverrideConfig#priorityTables()}, but returns a list, which is useful for
* fast random selection of priority tables. There are no guarantees on the order of elements in this list,
* though it is guaranteed that on the same {@link SweepPriorityOverrideConfig} object, the list elements are
* in a consistent order.
*/
@Value.Derived
public List<String> priorityTablesAsList() {
return ImmutableList.copyOf(priorityTables());
}
/**
* Fully qualified table references (of the form namespace.tablename) that the background sweeper
* should not sweep. Tables that do not exist in the key-value service will be ignored. If all tables eligible
* for sweep are blacklisted, then the background sweeper will report that there is nothing to sweep.
*
* Live reloading: If this parameter is live reloaded during an iteration of sweep and the current table being
* swept is now blacklisted, the iteration of sweep will complete, and then another table (or no table) will be
* selected for sweeping.
*/
@Value.Default
public Set<String> blacklistTables() {
return ImmutableSet.of();
}
/**
* Default configuration for the sweep priority override config is no overrides.
*/
public static SweepPriorityOverrideConfig defaultConfig() {
return ImmutableSweepPriorityOverrideConfig.builder().build();
}
@Value.Check
void validateTableNames() {
Stream.concat(priorityTables().stream(), blacklistTables().stream()).forEach(
tableName -> Preconditions.checkState(
TableReference.isFullyQualifiedName(tableName),
"%s is not a fully qualified table name",
tableName));
}
@Value.Check
void validatePriorityTablesAndBlacklistTablesAreDisjoint() {
Preconditions.checkState(Sets.intersection(priorityTables(), blacklistTables()).isEmpty(),
"The priority and blacklist tables should not have any overlap, but found %s",
Sets.intersection(priorityTables(), blacklistTables()));
}
}
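As a usage sketch (illustrative only: the table names are made up, and the builder method names assume the standard Immutables codegen for the two Set-valued attributes):

package com.palantir.atlasdb.sweep.priority;

final class SweepOverrideExample {
    // Hypothetical usage of the generated builder; the table names must be fully qualified
    // and the two sets must stay disjoint, otherwise the @Value.Check methods above
    // throw IllegalStateException.
    static SweepPriorityOverrideConfig example() {
        return ImmutableSweepPriorityOverrideConfig.builder()
                .addPriorityTables("myapp.events")      // swept preferentially
                .addBlacklistTables("myapp.scratch")    // never selected for sweep
                .build();
    }
}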
.TH AIRBASE-NG 8 "October 2014" "Version 1.2-rc1"
.SH NAME
airbase-ng - multi-purpose tool aimed at attacking clients as opposed to the Access Point (AP) itself
.SH SYNOPSIS
.B airbase-ng
[options] <interface name>
.SH DESCRIPTION
.BI airbase-ng
is a multi-purpose tool aimed at attacking clients as opposed to the Access Point (AP) itself. Since it is so versatile and flexible, summarizing it is a challenge. Here are some of the feature highlights:
.br
- Implements the Caffe Latte WEP client attack
.br
- Implements the Hirte WEP client attack
.br
- Ability to cause the WPA/WPA2 handshake to be captured
.br
- Ability to act as an ad-hoc Access Point
.br
- Ability to act as a full Access Point
.br
- Ability to filter by SSID or client MAC addresses
.br
- Ability to manipulate and resend packets
.br
- Ability to encrypt sent packets and decrypt received packets
The main idea of the implementation is that it should encourage clients to associate with the fake AP, not prevent them from accessing the real AP.
A tap interface (atX) is created when airbase-ng is run. This can be used to receive decrypted packets or to send encrypted packets.
As real clients will most probably send probe requests for common/configured networks, these frames are important for binding a client to our softAP. In this case, the AP will respond to any probe request with a proper probe response, which tells the client to authenticate to the airbase-ng BSSID. That being said, this mode could possibly disrupt the correct functionality of many APs on the same channel.
.SH OPTIONS
.PP
.TP
.I -H, --help
Shows the help screen.
.TP
.I -a <bssid>
If the BSSID is not explicitly specified by using "-a <BSSID>", then the current MAC of the specified interface is used.
.TP
.I -i <iface>
Also capture and process from this interface in addition to the replay interface.
.TP
.I -w <WEP key>
If WEP should be used as encryption, then the parameter "-w <WEP key>" sets the en-/decryption key. This is sufficient to let airbase-ng set all the appropriate flags by itself.
If the softAP operates with WEP encryption, the client can choose to use open system authentication or shared key authentication. Both authentication methods are supported by airbase-ng. But to get a keystream, the user can try to force the client to use shared key authentication. "-s" forces a shared key auth and "-S <len>" sets the challenge length.
.TP
.I -h <MAC>
This is the source MAC for the man-in-the-middle attack. The "-M" must also be specified.
.TP
.I -f <disallow>
If this option is not specified, it defaults to "-f allow". This means the various client MAC filters (-d and -D) define which clients to accept.
By using the "-f disallow" option, the selection is reversed and airbase-ng ignores the clients specified by the filters.
.TP
.I -W <0|1>
This sets the beacon WEP flag. Remember that clients will normally only connect to APs which are the same as themselves, meaning WEP to WEP, open to open.
The "auto" option allows airbase-ng to automatically set the flag based on the context of the other options specified. For example, if you set a WEP key with -w, then the beacon flag would be set to WEP.
One other use of "auto" is to deal with clients which can automatically adjust their connection type. However, these are few and far between.
In practice, it is best to set the value to the type of clients you are dealing with.
.TP
.I -q
This suppresses printing any statistics or status information.
.TP
.I -v
This prints additional messages and details to assist in debugging.
.TP
.I -M
This option is not implemented yet. It is a man-in-the-middle attack between specified clients and BSSIDs.
.TP
.I -A, --ad-hoc
This causes airbase-ng to act as an ad-hoc client instead of a normal Access Point.
In ad-hoc mode airbase-ng also sends beacons, but doesn\(aqt need any authentication/association. It can be activated by using "-A". The soft AP will adjust all flags needed to simulate a station in ad-hoc mode automatically and generate a random MAC, which is used as CELL MAC instead of the BSSID. This can be overwritten by the "-a <BSSID>" tag. The interface MAC will then be used as source mac, which can be changed with "-h <sourceMAC>".
.TP
.I -Y <in|out|both>
The parameter "-Y" enables the "external processing" Mode. This creates a second interface "atX", which is used to replay/modify/drop or inject packets at will. This interface must also be brought up with ifconfig and an external tool is needed to create a loop on that interface.
The packet structure is rather simple: the ethernet header (14 bytes) is ignored, and right after it follows the complete ieee80211 frame, the same way it is going to be processed by airbase-ng (for incoming packets) or before it is sent out of the wireless card (outgoing packets). This mode intercepts all data packets and loops them through an external application, which decides what happens with them. The MAC and IP of the second tap interface don\(aqt matter, as real ethernet frames on this interface are dropped anyway.
There are 3 arguments for "-Y": "in", "out" and "both", which specify the direction of frames to loop through the external application. Obviously "in" redirects only incoming (through the wireless NIC) frames, while outgoing frames aren\(aqt touched. "out" does the opposite, it only loops outgoing packets and "both" sends all both directions through the second tap interface.
There is a small and simple example application to replay all frames on the second interface. The tool is called "replay.py" and is located in "./test". It\(aqs written in Python, but the language doesn\(aqt matter. It uses pcapy to read the frames and scapy to possibly alter/show and reinject the frames. The tool, as it is, simply replays all frames and prints a short summary of the received frames. The variable "packet" contains the complete ieee80211 packet, which can easily be dissected and modified using scapy.
This can be compared to ettercap filters, but is more powerful, as a real programming language can be used to build complex logic for filtering and packet customization. The downside of using Python is that it adds a delay of around 100ms and the CPU utilization is rather high on a high-speed network, but it\(aqs perfect for a demonstration with only a few lines of code.
.TP
.I -c <channel>
This is used to specify the channel on which to run the Access Point.
.TP
.I -X, --hidden
This causes the Access Point to hide the SSID and to not broadcast the value.
.TP
.I -s
When specified, this forces shared key authentication for all clients.
The soft AP will send an "authentication method unsupported" rejection to any open system authentication request if "-s" is specified.
.TP
.I -S
It sets the shared key challenge length, which can be anything from 16 to 1480. The default is 128 bytes. It is the number of bytes used in the random challenge. Since one tag can contain a maximum size of 255 bytes, any value above 255 creates several challenge tags until all specified bytes are written. Many clients ignore values different than 128 bytes so this option may not always work.
.TP
.I -L, --caffe-latte
Airbase-ng also contains the new caffe-latte attack, which is also implemented in aireplay-ng as attack "-6". It can be used with "-L" or "--caffe-latte". This attack specifically works against clients, as it waits for a broadcast arp request, which happens to be a gratuitous arp. See the aircrack-ng documentation for an explanation of what a gratuitous arp is. It then flips a few bits in the sender MAC and IP, corrects the ICV (crc32) value and sends it back to the client, where it came from. The reason this attack works in practice is that Windows, at least, sends gratuitous arps after a connection on layer 2 is established and a static IP is set, or when DHCP fails and Windows assigns an IP out of 169.254.X.X.
"-x <pps>" sets the number of packets per second to send when performing the caffe-latte attack. At the moment, this attack doesn\(aqt stop; it continuously sends arp requests. Airodump-ng is needed to capture the replies.
.TP
.I -N, --cfrag
This attack listens for an ARP request or IP packet from the client. Once one is received, a small amount of PRGA is extracted and then used to create an ARP request packet targeted to the client. This ARP request is actually made up of multiple packet fragments, such that when it is received, the client will respond.
This attack works especially well against ad-hoc networks. It can also be used against softAP clients and normal AP clients.
.TP
.I -x <nbpps>
This sets the number of packets per second that will be sent (default: 100).
.TP
.I -y
When using this option, the fake AP will not respond to broadcast probes. A broadcast probe is one where the specific AP is not identified uniquely. Typically, most APs will respond with probe responses to a broadcast probe. This flag prevents that from happening. It will only respond when the specific AP is uniquely requested.
.TP
.I -0
This causes all WPA, WPA2 and WEP tags to be included in the beacons sent. It cannot be specified when also using -z or -Z.
.TP
.I -z <type>
This specifies the WPA beacon tags. The valid values are: 1=WEP40 2=TKIP 3=WRAP 4=CCMP 5=WEP104.
.TP
.I -Z <type>
Same as -z, but for WPA2.
.TP
.I -V <type>
This specifies the valid EAPOL types. The valid values are: 1=MD5 2=SHA1 3=auto
.TP
.I -F <prefix>
This option causes airbase-ng to write all sent and received packets to a pcap file on disk. This is the file prefix (like airodump-ng -w).
.TP
.I -P
This causes the fake access point to respond to all probes regardless of the ESSIDs specified.
.TP
.I -I <interval>
This sets the time in milliseconds between each beacon.
.TP
.I -C <seconds>
The wildcard ESSIDs will also be beaconed for this number of seconds. A good typical value to use is "-C 60" (requires -P).
.PP
.TP
.B Filter options:
.TP
.I --bssid <MAC>, -b <MAC>
BSSID to filter/use.
.TP
.I --bssids <file>, -B <file>
Read a list of BSSIDs out of that file.
.TP
.I --client <MAC>, -d <MAC>
MAC of client to accept.
.TP
.I --clients <file>, -D <file>
Read a list of client MACs out of that file.
.TP
.I --essid <ESSID>, -e <ESSID>
Specify a single ESSID. For SSIDs containing special characters, see http://www.aircrack-ng.org/doku.php?id=faq#how_to_use_spaces_double_quote_and_single_quote_etc._in_ap_names
.TP
.I --essids <file>, -E <file>
Read a list of ESSIDs out of that file.
.SH AUTHOR
This manual page was written by Thomas d\(aqOtreppe.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU General Public License, Version 2 or any later version published by the Free Software Foundation
On Debian systems, the complete text of the GNU General Public License can be found in /usr/share/common-licenses/GPL.
.PP
.SH SEE ALSO
.br
.B aircrack-ng(1)
.br
.B airdecap-ng(1)
.br
.B airdecloak-ng(1)
.br
.B airdriver-ng(8)
.br
.B aireplay-ng(8)
.br
.B airmon-ng(8)
.br
.B airodump-ng(8)
.br
.B airolib-ng(1)
.br
.B airserv-ng(8)
.br
.B buddy-ng(1)
.br
.B easside-ng(8)
.br
.B ivstools(1)
.br
.B kstats(1)
.br
.B makeivs-ng(1)
.br
.B packetforge-ng(1)
.br
.B tkiptun-ng(8)
.br
.B wesside-ng(8)
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com/
* Author: Rob Clark <[email protected]>
*/
#include <linux/dma-mapping.h>
#include <linux/seq_file.h>
#include <linux/shmem_fs.h>
#include <linux/spinlock.h>
#include <linux/pfn_t.h>
#include <drm/drm_prime.h>
#include <drm/drm_vma_manager.h>
#include "omap_drv.h"
#include "omap_dmm_tiler.h"
/*
* GEM buffer object implementation.
*/
/* note: we use upper 8 bits of flags for driver-internal flags: */
#define OMAP_BO_MEM_DMA_API 0x01000000 /* memory allocated with the dma_alloc_* API */
#define OMAP_BO_MEM_SHMEM 0x02000000 /* memory allocated through shmem backing */
#define OMAP_BO_MEM_DMABUF 0x08000000 /* memory imported from a dmabuf */
struct omap_gem_object {
struct drm_gem_object base;
struct list_head mm_list;
u32 flags;
/** width/height for tiled formats (rounded up to slot boundaries) */
u16 width, height;
/** roll applied when mapping to DMM */
u32 roll;
/** protects dma_addr_cnt, block, pages, dma_addrs and vaddr */
struct mutex lock;
/**
* dma_addr contains the buffer DMA address. It is valid for
*
* - buffers allocated through the DMA mapping API (with the
* OMAP_BO_MEM_DMA_API flag set)
*
* - buffers imported from dmabuf (with the OMAP_BO_MEM_DMABUF flag set)
* if they are physically contiguous (when sgt->orig_nents == 1)
*
* - buffers mapped through the TILER when dma_addr_cnt is not zero, in
* which case the DMA address points to the TILER aperture
*
* Physically contiguous buffers have their DMA address equal to the
* physical address as we don't remap those buffers through the TILER.
*
* Buffers mapped to the TILER have their DMA address pointing to the
* TILER aperture. As TILER mappings are refcounted (through
* dma_addr_cnt) the DMA address must be accessed through omap_gem_pin()
* to ensure that the mapping won't disappear unexpectedly. References
* must be released with omap_gem_unpin().
*/
dma_addr_t dma_addr;
/**
* # of users of dma_addr
*/
refcount_t dma_addr_cnt;
/**
* If the buffer has been imported from a dmabuf the OMAP_DB_DMABUF flag
* is set and the sgt field is valid.
*/
struct sg_table *sgt;
/**
* tiler block used when buffer is remapped in DMM/TILER.
*/
struct tiler_block *block;
/**
* Array of backing pages, if allocated. Note that pages are never
* allocated for buffers originally allocated from contiguous memory
*/
struct page **pages;
/** addresses corresponding to pages in above array */
dma_addr_t *dma_addrs;
/**
* Virtual address, if mapped.
*/
void *vaddr;
};
#define to_omap_bo(x) container_of(x, struct omap_gem_object, base)
/* To deal with userspace mmap'ings of 2d tiled buffers, which (a) are
* not necessarily pinned in TILER all the time, and (b) when they are
* they are not necessarily page aligned, we reserve one or more small
* regions in each of the 2d containers to use as a user-GART where we
* can create a second page-aligned mapping of parts of the buffer
* being accessed from userspace.
*
* Note that we could optimize slightly when we know that multiple
* tiler containers are backed by the same PAT.. but I'll leave that
* for later..
*/
#define NUM_USERGART_ENTRIES 2
struct omap_drm_usergart_entry {
struct tiler_block *block; /* the reserved tiler block */
dma_addr_t dma_addr;
struct drm_gem_object *obj; /* the current pinned obj */
pgoff_t obj_pgoff; /* page offset of obj currently
mapped in */
};
struct omap_drm_usergart {
struct omap_drm_usergart_entry entry[NUM_USERGART_ENTRIES];
int height; /* height in rows */
int height_shift; /* ilog2(height in rows) */
int slot_shift; /* ilog2(width per slot) */
int stride_pfn; /* stride in pages */
int last; /* index of last used entry */
};
/* -----------------------------------------------------------------------------
* Helpers
*/
/** get mmap offset */
u64 omap_gem_mmap_offset(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
int ret;
size_t size;
/* Make it mmapable */
size = omap_gem_mmap_size(obj);
ret = drm_gem_create_mmap_offset_size(obj, size);
if (ret) {
dev_err(dev->dev, "could not allocate mmap offset\n");
return 0;
}
return drm_vma_node_offset_addr(&obj->vma_node);
}
static bool omap_gem_is_contiguous(struct omap_gem_object *omap_obj)
{
if (omap_obj->flags & OMAP_BO_MEM_DMA_API)
return true;
if ((omap_obj->flags & OMAP_BO_MEM_DMABUF) && omap_obj->sgt->nents == 1)
return true;
return false;
}
/* -----------------------------------------------------------------------------
* Eviction
*/
static void omap_gem_evict_entry(struct drm_gem_object *obj,
enum tiler_fmt fmt, struct omap_drm_usergart_entry *entry)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
struct omap_drm_private *priv = obj->dev->dev_private;
int n = priv->usergart[fmt].height;
size_t size = PAGE_SIZE * n;
loff_t off = omap_gem_mmap_offset(obj) +
(entry->obj_pgoff << PAGE_SHIFT);
const int m = DIV_ROUND_UP(omap_obj->width << fmt, PAGE_SIZE);
if (m > 1) {
int i;
/* if stride > than PAGE_SIZE then sparse mapping: */
for (i = n; i > 0; i--) {
unmap_mapping_range(obj->dev->anon_inode->i_mapping,
off, PAGE_SIZE, 1);
off += PAGE_SIZE * m;
}
} else {
unmap_mapping_range(obj->dev->anon_inode->i_mapping,
off, size, 1);
}
entry->obj = NULL;
}
/* Evict a buffer from usergart, if it is mapped there */
static void omap_gem_evict(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
struct omap_drm_private *priv = obj->dev->dev_private;
if (omap_obj->flags & OMAP_BO_TILED_MASK) {
enum tiler_fmt fmt = gem2fmt(omap_obj->flags);
int i;
for (i = 0; i < NUM_USERGART_ENTRIES; i++) {
struct omap_drm_usergart_entry *entry =
&priv->usergart[fmt].entry[i];
if (entry->obj == obj)
omap_gem_evict_entry(obj, fmt, entry);
}
}
}
/* -----------------------------------------------------------------------------
* Page Management
*/
/*
* Ensure backing pages are allocated. Must be called with the omap_obj.lock
* held.
*/
static int omap_gem_attach_pages(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
struct page **pages;
int npages = obj->size >> PAGE_SHIFT;
int i, ret;
dma_addr_t *addrs;
lockdep_assert_held(&omap_obj->lock);
/*
* If not using shmem (in which case backing pages don't need to be
* allocated) or if pages are already allocated we're done.
*/
if (!(omap_obj->flags & OMAP_BO_MEM_SHMEM) || omap_obj->pages)
return 0;
pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
dev_err(obj->dev->dev, "could not get pages: %ld\n", PTR_ERR(pages));
return PTR_ERR(pages);
}
/* for non-cached buffers, ensure the new pages are clean because
* DSS, GPU, etc. are not cache coherent:
*/
if (omap_obj->flags & (OMAP_BO_WC|OMAP_BO_UNCACHED)) {
addrs = kmalloc_array(npages, sizeof(*addrs), GFP_KERNEL);
if (!addrs) {
ret = -ENOMEM;
goto free_pages;
}
for (i = 0; i < npages; i++) {
addrs[i] = dma_map_page(dev->dev, pages[i],
0, PAGE_SIZE, DMA_TO_DEVICE);
if (dma_mapping_error(dev->dev, addrs[i])) {
dev_warn(dev->dev,
"%s: failed to map page\n", __func__);
for (i = i - 1; i >= 0; --i) {
dma_unmap_page(dev->dev, addrs[i],
PAGE_SIZE, DMA_TO_DEVICE);
}
ret = -ENOMEM;
goto free_addrs;
}
}
} else {
addrs = kcalloc(npages, sizeof(*addrs), GFP_KERNEL);
if (!addrs) {
ret = -ENOMEM;
goto free_pages;
}
}
omap_obj->dma_addrs = addrs;
omap_obj->pages = pages;
return 0;
free_addrs:
kfree(addrs);
free_pages:
drm_gem_put_pages(obj, pages, true, false);
return ret;
}
/* Release backing pages. Must be called with the omap_obj.lock held. */
static void omap_gem_detach_pages(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
unsigned int npages = obj->size >> PAGE_SHIFT;
unsigned int i;
lockdep_assert_held(&omap_obj->lock);
for (i = 0; i < npages; i++) {
if (omap_obj->dma_addrs[i])
dma_unmap_page(obj->dev->dev, omap_obj->dma_addrs[i],
PAGE_SIZE, DMA_TO_DEVICE);
}
kfree(omap_obj->dma_addrs);
omap_obj->dma_addrs = NULL;
drm_gem_put_pages(obj, omap_obj->pages, true, false);
omap_obj->pages = NULL;
}
/* get buffer flags */
u32 omap_gem_flags(struct drm_gem_object *obj)
{
return to_omap_bo(obj)->flags;
}
/** get mmap size */
size_t omap_gem_mmap_size(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
size_t size = obj->size;
if (omap_obj->flags & OMAP_BO_TILED_MASK) {
/* for tiled buffers, the virtual size has stride rounded up
* to 4kb.. (to hide the fact that row n+1 might start 16kb or
* 32kb later!). But we don't back the entire buffer with
* pages, only the valid picture part.. so need to adjust for
* this in the size used to mmap and generate mmap offset
*/
size = tiler_vsize(gem2fmt(omap_obj->flags),
omap_obj->width, omap_obj->height);
}
return size;
}
/* -----------------------------------------------------------------------------
* Fault Handling
*/
/* Normal handling for the case of faulting in non-tiled buffers */
static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
unsigned long pfn;
pgoff_t pgoff;
/* We don't use vmf->pgoff since that has the fake offset: */
pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
if (omap_obj->pages) {
omap_gem_cpu_sync_page(obj, pgoff);
pfn = page_to_pfn(omap_obj->pages[pgoff]);
} else {
BUG_ON(!omap_gem_is_contiguous(omap_obj));
pfn = (omap_obj->dma_addr >> PAGE_SHIFT) + pgoff;
}
VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address,
pfn, pfn << PAGE_SHIFT);
return vmf_insert_mixed(vma, vmf->address,
__pfn_to_pfn_t(pfn, PFN_DEV));
}
/* Special handling for the case of faulting in 2d tiled buffers */
static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
struct omap_drm_private *priv = obj->dev->dev_private;
struct omap_drm_usergart_entry *entry;
enum tiler_fmt fmt = gem2fmt(omap_obj->flags);
struct page *pages[64]; /* XXX is this too much to have on stack? */
unsigned long pfn;
pgoff_t pgoff, base_pgoff;
unsigned long vaddr;
int i, err, slots;
vm_fault_t ret = VM_FAULT_NOPAGE;
/*
* Note the height of the slot is also equal to the number of pages
* that need to be mapped in to fill a 4kb-wide CPU page. If the slot
* height is 64, then 64 pages fill a 4kb wide by 64 row region.
*/
const int n = priv->usergart[fmt].height;
const int n_shift = priv->usergart[fmt].height_shift;
/*
* If buffer width in bytes > PAGE_SIZE then the virtual stride is
* rounded up to the next multiple of PAGE_SIZE.. this needs to be taken
* into account in some of the math, so figure out virtual stride
* in pages
*/
const int m = DIV_ROUND_UP(omap_obj->width << fmt, PAGE_SIZE);
/* We don't use vmf->pgoff since that has the fake offset: */
pgoff = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
/*
* Actual address we start mapping at is rounded down to previous slot
* boundary in the y direction:
*/
base_pgoff = round_down(pgoff, m << n_shift);
/* figure out buffer width in slots */
slots = omap_obj->width >> priv->usergart[fmt].slot_shift;
vaddr = vmf->address - ((pgoff - base_pgoff) << PAGE_SHIFT);
entry = &priv->usergart[fmt].entry[priv->usergart[fmt].last];
/* evict previous buffer using this usergart entry, if any: */
if (entry->obj)
omap_gem_evict_entry(entry->obj, fmt, entry);
entry->obj = obj;
entry->obj_pgoff = base_pgoff;
/* now convert base_pgoff to phys offset from virt offset: */
base_pgoff = (base_pgoff >> n_shift) * slots;
/* for wider-than 4k.. figure out which part of the slot-row we want: */
if (m > 1) {
int off = pgoff % m;
entry->obj_pgoff += off;
base_pgoff /= m;
slots = min(slots - (off << n_shift), n);
base_pgoff += off << n_shift;
vaddr += off << PAGE_SHIFT;
}
/*
* Map in pages. Beyond the valid pixel part of the buffer, we set
* pages[i] to NULL to get a dummy page mapped in.. if someone
* reads/writes it they will get random/undefined content, but at
* least it won't be corrupting whatever other random page used to
* be mapped in, or other undefined behavior.
*/
memcpy(pages, &omap_obj->pages[base_pgoff],
sizeof(struct page *) * slots);
memset(pages + slots, 0,
sizeof(struct page *) * (n - slots));
err = tiler_pin(entry->block, pages, ARRAY_SIZE(pages), 0, true);
if (err) {
ret = vmf_error(err);
dev_err(obj->dev->dev, "failed to pin: %d\n", err);
return ret;
}
pfn = entry->dma_addr >> PAGE_SHIFT;
VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address,
pfn, pfn << PAGE_SHIFT);
for (i = n; i > 0; i--) {
ret = vmf_insert_mixed(vma,
vaddr, __pfn_to_pfn_t(pfn, PFN_DEV));
if (ret & VM_FAULT_ERROR)
break;
pfn += priv->usergart[fmt].stride_pfn;
vaddr += PAGE_SIZE * m;
}
/* simple round-robin: */
priv->usergart[fmt].last = (priv->usergart[fmt].last + 1)
% NUM_USERGART_ENTRIES;
return ret;
}
/**
* omap_gem_fault - pagefault handler for GEM objects
* @vmf: fault detail
*
* Invoked when a fault occurs on an mmap of a GEM managed area. GEM
* does most of the work for us including the actual map/unmap calls
* but we need to do the actual page work.
*
* The VMA was set up by GEM. In doing so it also ensured that the
* vma->vm_private_data points to the GEM object that is backing this
* mapping.
*/
vm_fault_t omap_gem_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
struct drm_gem_object *obj = vma->vm_private_data;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int err;
vm_fault_t ret;
/* Make sure we don't parallel update on a fault, nor move or remove
* something from beneath our feet
*/
mutex_lock(&omap_obj->lock);
/* if a shmem backed object, make sure we have pages attached now */
err = omap_gem_attach_pages(obj);
if (err) {
ret = vmf_error(err);
goto fail;
}
/* where should we do corresponding put_pages().. we are mapping
* the original page, rather than thru a GART, so we can't rely
* on eviction to trigger this. But munmap() or all mappings should
* probably trigger put_pages()?
*/
if (omap_obj->flags & OMAP_BO_TILED_MASK)
ret = omap_gem_fault_2d(obj, vma, vmf);
else
ret = omap_gem_fault_1d(obj, vma, vmf);
fail:
mutex_unlock(&omap_obj->lock);
return ret;
}
/** We override mainly to fix up some of the vm mapping flags.. */
int omap_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
int ret;
ret = drm_gem_mmap(filp, vma);
if (ret) {
DBG("mmap failed: %d", ret);
return ret;
}
return omap_gem_mmap_obj(vma->vm_private_data, vma);
}
int omap_gem_mmap_obj(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
vma->vm_flags &= ~VM_PFNMAP;
vma->vm_flags |= VM_MIXEDMAP;
if (omap_obj->flags & OMAP_BO_WC) {
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
} else if (omap_obj->flags & OMAP_BO_UNCACHED) {
vma->vm_page_prot = pgprot_noncached(vm_get_page_prot(vma->vm_flags));
} else {
/*
* We do have some private objects, at least for scanout buffers
* on hardware without DMM/TILER. But these are allocated write-
* combine
*/
if (WARN_ON(!obj->filp))
return -EINVAL;
/*
* Shunt off cached objs to shmem file so they have their own
* address_space (so unmap_mapping_range does what we want,
* in particular in the case of mmap'd dmabufs)
*/
fput(vma->vm_file);
vma->vm_pgoff = 0;
vma->vm_file = get_file(obj->filp);
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
}
return 0;
}
/* -----------------------------------------------------------------------------
* Dumb Buffers
*/
/**
* omap_gem_dumb_create - create a dumb buffer
* @drm_file: our client file
* @dev: our device
* @args: the requested arguments copied from userspace
*
* Allocate a buffer suitable for use for a frame buffer of the
* form described by user space. Give userspace a handle by which
* to reference it.
*/
int omap_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
struct drm_mode_create_dumb *args)
{
union omap_gem_size gsize;
args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
args->size = PAGE_ALIGN(args->pitch * args->height);
gsize = (union omap_gem_size){
.bytes = args->size,
};
return omap_gem_new_handle(dev, file, gsize,
OMAP_BO_SCANOUT | OMAP_BO_WC, &args->handle);
}
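/*
 * Worked example (illustrative, not part of the original source): a 1920x1080
 * dumb buffer at 32 bpp gives
 *   pitch = DIV_ROUND_UP(1920 * 32, 8) = 7680 bytes
 *   size  = PAGE_ALIGN(7680 * 1080)    = 8294400 bytes (already 4 KiB aligned)
 */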
/**
* omap_gem_dumb_map - buffer mapping for dumb interface
* @file: our drm client file
* @dev: drm device
* @handle: GEM handle to the object (from dumb_create)
*
* Do the necessary setup to allow the mapping of the frame buffer
* into user memory. We don't have to do much here at the moment.
*/
int omap_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
u32 handle, u64 *offset)
{
struct drm_gem_object *obj;
int ret = 0;
/* GEM does all our handle to object mapping */
obj = drm_gem_object_lookup(file, handle);
if (obj == NULL) {
ret = -ENOENT;
goto fail;
}
*offset = omap_gem_mmap_offset(obj);
drm_gem_object_put(obj);
fail:
return ret;
}
#ifdef CONFIG_DRM_FBDEV_EMULATION
/* Set scrolling position. This allows us to implement fast scrolling
* for console.
*
* Call only from non-atomic contexts.
*/
int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
u32 npages = obj->size >> PAGE_SHIFT;
int ret = 0;
if (roll > npages) {
dev_err(obj->dev->dev, "invalid roll: %d\n", roll);
return -EINVAL;
}
omap_obj->roll = roll;
mutex_lock(&omap_obj->lock);
/* if we aren't mapped yet, we don't need to do anything */
if (omap_obj->block) {
ret = omap_gem_attach_pages(obj);
if (ret)
goto fail;
ret = tiler_pin(omap_obj->block, omap_obj->pages, npages,
roll, true);
if (ret)
dev_err(obj->dev->dev, "could not repin: %d\n", ret);
}
fail:
mutex_unlock(&omap_obj->lock);
return ret;
}
#endif
/* -----------------------------------------------------------------------------
* Memory Management & DMA Sync
*/
/*
* shmem buffers that are mapped cached are not coherent.
*
* We keep track of dirty pages using page faulting to perform cache management.
* When a page is mapped to the CPU in read/write mode the device can't access
* it and omap_obj->dma_addrs[i] is NULL. When a page is mapped to the device
* the omap_obj->dma_addrs[i] is set to the DMA address, and the page is
* unmapped from the CPU.
*/
static inline bool omap_gem_is_cached_coherent(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
return !((omap_obj->flags & OMAP_BO_MEM_SHMEM) &&
((omap_obj->flags & OMAP_BO_CACHE_MASK) == OMAP_BO_CACHED));
}
/* Sync the buffer for CPU access.. note pages should already be
* attached, ie. omap_gem_get_pages()
*/
void omap_gem_cpu_sync_page(struct drm_gem_object *obj, int pgoff)
{
struct drm_device *dev = obj->dev;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
if (omap_gem_is_cached_coherent(obj))
return;
if (omap_obj->dma_addrs[pgoff]) {
dma_unmap_page(dev->dev, omap_obj->dma_addrs[pgoff],
PAGE_SIZE, DMA_TO_DEVICE);
omap_obj->dma_addrs[pgoff] = 0;
}
}
/* sync the buffer for DMA access */
void omap_gem_dma_sync_buffer(struct drm_gem_object *obj,
enum dma_data_direction dir)
{
struct drm_device *dev = obj->dev;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int i, npages = obj->size >> PAGE_SHIFT;
struct page **pages = omap_obj->pages;
bool dirty = false;
if (omap_gem_is_cached_coherent(obj))
return;
for (i = 0; i < npages; i++) {
if (!omap_obj->dma_addrs[i]) {
dma_addr_t addr;
addr = dma_map_page(dev->dev, pages[i], 0,
PAGE_SIZE, dir);
if (dma_mapping_error(dev->dev, addr)) {
dev_warn(dev->dev, "%s: failed to map page\n",
__func__);
break;
}
dirty = true;
omap_obj->dma_addrs[i] = addr;
}
}
if (dirty) {
unmap_mapping_range(obj->filp->f_mapping, 0,
omap_gem_mmap_size(obj), 1);
}
}
/**
* omap_gem_pin() - Pin a GEM object in memory
* @obj: the GEM object
* @dma_addr: the DMA address
*
* Pin the given GEM object in memory and fill the dma_addr pointer with the
* object's DMA address. If the buffer is not physically contiguous it will be
* remapped through the TILER to provide a contiguous view.
*
* Pins are reference-counted, calling this function multiple times is allowed
* as long the corresponding omap_gem_unpin() calls are balanced.
*
* Return 0 on success or a negative error code otherwise.
*/
int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
{
struct omap_drm_private *priv = obj->dev->dev_private;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int ret = 0;
mutex_lock(&omap_obj->lock);
if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) {
if (refcount_read(&omap_obj->dma_addr_cnt) == 0) {
u32 npages = obj->size >> PAGE_SHIFT;
enum tiler_fmt fmt = gem2fmt(omap_obj->flags);
struct tiler_block *block;
BUG_ON(omap_obj->block);
refcount_set(&omap_obj->dma_addr_cnt, 1);
ret = omap_gem_attach_pages(obj);
if (ret)
goto fail;
if (omap_obj->flags & OMAP_BO_TILED_MASK) {
block = tiler_reserve_2d(fmt,
omap_obj->width,
omap_obj->height, 0);
} else {
block = tiler_reserve_1d(obj->size);
}
if (IS_ERR(block)) {
ret = PTR_ERR(block);
dev_err(obj->dev->dev,
"could not remap: %d (%d)\n", ret, fmt);
goto fail;
}
/* TODO: enable async refill.. */
ret = tiler_pin(block, omap_obj->pages, npages,
omap_obj->roll, true);
if (ret) {
tiler_release(block);
dev_err(obj->dev->dev,
"could not pin: %d\n", ret);
goto fail;
}
omap_obj->dma_addr = tiler_ssptr(block);
omap_obj->block = block;
DBG("got dma address: %pad", &omap_obj->dma_addr);
} else {
refcount_inc(&omap_obj->dma_addr_cnt);
}
if (dma_addr)
*dma_addr = omap_obj->dma_addr;
} else if (omap_gem_is_contiguous(omap_obj)) {
if (dma_addr)
*dma_addr = omap_obj->dma_addr;
} else {
ret = -EINVAL;
goto fail;
}
fail:
mutex_unlock(&omap_obj->lock);
return ret;
}
/**
* omap_gem_unpin_locked() - Unpin a GEM object from memory
* @obj: the GEM object
*
* omap_gem_unpin() without locking.
*/
static void omap_gem_unpin_locked(struct drm_gem_object *obj)
{
struct omap_drm_private *priv = obj->dev->dev_private;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int ret;
if (omap_gem_is_contiguous(omap_obj) || !priv->has_dmm)
return;
if (refcount_dec_and_test(&omap_obj->dma_addr_cnt)) {
ret = tiler_unpin(omap_obj->block);
if (ret) {
dev_err(obj->dev->dev,
"could not unpin pages: %d\n", ret);
}
ret = tiler_release(omap_obj->block);
if (ret) {
dev_err(obj->dev->dev,
"could not release unmap: %d\n", ret);
}
omap_obj->dma_addr = 0;
omap_obj->block = NULL;
}
}
/**
* omap_gem_unpin() - Unpin a GEM object from memory
* @obj: the GEM object
*
* Unpin the given GEM object previously pinned with omap_gem_pin(). Pins are
* reference-counted, the actual unpin will only be performed when the number
* of calls to this function matches the number of calls to omap_gem_pin().
*/
void omap_gem_unpin(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
mutex_lock(&omap_obj->lock);
omap_gem_unpin_locked(obj);
mutex_unlock(&omap_obj->lock);
}
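/*
 * Illustrative sketch only (not part of the driver): how a caller is expected
 * to balance omap_gem_pin() and omap_gem_unpin(), as the comments above
 * describe. The object pointer and the use made of the DMA address are
 * hypothetical.
 */
static int __maybe_unused omap_gem_pin_usage_example(struct drm_gem_object *obj)
{
	dma_addr_t dma_addr;
	int ret;

	ret = omap_gem_pin(obj, &dma_addr);	/* takes one pin reference */
	if (ret)
		return ret;

	pr_debug("pinned at %pad\n", &dma_addr);
	/* ... program dma_addr into the display/DMA engine here ... */

	omap_gem_unpin(obj);			/* drops the pin reference */
	return 0;
}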
/* Get rotated scanout address (only valid if already pinned), at the
* specified orientation and x,y offset from top-left corner of buffer
* (only valid for tiled 2d buffers)
*/
int omap_gem_rotated_dma_addr(struct drm_gem_object *obj, u32 orient,
int x, int y, dma_addr_t *dma_addr)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int ret = -EINVAL;
mutex_lock(&omap_obj->lock);
if ((refcount_read(&omap_obj->dma_addr_cnt) > 0) && omap_obj->block &&
(omap_obj->flags & OMAP_BO_TILED_MASK)) {
*dma_addr = tiler_tsptr(omap_obj->block, orient, x, y);
ret = 0;
}
mutex_unlock(&omap_obj->lock);
return ret;
}
/* Get tiler stride for the buffer (only valid for 2d tiled buffers) */
int omap_gem_tiled_stride(struct drm_gem_object *obj, u32 orient)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int ret = -EINVAL;
if (omap_obj->flags & OMAP_BO_TILED_MASK)
ret = tiler_stride(gem2fmt(omap_obj->flags), orient);
return ret;
}
/* if !remap, and we don't have pages backing, then fail, rather than
* increasing the pin count (which we don't really do yet anyways,
* because we don't support swapping pages back out). And 'remap'
* might not be quite the right name, but I wanted to keep it working
* similarly to omap_gem_pin(). Note though that mutex is not
acquired if !remap (because this can be called in atomic ctxt),
* but probably omap_gem_unpin() should be changed to work in the
* same way. If !remap, a matching omap_gem_put_pages() call is not
* required (and should not be made).
*/
int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages,
bool remap)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
int ret = 0;
mutex_lock(&omap_obj->lock);
if (remap) {
ret = omap_gem_attach_pages(obj);
if (ret)
goto unlock;
}
if (!omap_obj->pages) {
ret = -ENOMEM;
goto unlock;
}
*pages = omap_obj->pages;
unlock:
mutex_unlock(&omap_obj->lock);
return ret;
}
/* release pages when DMA no longer being performed */
int omap_gem_put_pages(struct drm_gem_object *obj)
{
/* do something here if we dynamically attach/detach pages.. at
* least they would no longer need to be pinned if everyone has
* released the pages..
*/
return 0;
}
#ifdef CONFIG_DRM_FBDEV_EMULATION
/*
* Get kernel virtual address for CPU access.. this more or less only
* exists for omap_fbdev.
*/
void *omap_gem_vaddr(struct drm_gem_object *obj)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
void *vaddr;
int ret;
mutex_lock(&omap_obj->lock);
if (!omap_obj->vaddr) {
ret = omap_gem_attach_pages(obj);
if (ret) {
vaddr = ERR_PTR(ret);
goto unlock;
}
omap_obj->vaddr = vmap(omap_obj->pages, obj->size >> PAGE_SHIFT,
VM_MAP, pgprot_writecombine(PAGE_KERNEL));
}
vaddr = omap_obj->vaddr;
unlock:
mutex_unlock(&omap_obj->lock);
return vaddr;
}
#endif
/* -----------------------------------------------------------------------------
* Power Management
*/
#ifdef CONFIG_PM
/* re-pin objects in DMM in resume path: */
int omap_gem_resume(struct drm_device *dev)
{
struct omap_drm_private *priv = dev->dev_private;
struct omap_gem_object *omap_obj;
int ret = 0;
mutex_lock(&priv->list_lock);
list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
if (omap_obj->block) {
struct drm_gem_object *obj = &omap_obj->base;
u32 npages = obj->size >> PAGE_SHIFT;
WARN_ON(!omap_obj->pages); /* this can't happen */
ret = tiler_pin(omap_obj->block,
omap_obj->pages, npages,
omap_obj->roll, true);
if (ret) {
dev_err(dev->dev, "could not repin: %d\n", ret);
goto done;
}
}
}
done:
mutex_unlock(&priv->list_lock);
return ret;
}
#endif
/* -----------------------------------------------------------------------------
* DebugFS
*/
#ifdef CONFIG_DEBUG_FS
void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
{
struct omap_gem_object *omap_obj = to_omap_bo(obj);
u64 off;
off = drm_vma_node_start(&obj->vma_node);
mutex_lock(&omap_obj->lock);
seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d",
omap_obj->flags, obj->name, kref_read(&obj->refcount),
off, &omap_obj->dma_addr,
refcount_read(&omap_obj->dma_addr_cnt),
omap_obj->vaddr, omap_obj->roll);
if (omap_obj->flags & OMAP_BO_TILED_MASK) {
seq_printf(m, " %dx%d", omap_obj->width, omap_obj->height);
if (omap_obj->block) {
struct tcm_area *area = &omap_obj->block->area;
seq_printf(m, " (%dx%d, %dx%d)",
area->p0.x, area->p0.y,
area->p1.x, area->p1.y);
}
} else {
seq_printf(m, " %zu", obj->size);
}
mutex_unlock(&omap_obj->lock);
seq_printf(m, "\n");
}
void omap_gem_describe_objects(struct list_head *list, struct seq_file *m)
{
struct omap_gem_object *omap_obj;
int count = 0;
size_t size = 0;
list_for_each_entry(omap_obj, list, mm_list) {
struct drm_gem_object *obj = &omap_obj->base;
seq_printf(m, " ");
omap_gem_describe(obj, m);
count++;
size += obj->size;
}
seq_printf(m, "Total %d objects, %zu bytes\n", count, size);
}
#endif
/* -----------------------------------------------------------------------------
* Constructor & Destructor
*/
void omap_gem_free_object(struct drm_gem_object *obj)
{
struct drm_device *dev = obj->dev;
struct omap_drm_private *priv = dev->dev_private;
struct omap_gem_object *omap_obj = to_omap_bo(obj);
omap_gem_evict(obj);
mutex_lock(&priv->list_lock);
list_del(&omap_obj->mm_list);
mutex_unlock(&priv->list_lock);
/*
* We own the sole reference to the object at this point, but to keep
* lockdep happy, we must still take the omap_obj_lock to call
* omap_gem_detach_pages(). This should hardly make any difference as
* there can't be any lock contention.
*/
mutex_lock(&omap_obj->lock);
/* The object should not be pinned. */
WARN_ON(refcount_read(&omap_obj->dma_addr_cnt) > 0);
if (omap_obj->pages) {
if (omap_obj->flags & OMAP_BO_MEM_DMABUF)
kfree(omap_obj->pages);
else
omap_gem_detach_pages(obj);
}
if (omap_obj->flags & OMAP_BO_MEM_DMA_API) {
dma_free_wc(dev->dev, obj->size, omap_obj->vaddr,
omap_obj->dma_addr);
} else if (omap_obj->vaddr) {
vunmap(omap_obj->vaddr);
} else if (obj->import_attach) {
drm_prime_gem_destroy(obj, omap_obj->sgt);
}
mutex_unlock(&omap_obj->lock);
drm_gem_object_release(obj);
mutex_destroy(&omap_obj->lock);
kfree(omap_obj);
}
static bool omap_gem_validate_flags(struct drm_device *dev, u32 flags)
{
struct omap_drm_private *priv = dev->dev_private;
switch (flags & OMAP_BO_CACHE_MASK) {
case OMAP_BO_CACHED:
case OMAP_BO_WC:
case OMAP_BO_CACHE_MASK:
break;
default:
return false;
}
if (flags & OMAP_BO_TILED_MASK) {
if (!priv->usergart)
return false;
switch (flags & OMAP_BO_TILED_MASK) {
case OMAP_BO_TILED_8:
case OMAP_BO_TILED_16:
case OMAP_BO_TILED_32:
break;
default:
return false;
}
}
return true;
}
/* GEM buffer object constructor */
struct drm_gem_object *omap_gem_new(struct drm_device *dev,
union omap_gem_size gsize, u32 flags)
{
struct omap_drm_private *priv = dev->dev_private;
struct omap_gem_object *omap_obj;
struct drm_gem_object *obj;
struct address_space *mapping;
size_t size;
int ret;
if (!omap_gem_validate_flags(dev, flags))
return NULL;
/* Validate the flags and compute the memory and cache flags. */
if (flags & OMAP_BO_TILED_MASK) {
/*
* Tiled buffers are always shmem paged backed. When they are
* scanned out, they are remapped into DMM/TILER.
*/
flags |= OMAP_BO_MEM_SHMEM;
/*
* Currently don't allow cached buffers. There is some caching
* stuff that needs to be handled better.
*/
flags &= ~(OMAP_BO_CACHED|OMAP_BO_WC|OMAP_BO_UNCACHED);
flags |= tiler_get_cpu_cache_flags();
} else if ((flags & OMAP_BO_SCANOUT) && !priv->has_dmm) {
/*
* If we don't have DMM, we must allocate scanout buffers
* from contiguous DMA memory.
*/
flags |= OMAP_BO_MEM_DMA_API;
} else if (!(flags & OMAP_BO_MEM_DMABUF)) {
/*
* All other buffers not backed by dma_buf are shmem-backed.
*/
flags |= OMAP_BO_MEM_SHMEM;
}
/* Allocate and initialize the OMAP GEM object. */
omap_obj = kzalloc(sizeof(*omap_obj), GFP_KERNEL);
if (!omap_obj)
return NULL;
obj = &omap_obj->base;
omap_obj->flags = flags;
mutex_init(&omap_obj->lock);
if (flags & OMAP_BO_TILED_MASK) {
/*
* For tiled buffers align dimensions to slot boundaries and
* calculate size based on aligned dimensions.
*/
tiler_align(gem2fmt(flags), &gsize.tiled.width,
&gsize.tiled.height);
size = tiler_size(gem2fmt(flags), gsize.tiled.width,
gsize.tiled.height);
omap_obj->width = gsize.tiled.width;
omap_obj->height = gsize.tiled.height;
} else {
size = PAGE_ALIGN(gsize.bytes);
}
/* Initialize the GEM object. */
if (!(flags & OMAP_BO_MEM_SHMEM)) {
drm_gem_private_object_init(dev, obj, size);
} else {
ret = drm_gem_object_init(dev, obj, size);
if (ret)
goto err_free;
mapping = obj->filp->f_mapping;
mapping_set_gfp_mask(mapping, GFP_USER | __GFP_DMA32);
}
/* Allocate memory if needed. */
if (flags & OMAP_BO_MEM_DMA_API) {
omap_obj->vaddr = dma_alloc_wc(dev->dev, size,
&omap_obj->dma_addr,
GFP_KERNEL);
if (!omap_obj->vaddr)
goto err_release;
}
mutex_lock(&priv->list_lock);
list_add(&omap_obj->mm_list, &priv->obj_list);
mutex_unlock(&priv->list_lock);
return obj;
err_release:
drm_gem_object_release(obj);
err_free:
kfree(omap_obj);
return NULL;
}
struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
struct sg_table *sgt)
{
struct omap_drm_private *priv = dev->dev_private;
struct omap_gem_object *omap_obj;
struct drm_gem_object *obj;
union omap_gem_size gsize;
/* Without a DMM only physically contiguous buffers can be supported. */
if (sgt->orig_nents != 1 && !priv->has_dmm)
return ERR_PTR(-EINVAL);
gsize.bytes = PAGE_ALIGN(size);
obj = omap_gem_new(dev, gsize, OMAP_BO_MEM_DMABUF | OMAP_BO_WC);
if (!obj)
return ERR_PTR(-ENOMEM);
omap_obj = to_omap_bo(obj);
mutex_lock(&omap_obj->lock);
omap_obj->sgt = sgt;
if (sgt->orig_nents == 1) {
omap_obj->dma_addr = sg_dma_address(sgt->sgl);
} else {
/* Create pages list from sgt */
struct sg_page_iter iter;
struct page **pages;
unsigned int npages;
unsigned int i = 0;
npages = DIV_ROUND_UP(size, PAGE_SIZE);
pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
if (!pages) {
omap_gem_free_object(obj);
obj = ERR_PTR(-ENOMEM);
goto done;
}
omap_obj->pages = pages;
for_each_sg_page(sgt->sgl, &iter, sgt->orig_nents, 0) {
pages[i++] = sg_page_iter_page(&iter);
if (i > npages)
break;
}
if (WARN_ON(i != npages)) {
omap_gem_free_object(obj);
obj = ERR_PTR(-ENOMEM);
goto done;
}
}
done:
mutex_unlock(&omap_obj->lock);
return obj;
}
/* convenience method to construct a GEM buffer object, and userspace handle */
int omap_gem_new_handle(struct drm_device *dev, struct drm_file *file,
union omap_gem_size gsize, u32 flags, u32 *handle)
{
struct drm_gem_object *obj;
int ret;
obj = omap_gem_new(dev, gsize, flags);
if (!obj)
return -ENOMEM;
ret = drm_gem_handle_create(file, obj, handle);
if (ret) {
omap_gem_free_object(obj);
return ret;
}
/* drop reference from allocate - handle holds it now */
drm_gem_object_put(obj);
return 0;
}
/* -----------------------------------------------------------------------------
* Init & Cleanup
*/
/* If DMM is used, we need to set some stuff up.. */
void omap_gem_init(struct drm_device *dev)
{
struct omap_drm_private *priv = dev->dev_private;
struct omap_drm_usergart *usergart;
const enum tiler_fmt fmts[] = {
TILFMT_8BIT, TILFMT_16BIT, TILFMT_32BIT
};
int i, j;
if (!dmm_is_available()) {
/* DMM only supported on OMAP4 and later, so this isn't fatal */
dev_warn(dev->dev, "DMM not available, disable DMM support\n");
return;
}
usergart = kcalloc(3, sizeof(*usergart), GFP_KERNEL);
if (!usergart)
return;
/* reserve 4k aligned/wide regions for userspace mappings: */
for (i = 0; i < ARRAY_SIZE(fmts); i++) {
u16 h = 1, w = PAGE_SIZE >> i;
tiler_align(fmts[i], &w, &h);
/* note: since each region is 1 4kb page wide, and minimum
* number of rows, the height ends up being the same as the
* # of pages in the region
*/
usergart[i].height = h;
usergart[i].height_shift = ilog2(h);
usergart[i].stride_pfn = tiler_stride(fmts[i], 0) >> PAGE_SHIFT;
usergart[i].slot_shift = ilog2((PAGE_SIZE / h) >> i);
for (j = 0; j < NUM_USERGART_ENTRIES; j++) {
struct omap_drm_usergart_entry *entry;
struct tiler_block *block;
entry = &usergart[i].entry[j];
block = tiler_reserve_2d(fmts[i], w, h, PAGE_SIZE);
if (IS_ERR(block)) {
dev_err(dev->dev,
"reserve failed: %d, %d, %ld\n",
i, j, PTR_ERR(block));
return;
}
entry->dma_addr = tiler_ssptr(block);
entry->block = block;
DBG("%d:%d: %dx%d: dma_addr=%pad stride=%d", i, j, w, h,
&entry->dma_addr,
usergart[i].stride_pfn << PAGE_SHIFT);
}
}
priv->usergart = usergart;
priv->has_dmm = true;
}
void omap_gem_deinit(struct drm_device *dev)
{
struct omap_drm_private *priv = dev->dev_private;
/* I believe we can rely on there being no more outstanding GEM
* objects which could depend on usergart/dmm at this point.
*/
kfree(priv->usergart);
}
| {
"pile_set_name": "Github"
} |
interactions:
- request:
body: !!python/unicode '{}'
headers:
Accept: [application/json]
Accept-Encoding: ['gzip, deflate']
Connection: [keep-alive]
Content-Length: ['2']
Content-Type: [application/json]
User-Agent: [python-requests/2.18.4]
method: GET
uri: https://api.linode.com/api/?api_action=domain.list&resultFormat=JSON
response:
body: {string: !!python/unicode '{"ACTION":"domain.list","DATA":[{"DOMAIN":"lexicon-example.com","AXFR_IPS":"none","TTL_SEC":0,"DOMAINID":1047330,"SOA_EMAIL":"[email protected]","EXPIRE_SEC":0,"RETRY_SEC":0,"STATUS":1,"DESCRIPTION":"","LPM_DISPLAYGROUP":"","REFRESH_SEC":0,"MASTER_IPS":"","TYPE":"master"}],"ERRORARRAY":[]}'}
headers:
access-control-allow-origin: ['*']
connection: [keep-alive]
content-type: [application/json;charset=UTF-8]
date: ['Tue, 20 Mar 2018 11:52:41 GMT']
server: [nginx]
strict-transport-security: [max-age=31536000]
transfer-encoding: [chunked]
x-powered-by: [Tiger Blood]
status: {code: 200, message: OK}
version: 1
| {
"pile_set_name": "Github"
} |
// Copyright (c) Six Labors.
// Licensed under the Apache License, Version 2.0.
using System;
using SixLabors.ImageSharp.Processing;
namespace SixLabors.ImageSharp
{
/// <summary>
/// Adds extensions that allow the processing of images to the <see cref="Image{TPixel}"/> type.
/// </summary>
public static class GraphicOptionsDefaultsExtensions
{
/// <summary>
/// Sets the default options against the image processing context.
/// </summary>
/// <param name="context">The image processing context to store default against.</param>
/// <param name="optionsBuilder">The action to update instance of the default options used.</param>
/// <returns>The passed in <paramref name="context"/> to allow chaining.</returns>
public static IImageProcessingContext SetGraphicsOptions(this IImageProcessingContext context, Action<GraphicsOptions> optionsBuilder)
{
var cloned = context.GetGraphicsOptions().DeepClone();
optionsBuilder(cloned);
context.Properties[typeof(GraphicsOptions)] = cloned;
return context;
}
/// <summary>
/// Sets the default options against the configuration.
/// </summary>
/// <param name="configuration">The configuration to store default against.</param>
/// <param name="optionsBuilder">The default options to use.</param>
public static void SetGraphicsOptions(this Configuration configuration, Action<GraphicsOptions> optionsBuilder)
{
var cloned = configuration.GetGraphicsOptions().DeepClone();
optionsBuilder(cloned);
configuration.Properties[typeof(GraphicsOptions)] = cloned;
}
/// <summary>
/// Sets the default options against the image processing context.
/// </summary>
/// <param name="context">The image processing context to store default against.</param>
/// <param name="options">The default options to use.</param>
/// <returns>The passed in <paramref name="context"/> to allow chaining.</returns>
public static IImageProcessingContext SetGraphicsOptions(this IImageProcessingContext context, GraphicsOptions options)
{
context.Properties[typeof(GraphicsOptions)] = options;
return context;
}
/// <summary>
/// Sets the default options against the configuration.
/// </summary>
/// <param name="configuration">The configuration to store default against.</param>
/// <param name="options">The default options to use.</param>
public static void SetGraphicsOptions(this Configuration configuration, GraphicsOptions options)
{
configuration.Properties[typeof(GraphicsOptions)] = options;
}
/// <summary>
/// Gets the default options against the image processing context.
/// </summary>
/// <param name="context">The image processing context to retrieve defaults from.</param>
/// <returns>The globally configured default options.</returns>
public static GraphicsOptions GetGraphicsOptions(this IImageProcessingContext context)
{
if (context.Properties.TryGetValue(typeof(GraphicsOptions), out var options) && options is GraphicsOptions go)
{
return go;
}
// do not cache the fallback to config into the processing context
// in case someone wants to change the value on the config and expects it to flow through
return context.Configuration.GetGraphicsOptions();
}
/// <summary>
/// Gets the default options against the image processing context.
/// </summary>
/// <param name="configuration">The configuration to retrieve defaults from.</param>
/// <returns>The globally configured default options.</returns>
public static GraphicsOptions GetGraphicsOptions(this Configuration configuration)
{
if (configuration.Properties.TryGetValue(typeof(GraphicsOptions), out var options) && options is GraphicsOptions go)
{
return go;
}
var configOptions = new GraphicsOptions();
// capture the fallback so the same instance will always be returned in case it is mutated
configuration.Properties[typeof(GraphicsOptions)] = configOptions;
return configOptions;
}
}
}
| {
"pile_set_name": "Github"
} |
These tests are brought to you by the letter `q'.
When you start the test, you should see:
You have the following choices:
1 - Reset the struct termios
2 - Look at the current termios setting
3 - Change the line characteristics
4 - Test canonical input
5 - Test raw input
9 - Exit
Enter your choice (1 to 5 or 9, followed by a carriage return):
The individual tests are briefly described below:
1. Reset the struct termios.
Included just in case you get into trouble. More than likely, if you are in
trouble, neither input nor output are likely to work and this won't help. But
hey, it should give you some warm fuzzy feeling that it's there...
2. Look at the current termios setting
Dumps the current state of the termios settings in hex and with symbolic flag
names.
3. Change the line characteristics
Allows you to change the line speed, parity, number of data bits and number of
stop bits. You must supply a delay before the change takes effect. This gives
you time to switch your terminal settings to continue with the test.
WARNING: Minicom under Linux gets extremely unhappy (as do the /dev/ttyS?
underlying devices) if you change the line characteristics and do not make the
corresponding change in the terminal emulator.
4. Test canonical input
Simple test of canonical or cooked input mode. Try typing some tabs and/or control characters and make sure that you can backspace over them properly.
5. Test raw input
The line is placed into raw mode and four separate tests are done (a minimal sketch of the raw-mode setup follows the test descriptions):
VMIN=0, VTIME=0
Each letter you type should produce a line of output.
The `count' should be quite large, since (as you correctly
pointed out) the read is non-blocking. The time should be
the interval between typing characters.
Type a `q' to finish the test.
VMIN=0, VTIME=20
Again, each letter should produce a line of output. The
`count' should be much smaller -- the read is non-blocking
but has a timeout of 2 seconds, so the count should be about
half the `interval'.
Type a `q' to finish the test.
VMIN=5, VTIME=0
A line should be produced for every 5 characters typed. The
count should be 1. This is a blocking read.
Type a `q' as the first character of a group of 5 to finish
the test.
VMIN=5, VTIME=20
Type a character. Two seconds later a line should be printed.
Count should be 1. Type a character, and another within 2 seconds.
Two seconds after last character (or right after the 5th character)
a line should be printed.
Type a `q' as the first character of a group to finish the test.
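A minimal sketch of the raw-mode setup behind test 5 (an illustration, not the
actual test program; the helper name and the exact flag choices are assumptions):

    #include <termios.h>
    #include <unistd.h>

    /* Put fd into a raw-ish mode with the given VMIN/VTIME pair.
     * VTIME is in tenths of a second, so VTIME=20 is the 2 second
     * timeout described above.
     */
    static int set_raw(int fd, int vmin, int vtime)
    {
        struct termios t;

        if (tcgetattr(fd, &t) < 0)
            return -1;
        t.c_lflag &= ~(ICANON | ECHO | ISIG);
        t.c_cc[VMIN]  = vmin;
        t.c_cc[VTIME] = vtime;
        return tcsetattr(fd, TCSANOW, &t);
    }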
9. Exit
Gets you out of the test.
Clear???
---
Eric Norum
[email protected]
Saskatchewan Accelerator Laboratory
University of Saskatchewan
Saskatoon, Canada.
Charles-Antoine Gauthier
Software Engineering Group
Institute for Information Technology
National Research Council of Canada
[email protected]
| {
"pile_set_name": "Github"
} |
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">iPhoneSimulator</Platform>
<ProductVersion>8.0.30703</ProductVersion>
<SchemaVersion>2.0</SchemaVersion>
<ProjectGuid>{035196B5-D260-42ED-BAE0-2C07CC1AC537}</ProjectGuid>
<ProjectTypeGuids>{FEACFBD2-3405-455C-9665-78FE426C6842};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
<OutputType>Exe</OutputType>
<RootNamespace>MonkeyTap.iOS</RootNamespace>
<IPhoneResourcePrefix>Resources</IPhoneResourcePrefix>
<AssemblyName>MonkeyTapiOS</AssemblyName>
<NuGetPackageImportStamp />
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|iPhoneSimulator' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\iPhoneSimulator\Debug</OutputPath>
<DefineConstants>DEBUG</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<ConsolePause>false</ConsolePause>
<MtouchArch>x86_64</MtouchArch>
<MtouchLink>None</MtouchLink>
<MtouchDebug>true</MtouchDebug>
<CodesignEntitlements />
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|iPhoneSimulator' ">
<DebugType>none</DebugType>
<Optimize>true</Optimize>
<OutputPath>bin\iPhoneSimulator\Release</OutputPath>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<MtouchLink>None</MtouchLink>
<MtouchArch>x86_64</MtouchArch>
<ConsolePause>false</ConsolePause>
<CodesignEntitlements />
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|iPhone' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\iPhone\Debug</OutputPath>
<DefineConstants>DEBUG</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<ConsolePause>false</ConsolePause>
<MtouchArch>ARM64</MtouchArch>
<CodesignKey>iPhone Developer</CodesignKey>
<MtouchDebug>true</MtouchDebug>
<CodesignEntitlements>Entitlements.plist</CodesignEntitlements>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|iPhone' ">
<DebugType>none</DebugType>
<Optimize>true</Optimize>
<OutputPath>bin\iPhone\Release</OutputPath>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<MtouchArch>ARM64</MtouchArch>
<ConsolePause>false</ConsolePause>
<CodesignKey>iPhone Developer</CodesignKey>
<CodesignEntitlements>Entitlements.plist</CodesignEntitlements>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Ad-Hoc|iPhone' ">
<DebugType>none</DebugType>
<Optimize>True</Optimize>
<OutputPath>bin\iPhone\Ad-Hoc</OutputPath>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<ConsolePause>False</ConsolePause>
<MtouchArch>ARM64</MtouchArch>
<BuildIpa>True</BuildIpa>
<CodesignProvision>Automatic:AdHoc</CodesignProvision>
<CodesignKey>iPhone Distribution</CodesignKey>
<CodesignEntitlements>Entitlements.plist</CodesignEntitlements>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'AppStore|iPhone' ">
<DebugType>none</DebugType>
<Optimize>True</Optimize>
<OutputPath>bin\iPhone\AppStore</OutputPath>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<ConsolePause>False</ConsolePause>
<MtouchArch>ARM64</MtouchArch>
<CodesignProvision>Automatic:AppStore</CodesignProvision>
<CodesignKey>iPhone Distribution</CodesignKey>
<CodesignEntitlements>Entitlements.plist</CodesignEntitlements>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Xamarin.Forms" Version="3.1.0.637273" />
</ItemGroup>
<ItemGroup>
<Compile Include="Main.cs" />
<Compile Include="AppDelegate.cs" />
<None Include="Entitlements.plist" />
<None Include="Info.plist" />
<Compile Include="Properties\AssemblyInfo.cs" />
<ITunesArtwork Include="iTunesArtwork" />
<ITunesArtwork Include="iTunesArtwork@2x" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\MonkeyTap\MonkeyTap.csproj">
<Name>MonkeyTap</Name>
</ProjectReference>
</ItemGroup>
<ItemGroup>
<BundleResource Include="Resources\Default-568h%402x.png" />
<BundleResource Include="Resources\Default-Portrait.png" />
<BundleResource Include="Resources\Default-Portrait%402x.png" />
<BundleResource Include="Resources\Default.png" />
<BundleResource Include="Resources\Default%402x.png" />
<BundleResource Include="Resources\Icon-60%402x.png" />
<BundleResource Include="Resources\Icon-60%403x.png" />
<BundleResource Include="Resources\Icon-76.png" />
<BundleResource Include="Resources\Icon-76%402x.png" />
<BundleResource Include="Resources\Icon-Small-40.png" />
<BundleResource Include="Resources\Icon-Small-40%402x.png" />
<BundleResource Include="Resources\Icon-Small-40%403x.png" />
<BundleResource Include="Resources\Icon-Small.png" />
<BundleResource Include="Resources\Icon-Small%402x.png" />
<BundleResource Include="Resources\Icon-Small%403x.png" />
<BundleResource Include="Resources\LaunchScreen.storyboard" />
</ItemGroup>
<ItemGroup>
<Reference Include="System" />
<Reference Include="System.Xml" />
<Reference Include="System.Core" />
<Reference Include="Xamarin.iOS" />
</ItemGroup>
<Import Project="$(MSBuildExtensionsPath)\Xamarin\iOS\Xamarin.iOS.CSharp.targets" />
<Target Name="EnsureNuGetPackageBuildImports" BeforeTargets="PrepareForBuild">
<PropertyGroup />
</Target>
</Project> | {
"pile_set_name": "Github"
} |
The pattern "(null)" does not match the hostname domain.com
The pattern "example.com domain.com" matches the hostname domain.com
The pattern "example.com other.com" does not match the hostname domain.com
The pattern "example.com,domain.com" matches the hostname domain.com
The pattern "example.com,domain.com" does not match the hostname otherdomain.com
The pattern "example.com, *.domain.com" matches the hostname sub.domain.com
The pattern "example.com, *.domain.com" matches the hostname domain.com
The pattern "example.com, .domain.com" matches the hostname domain.com
The pattern "*" matches the hostname domain.com
| {
"pile_set_name": "Github"
} |
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s_customize_controller/pkg/apis/bolingcavalry"
)
var SchemeGroupVersion = schema.GroupVersion{
Group: bolingcavalry.GroupName,
Version: bolingcavalry.Version,
}
var (
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
AddToScheme = SchemeBuilder.AddToScheme
)
func Resource(resource string) schema.GroupResource {
return SchemeGroupVersion.WithResource(resource).GroupResource()
}
func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()
}
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(
SchemeGroupVersion,
&Student{},
&StudentList{},
)
// register the type in the scheme
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
return nil
}
| {
"pile_set_name": "Github"
} |
/*
Copyright 2014 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package clientcmd
import (
"io"
"sync"
"github.com/golang/glog"
"k8s.io/client-go/1.5/rest"
clientcmdapi "k8s.io/client-go/1.5/tools/clientcmd/api"
)
// DeferredLoadingClientConfig is a ClientConfig interface that is backed by a client config loader.
// It is used in cases where the loading rules may change after you've instantiated them and you want to be sure that
// the most recent rules are used. This is useful in cases where you bind flags to loading rule parameters before
// the parse happens and you want your calling code to be ignorant of how the values are being mutated to avoid
// passing extraneous information down a call stack
type DeferredLoadingClientConfig struct {
loader ClientConfigLoader
overrides *ConfigOverrides
fallbackReader io.Reader
clientConfig ClientConfig
loadingLock sync.Mutex
// provided for testing
icc InClusterConfig
}
// InClusterConfig abstracts details of whether the client is running in a cluster for testing.
type InClusterConfig interface {
ClientConfig
Possible() bool
}
// NewNonInteractiveDeferredLoadingClientConfig creates a ClientConfig using the passed context name
func NewNonInteractiveDeferredLoadingClientConfig(loader ClientConfigLoader, overrides *ConfigOverrides) ClientConfig {
return &DeferredLoadingClientConfig{loader: loader, overrides: overrides, icc: inClusterClientConfig{}}
}
// NewInteractiveDeferredLoadingClientConfig creates a ClientConfig using the passed context name and the fallback auth reader
func NewInteractiveDeferredLoadingClientConfig(loader ClientConfigLoader, overrides *ConfigOverrides, fallbackReader io.Reader) ClientConfig {
return &DeferredLoadingClientConfig{loader: loader, overrides: overrides, icc: inClusterClientConfig{}, fallbackReader: fallbackReader}
}
func (config *DeferredLoadingClientConfig) createClientConfig() (ClientConfig, error) {
if config.clientConfig == nil {
config.loadingLock.Lock()
defer config.loadingLock.Unlock()
if config.clientConfig == nil {
mergedConfig, err := config.loader.Load()
if err != nil {
return nil, err
}
var mergedClientConfig ClientConfig
if config.fallbackReader != nil {
mergedClientConfig = NewInteractiveClientConfig(*mergedConfig, config.overrides.CurrentContext, config.overrides, config.fallbackReader, config.loader)
} else {
mergedClientConfig = NewNonInteractiveClientConfig(*mergedConfig, config.overrides.CurrentContext, config.overrides, config.loader)
}
config.clientConfig = mergedClientConfig
}
}
return config.clientConfig, nil
}
func (config *DeferredLoadingClientConfig) RawConfig() (clientcmdapi.Config, error) {
mergedConfig, err := config.createClientConfig()
if err != nil {
return clientcmdapi.Config{}, err
}
return mergedConfig.RawConfig()
}
// ClientConfig implements ClientConfig
func (config *DeferredLoadingClientConfig) ClientConfig() (*rest.Config, error) {
mergedClientConfig, err := config.createClientConfig()
if err != nil {
return nil, err
}
// load the configuration and return on non-empty errors and if the
// content differs from the default config
mergedConfig, err := mergedClientConfig.ClientConfig()
switch {
case err != nil:
if !IsEmptyConfig(err) {
// return on any error except empty config
return nil, err
}
case mergedConfig != nil:
// the configuration is valid, but if this is equal to the defaults we should try
// in-cluster configuration
if !config.loader.IsDefaultConfig(mergedConfig) {
return mergedConfig, nil
}
}
// check for in-cluster configuration and use it
if config.icc.Possible() {
glog.V(4).Infof("Using in-cluster configuration")
return config.icc.ClientConfig()
}
// return the result of the merged client config
return mergedConfig, err
}
// Namespace implements KubeConfig
func (config *DeferredLoadingClientConfig) Namespace() (string, bool, error) {
mergedKubeConfig, err := config.createClientConfig()
if err != nil {
return "", false, err
}
ns, ok, err := mergedKubeConfig.Namespace()
// if we get an error and it is not empty config, or if the merged config defined an explicit namespace, or
// if in-cluster config is not possible, return immediately
if (err != nil && !IsEmptyConfig(err)) || ok || !config.icc.Possible() {
// return on any error except empty config
return ns, ok, err
}
glog.V(4).Infof("Using in-cluster namespace")
// allow the namespace from the service account token directory to be used.
return config.icc.Namespace()
}
// ConfigAccess implements ClientConfig
func (config *DeferredLoadingClientConfig) ConfigAccess() ConfigAccess {
return config.loader
}
| {
"pile_set_name": "Github"
} |
using AvalonStudio.Extensibility.Templating;
using AvalonStudio.MVVM;
using ReactiveUI;
using System.Globalization;
namespace AvalonStudio.Controls.Standard.SolutionExplorer
{
class TemplateParameterViewModel : ViewModel
{
private ITemplateParameter _inner;
private string _value;
public TemplateParameterViewModel(ITemplate parent, ITemplateParameter parameter)
{
_inner = parameter;
if (parameter.Name == "name" && string.IsNullOrEmpty(parameter.DefaultValue))
{
_value = parent.DefaultName;
}
else
{
_value = parameter.DefaultValue;
}
}
public string Name => CultureInfo.CurrentCulture.TextInfo.ToTitleCase(_inner.Name);
public string Value
{
get { return _value; }
set { this.RaiseAndSetIfChanged(ref _value, value); }
}
}
}
| {
"pile_set_name": "Github"
} |
// Code generated by smithy-go-codegen DO NOT EDIT.
package sesv2
import (
"context"
awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
"github.com/aws/aws-sdk-go-v2/aws/retry"
"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
smithy "github.com/awslabs/smithy-go"
"github.com/awslabs/smithy-go/middleware"
smithyhttp "github.com/awslabs/smithy-go/transport/http"
)
// Deletes an email identity. An identity can be either an email address or a
// domain name.
func (c *Client) DeleteEmailIdentity(ctx context.Context, params *DeleteEmailIdentityInput, optFns ...func(*Options)) (*DeleteEmailIdentityOutput, error) {
stack := middleware.NewStack("DeleteEmailIdentity", smithyhttp.NewStackRequest)
options := c.options.Copy()
for _, fn := range optFns {
fn(&options)
}
addawsRestjson1_serdeOpDeleteEmailIdentityMiddlewares(stack)
awsmiddleware.AddRequestInvocationIDMiddleware(stack)
smithyhttp.AddContentLengthMiddleware(stack)
AddResolveEndpointMiddleware(stack, options)
v4.AddComputePayloadSHA256Middleware(stack)
retry.AddRetryMiddlewares(stack, options)
addHTTPSignerV4Middleware(stack, options)
awsmiddleware.AddAttemptClockSkewMiddleware(stack)
addClientUserAgent(stack)
smithyhttp.AddErrorCloseResponseBodyMiddleware(stack)
smithyhttp.AddCloseResponseBodyMiddleware(stack)
addOpDeleteEmailIdentityValidationMiddleware(stack)
stack.Initialize.Add(newServiceMetadataMiddleware_opDeleteEmailIdentity(options.Region), middleware.Before)
addRequestIDRetrieverMiddleware(stack)
addResponseErrorMiddleware(stack)
for _, fn := range options.APIOptions {
if err := fn(stack); err != nil {
return nil, err
}
}
handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
result, metadata, err := handler.Handle(ctx, params)
if err != nil {
return nil, &smithy.OperationError{
ServiceID: ServiceID,
OperationName: "DeleteEmailIdentity",
Err: err,
}
}
out := result.(*DeleteEmailIdentityOutput)
out.ResultMetadata = metadata
return out, nil
}
// A request to delete an existing email identity. When you delete an identity, you
// lose the ability to send email from that identity. You can restore your ability
// to send email by completing the verification process for the identity again.
type DeleteEmailIdentityInput struct {
// The identity (that is, the email address or domain) that you want to delete.
EmailIdentity *string
}
// An HTTP 200 response if the request succeeds, or an error message if the request
// fails.
type DeleteEmailIdentityOutput struct {
// Metadata pertaining to the operation's result.
ResultMetadata middleware.Metadata
}
func addawsRestjson1_serdeOpDeleteEmailIdentityMiddlewares(stack *middleware.Stack) {
stack.Serialize.Add(&awsRestjson1_serializeOpDeleteEmailIdentity{}, middleware.After)
stack.Deserialize.Add(&awsRestjson1_deserializeOpDeleteEmailIdentity{}, middleware.After)
}
func newServiceMetadataMiddleware_opDeleteEmailIdentity(region string) awsmiddleware.RegisterServiceMetadata {
return awsmiddleware.RegisterServiceMetadata{
Region: region,
ServiceID: ServiceID,
SigningName: "ses",
OperationName: "DeleteEmailIdentity",
}
}
| {
"pile_set_name": "Github"
} |
var Variable = this.$el.find("#Variable").val().toUpperCase();
var IndexStart = GetInputConstructorValue("IndexStart", loader);
var IndexEnd = GetInputConstructorValue("IndexEnd", loader);
var VariableNewList = this.$el.find("#VariableNewList").val().toUpperCase();
if(IndexEnd["original"].length == 0)
{
Invalid("IndexEnd is empty");
return;
}
if(IndexStart["original"].length == 0)
{
Invalid("IndexStart is empty");
return;
}
if(Variable.length == 0)
{
Invalid("Variable is empty");
return;
}
if(VariableNewList.length == 0)
{
Invalid("VariableNewList is empty");
return;
}
try{
var code = loader.GetAdditionalData() + _.template($("#sublist_code").html())({variable:"VAR_" + Variable,variable_new:"VAR_" + VariableNewList,index_start: IndexStart["updated"],index_end: IndexEnd["updated"]})
code = Normalize(code,0)
BrowserAutomationStudio_Append("", BrowserAutomationStudio_SaveControls() + code, action, DisableIfAdd);
}catch(e)
{
// Ignore template/normalization errors; nothing is appended in that case.
} | {
"pile_set_name": "Github"
} |
package procfs
import (
"fmt"
"os"
"path"
)
// FS represents the pseudo-filesystem proc, which provides an interface to
// kernel data structures.
type FS string
// DefaultMountPoint is the common mount point of the proc filesystem.
const DefaultMountPoint = "/proc"
// NewFS returns a new FS mounted under the given mountPoint. It will error
// if the mount point can't be read.
func NewFS(mountPoint string) (FS, error) {
info, err := os.Stat(mountPoint)
if err != nil {
return "", fmt.Errorf("could not read %s: %s", mountPoint, err)
}
if !info.IsDir() {
return "", fmt.Errorf("mount point %s is not a directory", mountPoint)
}
return FS(mountPoint), nil
}
// Path returns the path of the given subsystem relative to the procfs root.
func (fs FS) Path(p ...string) string {
return path.Join(append([]string{string(fs)}, p...)...)
}
| {
"pile_set_name": "Github"
} |
//*******************************************************
//
// Delphi DataSnap Framework
//
// Copyright(c) 1995-2012 Embarcadero Technologies, Inc.
//
//*******************************************************
unit DSRESTConnection;
{$IFDEF FPC}
{$mode DELPHI}
{$IFNDEF WINDOWS}
{$modeswitch objectivec2}
{$ENDIF}
{$ENDIF}
{$I dsrestdefines.inc}
interface
uses
SysUtils, Types, Classes,
DSRestParameter,
DSRESTParameterMetaData,
Contnrs,
DSRestTypes,
DBXFPCJSON
{$IFDEF FPC}
, DBXConnection, iPhoneAll
{$ENDIF};
type
TDSRESTConnection = class;
TRequestTypes = (GET, POST, PUT, Delete);
TDSRESTParameterList = class;
TDSRESTCommand = class(TObject)
private
FConnection: TDSRESTConnection;
FFullyQualifiedMethodName: string;
FParameters: TDSRESTParameterList;
FText: string;
FRequestType: TRequestTypes;
public
constructor Create(aConn: TDSRESTConnection); reintroduce; virtual;
destructor Destroy; override;
procedure Prepare(metadatas: TDSRESTParameterMetaDataArray);
procedure Execute;
property Connection: TDSRESTConnection read FConnection write FConnection;
property FullyQualifiedMethodName: string read FFullyQualifiedMethodName
write FFullyQualifiedMethodName;
property Parameters: TDSRESTParameterList read FParameters
write FParameters;
property Text: string read FText write FText;
property RequestType: TRequestTypes read FRequestType write FRequestType;
end;
TDSRESTConnection = class(TComponent)
private
FPort: integer;
FUrlPath: string;
FHost: String;
FProtocol: String;
FContext: string;
FSessionID: String;
FSessionIDExpires: LongInt;
FUserName: string;
FPassword: string;
FConnectionTimeout: integer;
{$IFDEF FPC_ON_IOS}
FListner: TConnectionEventLister;
FInternalDelegate: TDBXConnectionEventHook;
{$ENDIF}
protected
{$IFDEF FPC_ON_IOS}
function GetInnerDelegate: TDBXConnectionEventHook;
{$ENDIF}
function isUrlParameter(parameter: TDSRestParameter): Boolean;
{$IFDEF FPC}
function buildRequestURLNS(command: TDSRESTCommand): NSMutableString;
function encodeURIComponentWithParamNS(parameter: TDSRestParameter)
: NSString;
function encodeURIComponentNS(Value: NSString): NSString;
procedure SetUpSessionHeader(var request: NSMutableURLRequest);
procedure setUpAuthorizationHeader(var request: NSMutableURLRequest);
procedure setUpHeader(var request: NSMutableURLRequest);
function CreateRequestNS(command: TDSRESTCommand): NSMutableURLRequest;
function BuildRequestNS(arequestType: TRequestTypes; aUrl: NSString;
Parameters: TJSONArray): NSMutableURLRequest;
procedure setSessionIdentifier(response: NSHTTPURLResponse);
function SafeParseResponse(response: NSHTTPURLResponse; err: NSError;
jsonStr: String): TJSONObject;
function NSDataToStream(Value: NSData): TStream;
{$ENDIF}
function isThereOnlyOneStreamInOutput(Parameters
: TDSRESTParameterList): Boolean;
function isOnlyOneParameterInOutput(Parameters
: TDSRESTParameterList): Boolean;
procedure throwJsonExceptionIfNeed(json: TJSONObject; athrowEx: Boolean);
procedure CloseSession;
public
constructor Create(AOwner: TComponent); override;
function Clone(includeSession: Boolean): TDSRESTConnection;
function CreateCommand: TDSRESTCommand;
procedure Execute(cmd: TDSRESTCommand);
property Port: integer read FPort write FPort;
property UrlPath: string read FUrlPath write FUrlPath;
property Host: String read FHost write FHost;
property Protocol: String read FProtocol write FProtocol;
property Context: string read FContext write FContext;
property SessionID: String read FSessionID write FSessionID;
property SessionIDExpires: LongInt read FSessionIDExpires
write FSessionIDExpires;
property UserName: string read FUserName write FUserName;
property Password: string read FPassword write FPassword;
property ConnectionTimeout: integer read FConnectionTimeout
write FConnectionTimeout;
{$IFDEF FPC_ON_IOS}
property Listener: TConnectionEventLister read FListner write FListner;
{$ENDIF}
end;
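// Thin wrapper around TObjectList that stores TDSRestParameter instances.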
TDSRESTParameterList = class
private
FList: TObjectList;
FOwnsObjects: Boolean;
function getItems(index: integer): TDSRestParameter;
procedure SetOwnsObjects(const Value: Boolean);
public
constructor Create;
destructor Destroy; override;
function Count: integer;
function Add(Value: TDSRestParameter): TDSRestParameter;
property OwnsObjects: Boolean read FOwnsObjects write SetOwnsObjects;
property Items[index: integer]: TDSRestParameter read getItems;
end;
implementation
uses
DBXDefaultFormatter, DBXJsonTools, DBXValue, FPCStrings
{$IFDEF FPC_ON_IOS} , CFURL, CFString{$ENDIF};
{$IFDEF FPC_ON_IOS}
function EncodeUrlNS(Value: string): NSString;
begin
Result := NSString(CFURLCreateStringByAddingPercentEscapes(nil,
CFSTR(PChar(Value)), nil, CFSTR(PChar('!*();:@&=+$,/?%#[]''')),
kCFStringEncodingUTF8));
Result.autorelease;
end;
function EncodeUrlNS2(Value: NSString): NSString;
begin
Result := NSString(CFURLCreateStringByAddingPercentEscapes(nil,
CFSTR(Value.UTF8String), nil, CFSTR(PChar('!*();:@&=+$,/?%#[]''')),
kCFStringEncodingUTF8));
Result.autorelease;
end;
function TDSRESTConnection.encodeURIComponentNS(Value: NSString): NSString;
begin
Result := EncodeUrlNS2(Value);
end;
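// Builds the request URL in the form
//   protocol://host[:port][/UrlPath]/<Context>rest/ClassName/MethodName
// where Context defaults to 'datasnap/'. The method name is escaped
// differently for GET/DELETE than for POST/PUT requests.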
function TDSRESTConnection.buildRequestURLNS(command: TDSRESTCommand)
: NSMutableString;
var
LPathPrefix, LMethodName, LUrl: NSMutableString;
LPort: integer;
LProtocol, LHost, LPortString, LContext: NSString;
replaceRange, replaceRange2: NSRange;
begin
LPathPrefix := NSMutableString.alloc.initWithCapacity(0);
if (Trim(FUrlPath) <> '') then
LPathPrefix.appendString(StringToNS(FUrlPath));
LPort := FPort;
LMethodName := NSMutableString.alloc.initWithString(StringToNS(command.Text));
LProtocol := NSString.alloc.initWithString(StringToNS(FProtocol));
if (LProtocol.isEqualToString(StringToNS(''))) then
begin
LProtocol.Release;
LProtocol := StringToNS('http').retain;
end;
LHost := NSString.alloc.initWithString(StringToNS(FHost));
if (LHost.isEqualToString(StringToNS(''))) then
begin
LHost.Release;
LHost := StringToNS('localhost').retain;
end;
if (not LPathPrefix.isEqualToString(StringToNS(''))) then
LPathPrefix.insertString_atIndex(StringToNS('/'), 0);
LPortString := StringToNS('');
if (LPort > 0) then
begin
LPortString.Release;
LPortString := NSString.alloc.initWithString
(StringToNS(':' + IntToStr(LPort)));
end;
if (command.RequestType in [GET, Delete]) then
begin
replaceRange := LMethodName.rangeOfString(StringToNS('.'));
if (replaceRange.location <> NSNotFound) then
LMethodName.replaceCharactersInRange_WithString(replaceRange,
StringToNS('/'));
replaceRange2 := LMethodName.rangeOfString(LMethodName);
LMethodName.replaceOccurrencesOfString_withString_options_range
(StringToNS('\\'), StringToNS('\%22'), NSCaseInsensitiveSearch,
replaceRange2);
end
else
begin
replaceRange := LMethodName.rangeOfString(StringToNS('.'));
if (replaceRange.location <> NSNotFound) then
LMethodName.replaceCharactersInRange_WithString(replaceRange,
StringToNS('/%22'));
replaceRange2 := LMethodName.rangeOfString(LMethodName);
LMethodName.replaceOccurrencesOfString_withString_options_range
(StringToNS('\'), StringToNS('\%22'), NSCaseInsensitiveSearch,
replaceRange2);
LMethodName.appendString(StringToNS('%22'));
end;
LContext := NSString.alloc.initWithString(StringToNS(FContext));
if (LContext.isEqualToString(StringToNS(''))) then
begin
LContext.Release;
LContext := StringToNS('datasnap/').retain;
end;
LUrl := NSMutableString.alloc.initWithCapacity(0);
LUrl.appendString(LProtocol);
LUrl.appendString(StringToNS('://'));
LUrl.appendString(encodeURIComponentNS(LHost));
LUrl.appendString(LPortString);
LUrl.appendString(LPathPrefix);
LUrl.appendString(StringToNS('/'));
LUrl.appendString(LContext);
LUrl.appendString(StringToNS('rest/'));
LUrl.appendString(LMethodName);
LHost.Release;
LProtocol.Release;
LPortString.Release;
LPathPrefix.Release;
LContext.Release;
LMethodName.Release;
Result := LUrl.autorelease;
end;
{$ENDIF}
{ TDSRESTParameterList }
function TDSRESTParameterList.Add(Value: TDSRestParameter): TDSRestParameter;
begin
FList.Add(Value);
Result := Value;
end;
function TDSRESTParameterList.Count: integer;
begin
Result := FList.Count;
end;
constructor TDSRESTParameterList.Create;
begin
inherited;
FList := TObjectList.Create;
end;
destructor TDSRESTParameterList.Destroy;
begin
FList.Free;
inherited;
end;
function TDSRESTParameterList.getItems(index: integer): TDSRestParameter;
begin
Result := TDSRestParameter(FList.Items[index]);
end;
procedure TDSRESTParameterList.SetOwnsObjects(const Value: Boolean);
begin
FOwnsObjects := Value;
FList.OwnsObjects := Value;
end;
{ TDSRESTCommand }
constructor TDSRESTCommand.Create(aConn: TDSRESTConnection);
begin
inherited Create;
FConnection := aConn;
FParameters := TDSRESTParameterList.Create;
FParameters.OwnsObjects := True;
end;
destructor TDSRESTCommand.Destroy;
begin
FParameters.Free;
inherited;
end;
procedure TDSRESTCommand.Execute;
begin
FConnection.Execute(self);
end;
procedure TDSRESTCommand.Prepare(metadatas: TDSRESTParameterMetaDataArray);
var
p: TDSRestParameter;
param: TDSRESTParameterMetaData;
begin
for param in metadatas do
begin
p := TDSRestParameter.Create;
p.Direction := param.Direction;
p.DBXType := param.DBXType;
p.TypeName := param.TypeName;
p.Value.DBXType := param.DBXType;
Parameters.Add(p);
end;
end;
{ TDSRESTConnection }
procedure TDSRESTConnection.CloseSession;
begin
FSessionID := '';
FSessionIDExpires := -1;
end;
constructor TDSRESTConnection.Create(AOwner: TComponent);
begin
inherited;
FConnectionTimeout := 30;
end;
function TDSRESTConnection.CreateCommand: TDSRESTCommand;
begin
Result := TDSRESTCommand.Create(self);
end;
procedure TDSRESTConnection.throwJsonExceptionIfNeed(json: TJSONObject;
athrowEx: Boolean);
var
msg: String;
begin
if json.has('error') then
begin
msg := json.getString('error');
raise DBXException.Create(msg);
end
else if json.has('SessionExpired') then
begin
CloseSession;
msg := json.getString('SessionExpired');
raise DBXException.Create(msg);
end;
end;
{$IFDEF FPC}
function TDSRESTConnection.encodeURIComponentWithParamNS
(parameter: TDSRestParameter): NSString;
begin
Result := encodeURIComponentNS(StringToNS(parameter.Value.ToString));
end;
function TDSRESTConnection.BuildRequestNS(arequestType: TRequestTypes;
aUrl: NSString; Parameters: TJSONArray): NSMutableURLRequest;
var
url, _dataString: NSString;
nsBody: NSString;
cmdurl: NSURL;
_nsdata: NSData;
jbody: TJSONObject;
begin
cmdurl := NSURL.URLWithString(aUrl);
Result := NSMutableURLRequest.requestWithURL(cmdurl);
case arequestType of
GET:
Result.setHTTPMethod(NSStr(string('GET')));
Delete:
Result.setHTTPMethod(NSStr(string('DELETE')));
POST:
begin
Result.setHTTPMethod(NSStr(string('POST')));
if not assigned(Parameters) then
raise DBXException.Create
('Parameters cannot be null in a POST request');
if Parameters.size > 0 then
begin
jbody := TJSONObject.Create;
try
jbody.addPairs('_parameters', Parameters.Clone);
nsBody := StringToNS(jbody.ToString);
finally
jbody.Free;
end;
Result.setHTTPBody(nsBody.dataUsingEncoding(NSUTF8StringEncoding));
end;
end;
PUT:
begin
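// Note: PUT requests are sent with the HTTP POST method here.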
Result.setHTTPMethod(NSStr(string('POST')));
if Parameters.size > 0 then
begin
nsBody := NSStr(string('body'));
Result.setHTTPBody(nsBody.dataUsingEncoding(NSUTF8StringEncoding));
end;
end;
end;
Result.setTimeoutInterval(FConnectionTimeout);
end;
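// Extracts the DataSnap session identifier and expiry from the "Pragma"
// response header (items of the form "dssession=<id>" and
// "dssessionexpires=<value>" separated by commas); closes the current
// session if no identifier is found.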
procedure TDSRESTConnection.setSessionIdentifier(response: NSHTTPURLResponse);
var
found: Boolean;
hd: NSDictionary;
pragma: NSString;
ds, er: NSArray;
e, s: NSString;
i: integer;
begin
found := False;
hd := response.allHeaderFields;
pragma := hd.objectForKey(NSStr('Pragma'));
if assigned(pragma) then
begin
ds := pragma.componentsSeparatedByString(NSStr(','));
for i := 0 to ds.Count - 1 do
begin
e := NSString(ds.objectAtIndex(i));
er := e.componentsSeparatedByString(NSStr('='));
if (er.Count > 0) then
begin
s := NSString(er.objectAtIndex(0));
if (s.isEqualToString(NSStr('dssession'))) then
begin
SessionID := String(NSToString(NSString(er.objectAtIndex(1))));
found := True;
end
else if (s.isEqualToString(NSStr('dssessionexpires'))) then
FSessionIDExpires := StrToIntDef(String(NSToString(NSString(er.objectAtIndex(1)))), -1);
end;
end;
end;
if (not found) then
CloseSession;
end;
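// For non-200 responses, tries to parse the body as JSON and raise the error
// it describes; if the body is not JSON, raises a DBXException built from the
// HTTP status text. Also raises if an NSError was reported for the request.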
function TDSRESTConnection.SafeParseResponse(response: NSHTTPURLResponse;
err: NSError; jsonStr: String): TJSONObject;
var
msg: String;
json: TJSONObject;
begin
Result := nil;
if (response.isKindOfClass(NSHTTPURLResponse.classClass)) then
begin
if (response.statusCode <> 200) then
begin
try
Result := TJSONObject.parse(jsonStr);
except
end;
if Result = nil then
begin
msg := NSToString(NSHTTPURLResponse.localizedStringForStatusCode
(response.statusCode));
msg := msg + NSToString(err.userInfo.description);
raise DBXException.Create(format('%d:%s', [response.statusCode, msg]));
end
else
throwJsonExceptionIfNeed(Result, True);
end;
end;
if assigned(err) then
begin
msg := NSToString(err.localizedDescription);
raise DBXException.Create(msg);
end;
end;
function TDSRESTConnection.NSDataToStream(Value: NSData): TStream;
var
s: NSString;
sStream: String;
begin
Result := nil;
s := NSString.alloc.initWithData_encoding(Value, NSUTF8StringEncoding);
try
sStream := NSToString(s);
if sStream <> '' then
begin
Result := TStringStream.Create(sStream);
end;
finally
s.Release;
end;
end;
procedure TDSRESTConnection.SetUpSessionHeader(var request
: NSMutableURLRequest);
var
s: String;
begin
if Trim(FSessionID) <> '' then
begin
s := 'dssession=' + FSessionID;
request.addValue_forHTTPHeaderField(StringToNS(s), NSStr(String('Pragma')));
end;
end;
procedure TDSRESTConnection.setUpAuthorizationHeader
(var request: NSMutableURLRequest);
var
s: String;
begin
if Trim(FUserName) = '' then
request.addValue_forHTTPHeaderField(NSStr(string('Basic Og==')),
NSStr(String('Authorization')))
else
begin
s := TDBXDefaultFormatter.getInstance.Base64Encode
(FUserName + ':' + FPassword);
request.addValue_forHTTPHeaderField(StringToNS(s),
NSStr(String('Authorization')));
end;
end;
procedure TDSRESTConnection.setUpHeader(var request: NSMutableURLRequest);
begin
request.addValue_forHTTPHeaderField
(NSStr(string('Mon, 1 Oct 1990 05:00:00 GMT')),
NSStr(String('If-Modified-Since')));
request.addValue_forHTTPHeaderField(NSStr(string('Keep-Alive')),
NSStr(String('Connection')));
request.addValue_forHTTPHeaderField(NSStr(string('text/plain;charset=UTF-8')),
NSStr(String('Content-Type')));
request.addValue_forHTTPHeaderField(NSStr(string('application/JSON')),
NSStr(String('Accept')));
request.addValue_forHTTPHeaderField(NSStr(string('identity')),
NSStr(String('Accept-Encoding')));
request.addValue_forHTTPHeaderField
(NSStr(string('Mozilla/3.0 (compatible; Indy Library)')),
NSStr(String('User-Agent')));
if Trim(FSessionID) = '' then
setUpAuthorizationHeader(request)
else
SetUpSessionHeader(request);
end;
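// Builds the NSMutableURLRequest for a command: input parameters are appended
// to the URL for GET/DELETE; for POST/PUT, leading URL-compatible parameters
// go into the URL and the remaining ones are collected into the JSON body.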
function TDSRESTConnection.CreateRequestNS(command: TDSRESTCommand)
: NSMutableURLRequest;
var
url: NSMutableString;
parametersToSend: TObjectList;
_parameters: TJSONArray;
i: integer;
p: TDSRestParameter;
CanAddParamsToUrl: Boolean;
begin
Result := nil;
url := buildRequestURLNS(command);
parametersToSend := TObjectList.Create(False);
_parameters := TJSONArray.Create;
try
for i := 0 to command.Parameters.Count - 1 do
begin
p := TDSRestParameter(command.Parameters.Items[i]);
if p.Direction in [Input, InputOutput] then
begin
parametersToSend.Add(p);
end;
end;
if command.RequestType in [GET, Delete] then
begin
for i := 0 to parametersToSend.Count - 1 do
begin
p := TDSRestParameter(parametersToSend.Items[i]);
url.appendString(StringToNS('/'));
url.appendString(encodeURIComponentWithParamNS(p));
end;
end
else
begin
CanAddParamsToUrl := True;
for i := 0 to parametersToSend.Count - 1 do
begin
p := TDSRestParameter(parametersToSend.Items[i]);
if CanAddParamsToUrl and isUrlParameter(p) then
begin
url.appendString(StringToNS('/'));
url.appendString(encodeURIComponentWithParamNS(p));
end
else
begin
CanAddParamsToUrl := False;
p.Value.appendTo(_parameters);
end;
end;
end;
Result := BuildRequestNS(command.RequestType, url, _parameters);
setUpHeader(Result);
finally
_parameters.Free;
parametersToSend.Free;
end;
end;
{$ENDIF}
function CloneStream(Value: TStream): TStream;
begin
Result := nil;
if assigned(Value) then
begin
Result := TMemoryStream.Create;
Value.Position := 0;
Result.CopyFrom(Value, Value.size);
Result.Position := 0;
end;
end;
{$IFDEF FPC_ON_IOS}
function TDSRESTConnection.GetInnerDelegate: TDBXConnectionEventHook;
begin
if not assigned(FInternalDelegate) then
begin
{$IFDEF FPC}
FInternalDelegate := TDBXConnectionEventHook.alloc.init.autorelease;
{$ELSE}
FInternalDelegate := TDBXConnectionEventHook.Create;
{$ENDIF}
FInternalDelegate.SetListener(self.Listener);
end;
Result := FInternalDelegate;
end;
{$ENDIF}
{$IFNDEF FPC}
procedure TDSRESTConnection.Execute(cmd: TDSRESTCommand);
begin
raise Exception.Create
('This proxy cannot be used with Delphi on Windows; it is only supported with FreePascal for iOS');
end;
{$ELSE}
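// Sends the request synchronously, updates the session identifier from the
// response headers and maps the JSON "result" array (or a single returned
// stream) back onto the command's output parameters.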
procedure TDSRESTConnection.Execute(cmd: TDSRESTCommand);
var
request: NSMutableURLRequest;
urlr: NSHTTPURLResponse = nil;
urle: NSError;
json_string1: NSString;
res: NSData;
response, cpyResponse: TStream;
json: TJSONObject;
results: TJSONArray;
i, returnParIndex: integer;
v: TDBXValue;
begin
response := nil;
json := nil;
request := CreateRequestNS(cmd);
// Objective-C objects created here are autoreleased
urlr := nil;
urle := nil;
try
res := TDBXConnection.
sendSynchronousRequest_returningResponse_error_usingDelegate(request,
NSURLResponse(urlr), urle, GetInnerDelegate);
if not assigned(res) then
begin
raise DBXException.Create(NSToString(urle.localizedDescription));
end;
json_string1 := NSString.alloc.initWithData_encoding(res,
NSUTF8StringEncoding);
response := NSDataToStream(res);
try
setSessionIdentifier(urlr);
try
// try to read the error from the json response
json := SafeParseResponse(urlr, urle, NSToString(json_string1));
// try to find an error in the http response
if isThereOnlyOneStreamInOutput(cmd.Parameters) then
begin
// Here I know that HTTP response is not an error and there isn't a json in the response
for i := 0 to cmd.Parameters.Count - 1 do
begin
if (cmd.Parameters.Items[i].Direction in [ReturnValue, InputOutput,
Output]) then
begin
if cmd.Parameters.Items[i].isDBXValue then
cmd.Parameters.Items[i].Value.AsDBXValue.AsStream :=
CloneStream(response)
else
cmd.Parameters.Items[i].Value.AsStream := CloneStream(response);
end;
end;
end
else
begin
if not assigned(json) then
begin
json := TJSONObject.parse(NSToString(json_string1));
end;
throwJsonExceptionIfNeed(json, True);
// Here I know that HTTP response is not an error, JSON is valid and doesn't contain an error
results := json.getJSONArray('result');
returnParIndex := 0;
for i := 0 to cmd.Parameters.Count - 1 do
begin
if (cmd.Parameters.Items[i].Direction in [ReturnValue, InputOutput,
Output]) then
begin
v := cmd.Parameters.Items[i].Value;
TDBXJsonTools.jsonToDBX(results.GET(returnParIndex), v,
cmd.Parameters.Items[i].TypeName);
Inc(returnParIndex);
end;
end;
FreeAndNil(json);
end;
finally
FreeAndNil(json);
end;
finally
response.Free;
json_string1.Release;
end;
finally
if assigned(urle) then
urle.Release;
if assigned(urlr) then
urlr.Release;
end
end;
{$ENDIF}
function TDSRESTConnection.isOnlyOneParameterInOutput
(Parameters: TDSRESTParameterList): Boolean;
var
i, Count: integer;
begin
Count := 0;
for i := 0 to Parameters.Count - 1 do
begin
if Parameters.Items[i].Direction in [ReturnValue, InputOutput, Output] then
begin
Inc(Count);
end;
end;
Result := Count = 1;
end;
function TDSRESTConnection.isThereOnlyOneStreamInOutput
(Parameters: TDSRESTParameterList): Boolean;
var
i: integer;
begin
Result := False;
if isOnlyOneParameterInOutput(Parameters) then
begin
for i := 0 to Parameters.Count - 1 do
begin
if (Parameters.Items[i].Direction in [ReturnValue, InputOutput, Output])
and (Parameters.Items[i].DBXType = BinaryBlobType) then
begin
Exit(True);
end;
end;
end;
end;
function TDSRESTConnection.isUrlParameter(parameter: TDSRestParameter): Boolean;
begin
Result := not(parameter.DBXType in [JSONValueType, BinaryBlobType,
TableType]);
end;
function TDSRESTConnection.Clone(includeSession: Boolean): TDSRESTConnection;
begin
Result := TDSRESTConnection.Create(nil);
Result.Host := self.Host;
Result.Port := self.Port;
Result.Protocol := self.Protocol;
Result.UserName := self.UserName;
Result.Password := self.Password;
Result.UrlPath := self.UrlPath;
Result.ConnectionTimeout := self.ConnectionTimeout;
if (includeSession) then
begin
Result.SessionID := self.SessionID;
Result.SessionIDExpires := self.SessionIDExpires;
end;
end;
end.
| {
"pile_set_name": "Github"
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package javax.servlet.jsp.el;
import java.beans.FeatureDescriptor;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.List;
import java.util.Objects;
import javax.el.ELClass;
import javax.el.ELContext;
import javax.el.ELResolver;
import javax.el.ImportHandler;
import javax.servlet.jsp.JspContext;
import javax.servlet.jsp.PageContext;
/**
*
* @since 2.1
*/
public class ScopedAttributeELResolver extends ELResolver {
// Indicates if a performance short-cut is available
private static final Class<?> AST_IDENTIFIER_KEY;
static {
Class<?> key = null;
try {
key = Class.forName("org.apache.el.parser.AstIdentifier");
} catch (Exception e) {
// Ignore: Expected if not running on Tomcat. Not a problem since
// this just allows a short-cut.
}
AST_IDENTIFIER_KEY = key;
}
@Override
public Object getValue(ELContext context, Object base, Object property) {
Objects.requireNonNull(context);
Object result = null;
if (base == null) {
context.setPropertyResolved(base, property);
if (property != null) {
String key = property.toString();
PageContext page = (PageContext) context.getContext(JspContext.class);
result = page.findAttribute(key);
if (result == null) {
boolean resolveClass = true;
// Performance short-cut available when running on Tomcat
if (AST_IDENTIFIER_KEY != null) {
// Tomcat will set this key to Boolean.TRUE if the
// identifier is a stand-alone identifier (i.e.
// identifier) rather than part of an AstValue (i.e.
// identifier.something). Imports do not need to be
// checked if this is a stand-alone identifier
Boolean value = (Boolean) context.getContext(AST_IDENTIFIER_KEY);
if (value != null && value.booleanValue()) {
resolveClass = false;
}
}
// This might be the name of an imported class
ImportHandler importHandler = context.getImportHandler();
if (importHandler != null) {
Class<?> clazz = null;
if (resolveClass) {
clazz = importHandler.resolveClass(key);
}
if (clazz != null) {
result = new ELClass(clazz);
}
if (result == null) {
// This might be the name of an imported static field
clazz = importHandler.resolveStatic(key);
if (clazz != null) {
try {
result = clazz.getField(key).get(null);
} catch (IllegalArgumentException | IllegalAccessException |
NoSuchFieldException | SecurityException e) {
// Most (all?) of these should have been
// prevented by the checks when the import
// was defined.
}
}
}
}
}
}
}
return result;
}
@Override
public Class<Object> getType(ELContext context, Object base, Object property) {
Objects.requireNonNull(context);
if (base == null) {
context.setPropertyResolved(base, property);
return Object.class;
}
return null;
}
@Override
public void setValue(ELContext context, Object base, Object property, Object value) {
Objects.requireNonNull(context);
if (base == null) {
context.setPropertyResolved(base, property);
if (property != null) {
String key = property.toString();
PageContext page = (PageContext) context.getContext(JspContext.class);
int scope = page.getAttributesScope(key);
if (scope != 0) {
page.setAttribute(key, value, scope);
} else {
page.setAttribute(key, value);
}
}
}
}
@Override
public boolean isReadOnly(ELContext context, Object base, Object property) {
Objects.requireNonNull(context);
if (base == null) {
context.setPropertyResolved(base, property);
}
return false;
}
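// Builds a FeatureDescriptor for every attribute in page, request,
// session (if a session exists) and application scope.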
@Override
public Iterator<FeatureDescriptor> getFeatureDescriptors(ELContext context, Object base) {
PageContext ctxt = (PageContext) context.getContext(JspContext.class);
List<FeatureDescriptor> list = new ArrayList<>();
Enumeration<String> e;
Object value;
String name;
e = ctxt.getAttributeNamesInScope(PageContext.PAGE_SCOPE);
while (e.hasMoreElements()) {
name = e.nextElement();
value = ctxt.getAttribute(name, PageContext.PAGE_SCOPE);
FeatureDescriptor descriptor = new FeatureDescriptor();
descriptor.setName(name);
descriptor.setDisplayName(name);
descriptor.setExpert(false);
descriptor.setHidden(false);
descriptor.setPreferred(true);
descriptor.setShortDescription("page scoped attribute");
descriptor.setValue("type", value.getClass());
descriptor.setValue("resolvableAtDesignTime", Boolean.FALSE);
list.add(descriptor);
}
e = ctxt.getAttributeNamesInScope(PageContext.REQUEST_SCOPE);
while (e.hasMoreElements()) {
name = e.nextElement();
value = ctxt.getAttribute(name, PageContext.REQUEST_SCOPE);
FeatureDescriptor descriptor = new FeatureDescriptor();
descriptor.setName(name);
descriptor.setDisplayName(name);
descriptor.setExpert(false);
descriptor.setHidden(false);
descriptor.setPreferred(true);
descriptor.setShortDescription("request scope attribute");
descriptor.setValue("type", value.getClass());
descriptor.setValue("resolvableAtDesignTime", Boolean.FALSE);
list.add(descriptor);
}
if (ctxt.getSession() != null) {
e = ctxt.getAttributeNamesInScope(PageContext.SESSION_SCOPE);
while (e.hasMoreElements()) {
name = e.nextElement();
value = ctxt.getAttribute(name, PageContext.SESSION_SCOPE);
FeatureDescriptor descriptor = new FeatureDescriptor();
descriptor.setName(name);
descriptor.setDisplayName(name);
descriptor.setExpert(false);
descriptor.setHidden(false);
descriptor.setPreferred(true);
descriptor.setShortDescription("session scoped attribute");
descriptor.setValue("type", value.getClass());
descriptor.setValue("resolvableAtDesignTime", Boolean.FALSE);
list.add(descriptor);
}
}
e = ctxt.getAttributeNamesInScope(PageContext.APPLICATION_SCOPE);
while (e.hasMoreElements()) {
name = e.nextElement();
value = ctxt.getAttribute(name, PageContext.APPLICATION_SCOPE);
FeatureDescriptor descriptor = new FeatureDescriptor();
descriptor.setName(name);
descriptor.setDisplayName(name);
descriptor.setExpert(false);
descriptor.setHidden(false);
descriptor.setPreferred(true);
descriptor.setShortDescription("application scoped attribute");
descriptor.setValue("type", value.getClass());
descriptor.setValue("resolvableAtDesignTime", Boolean.FALSE);
list.add(descriptor);
}
return list.iterator();
}
@Override
public Class<String> getCommonPropertyType(ELContext context, Object base) {
if (base == null) {
return String.class;
}
return null;
}
}
| {
"pile_set_name": "Github"
} |
{
"abc": "edf",
"json": "crab",
"ololo": [
1,
2,
3
],
"subcrab": {
"name": "crab",
"surname": "subcrab"
}
}
| {
"pile_set_name": "Github"
} |
/**
* flow
*/
'use strict';
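// Mock data source: getAll() returns current observations and a seven-day
// forecast for six Australian cities, using OpenWeatherMap-style city ids
// and icon codes.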
class TestDataService {
getAll() {
return [
{
"id":"2172517",
"observation":{
"location":"Canberra",
"forecast":"Clouds",
"feelsLike":10,
"current":17,
"low":11,
"high":17,
"icon":"10d"
},
"forecast":[
{
"day":"Monday",
"forecast":"Rain",
"low":11,
"high":17,
"icon":"10d"
},
{
"day":"Tuesday",
"forecast":"Rain",
"low":7,
"high":8,
"icon":"10d"
},
{
"day":"Wednesday",
"forecast":"Rain",
"low":8,
"high":10,
"icon":"10d"
},
{
"day":"Thursday",
"forecast":"Rain",
"low":8,
"high":10,
"icon":"10d"
},
{
"day":"Friday",
"forecast":"Rain",
"low":5,
"high":10,
"icon":"10d"
},
{
"day":"Saturday",
"forecast":"Rain",
"low":3,
"high":10,
"icon":"10d"
},
{
"day":"Sunday",
"forecast":"Rain",
"low":2,
"high":9,
"icon":"10d"
}
]
},
{
"id":"2147714",
"observation":{
"location":"Sydney",
"forecast":"Clear",
"feelsLike":12,
"current":13,
"low":13,
"high":14,
"icon":"10n"
},
"forecast":[
{
"day":"Monday",
"forecast":"Rain",
"low":13,
"high":14,
"icon":"10n"
},
{
"day":"Tuesday",
"forecast":"Clear",
"low":14,
"high":17,
"icon":"01d"
},
{
"day":"Wednesday",
"forecast":"Clear",
"low":15,
"high":17,
"icon":"01d"
},
{
"day":"Thursday",
"forecast":"Rain",
"low":14,
"high":18,
"icon":"10d"
},
{
"day":"Friday",
"forecast":"Rain",
"low":14,
"high":16,
"icon":"10d"
},
{
"day":"Saturday",
"forecast":"Rain",
"low":13,
"high":15,
"icon":"10d"
},
{
"day":"Sunday",
"forecast":"Rain",
"low":13,
"high":16,
"icon":"10d"
}
]
},
{
"id":"2174003",
"observation":{
"location":"Brisbane",
"forecast":"Clear",
"feelsLike":12,
"current":14,
"low":14,
"high":15,
"icon":"01n"
},
"forecast":[
{
"day":"Monday",
"forecast":"Clear",
"low":14,
"high":15,
"icon":"01n"
},
{
"day":"Tuesday",
"forecast":"Clear",
"low":11,
"high":17,
"icon":"01d"
},
{
"day":"Wednesday",
"forecast":"Clear",
"low":10,
"high":19,
"icon":"01d"
},
{
"day":"Thursday",
"forecast":"Clear",
"low":13,
"high":23,
"icon":"01d"
},
{
"day":"Friday",
"forecast":"Clear",
"low":17,
"high":23,
"icon":"01d"
},
{
"day":"Saturday",
"forecast":"Clear",
"low":15,
"high":18,
"icon":"01d"
},
{
"day":"Sunday",
"forecast":"Rain",
"low":16,
"high":20,
"icon":"10d"
}
]
},
{
"id":"2158177",
"observation":{
"location":"Melbourne",
"forecast":"Clouds",
"feelsLike":10,
"current":10,
"low":9,
"high":10,
"icon":"10d"
},
"forecast":[
{
"day":"Monday",
"forecast":"Rain",
"low":9,
"high":10,
"icon":"10d"
},
{
"day":"Tuesday",
"forecast":"Rain",
"low":10,
"high":11,
"icon":"10d"
},
{
"day":"Wednesday",
"forecast":"Rain",
"low":7,
"high":12,
"icon":"10d"
},
{
"day":"Thursday",
"forecast":"Rain",
"low":11,
"high":13,
"icon":"10d"
},
{
"day":"Friday",
"forecast":"Rain",
"low":10,
"high":12,
"icon":"10d"
},
{
"day":"Saturday",
"forecast":"Rain",
"low":9,
"high":12,
"icon":"10d"
},
{
"day":"Sunday",
"forecast":"Rain",
"low":9,
"high":14,
"icon":"10d"
}
]
},
{
"id":"2063523",
"observation":{
"location":"Perth",
"forecast":"Rain",
"feelsLike":15,
"current":15,
"low":12,
"high":16,
"icon":"10d"
},
"forecast":[
{
"day":"Monday",
"forecast":"Rain",
"low":12,
"high":16,
"icon":"10d"
},
{
"day":"Tuesday",
"forecast":"Rain",
"low":10,
"high":11,
"icon":"10d"
},
{
"day":"Wednesday",
"forecast":"Clouds",
"low":10,
"high":15,
"icon":"03d"
},
{
"day":"Thursday",
"forecast":"Clear",
"low":7,
"high":16,
"icon":"01d"
},
{
"day":"Friday",
"forecast":"Clear",
"low":7,
"high":18,
"icon":"01d"
},
{
"day":"Saturday",
"forecast":"Rain",
"low":10,
"high":18,
"icon":"10d"
},
{
"day":"Sunday",
"forecast":"Rain",
"low":14,
"high":18,
"icon":"10d"
}
]
},
{
"id":"2073124",
"observation":{
"location":"Darwin",
"forecast":"Clouds",
"feelsLike":30,
"current":30,
"low":29,
"high":30,
"icon":"01d"
},
"forecast":[
{
"day":"Monday",
"forecast":"Clear",
"low":29,
"high":30,
"icon":"01d"
},
{
"day":"Tuesday",
"forecast":"Clear",
"low":28,
"high":29,
"icon":"01d"
},
{
"day":"Wednesday",
"forecast":"Clear",
"low":27,
"high":29,
"icon":"01d"
},
{
"day":"Thursday",
"forecast":"Clear",
"low":27,
"high":32,
"icon":"01d"
},
{
"day":"Friday",
"forecast":"Rain",
"low":27,
"high":33,
"icon":"10d"
},
{
"day":"Saturday",
"forecast":"Rain",
"low":27,
"high":32,
"icon":"10d"
},
{
"day":"Sunday",
"forecast":"Clear",
"low":27,
"high":31,
"icon":"01d"
}
]
}
];
}
}
module.exports = TestDataService;
| {
"pile_set_name": "Github"
} |
let COLORS = [
"#e6194b", // red
"#3cb44b", // green
"#ffe119", // yellow
"#0082c8", // blue
"#f58231", // orange
"#911eb4", // purple
"#46f0f0", // cyan
"#f032e6", // magenta
"#d2f53c", // lime
"#fabebe", // pink
"#008080", // teal
"#e6beff", // lavender
"#aa6e28", // brown
"#fffac8", // beige
"#800000", // maroon
"#aaffc3", // mint
"#808000", // olive
"#ffd8b1", // coral
"#000080", // navy
"#808080", // grey
"#FFFFFF", // white
"#000000" // black
];
let KEYS = {
SPACE : 32,
LEFT_ARROW : 37,
RIGHT_ARROW : 39,
ESCAPE : 27,
D: 68,
H: 72,
N: 78,
S: 83
};
export {COLORS, KEYS};
| {
"pile_set_name": "Github"
} |
{
"name": "ipaddr.js",
"version": "1.4.0",
"homepage": "https://github.com/whitequark/ipaddr.js",
"authors": [
"whitequark <[email protected]>"
],
"description": "IP address manipulation library in JavaScript (CoffeeScript, actually)",
"main": "lib/ipaddr.js",
"moduleType": [
"globals",
"node"
],
"keywords": [
"javscript",
"ip",
"address",
"ipv4",
"ipv6"
],
"license": "MIT",
"ignore": [
"**/.*",
"node_modules",
"bower_components",
"test",
"tests"
]
}
| {
"pile_set_name": "Github"
} |
mars.tensor.nanmin
==================
.. currentmodule:: mars.tensor
.. autofunction:: nanmin
| {
"pile_set_name": "Github"
} |