## Code layout
The curl source code tree is neither large nor complicated. A key thing to
remember is that libcurl is the library, and that library is the biggest
component of the curl command-line tool.
### root
We try to keep the number of files in the source tree root to a minimum. You
will see a slight difference in files if you check a release archive compared
to what is stored in the git repository as several files are generated by the
release scripts.
Some of the more notable ones include:
- `buildconf`: used to build configure and more when
building curl from source out of the git repository.
- `buildconf.bat`: the Windows version of buildconf. Run this after having
checked out the full source code from git.
- `CHANGES`: generated at release and put into the release archive. It
contains the 1000 latest changes to the source repository.
- `configure`: a generated script that is used on Unix-like systems to
generate a setup when building curl.
- `COPYING`: the license detailing the rules for using the code.
- `GIT-INFO`: only present in git and contains information about how to
build curl after having checked out the code from git.
- `maketgz`: the script used to produce release archives and daily snapshots.
- `README`: a short summary of what curl and libcurl are.
- `RELEASE-NOTES`: contains the changes done for the latest release; when
found in git it contains the changes done since the previous release that
are destined to end up in the coming release.
### lib
This directory contains the full source code for libcurl. It is the same
source code for all platforms—over one hundred C source files and a few more
private header files. The header files used when building applications against
libcurl are not stored in this directory; see include/curl for those.
Depending on what features are enabled in your own build and what
functions your platform provides, some of the source files or portions of the
source files may contain code that is not used in your particular build.
### lib/vtls
The vtls subdirectory within libcurl is the home of all the TLS backends
libcurl can be built to support. The "virtual" TLS internal API is a common
API that is used within libcurl to access TLS and crypto functions without the
main code knowing exactly which TLS library is used. This allows the person
who builds libcurl to select from a wide variety of TLS libraries to build
with.
We also maintain a [SSL comparison
table](https://curl.haxx.se/docs/ssl-compared.html) on the web site to aid
users.
- OpenSSL: the (by far) most popular TLS library.
- BoringSSL: an OpenSSL fork maintained by Google. Building against it makes
libcurl disable a few features, since BoringSSL lacks some functionality.
- LibreSSL: an OpenSSL fork maintained by the OpenBSD team.
- NSS: a full-blown TLS library perhaps most known for being used by the
Firefox web browser. This was the default TLS backend for curl on Fedora and
Red Hat systems for a while in the past.
- GnuTLS: a full-blown TLS library used by default by the Debian packaged curl.
- mbedTLS: (formerly known as PolarSSL) is a TLS library more targeted
towards the embedded market.
- WolfSSL: (formerly known as cyaSSL) is a TLS library more targeted
towards the embedded market.
- MesaLink: a TLS library written in Rust.
- Schannel: the native TLS library on Windows.
- SecureTransport: the native TLS library on Mac OS X.
- GSKit: the native TLS library on OS/400.
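The virtual API is, in essence, a table of operations that every backend fills in, so the rest of the code only ever calls through the common interface and never a specific TLS library directly. A minimal Python sketch of that dispatch idea (the names here are illustrative, not libcurl's actual internal struct):

```python
# Sketch of a "virtual backend" dispatch table, in the spirit of lib/vtls.
# Names are illustrative -- libcurl's real backend table differs.

class Backend:
    """One TLS backend: a named bundle of callables with a common shape."""
    def __init__(self, name, connect, send, recv):
        self.name = name
        self.connect = connect
        self.send = send
        self.recv = recv

# Two stand-in backends providing the same operations.
openssl = Backend("openssl",
                  connect=lambda host: f"openssl-connect:{host}",
                  send=lambda data: len(data),
                  recv=lambda n: b"x" * n)
gnutls = Backend("gnutls",
                 connect=lambda host: f"gnutls-connect:{host}",
                 send=lambda data: len(data),
                 recv=lambda n: b"y" * n)

# The "main code" only talks to the selected backend through the
# common interface; which library sits behind it is invisible here.
def tls_connect(backend, host):
    return backend.connect(host)

selected = openssl  # in real libcurl, chosen at build time
print(tls_connect(selected, "example.com"))  # → openssl-connect:example.com
```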
### src
This directory holds the source code for the curl command-line tool. It is the
same source code for all platforms that run the tool.
Most of what the command-line tool does is convert the given command-line
options into the corresponding libcurl option or set of options, and then
issue them correctly to drive the network transfer according to the user's
wishes.
This code uses libcurl just as any other application would.
### include/curl
Here are the public header files that are provided for libcurl-using
applications. Some of them are generated at configure or release time, so they
do not look the same in the git repository as they do in a release archive.
With modern libcurl, all an application is expected to include in its C source code
is `#include <curl/curl.h>`
### docs
The main documentation location. Files in this directory are typically plain
text. We have slowly started to move towards Markdown format, so a few (but a
growing number of) files use the .md extension to signify that.
Most of these documents are also shown on the curl web site automatically
converted from text to a web friendly format/look.
- `BINDINGS`: lists all known libcurl language bindings and where to find them
- `BUGS`: how to report bugs and where
- `CODE_OF_CONDUCT.md`: how we expect people to behave in this project
- `CONTRIBUTE`: what to think about when contributing to the project
- `curl.1`: the curl command-line tool man page, in nroff format
- `curl-config.1`: the curl-config man page, in nroff format
- `FAQ`: frequently asked questions about various curl-related subjects
- `FEATURES`: an incomplete list of curl features
- `HISTORY`: describes how the project started and has evolved over the years
- `HTTP2.md`: how to use HTTP/2 with curl and libcurl
- `HTTP-COOKIES`: how curl supports and works with HTTP cookies
- `index.html`: a basic HTML page as a documentation index page
- `INSTALL`: how to build and install curl and libcurl from source
- `INSTALL.cmake`: how to build curl and libcurl with CMake
- `INSTALL.devcpp`: how to build curl and libcurl with devcpp
- `INTERNALS`: details curl and libcurl internal structures
- `KNOWN_BUGS`: list of known bugs and problems
- `LICENSE-MIXING`: describes how to combine different third party modules and
their individual licenses
- `MAIL-ETIQUETTE`: this is how to communicate on our mailing lists
- `MANUAL`: a tutorial-like guide on how to use curl
- `mk-ca-bundle.1`: the mk-ca-bundle tool man page, in nroff format
- `README.cmake`: CMake details
- `README.netware`: Netware details
- `README.win32`: win32 details
- `RELEASE-PROCEDURE`: how to do a curl and libcurl release
- `RESOURCES`: further resources for further reading on what, why and how curl
does things
- `ROADMAP.md`: what we want to work on in the future
- `SECURITY`: how we work on security vulnerabilities
- `SSLCERTS`: TLS certificate handling documented
- `SSL-PROBLEMS`: common SSL problems and their causes
- `THANKS`: thanks to this extensive list of friendly people, curl exists today!
- `TheArtOfHttpScripting`: a tutorial into HTTP scripting with curl
- `TODO`: things we or you can work on implementing
- `VERSIONS`: how the version numbering of libcurl works
### docs/libcurl
All libcurl functions have their own man pages in individual files with .3
extensions, using nroff format, in this directory. There are also a few other
files that are described below.
- `ABI`
- `index.html`
- `libcurl.3`
- `libcurl-easy.3`
- `libcurl-errors.3`
- `libcurl.m4`
- `libcurl-multi.3`
- `libcurl-share.3`
- `libcurl-thread.3`
- `libcurl-tutorial.3`
- `symbols-in-versions`
### docs/libcurl/opts
This directory contains the man pages for the individual options for three
different libcurl functions.
`curl_easy_setopt()` options start with `CURLOPT_`,
`curl_multi_setopt()` options start with `CURLMOPT_` and
`curl_easy_getinfo()` options start with `CURLINFO_`.
### docs/examples
Contains around 100 stand-alone examples that are meant to help readers
understand how libcurl can be used.
See also the [libcurl examples](libcurl-examples.md) section of this book.
### scripts
Handy scripts.
- `contributors.sh`: extracts all contributors from the git repository since a
given hash/tag. The purpose is to generate a list for the RELEASE-NOTES file
and to allow manually added names to remain in there even on updates. The
script uses the `THANKS-filter` file to rewrite some names.
- `contrithanks.sh`: extracts contributors from the git repository since a
given hash/tag, filters out all the names that are already mentioned in
`THANKS`, and then outputs `THANKS` to stdout with the list of new
contributors appended at the end; it is meant to allow easier updates of the
THANKS document. The script uses the `THANKS-filter` file to rewrite some
names.
- `log2changes.pl`: generates the `CHANGES` file for releases, as used by the
release script. It simply converts git log output.
- `zsh.pl`: helper script to provide curl command-line completions to users of
the zsh shell.
# ------------------
# Only for running this script here
import logging
import sys
from os.path import dirname
sys.path.insert(1, f"{dirname(__file__)}/../../..")
logging.basicConfig(level=logging.DEBUG)
# ------------------
# ---------------------
# Flask App for Slack OAuth flow
# ---------------------
import os
import json
from slack import WebClient
from slack.signature import SignatureVerifier
logger = logging.getLogger(__name__)
signature_verifier = SignatureVerifier(os.environ["SLACK_SIGNING_SECRET"])
client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
# ---------------------
# Flask App for Slack events
# ---------------------
from concurrent.futures.thread import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=5)
# pip3 install flask
from flask import Flask, request, make_response
app = Flask(__name__)
app.debug = True
@app.route("/slack/events", methods=["POST"])
def slack_app():
request_body = request.get_data()
if not signature_verifier.is_valid_request(request_body, request.headers):
return make_response("invalid request", 403)
if request.headers["content-type"] == "application/json":
# Events API
body = json.loads(request_body)
if body["event"]["type"] == "workflow_step_execute":
step = body["event"]["workflow_step"]
def handle_step():
try:
client.workflows_stepCompleted(
workflow_step_execute_id=step["workflow_step_execute_id"],
outputs={
"taskName": step["inputs"]["taskName"]["value"],
"taskDescription": step["inputs"]["taskDescription"]["value"],
"taskAuthorEmail": step["inputs"]["taskAuthorEmail"]["value"],
},
)
except Exception as err:
client.workflows_stepFailed(
workflow_step_execute_id=step["workflow_step_execute_id"],
error={"message": f"Something went wrong! ({err})", }
)
executor.submit(handle_step)
return make_response("", 200)
elif "payload" in request.form:
# Action / View Submission
body = json.loads(request.form["payload"])
if body["type"] == "workflow_step_edit":
new_modal = client.views_open(
trigger_id=body["trigger_id"],
view={
"type": "workflow_step",
"callback_id": "copy_review_view",
"blocks": [
{
"type": "section",
"block_id": "intro-section",
"text": {
"type": "plain_text",
"text": "Create a task in one of the listed projects. The link to the task and other details will be available as variable data in later steps.",
},
},
{
"type": "input",
"block_id": "task_name_input",
"element": {
"type": "plain_text_input",
"action_id": "task_name",
"placeholder": {
"type": "plain_text",
"text": "Write a task name",
},
},
"label": {"type": "plain_text", "text": "Task name"},
},
{
"type": "input",
"block_id": "task_description_input",
"element": {
"type": "plain_text_input",
"action_id": "task_description",
"placeholder": {
"type": "plain_text",
"text": "Write a description for your task",
},
},
"label": {"type": "plain_text", "text": "Task description"},
},
{
"type": "input",
"block_id": "task_author_input",
"element": {
"type": "plain_text_input",
"action_id": "task_author",
"placeholder": {
"type": "plain_text",
"text": "Write a task name",
},
},
"label": {"type": "plain_text", "text": "Task author"},
},
],
},
)
return make_response("", 200)
if body["type"] == "view_submission" \
and body["view"]["callback_id"] == "copy_review_view":
state_values = body["view"]["state"]["values"]
client.workflows_updateStep(
workflow_step_edit_id=body["workflow_step"]["workflow_step_edit_id"],
inputs={
"taskName": {
"value": state_values["task_name_input"]["task_name"]["value"],
},
"taskDescription": {
"value": state_values["task_description_input"]["task_description"][
"value"
],
},
"taskAuthorEmail": {
"value": state_values["task_author_input"]["task_author"]["value"],
},
},
outputs=[
{"name": "taskName", "type": "text", "label": "Task Name", },
{
"name": "taskDescription",
"type": "text",
"label": "Task Description",
},
{
"name": "taskAuthorEmail",
"type": "text",
"label": "Task Author Email",
},
],
)
return make_response("", 200)
return make_response("", 404)
if __name__ == "__main__":
# export SLACK_BOT_TOKEN=***
# export SLACK_SIGNING_SECRET=***
# export FLASK_ENV=development
app.run("localhost", 3000)
# python3 integration_tests/samples/workflows/steps_from_apps.py
# ngrok http 3000
# POST https://{yours}.ngrok.io/slack/events
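# The `SignatureVerifier` used above implements Slack's v0 request-signing
# scheme: HMAC-SHA256 over "v0:{timestamp}:{body}" with the signing secret,
# compared against the X-Slack-Signature header. A stdlib-only sketch of the
# same check (for illustration; prefer the SDK's verifier in real code):

```python
import hashlib
import hmac
import time

def is_valid_slack_request(signing_secret, body, timestamp, signature,
                           max_age_seconds=60 * 5):
    """Recompute Slack's v0 signature and compare in constant time."""
    if abs(time.time() - int(timestamp)) > max_age_seconds:
        return False  # stale timestamp: could be a replayed request
    basestring = f"v0:{timestamp}:{body}"
    expected = "v0=" + hmac.new(signing_secret.encode(),
                                basestring.encode(),
                                hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

# In the app above, this is what signature_verifier.is_valid_request() does
# with the X-Slack-Request-Timestamp and X-Slack-Signature headers (the raw
# body arrives as bytes in Flask; this sketch takes it as a string).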
// Copyright 2019 Google
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <stdint.h>
#include "FIRCLSFeatures.h"
#if CLS_DWARF_UNWINDING_SUPPORTED
#if CLS_CPU_64BIT
#define CLS_INVALID_ADDRESS (0xffffffffffffffff)
#else
#define CLS_INVALID_ADDRESS (0xffffffff)
#endif
// basic data types
uint8_t FIRCLSParseUint8AndAdvance(const void** cursor);
uint16_t FIRCLSParseUint16AndAdvance(const void** cursor);
int16_t FIRCLSParseInt16AndAdvance(const void** cursor);
uint32_t FIRCLSParseUint32AndAdvance(const void** cursor);
int32_t FIRCLSParseInt32AndAdvance(const void** cursor);
uint64_t FIRCLSParseUint64AndAdvance(const void** cursor);
int64_t FIRCLSParseInt64AndAdvance(const void** cursor);
uintptr_t FIRCLSParsePointerAndAdvance(const void** cursor);
uint64_t FIRCLSParseULEB128AndAdvance(const void** cursor);
int64_t FIRCLSParseLEB128AndAdvance(const void** cursor);
const char* FIRCLSParseStringAndAdvance(const void** cursor);
// FDE/CIE-specifc structures
uint64_t FIRCLSParseRecordLengthAndAdvance(const void** cursor);
uintptr_t FIRCLSParseAddressWithEncodingAndAdvance(const void** cursor, uint8_t encoding);
#endif
/*************************************************
* Perl-Compatible Regular Expressions *
*************************************************/
/* PCRE is a library of functions to support regular expressions whose syntax
and semantics are as close as possible to those of the Perl 5 language.
Written by Philip Hazel
Copyright (c) 1997-2012 University of Cambridge
-----------------------------------------------------------------------------
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the University of Cambridge nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------------------------
*/
/* Generate code with 16 bit character support. */
#define COMPILE_PCRE16
#include "pcre_refcount.c"
/* End of pcre16_refcount.c */
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection.Extensions;
namespace MvcSample
{
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc()
.SetCompatibilityVersion(CompatibilityVersion.Version_2_2)
.AddXmlSerializerFormatters();
services.TryAddEnumerable(ServiceDescriptor.Singleton<MatcherPolicy, DomainMatcherPolicy.DomainMatcherPolicy>());
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseDeveloperExceptionPage();
app.UseMvc();
}
}
}
Version 6.2.6
=============
Improvements and bug fixes:
---------------------------
- Fix a bug using the wrong row for showing error popup
💜 this plugin? Consider supporting its development! 💰💸💶
------------------------------------------------------------
‼️NEW‼️ Head to GitHub Sponsors and support my work if you enjoy the plugin!
https://github.com/sponsors/niosus
Alternatively, become a backer on Open Collective!
https://opencollective.com/EasyClangComplete#backers
If you use this plugin in a company, push for it to become a sponsor!
https://opencollective.com/EasyClangComplete#sponsor
This plugin took a significant amount of effort. It is available and will always
be available for free: free as in freedom and free as in beer.
If you appreciate it - support it. How much would you say it is worth to you?
For more info head here:
https://github.com/niosus/EasyClangComplete#support-it
# Settings specified here will take precedence over those in config/environment.rb
# In the development environment your application's code is reloaded on
# every request. This slows down response time but is perfect for development
# since you don't have to restart the webserver when you make code changes.
config.cache_classes = false
# Log error messages when you accidentally call methods on nil.
config.whiny_nils = true
# Show full error reports and disable caching
config.action_controller.consider_all_requests_local = true
config.action_view.debug_rjs = true
config.action_controller.perform_caching = false
# Don't care if the mailer can't send
config.action_mailer.raise_delivery_errors = false
// Copyright 2015-2020 Swim inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package swim.warp;
import swim.codec.Decoder;
import swim.codec.DecoderException;
import swim.codec.InputBuffer;
import swim.codec.Utf8;
import swim.recon.Recon;
import swim.structure.Value;
final class EnvelopeDecoder extends Decoder<Envelope> {
final Decoder<Value> output;
EnvelopeDecoder(Decoder<Value> output) {
this.output = output;
}
EnvelopeDecoder() {
this(null);
}
static Decoder<Envelope> decode(InputBuffer input, Decoder<Value> output) {
if (output == null) {
output = Utf8.parseDecoded(Recon.structureParser().blockParser(), input);
} else {
output = output.feed(input);
}
if (output.isDone()) {
try {
final Value value = output.bind();
final Envelope envelope = Envelope.fromValue(value);
if (envelope != null) {
return done(envelope);
} else {
return error(new DecoderException(Recon.toString(value)));
}
} catch (RuntimeException cause) {
return error(cause);
}
} else if (output.isError()) {
return error(output.trap());
} else if (input.isError()) {
return error(input.trap());
}
return new EnvelopeDecoder(output);
}
@Override
public Decoder<Envelope> feed(InputBuffer input) {
return decode(input, this.output);
}
}
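// EnvelopeDecoder follows an incremental-decoder pattern: each feed() either
// finishes with a value, fails, or returns a new decoder carrying the partial
// state. A rough Python analogue of that pattern (not the swim API):

```python
# Sketch of the incremental-decoder pattern used by EnvelopeDecoder:
# feed() consumes whatever input is available and returns either a
# finished decoder or a continuation holding the partial state.

class LineDecoder:
    """Accumulates chunks until a newline arrives, then is done."""

    def __init__(self, buffered=""):
        self.buffered = buffered
        self.done = False
        self.value = None

    def feed(self, chunk):
        data = self.buffered + chunk
        if "\n" not in data:
            # not enough input yet: return a continuation with the state so far
            return LineDecoder(data)
        line, _, rest = data.partition("\n")
        finished = LineDecoder(rest)
        finished.done = True
        finished.value = line
        return finished

d = LineDecoder()
d = d.feed("hel")
d = d.feed("lo\nextra")
print(d.value)  # → hello
```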
/*
* MIT License
*
* Copyright (c) 2019 everythingbest
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
package com.rpcpostman.service.creation.entity;
import com.rpcpostman.service.GAV;
import com.fasterxml.jackson.annotation.JsonIgnore;
import lombok.Data;
import java.util.ArrayList;
import java.util.List;
/**
* @author everythingbest
*/
@Data
public class DubboPostmanService implements PostmanService {
String cluster;
String serviceName;
GAV gav = new GAV();
long generateTime;
/**
* 标识是否加载到classLoader
* 这个值不能持久化
*/
@JsonIgnore
Boolean loadedToClassLoader = false;
/**
* 一个dubbo应用包含多个接口定义
*/
List<InterfaceEntity> interfaceModels = new ArrayList<>();
@Override
public String getCluster() {
return cluster;
}
@Override
public String getServiceName() {
return serviceName;
}
@Override
public GAV getGav() {
return gav;
}
@Override
public List<InterfaceEntity> getInterfaceModelList() {
return interfaceModels;
}
@Override
public void setLoadedToClassLoader(boolean load) {
this.loadedToClassLoader = load;
}
public boolean getLoadedToClassLoader(){
return loadedToClassLoader;
}
}
/************************************************************************
* Copyright(c) 2013, One Unified. All rights reserved. *
* *
* This file is provided as is WITHOUT ANY WARRANTY *
* without even the implied warranty of *
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. *
* 8 *
* This software may not be used nor distributed without proper license *
* agreement. *
* *
* See the file LICENSE.txt for redistribution information. *
************************************************************************/
#include "stdafx.h"
#include "VuTreePortfolioPositionOrder.h"
namespace ou { // One Unified
namespace tf { // TradeFrame
VuTreePortfolioPositionOrder::VuTreePortfolioPositionOrder( ModelPortfolioPositionOrderExecution* pMPPOE )
: VuBase(), m_pdvmdlPPOE( pMPPOE ) {
Construct();
}
VuTreePortfolioPositionOrder::VuTreePortfolioPositionOrder(
ModelPortfolioPositionOrderExecution* pMPPOE,
wxWindow *parent, wxWindowID id,
const wxPoint& pos,
const wxSize& size,
long style,
const wxValidator& validator )
: VuBase( parent, id, pos, size, style, validator ), m_pdvmdlPPOE( pMPPOE )
{
Construct();
}
VuTreePortfolioPositionOrder::~VuTreePortfolioPositionOrder(void) {
}
void VuTreePortfolioPositionOrder::Construct( void ) {
AssociateModel( m_pdvmdlPPOE.get() );
// structPopulateColumns f( this );
// m_pdvmdlPPOE.get()->IterateColumnNames( f );
wxDataViewColumn* col = new wxDataViewColumn( "Portfolio Manager", new wxDataViewTextRenderer(), 0 );
col->SetAlignment( wxAlignment::wxALIGN_LEFT );
this->AppendColumn( col );
// need to work on auto-width setting to match width of space available for tree
}
} // namespace tf
} // namespace ou
#
# This program is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License, version 2.1 as published by the Free Software
# Foundation.
#
# You should have received a copy of the GNU Lesser General Public License along with this
# program; if not, you can obtain a copy at http://www.gnu.org/licenses/old-licenses/lgpl-2.1.html
# or from the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
# without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Lesser General Public License for more details.
#
# Copyright (c) 2005-2017 Hitachi Vantara. All rights reserved.
#
display-name=Available Printer Names
group=System Sequences
parameter-count=0
description=Lists the names of all known printers.
import io
import re
import zipfile
from lxml import etree
from messytables.core import RowSet, TableSet, Cell
from messytables.types import (StringType, DecimalType,
DateType, BoolType, CurrencyType,
TimeType, PercentageType)
ODS_NAMESPACES_TAG_MATCH = re.compile(
b"(<office:document-content[^>]*>)", re.MULTILINE)
ODS_TABLE_MATCH = re.compile(
b".*?(<table:table.*?<\/.*?:table>).*?", re.MULTILINE)
ODS_TABLE_NAME = re.compile(b'.*?table:name=\"(.*?)\".*?')
ODS_ROW_MATCH = re.compile(
b".*?(<table:table-row.*?<\/.*?:table-row>).*?", re.MULTILINE)
NS_OPENDOCUMENT_PTTN = u"urn:oasis:names:tc:opendocument:xmlns:%s"
NS_CAL_PTTN = u"urn:org:documentfoundation:names:experimental:calc:xmlns:%s"
NS_OPENDOCUMENT_TABLE = NS_OPENDOCUMENT_PTTN % "table:1.0"
NS_OPENDOCUMENT_OFFICE = NS_OPENDOCUMENT_PTTN % "office:1.0"
TABLE_CELL = 'table-cell'
VALUE_TYPE = 'value-type'
COLUMN_REPEAT = 'number-columns-repeated'
EMPTY_CELL_VALUE = ''
ODS_VALUE_TOKEN = {
"float": "value",
"date": "date-value",
"time": "time-value",
"boolean": "boolean-value",
"percentage": "value",
"currency": "value"
}
ODS_TYPES = {
'float': DecimalType(),
'date': DateType('%Y-%m-%d'),
'boolean': BoolType(),
'percentage': PercentageType(),
'time': TimeType()
}
class ODSTableSet(TableSet):
"""
A wrapper around ODS files. Because they are zipped and the info we want
is in the zipped file as content.xml we must ensure that we either have
a seekable object (local file) or that we retrieve all of the content from
the remote URL.
"""
def __init__(self, fileobj, window=None, **kw):
'''Initialize the object.
:param fileobj: may be a file path or a file-like object. Note the
file-like object *must* be in binary mode and must be seekable (it will
get passed to zipfile).
As a specific tip: urllib2.urlopen returns a file-like object that is
not in file-like mode while urllib.urlopen *does*!
To get a seekable file you *cannot* use
messytables.core.seekable_stream as it does not support the full seek
functionality.
'''
if hasattr(fileobj, 'read'):
# wrap in a StringIO so we do not have hassle with seeks and
# binary etc (see notes to __init__ above)
# TODO: rather wasteful if in fact fileobj comes from disk
fileobj = io.BytesIO(fileobj.read())
self.window = window
zf = zipfile.ZipFile(fileobj).open("content.xml")
self.content = zf.read()
zf.close()
def make_tables(self):
"""
Return the sheets in the workbook.
A regex is used for this to avoid having to:
1. load large the entire file into memory, or
2. SAX parse the file more than once
"""
namespace_tags = self._get_namespace_tags()
sheets = [m.groups(0)[0]
for m in ODS_TABLE_MATCH.finditer(self.content)]
return [ODSRowSet(sheet, self.window, namespace_tags)
for sheet in sheets]
def _get_namespace_tags(self):
match = re.search(ODS_NAMESPACES_TAG_MATCH, self.content)
assert match
tag_open = match.groups()[0]
tag_close = b'</office:document-content>'
return tag_open, tag_close
class ODSRowSet(RowSet):
""" ODS support for a single sheet in the ODS workbook. Unlike
the CSV row set this is not a streaming operation. """
def __init__(self, sheet, window=None, namespace_tags=None):
self.sheet = sheet
self.name = "Unknown"
m = ODS_TABLE_NAME.match(self.sheet)
if m:
self.name = m.groups(0)[0]
self.window = window or 1000
# We must wrap the XML fragments in a valid header otherwise iterparse
# will explode with certain (undefined) versions of libxml2. The
# namespaces are in the ODS file, and change with the libreoffice
# version saving it, so get them from the ODS file if possible. The
# default namespaces are an option to preserve backwards compatibility
# of ODSRowSet.
if namespace_tags:
self.namespace_tags = namespace_tags
else:
namespaces = {
"dc": u"http://purl.org/dc/elements/1.1/",
"draw": NS_OPENDOCUMENT_PTTN % u"drawing:1.0",
"number": NS_OPENDOCUMENT_PTTN % u"datastyle:1.0",
"office": NS_OPENDOCUMENT_PTTN % u"office:1.0",
"svg": NS_OPENDOCUMENT_PTTN % u"svg-compatible:1.0",
"table": NS_OPENDOCUMENT_PTTN % u"table:1.0",
"text": NS_OPENDOCUMENT_PTTN % u"text:1.0",
"calcext": NS_CAL_PTTN % u"calcext:1.0",
}
ods_header = u"<wrapper {0}>"\
.format(" ".join('xmlns:{0}="{1}"'.format(k, v)
for k, v in namespaces.items())).encode('utf-8')
ods_footer = u"</wrapper>".encode('utf-8')
self.namespace_tags = (ods_header, ods_footer)
super(ODSRowSet, self).__init__(typed=True)
def raw(self, sample=False):
""" Iterate over all rows in this sheet. """
rows = ODS_ROW_MATCH.findall(self.sheet)
for row in rows:
row_data = []
block = self.namespace_tags[0] + row + self.namespace_tags[1]
partial = io.BytesIO(block)
empty_row = True
for action, element in etree.iterparse(partial, ('end',)):
if element.tag != _tag(NS_OPENDOCUMENT_TABLE, TABLE_CELL):
continue
cell = _read_cell(element)
if empty_row is True and cell.value != EMPTY_CELL_VALUE:
empty_row = False
repeat = element.attrib.get(
_tag(NS_OPENDOCUMENT_TABLE, COLUMN_REPEAT))
if repeat:
number_of_repeat = int(repeat)
row_data += [cell] * number_of_repeat
else:
row_data.append(cell)
if empty_row:
# ignore blank lines
continue
del partial
yield row_data
del rows
def _read_cell(element):
cell_type = element.attrib.get(_tag(NS_OPENDOCUMENT_OFFICE, VALUE_TYPE))
value_token = ODS_VALUE_TOKEN.get(cell_type, 'value')
if cell_type == 'string':
cell = _read_text_cell(element)
elif cell_type == 'currency':
value = element.attrib.get(_tag(NS_OPENDOCUMENT_OFFICE, value_token))
currency = element.attrib.get(_tag(NS_OPENDOCUMENT_OFFICE, 'currency'))
cell = Cell(value + ' ' + currency, type=CurrencyType())
elif cell_type is not None:
value = element.attrib.get(_tag(NS_OPENDOCUMENT_OFFICE, value_token))
cell = Cell(value, type=ODS_TYPES.get(cell_type, StringType()))
else:
cell = Cell(EMPTY_CELL_VALUE, type=StringType())
return cell
def _read_text_cell(element):
children = element.getchildren()
text_content = []
for child in children:
if child.text:
text_content.append(child.text)
else:
text_content.append(EMPTY_CELL_VALUE)
if len(text_content) > 0:
cell_value = '\n'.join(text_content)
else:
cell_value = EMPTY_CELL_VALUE
return Cell(cell_value, type=StringType())
def _tag(namespace, tag):
return '{%s}%s' % (namespace, tag)
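
The `_tag` helper above builds Clark-notation names, the single-string form ElementTree/lxml use for namespaced tags. A standalone sketch of the same formatting (`clark_tag` is a hypothetical name, not part of this module):

```python
# Clark notation joins an XML namespace URI and a local name into the
# "{uri}local" string form that ElementTree/lxml use for element tags.
def clark_tag(namespace, tag):
    return '{%s}%s' % (namespace, tag)

TABLE_NS = 'urn:oasis:names:tc:opendocument:xmlns:table:1.0'
print(clark_tag(TABLE_NS, 'table-cell'))
# {urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell
```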
| {
"pile_set_name": "Github"
} |
import 'package:fish_redux/fish_redux.dart';
import 'action.dart';
import 'state.dart';
Reducer<TalkState> buildReducer() {
return asReducer(
<Object, Reducer<TalkState>>{
TalkAction.getTalk: _onGetTalk,
TalkAction.getHotTalk: _onGetHotTalk,
TalkAction.talkPage: _onTalkPage
},
);
}
TalkState _onGetTalk(TalkState state, Action action) {
final TalkState newState = state.clone();
  newState.comments = (newState.comments ?? [])..addAll(action.payload);
newState.showLoading = false;
return newState;
}
TalkState _onGetHotTalk(TalkState state, Action action) {
final TalkState newState = state.clone();
  newState.hotComments = (newState.hotComments ?? [])..addAll(action.payload);
return newState;
}
TalkState _onTalkPage(TalkState state, Action action) {
final TalkState newState = state.clone();
newState.page = action.payload;
return newState;
}
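
These reducers all follow a clone-and-append pattern: copy the state, treat a missing list as empty, append the payload. A minimal Python analogue of that pattern (dict-based state and names are hypothetical, purely for illustration):

```python
# Clone-and-append reducer sketch: never mutate the incoming state,
# and treat an absent comment list as empty before appending.
def on_get_talk(state, payload):
    new_state = dict(state)
    new_state['comments'] = list(state.get('comments') or []) + list(payload)
    new_state['show_loading'] = False
    return new_state

old = {'page': 1}
new = on_get_talk(old, ['hi'])
print(new['comments'])  # ['hi']
```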
NaN
<component name="libraryTable">
<library name="Maven: com.google.inject.extensions:guice-multibindings:3.0">
<CLASSES>
<root url="jar://$MAVEN_REPOSITORY$/com/google/inject/extensions/guice-multibindings/3.0/guice-multibindings-3.0.jar!/" />
</CLASSES>
<JAVADOC>
<root url="jar://$MAVEN_REPOSITORY$/com/google/inject/extensions/guice-multibindings/3.0/guice-multibindings-3.0-javadoc.jar!/" />
</JAVADOC>
<SOURCES>
<root url="jar://$MAVEN_REPOSITORY$/com/google/inject/extensions/guice-multibindings/3.0/guice-multibindings-3.0-sources.jar!/" />
</SOURCES>
</library>
</component>
fileFormatVersion: 2
guid: 9d351b1c5439a9842a763d4431560cef
timeCreated: 1464165314
licenseType: Store
DefaultImporter:
userData:
assetBundleName:
assetBundleVariant:
import Coffee from './Coffee';
const coffee = new Coffee();
console.log(coffee.taste()); // Delicious
package structs
import "strings"
// tagOptions contains a slice of tag options
type tagOptions []string
// Has returns true if the given option is available in tagOptions
func (t tagOptions) Has(opt string) bool {
for _, tagOpt := range t {
if tagOpt == opt {
return true
}
}
return false
}
// parseTag splits a struct field's tag into its name and a list of options,
// which come after the name. A tag is in the form of: "name,option1,option2".
// The name can be neglected.
func parseTag(tag string) (string, tagOptions) {
	// tag is one of the following:
// ""
// "name"
// "name,opt"
// "name,opt,opt2"
// ",opt"
res := strings.Split(tag, ",")
return res[0], res[1:]
}
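
The splitting logic of `parseTag` is small enough to sketch outside Go; a Python equivalent (hypothetical name, same semantics for the empty tag and the ",opt" case):

```python
# Split "name,opt1,opt2" into the name and the list of options.
# An empty tag yields ('', []); ",opt" yields ('', ['opt']).
def parse_tag(tag):
    parts = tag.split(',')
    return parts[0], parts[1:]

print(parse_tag('name,opt,opt2'))  # ('name', ['opt', 'opt2'])
```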
include: package:pedantic/analysis_options.1.8.0.yaml
analyzer:
errors:
# Increase the severity of several hints.
prefer_single_quotes: warning
unused_import: warning
linter:
rules:
- directives_ordering
- prefer_relative_imports
- prefer_single_quotes
This folder contains files included by the _Supervisor_ configuration file "/etc/supervisor/supervisord.conf" when
running in service mode, where the Docker container starts long-running web services such as Nginx, e.g.,
```bash
docker run --rm --name=app -p 80:80 phpswoole/swoole
```
In this case, the list of programs defined under this folder (along with those already under folder _../conf.d/_) will
be started by _Supervisord_.
Program definitions under this folder must use the file extension "_.conf_".
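
A minimal program definition of the kind this folder holds might look like the following (the program name, command, and log settings are hypothetical, shown only to illustrate the format):

```ini
; hypothetical example: ./my-worker.conf
[program:my-worker]
command=php /var/www/worker.php
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
```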
/*
* BioJava development code
*
* This code may be freely distributed and modified under the
* terms of the GNU Lesser General Public Licence. This should
* be distributed with the code. If you do not have a copy,
* see:
*
* http://www.gnu.org/copyleft/lesser.html
*
* Copyright for this code is held jointly by the individual
* authors. These should be listed in @author doc comments.
*
* For more information on the BioJava project and its aims,
* or to join the biojava-l mailing list, visit the home page
* at:
*
* http://www.biojava.org/
*
*/
package org.biojava.nbio.structure.xtal;
import org.biojava.nbio.structure.*;
import org.biojava.nbio.structure.align.util.AtomCache;
import org.junit.Test;
import java.io.IOException;
import static org.junit.Assert.*;
/**
* Testing of crystallographic info parsing in both pdb and mmCIF files
*
* @author duarte_j
*
*/
public class TestCrystalInfo {
private static final float DELTA = 0.000001f;
@Test
public void test1NMR() throws IOException, StructureException {
AtomCache cache = new AtomCache();
StructureIO.setAtomCache(cache);
cache.setUseMmCif(false);
Structure s1 = StructureIO.getStructure("1NMR");
assertFalse(s1.isCrystallographic());
assertTrue(s1.isNmr());
assertEquals(s1.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.SOLUTION_NMR);
cache.setUseMmCif(true);
Structure s2 = StructureIO.getStructure("1NMR");
assertFalse(s2.isCrystallographic());
assertTrue(s2.isNmr());
assertEquals(s2.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.SOLUTION_NMR);
testCrystallographicInfo(s1, s2);
}
@Test
public void test1B8G() throws IOException, StructureException {
AtomCache cache = new AtomCache();
StructureIO.setAtomCache(cache);
cache.setUseMmCif(false);
Structure s1 = StructureIO.getStructure("1B8G");
assertTrue(s1.isCrystallographic());
assertFalse(s1.isNmr());
assertEquals(s1.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.XRAY_DIFFRACTION);
cache.setUseMmCif(true);
Structure s2 = StructureIO.getStructure("1B8G");
assertTrue(s2.isCrystallographic());
assertFalse(s2.isNmr());
assertEquals(s2.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.XRAY_DIFFRACTION);
testCrystallographicInfo(s1, s2);
}
@Test
public void test4M7P() throws IOException, StructureException {
// multimodel x-ray structure
AtomCache cache = new AtomCache();
StructureIO.setAtomCache(cache);
cache.setUseMmCif(false);
Structure s1 = StructureIO.getStructure("4M7P");
assertTrue(s1.isCrystallographic());
assertFalse(s1.isNmr());
assertTrue(s1.nrModels()>1);
assertEquals(s1.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.XRAY_DIFFRACTION);
cache.setUseMmCif(true);
Structure s2 = StructureIO.getStructure("4M7P");
assertTrue(s2.isCrystallographic());
assertFalse(s2.isNmr());
assertTrue(s2.nrModels()>1);
assertEquals(s2.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.XRAY_DIFFRACTION);
testCrystallographicInfo(s1, s2);
}
@Test
public void test2MBQ() throws IOException, StructureException {
// single model NMR structure
AtomCache cache = new AtomCache();
StructureIO.setAtomCache(cache);
cache.setUseMmCif(false);
Structure s1 = StructureIO.getStructure("2MBQ");
assertFalse(s1.isCrystallographic());
assertTrue(s1.isNmr());
assertFalse(s1.nrModels()>1);
assertEquals(s1.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.SOLUTION_NMR);
cache.setUseMmCif(true);
Structure s2 = StructureIO.getStructure("2MBQ");
assertFalse(s2.isCrystallographic());
assertTrue(s2.isNmr());
assertFalse(s2.nrModels()>1);
assertEquals(s2.getPDBHeader().getExperimentalTechniques().iterator().next(),ExperimentalTechnique.SOLUTION_NMR);
testCrystallographicInfo(s1, s2);
}
private void testCrystallographicInfo(Structure s1, Structure s2) {
PDBCrystallographicInfo xtalInfo = s1.getPDBHeader().getCrystallographicInfo();
PDBCrystallographicInfo xtalInfo2 = s2.getPDBHeader().getCrystallographicInfo();
if (xtalInfo==null && xtalInfo2==null) return; // both null: NMR or something similar, nothing to test
CrystalCell cell1 = xtalInfo.getCrystalCell();
CrystalCell cell2 = xtalInfo2.getCrystalCell();
if (s1.isNmr()) assertTrue(cell1==null);
if (s2.isNmr()) assertTrue(cell2==null);
if (cell1==null && cell2==null) return;
assertEquals(xtalInfo.getA(), xtalInfo2.getA(),DELTA);
assertEquals(xtalInfo.getB(), xtalInfo2.getB(),DELTA);
assertEquals(xtalInfo.getC(), xtalInfo2.getC(),DELTA);
assertEquals(xtalInfo.getAlpha(), xtalInfo2.getAlpha(),DELTA);
assertEquals(xtalInfo.getBeta(), xtalInfo2.getBeta(),DELTA);
assertEquals(xtalInfo.getGamma(), xtalInfo2.getGamma(),DELTA);
assertEquals(xtalInfo.getSpaceGroup(),xtalInfo2.getSpaceGroup());
}
}
#pragma warning disable 108 // new keyword hiding
#pragma warning disable 114 // new keyword hiding
namespace Windows.UI.Core
{
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
[global::Uno.NotImplemented]
#endif
public partial interface ICorePointerInputSource
{
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
bool HasCapture
{
get;
}
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
global::Windows.UI.Core.CoreCursor PointerCursor
{
get;
set;
}
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
global::Windows.Foundation.Point PointerPosition
{
get;
}
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
void ReleasePointerCapture();
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
void SetPointerCapture();
#endif
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.HasCapture.get
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerPosition.get
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerCursor.get
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerCursor.set
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerCaptureLost.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerCaptureLost.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerEntered.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerEntered.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerExited.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerExited.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerMoved.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerMoved.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerPressed.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerPressed.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerReleased.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerReleased.remove
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerWheelChanged.add
// Forced skipping of method Windows.UI.Core.ICorePointerInputSource.PointerWheelChanged.remove
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerCaptureLost;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerEntered;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerExited;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerMoved;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerPressed;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerReleased;
#endif
#if __ANDROID__ || __IOS__ || NET461 || __WASM__ || __SKIA__ || __NETSTD_REFERENCE__ || __MACOS__
event global::Windows.Foundation.TypedEventHandler<object, global::Windows.UI.Core.PointerEventArgs> PointerWheelChanged;
#endif
}
}
fileFormatVersion: 2
guid: b78118db070740a0bda430f26b276383
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
###
# Copyright (c) 2003-2005, Daniel DiPaolo
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
from __future__ import print_function
import time
from supybot.test import *
class NewsTestCase(ChannelPluginTestCase):
plugins = ('News','User')
def setUp(self):
ChannelPluginTestCase.setUp(self)
# Create a valid user to use
self.prefix = 'news!bar@baz'
self.irc.feedMsg(ircmsgs.privmsg(self.nick, 'register tester moo',
prefix=self.prefix))
m = self.irc.takeMsg() # Response to register.
def testAddnews(self):
self.assertNotError('add 0 subject: foo')
self.assertRegexp('news', 'subject')
self.assertNotError('add 0 subject2: foo2')
self.assertRegexp('news', 'subject.*subject2')
self.assertNotError('add 5 subject3: foo3')
self.assertRegexp('news', 'subject3')
timeFastForward(6)
self.assertNotRegexp('news', 'subject3')
def testNews(self):
# These should both fail first, as they will have nothing in the DB
self.assertRegexp('news', 'no news')
self.assertRegexp('news #channel', 'no news')
# Now we'll add news and make sure listnews doesn't fail
self.assertNotError('add #channel 0 subject: foo')
self.assertNotError('news #channel')
self.assertNotError('add 0 subject: foo')
self.assertRegexp('news', '#1')
self.assertNotError('news 1')
def testChangenews(self):
self.assertNotError('add 0 Foo: bar')
self.assertNotError('change 1 s/bar/baz/')
self.assertNotRegexp('news 1', 'bar')
self.assertRegexp('news 1', 'baz')
def testOldnews(self):
self.assertRegexp('old', 'No old news')
self.assertNotError('add 0 a: b')
self.assertRegexp('old', 'No old news')
self.assertNotError('add 5 foo: bar')
self.assertRegexp('old', 'No old news')
timeFastForward(6)
self.assertNotError('old')
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package packages loads Go packages for inspection and analysis.
The Load function takes as input a list of patterns and returns a list of Package
structs describing individual packages matched by those patterns.
The LoadMode controls the amount of detail in the loaded packages.
Load passes most patterns directly to the underlying build tool,
but all patterns with the prefix "query=", where query is a
non-empty string of letters from [a-z], are reserved and may be
interpreted as query operators.
Two query operators are currently supported: "file" and "pattern".
The query "file=path/to/file.go" matches the package or packages enclosing
the Go source file path/to/file.go. For example "file=~/go/src/fmt/print.go"
might return the packages "fmt" and "fmt [fmt.test]".
The query "pattern=string" causes "string" to be passed directly to
the underlying build tool. In most cases this is unnecessary,
but an application can use Load("pattern=" + x) as an escaping mechanism
to ensure that x is not interpreted as a query operator if it contains '='.
All other query operators are reserved for future use and currently
cause Load to report an error.
The Package struct provides basic information about the package, including
- ID, a unique identifier for the package in the returned set;
- GoFiles, the names of the package's Go source files;
- Imports, a map from source import strings to the Packages they name;
- Types, the type information for the package's exported symbols;
- Syntax, the parsed syntax trees for the package's source code; and
- TypeInfo, the result of a complete type-check of the package syntax trees.
(See the documentation for type Package for the complete list of fields
and more detailed descriptions.)
For example,
Load(nil, "bytes", "unicode...")
returns four Package structs describing the standard library packages
bytes, unicode, unicode/utf16, and unicode/utf8. Note that one pattern
can match multiple packages and that a package might be matched by
multiple patterns: in general it is not possible to determine which
packages correspond to which patterns.
Note that the list returned by Load contains only the packages matched
by the patterns. Their dependencies can be found by walking the import
graph using the Imports fields.
The Load function can be configured by passing a pointer to a Config as
the first argument. A nil Config is equivalent to the zero Config, which
causes Load to run in LoadFiles mode, collecting minimal information.
See the documentation for type Config for details.
As noted earlier, the Config.Mode controls the amount of detail
reported about the loaded packages, with each mode returning all the data of the
previous mode with some extra added. See the documentation for type LoadMode
for details.
Most tools should pass their command-line arguments (after any flags)
uninterpreted to the loader, so that the loader can interpret them
according to the conventions of the underlying build system.
See the Example function for typical usage.
*/
package packages // import "golang.org/x/tools/go/packages"
/*
Motivation and design considerations
The new package's design solves problems addressed by two existing
packages: go/build, which locates and describes packages, and
golang.org/x/tools/go/loader, which loads, parses and type-checks them.
The go/build.Package structure encodes too much of the 'go build' way
of organizing projects, leaving us in need of a data type that describes a
package of Go source code independent of the underlying build system.
We wanted something that works equally well with go build and vgo, and
also other build systems such as Bazel and Blaze, making it possible to
construct analysis tools that work in all these environments.
Tools such as errcheck and staticcheck were essentially unavailable to
the Go community at Google, and some of Google's internal tools for Go
are unavailable externally.
This new package provides a uniform way to obtain package metadata by
querying each of these build systems, optionally supporting their
preferred command-line notations for packages, so that tools integrate
neatly with users' build environments. The Metadata query function
executes an external query tool appropriate to the current workspace.
Loading packages always returns the complete import graph "all the way down",
even if all you want is information about a single package, because the query
mechanisms of all the build systems we currently support ({go,vgo} list, and
blaze/bazel aspect-based query) cannot provide detailed information
about one package without visiting all its dependencies too, so there is
no additional asymptotic cost to providing transitive information.
(This property might not be true of a hypothetical 5th build system.)
In calls to TypeCheck, all initial packages, and any package that
transitively depends on one of them, must be loaded from source.
Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from
source; D may be loaded from export data, and E may not be loaded at all
(though it's possible that D's export data mentions it, so a
types.Package may be created for it and exposed.)
The old loader had a feature to suppress type-checking of function
bodies on a per-package basis, primarily intended to reduce the work of
obtaining type information for imported packages. Now that imports are
satisfied by export data, the optimization no longer seems necessary.
Despite some early attempts, the old loader did not exploit export data,
instead always using the equivalent of WholeProgram mode. This was due
to the complexity of mixing source and export data packages (now
resolved by the upward traversal mentioned above), and because export data
files were nearly always missing or stale. Now that 'go build' supports
caching, all the underlying build systems can guarantee to produce
export data in a reasonable (amortized) time.
Test "main" packages synthesized by the build system are now reported as
first-class packages, avoiding the need for clients (such as go/ssa) to
reinvent this generation logic.
One way in which go/packages is simpler than the old loader is in its
treatment of in-package tests. In-package tests are packages that
consist of all the files of the library under test, plus the test files.
The old loader constructed in-package tests by a two-phase process of
mutation called "augmentation": first it would construct and type check
all the ordinary library packages and type-check the packages that
depend on them; then it would add more (test) files to the package and
type-check again. This two-phase approach had four major problems:
1) in processing the tests, the loader modified the library package,
leaving no way for a client application to see both the test
package and the library package; one would mutate into the other.
2) because test files can declare additional methods on types defined in
the library portion of the package, the dispatch of method calls in
the library portion was affected by the presence of the test files.
This should have been a clue that the packages were logically
different.
3) this model of "augmentation" assumed at most one in-package test
per library package, which is true of projects using 'go build',
but not other build systems.
4) because of the two-phase nature of test processing, all packages that
import the library package had to be processed before augmentation,
forcing a "one-shot" API and preventing the client from calling Load
several times in sequence as is now possible in WholeProgram mode.
(TypeCheck mode has a similar one-shot restriction for a different reason.)
Early drafts of this package supported "multi-shot" operation.
Although it allowed clients to make a sequence of calls (or concurrent
calls) to Load, building up the graph of Packages incrementally,
it was of marginal value: it complicated the API
(since it allowed some options to vary across calls but not others),
it complicated the implementation,
it cannot be made to work in Types mode, as explained above,
and it was less efficient than making one combined call (when this is possible).
Among the clients we have inspected, none made multiple calls to load
but could not be easily and satisfactorily modified to make only a single call.
However, application changes may be required.
For example, the ssadump command loads the user-specified packages
and in addition the runtime package. It is tempting to simply append
"runtime" to the user-provided list, but that does not work if the user
specified an ad-hoc package such as [a.go b.go].
Instead, ssadump no longer requests the runtime package,
but seeks it among the dependencies of the user-specified packages,
and emits an error if it is not found.
Overlays: The Overlay field in the Config allows providing alternate contents
for Go source files, by providing a mapping from file path to contents.
go/packages will pull in new imports added in overlay files when go/packages
is run in LoadImports mode or greater.
Overlay support for the go list driver isn't complete yet: if the file doesn't
exist on disk, it will only be recognized in an overlay if it is a non-test file
and the package would be reported even without the overlay.
Questions & Tasks
- Add GOARCH/GOOS?
They are not portable concepts, but could be made portable.
Our goal has been to allow users to express themselves using the conventions
of the underlying build system: if the build system honors GOARCH
during a build and during a metadata query, then so should
applications built atop that query mechanism.
Conversely, if the target architecture of the build is determined by
command-line flags, the application can pass the relevant
flags through to the build system using a command such as:
myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin"
However, this approach is low-level, unwieldy, and non-portable.
GOOS and GOARCH seem important enough to warrant a dedicated option.
- How should we handle partial failures such as a mixture of good and
malformed patterns, existing and non-existent packages, successful and
failed builds, import failures, import cycles, and so on, in a call to
Load?
- Support bazel, blaze, and go1.10 list, not just go1.11 list.
- Handle (and test) various partial success cases, e.g.
a mixture of good packages and:
invalid patterns
nonexistent packages
empty packages
packages with malformed package or import declarations
unreadable files
import cycles
other parse errors
type errors
Make sure we record errors at the correct place in the graph.
- Missing packages among initial arguments are not reported.
Return bogus packages for them, like golist does.
- "undeclared name" errors (for example) are reported out of source file
order. I suspect this is due to the breadth-first resolution now used
by go/types. Is that a bug? Discuss with gri.
*/
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_COMPILER_MLIR_TENSORFLOW_UTILS_MANGLING_UTIL_H_
#define TENSORFLOW_COMPILER_MLIR_TENSORFLOW_UTILS_MANGLING_UTIL_H_
#include "absl/strings/string_view.h"
#include "tensorflow/core/framework/tensor.pb.h"
#include "tensorflow/core/framework/tensor_shape.pb.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/lib/core/status.h"
namespace tensorflow {
namespace mangling_util {
// The type of a mangled string.
enum class MangledKind { kUnknown, kDataType, kTensorShape, kTensor };
// Mangles an attribute name, marking the attribute as a TensorFlow attribute.
string MangleAttributeName(absl::string_view str);
// Returns true if 'str' was mangled with MangleAttributeName.
bool IsMangledAttributeName(absl::string_view str);
// Demangles an attribute name that was mangled with MangleAttributeName.
// REQUIRES: IsMangledAttributeName returns true.
absl::string_view DemangleAttributeName(absl::string_view str);
// Returns the type of a mangled string, or kUnknown.
MangledKind GetMangledKind(absl::string_view str);
// Return a TensorShapeProto mangled as a string.
string MangleShape(const TensorShapeProto& shape);
// Demangle a string mangled with MangleShape.
Status DemangleShape(absl::string_view str, TensorShapeProto* proto);
// Return a TensorProto mangled as a string.
string MangleTensor(const TensorProto& tensor);
// Demangle a string mangled with MangleTensor.
Status DemangleTensor(absl::string_view str, TensorProto* proto);
// Return a DataType mangled as a string.
string MangleDataType(const DataType& dtype);
// Demangle a string mangled with MangleDataType.
Status DemangleDataType(absl::string_view str, DataType* proto);
} // namespace mangling_util
} // namespace tensorflow
#endif // TENSORFLOW_COMPILER_MLIR_TENSORFLOW_UTILS_MANGLING_UTIL_H_
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>UReport Test</title>
</head>
<body>
${table}
</body>
</html>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.yalantis.flipviewpager">
<application android:allowBackup="true">
</application>
</manifest>
from vaitk import core
class Selection:
def __init__(self):
self.clear()
self.changed = core.VSignal(self)
def isValid(self):
return self._start_line is not None and self._end_line is not None
def clear(self):
self._start_line = None
self._end_line = None
@property
def num_lines(self):
return self.high_line - self.low_line + 1
@property
def low_line(self):
x = self._start_line, self._end_line
return min(x)
@property
def high_line(self):
x = self._start_line, self._end_line
return max(x)
@property
def start_line(self):
return self._start_line
@start_line.setter
def start_line(self, start_line):
self._start_line = start_line
self.changed.emit()
@property
def end_line(self):
return self._end_line
@end_line.setter
def end_line(self, end_line):
self._end_line = end_line
self.changed.emit()
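
The `low_line`/`high_line` properties normalize a selection made in either direction, so a selection dragged upward (start below end) looks the same as one dragged downward. The ordering trick in a dependency-free sketch (hypothetical function name):

```python
# A selection dragged upward has start_line > end_line; min/max
# normalize it so consumers always see low <= high.
def normalized(start_line, end_line):
    return min(start_line, end_line), max(start_line, end_line)

print(normalized(10, 4))  # (4, 10)
```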
{
    "JSON Test Pattern pass3":
    {
        "In this test": "It is an object.",
        "The outermost value": "must be an object or array."
    }
}
/*
* linux/fs/nfs/pagelist.c
*
* A set of helper functions for managing NFS read and write requests.
* The main purpose of these routines is to provide support for the
* coalescing of several requests into a single RPC call.
*
* Copyright 2000, 2001 (c) Trond Myklebust <[email protected]>
*
*/
#include <linux/slab.h>
#include <linux/file.h>
#include <linux/sched.h>
#include <linux/sunrpc/clnt.h>
#include <linux/nfs.h>
#include <linux/nfs3.h>
#include <linux/nfs4.h>
#include <linux/nfs_page.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_mount.h>
#include <linux/export.h>
#include "internal.h"
#include "pnfs.h"
#define NFSDBG_FACILITY NFSDBG_PAGECACHE
static struct kmem_cache *nfs_page_cachep;
static const struct rpc_call_ops nfs_pgio_common_ops;
struct nfs_pgio_mirror *
nfs_pgio_current_mirror(struct nfs_pageio_descriptor *desc)
{
return nfs_pgio_has_mirroring(desc) ?
&desc->pg_mirrors[desc->pg_mirror_idx] :
&desc->pg_mirrors[0];
}
EXPORT_SYMBOL_GPL(nfs_pgio_current_mirror);
void nfs_pgheader_init(struct nfs_pageio_descriptor *desc,
struct nfs_pgio_header *hdr,
void (*release)(struct nfs_pgio_header *hdr))
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
hdr->req = nfs_list_entry(mirror->pg_list.next);
hdr->inode = desc->pg_inode;
hdr->cred = hdr->req->wb_context->cred;
hdr->io_start = req_offset(hdr->req);
hdr->good_bytes = mirror->pg_count;
hdr->io_completion = desc->pg_io_completion;
hdr->dreq = desc->pg_dreq;
hdr->release = release;
hdr->completion_ops = desc->pg_completion_ops;
if (hdr->completion_ops->init_hdr)
hdr->completion_ops->init_hdr(hdr);
hdr->pgio_mirror_idx = desc->pg_mirror_idx;
}
EXPORT_SYMBOL_GPL(nfs_pgheader_init);
void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
{
spin_lock(&hdr->lock);
if (!test_and_set_bit(NFS_IOHDR_ERROR, &hdr->flags)
|| pos < hdr->io_start + hdr->good_bytes) {
clear_bit(NFS_IOHDR_EOF, &hdr->flags);
hdr->good_bytes = pos - hdr->io_start;
hdr->error = error;
}
spin_unlock(&hdr->lock);
}
static inline struct nfs_page *
nfs_page_alloc(void)
{
struct nfs_page *p = kmem_cache_zalloc(nfs_page_cachep, GFP_NOIO);
if (p)
INIT_LIST_HEAD(&p->wb_list);
return p;
}
static inline void
nfs_page_free(struct nfs_page *p)
{
kmem_cache_free(nfs_page_cachep, p);
}
/**
* nfs_iocounter_wait - wait for i/o to complete
* @l_ctx: nfs_lock_context with io_counter to use
*
* returns -ERESTARTSYS if interrupted by a fatal signal.
* Otherwise returns 0 once the io_count hits 0.
*/
int
nfs_iocounter_wait(struct nfs_lock_context *l_ctx)
{
return wait_on_atomic_t(&l_ctx->io_count, nfs_wait_atomic_killable,
TASK_KILLABLE);
}
/**
* nfs_async_iocounter_wait - wait on a rpc_waitqueue for I/O
* to complete
* @task: the rpc_task that should wait
* @l_ctx: nfs_lock_context with io_counter to check
*
* Returns true if there is outstanding I/O to wait on and the
* task has been put to sleep.
*/
bool
nfs_async_iocounter_wait(struct rpc_task *task, struct nfs_lock_context *l_ctx)
{
struct inode *inode = d_inode(l_ctx->open_context->dentry);
bool ret = false;
if (atomic_read(&l_ctx->io_count) > 0) {
rpc_sleep_on(&NFS_SERVER(inode)->uoc_rpcwaitq, task, NULL);
ret = true;
}
if (atomic_read(&l_ctx->io_count) == 0) {
rpc_wake_up_queued_task(&NFS_SERVER(inode)->uoc_rpcwaitq, task);
ret = false;
}
return ret;
}
EXPORT_SYMBOL_GPL(nfs_async_iocounter_wait);
/*
* nfs_page_group_lock - lock the head of the page group
* @req - request in group that is to be locked
*
* this lock must be held when traversing or modifying the page
* group list
*
* return 0 on success, < 0 on error
*/
int
nfs_page_group_lock(struct nfs_page *req)
{
struct nfs_page *head = req->wb_head;
WARN_ON_ONCE(head != head->wb_head);
if (!test_and_set_bit(PG_HEADLOCK, &head->wb_flags))
return 0;
set_bit(PG_CONTENDED1, &head->wb_flags);
smp_mb__after_atomic();
return wait_on_bit_lock(&head->wb_flags, PG_HEADLOCK,
TASK_UNINTERRUPTIBLE);
}
/*
* nfs_page_group_unlock - unlock the head of the page group
* @req - request in group that is to be unlocked
*/
void
nfs_page_group_unlock(struct nfs_page *req)
{
struct nfs_page *head = req->wb_head;
WARN_ON_ONCE(head != head->wb_head);
smp_mb__before_atomic();
clear_bit(PG_HEADLOCK, &head->wb_flags);
smp_mb__after_atomic();
if (!test_bit(PG_CONTENDED1, &head->wb_flags))
return;
wake_up_bit(&head->wb_flags, PG_HEADLOCK);
}
/*
* nfs_page_group_sync_on_bit_locked
*
* must be called with page group lock held
*/
static bool
nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)
{
struct nfs_page *head = req->wb_head;
struct nfs_page *tmp;
WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &head->wb_flags));
WARN_ON_ONCE(test_and_set_bit(bit, &req->wb_flags));
tmp = req->wb_this_page;
while (tmp != req) {
if (!test_bit(bit, &tmp->wb_flags))
return false;
tmp = tmp->wb_this_page;
}
/* true! reset all bits */
tmp = req;
do {
clear_bit(bit, &tmp->wb_flags);
tmp = tmp->wb_this_page;
} while (tmp != req);
return true;
}
/*
* nfs_page_group_sync_on_bit - set bit on current request, but only
* return true if the bit is set for all requests in page group
* @req - request in page group
* @bit - PG_* bit that is used to sync page group
*/
bool nfs_page_group_sync_on_bit(struct nfs_page *req, unsigned int bit)
{
bool ret;
nfs_page_group_lock(req);
ret = nfs_page_group_sync_on_bit_locked(req, bit);
nfs_page_group_unlock(req);
return ret;
}
/*
* nfs_page_group_init - Initialize the page group linkage for @req
* @req - a new nfs request
* @prev - the previous request in page group, or NULL if @req is the first
* or only request in the group (the head).
*/
static inline void
nfs_page_group_init(struct nfs_page *req, struct nfs_page *prev)
{
struct inode *inode;
WARN_ON_ONCE(prev == req);
if (!prev) {
/* a head request */
req->wb_head = req;
req->wb_this_page = req;
} else {
/* a subrequest */
WARN_ON_ONCE(prev->wb_this_page != prev->wb_head);
WARN_ON_ONCE(!test_bit(PG_HEADLOCK, &prev->wb_head->wb_flags));
req->wb_head = prev->wb_head;
req->wb_this_page = prev->wb_this_page;
prev->wb_this_page = req;
/* All subrequests take a ref on the head request until
* nfs_page_group_destroy is called */
kref_get(&req->wb_head->wb_kref);
/* grab extra ref and bump the request count if head request
* has extra ref from the write/commit path to handle handoff
* between write and commit lists. */
if (test_bit(PG_INODE_REF, &prev->wb_head->wb_flags)) {
inode = page_file_mapping(req->wb_page)->host;
set_bit(PG_INODE_REF, &req->wb_flags);
kref_get(&req->wb_kref);
atomic_long_inc(&NFS_I(inode)->nrequests);
}
}
}
/*
* nfs_page_group_destroy - sync the destruction of page groups
* @req - request that no longer needs the page group
*
* releases the page group reference from each member once all
* members have called this function.
*/
static void
nfs_page_group_destroy(struct kref *kref)
{
struct nfs_page *req = container_of(kref, struct nfs_page, wb_kref);
struct nfs_page *head = req->wb_head;
struct nfs_page *tmp, *next;
if (!nfs_page_group_sync_on_bit(req, PG_TEARDOWN))
goto out;
tmp = req;
do {
next = tmp->wb_this_page;
/* unlink and free */
tmp->wb_this_page = tmp;
tmp->wb_head = tmp;
nfs_free_request(tmp);
tmp = next;
} while (tmp != req);
out:
/* subrequests must release the ref on the head request */
if (head != req)
nfs_release_request(head);
}
/**
* nfs_create_request - Create an NFS read/write request.
* @ctx: open context to use
* @page: page to write
* @last: last nfs request created for this page group or NULL if head
* @offset: starting offset within the page for the write
* @count: number of bytes to read/write
*
* The page must be locked by the caller. This makes sure we never
* create two different requests for the same page.
* User should ensure it is safe to sleep in this function.
*/
struct nfs_page *
nfs_create_request(struct nfs_open_context *ctx, struct page *page,
struct nfs_page *last, unsigned int offset,
unsigned int count)
{
struct nfs_page *req;
struct nfs_lock_context *l_ctx;
if (test_bit(NFS_CONTEXT_BAD, &ctx->flags))
return ERR_PTR(-EBADF);
/* try to allocate the request struct */
req = nfs_page_alloc();
if (req == NULL)
return ERR_PTR(-ENOMEM);
/* get lock context early so we can deal with alloc failures */
l_ctx = nfs_get_lock_context(ctx);
if (IS_ERR(l_ctx)) {
nfs_page_free(req);
return ERR_CAST(l_ctx);
}
req->wb_lock_context = l_ctx;
atomic_inc(&l_ctx->io_count);
/* Initialize the request struct. Initially, we assume a
* long write-back delay. This will be adjusted in
* update_nfs_request below if the region is not locked. */
req->wb_page = page;
if (page) {
req->wb_index = page_index(page);
get_page(page);
}
req->wb_offset = offset;
req->wb_pgbase = offset;
req->wb_bytes = count;
req->wb_context = get_nfs_open_context(ctx);
kref_init(&req->wb_kref);
nfs_page_group_init(req, last);
return req;
}
/**
* nfs_unlock_request - Unlock request and wake up sleepers.
 * @req: the request to unlock
*/
void nfs_unlock_request(struct nfs_page *req)
{
if (!NFS_WBACK_BUSY(req)) {
printk(KERN_ERR "NFS: Invalid unlock attempted\n");
BUG();
}
smp_mb__before_atomic();
clear_bit(PG_BUSY, &req->wb_flags);
smp_mb__after_atomic();
if (!test_bit(PG_CONTENDED2, &req->wb_flags))
return;
wake_up_bit(&req->wb_flags, PG_BUSY);
}
/**
* nfs_unlock_and_release_request - Unlock request and release the nfs_page
 * @req: the request to unlock and release
*/
void nfs_unlock_and_release_request(struct nfs_page *req)
{
nfs_unlock_request(req);
nfs_release_request(req);
}
/*
* nfs_clear_request - Free up all resources allocated to the request
 * @req: the request whose resources are released
*
* Release page and open context resources associated with a read/write
* request after it has completed.
*/
static void nfs_clear_request(struct nfs_page *req)
{
struct page *page = req->wb_page;
struct nfs_open_context *ctx = req->wb_context;
struct nfs_lock_context *l_ctx = req->wb_lock_context;
if (page != NULL) {
put_page(page);
req->wb_page = NULL;
}
if (l_ctx != NULL) {
if (atomic_dec_and_test(&l_ctx->io_count)) {
wake_up_atomic_t(&l_ctx->io_count);
if (test_bit(NFS_CONTEXT_UNLOCK, &ctx->flags))
rpc_wake_up(&NFS_SERVER(d_inode(ctx->dentry))->uoc_rpcwaitq);
}
nfs_put_lock_context(l_ctx);
req->wb_lock_context = NULL;
}
if (ctx != NULL) {
put_nfs_open_context(ctx);
req->wb_context = NULL;
}
}
/**
* nfs_release_request - Release the count on an NFS read/write request
* @req: request to release
*
* Note: Should never be called with the spinlock held!
*/
void nfs_free_request(struct nfs_page *req)
{
WARN_ON_ONCE(req->wb_this_page != req);
/* extra debug: make sure no sync bits are still set */
WARN_ON_ONCE(test_bit(PG_TEARDOWN, &req->wb_flags));
WARN_ON_ONCE(test_bit(PG_UNLOCKPAGE, &req->wb_flags));
WARN_ON_ONCE(test_bit(PG_UPTODATE, &req->wb_flags));
WARN_ON_ONCE(test_bit(PG_WB_END, &req->wb_flags));
WARN_ON_ONCE(test_bit(PG_REMOVE, &req->wb_flags));
/* Release struct file and open context */
nfs_clear_request(req);
nfs_page_free(req);
}
void nfs_release_request(struct nfs_page *req)
{
kref_put(&req->wb_kref, nfs_page_group_destroy);
}
EXPORT_SYMBOL_GPL(nfs_release_request);
/**
* nfs_wait_on_request - Wait for a request to complete.
* @req: request to wait upon.
*
* Interruptible by fatal signals only.
* The user is responsible for holding a count on the request.
*/
int
nfs_wait_on_request(struct nfs_page *req)
{
if (!test_bit(PG_BUSY, &req->wb_flags))
return 0;
set_bit(PG_CONTENDED2, &req->wb_flags);
smp_mb__after_atomic();
return wait_on_bit_io(&req->wb_flags, PG_BUSY,
TASK_UNINTERRUPTIBLE);
}
EXPORT_SYMBOL_GPL(nfs_wait_on_request);
/*
* nfs_generic_pg_test - determine if requests can be coalesced
* @desc: pointer to descriptor
* @prev: previous request in desc, or NULL
* @req: this request
*
* Returns zero if @req can be coalesced into @desc, otherwise it returns
* the size of the request.
*/
size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
struct nfs_page *prev, struct nfs_page *req)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
if (mirror->pg_count > mirror->pg_bsize) {
/* should never happen */
WARN_ON_ONCE(1);
return 0;
}
/*
* Limit the request size so that we can still allocate a page array
* for it without upsetting the slab allocator.
*/
if (((mirror->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
sizeof(struct page *) > PAGE_SIZE)
return 0;
return min(mirror->pg_bsize - mirror->pg_count, (size_t)req->wb_bytes);
}
EXPORT_SYMBOL_GPL(nfs_generic_pg_test);
struct nfs_pgio_header *nfs_pgio_header_alloc(const struct nfs_rw_ops *ops)
{
struct nfs_pgio_header *hdr = ops->rw_alloc_header();
if (hdr) {
INIT_LIST_HEAD(&hdr->pages);
spin_lock_init(&hdr->lock);
hdr->rw_ops = ops;
}
return hdr;
}
EXPORT_SYMBOL_GPL(nfs_pgio_header_alloc);
/**
* nfs_pgio_data_destroy - make @hdr suitable for reuse
*
* Frees memory and releases refs from nfs_generic_pgio, so that it may
* be called again.
*
* @hdr: A header that has had nfs_generic_pgio called
*/
static void nfs_pgio_data_destroy(struct nfs_pgio_header *hdr)
{
if (hdr->args.context)
put_nfs_open_context(hdr->args.context);
if (hdr->page_array.pagevec != hdr->page_array.page_array)
kfree(hdr->page_array.pagevec);
}
/*
* nfs_pgio_header_free - Free a read or write header
* @hdr: The header to free
*/
void nfs_pgio_header_free(struct nfs_pgio_header *hdr)
{
nfs_pgio_data_destroy(hdr);
hdr->rw_ops->rw_free_header(hdr);
}
EXPORT_SYMBOL_GPL(nfs_pgio_header_free);
/**
* nfs_pgio_rpcsetup - Set up arguments for a pageio call
* @hdr: The pageio hdr
* @count: Number of bytes to read
* @offset: Initial offset
* @how: How to commit data (writes only)
* @cinfo: Commit information for the call (writes only)
*/
static void nfs_pgio_rpcsetup(struct nfs_pgio_header *hdr,
unsigned int count, unsigned int offset,
int how, struct nfs_commit_info *cinfo)
{
struct nfs_page *req = hdr->req;
/* Set up the RPC argument and reply structs
* NB: take care not to mess about with hdr->commit et al. */
hdr->args.fh = NFS_FH(hdr->inode);
hdr->args.offset = req_offset(req) + offset;
/* pnfs_set_layoutcommit needs this */
hdr->mds_offset = hdr->args.offset;
hdr->args.pgbase = req->wb_pgbase + offset;
hdr->args.pages = hdr->page_array.pagevec;
hdr->args.count = count;
hdr->args.context = get_nfs_open_context(req->wb_context);
hdr->args.lock_context = req->wb_lock_context;
hdr->args.stable = NFS_UNSTABLE;
switch (how & (FLUSH_STABLE | FLUSH_COND_STABLE)) {
case 0:
break;
case FLUSH_COND_STABLE:
if (nfs_reqs_to_commit(cinfo))
break;
default:
hdr->args.stable = NFS_FILE_SYNC;
}
hdr->res.fattr = &hdr->fattr;
hdr->res.count = count;
hdr->res.eof = 0;
hdr->res.verf = &hdr->verf;
nfs_fattr_init(&hdr->fattr);
}
/**
* nfs_pgio_prepare - Prepare pageio hdr to go over the wire
* @task: The current task
* @calldata: pageio header to prepare
*/
static void nfs_pgio_prepare(struct rpc_task *task, void *calldata)
{
struct nfs_pgio_header *hdr = calldata;
int err;
err = NFS_PROTO(hdr->inode)->pgio_rpc_prepare(task, hdr);
if (err)
rpc_exit(task, err);
}
int nfs_initiate_pgio(struct rpc_clnt *clnt, struct nfs_pgio_header *hdr,
struct rpc_cred *cred, const struct nfs_rpc_ops *rpc_ops,
const struct rpc_call_ops *call_ops, int how, int flags)
{
struct rpc_task *task;
struct rpc_message msg = {
.rpc_argp = &hdr->args,
.rpc_resp = &hdr->res,
.rpc_cred = cred,
};
struct rpc_task_setup task_setup_data = {
.rpc_client = clnt,
.task = &hdr->task,
.rpc_message = &msg,
.callback_ops = call_ops,
.callback_data = hdr,
.workqueue = nfsiod_workqueue,
.flags = RPC_TASK_ASYNC | flags,
};
int ret = 0;
hdr->rw_ops->rw_initiate(hdr, &msg, rpc_ops, &task_setup_data, how);
dprintk("NFS: initiated pgio call "
"(req %s/%llu, %u bytes @ offset %llu)\n",
hdr->inode->i_sb->s_id,
(unsigned long long)NFS_FILEID(hdr->inode),
hdr->args.count,
(unsigned long long)hdr->args.offset);
task = rpc_run_task(&task_setup_data);
if (IS_ERR(task)) {
ret = PTR_ERR(task);
goto out;
}
if (how & FLUSH_SYNC) {
ret = rpc_wait_for_completion_task(task);
if (ret == 0)
ret = task->tk_status;
}
rpc_put_task(task);
out:
return ret;
}
EXPORT_SYMBOL_GPL(nfs_initiate_pgio);
/**
* nfs_pgio_error - Clean up from a pageio error
* @desc: IO descriptor
* @hdr: pageio header
*/
static void nfs_pgio_error(struct nfs_pgio_header *hdr)
{
set_bit(NFS_IOHDR_REDO, &hdr->flags);
hdr->completion_ops->completion(hdr);
}
/**
* nfs_pgio_release - Release pageio data
* @calldata: The pageio header to release
*/
static void nfs_pgio_release(void *calldata)
{
struct nfs_pgio_header *hdr = calldata;
hdr->completion_ops->completion(hdr);
}
static void nfs_pageio_mirror_init(struct nfs_pgio_mirror *mirror,
unsigned int bsize)
{
INIT_LIST_HEAD(&mirror->pg_list);
mirror->pg_bytes_written = 0;
mirror->pg_count = 0;
mirror->pg_bsize = bsize;
mirror->pg_base = 0;
mirror->pg_recoalesce = 0;
}
/**
* nfs_pageio_init - initialise a page io descriptor
* @desc: pointer to descriptor
* @inode: pointer to inode
* @pg_ops: pointer to pageio operations
* @compl_ops: pointer to pageio completion operations
* @rw_ops: pointer to nfs read/write operations
* @bsize: io block size
* @io_flags: extra parameters for the io function
*/
void nfs_pageio_init(struct nfs_pageio_descriptor *desc,
struct inode *inode,
const struct nfs_pageio_ops *pg_ops,
const struct nfs_pgio_completion_ops *compl_ops,
const struct nfs_rw_ops *rw_ops,
size_t bsize,
int io_flags)
{
desc->pg_moreio = 0;
desc->pg_inode = inode;
desc->pg_ops = pg_ops;
desc->pg_completion_ops = compl_ops;
desc->pg_rw_ops = rw_ops;
desc->pg_ioflags = io_flags;
desc->pg_error = 0;
desc->pg_lseg = NULL;
desc->pg_io_completion = NULL;
desc->pg_dreq = NULL;
desc->pg_bsize = bsize;
desc->pg_mirror_count = 1;
desc->pg_mirror_idx = 0;
desc->pg_mirrors_dynamic = NULL;
desc->pg_mirrors = desc->pg_mirrors_static;
nfs_pageio_mirror_init(&desc->pg_mirrors[0], bsize);
}
/**
* nfs_pgio_result - Basic pageio error handling
* @task: The task that ran
* @calldata: Pageio header to check
*/
static void nfs_pgio_result(struct rpc_task *task, void *calldata)
{
struct nfs_pgio_header *hdr = calldata;
struct inode *inode = hdr->inode;
dprintk("NFS: %s: %5u, (status %d)\n", __func__,
task->tk_pid, task->tk_status);
if (hdr->rw_ops->rw_done(task, hdr, inode) != 0)
return;
if (task->tk_status < 0)
nfs_set_pgio_error(hdr, task->tk_status, hdr->args.offset);
else
hdr->rw_ops->rw_result(task, hdr);
}
/*
* Create an RPC task for the given read or write request and kick it.
* The page must have been locked by the caller.
*
* It may happen that the page we're passed is not marked dirty.
* This is the case if nfs_updatepage detects a conflicting request
* that has been written but not committed.
*/
int nfs_generic_pgio(struct nfs_pageio_descriptor *desc,
struct nfs_pgio_header *hdr)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
struct nfs_page *req;
struct page **pages,
*last_page;
struct list_head *head = &mirror->pg_list;
struct nfs_commit_info cinfo;
struct nfs_page_array *pg_array = &hdr->page_array;
unsigned int pagecount, pageused;
gfp_t gfp_flags = GFP_KERNEL;
pagecount = nfs_page_array_len(mirror->pg_base, mirror->pg_count);
pg_array->npages = pagecount;
if (pagecount <= ARRAY_SIZE(pg_array->page_array))
pg_array->pagevec = pg_array->page_array;
else {
if (hdr->rw_mode == FMODE_WRITE)
gfp_flags = GFP_NOIO;
pg_array->pagevec = kcalloc(pagecount, sizeof(struct page *), gfp_flags);
if (!pg_array->pagevec) {
pg_array->npages = 0;
nfs_pgio_error(hdr);
desc->pg_error = -ENOMEM;
return desc->pg_error;
}
}
nfs_init_cinfo(&cinfo, desc->pg_inode, desc->pg_dreq);
pages = hdr->page_array.pagevec;
last_page = NULL;
pageused = 0;
while (!list_empty(head)) {
req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
nfs_list_add_request(req, &hdr->pages);
if (!last_page || last_page != req->wb_page) {
pageused++;
if (pageused > pagecount)
break;
*pages++ = last_page = req->wb_page;
}
}
if (WARN_ON_ONCE(pageused != pagecount)) {
nfs_pgio_error(hdr);
desc->pg_error = -EINVAL;
return desc->pg_error;
}
if ((desc->pg_ioflags & FLUSH_COND_STABLE) &&
(desc->pg_moreio || nfs_reqs_to_commit(&cinfo)))
desc->pg_ioflags &= ~FLUSH_COND_STABLE;
/* Set up the argument struct */
nfs_pgio_rpcsetup(hdr, mirror->pg_count, 0, desc->pg_ioflags, &cinfo);
desc->pg_rpc_callops = &nfs_pgio_common_ops;
return 0;
}
EXPORT_SYMBOL_GPL(nfs_generic_pgio);
static int nfs_generic_pg_pgios(struct nfs_pageio_descriptor *desc)
{
struct nfs_pgio_header *hdr;
int ret;
hdr = nfs_pgio_header_alloc(desc->pg_rw_ops);
if (!hdr) {
desc->pg_error = -ENOMEM;
return desc->pg_error;
}
nfs_pgheader_init(desc, hdr, nfs_pgio_header_free);
ret = nfs_generic_pgio(desc, hdr);
if (ret == 0)
ret = nfs_initiate_pgio(NFS_CLIENT(hdr->inode),
hdr,
hdr->cred,
NFS_PROTO(hdr->inode),
desc->pg_rpc_callops,
desc->pg_ioflags, 0);
return ret;
}
static struct nfs_pgio_mirror *
nfs_pageio_alloc_mirrors(struct nfs_pageio_descriptor *desc,
unsigned int mirror_count)
{
struct nfs_pgio_mirror *ret;
unsigned int i;
kfree(desc->pg_mirrors_dynamic);
desc->pg_mirrors_dynamic = NULL;
if (mirror_count == 1)
return desc->pg_mirrors_static;
ret = kmalloc_array(mirror_count, sizeof(*ret), GFP_NOFS);
if (ret != NULL) {
for (i = 0; i < mirror_count; i++)
nfs_pageio_mirror_init(&ret[i], desc->pg_bsize);
desc->pg_mirrors_dynamic = ret;
}
return ret;
}
/*
* nfs_pageio_setup_mirroring - determine if mirroring is to be used
* by calling the pg_get_mirror_count op
*/
static void nfs_pageio_setup_mirroring(struct nfs_pageio_descriptor *pgio,
struct nfs_page *req)
{
unsigned int mirror_count = 1;
if (pgio->pg_ops->pg_get_mirror_count)
mirror_count = pgio->pg_ops->pg_get_mirror_count(pgio, req);
if (mirror_count == pgio->pg_mirror_count || pgio->pg_error < 0)
return;
if (!mirror_count || mirror_count > NFS_PAGEIO_DESCRIPTOR_MIRROR_MAX) {
pgio->pg_error = -EINVAL;
return;
}
pgio->pg_mirrors = nfs_pageio_alloc_mirrors(pgio, mirror_count);
if (pgio->pg_mirrors == NULL) {
pgio->pg_error = -ENOMEM;
pgio->pg_mirrors = pgio->pg_mirrors_static;
mirror_count = 1;
}
pgio->pg_mirror_count = mirror_count;
}
/*
* nfs_pageio_stop_mirroring - stop using mirroring (set mirror count to 1)
*/
void nfs_pageio_stop_mirroring(struct nfs_pageio_descriptor *pgio)
{
pgio->pg_mirror_count = 1;
pgio->pg_mirror_idx = 0;
}
static void nfs_pageio_cleanup_mirroring(struct nfs_pageio_descriptor *pgio)
{
pgio->pg_mirror_count = 1;
pgio->pg_mirror_idx = 0;
pgio->pg_mirrors = pgio->pg_mirrors_static;
kfree(pgio->pg_mirrors_dynamic);
pgio->pg_mirrors_dynamic = NULL;
}
static bool nfs_match_lock_context(const struct nfs_lock_context *l1,
const struct nfs_lock_context *l2)
{
return l1->lockowner == l2->lockowner;
}
/**
* nfs_can_coalesce_requests - test two requests for compatibility
* @prev: pointer to nfs_page
* @req: pointer to nfs_page
*
* The nfs_page structures 'prev' and 'req' are compared to ensure that the
* page data area they describe is contiguous, and that their RPC
* credentials, NFSv4 open state, and lockowners are the same.
*
* Return 'true' if this is the case, else return 'false'.
*/
static bool nfs_can_coalesce_requests(struct nfs_page *prev,
struct nfs_page *req,
struct nfs_pageio_descriptor *pgio)
{
size_t size;
struct file_lock_context *flctx;
if (prev) {
if (!nfs_match_open_context(req->wb_context, prev->wb_context))
return false;
flctx = d_inode(req->wb_context->dentry)->i_flctx;
if (flctx != NULL &&
!(list_empty_careful(&flctx->flc_posix) &&
list_empty_careful(&flctx->flc_flock)) &&
!nfs_match_lock_context(req->wb_lock_context,
prev->wb_lock_context))
return false;
if (req_offset(req) != req_offset(prev) + prev->wb_bytes)
return false;
if (req->wb_page == prev->wb_page) {
if (req->wb_pgbase != prev->wb_pgbase + prev->wb_bytes)
return false;
} else {
if (req->wb_pgbase != 0 ||
prev->wb_pgbase + prev->wb_bytes != PAGE_SIZE)
return false;
}
}
size = pgio->pg_ops->pg_test(pgio, prev, req);
WARN_ON_ONCE(size > req->wb_bytes);
if (size && size < req->wb_bytes)
req->wb_bytes = size;
return size > 0;
}
/**
* nfs_pageio_do_add_request - Attempt to coalesce a request into a page list.
* @desc: destination io descriptor
* @req: request
*
* Returns true if the request 'req' was successfully coalesced into the
* existing list of pages 'desc'.
*/
static int nfs_pageio_do_add_request(struct nfs_pageio_descriptor *desc,
struct nfs_page *req)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
struct nfs_page *prev = NULL;
if (mirror->pg_count != 0) {
prev = nfs_list_entry(mirror->pg_list.prev);
} else {
if (desc->pg_ops->pg_init)
desc->pg_ops->pg_init(desc, req);
if (desc->pg_error < 0)
return 0;
mirror->pg_base = req->wb_pgbase;
}
if (!nfs_can_coalesce_requests(prev, req, desc))
return 0;
nfs_list_remove_request(req);
nfs_list_add_request(req, &mirror->pg_list);
mirror->pg_count += req->wb_bytes;
return 1;
}
/*
* Helper for nfs_pageio_add_request and nfs_pageio_complete
*/
static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
if (!list_empty(&mirror->pg_list)) {
int error = desc->pg_ops->pg_doio(desc);
if (error < 0)
desc->pg_error = error;
else
mirror->pg_bytes_written += mirror->pg_count;
}
if (list_empty(&mirror->pg_list)) {
mirror->pg_count = 0;
mirror->pg_base = 0;
}
}
/**
* nfs_pageio_add_request - Attempt to coalesce a request into a page list.
* @desc: destination io descriptor
* @req: request
*
* This may split a request into subrequests which are all part of the
* same page group.
*
* Returns true if the request 'req' was successfully coalesced into the
* existing list of pages 'desc'.
*/
static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
struct nfs_page *req)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
struct nfs_page *subreq;
unsigned int bytes_left = 0;
unsigned int offset, pgbase;
nfs_page_group_lock(req);
subreq = req;
bytes_left = subreq->wb_bytes;
offset = subreq->wb_offset;
pgbase = subreq->wb_pgbase;
do {
if (!nfs_pageio_do_add_request(desc, subreq)) {
/* make sure pg_test call(s) did nothing */
WARN_ON_ONCE(subreq->wb_bytes != bytes_left);
WARN_ON_ONCE(subreq->wb_offset != offset);
WARN_ON_ONCE(subreq->wb_pgbase != pgbase);
nfs_page_group_unlock(req);
desc->pg_moreio = 1;
nfs_pageio_doio(desc);
if (desc->pg_error < 0)
return 0;
if (mirror->pg_recoalesce)
return 0;
/* retry add_request for this subreq */
nfs_page_group_lock(req);
continue;
}
/* check for buggy pg_test call(s) */
WARN_ON_ONCE(subreq->wb_bytes + subreq->wb_pgbase > PAGE_SIZE);
WARN_ON_ONCE(subreq->wb_bytes > bytes_left);
WARN_ON_ONCE(subreq->wb_bytes == 0);
bytes_left -= subreq->wb_bytes;
offset += subreq->wb_bytes;
pgbase += subreq->wb_bytes;
if (bytes_left) {
subreq = nfs_create_request(req->wb_context,
req->wb_page,
subreq, pgbase, bytes_left);
if (IS_ERR(subreq))
goto err_ptr;
nfs_lock_request(subreq);
subreq->wb_offset = offset;
subreq->wb_index = req->wb_index;
}
} while (bytes_left > 0);
nfs_page_group_unlock(req);
return 1;
err_ptr:
desc->pg_error = PTR_ERR(subreq);
nfs_page_group_unlock(req);
return 0;
}
static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
{
struct nfs_pgio_mirror *mirror = nfs_pgio_current_mirror(desc);
LIST_HEAD(head);
do {
list_splice_init(&mirror->pg_list, &head);
mirror->pg_bytes_written -= mirror->pg_count;
mirror->pg_count = 0;
mirror->pg_base = 0;
mirror->pg_recoalesce = 0;
while (!list_empty(&head)) {
struct nfs_page *req;
req = list_first_entry(&head, struct nfs_page, wb_list);
nfs_list_remove_request(req);
if (__nfs_pageio_add_request(desc, req))
continue;
if (desc->pg_error < 0) {
list_splice_tail(&head, &mirror->pg_list);
mirror->pg_recoalesce = 1;
return 0;
}
break;
}
} while (mirror->pg_recoalesce);
return 1;
}
static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
struct nfs_page *req)
{
int ret;
do {
ret = __nfs_pageio_add_request(desc, req);
if (ret)
break;
if (desc->pg_error < 0)
break;
ret = nfs_do_recoalesce(desc);
} while (ret);
return ret;
}
int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
struct nfs_page *req)
{
u32 midx;
unsigned int pgbase, offset, bytes;
struct nfs_page *dupreq, *lastreq;
pgbase = req->wb_pgbase;
offset = req->wb_offset;
bytes = req->wb_bytes;
nfs_pageio_setup_mirroring(desc, req);
if (desc->pg_error < 0)
goto out_failed;
for (midx = 0; midx < desc->pg_mirror_count; midx++) {
if (midx) {
nfs_page_group_lock(req);
/* find the last request */
for (lastreq = req->wb_head;
lastreq->wb_this_page != req->wb_head;
lastreq = lastreq->wb_this_page)
;
dupreq = nfs_create_request(req->wb_context,
req->wb_page, lastreq, pgbase, bytes);
if (IS_ERR(dupreq)) {
nfs_page_group_unlock(req);
desc->pg_error = PTR_ERR(dupreq);
goto out_failed;
}
nfs_lock_request(dupreq);
nfs_page_group_unlock(req);
dupreq->wb_offset = offset;
dupreq->wb_index = req->wb_index;
} else
dupreq = req;
if (nfs_pgio_has_mirroring(desc))
desc->pg_mirror_idx = midx;
if (!nfs_pageio_add_request_mirror(desc, dupreq))
goto out_failed;
}
return 1;
out_failed:
/*
* We might have failed before sending any reqs over wire.
* Clean up rest of the reqs in mirror pg_list.
*/
if (desc->pg_error) {
struct nfs_pgio_mirror *mirror;
void (*func)(struct list_head *);
/* remember fatal errors */
if (nfs_error_is_fatal(desc->pg_error))
nfs_context_set_write_error(req->wb_context,
desc->pg_error);
func = desc->pg_completion_ops->error_cleanup;
for (midx = 0; midx < desc->pg_mirror_count; midx++) {
mirror = &desc->pg_mirrors[midx];
func(&mirror->pg_list);
}
}
return 0;
}
/*
* nfs_pageio_complete_mirror - Complete I/O on the current mirror of an
* nfs_pageio_descriptor
* @desc: pointer to io descriptor
* @mirror_idx: pointer to mirror index
*/
static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
u32 mirror_idx)
{
struct nfs_pgio_mirror *mirror = &desc->pg_mirrors[mirror_idx];
u32 restore_idx = desc->pg_mirror_idx;
if (nfs_pgio_has_mirroring(desc))
desc->pg_mirror_idx = mirror_idx;
for (;;) {
nfs_pageio_doio(desc);
if (!mirror->pg_recoalesce)
break;
if (!nfs_do_recoalesce(desc))
break;
}
desc->pg_mirror_idx = restore_idx;
}
/*
* nfs_pageio_resend - Transfer requests to new descriptor and resend
* @hdr - the pgio header to move request from
* @desc - the pageio descriptor to add requests to
*
* Try to move each request (nfs_page) from @hdr to @desc then attempt
* to send them.
*
* Returns 0 on success and < 0 on error.
*/
int nfs_pageio_resend(struct nfs_pageio_descriptor *desc,
struct nfs_pgio_header *hdr)
{
LIST_HEAD(failed);
desc->pg_io_completion = hdr->io_completion;
desc->pg_dreq = hdr->dreq;
while (!list_empty(&hdr->pages)) {
struct nfs_page *req = nfs_list_entry(hdr->pages.next);
nfs_list_remove_request(req);
if (!nfs_pageio_add_request(desc, req))
nfs_list_add_request(req, &failed);
}
nfs_pageio_complete(desc);
if (!list_empty(&failed)) {
list_move(&failed, &hdr->pages);
return desc->pg_error < 0 ? desc->pg_error : -EIO;
}
return 0;
}
EXPORT_SYMBOL_GPL(nfs_pageio_resend);
/**
* nfs_pageio_complete - Complete I/O then cleanup an nfs_pageio_descriptor
* @desc: pointer to io descriptor
*/
void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
{
u32 midx;
for (midx = 0; midx < desc->pg_mirror_count; midx++)
nfs_pageio_complete_mirror(desc, midx);
if (desc->pg_ops->pg_cleanup)
desc->pg_ops->pg_cleanup(desc);
nfs_pageio_cleanup_mirroring(desc);
}
/**
* nfs_pageio_cond_complete - Conditional I/O completion
* @desc: pointer to io descriptor
* @index: page index
*
* It is important to ensure that processes don't try to take locks
* on non-contiguous ranges of pages as that might deadlock. This
* function should be called before attempting to wait on a locked
* nfs_page. It will complete the I/O if the page index 'index'
* is not contiguous with the existing list of pages in 'desc'.
*/
void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *desc, pgoff_t index)
{
struct nfs_pgio_mirror *mirror;
struct nfs_page *prev;
u32 midx;
for (midx = 0; midx < desc->pg_mirror_count; midx++) {
mirror = &desc->pg_mirrors[midx];
if (!list_empty(&mirror->pg_list)) {
prev = nfs_list_entry(mirror->pg_list.prev);
if (index != prev->wb_index + 1) {
nfs_pageio_complete(desc);
break;
}
}
}
}
int __init nfs_init_nfspagecache(void)
{
nfs_page_cachep = kmem_cache_create("nfs_page",
sizeof(struct nfs_page),
0, SLAB_HWCACHE_ALIGN,
NULL);
if (nfs_page_cachep == NULL)
return -ENOMEM;
return 0;
}
void nfs_destroy_nfspagecache(void)
{
kmem_cache_destroy(nfs_page_cachep);
}
static const struct rpc_call_ops nfs_pgio_common_ops = {
.rpc_call_prepare = nfs_pgio_prepare,
.rpc_call_done = nfs_pgio_result,
.rpc_release = nfs_pgio_release,
};
const struct nfs_pageio_ops nfs_pgio_rw_ops = {
.pg_test = nfs_generic_pg_test,
.pg_doio = nfs_generic_pg_pgios,
};
// Copyright 2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package google.ads.googleads.v4.enums;
import "google/api/annotations.proto";
option csharp_namespace = "Google.Ads.GoogleAds.V4.Enums";
option go_package = "google.golang.org/genproto/googleapis/ads/googleads/v4/enums;enums";
option java_multiple_files = true;
option java_outer_classname = "UserListMembershipStatusProto";
option java_package = "com.google.ads.googleads.v4.enums";
option objc_class_prefix = "GAA";
option php_namespace = "Google\\Ads\\GoogleAds\\V4\\Enums";
option ruby_package = "Google::Ads::GoogleAds::V4::Enums";
// Proto file describing user list membership status.
// Membership status of this user list. Indicates whether a user list is open
// or active. Only open user lists can accumulate more users and can be used for
// targeting.
message UserListMembershipStatusEnum {
// Enum containing possible user list membership statuses.
enum UserListMembershipStatus {
// Not specified.
UNSPECIFIED = 0;
// Used for return value only. Represents value unknown in this version.
UNKNOWN = 1;
// Open status - List is accruing members and can be targeted to.
OPEN = 2;
// Closed status - No new members being added. Cannot be used for targeting.
CLOSED = 3;
}
}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<link rel="shortcut icon" type="image/ico" href="http://www.datatables.net/favicon.ico">
<meta name="viewport" content="initial-scale=1.0, maximum-scale=2.0">
<title>Buttons example - PDF - message</title>
<link rel="stylesheet" type="text/css" href="../../../../media/css/jquery.dataTables.css">
<link rel="stylesheet" type="text/css" href="../../css/buttons.dataTables.css">
<link rel="stylesheet" type="text/css" href="../../../../examples/resources/syntax/shCore.css">
<link rel="stylesheet" type="text/css" href="../../../../examples/resources/demo.css">
<style type="text/css" class="init">
</style>
<script type="text/javascript" language="javascript" src="//code.jquery.com/jquery-1.12.4.js">
</script>
<script type="text/javascript" language="javascript" src="../../../../media/js/jquery.dataTables.js">
</script>
<script type="text/javascript" language="javascript" src="../../js/dataTables.buttons.js">
</script>
<script type="text/javascript" language="javascript" src="//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/pdfmake.min.js">
</script>
<script type="text/javascript" language="javascript" src="//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/vfs_fonts.js">
</script>
<script type="text/javascript" language="javascript" src="../../js/buttons.html5.js">
</script>
<script type="text/javascript" language="javascript" src="../../../../examples/resources/syntax/shCore.js">
</script>
<script type="text/javascript" language="javascript" src="../../../../examples/resources/demo.js">
</script>
<script type="text/javascript" language="javascript" class="init">
$(document).ready(function() {
$('#example').DataTable( {
dom: 'Bfrtip',
buttons: [
{
extend: 'pdfHtml5',
message: 'PDF created by PDFMake with Buttons for DataTables.'
}
]
} );
} );
</script>
</head>
<body class="dt-example">
<div class="container">
<section>
<h1>Buttons example <span>PDF - message</span></h1>
<div class="info">
<p>It can often be useful to include description text in the print view to provide summary information about the table that will be included in the generated PDF.
The <code>message</code> option of the <a href="//datatables.net/reference/button/pdfHtml5"><code class="button" title="Buttons button type">pdfHtml5</code></a>
button type provides this ability.</p>
<p>In the example here a simple string is used - for more complex options, please see the <a href="pdfImage.html">PDF image</a> example for complete customisation
control.</p>
</div>
<div class="demo-html"></div>
<table id="example" class="display" cellspacing="0" width="100%">
<thead>
<tr>
<th>Name</th>
<th>Position</th>
<th>Office</th>
<th>Age</th>
<th>Start date</th>
<th>Salary</th>
</tr>
</thead>
<tfoot>
<tr>
<th>Name</th>
<th>Position</th>
<th>Office</th>
<th>Age</th>
<th>Start date</th>
<th>Salary</th>
</tr>
</tfoot>
<tbody>
<tr>
<td>Tiger Nixon</td>
<td>System Architect</td>
<td>Edinburgh</td>
<td>61</td>
<td>2011/04/25</td>
<td>$320,800</td>
</tr>
<tr>
<td>Garrett Winters</td>
<td>Accountant</td>
<td>Tokyo</td>
<td>63</td>
<td>2011/07/25</td>
<td>$170,750</td>
</tr>
<tr>
<td>Ashton Cox</td>
<td>Junior Technical Author</td>
<td>San Francisco</td>
<td>66</td>
<td>2009/01/12</td>
<td>$86,000</td>
</tr>
<tr>
<td>Cedric Kelly</td>
<td>Senior Javascript Developer</td>
<td>Edinburgh</td>
<td>22</td>
<td>2012/03/29</td>
<td>$433,060</td>
</tr>
<tr>
<td>Airi Satou</td>
<td>Accountant</td>
<td>Tokyo</td>
<td>33</td>
<td>2008/11/28</td>
<td>$162,700</td>
</tr>
<tr>
<td>Brielle Williamson</td>
<td>Integration Specialist</td>
<td>New York</td>
<td>61</td>
<td>2012/12/02</td>
<td>$372,000</td>
</tr>
<tr>
<td>Herrod Chandler</td>
<td>Sales Assistant</td>
<td>San Francisco</td>
<td>59</td>
<td>2012/08/06</td>
<td>$137,500</td>
</tr>
<tr>
<td>Rhona Davidson</td>
<td>Integration Specialist</td>
<td>Tokyo</td>
<td>55</td>
<td>2010/10/14</td>
<td>$327,900</td>
</tr>
<tr>
<td>Colleen Hurst</td>
<td>Javascript Developer</td>
<td>San Francisco</td>
<td>39</td>
<td>2009/09/15</td>
<td>$205,500</td>
</tr>
<tr>
<td>Sonya Frost</td>
<td>Software Engineer</td>
<td>Edinburgh</td>
<td>23</td>
<td>2008/12/13</td>
<td>$103,600</td>
</tr>
<tr>
<td>Jena Gaines</td>
<td>Office Manager</td>
<td>London</td>
<td>30</td>
<td>2008/12/19</td>
<td>$90,560</td>
</tr>
<tr>
<td>Quinn Flynn</td>
<td>Support Lead</td>
<td>Edinburgh</td>
<td>22</td>
<td>2013/03/03</td>
<td>$342,000</td>
</tr>
<tr>
<td>Charde Marshall</td>
<td>Regional Director</td>
<td>San Francisco</td>
<td>36</td>
<td>2008/10/16</td>
<td>$470,600</td>
</tr>
<tr>
<td>Haley Kennedy</td>
<td>Senior Marketing Designer</td>
<td>London</td>
<td>43</td>
<td>2012/12/18</td>
<td>$313,500</td>
</tr>
<tr>
<td>Tatyana Fitzpatrick</td>
<td>Regional Director</td>
<td>London</td>
<td>19</td>
<td>2010/03/17</td>
<td>$385,750</td>
</tr>
<tr>
<td>Michael Silva</td>
<td>Marketing Designer</td>
<td>London</td>
<td>66</td>
<td>2012/11/27</td>
<td>$198,500</td>
</tr>
<tr>
<td>Paul Byrd</td>
<td>Chief Financial Officer (CFO)</td>
<td>New York</td>
<td>64</td>
<td>2010/06/09</td>
<td>$725,000</td>
</tr>
<tr>
<td>Gloria Little</td>
<td>Systems Administrator</td>
<td>New York</td>
<td>59</td>
<td>2009/04/10</td>
<td>$237,500</td>
</tr>
<tr>
<td>Bradley Greer</td>
<td>Software Engineer</td>
<td>London</td>
<td>41</td>
<td>2012/10/13</td>
<td>$132,000</td>
</tr>
<tr>
<td>Dai Rios</td>
<td>Personnel Lead</td>
<td>Edinburgh</td>
<td>35</td>
<td>2012/09/26</td>
<td>$217,500</td>
</tr>
<tr>
<td>Jenette Caldwell</td>
<td>Development Lead</td>
<td>New York</td>
<td>30</td>
<td>2011/09/03</td>
<td>$345,000</td>
</tr>
<tr>
<td>Yuri Berry</td>
<td>Chief Marketing Officer (CMO)</td>
<td>New York</td>
<td>40</td>
<td>2009/06/25</td>
<td>$675,000</td>
</tr>
<tr>
<td>Caesar Vance</td>
<td>Pre-Sales Support</td>
<td>New York</td>
<td>21</td>
<td>2011/12/12</td>
<td>$106,450</td>
</tr>
<tr>
<td>Doris Wilder</td>
<td>Sales Assistant</td>
<td>Sidney</td>
<td>23</td>
<td>2010/09/20</td>
<td>$85,600</td>
</tr>
<tr>
<td>Angelica Ramos</td>
<td>Chief Executive Officer (CEO)</td>
<td>London</td>
<td>47</td>
<td>2009/10/09</td>
<td>$1,200,000</td>
</tr>
<tr>
<td>Gavin Joyce</td>
<td>Developer</td>
<td>Edinburgh</td>
<td>42</td>
<td>2010/12/22</td>
<td>$92,575</td>
</tr>
<tr>
<td>Jennifer Chang</td>
<td>Regional Director</td>
<td>Singapore</td>
<td>28</td>
<td>2010/11/14</td>
<td>$357,650</td>
</tr>
<tr>
<td>Brenden Wagner</td>
<td>Software Engineer</td>
<td>San Francisco</td>
<td>28</td>
<td>2011/06/07</td>
<td>$206,850</td>
</tr>
<tr>
<td>Fiona Green</td>
<td>Chief Operating Officer (COO)</td>
<td>San Francisco</td>
<td>48</td>
<td>2010/03/11</td>
<td>$850,000</td>
</tr>
<tr>
<td>Shou Itou</td>
<td>Regional Marketing</td>
<td>Tokyo</td>
<td>20</td>
<td>2011/08/14</td>
<td>$163,000</td>
</tr>
<tr>
<td>Michelle House</td>
<td>Integration Specialist</td>
<td>Sidney</td>
<td>37</td>
<td>2011/06/02</td>
<td>$95,400</td>
</tr>
<tr>
<td>Suki Burks</td>
<td>Developer</td>
<td>London</td>
<td>53</td>
<td>2009/10/22</td>
<td>$114,500</td>
</tr>
<tr>
<td>Prescott Bartlett</td>
<td>Technical Author</td>
<td>London</td>
<td>27</td>
<td>2011/05/07</td>
<td>$145,000</td>
</tr>
<tr>
<td>Gavin Cortez</td>
<td>Team Leader</td>
<td>San Francisco</td>
<td>22</td>
<td>2008/10/26</td>
<td>$235,500</td>
</tr>
<tr>
<td>Martena Mccray</td>
<td>Post-Sales support</td>
<td>Edinburgh</td>
<td>46</td>
<td>2011/03/09</td>
<td>$324,050</td>
</tr>
<tr>
<td>Unity Butler</td>
<td>Marketing Designer</td>
<td>San Francisco</td>
<td>47</td>
<td>2009/12/09</td>
<td>$85,675</td>
</tr>
<tr>
<td>Howard Hatfield</td>
<td>Office Manager</td>
<td>San Francisco</td>
<td>51</td>
<td>2008/12/16</td>
<td>$164,500</td>
</tr>
<tr>
<td>Hope Fuentes</td>
<td>Secretary</td>
<td>San Francisco</td>
<td>41</td>
<td>2010/02/12</td>
<td>$109,850</td>
</tr>
<tr>
<td>Vivian Harrell</td>
<td>Financial Controller</td>
<td>San Francisco</td>
<td>62</td>
<td>2009/02/14</td>
<td>$452,500</td>
</tr>
<tr>
<td>Timothy Mooney</td>
<td>Office Manager</td>
<td>London</td>
<td>37</td>
<td>2008/12/11</td>
<td>$136,200</td>
</tr>
<tr>
<td>Jackson Bradshaw</td>
<td>Director</td>
<td>New York</td>
<td>65</td>
<td>2008/09/26</td>
<td>$645,750</td>
</tr>
<tr>
<td>Olivia Liang</td>
<td>Support Engineer</td>
<td>Singapore</td>
<td>64</td>
<td>2011/02/03</td>
<td>$234,500</td>
</tr>
<tr>
<td>Bruno Nash</td>
<td>Software Engineer</td>
<td>London</td>
<td>38</td>
<td>2011/05/03</td>
<td>$163,500</td>
</tr>
<tr>
<td>Sakura Yamamoto</td>
<td>Support Engineer</td>
<td>Tokyo</td>
<td>37</td>
<td>2009/08/19</td>
<td>$139,575</td>
</tr>
<tr>
<td>Thor Walton</td>
<td>Developer</td>
<td>New York</td>
<td>61</td>
<td>2013/08/11</td>
<td>$98,540</td>
</tr>
<tr>
<td>Finn Camacho</td>
<td>Support Engineer</td>
<td>San Francisco</td>
<td>47</td>
<td>2009/07/07</td>
<td>$87,500</td>
</tr>
<tr>
<td>Serge Baldwin</td>
<td>Data Coordinator</td>
<td>Singapore</td>
<td>64</td>
<td>2012/04/09</td>
<td>$138,575</td>
</tr>
<tr>
<td>Zenaida Frank</td>
<td>Software Engineer</td>
<td>New York</td>
<td>63</td>
<td>2010/01/04</td>
<td>$125,250</td>
</tr>
<tr>
<td>Zorita Serrano</td>
<td>Software Engineer</td>
<td>San Francisco</td>
<td>56</td>
<td>2012/06/01</td>
<td>$115,000</td>
</tr>
<tr>
<td>Jennifer Acosta</td>
<td>Junior Javascript Developer</td>
<td>Edinburgh</td>
<td>43</td>
<td>2013/02/01</td>
<td>$75,650</td>
</tr>
<tr>
<td>Cara Stevens</td>
<td>Sales Assistant</td>
<td>New York</td>
<td>46</td>
<td>2011/12/06</td>
<td>$145,600</td>
</tr>
<tr>
<td>Hermione Butler</td>
<td>Regional Director</td>
<td>London</td>
<td>47</td>
<td>2011/03/21</td>
<td>$356,250</td>
</tr>
<tr>
<td>Lael Greer</td>
<td>Systems Administrator</td>
<td>London</td>
<td>21</td>
<td>2009/02/27</td>
<td>$103,500</td>
</tr>
<tr>
<td>Jonas Alexander</td>
<td>Developer</td>
<td>San Francisco</td>
<td>30</td>
<td>2010/07/14</td>
<td>$86,500</td>
</tr>
<tr>
<td>Shad Decker</td>
<td>Regional Director</td>
<td>Edinburgh</td>
<td>51</td>
<td>2008/11/13</td>
<td>$183,000</td>
</tr>
<tr>
<td>Michael Bruce</td>
<td>Javascript Developer</td>
<td>Singapore</td>
<td>29</td>
<td>2011/06/27</td>
<td>$183,000</td>
</tr>
<tr>
<td>Donna Snider</td>
<td>Customer Support</td>
<td>New York</td>
<td>27</td>
<td>2011/01/25</td>
<td>$112,000</td>
</tr>
</tbody>
</table>
<ul class="tabs">
<li class="active">Javascript</li>
<li>HTML</li>
<li>CSS</li>
<li>Ajax</li>
<li>Server-side script</li>
</ul>
<div class="tabs">
<div class="js">
<p>The Javascript shown below is used to initialise the table shown in this example:</p><code class="multiline language-js">$(document).ready(function() {
$('#example').DataTable( {
dom: 'Bfrtip',
buttons: [
{
extend: 'pdfHtml5',
message: 'PDF created by PDFMake with Buttons for DataTables.'
}
]
} );
} );</code>
<p>In addition to the above code, the following Javascript library files are loaded for use in this example:</p>
<ul>
<li>
<a href="//code.jquery.com/jquery-1.12.4.js">//code.jquery.com/jquery-1.12.4.js</a>
</li>
<li>
<a href="../../../../media/js/jquery.dataTables.js">../../../../media/js/jquery.dataTables.js</a>
</li>
<li>
<a href="../../js/dataTables.buttons.js">../../js/dataTables.buttons.js</a>
</li>
<li>
<a href="//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/pdfmake.min.js">//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/pdfmake.min.js</a>
</li>
<li>
<a href="//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/vfs_fonts.js">//cdn.rawgit.com/bpampuch/pdfmake/0.1.27/build/vfs_fonts.js</a>
</li>
<li>
<a href="../../js/buttons.html5.js">../../js/buttons.html5.js</a>
</li>
</ul>
</div>
<div class="table">
<p>The HTML shown below is the raw HTML table element, before it has been enhanced by DataTables:</p>
</div>
<div class="css">
<div>
<p>This example uses a little bit of additional CSS beyond what is loaded from the library files (below), in order to correctly display the table. The
additional CSS used is shown below:</p><code class="multiline language-css"></code>
</div>
<p>The following CSS library files are loaded for use in this example to provide the styling of the table:</p>
<ul>
<li>
<a href="../../../../media/css/jquery.dataTables.css">../../../../media/css/jquery.dataTables.css</a>
</li>
<li>
<a href="../../css/buttons.dataTables.css">../../css/buttons.dataTables.css</a>
</li>
</ul>
</div>
<div class="ajax">
<p>This table loads data by Ajax. The latest data that has been loaded is shown below. This data will update automatically as any additional data is
loaded.</p>
</div>
<div class="php">
<p>The script used to perform the server-side processing for this table is shown below. Please note that this is just an example script using PHP. Server-side
processing scripts can be written in any language, using <a href="//datatables.net/manual/server-side">the protocol described in the DataTables
documentation</a>.</p>
</div>
</div>
</section>
</div>
<section>
<div class="footer">
<div class="gradient"></div>
<div class="liner">
<h2>Other examples</h2>
<div class="toc">
<div class="toc-group">
<h3><a href="../initialisation/index.html">Basic initialisation</a></h3>
<ul class="toc">
<li>
<a href="../initialisation/simple.html">Basic initialisation</a>
</li>
<li>
<a href="../initialisation/export.html">File export</a>
</li>
<li>
<a href="../initialisation/custom.html">Custom button</a>
</li>
<li>
<a href="../initialisation/className.html">Class names</a>
</li>
<li>
<a href="../initialisation/keys.html">Keyboard activation</a>
</li>
<li>
<a href="../initialisation/collections.html">Collections</a>
</li>
<li>
<a href="../initialisation/collections-sub.html">Multi-level collections</a>
</li>
<li>
<a href="../initialisation/collections-autoClose.html">Auto close collection</a>
</li>
<li>
<a href="../initialisation/plugins.html">Plug-ins</a>
</li>
<li>
<a href="../initialisation/new.html">`new` initialisation</a>
</li>
<li>
<a href="../initialisation/multiple.html">Multiple button groups</a>
</li>
<li>
<a href="../initialisation/pageLength.html">Page length</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="./index.html">HTML 5 data export</a></h3>
<ul class="toc active">
<li>
<a href="./simple.html">HTML5 export buttons</a>
</li>
<li>
<a href="./tsv.html">Tab separated values</a>
</li>
<li>
<a href="./filename.html">File name</a>
</li>
<li>
<a href="./copyi18n.html">Copy button internationalisation</a>
</li>
<li>
<a href="./columns.html">Column selectors</a>
</li>
<li>
<a href="./outputFormat-orthogonal.html">Format output data - orthogonal data</a>
</li>
<li>
<a href="./outputFormat-function.html">Format output data - export options</a>
</li>
<li>
<a href="./excelTextBold.html">Excel - Bold text</a>
</li>
<li>
<a href="./excelCellShading.html">Excel - Cell background</a>
</li>
<li>
<a href="./excelBorder.html">Excel - Customise borders</a>
</li>
<li class="active">
<a href="./pdfMessage.html">PDF - message</a>
</li>
<li>
<a href="./pdfPage.html">PDF - page size and orientation</a>
</li>
<li>
<a href="./pdfImage.html">PDF - image</a>
</li>
<li>
<a href="./pdfOpen.html">PDF - open in new window</a>
</li>
<li>
<a href="./customFile.html">Custom file (JSON)</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="../flash/index.html">Flash data export</a></h3>
<ul class="toc">
<li>
<a href="../flash/simple.html">Flash export buttons</a>
</li>
<li>
<a href="../flash/tsv.html">Tab separated values</a>
</li>
<li>
<a href="../flash/filename.html">File name</a>
</li>
<li>
<a href="../flash/copyi18n.html">Copy button internationalisation</a>
</li>
<li>
<a href="../flash/pdfMessage.html">PDF message</a>
</li>
<li>
<a href="../flash/pdfPage.html">Page size and orientation</a>
</li>
<li>
<a href="../flash/hidden.html">Hidden initialisation</a>
</li>
<li>
<a href="../flash/swfPath.html">SWF file location</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="../column_visibility/index.html">Column visibility</a></h3>
<ul class="toc">
<li>
<a href="../column_visibility/simple.html">Basic column visibility</a>
</li>
<li>
<a href="../column_visibility/layout.html">Multi-column layout</a>
</li>
<li>
<a href="../column_visibility/text.html">Internationalisation</a>
</li>
<li>
<a href="../column_visibility/columnText.html">Customisation of column button text</a>
</li>
<li>
<a href="../column_visibility/restore.html">Restore column visibility</a>
</li>
<li>
<a href="../column_visibility/columns.html">Select columns</a>
</li>
<li>
<a href="../column_visibility/columnsToggle.html">Visibility toggle buttons</a>
</li>
<li>
<a href="../column_visibility/columnGroups.html">Column groups</a>
</li>
<li>
<a href="../column_visibility/stateSave.html">State saving</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="../print/index.html">Print</a></h3>
<ul class="toc">
<li>
<a href="../print/simple.html">Print button</a>
</li>
<li>
<a href="../print/message.html">Custom message</a>
</li>
<li>
<a href="../print/columns.html">Export options - column selector</a>
</li>
<li>
<a href="../print/select.html">Export options - row selector</a>
</li>
<li>
<a href="../print/autoPrint.html">Disable auto print</a>
</li>
<li>
<a href="../print/customisation.html">Customisation of the print view window</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="../api/index.html">API</a></h3>
<ul class="toc">
<li>
<a href="../api/enable.html">Enable / disable</a>
</li>
<li>
<a href="../api/text.html">Dynamic text</a>
</li>
<li>
<a href="../api/addRemove.html">Adding and removing buttons dynamically</a>
</li>
<li>
<a href="../api/group.html">Group selection</a>
</li>
</ul>
</div>
<div class="toc-group">
<h3><a href="../styling/index.html">Styling</a></h3>
<ul class="toc">
<li>
<a href="../styling/bootstrap.html">Bootstrap 3</a>
</li>
<li>
<a href="../styling/bootstrap4.html">Bootstrap 4</a>
</li>
<li>
<a href="../styling/foundation.html">Foundation styling</a>
</li>
<li>
<a href="../styling/jqueryui.html">jQuery UI styling</a>
</li>
<li>
<a href="../styling/semanticui.html">Semantic UI styling</a>
</li>
<li>
<a href="../styling/icons.html">Icons</a>
</li>
</ul>
</div>
</div>
<div class="epilogue">
<p>Please refer to the <a href="http://www.datatables.net">DataTables documentation</a> for full information about its API properties and methods.<br>
Additionally, there are a wide range of <a href="http://www.datatables.net/extensions">extensions</a> and <a href=
"http://www.datatables.net/plug-ins">plug-ins</a> which extend the capabilities of DataTables.</p>
<p class="copyright">DataTables designed and created by <a href="http://www.sprymedia.co.uk">SpryMedia Ltd</a> © 2007-2017<br>
DataTables is licensed under the <a href="http://www.datatables.net/mit">MIT license</a>.</p>
</div>
</div>
</div>
</section>
</body>
</html>
/**
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0.
*/
#include <aws/opsworks/model/DescribeRaidArraysRequest.h>
#include <aws/core/utils/json/JsonSerializer.h>
#include <utility>
using namespace Aws::OpsWorks::Model;
using namespace Aws::Utils::Json;
using namespace Aws::Utils;
DescribeRaidArraysRequest::DescribeRaidArraysRequest() :
m_instanceIdHasBeenSet(false),
m_stackIdHasBeenSet(false),
m_raidArrayIdsHasBeenSet(false)
{
}
Aws::String DescribeRaidArraysRequest::SerializePayload() const
{
JsonValue payload;
if(m_instanceIdHasBeenSet)
{
payload.WithString("InstanceId", m_instanceId);
}
if(m_stackIdHasBeenSet)
{
payload.WithString("StackId", m_stackId);
}
if(m_raidArrayIdsHasBeenSet)
{
Array<JsonValue> raidArrayIdsJsonList(m_raidArrayIds.size());
for(unsigned raidArrayIdsIndex = 0; raidArrayIdsIndex < raidArrayIdsJsonList.GetLength(); ++raidArrayIdsIndex)
{
raidArrayIdsJsonList[raidArrayIdsIndex].AsString(m_raidArrayIds[raidArrayIdsIndex]);
}
payload.WithArray("RaidArrayIds", std::move(raidArrayIdsJsonList));
}
return payload.View().WriteReadable();
}
Aws::Http::HeaderValueCollection DescribeRaidArraysRequest::GetRequestSpecificHeaders() const
{
Aws::Http::HeaderValueCollection headers;
headers.insert(Aws::Http::HeaderValuePair("X-Amz-Target", "OpsWorks_20130218.DescribeRaidArrays"));
return headers;
}
import torch
def heaviside(data):
"""
A `heaviside step function <https://en.wikipedia.org/wiki/Heaviside_step_function>`_
that truncates numbers <= 0 to 0 and everything else to 1.
.. math::
        H[n] = \\begin{cases} 0, & n \\leq 0, \\\\ 1, & n > 0. \\end{cases}
"""
return torch.where(
data <= torch.zeros_like(data),
torch.zeros_like(data),
torch.ones_like(data),
)
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import '../page_index.dart';
class SectionView extends StatelessWidget {
final VoidCallback onPressed;
final String title;
final String more;
final bool hiddenMore;
final Color textColor;
final EdgeInsetsGeometry padding;
final Widget child;
SectionView(
this.title, {
Key key,
this.onPressed,
this.more: "更多",
this.hiddenMore: false,
this.textColor = Colors.black,
this.padding,
@required this.child,
  }) : assert(title != null),
       assert(child != null),
super(key: key);
@override
Widget build(BuildContext context) {
return Column(
children: <Widget>[
Container(
padding: padding ?? EdgeInsets.fromLTRB(10, 5, 10, 5),
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
crossAxisAlignment: CrossAxisAlignment.center,
children: <Widget>[
Container(
padding: EdgeInsets.symmetric(vertical: 8.0),
child: Text(
"$title",
style: TextStyle(
fontSize: 20.0,
fontWeight: FontWeight.bold,
color: textColor,
height: 1.5),
),
decoration: UnderlineTabIndicator(
borderSide:
BorderSide(width: 2.0, color: Colors.white),
insets: EdgeInsets.fromLTRB(0, 10, 0, 0))),
Offstage(
child: InkWell(
child: Container(
height: 36,
padding: EdgeInsets.symmetric(horizontal: 10),
child: Row(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('$more',
style: TextStyle(
fontWeight: FontWeight.bold,
fontSize: 14,
color: textColor)),
Gaps.hGap3,
Icon(CupertinoIcons.forward,
size: 18, color: textColor)
]),
),
onTap: onPressed),
offstage: hiddenMore),
])),
child
],
);
}
}
/*******************************************************************************
* This file is part of OpenNMS(R).
*
* Copyright (C) 2017-2017 The OpenNMS Group, Inc.
* OpenNMS(R) is Copyright (C) 1999-2017 The OpenNMS Group, Inc.
*
* OpenNMS(R) is a registered trademark of The OpenNMS Group, Inc.
*
* OpenNMS(R) is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as published
* by the Free Software Foundation, either version 3 of the License,
* or (at your option) any later version.
*
* OpenNMS(R) is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with OpenNMS(R). If not, see:
* http://www.gnu.org/licenses/
*
* For more information contact:
* OpenNMS(R) Licensing <[email protected]>
* http://www.opennms.org/
* http://www.opennms.com/
*******************************************************************************/
package org.opennms.netmgt.config.charts;
import java.util.Objects;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement(name = "red")
@XmlAccessorType(XmlAccessType.FIELD)
public class Red implements java.io.Serializable {
private static final long serialVersionUID = 1L;
@XmlElement(name = "rgb-color", required = true)
private Integer rgbColor;
public Red() {
}
/**
*/
public void deleteRgbColor() {
this.rgbColor= null;
}
/**
* Overrides the Object.equals method.
*
* @param obj
* @return true if the objects are equal.
*/
@Override
public boolean equals(final Object obj) {
if ( this == obj ) {
return true;
}
if (obj instanceof Red) {
Red temp = (Red)obj;
boolean equals = Objects.equals(temp.rgbColor, rgbColor);
return equals;
}
return false;
}
/**
* Returns the value of field 'rgbColor'.
*
* @return the value of field 'RgbColor'.
*/
public Integer getRgbColor() {
return this.rgbColor;
}
/**
* Method hasRgbColor.
*
     * @return true if the rgbColor field has been set
*/
public boolean hasRgbColor() {
return this.rgbColor != null;
}
/**
* Method hashCode.
*
* @return a hash code value for the object.
*/
@Override
public int hashCode() {
int hash = Objects.hash(
rgbColor);
return hash;
}
/**
* Sets the value of field 'rgbColor'.
*
* @param rgbColor the value of field 'rgbColor'.
*/
public void setRgbColor(final Integer rgbColor) {
if (rgbColor == null) {
throw new IllegalArgumentException("'rgb-color' is a required element!");
}
this.rgbColor = rgbColor;
}
}
declare namespace Keycode {
interface CodesMap {
[key: string]: number;
}
interface KeycodeStatic {
(event: Event): string;
(keycode: number): string;
(name: string): number;
code: CodesMap;
codes: CodesMap;
aliases: CodesMap;
names: CodesMap;
title: CodesMap;
}
}
declare var keycode: Keycode.KeycodeStatic;
declare module "keycode" {
export = keycode;
}
/**
@license
Copyright (c) 2015 The Polymer Project Authors. All rights reserved.
This code may only be used under the BSD style license found at
http://polymer.github.io/LICENSE.txt The complete set of authors may be found at
http://polymer.github.io/AUTHORS.txt The complete set of contributors may be
found at http://polymer.github.io/CONTRIBUTORS.txt Code distributed by Google as
part of the polymer project is also subject to an additional IP rights grant
found at http://polymer.github.io/PATENTS.txt
*/
import "../polymer/polymer-legacy.js";
import { IronSelectableBehavior } from './iron-selectable.js';
/**
* @polymerBehavior IronMultiSelectableBehavior
*/
export const IronMultiSelectableBehaviorImpl = {
properties: {
/**
* If true, multiple selections are allowed.
*/
multi: {
type: Boolean,
value: false,
observer: 'multiChanged'
},
/**
* Gets or sets the selected elements. This is used instead of `selected`
* when `multi` is true.
*/
selectedValues: {
type: Array,
notify: true,
value: function () {
return [];
}
},
/**
* Returns an array of currently selected items.
*/
selectedItems: {
type: Array,
readOnly: true,
notify: true,
value: function () {
return [];
}
}
},
observers: ['_updateSelected(selectedValues.splices)'],
/**
* Selects the given value. If the `multi` property is true, then the selected
* state of the `value` will be toggled; otherwise the `value` will be
* selected.
*
* @method select
* @param {string|number} value the value to select.
*/
select: function (value) {
if (this.multi) {
this._toggleSelected(value);
} else {
this.selected = value;
}
},
multiChanged: function (multi) {
this._selection.multi = multi;
this._updateSelected();
},
// UNUSED, FOR API COMPATIBILITY
get _shouldUpdateSelection() {
return this.selected != null || this.selectedValues != null && this.selectedValues.length;
},
_updateAttrForSelected: function () {
if (!this.multi) {
IronSelectableBehavior._updateAttrForSelected.apply(this);
} else if (this.selectedItems && this.selectedItems.length > 0) {
this.selectedValues = this.selectedItems.map(function (selectedItem) {
return this._indexToValue(this.indexOf(selectedItem));
}, this).filter(function (unfilteredValue) {
return unfilteredValue != null;
}, this);
}
},
_updateSelected: function () {
if (this.multi) {
this._selectMulti(this.selectedValues);
} else {
this._selectSelected(this.selected);
}
},
_selectMulti: function (values) {
values = values || [];
var selectedItems = (this._valuesToItems(values) || []).filter(function (item) {
return item !== null && item !== undefined;
}); // clear all but the current selected items
this._selection.clear(selectedItems); // select only those not selected yet
for (var i = 0; i < selectedItems.length; i++) {
this._selection.setItemSelected(selectedItems[i], true);
} // Check for items, since this array is populated only when attached
if (this.fallbackSelection && !this._selection.get().length) {
var fallback = this._valueToItem(this.fallbackSelection);
if (fallback) {
this.select(this.fallbackSelection);
}
}
},
_selectionChange: function () {
var s = this._selection.get();
if (this.multi) {
this._setSelectedItems(s);
this._setSelectedItem(s.length ? s[0] : null);
} else {
if (s !== null && s !== undefined) {
this._setSelectedItems([s]);
this._setSelectedItem(s);
} else {
this._setSelectedItems([]);
this._setSelectedItem(null);
}
}
},
_toggleSelected: function (value) {
var i = this.selectedValues.indexOf(value);
var unselected = i < 0;
if (unselected) {
this.push('selectedValues', value);
} else {
this.splice('selectedValues', i, 1);
}
},
_valuesToItems: function (values) {
return values == null ? null : values.map(function (value) {
return this._valueToItem(value);
}, this);
}
};
/** @polymerBehavior */
export const IronMultiSelectableBehavior = [IronSelectableBehavior, IronMultiSelectableBehaviorImpl];
<div class="gf-form-group">
<div class="grafana-info-box">
<h4>Stackdriver Authentication</h4>
<p>There are two ways to authenticate the Stackdriver plugin - either by uploading a Service Account key file, or by
automatically retrieving credentials from the Google metadata server. The latter option is only available
when running Grafana on a GCE virtual machine.</p>
<h5>Uploading a Service Account Key File</h5>
<p>
First you need to create a Google Cloud Platform (GCP) Service Account for
the Project you want to show data for. A Grafana datasource integrates with one GCP Project. If you want to
visualize data from multiple GCP Projects then you need to create one datasource per GCP Project.
</p>
<p>
The <strong>Monitoring Viewer</strong> role provides all the permissions that Grafana needs. The following API
needs to be enabled on GCP for the datasource to work: <a class="external-link" target="_blank" href="https://console.cloud.google.com/apis/library/monitoring.googleapis.com">Monitoring
API</a>
</p>
<h5>GCE Default Service Account</h5>
<p>
If Grafana is running on a Google Compute Engine (GCE) virtual machine, it is possible for Grafana to
automatically retrieve the default project id and authentication token from the metadata server. In order for this to
      work, you need to make sure that you have a service account that is set up as the default account for the virtual
machine and that the service account has been given read access to the Stackdriver Monitoring API.
</p>
<p>Detailed instructions on how to create a Service Account can be found <a class="external-link" target="_blank"
href="http://docs.grafana.org/datasources/stackdriver/">in
the documentation.</a>
</p>
</div>
</div>
<div class="gf-form-group">
<div class="gf-form">
<h3>Authentication</h3>
<info-popover mode="header">Upload your Service Account key file or paste in the contents of the file. The file
contents will be encrypted and saved in the Grafana database.</info-popover>
</div>
<div class="gf-form-inline">
<div class="gf-form max-width-30">
<span class="gf-form-label width-10">Authentication Type</span>
<div class="gf-form-select-wrapper max-width-24">
<select class="gf-form-input" ng-model="ctrl.current.jsonData.authenticationType" ng-options="f.key as f.value for f in ctrl.authenticationTypes"></select>
</div>
</div>
</div>
<div ng-if="ctrl.current.jsonData.authenticationType === ctrl.defaultAuthenticationType && !ctrl.current.jsonData.clientEmail && !ctrl.inputDataValid">
<div class="gf-form-group" ng-if="!ctrl.inputDataValid">
<div class="gf-form">
<form>
<dash-upload on-upload="ctrl.onUpload(dash)" btn-text="Upload Service Account key file"></dash-upload>
</form>
</div>
</div>
<div class="gf-form-group">
<h5 class="section-heading" ng-if="!ctrl.inputDataValid">Or paste Service Account key JSON</h5>
<div class="gf-form" ng-if="!ctrl.inputDataValid">
<textarea rows="10" data-share-panel-url="" class="gf-form-input" ng-model="ctrl.jsonText" ng-paste="ctrl.onPasteJwt($event)"></textarea>
</div>
<div ng-repeat="valError in ctrl.validationErrors" class="text-error p-l-1">
<i class="fa fa-warning"></i>
{{valError}}
</div>
</div>
</div>
</div>
<div class="gf-form-group" ng-if="ctrl.current.jsonData.authenticationType === ctrl.defaultAuthenticationType && (ctrl.inputDataValid || ctrl.current.jsonData.clientEmail)">
<h6>Uploaded Key Details</h6>
<div class="gf-form">
<span class="gf-form-label width-10">Project</span>
<input class="gf-form-input width-40" disabled type="text" ng-model="ctrl.current.jsonData.defaultProject" />
</div>
<div class="gf-form">
<span class="gf-form-label width-10">Client Email</span>
<input class="gf-form-input width-40" disabled type="text" ng-model="ctrl.current.jsonData.clientEmail" />
</div>
<div class="gf-form">
<span class="gf-form-label width-10">Token URI</span>
<input class="gf-form-input width-40" disabled type="text" ng-model='ctrl.current.jsonData.tokenUri' />
</div>
<div class="gf-form" ng-if="ctrl.current.secureJsonFields.privateKey">
<span class="gf-form-label width-10">Private Key</span>
<input type="text" class="gf-form-input max-width-12" disabled="disabled" value="configured">
</div>
<div class="gf-form width-18">
<a class="btn btn-secondary gf-form-btn" href="#" ng-click="ctrl.resetValidationMessages()">Reset Service
Account Key </a>
<info-popover mode="right-normal">
Reset to clear the uploaded key and upload a new file.
</info-popover>
</div>
</div>
<p class="gf-form-label" ng-hide="ctrl.current.secureJsonFields.privateKey || ctrl.current.jsonData.authenticationType !== ctrl.defaultAuthenticationType"><i
class="fa fa-save"></i> Do not forget to save your changes after uploading a file.</p>
<p class="gf-form-label" ng-show="ctrl.current.jsonData.authenticationType !== ctrl.defaultAuthenticationType"><i class="fa fa-save"></i>
  Verify GCE default service account by clicking Save &amp; Test</p>
// Copyright Christopher Kormanyos 2002 - 2011.
// Copyright 2011 John Maddock.
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// This work is based on an earlier work:
// "Algorithm 910: A Portable C++ Multiple-Precision System for Special-Function Calculations",
// in ACM TOMS, {VOL 37, ISSUE 4, (February 2011)} (C) ACM, 2011. http://doi.acm.org/10.1145/1916461.1916469
//
// This file has no include guards or namespaces - it's expanded inline inside default_ops.hpp
//
#ifdef BOOST_MSVC
#pragma warning(push)
#pragma warning(disable:6326) // comparison of two constants
#endif
template <class T>
void hyp0F1(T& result, const T& b, const T& x)
{
typedef typename boost::multiprecision::detail::canonical<boost::int32_t, T>::type si_type;
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
// Compute the series representation of Hypergeometric0F1 taken from
// http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric0F1/06/01/01/
// There are no checks on input range or parameter boundaries.
T x_pow_n_div_n_fact(x);
T pochham_b (b);
T bp (b);
eval_divide(result, x_pow_n_div_n_fact, pochham_b);
eval_add(result, ui_type(1));
si_type n;
T tol;
tol = ui_type(1);
eval_ldexp(tol, tol, 1 - boost::multiprecision::detail::digits2<number<T, et_on> >::value());
eval_multiply(tol, result);
if(eval_get_sign(tol) < 0)
tol.negate();
T term;
const int series_limit =
boost::multiprecision::detail::digits2<number<T, et_on> >::value() < 100
? 100 : boost::multiprecision::detail::digits2<number<T, et_on> >::value();
// Series expansion of hyperg_0f1(; b; x).
for(n = 2; n < series_limit; ++n)
{
eval_multiply(x_pow_n_div_n_fact, x);
eval_divide(x_pow_n_div_n_fact, n);
eval_increment(bp);
eval_multiply(pochham_b, bp);
eval_divide(term, x_pow_n_div_n_fact, pochham_b);
eval_add(result, term);
bool neg_term = eval_get_sign(term) < 0;
if(neg_term)
term.negate();
if(term.compare(tol) <= 0)
break;
}
if(n >= series_limit)
BOOST_THROW_EXCEPTION(std::runtime_error("H0F1 Failed to Converge"));
}
template <class T>
void eval_sin(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The sin function is only valid for floating point types.");
if(&result == &x)
{
T temp;
eval_sin(temp, x);
result = temp;
return;
}
typedef typename boost::multiprecision::detail::canonical<boost::int32_t, T>::type si_type;
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
typedef typename mpl::front<typename T::float_types>::type fp_type;
switch(eval_fpclassify(x))
{
case FP_INFINITE:
case FP_NAN:
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
case FP_ZERO:
result = ui_type(0);
return;
default: ;
}
// Local copy of the argument
T xx = x;
// Analyze and prepare the phase of the argument.
// Make a local, positive copy of the argument, xx.
// The argument xx will be reduced to 0 <= xx <= pi/2.
bool b_negate_sin = false;
if(eval_get_sign(x) < 0)
{
xx.negate();
b_negate_sin = !b_negate_sin;
}
T n_pi, t;
// Remove even multiples of pi.
if(xx.compare(get_constant_pi<T>()) > 0)
{
eval_divide(n_pi, xx, get_constant_pi<T>());
eval_trunc(n_pi, n_pi);
t = ui_type(2);
eval_fmod(t, n_pi, t);
const bool b_n_pi_is_even = eval_get_sign(t) == 0;
eval_multiply(n_pi, get_constant_pi<T>());
eval_subtract(xx, n_pi);
BOOST_MATH_INSTRUMENT_CODE(xx.str(0, std::ios_base::scientific));
BOOST_MATH_INSTRUMENT_CODE(n_pi.str(0, std::ios_base::scientific));
// Adjust signs if the multiple of pi is not even.
if(!b_n_pi_is_even)
{
b_negate_sin = !b_negate_sin;
}
}
// Reduce the argument to 0 <= xx <= pi/2.
eval_ldexp(t, get_constant_pi<T>(), -1);
if(xx.compare(t) > 0)
{
eval_subtract(xx, get_constant_pi<T>(), xx);
BOOST_MATH_INSTRUMENT_CODE(xx.str(0, std::ios_base::scientific));
}
eval_subtract(t, xx);
const bool b_zero = eval_get_sign(xx) == 0;
const bool b_pi_half = eval_get_sign(t) == 0;
// Check if the reduced argument is very close to 0 or pi/2.
const bool b_near_zero = xx.compare(fp_type(1e-1)) < 0;
const bool b_near_pi_half = t.compare(fp_type(1e-1)) < 0;
if(b_zero)
{
result = ui_type(0);
}
else if(b_pi_half)
{
result = ui_type(1);
}
else if(b_near_zero)
{
eval_multiply(t, xx, xx);
eval_divide(t, si_type(-4));
T t2;
t2 = fp_type(1.5);
hyp0F1(result, t2, t);
BOOST_MATH_INSTRUMENT_CODE(result.str(0, std::ios_base::scientific));
eval_multiply(result, xx);
}
else if(b_near_pi_half)
{
eval_multiply(t, t);
eval_divide(t, si_type(-4));
T t2;
t2 = fp_type(0.5);
hyp0F1(result, t2, t);
BOOST_MATH_INSTRUMENT_CODE(result.str(0, std::ios_base::scientific));
}
else
{
// Scale to a small argument for an efficient Taylor series,
// implemented as a hypergeometric function. Use a standard
// divide by three identity a certain number of times.
// Here we use division by 3^9 --> (19683 = 3^9).
static const si_type n_scale = 9;
static const si_type n_three_pow_scale = static_cast<si_type>(19683L);
eval_divide(xx, n_three_pow_scale);
// Now with small arguments, we are ready for a series expansion.
eval_multiply(t, xx, xx);
eval_divide(t, si_type(-4));
T t2;
t2 = fp_type(1.5);
hyp0F1(result, t2, t);
BOOST_MATH_INSTRUMENT_CODE(result.str(0, std::ios_base::scientific));
eval_multiply(result, xx);
// Convert back using multiple angle identity.
for(boost::int32_t k = static_cast<boost::int32_t>(0); k < n_scale; k++)
{
// Rescale the cosine value using the multiple angle identity.
eval_multiply(t2, result, ui_type(3));
eval_multiply(t, result, result);
eval_multiply(t, result);
eval_multiply(t, ui_type(4));
eval_subtract(result, t2, t);
}
}
if(b_negate_sin)
result.negate();
}
template <class T>
void eval_cos(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The cos function is only valid for floating point types.");
if(&result == &x)
{
T temp;
eval_cos(temp, x);
result = temp;
return;
}
typedef typename boost::multiprecision::detail::canonical<boost::int32_t, T>::type si_type;
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
typedef typename mpl::front<typename T::float_types>::type fp_type;
switch(eval_fpclassify(x))
{
case FP_INFINITE:
case FP_NAN:
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
case FP_ZERO:
result = ui_type(1);
return;
default: ;
}
// Local copy of the argument
T xx = x;
// Analyze and prepare the phase of the argument.
// Make a local, positive copy of the argument, xx.
// The argument xx will be reduced to 0 <= xx <= pi/2.
bool b_negate_cos = false;
if(eval_get_sign(x) < 0)
{
xx.negate();
}
T n_pi, t;
// Remove even multiples of pi.
if(xx.compare(get_constant_pi<T>()) > 0)
{
eval_divide(t, xx, get_constant_pi<T>());
eval_trunc(n_pi, t);
BOOST_MATH_INSTRUMENT_CODE(n_pi.str(0, std::ios_base::scientific));
eval_multiply(t, n_pi, get_constant_pi<T>());
BOOST_MATH_INSTRUMENT_CODE(t.str(0, std::ios_base::scientific));
eval_subtract(xx, t);
BOOST_MATH_INSTRUMENT_CODE(xx.str(0, std::ios_base::scientific));
// Adjust signs if the multiple of pi is not even.
t = ui_type(2);
eval_fmod(t, n_pi, t);
const bool b_n_pi_is_even = eval_get_sign(t) == 0;
if(!b_n_pi_is_even)
{
b_negate_cos = !b_negate_cos;
}
}
// Reduce the argument to 0 <= xx <= pi/2.
eval_ldexp(t, get_constant_pi<T>(), -1);
int com = xx.compare(t);
if(com > 0)
{
eval_subtract(xx, get_constant_pi<T>(), xx);
b_negate_cos = !b_negate_cos;
BOOST_MATH_INSTRUMENT_CODE(xx.str(0, std::ios_base::scientific));
}
const bool b_zero = eval_get_sign(xx) == 0;
const bool b_pi_half = com == 0;
// Check if the reduced argument is very close to 0.
const bool b_near_zero = xx.compare(fp_type(1e-1)) < 0;
if(b_zero)
{
result = si_type(1);
}
else if(b_pi_half)
{
result = si_type(0);
}
else if(b_near_zero)
{
eval_multiply(t, xx, xx);
eval_divide(t, si_type(-4));
n_pi = fp_type(0.5f);
hyp0F1(result, n_pi, t);
BOOST_MATH_INSTRUMENT_CODE(result.str(0, std::ios_base::scientific));
}
else
{
eval_subtract(t, xx);
eval_sin(result, t);
}
if(b_negate_cos)
result.negate();
}
template <class T>
void eval_tan(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The tan function is only valid for floating point types.");
if(&result == &x)
{
T temp;
eval_tan(temp, x);
result = temp;
return;
}
T t;
eval_sin(result, x);
eval_cos(t, x);
eval_divide(result, t);
}
template <class T>
void hyp2F1(T& result, const T& a, const T& b, const T& c, const T& x)
{
// Compute the series representation of hyperg_2f1 taken from
// Abramowitz and Stegun 15.1.1.
// There are no checks on input range or parameter boundaries.
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
T x_pow_n_div_n_fact(x);
T pochham_a (a);
T pochham_b (b);
T pochham_c (c);
T ap (a);
T bp (b);
T cp (c);
eval_multiply(result, pochham_a, pochham_b);
eval_divide(result, pochham_c);
eval_multiply(result, x_pow_n_div_n_fact);
eval_add(result, ui_type(1));
T lim;
eval_ldexp(lim, result, 1 - boost::multiprecision::detail::digits2<number<T, et_on> >::value());
if(eval_get_sign(lim) < 0)
lim.negate();
ui_type n;
T term;
const unsigned series_limit =
boost::multiprecision::detail::digits2<number<T, et_on> >::value() < 100
? 100 : boost::multiprecision::detail::digits2<number<T, et_on> >::value();
// Series expansion of hyperg_2f1(a, b; c; x).
for(n = 2; n < series_limit; ++n)
{
eval_multiply(x_pow_n_div_n_fact, x);
eval_divide(x_pow_n_div_n_fact, n);
eval_increment(ap);
eval_multiply(pochham_a, ap);
eval_increment(bp);
eval_multiply(pochham_b, bp);
eval_increment(cp);
eval_multiply(pochham_c, cp);
eval_multiply(term, pochham_a, pochham_b);
eval_divide(term, pochham_c);
eval_multiply(term, x_pow_n_div_n_fact);
eval_add(result, term);
if(eval_get_sign(term) < 0)
term.negate();
if(lim.compare(term) >= 0)
break;
}
if(n >= series_limit)
BOOST_THROW_EXCEPTION(std::runtime_error("H2F1 failed to converge."));
}
template <class T>
void eval_asin(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The asin function is only valid for floating point types.");
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
typedef typename mpl::front<typename T::float_types>::type fp_type;
if(&result == &x)
{
T t(x);
eval_asin(result, t);
return;
}
switch(eval_fpclassify(x))
{
case FP_NAN:
case FP_INFINITE:
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
case FP_ZERO:
result = ui_type(0);
return;
default: ;
}
const bool b_neg = eval_get_sign(x) < 0;
T xx(x);
if(b_neg)
xx.negate();
int c = xx.compare(ui_type(1));
if(c > 0)
{
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
}
else if(c == 0)
{
result = get_constant_pi<T>();
eval_ldexp(result, result, -1);
if(b_neg)
result.negate();
return;
}
if(xx.compare(fp_type(1e-4)) < 0)
{
// http://functions.wolfram.com/ElementaryFunctions/ArcSin/26/01/01/
eval_multiply(xx, xx);
T t1, t2;
t1 = fp_type(0.5f);
t2 = fp_type(1.5f);
hyp2F1(result, t1, t1, t2, xx);
eval_multiply(result, x);
return;
}
else if(xx.compare(fp_type(1 - 1e-4f)) > 0)
{
T dx1;
T t1, t2;
eval_subtract(dx1, ui_type(1), xx);
t1 = fp_type(0.5f);
t2 = fp_type(1.5f);
eval_ldexp(dx1, dx1, -1);
hyp2F1(result, t1, t1, t2, dx1);
eval_ldexp(dx1, dx1, 2);
eval_sqrt(t1, dx1);
eval_multiply(result, t1);
eval_ldexp(t1, get_constant_pi<T>(), -1);
result.negate();
eval_add(result, t1);
if(b_neg)
result.negate();
return;
}
#ifndef BOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS
typedef typename boost::multiprecision::detail::canonical<long double, T>::type guess_type;
#else
typedef fp_type guess_type;
#endif
// Get initial estimate using standard math function asin.
guess_type dd;
eval_convert_to(&dd, xx);
result = (guess_type)(std::asin(dd));
// Newton-Raphson iteration, we should double our precision with each iteration,
// in practice this seems to not quite work in all cases... so terminate when we
// have at least 2/3 of the digits correct on the assumption that the correction
// we've just added will finish the job...
boost::intmax_t current_precision = eval_ilogb(result);
boost::intmax_t target_precision = current_precision - 1 - (std::numeric_limits<number<T> >::digits * 2) / 3;
// Newton-Raphson iteration
while(current_precision > target_precision)
{
T sine, cosine;
eval_sin(sine, result);
eval_cos(cosine, result);
eval_subtract(sine, xx);
eval_divide(sine, cosine);
eval_subtract(result, sine);
current_precision = eval_ilogb(sine);
#ifdef FP_ILOGB0
if(current_precision == FP_ILOGB0)
break;
#endif
}
if(b_neg)
result.negate();
}
template <class T>
inline void eval_acos(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The acos function is only valid for floating point types.");
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
switch(eval_fpclassify(x))
{
case FP_NAN:
case FP_INFINITE:
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
case FP_ZERO:
result = get_constant_pi<T>();
eval_ldexp(result, result, -1); // divide by two.
return;
}
eval_abs(result, x);
int c = result.compare(ui_type(1));
if(c > 0)
{
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
return;
}
else if(c == 0)
{
if(eval_get_sign(x) < 0)
result = get_constant_pi<T>();
else
result = ui_type(0);
return;
}
eval_asin(result, x);
T t;
eval_ldexp(t, get_constant_pi<T>(), -1);
eval_subtract(result, t);
result.negate();
}
template <class T>
void eval_atan(T& result, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The atan function is only valid for floating point types.");
typedef typename boost::multiprecision::detail::canonical<boost::int32_t, T>::type si_type;
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
typedef typename mpl::front<typename T::float_types>::type fp_type;
switch(eval_fpclassify(x))
{
case FP_NAN:
result = x;
return;
case FP_ZERO:
result = ui_type(0);
return;
case FP_INFINITE:
if(eval_get_sign(x) < 0)
{
eval_ldexp(result, get_constant_pi<T>(), -1);
result.negate();
}
else
eval_ldexp(result, get_constant_pi<T>(), -1);
return;
default: ;
}
const bool b_neg = eval_get_sign(x) < 0;
T xx(x);
if(b_neg)
xx.negate();
if(xx.compare(fp_type(0.1)) < 0)
{
T t1, t2, t3;
t1 = ui_type(1);
t2 = fp_type(0.5f);
t3 = fp_type(1.5f);
eval_multiply(xx, xx);
xx.negate();
hyp2F1(result, t1, t2, t3, xx);
eval_multiply(result, x);
return;
}
if(xx.compare(fp_type(10)) > 0)
{
T t1, t2, t3;
t1 = fp_type(0.5f);
t2 = ui_type(1u);
t3 = fp_type(1.5f);
eval_multiply(xx, xx);
eval_divide(xx, si_type(-1), xx);
hyp2F1(result, t1, t2, t3, xx);
eval_divide(result, x);
if(!b_neg)
result.negate();
eval_ldexp(t1, get_constant_pi<T>(), -1);
eval_add(result, t1);
if(b_neg)
result.negate();
return;
}
// Get initial estimate using standard math function atan.
fp_type d;
eval_convert_to(&d, xx);
result = fp_type(std::atan(d));
// Newton-Raphson iteration, we should double our precision with each iteration,
// in practice this seems to not quite work in all cases... so terminate when we
// have at least 2/3 of the digits correct on the assumption that the correction
// we've just added will finish the job...
boost::intmax_t current_precision = eval_ilogb(result);
boost::intmax_t target_precision = current_precision - 1 - (std::numeric_limits<number<T> >::digits * 2) / 3;
T s, c, t;
while(current_precision > target_precision)
{
eval_sin(s, result);
eval_cos(c, result);
eval_multiply(t, xx, c);
eval_subtract(t, s);
eval_multiply(s, t, c);
eval_add(result, s);
current_precision = eval_ilogb(s);
#ifdef FP_ILOGB0
if(current_precision == FP_ILOGB0)
break;
#endif
}
if(b_neg)
result.negate();
}
template <class T>
void eval_atan2(T& result, const T& y, const T& x)
{
BOOST_STATIC_ASSERT_MSG(number_category<T>::value == number_kind_floating_point, "The atan2 function is only valid for floating point types.");
if(&result == &y)
{
T temp(y);
eval_atan2(result, temp, x);
return;
}
else if(&result == &x)
{
T temp(x);
eval_atan2(result, y, temp);
return;
}
typedef typename boost::multiprecision::detail::canonical<boost::uint32_t, T>::type ui_type;
switch(eval_fpclassify(y))
{
case FP_NAN:
result = y;
return;
case FP_ZERO:
{
int c = eval_get_sign(x);
if(c < 0)
result = get_constant_pi<T>();
else if(c >= 0)
result = ui_type(0); // Note we allow atan2(0,0) to be zero, even though it's mathematically undefined
return;
}
case FP_INFINITE:
{
if(eval_fpclassify(x) == FP_INFINITE)
{
if(std::numeric_limits<number<T, et_on> >::has_quiet_NaN)
result = std::numeric_limits<number<T, et_on> >::quiet_NaN().backend();
else
BOOST_THROW_EXCEPTION(std::domain_error("Result is undefined or complex and there is no NaN for this number type."));
}
else
{
eval_ldexp(result, get_constant_pi<T>(), -1);
if(eval_get_sign(y) < 0)
result.negate();
}
return;
}
}
switch(eval_fpclassify(x))
{
case FP_NAN:
result = x;
return;
case FP_ZERO:
{
eval_ldexp(result, get_constant_pi<T>(), -1);
if(eval_get_sign(y) < 0)
result.negate();
return;
}
case FP_INFINITE:
if(eval_get_sign(x) > 0)
result = ui_type(0);
else
result = get_constant_pi<T>();
if(eval_get_sign(y) < 0)
result.negate();
return;
}
T xx;
eval_divide(xx, y, x);
if(eval_get_sign(xx) < 0)
xx.negate();
eval_atan(result, xx);
// Determine quadrant (sign) based on signs of x, y
const bool y_neg = eval_get_sign(y) < 0;
const bool x_neg = eval_get_sign(x) < 0;
if(y_neg != x_neg)
result.negate();
if(x_neg)
{
if(y_neg)
eval_subtract(result, get_constant_pi<T>());
else
eval_add(result, get_constant_pi<T>());
}
}
template<class T, class A>
inline typename enable_if<is_arithmetic<A>, void>::type eval_atan2(T& result, const T& x, const A& a)
{
typedef typename boost::multiprecision::detail::canonical<A, T>::type canonical_type;
typedef typename mpl::if_<is_same<A, canonical_type>, T, canonical_type>::type cast_type;
cast_type c;
c = a;
eval_atan2(result, x, c);
}
template<class T, class A>
inline typename enable_if<is_arithmetic<A>, void>::type eval_atan2(T& result, const A& x, const T& a)
{
typedef typename boost::multiprecision::detail::canonical<A, T>::type canonical_type;
typedef typename mpl::if_<is_same<A, canonical_type>, T, canonical_type>::type cast_type;
cast_type c;
c = x;
eval_atan2(result, c, a);
}
#ifdef BOOST_MSVC
#pragma warning(pop)
#endif
<!DOCTYPE html>
<% int fontSize; %>
<html>
<head><title>FOR LOOP Example</title></head>
<body>
   <% for (fontSize = 1; fontSize <= 3; fontSize++) { %>
<font color = "green" size = "<%= fontSize %>">
JSP Tutorial
</font><br />
   <% } %>
</body>
</html>
fileFormatVersion: 2
guid: d2f7ba1fda708a54e88b763a848d13bb
MonoImporter:
externalObjects: {}
serializedVersion: 2
defaultReferences: []
executionOrder: 0
icon: {instanceID: 0}
userData:
assetBundleName:
assetBundleVariant:
/**
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for
* license information.
*
* Code generated by Microsoft (R) AutoRest Code Generator.
*/
package com.microsoft.azure.management.network.v2019_06_01;
import com.microsoft.azure.arm.collection.SupportsCreating;
import rx.Completable;
import rx.Observable;
import com.microsoft.azure.management.network.v2019_06_01.implementation.SubnetsInner;
import com.microsoft.azure.arm.model.HasInner;
/**
* Type representing Subnets.
*/
public interface Subnets extends SupportsCreating<Subnet.DefinitionStages.Blank>, HasInner<SubnetsInner> {
/**
* Prepares a subnet by applying network intent policies.
*
* @param resourceGroupName The name of the resource group.
* @param virtualNetworkName The name of the virtual network.
* @param subnetName The name of the subnet.
* @param prepareNetworkPoliciesRequestParameters Parameters supplied to prepare subnet by applying network intent policies.
* @throws IllegalArgumentException thrown if parameters fail the validation
* @return the observable for the request
*/
Completable prepareNetworkPoliciesAsync(String resourceGroupName, String virtualNetworkName, String subnetName, PrepareNetworkPoliciesRequest prepareNetworkPoliciesRequestParameters);
/**
* Unprepares a subnet by removing network intent policies.
*
* @param resourceGroupName The name of the resource group.
* @param virtualNetworkName The name of the virtual network.
* @param subnetName The name of the subnet.
* @throws IllegalArgumentException thrown if parameters fail the validation
* @return the observable for the request
*/
Completable unprepareNetworkPoliciesAsync(String resourceGroupName, String virtualNetworkName, String subnetName);
/**
* Gets the specified subnet by virtual network and resource group.
*
* @param resourceGroupName The name of the resource group.
* @param virtualNetworkName The name of the virtual network.
* @param subnetName The name of the subnet.
* @throws IllegalArgumentException thrown if parameters fail the validation
* @return the observable for the request
*/
Observable<Subnet> getAsync(String resourceGroupName, String virtualNetworkName, String subnetName);
/**
* Gets all subnets in a virtual network.
*
* @param resourceGroupName The name of the resource group.
* @param virtualNetworkName The name of the virtual network.
* @throws IllegalArgumentException thrown if parameters fail the validation
* @return the observable for the request
*/
Observable<Subnet> listAsync(final String resourceGroupName, final String virtualNetworkName);
/**
* Deletes the specified subnet.
*
* @param resourceGroupName The name of the resource group.
* @param virtualNetworkName The name of the virtual network.
* @param subnetName The name of the subnet.
* @throws IllegalArgumentException thrown if parameters fail the validation
* @return the observable for the request
*/
Completable deleteAsync(String resourceGroupName, String virtualNetworkName, String subnetName);
}
@echo off
rem makedoc.tcl --
rem Script for creating HTML-files from the raw documentation files
rem
c:\tcl\bin\tclsh.exe c:\tcl\demos\TclApps\apps\dtp\main.tcl doc -out %1.html html %1.man
copy /n /y %1.html ..\..\site
function initUI(){
var text = "Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?";
$.descriptionLabel.setText(text);
//Init eventListeners
if(OS_IOS) $.photoBtn.getView().addEventListener('click', photo); // TODO: review on Android
$.locationLabel.getView().addEventListener('click', showPopupMap);
}
/* EVENT LISTENERS CALLBACKS */
function photo(e){
Alloy.Globals.alert('Photo!');
}
function showPopupMap(e){
$.mapPopup.show();
}
function showPopupServices(e){
$.servicesPopup.show();
}
function updateScrolls(e){
e.cancelBubble = true;
if(e.source !== $.mainTable) return;
$.img.updateScroll(e);
//$.stickyView.updateScroll(e.contentOffset.y);
}
/* SCROLLABLE VIEW FUNCTIONS */
function linkScrollableView(){
$.scrollableView.removeEventListener('postlayout', linkScrollableView);
$.pagingControl.linkScrollableView($.scrollableView);
}
function updatePagingControl(e){
$.pagingControl.setActiveDot(e.currentPage);
}
/* SHOW GALLERY */
if(OS_IOS) $.photosRow.getView().addEventListener('click', showGallery);
function showGallery(e){
Ti.API.info('show gallery');
Ti.Media.openPhotoGallery({
animated:true,
allowEditing:false,
autohide:true,
mediaTypes: Ti.Media.MEDIA_TYPE_PHOTO,
cancel:function(){},
error:function(){},
success:function(){},
});
}
// Copyright 2016 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "components/password_manager/core/browser/import/password_csv_reader.h"
#include "base/strings/utf_string_conversions.h"
#include "components/autofill/core/common/password_form.h"
#include "components/password_manager/core/browser/import/password_importer.h"
#include "testing/gtest/include/gtest/gtest.h"
namespace password_manager {
TEST(PasswordCSVReaderTest, DeserializePasswords_ZeroValid) {
const char kCSVInput[] = "title,WEBSITE,login,password\n";
std::vector<autofill::PasswordForm> passwords;
PasswordCSVReader reader;
EXPECT_EQ(PasswordImporter::SUCCESS,
reader.DeserializePasswords(kCSVInput, &passwords));
EXPECT_EQ(0u, passwords.size());
}
TEST(PasswordCSVReaderTest, DeserializePasswords_SingleValid) {
const char kCSVInput[] =
"Url,Username,Password\n"
"https://accounts.google.com/a/LoginAuth,[email protected],test1\n";
std::vector<autofill::PasswordForm> passwords;
PasswordCSVReader reader;
EXPECT_EQ(PasswordImporter::SUCCESS,
reader.DeserializePasswords(kCSVInput, &passwords));
EXPECT_EQ(1u, passwords.size());
GURL expected_origin("https://accounts.google.com/a/LoginAuth");
EXPECT_EQ(expected_origin, passwords[0].origin);
EXPECT_EQ(expected_origin.GetOrigin().spec(), passwords[0].signon_realm);
EXPECT_EQ(base::UTF8ToUTF16("[email protected]"), passwords[0].username_value);
EXPECT_EQ(base::UTF8ToUTF16("test1"), passwords[0].password_value);
}
TEST(PasswordCSVReaderTest, DeserializePasswords_TwoValid) {
const char kCSVInput[] =
"Url,Username,Password,Someotherfield\n"
"https://accounts.google.com/a/LoginAuth,[email protected],test1,test2\n"
"http://example.com/,user,password,pwd\n";
std::vector<autofill::PasswordForm> passwords;
PasswordCSVReader reader;
EXPECT_EQ(PasswordImporter::SUCCESS,
reader.DeserializePasswords(kCSVInput, &passwords));
EXPECT_EQ(2u, passwords.size());
GURL expected_origin("https://accounts.google.com/a/LoginAuth");
EXPECT_EQ(expected_origin, passwords[0].origin);
EXPECT_EQ(expected_origin.GetOrigin().spec(), passwords[0].signon_realm);
EXPECT_EQ(base::UTF8ToUTF16("[email protected]"), passwords[0].username_value);
EXPECT_EQ(base::UTF8ToUTF16("test1"), passwords[0].password_value);
expected_origin = GURL("http://example.com");
EXPECT_EQ(expected_origin, passwords[1].origin);
EXPECT_EQ(expected_origin.GetOrigin().spec(), passwords[1].signon_realm);
EXPECT_EQ(base::UTF8ToUTF16("user"), passwords[1].username_value);
EXPECT_EQ(base::UTF8ToUTF16("password"), passwords[1].password_value);
}
TEST(PasswordCSVReaderTest, DeserializePasswords_SyntaxError) {
const char kCSVInput[] =
"Url,Username,\"Password\n"
"https://accounts.google.com/a/LoginAuth,[email protected],test1\n";
std::vector<autofill::PasswordForm> passwords;
PasswordCSVReader reader;
EXPECT_EQ(PasswordImporter::SYNTAX_ERROR,
reader.DeserializePasswords(kCSVInput, &passwords));
EXPECT_EQ(0u, passwords.size());
}
TEST(PasswordCSVReaderTest, DeserializePasswords_SemanticError) {
const char kCSVInput[] =
"Url,Username,Secret\n" // Password column is missing.
"https://accounts.google.com/a/LoginAuth,[email protected],test1\n";
std::vector<autofill::PasswordForm> passwords;
PasswordCSVReader reader;
EXPECT_EQ(PasswordImporter::SEMANTIC_ERROR,
reader.DeserializePasswords(kCSVInput, &passwords));
EXPECT_EQ(0u, passwords.size());
}
} // namespace password_manager
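As a quick sanity check, the SingleValid fixture above can be parsed the same way with Python's standard csv module (illustration only; Chromium's PasswordCSVReader is a separate C++ implementation with its own header matching):

```python
import csv
import io

# The "SingleValid" fixture from the tests above, reproduced verbatim.
data = (
    "Url,Username,Password\n"
    "https://accounts.google.com/a/LoginAuth,[email protected],test1\n"
)

# DictReader consumes the first line as the header row.
rows = list(csv.DictReader(io.StringIO(data)))
print(len(rows))            # 1
print(rows[0]["Username"])  # [email protected]
```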
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using UnityEngine;
namespace DTS_Addon.Scene
{
[KSPAddon(KSPAddon.Startup.MainMenu, false)]
public class xMainMenu : MonoBehaviour
{
[Conditional("DEBUG")]
void OnGUI()
{
GUI.Label(new Rect(10, 10, 200, 20), "MainMenu");
}
}
}
#! /bin/sh
# Common stub for a few missing GNU programs while installing.
scriptversion=2005-06-08.21
# Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003, 2004, 2005
# Free Software Foundation, Inc.
# Originally by François Pinard <[email protected]>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try \`$0 --help' for more information"
exit 1
fi
run=:
# In the cases where this matters, `missing' is being run in the
# srcdir already.
if test -f configure.ac; then
configure_ac=configure.ac
else
configure_ac=configure.in
fi
msg="missing on your system"
case "$1" in
--run)
# Try to run requested program, and just exit if it succeeds.
run=
shift
"$@" && exit 0
# Exit code 63 means version mismatch. This often happens
# when the user tries to use an ancient version of a tool on
# a file that requires a minimum version. In this case we
# should proceed as if the program had been absent, or as
# if --run hadn't been passed.
if test $? = 63; then
run=:
msg="probably too old"
fi
;;
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an
error status if there is no known handling for PROGRAM.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
--run try to run the given command, and emulate it if it fails
Supported PROGRAM values:
aclocal touch file \`aclocal.m4'
autoconf touch file \`configure'
autoheader touch file \`config.h.in'
automake touch all \`Makefile.in' files
bison create \`y.tab.[ch]', if possible, from existing .[ch]
flex create \`lex.yy.c', if possible, from existing .c
help2man touch the output file
lex create \`lex.yy.c', if possible, from existing .c
makeinfo touch the output file
tar try tar, gnutar, gtar, then tar without non-portable flags
yacc create \`y.tab.[ch]', if possible, from existing .[ch]
Send bug reports to <[email protected]>."
exit $?
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
exit $?
;;
-*)
echo 1>&2 "$0: Unknown \`$1' option"
echo 1>&2 "Try \`$0 --help' for more information"
exit 1
;;
esac
# Now exit if we have it, but it failed. Also exit now if we
# don't have it and --version was passed (most likely to detect
# the program).
case "$1" in
lex|yacc)
# Not GNU programs, they don't have --version.
;;
tar)
if test -n "$run"; then
echo 1>&2 "ERROR: \`tar' requires --run"
exit 1
elif test "x$2" = "x--version" || test "x$2" = "x--help"; then
exit 1
fi
;;
*)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
elif test "x$2" = "x--version" || test "x$2" = "x--help"; then
# Could not run --version or --help. This is probably someone
# running `$TOOL --version' or `$TOOL --help' to check whether
# $TOOL exists and not knowing $TOOL uses missing.
exit 1
fi
;;
esac
# If it does not exist, or fails to run (possibly an outdated version),
# try to emulate it.
case "$1" in
aclocal*)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`acinclude.m4' or \`${configure_ac}'. You might want
to install the \`Automake' and \`Perl' packages. Grab them from
any GNU archive site."
touch aclocal.m4
;;
autoconf)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`${configure_ac}'. You might want to install the
\`Autoconf' and \`GNU m4' packages. Grab them from any GNU
archive site."
touch configure
;;
autoheader)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`acconfig.h' or \`${configure_ac}'. You might want
to install the \`Autoconf' and \`GNU m4' packages. Grab them
from any GNU archive site."
files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}`
test -z "$files" && files="config.h"
touch_files=
for f in $files; do
case "$f" in
*:*) touch_files="$touch_files "`echo "$f" |
sed -e 's/^[^:]*://' -e 's/:.*//'`;;
*) touch_files="$touch_files $f.in";;
esac
done
touch $touch_files
;;
automake*)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'.
You might want to install the \`Automake' and \`Perl' packages.
Grab them from any GNU archive site."
find . -type f -name Makefile.am -print |
sed 's/\.am$/.in/' |
while read f; do touch "$f"; done
;;
autom4te)
echo 1>&2 "\
WARNING: \`$1' is needed, but is $msg.
You might have modified some files without having the
proper tools for further handling them.
You can get \`$1' as part of \`Autoconf' from any GNU
archive site."
file=`echo "$*" | sed -n 's/.*--output[ =]*\([^ ]*\).*/\1/p'`
test -z "$file" && file=`echo "$*" | sed -n 's/.*-o[ ]*\([^ ]*\).*/\1/p'`
if test -f "$file"; then
touch $file
else
test -z "$file" || exec >$file
echo "#! /bin/sh"
echo "# Created by GNU Automake missing as a replacement of"
echo "# $ $@"
echo "exit 0"
chmod +x $file
exit 1
fi
;;
bison|yacc)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a \`.y' file. You may need the \`Bison' package
in order for those modifications to take effect. You can get
\`Bison' from any GNU archive site."
rm -f y.tab.c y.tab.h
if [ $# -ne 1 ]; then
eval LASTARG="\${$#}"
case "$LASTARG" in
*.y)
SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" y.tab.c
fi
SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" y.tab.h
fi
;;
esac
fi
if [ ! -f y.tab.h ]; then
echo >y.tab.h
fi
if [ ! -f y.tab.c ]; then
echo 'main() { return 0; }' >y.tab.c
fi
;;
lex|flex)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a \`.l' file. You may need the \`Flex' package
in order for those modifications to take effect. You can get
\`Flex' from any GNU archive site."
rm -f lex.yy.c
if [ $# -ne 1 ]; then
eval LASTARG="\${$#}"
case "$LASTARG" in
*.l)
SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" lex.yy.c
fi
;;
esac
fi
if [ ! -f lex.yy.c ]; then
echo 'main() { return 0; }' >lex.yy.c
fi
;;
help2man)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a dependency of a manual page. You may need the
\`Help2man' package in order for those modifications to take
effect. You can get \`Help2man' from any GNU archive site."
file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
if test -z "$file"; then
file=`echo "$*" | sed -n 's/.*--output=\([^ ]*\).*/\1/p'`
fi
if [ -f "$file" ]; then
touch $file
else
test -z "$file" || exec >$file
echo ".ab help2man is required to generate this page"
exit 1
fi
;;
makeinfo)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a \`.texi' or \`.texinfo' file, or any other file
indirectly affecting the aspect of the manual. The spurious
call might also be the consequence of using a buggy \`make' (AIX,
DU, IRIX). You might want to install the \`Texinfo' package or
the \`GNU make' package. Grab either from any GNU archive site."
# The file to touch is that specified with -o ...
file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
if test -z "$file"; then
# ... or it is the one specified with @setfilename ...
infile=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'`
file=`sed -n '/^@setfilename/ { s/.* \([^ ]*\) *$/\1/; p; q; }' $infile`
# ... or it is derived from the source name (dir/f.texi becomes f.info)
test -z "$file" && file=`echo "$infile" | sed 's,.*/,,;s,.[^.]*$,,'`.info
fi
# If the file does not exist, the user really needs makeinfo;
# let's fail without touching anything.
test -f $file || exit 1
touch $file
;;
tar)
shift
# We have already tried tar in the generic part.
# Look for gnutar/gtar before invocation to avoid ugly error
# messages.
if (gnutar --version > /dev/null 2>&1); then
gnutar "$@" && exit 0
fi
if (gtar --version > /dev/null 2>&1); then
gtar "$@" && exit 0
fi
firstarg="$1"
if shift; then
case "$firstarg" in
*o*)
firstarg=`echo "$firstarg" | sed s/o//`
tar "$firstarg" "$@" && exit 0
;;
esac
case "$firstarg" in
*h*)
firstarg=`echo "$firstarg" | sed s/h//`
tar "$firstarg" "$@" && exit 0
;;
esac
fi
echo 1>&2 "\
WARNING: I can't run \`tar' with the given arguments.
You may want to install GNU tar or Free paxutils, or check the
command line arguments."
exit 1
;;
*)
echo 1>&2 "\
WARNING: \`$1' is needed, and is $msg.
You might have modified some files without having the
proper tools for further handling them. Check the \`README' file,
it often tells you about the needed prerequisites for installing
this package. You may also peek at any GNU archive site, in case
some other package would contain this missing \`$1' program."
exit 1
;;
esac
exit 0
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:
<section id="related-box">
<h2>Related</h2>
<ul id="related-list">
<% if discover_mode? %>
<li id="secondary-links-posts-hot"><%= link_to "Hot", posts_path(:tags => "order:rank") %></li>
<li id="secondary-links-posts-popular"><%= link_to "Popular", popular_explore_posts_path %></li>
<li id="secondary-links-posts-curated"><%= link_to "Curated", curated_explore_posts_path %></li>
<li><%= link_to "Searches", searches_explore_posts_path %></li>
<li><%= link_to "Viewed", viewed_explore_posts_path %></li>
<% end %>
<li><%= link_to "Deleted", posts_path(tags: "#{params[:tags]} status:deleted"), rel: "nofollow" %></li>
<li><%= link_to "Random", random_posts_path(tags: params[:tags]), id: "random-post", "data-shortcut": "r", rel: "nofollow" %></li>
<% if post_set.query.is_simple_tag? %>
<li><%= link_to "History", post_versions_path(search: { changed_tags: params[:tags] }), rel: "nofollow" %></li>
<% end %>
<li><%= link_to "Count", posts_counts_path(tags: params[:tags]), rel: "nofollow" %></li>
</ul>
</section>
// NO INCLUDE GUARDS, THE HEADER IS INTENDED FOR MULTIPLE INCLUSION
// Copyright Aleksey Gurtovoy 2002-2004
//
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// $Source$
// $Date$
// $Revision$
#undef BOOST_TT_AUX_SIZE_T_TRAIT_DEF1
#undef BOOST_TT_AUX_SIZE_T_TRAIT_SPEC1
#undef BOOST_TT_AUX_SIZE_T_TRAIT_PARTIAL_SPEC1_1
# encoding: UTF-8
# This file contains data derived from the IANA Time Zone Database
# (https://www.iana.org/time-zones).
module TZInfo
module Data
module Definitions
module Asia
module Chungking
include TimezoneDefinition
linked_timezone 'Asia/Chungking', 'Asia/Shanghai'
end
end
end
end
end
{
"id": 11,
"method": "contacts.move",
"params": {
"contactID": 100,
"sourceFolderID": 1,
"targetFolderID": 10
}
}
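The request object above follows a JSON-RPC-like shape. A minimal sketch of constructing and serializing such a request is shown below; the sample does not specify an endpoint or transport, so none is assumed here:

```python
import json

# Rebuild the contacts.move request shown above as a plain dict,
# then serialize it to the wire format.
request = {
    "id": 11,
    "method": "contacts.move",
    "params": {
        "contactID": 100,
        "sourceFolderID": 1,
        "targetFolderID": 10,
    },
}

payload = json.dumps(request)
print(json.loads(payload)["method"])  # contacts.move
```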
<div class="serendipity_staticblock clearfix">
{if $staticblock.title}
<h3>{$staticblock.title}</h3>
{/if}
{if $staticblock.body}
<div class="serendipity_staticblock_body clearfix">{$staticblock.body}</div>
{/if}
{if $staticblock.extended}
<div class="serendipity_staticblock_extended clearfix">{$staticblock.extended}</div>
{/if}
</div>
/*Copyright (c) 2017 The Paradox Game Converters Project
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.*/
#ifndef PROVINCE_MAPPER_H
#define PROVINCE_MAPPER_H
#include <map>
#include <memory>
#include <unordered_set>
#include <vector>
using namespace std;
class EU4Version;
class Object;
class provinceMapper
{
public:
static const vector<int> getVic2ProvinceNumbers(int EU4ProvinceNumber)
{
return getInstance()->GetVic2ProvinceNumbers(EU4ProvinceNumber);
}
static const vector<int> getEU4ProvinceNumbers(int Vic2ProvinceNumber)
{
return getInstance()->GetEU4ProvinceNumbers(Vic2ProvinceNumber);
}
static bool isProvinceResettable(int Vic2ProvinceNumber)
{
return getInstance()->IsProvinceResettable(Vic2ProvinceNumber);
}
private:
static provinceMapper* instance;
static provinceMapper* getInstance()
{
if (instance == nullptr)
{
instance = new provinceMapper;
}
return instance;
}
provinceMapper();
void initProvinceMap(shared_ptr<Object> obj);
int getMappingsIndex(vector<shared_ptr<Object>> versions);
void createMappings(shared_ptr<Object> mapping);
const vector<int> GetVic2ProvinceNumbers(int EU4ProvinceNumber);
const vector<int> GetEU4ProvinceNumbers(int Vic2ProvinceNumber);
bool IsProvinceResettable(int Vic2ProvinceNumber);
map<int, vector<int>> Vic2ToEU4ProvinceMap;
map<int, vector<int>> EU4ToVic2ProvinceMap;
unordered_set<int> resettableProvinces;
};
#endif // PROVINCE_MAPPER_H
{
"matches": "absolute-replaced-height-001-ref.xht",
  "assert": "If the height, top, bottom and vertical margins of an absolutely positioned inline replaced element are all 'auto', then its used value is determined as for an inline replaced element, its 'top' is given by its static position and both 'margin-top' and 'margin-bottom' used values are '0'. In this test, the 'height' and 'width' of the inline replaced element are 'auto' and the element also has an intrinsic height, so the intrinsic height and the intrinsic width become the used values.",
"help": [
"http://www.w3.org/TR/CSS21/visudet.html#abs-replaced-height"
],
"test_case": {
"tag": "body",
"children": [
{
"tag": "div",
"children": [
{
"tag": "img",
"style": {
"margin_bottom": "auto",
"position": "absolute",
"margin_top": "auto"
}
}
],
"style": {
"display": "block",
"width": "1in",
"border_top_style": "solid",
"border_bottom_color": "orange",
"height": "15px",
"border_top_width": "initial",
"border_top_color": "orange",
"border_bottom_style": "solid",
"unicode_bidi": "embed",
"border_bottom_width": "initial"
}
}
],
"style": {
"margin_bottom": "8px",
"margin_top": "8px",
"margin_right": "8px",
"margin_left": "8px",
"unicode_bidi": "embed",
"display": "block"
}
}
}
## make-library-glue.pkg
#
# Build much of the code required
# to make a C library like Gtk or OpenGL
# available at the Mythryl level, driven
# by an xxx-construction.plan file.
#
# The format of xxx-construction.plan files
# is documented in Note[1] at bottom of file.
#
# make-library-glue.pkg really shouldn't be in
# standard.lib because it is not of general interest,
# but at the moment that is the path of least
# resistance. -- 2013-01-12 CrT
# Compiled by:
# src/lib/std/standard.lib
stipulate
package paf = patchfile; # patchfile is from src/lib/make-library-glue/patchfile.pkg
package pfj = planfile_junk; # planfile_junk is from src/lib/make-library-glue/planfile-junk.pkg
package pfs = patchfiles; # patchfiles is from src/lib/make-library-glue/patchfiles.pkg
package plf = planfile; # planfile is from src/lib/make-library-glue/planfile.pkg
package sm = string_map; # string_map is from src/lib/src/string-map.pkg
#
Pfs = pfs::Patchfiles;
herein
api Make_Library_Glue
{
Field = { fieldname: String,
filename: String,
lines: List(String), # Not exported.
line_1: Int,
line_n: Int,
used: Ref(Bool)
};
Fields = sm::Map( Field );
State;
#
Paths = { construction_plan : String, # E.g. "src/opt/gtk/etc/gtk-construction.plan"
lib_name : String, # E.g. "opengl" -- Must match the #define CLIB_NAME "opengl" line in .../src/opt/xxx/c/in-main/libmythryl-xxx.c
# # Files which will be patched:
xxx_client_api : String, # E.g. "src/opt/gtk/src/gtk-client.api"
xxx_client_g_pkg : String, # E.g. "src/opt/gtk/src/gtk-client-g.pkg"
xxx_client_driver_api : String, # E.g. "src/opt/gtk/src/gtk-client-driver.api"
xxx_client_driver_for_library_in_c_subprocess_pkg : String, # E.g. "src/opt/gtk/src/gtk-client-driver-for-library-in-c-subprocess.pkg"
xxx_client_driver_for_library_in_main_process_pkg : String, # E.g, "src/opt/gtk/src/gtk-client-driver-for-library-in-main-process.pkg"
mythryl_xxx_library_in_c_subprocess_c : String, # E.g. "src/opt/gtk/c/in-sub/mythryl-gtk-library-in-c-subprocess.c"
libmythryl_xxx_c : String, # E.g. "src/opt/gtk/c/in-main/libmythryl-gtk.c"
section_libref_xxx_tex : String # E.g., "doc/tex/section-libref-gtk.tex";
};
Builder_Stuff
=
{
path: Paths,
#
maybe_get_field: (Fields, String) -> Null_Or(String),
get_field: (Fields, String) -> String,
get_field_location: (Fields, String) -> String,
#
build_table_entry_for_'libmythryl_xxx_c': Pfs -> (String, String) -> Pfs,
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c': Pfs -> String -> Pfs,
#
build_fun_declaration_for_'xxx_client_driver_api': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
#
build_fun_declaration_for_'xxx_client_api': Pfs -> { fn_name: String, fn_type: String, api_doc: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg': Pfs -> { fn_name: String, c_fn_name: String, fn_type: String, libcall: String, result_type: String } -> Pfs,
to_xxx_client_driver_api: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_c_subprocess_pkg: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_main_process_pkg: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_funs: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_types: Pfs -> String -> Pfs,
to_xxx_client_api_funs: Pfs -> String -> Pfs,
to_xxx_client_api_types: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_trie: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_table: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_apitable: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_libtable: Pfs -> String -> Pfs,
custom_fns_codebuilt_for_'libmythryl_xxx_c': Ref(Int),
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c': Ref(Int),
callback_fns_handbuilt_for_'xxx_client_g_pkg': Ref(Int),
note__section_libref_xxx_tex__entry
:
Pfs
->
{ fields: Fields,
fn_name: String, # E.g. "make_window"
fn_type: String, # E.g. "Session -> String"
url: String, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
libcall: String # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
}
->
Pfs
};
Custom_Body_Stuff = { fn_name: String, libcall: String, libcall_more: String, to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs, path: Paths };
Custom_Body_Stuff2 = { fn_name: String, libcall: String, libcall_more: String, to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs, path: Paths };
Plugin = LIBCALL_TO_ARGS_FN (String -> List(String))
#
| BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (String, (String, Int, String) -> String)
| BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (String, (String, Int, String) -> String)
#
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (String, Pfs -> Custom_Body_Stuff -> Pfs)
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (String, Pfs -> Custom_Body_Stuff2 -> Pfs)
#
| FIGURE_FUNCTION_RESULT_TYPE (String, String -> String)
#
| DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (String, String)
| DO_COMMAND_TO_STRING_FN (String, String)
#
| CLIENT_DRIVER_ARG_TYPE (String, String)
| CLIENT_DRIVER_RESULT_TYPE (String, String)
;
make_library_glue: Paths -> List(plf::Paragraph_Definition(Builder_Stuff)) -> List(Plugin) -> Void;
};
end;
stipulate
package fil = file__premicrothread; # file__premicrothread is from src/lib/std/src/posix/file--premicrothread.pkg
package lms = list_mergesort; # list_mergesort is from src/lib/src/list-mergesort.pkg
package iow = io_wait_hostthread; # io_wait_hostthread is from src/lib/std/src/hostthread/io-wait-hostthread.pkg
package paf = patchfile; # patchfile is from src/lib/make-library-glue/patchfile.pkg
package pfj = planfile_junk; # planfile_junk is from src/lib/make-library-glue/planfile-junk.pkg
package pfs = patchfiles; # patchfiles is from src/lib/make-library-glue/patchfiles.pkg
package plf = planfile; # planfile is from src/lib/make-library-glue/planfile.pkg
package psx = posixlib; # posixlib is from src/lib/std/src/psx/posixlib.pkg
package sm = string_map; # string_map is from src/lib/src/string-map.pkg
#
Pfs = pfs::Patchfiles;
#
exit_x = winix__premicrothread::process::exit_x;
=~ = regex::(=~);
sort = lms::sort_list;
chomp = string::chomp;
tolower = string::to_lower;
uniquesort = lms::sort_list_and_drop_duplicates;
fun isfile filename
=
psx::stat::is_file (psx::stat filename) except _ = FALSE;
#
fun die_x message
=
{ print message;
exit_x 1;
};
# The following are all duplicates of definitions in
# src/app/makelib/main/makelib-g.pkg
# -- possibly a better place should be found
# for them:
# Convert src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
# to mythryl-xxx-library-in-c-subprocess.c
# and such:
#
fun basename filename
=
case (regex::find_first_match_to_ith_group 1 .|/([^/]+)$| filename)
THE x => x;
NULL => filename;
esac;
# Convert src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
# to src/opt/xxx/c/in-sub
# and such:
#
fun dirname filename
=
case (regex::find_first_match_to_ith_group 1 .|^(.*)/[^/]+$| filename)
THE x => x;
NULL => "."; # This follows linux dirname(1), and also produces sensible results.
esac;
# Drop leading and trailing
# whitespace from a string.
#
fun trim string
=
{ if (string =~ ./^\s*$/)
#
"";
else
# Drop trailing whitespace:
#
string = case (regex::find_first_match_to_ith_group 1 ./^(.*\S)\s*$/ string)
THE x => x;
NULL => string;
esac;
# Drop leading whitespace:
#
string = case (regex::find_first_match_to_ith_group 1 ./^\s*(\S.*)$/ string)
THE x => x;
NULL => string;
esac;
string;
fi;
};
fun print_strings [] => printf "[]\n";
print_strings [ s ] => printf "[ \"%s\" ]\n" s;
print_strings (s ! rest)
=>
{ printf "[ \"%s\"" s;
apply (\\ s = printf ", \"%s\"" s) rest;
printf "]\n";
};
end;
herein
# This package is invoked in:
#
# src/opt/gtk/sh/make-gtk-glue
# src/opt/opengl/sh/make-opengl-glue
package make_library_glue:
Make_Library_Glue
{
# Field is a contiguous sequence of lines
# all with the same linetype field:
#
# foo: this
# foo: that
#
# Most fields will be single-line, but this format
# supports conveniently including blocks of code,
# such as complete function definitions.
#
# We treat a field as a single string containing
# embedded newlines, stripped of the linetype field
# and the colon.
#
Field = { fieldname: String,
filename: String,
lines: List(String), # Not exported.
line_1: Int,
line_n: Int,
used: Ref(Bool)
};
Fields = sm::Map( Field );
State = { line_number: Ref(Int), # Exported as an opaque type.
fd: fil::Input_Stream,
fields: Ref( sm::Map( Field ))
};
Paths = { construction_plan : String,
lib_name : String, # E.g. "xxx". Must match the #define CLIB_NAME "xxx" line in src/opt/xxx/c/in-main/libmythryl-xxx.c
#
xxx_client_api : String,
xxx_client_g_pkg : String,
xxx_client_driver_api : String,
xxx_client_driver_for_library_in_c_subprocess_pkg : String,
xxx_client_driver_for_library_in_main_process_pkg : String,
mythryl_xxx_library_in_c_subprocess_c : String,
libmythryl_xxx_c : String,
section_libref_xxx_tex : String
};
Builder_Stuff
=
{
path: Paths,
#
maybe_get_field: (Fields, String) -> Null_Or(String),
get_field: (Fields, String) -> String,
get_field_location: (Fields, String) -> String,
#
build_table_entry_for_'libmythryl_xxx_c': Pfs -> (String, String) -> Pfs,
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c': Pfs -> String -> Pfs,
#
build_fun_declaration_for_'xxx_client_driver_api': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg': Pfs -> { c_fn_name: String, libcall: String, result_type: String } -> Pfs,
#
build_fun_declaration_for_'xxx_client_api': Pfs -> { fn_name: String, fn_type: String, api_doc: String } -> Pfs,
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg': Pfs -> { fn_name: String, c_fn_name: String, fn_type: String, libcall: String, result_type: String } -> Pfs,
to_xxx_client_driver_api: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_c_subprocess_pkg: Pfs -> String -> Pfs,
to_xxx_client_driver_for_library_in_main_process_pkg: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_funs: Pfs -> String -> Pfs,
to_xxx_client_g_pkg_types: Pfs -> String -> Pfs,
to_xxx_client_api_funs: Pfs -> String -> Pfs,
to_xxx_client_api_types: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs,
to_mythryl_xxx_library_in_c_subprocess_c_trie: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_table: Pfs -> String -> Pfs,
to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_apitable: Pfs -> String -> Pfs,
to_section_libref_xxx_tex_libtable: Pfs -> String -> Pfs,
custom_fns_codebuilt_for_'libmythryl_xxx_c': Ref(Int),
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c': Ref(Int),
callback_fns_handbuilt_for_'xxx_client_g_pkg': Ref(Int),
note__section_libref_xxx_tex__entry
:
Pfs
->
{ fields: Fields,
fn_name: String, # E.g. "make_window"
fn_type: String, # E.g. "Session -> String"
url: String, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
libcall: String # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
}
->
Pfs
};
Custom_Body_Stuff = { fn_name: String, libcall: String, libcall_more: String, to_mythryl_xxx_library_in_c_subprocess_c_funs: Pfs -> String -> Pfs, path: Paths };
Custom_Body_Stuff2 = { fn_name: String, libcall: String, libcall_more: String, to_libmythryl_xxx_c_funs: Pfs -> String -> Pfs, path: Paths };
Plugin = LIBCALL_TO_ARGS_FN (String -> List(String))
#
| BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (String, (String, Int, String) -> String)
| BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (String, (String, Int, String) -> String)
#
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (String, Pfs -> Custom_Body_Stuff -> Pfs)
| HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (String, Pfs -> Custom_Body_Stuff2 -> Pfs)
#
| FIGURE_FUNCTION_RESULT_TYPE (String, String -> String)
#
| DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (String, String)
| DO_COMMAND_TO_STRING_FN (String, String)
#
| CLIENT_DRIVER_ARG_TYPE (String, String)
| CLIENT_DRIVER_RESULT_TYPE (String, String)
;
#
fun make_library_glue (path: Paths) (paragraph_definitions: List(plf::Paragraph_Definition(Builder_Stuff))) (plugins: List(Plugin))
=
{
note_plugins plugins;
plan = plf::read_planfile paragraph_defs path.construction_plan;
pfs = plf::map_patchfiles_per_plan builder_stuff pfs plan;
pfs = write_section_libref_xxx_tex_table pfs (.fn_name, .libcall, to_section_libref_xxx_tex_apitable);
pfs = write_section_libref_xxx_tex_table pfs (.libcall, .fn_name, to_section_libref_xxx_tex_libtable);
printf "\n";
printf "%4d plain functions codebuilt for %s\n" *plain_fns_codebuilt_for_'libmythryl_xxx_c' (basename path.libmythryl_xxx_c);
printf "%4d custom functions codebuilt for %s\n" *custom_fns_codebuilt_for_'libmythryl_xxx_c' (basename path.libmythryl_xxx_c);
printf "%4d plain functions codebuilt for %s\n" *plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' (basename path.mythryl_xxx_library_in_c_subprocess_c);
printf "%4d custom functions codebuilt for %s\n" *custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' (basename path.mythryl_xxx_library_in_c_subprocess_c);
printf "%4d plain functions codebuilt for %s\n" *plain_fns_codebuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d plain functions handbuilt for %s\n" *plain_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d callback functions codebuilt for %s\n" *callback_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
printf "%4d callback functions handbuilt for %s\n" *callback_fns_handbuilt_for_'xxx_client_g_pkg' (basename path.xxx_client_g_pkg);
narration = pfs::write_patchfiles pfs; # Narration lines generated via sprintf "Successfully patched %4d lines in %s\n" *patch_lines_written filename; in src/lib/make-library-glue/patchfile.pkg
printf "\n";
apply {. printf "%s\n" #msg; } narration;
printf "\n";
}
where
# First, establish patch ids for our patchpoints:
#
patch_id_'functions'_in_'xxx_client_driver_api' = { patchname => "functions", filename => path.xxx_client_driver_api };
#
patch_id_'body'_in_'xxx_client_driver_for_library_in_main_process_pkg' = { patchname => "body", filename => path.xxx_client_driver_for_library_in_main_process_pkg };
#
patch_id_'body'_in_'xxx_client_driver_for_library_in_c_subprocess_pkg' = { patchname => "body", filename => path.xxx_client_driver_for_library_in_c_subprocess_pkg };
#
patch_id_'functions'_in_'xxx_client_api' = { patchname => "functions", filename => path.xxx_client_api };
patch_id_'types'_in_'xxx_client_api' = { patchname => "types", filename => path.xxx_client_api };
#
patch_id_'functions'_in_'xxx_client_g_pkg' = { patchname => "functions", filename => path.xxx_client_g_pkg };
patch_id_'types'_in_'xxx_client_g_pkg' = { patchname => "types", filename => path.xxx_client_g_pkg };
#
patch_id_'functions'_in_'mythryl_xxx_library_in_c_subprocess_c' = { patchname => "functions", filename => path.mythryl_xxx_library_in_c_subprocess_c };
patch_id_'table'_in_'mythryl_xxx_library_in_c_subprocess_c' = { patchname => "table", filename => path.mythryl_xxx_library_in_c_subprocess_c };
#
patch_id_'functions'_in_'libmythryl_xxx_c' = { patchname => "functions", filename => path.libmythryl_xxx_c };
patch_id_'table'_in_'libmythryl_xxx_c' = { patchname => "table", filename => path.libmythryl_xxx_c };
#
patch_id_'api_calls'_in_'section_libref_xxx_tex' = { patchname => "api_calls", filename => path.section_libref_xxx_tex };
patch_id_'binding_calls'_in_'section_libref_xxx_tex' = { patchname => "binding_calls", filename => path.section_libref_xxx_tex };
# Next, load into memory all the files which we will be patching:
#
pfs = (pfs::load_patchfiles
[
path.xxx_client_driver_api,
path.xxx_client_driver_for_library_in_c_subprocess_pkg,
path.xxx_client_driver_for_library_in_main_process_pkg,
path.xxx_client_g_pkg,
path.xxx_client_api,
path.mythryl_xxx_library_in_c_subprocess_c,
path.libmythryl_xxx_c,
path.section_libref_xxx_tex
]
);
# Clear out the current contents of all patches,
# to make way for the new versions we are about
# to create:
#
pfs = pfs::empty_all_patches pfs;
# Initialize all of our state:
plain_fns_codebuilt_for_'libmythryl_xxx_c' = REF 0;
custom_fns_codebuilt_for_'libmythryl_xxx_c' = REF 0;
plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' = REF 0;
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c' = REF 0;
plain_fns_handbuilt_for_'xxx_client_g_pkg' = REF 0;
plain_fns_codebuilt_for_'xxx_client_g_pkg' = REF 0;
# XXX SUCKO FIXME was one of these supposed to be 'codebuilt'?
callback_fns_handbuilt_for_'xxx_client_g_pkg' = REF 0;
nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c' = REF (sm::empty: sm::Map( Pfs -> Custom_Body_Stuff -> Pfs ));
nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c' = REF (sm::empty: sm::Map( Pfs -> Custom_Body_Stuff2 -> Pfs ));
#
arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c' = REF (sm::empty: sm::Map( (String,Int,String) -> String ));
arg_load_fns_for_'libmythryl_xxx_c' = REF (sm::empty: sm::Map( (String,Int,String) -> String ));
#
figure_function_result_type_fns = REF (sm::empty: sm::Map( String -> String ));
#
do_command_for = REF (sm::empty: sm::Map( String ));
do_command_to_string_fn = REF (sm::empty: sm::Map( String ));
#
client_driver_arg_type = REF (sm::empty: sm::Map( String ));
client_driver_result_type = REF (sm::empty: sm::Map( String ));
#
fun libcall_to_args_fn libcall
=
# 'libcall' is from a line in (say) src/opt/gtk/etc/gtk-construction.plan
# looking something like libcall: gtk_table_set_row_spacing( GTK_TABLE(/*table*/w0), /*row*/i1, /*spacing*/i2)
#
# 'libcall' contains embedded arguments like 'w0', 'i1', 'f2', 'b3', 's4'.
# They are what we are interested in here;
# our job is to return a sorted, duplicate-free list of them.
#
# The implementation here is generic; glue for a particular library
# may override it to support additional argument types (like 'w').
# See for example libcall_to_args_fn() in src/opt/gtk/sh/make-gtk-glue
#
# The argument letter gives us the argument type:
#
# i == int
# f == double (Mythryl "Float")
# b == bool
# s == string
#
# The argument digit gives us the argument order:
#
# 0 == first arg
# 1 == second arg
# ...
#
# Get list of above args, sorting by trailing digit
# and dropping duplicates:
#
{ raw_list = regex::find_all_matches_to_regex ./\b[bfis][0-9]\b/ libcall;
#
cooked_list = uniquesort compare_fn raw_list;
cooked_list;
}
where
fun compare_fn (xn, yn) # Compare "s0" and "b1" as "0" and "1":
=
{ xn' = string::extract (xn, 1, NULL);
yn' = string::extract (yn, 1, NULL);
string::compare (xn', yn');
};
end;
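# Example (a sketch, using the gtk plan line quoted in the comments above):
# with only the generic argument letters 'b', 'f', 'i', 's' recognized,
#
#     libcall_to_args_fn "gtk_table_set_row_spacing( GTK_TABLE(/*table*/w0), /*row*/i1, /*spacing*/i2)"
#
# would return [ "i1", "i2" ] -- the 'w0' argument is picked up only
# once a library's glue (e.g. make-gtk-glue) installs a plugin
# libcall_to_args_fn that also recognizes 'w'.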
ref_libcall_to_args_fn = REF libcall_to_args_fn;
#
fun libcall_to_args libcall
=
*ref_libcall_to_args_fn libcall;
# Convenience functions to append
# lines (strings) to our patchpoints:
#
fun to_xxx_client_driver_api pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_driver_api' };
fun to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'body'_in_'xxx_client_driver_for_library_in_c_subprocess_pkg' };
fun to_xxx_client_driver_for_library_in_main_process_pkg pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'body'_in_'xxx_client_driver_for_library_in_main_process_pkg' };
fun to_xxx_client_g_pkg_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_g_pkg' };
fun to_xxx_client_g_pkg_types pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'types'_in_'xxx_client_g_pkg' };
fun to_xxx_client_api_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'xxx_client_api' };
fun to_xxx_client_api_types pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'types'_in_'xxx_client_api' };
fun to_mythryl_xxx_library_in_c_subprocess_c_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'mythryl_xxx_library_in_c_subprocess_c' };
fun to_mythryl_xxx_library_in_c_subprocess_c_trie pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'table'_in_'mythryl_xxx_library_in_c_subprocess_c' };
fun to_libmythryl_xxx_c_table pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'table'_in_'libmythryl_xxx_c' };
fun to_libmythryl_xxx_c_funs pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'functions'_in_'libmythryl_xxx_c' };
fun to_section_libref_xxx_tex_apitable pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'api_calls'_in_'section_libref_xxx_tex' };
fun to_section_libref_xxx_tex_libtable pfs string = pfs::append_to_patch pfs { lines => [ string ], patch_id => patch_id_'binding_calls'_in_'section_libref_xxx_tex' };
# Save and index resources supplied by client:
#
fun note_plugins plugins
=
apply note_plugin plugins
where
fun note_plugin (LIBCALL_TO_ARGS_FN libcall_to_args_fn)
=>
ref_libcall_to_args_fn := libcall_to_args_fn;
note_plugin (BUILD_ARG_LOAD_FOR_'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS' (arg_type, arg_load_builder))
=>
arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c' := sm::set (*arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c', arg_type, arg_load_builder);
note_plugin (BUILD_ARG_LOAD_FOR_'LIBMYTHRYL_XXX_C' (arg_type, arg_load_builder))
=>
arg_load_fns_for_'libmythryl_xxx_c' := sm::set (*arg_load_fns_for_'libmythryl_xxx_c', arg_type, arg_load_builder);
note_plugin (HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'MYTHRYL_XXX_LIBRARY_IN_C_SUBPROCESS_C' (result_type, function))
=>
nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c' := sm::set (*nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c', result_type, function);
note_plugin (HANDLE_NONSTANDARD_RESULT_TYPE_FOR__BUILD_PLAIN_FUN_FOR__'LIBMYTHRYL_XXX_C' (result_type, function))
=>
nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c' := sm::set (*nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c', result_type, function);
note_plugin (FIGURE_FUNCTION_RESULT_TYPE (type, function))
=>
figure_function_result_type_fns := sm::set (*figure_function_result_type_fns, type, function);
note_plugin (DO_COMMAND_FOR_'XXX_CLIENT_DRIVER_FOR_LIBRARY_IN_C_SUBPROCESS_PKG' (type, function))
=>
do_command_for := sm::set (*do_command_for, type, function);
note_plugin (DO_COMMAND_TO_STRING_FN (type, function))
=>
do_command_to_string_fn := sm::set (*do_command_to_string_fn, type, function);
note_plugin (CLIENT_DRIVER_ARG_TYPE (type, type2))
=>
client_driver_arg_type := sm::set (*client_driver_arg_type, type, type2);
note_plugin (CLIENT_DRIVER_RESULT_TYPE (type, type2))
=>
client_driver_result_type := sm::set (*client_driver_result_type, type, type2);
end;
end;
#
fun field_location (field: Field)
=
field.line_1 == field.line_n ?? sprintf "line %d" field.line_1
:: sprintf "lines %d-%d" field.line_1 field.line_n;
#
fun maybe_get_field (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE; THE (string::cat field.lines); };
NULL => NULL;
esac;
#
fun get_field (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE;
string::cat field.lines;
};
NULL => die_x (sprintf "Required field %s missing\n" field_name);
esac;
#
fun get_field_location (fields: Fields, field_name)
=
case (sm::get (fields, field_name))
#
THE field => { field.used := TRUE; field_location field; };
#
NULL => die_x (sprintf "Required field %s missing\n" field_name);
esac;
#
fun clear_state (state: State)
=
{ foreach (sm::keyvals_list *state.fields) {.
#
#pair -> (field_name, field);
if (not *field.used)
#
die_x(sprintf "Field %s at %s unsupported.\n"
field_name
(field_location field)
);
fi;
};
state.fields := (sm::empty: sm::Map( Field ));
};
# Count number of arguments.
# We need this for check_argc():
#
fun count_args libcall
=
list::length (libcall_to_args libcall);
#
fun get_nth_arg_type (n, libcall)
=
{ arg_list = libcall_to_args libcall;
if (n < 0
or n >= list::length arg_list
)
raise exception DIE (sprintf "get_nth_arg_type: No %d-th arg in '%s'!" n libcall);
fi;
arg = list::nth (arg_list, n); # Fetch "w0" or "i0" or such.
string::extract (arg, 0, THE 1); # Convert "w0" to "w" or "i0" to "i" etc.
};
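# Example (hypothetical libcall, for illustration only): for a plan line
# whose libcall is  xxx_frob( /*flag*/b0, /*count*/i1 )  the arg list is
# [ "b0", "i1" ], so
#
#     get_nth_arg_type (0, libcall)    would return "b"
#     get_nth_arg_type (1, libcall)    would return "i"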
#
fun arg_types_are_all_unique libcall
=
{ # Get the list of parameters,
# something like [ "w0", "i1", "i2" ]:
#
args = libcall_to_args libcall;
# Turn parameter list into type list,
# something like [ 'w', 'i', 'i' ]:
#
types = map {. string::get_byte_as_char (#string,0); } args;
# Eliminate duplicate types from above:
#
types = uniquesort char::compare types;
# If 'args' is same length as 'types' then
# all types are unique:
#
list::length args == list::length types;
};
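# Example (hypothetical libcalls, for illustration only):
#
#     arg_types_are_all_unique "xxx_frob( /*flag*/b0, /*count*/i1 )"    would be TRUE  ('b' vs 'i')
#     arg_types_are_all_unique "xxx_glorp( /*row*/i0, /*col*/i1 )"      would be FALSE ('i' appears twice)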
#
fun xxx_client_driver_api_type (libcall, result_type)
=
{ input_type = REF "(Session";
#
arg_count = count_args libcall;
for (a = 0; a < arg_count; ++a) {
#
t = get_nth_arg_type( a, libcall );
case t
"b" => input_type := *input_type + ", Bool";
"i" => input_type := *input_type + ", Int";
"f" => input_type := *input_type + ", Float";
"s" => input_type := *input_type + ", String";
#
x => case (sm::get (*client_driver_arg_type, x))
#
THE type2 => input_type := *input_type + ", " + type2; # Handle "w" etc
NULL => raise exception DIE (sprintf "Unsupported arg type '%s'" t);
esac;
esac;
};
input_type := *input_type + ")";
output_type
=
case result_type
#
"Bool" => "Bool";
"Float" => "Float";
"Int" => "Int";
"Void" => "Void";
#
x => case (sm::get (*client_driver_result_type, x))
#
THE type2 => type2; # "Widget", "new Widget"
#
NULL => { printf "Supported result types:\n";
print_strings (sm::keys_list *client_driver_result_type);
raise exception DIE ("xxx_client_driver_api_type: Unsupported result type: " + result_type);
};
esac;
esac;
(*input_type, output_type);
};
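# Example (hypothetical libcall and result type, for illustration only):
#
#     xxx_client_driver_api_type ("xxx_frob( /*flag*/b0, /*count*/i1 )", "Int")
#
# would return the pair ("(Session, Bool, Int)", "Int") -- the leading
# Session slot is always present, and each argument letter maps to its
# Mythryl type per the case above.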
#
stipulate
#
line_count = REF 2;
herein
#
fun build_fun_declaration_for_'xxx_client_driver_api' (pfs: Pfs) { c_fn_name, libcall, result_type }
=
{
# Add a blank line every three declarations:
#
line_count := *line_count + 1;
#
pfs = if ((*line_count % 3) == 0)
#
to_xxx_client_driver_api pfs "\n";
else
pfs;
fi;
pfs = to_xxx_client_driver_api pfs (sprintf " %-40s" (c_fn_name + ":"));
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
pfs = to_xxx_client_driver_api pfs (sprintf "%-40s -> %s;\n" input_type output_type);
pfs;
};
end;
#
fun write_do_command (pfs: Pfs) (do_command, fn_name, libcall, result_prefix, result_expression)
=
{
pfs = if (result_expression != "")
to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" { result = " + do_command + " (session");
else to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" " + do_command + " (session");
fi;
pfs = if (result_prefix != "") to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (.', "' + result_prefix + .'"');
else pfs;
fi;
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (.', "' + fn_name + .'"');
prefix = .' + " " +';
arg_count = count_args libcall;
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
#
t = get_nth_arg_type( a, libcall );
pfs = case t
"b" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s bool_to_string %s%d" prefix t a);
"f" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s eight_byte_float::to_string %s%d" prefix t a);
"i" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s int::to_string %s%d" prefix t a);
"s" => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s string_to_string %s%d" prefix t a);
#
x => case (sm::get (*do_command_to_string_fn, x))
#
THE to_string => to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf "%s %s %s%d" prefix to_string t a);
#
NULL => raise exception DIE ("Unsupported arg type '" + x + "'");
esac;
esac;
};
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs ");\n";
pfs = if (result_expression != "")
#
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs "\n";
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" " + result_expression + "\n");
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs " };\n\n\n";
pfs;
else
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs "\n\n";
pfs;
fi;
pfs;
};
# Build a function for .../src/opt/xxx/src/xxx-client-driver-for-library-in-c-subprocess.pkg
# looking like
#
# fun make_status_bar_context_id (session, w0, s1) # Int
# =
# do_int_command (session, "make_status_bar_context_id", "make_status_bar_context_id" + " " + widget_to_string w0 + " " + string_to_string s1);
#
fun build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg' (pfs: Pfs) { c_fn_name, libcall, result_type }
=
{ pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" fun " + c_fn_name + " (session");
#
arg_count = count_args( libcall );
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
#
arg_type = get_nth_arg_type( a, libcall );
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (sprintf ", %s%d" arg_type a);
pfs;
};
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (")\t# " + result_type + "\n");
pfs = to_xxx_client_driver_for_library_in_c_subprocess_pkg pfs (" =\n");
pfs = if (result_type == "Int") write_do_command pfs ("do_int_command", c_fn_name, libcall, c_fn_name, "");
elif (result_type == "Bool") write_do_command pfs ("do_string_command", c_fn_name, libcall, c_fn_name, "the (int::from_string result) != 0;");
elif (result_type == "Float") write_do_command pfs ("do_string_command", c_fn_name, libcall, c_fn_name, "the (eight_byte_float::from_string result);");
elif (result_type == "Void") write_do_command pfs ("do_void_command", c_fn_name, libcall, "", "");
else
case (sm::get (*do_command_for, result_type))
#
THE do_command => write_do_command pfs (do_command, c_fn_name, libcall, c_fn_name, "");
#
NULL => raise exception DIE ("Unsupported result type: " + result_type);
esac;
fi;
pfs;
};
#
fun n_blanks n
=
n_blanks' (n, "")
where
fun n_blanks' (0, string) => string;
n_blanks' (i, string) => n_blanks' (i - 1, " " + string);
end;
end;
# Build a function for .../src/opt/xxx/src/xxx-client-driver-for-library-in-main-process.pkg
# looking like
#
# NEED TO WORK OUT APPROPRIATE VARIATION FOR THIS
#
# fun make_status_bar_context_id (session, w0, s1) # Int
# =
# do_int_command (session, "make_status_bar_context_id", "make_status_bar_context_id" + " " + widget_to_string w0 + " " + string_to_string s1);
#
fun build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg' (pfs: Pfs) { fn_name, c_fn_name, fn_type, libcall, result_type }
=
{
# Construct xxx-client-driver-for-library-in-main-process.pkg level type for this function.
# The xxx-client-g.pkg level type may involve records or tuples,
# but at this level we always have tuples:
#
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs "\n";
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
(sprintf " # %-80s # %s type\n"
( (n_blanks (string::length_in_bytes fn_name))
+ (fn_type =~ ./^\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
+ fn_type
)
(basename path.xxx_client_api)
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
(sprintf " my %s: %s%s -> %s\n"
c_fn_name
(input_type =~ ./^\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
input_type
output_type
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs " =\n";
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs
#
(sprintf " ci::find_c_function { lib_name => \"%s\", fun_name => \"%s\" };\n"
path.lib_name
c_fn_name
);
pfs = to_xxx_client_driver_for_library_in_main_process_pkg pfs "\n";
pfs;
};
# Convert .|xxx_foo| to .|xxx\_foo|
# to protect it from TeX's ire:
#
fun slash_underlines string
=
regex::replace_all ./_/ .|\_| string;
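# Example:
#
#     slash_underlines "gtk_table_set_row_spacing"
#
# would return "gtk\_table\_set\_row\_spacing".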
# Write a trie line into file src/opt/xxx/c/in-sub/mythryl-xxx-library-in-c-subprocess.c
#
fun build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) name
=
{
to_mythryl_xxx_library_in_c_subprocess_c_trie pfs
#
(sprintf
" set_trie( trie, %-46s%-46s);\n"
(.'"' + name + .'",')
("do__" + name));
};
# Write a line like
#
# CFUNC("init","init", do__gtk_init, "Void -> Void")
#
# into file src/opt/xxx/c/in-main/libmythryl-xxx.c
#
fun build_table_entry_for_'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type)
=
{ to_libmythryl_xxx_c_table pfs
#
(sprintf "CFUNC(%-44s%-44s%-54s%s%s)\n"
("\"" + fn_name + "\",")
("\"" + fn_name + "\",")
("do__" + fn_name + ",")
(fn_type =~ ./^\(/ ?? "" :: " ") # If type starts with a paren exdent it one space.
("\"" + fn_type + "\"")
);
};
Doc_Entry
=
{ fn_name: String,
libcall: String,
url: String,
fn_type: String
};
doc_entries = REF ([]: List( Doc_Entry ));
# Note a tex documentation table
# line for file section-libref-xxx.tex.
#
fun note__section_libref_xxx_tex__entry
#
(pfs: Pfs) # We don't actually use this at present, but this regularizes the code, and a future version might use it.
#
{ fields: Fields,
fn_name, # E.g. "make_window"
libcall, # E.g. "gtk_table_set_col_spacing( GTK_TABLE(/*table*/w0), /*col*/i1, /*spacing*/i2)"
url, # E.g. "http://library.gnome.org/devel/gtk/stable/GtkTable.html#gtk-table-set-col-spacing"
fn_type # E.g. "Session -> Widget"
}
=
{
# Get name of the C Gtk function/var
# wrapped by this Mythryl function:
#
libcall
=
case (maybe_get_field(fields,"doc-fn"))
#
THE field => field; # doc-fn is a manual override used when libcall is unusable for documentation.
NULL =>
{ # libcall is something like gtk_widget_set_size_request( GTK_WIDGET(/*widget*/w0), /*wide*/i1, /*high*/i2)
# but all we want here is the
# initial function name:
#
libcall = case (regex::find_first_match_to_regex ./[A-Za-z0-9_']+/ libcall)
THE x => x;
NULL => "";
esac;
# If libcall does not begin with [Gg], it
# is probably not useful in this context:
#
libcall = (libcall =~ ./^[Gg]/) ?? libcall
:: "";
libcall;
};
esac;
fn_name = slash_underlines fn_name;
libcall = slash_underlines libcall;
url = slash_underlines url; # Probably not needed.
fn_type = slash_underlines fn_type;
doc_entries := { fn_name, libcall, url, fn_type } ! *doc_entries;
pfs;
};
# Write tex documentation table into file section-libref-xxx.tex:
#
fun write_section_libref_xxx_tex_table
#
(pfs: Pfs)
#
( field1: Doc_Entry -> String,
field2: Doc_Entry -> String,
to_section: Pfs -> String -> Pfs
)
=
{
# Define the sort order for the table:
#
fun compare_fn
( a: Doc_Entry,
b: Doc_Entry
)
=
{ a1 = field1 a; a2 = field2 a;
b1 = field1 b; b2 = field2 b;
# If primary keys are equal,
# sort on the secondary keys:
#
if (a1 != b1) a1 > b1;
else a2 > b2;
fi;
};
entries = sort compare_fn *doc_entries;
pfs = fold_forward
(\\ (entry, pfs)
=
{ entry -> { fn_name, libcall, url, fn_type };
#
entry1 = field1 entry;
entry2 = field2 entry;
pfs = if (entry1 != "")
to_section pfs
(sprintf "%s & %s & %s & %s \\\\ \\hline\n"
entry1
entry2
(url == "" ?? ""
:: (.|\ahref{\url{| + url + "}}{doc}"))
fn_type
);
else
pfs;
fi;
pfs;
}
)
pfs # Initial value of result.
entries # Iterate over this list.
;
pfs;
};
#
fun build_fun_header_for__'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) (fn_name, args)
=
{ pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "static void\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs ("do__" + fn_name + "( int argc, unsigned char** argv )\n");
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "{\n";
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " check_argc( \"do__%s\", %d, argc );\n" fn_name args);
pfs = to_mythryl_xxx_library_in_c_subprocess_c_funs pfs "\n";
pfs;
};
# Build C code
# to fetch all the arguments
# out of argc/argv:
#
fun build_fun_arg_loads_for_'mythryl_xxx_library_in_c_subprocess_c' (pfs: Pfs) (fn_name, args, libcall)
=
{
pfs = for (a = 0, pfs = pfs; a < args; ++a; pfs) {
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
pfs = if (arg_type == "b") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " int b%d = bool_arg( argc, argv, %d );\n" a a);
elif (arg_type == "f") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " double f%d = double_arg( argc, argv, %d );\n" a a);
elif (arg_type == "i") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " int i%d = int_arg( argc, argv, %d );\n" a a);
elif (arg_type == "s") to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (sprintf " char* s%d = string_arg( argc, argv, %d );\n" a a);
else
case (sm::get (*arg_load_fns_for_'mythryl_xxx_library_in_c_subprocess_c', arg_type)) # Custom library-specific arg type handling for "w" etc.
#
THE build_arg_load_fn => to_mythryl_xxx_library_in_c_subprocess_c_funs pfs (build_arg_load_fn (arg_type, a, libcall));
#
NULL => raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #" + int::to_string a + " from libcall '" + libcall + "\n");
esac;
fi;
pfs;
};
pfs;
};
# Synthesize a function for mythryl-xxx-library-in-c-subprocess.c like
#
# static void
# do__set_adjustment_value( int argc, unsigned char** argv )
# {
# check_argc( "do__make_label", 2, argc );
#
# { GtkAdjustment* w0 = (GtkAdjustment*) widget_arg( argc, argv, 0 );
# double f1 = double_arg( argc, argv, 1 );
#
# gtk_adjustment_set_value( GTK_ADJUSTMENT(w0), /*value*/f1);
# }
# }
#
fun build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result # E.g., "Float"
)
=
{ to = to_mythryl_xxx_library_in_c_subprocess_c_funs;
#
arg_count = count_args libcall;
pfs = build_fun_header_for__'mythryl_xxx_library_in_c_subprocess_c' pfs (fn_name, arg_count);
pfs = build_fun_arg_loads_for_'mythryl_xxx_library_in_c_subprocess_c' pfs (fn_name, arg_count, libcall);
libcall_more
=
case (maybe_get_field (fields, "libcal+")) THE field => field;
NULL => "";
esac;
pfs = case result
#
"Void"
=>
{ # Now we just print
# the supplied gtk call
# and wrap up:
#
pfs = to pfs "\n";
pfs = to pfs (" " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "}\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\n");
pfs;
};
"Bool"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" int result = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs (" printf( \"" + fn_name + "%d\\n\", result); fflush( stdout );\n");
pfs = to pfs (" fprintf(log_fd, \"SENT: " + fn_name + "%d\\n\", result); fflush( log_fd );\n");
pfs = to pfs "}\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\n");
pfs;
};
"Float"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" double result = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs (" printf( \"" + fn_name + "%f\\n\", result); fflush( stdout );\n");
pfs = to pfs (" fprintf(log_fd, \"SENT: " + fn_name + "%f\\n\", result); fflush( log_fd );\n");
pfs = to pfs "}\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\n");
pfs;
};
"Int"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" int result = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs (" printf( \"" + fn_name + "%d\\n\", result); fflush( stdout );\n");
pfs = to pfs (" fprintf(log_fd, \"SENT: " + fn_name + "%d\\n\", result); fflush( log_fd );\n");
pfs = to pfs "}\n";
pfs = to pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' per " + path.construction_plan + ". */\n");
pfs;
};
_ => case (sm::get (*nonstandard_result_type_handlers_for__build_plain_fun_for__'mythryl_xxx_library_in_c_subprocess_c', result)) # Custom library-specific arg type handling for "Widget", "new Widget" etc.
#
THE build_fn => build_fn pfs { fn_name, libcall, libcall_more, to_mythryl_xxx_library_in_c_subprocess_c_funs, path };
NULL => raise exception DIE (sprintf "Unsupported result type '%s'" result);
esac;
esac;
plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c'
:=
*plain_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c'
+ 1;
pfs;
};
#
fun build_fun_header_for__'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type, args, libcall, result_type)
=
{
(xxx_client_driver_api_type (libcall, result_type))
->
(input_type, output_type);
# C comments don't nest, so we must change
# any C comments in input_type or output_type:
#
input_type = regex::replace_all .|/\*| "(*" input_type;
input_type = regex::replace_all .|\*/| "*)" input_type;
#
output_type = regex::replace_all .|/\*| "(*" output_type;
output_type = regex::replace_all .|\*/| "*)" output_type;
pfs = to_libmythryl_xxx_c_funs pfs ("/* do__" + fn_name + "\n");
pfs = to_libmythryl_xxx_c_funs pfs " *\n";
pfs = to_libmythryl_xxx_c_funs pfs (" * " + (basename path.xxx_client_api) + " type: " + ( fn_type =~ ./^\(/ ?? "" :: " ") + fn_type + "\n");
pfs = to_libmythryl_xxx_c_funs pfs (" * " + (basename path.xxx_client_driver_api) + " type: " + (input_type =~ ./^\(/ ?? "" :: " ") + input_type + " -> " + output_type + "\n");
pfs = to_libmythryl_xxx_c_funs pfs " */\n";
pfs = to_libmythryl_xxx_c_funs pfs ("static Val do__" + fn_name + " (Task* task, Val arg)\n");
pfs = to_libmythryl_xxx_c_funs pfs "{\n";
pfs = to_libmythryl_xxx_c_funs pfs "\n";
pfs;
};
#
fun build_fun_trailer_for__'libmythryl_xxx_c' pfs
=
{
pfs = to_libmythryl_xxx_c_funs pfs "}\n";
pfs = to_libmythryl_xxx_c_funs pfs ("/* Above fn built by src/lib/make-library-glue/make-library-glue.pkg: write_libmythryl_xxx_c_plain_fun per " + path.construction_plan + ". */\n");
pfs = to_libmythryl_xxx_c_funs pfs "\n";
pfs = to_libmythryl_xxx_c_funs pfs "\n";
pfs;
};
# Build C code
# to fetch all the arguments
# out of argc/argv:
#
fun build_fun_arg_loads_for__'libmythryl_xxx_c' (pfs: Pfs) (fn_name, fn_type, args, libcall)
=
{
case args
0 => pfs;
# Having just one argument used to be a special case
# because then we passed the argument directly rather
# than packed within a tuple. But the first argument
# to a gtk-client-driver-for-library-in-main-process.pkg function is always a Session,
# and it is more efficient to pass on the tuple from
# that layer to the mythryl-gtk-library-in-main-process.c layer rather than
# unpacking and repacking just to get rid of the Session
# argument, consequently if we have any arguments of
# interest (i.e., non-Session arguments) at this point
# we will always have a tuple, eliminating the special
# case. I've left this code here, commented out, just
# in case this situation changes and it is needed again:
#
#
# 1 => { arg_type = get_nth_arg_type( 0, libcall );
#
# if (arg_type == "b") to_libmythryl_xxx_c_funs " int b0 = TAGGED_INT_TO_C_INT(arg) == HEAP_TRUE;\n";
# elif (arg_type == "f") to_libmythryl_xxx_c_funs " double f0 = *(PTR_CAST(double*, arg));\n";
# elif (arg_type == "i") to_libmythryl_xxx_c_funs " int i0 = TAGGED_INT_TO_C_INT(arg);\n";
# elif (arg_type == "s") to_libmythryl_xxx_c_funs " char* s0 = HEAP_STRING_AS_C_STRING(arg);\n";
# elif (arg_type == "w")
#
# # Usually we fetch a widget as just
# #
# # GtkWidget* widget = widget[ TAGGED_INT_TO_C_INT(arg) ];
# #
# # or such, but in a few cases we must cast to
# # another type:
# # o If we see GTK_ADJUSTMENT(w0) we must do GtkAdjustment* w0 = (GtkAdjustment*) widget[ TAGGED_INT_TO_C_INT(arg) ];
# # o If we see GTK_SCALE(w0) we must do GtkScale* w0 = (GtkScale*) widget[ TAGGED_INT_TO_C_INT(arg) ];
# # o If we see GTK_RADIO_BUTTON(w0) we must do GtkRadioButton* w0 = (GtkRadioButton*) widget[ TAGGED_INT_TO_C_INT(arg) ];
#
# widget_type = REF "GtkWidget";
#
# if (libcall =~ ./GTK_ADJUSTMENT\(\s*w0\s*\)/) widget_type := "GtkAdjustment";
# elif (libcall =~ ./GTK_SCALE\(\s*w0\s*\)/) widget_type := "GtkScale";
# elif (libcall =~ ./GTK_RADIO_BUTTON\(\s*w0\s*\)/) widget_type := "GtkRadioButton";
# fi;
#
# to_libmythryl_xxx_c_funs (sprintf " %-14s w0 = %-16s widget[ TAGGED_INT_TO_C_INT(arg) ];\n"
# (*widget_type + "*")
# ("(" + *widget_type + "*)")
# );
#
# else
# raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #0 from libcall '" + libcall + "\n");
# fi;
# };
_ => { if (args < 0) die_x "build_fun_arg_loads_for__'libmythryl_xxx_c': Negative 'args' value not supported."; fi;
#
pfs = for (a = 0, pfs = pfs; a < args; ++a; pfs) {
#
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
pfs = if (arg_type == "b") to_libmythryl_xxx_c_funs pfs (sprintf " int b%d = GET_TUPLE_SLOT_AS_VAL( arg, %d) == HEAP_TRUE;\n" a (a+1)); # +1 because 1st arg is always Session.
elif (arg_type == "f") to_libmythryl_xxx_c_funs pfs (sprintf " double f%d = *(PTR_CAST(double*, GET_TUPLE_SLOT_AS_VAL( arg, %d)));\n" a (a+1));
elif (arg_type == "i") to_libmythryl_xxx_c_funs pfs (sprintf " int i%d = GET_TUPLE_SLOT_AS_INT( arg, %d);\n" a (a+1));
elif (arg_type == "s") to_libmythryl_xxx_c_funs pfs (sprintf " char* s%d = HEAP_STRING_AS_C_STRING (GET_TUPLE_SLOT_AS_VAL( arg, %d));\n" a (a+1));
else
case (sm::get (*arg_load_fns_for_'libmythryl_xxx_c', arg_type)) # Custom library-specific arg type handling for "w" etc.
#
THE build_arg_load_fn => to_libmythryl_xxx_c_funs pfs (build_arg_load_fn (arg_type, a, libcall));
#
NULL => raise exception DIE ("Bug: unsupported arg type '" + arg_type + "' #" + int::to_string a + " from libcall '" + libcall + "'\n");
esac;
fi;
pfs;
};
pfs;
};
esac;
};
#
fun build_fun_body_for__'libmythryl_xxx_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result_type # E.g., "Float"
)
=
{
to = to_libmythryl_xxx_c_funs;
libcall_more
=
case (maybe_get_field (fields, "libcal+")) THE field => field;
NULL => "";
esac;
pfs = case result_type
#
"Void"
=>
{ # Now we just print
# the supplied gtk call
# and wrap up:
#
pfs = to pfs "\n";
pfs = to pfs (" " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs " return HEAP_VOID;\n";
#
pfs;
};
"Bool"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" int result = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs " return result ? HEAP_TRUE : HEAP_FALSE;\n";
#
pfs;
};
"Float"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" double d = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs " return make_float64(task, d );\n";
#
pfs;
};
"Int"
=>
{ pfs = to pfs "\n";
pfs = to pfs (" int result = " + libcall + ";\n"); pfs = if (libcall_more != "") to pfs libcall_more; else pfs; fi;
pfs = to pfs "\n";
pfs = to pfs " return TAGGED_INT_FROM_C_INT(result);\n";
#
pfs;
};
_ => case (sm::get (*nonstandard_result_type_handlers_for__build_plain_fun_for__'libmythryl_xxx_c', result_type)) # Custom library-specific arg type handling for "Widget", "new Widget" etc.
#
THE build_fn => build_fn pfs { fn_name, libcall, libcall_more, to_libmythryl_xxx_c_funs, path };
#
NULL => raise exception DIE (sprintf "Unsupported result type '%s'" result_type);
esac;
esac;
pfs;
};
# Synthesize a function for libmythryl-xxx.c like
#
# /* do__gtk_init : Void -> Void
# *
# *
# */
#
# static Val do__gtk_init (Task* task, Val arg)
# {
# int y = INT1_LIB7toC( GET_TUPLE_SLOT_AS_INT(arg, 0) );
# char *symname = HEAP_STRING_AS_C_STRING( GET_TUPLE_SLOT_AS_VAL(arg, 1) );
# int lazy = GET_TUPLE_SLOT_AS_VAL(arg, 2) == HEAP_TRUE;
#
# int result = move( y, x );
#
# if (result == ERR) return RAISE_ERROR__MAY_HEAPCLEAN(task, "move", NULL);
#
# return HEAP_VOID;
# }
#
#
#
# Cheatsheet:
#
# Accepting a lone float arg:
# double d = *(PTR_CAST(double*, arg)); # Example in src/c/lib/math/cos64.c
#
# Accepting a lone int arg:
# int socket = TAGGED_INT_TO_C_INT(arg); # Example in src/c/lib/socket/accept.c
#
# Accepting a lone string arg: # Example in src/c/lib/posix-file-system/readlink.c
# char* path = HEAP_STRING_AS_C_STRING(arg);
#
# Accepting a lone Null_Or( Tuple ) arg: # Example in src/c/lib/socket/get-protocol-by-name.c
#
# Accepting a Bool from a tuple: # Example in src/c/lib/dynamic-loading/dlopen.c
# int lazy = GET_TUPLE_SLOT_AS_VAL (arg, 1) == HEAP_TRUE;
#
# Accepting an Int from a tuple: # Example in src/c/lib/posix-file-system/fchown.c
# int fd = GET_TUPLE_SLOT_AS_INT (arg, 0);
#
# Accepting a String from a tuple: # Example in src/c/lib/dynamic-loading/dlsym.c
# char *symname = HEAP_STRING_AS_C_STRING (GET_TUPLE_SLOT_AS_VAL (arg, 1));
#
# Accepting a Float from a tuple: # THIS IS MY OWN GUESS!
# double d = *(PTR_CAST(double*, GET_TUPLE_SLOT_AS_VAL(arg,%d)));
#
# Accepting a Null_Or(String) from a tuple: # Example in src/c/lib/dynamic-loading/dlopen.c
#
#
# Returning
#
# Void: return HEAP_VOID; # Defined in src/c/h/runtime-values.h
# TRUE: return HEAP_TRUE; # Defined in src/c/h/runtime-values.h
# FALSE: return HEAP_FALSE; # Defined in src/c/h/runtime-values.h
# Int: return TAGGED_INT_FROM_C_INT(size); # Defined in src/c/h/runtime-values.h
# NULL: return OPTION_NULL; # Defined in src/c/h/make-strings-and-vectors-etc.h Example in src/c/machine-dependent/interprocess-signals.c
# THE foo: return OPTION_THE(task, foo); # Defined in src/c/h/make-strings-and-vectors-etc.h
# # Example in src/c/machine-dependent/interprocess-signals.c
#
# Returning a float:
# return make_float64(task, cos(d) ); # Defined in src/c/h/make-strings-and-vectors-etc.h
#
# Returning a string:
# Val result = allocate_nonempty_ascii_string__may_heapclean(task, size, NULL);
# strncpy (HEAP_STRING_AS_C_STRING(result), buf, size);
# return result;
#
# Returning a tuple: # Example from src/c/lib/date/gmtime.c
#
# set_slot_in_nascent_heapchunk(task, 0, MAKE_TAGWORD(PAIRS_AND_RECORDS_BTAG, 9));
# set_slot_in_nascent_heapchunk(task, 1, TAGGED_INT_FROM_C_INT(tm->tm_sec));
# ...
# set_slot_in_nascent_heapchunk(task, 9, TAGGED_INT_FROM_C_INT(tm->tm_isdst));
#
# return commit_nascent_heapchunk(task, 9);
#
#
# Return functions which check ERR
# and optionally raise an exception: src/c/lib/raise-error.h
#
# CHK_RETURN_VAL(task, status, val) Check status for an error (< 0); if okay,
# then return val. Otherwise raise
# SYSTEM_ERROR with the appropriate system
# error message.
#
# CHK_RETURN(task, status) Check status for an error (< 0); if okay,
# then return it as the result (after
# converting to an Lib7 int).
#
# CHK_RETURN_UNIT(task, status) Check status for an error (< 0); if okay,
# then return Void.
#
# GET_TUPLE_SLOT_AS_VAL &Co are from: src/c/h/runtime-values.h
# allocate_nonempty_ascii_string__may_heapclean is from: src/c/h/make-strings-and-vectors-etc.h
# CHK_RETURN_VAL &Co are from: src/c/lib/raise-error.h
#
fun build_plain_fun_for_'libmythryl_xxx_c'
#
(pfs: Pfs)
#
( x: Builder_Stuff,
fields: Fields,
fn_name, # E.g., "make_window2"
fn_type, # E.g., "Session -> Widget".
libcall, # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
result_type # E.g., "Float"
)
=
{ arg_count = count_args( libcall );
#
pfs = build_fun_header_for__'libmythryl_xxx_c' pfs ( fn_name, fn_type, arg_count, libcall, result_type);
pfs = build_fun_arg_loads_for__'libmythryl_xxx_c' pfs ( fn_name, fn_type, arg_count, libcall);
pfs = build_fun_body_for__'libmythryl_xxx_c' pfs (x, fields, fn_name, fn_type, libcall, result_type);
pfs = build_fun_trailer_for__'libmythryl_xxx_c' pfs;
plain_fns_codebuilt_for_'libmythryl_xxx_c'
:=
*plain_fns_codebuilt_for_'libmythryl_xxx_c'
+ 1;
pfs;
};
# Given a libcall like "gtk_foo( /*bar_to_int bar*/i0, /*zot*/i1 )"
# and a parameter name like "i0" or "i1"
# return nickname like "bar_to_int bar" or "zot"
# if available, else "i0" or "i1":
#
fun arg_name (arg, libcall)
=
{ regex = .|/\*([A-Za-z0-9_' ]+)\*/| + arg; # Something like: /*([A-Za-z0-9_' ]+)*/f0
#
case (regex::find_first_match_to_ith_group 1 regex libcall)
THE x => x;
NULL => arg;
esac;
};
# Given a libcall like "gtk_foo( /*bar_to_int bar*/i0, /*zot*/i1 )"
# and a parameter name like "i0" or "i1"
# return nickname like "bar" or "zot"
# if available, else "i0" or "i1":
#
fun param_name (arg, libcall)
=
{ regex = .|/\*([A-Za-z0-9_' ]+)\*/| + arg; # Something like: /*([A-Za-z0-9_' ]+)*/f0
#
case (regex::find_first_match_to_ith_group 1 regex libcall)
#
THE name => # If 'name' contains blanks, we want
# only the part after the last blank:
#
case (regex::find_first_match_to_ith_group 1 .|^[:A-Za-z0-9_' ]+ ([A-Za-z0-9_']+)$| name)
THE x => x;
NULL => name;
esac;
NULL => arg;
esac;
};
# Synthesize a function for gtk-client-g.pkg like
#
# #
# fun make_vertical_scale_with_range (session: Session, min, max, step)
# =
# drv::make_vertical_scale_with_range (session.subsession, min, max, step);
#
fun build_plain_fun_for_'xxx_client_g_pkg' (pfs: Pfs) (x: Builder_Stuff, fields: Fields, fn_name, libcall)
=
case (maybe_get_field (fields, "cg-funs"))
#
THE field
=>
{ pfs = to_xxx_client_g_pkg_funs pfs " #\n";
pfs = to_xxx_client_g_pkg_funs pfs field;
pfs = to_xxx_client_g_pkg_funs pfs " \n";
pfs = to_xxx_client_g_pkg_funs pfs " # Above function handbuilt via src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'xxx_client_g_pkg'.\n";
pfs = to_xxx_client_g_pkg_funs pfs "\n";
plain_fns_handbuilt_for_'xxx_client_g_pkg'
:=
*plain_fns_handbuilt_for_'xxx_client_g_pkg' + 1;
pfs;
};
NULL =>
{
arg_count = count_args( libcall );
#
fun make_args pfs get_name # get_name will be arg_name or param_name.
=
{
pfs = for (a = 0, pfs = pfs; a < arg_count; ++a; pfs) {
# Remember type of this arg,
# which will be one of:
# w (widget),
# i (int),
# b (bool)
# s (string)
# f (double):
#
arg_type = get_nth_arg_type( a, libcall );
arg = sprintf "%s%d" arg_type a;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf ", %s" (get_name (arg, libcall)));
pfs;
};
pfs;
};
# Select between foo (session.subsession, bar, zot);
# foo { session.subsession, bar, zot };
#
my (lparen, rparen)
=
# It is a poor idea to have xxx-client-g.pkg functions
# with multiple arguments of the same type use
# argument tuples, because it is too easy to
# mis-order such arguments, and the compiler
# type checking won't flag it -- in such cases
# it is better to use argument records:
#
arg_types_are_all_unique libcall
?? ( "(" , ")" )
:: ( "{ ", " }" );
pfs = to_xxx_client_g_pkg_funs pfs "\n";
pfs = to_xxx_client_g_pkg_funs pfs " #\n";
pfs = to_xxx_client_g_pkg_funs pfs " fun ";
pfs = to_xxx_client_g_pkg_funs pfs fn_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf " %ssession: Session" lparen);
pfs = make_args pfs param_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf "%s\n" rparen);
# Select between drv::foo session.subsession;
# drv::foo (session.subsession, bar, zot);
#
my (lparen, rparen)
=
arg_count == 0
?? (" ", "" )
:: ("(", ")");
fn_name = regex::replace_all ./'/ "2" fn_name; # Primes don't work in C!
pfs = to_xxx_client_g_pkg_funs pfs " =\n";
pfs = to_xxx_client_g_pkg_funs pfs (sprintf " drv::%s %ssession.subsession" fn_name lparen);
pfs = make_args pfs arg_name;
pfs = to_xxx_client_g_pkg_funs pfs (sprintf "%s;\n" rparen);
pfs = to_xxx_client_g_pkg_funs pfs " \n";
pfs = to_xxx_client_g_pkg_funs pfs (" # Above function autobuilt by src/lib/make-library-glue/make-library-glue.pkg: build_plain_fun_for_'xxx_client_g_pkg' per " + path.construction_plan + ".\n");
pfs = to_xxx_client_g_pkg_funs pfs "\n";
plain_fns_codebuilt_for_'xxx_client_g_pkg'
:=
*plain_fns_codebuilt_for_'xxx_client_g_pkg'
+ 1;
pfs;
};
esac;
# Synthesize a xxx-client.api line like
#
# make_window: Session -> Widget;
#
stipulate
line_count = REF 2;
herein
#
fun build_fun_declaration_for_'xxx_client_api' (pfs: Pfs) { fn_name, fn_type, api_doc }
=
{
# Add a blank line every three declarations:
#
line_count := *line_count + 1;
pfs = if ((*line_count % 3) == 0)
#
to_xxx_client_api_funs pfs "\n";
else
pfs;
fi;
# The 'if' here is just to exdent by one char
# types starting with a paren, so that we get
#
# foo: Session -> Void;
# bar: (Session, Widget) -> Void;
#
# rather than the slightly rattier looking
#
# foo: Session -> Void;
# bar: (Session, Widget) -> Void;
#
pfs = if (fn_type =~ ./^\(/) to_xxx_client_api_funs pfs (sprintf " %-40s%s;\n" (fn_name + ":") fn_type);
else to_xxx_client_api_funs pfs (sprintf " %-41s%s;\n" (fn_name + ":") fn_type);
fi;
pfs = if (api_doc != "") to_xxx_client_api_funs pfs api_doc;
else pfs;
fi;
pfs;
};
end;
#
fun figure_function_result_type (x: Builder_Stuff, fields: Fields, fn_name, fn_type)
=
# result_type can be "Int", "String", "Bool", "Float" or "Void".
#
# It can also be "Widget" or "new Widget", the difference being
# that in the former case the mythryl-xxx-library-in-c-subprocess.c logic can merely
# fetch it out of its array widget[], whereas in the latter a
# new entry is being created in widget[].
#
# We can usually deduce the difference: If fn_name starts with
# "make_" then we have the "new Widget" case, otherwise we have
# the "Widget" case:
#
case (maybe_get_field (fields, "result"))
#
THE string => string;
#
NULL =>
# Pick off terminal " -> Void"
# or whatever from fn_type
# and switch on it:
#
case (regex::find_first_match_to_ith_group 1 ./->\s*([A-Za-z_']+)\s*$/ fn_type)
#
THE "Bool" => "Bool";
THE "Float" => "Float";
THE "Int" => "Int";
THE "String" => "String";
THE "Void" => "Void";
THE result_type => case (sm::get (*figure_function_result_type_fns, result_type)) # Cf figure_function_result_type in src/opt/gtk/sh/make-gtk-glue
#
THE function => function fn_name; # E.g., "Widget" -> ("Widget" or "new Widget")
#
NULL => { printf "Supported result types:\n";
print_strings (sm::keys_list *figure_function_result_type_fns);
die_x(sprintf "Unsupported result fn-type %s in type %s at %s..\n"
result_type
fn_type
(get_field_location (fields, "fn-type"))
);
};
esac;
NULL => die_x(sprintf "Unsupported result fn-type %s at %s..\n"
fn_type
(get_field_location (fields, "fn-type"))
);
esac;
esac;
#
fun build_plain_function { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
fn_name = get_field (fields, "fn-name"); # E.g., "make_window".
fn_type = get_field (fields, "fn-type"); # E.g., "Session -> Widget".
libcall = get_field (fields, "libcall"); # E.g., "gtk_window_new( GTK_WINDOW_TOPLEVEL )".
url = case (maybe_get_field(fields,"url")) THE field => field; NULL => ""; esac;
api_doc = case (maybe_get_field(fields,"api-doc")) THE field => field; NULL => ""; esac;
c_fn_name = regex::replace_all ./'/ "2" fn_name; # C fn names cannot contain apostrophes.
result_type = figure_function_result_type (x, fields, fn_name, fn_type);
pfs = build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c' pfs ( c_fn_name );
pfs = build_plain_fun_for_'mythryl_xxx_library_in_c_subprocess_c' pfs (x, fields, c_fn_name, fn_type, libcall, result_type);
pfs = build_plain_fun_for_'libmythryl_xxx_c' pfs (x, fields, c_fn_name, fn_type, libcall, result_type);
pfs = build_table_entry_for_'libmythryl_xxx_c' pfs (c_fn_name, fn_type);
pfs = note__section_libref_xxx_tex__entry pfs { fields, fn_name, libcall, url, fn_type };
pfs = build_fun_declaration_for_'xxx_client_driver_api' pfs { c_fn_name, libcall, result_type };
pfs = build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg' pfs { c_fn_name, libcall, result_type };
pfs = build_fun_declaration_for_'xxx_client_api' pfs { fn_name, fn_type, api_doc };
pfs = build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg' pfs { fn_name, c_fn_name, fn_type, libcall, result_type };
pfs = build_plain_fun_for_'xxx_client_g_pkg' pfs (x, fields, fn_name, libcall);
pfs;
};
#
fun build_function_doc { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
url = case (maybe_get_field(fields,"url"))
#
THE field => field;
NULL => "";
esac;
fn_name = get_field(fields, "fn-name"); # "make_window" or such.
fn_type = get_field(fields, "fn-type"); # "Session -> Widget" or such.
pfs = note__section_libref_xxx_tex__entry pfs { fields, fn_name, libcall => "", url, fn_type };
pfs;
};
#
fun build_mythryl_type { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
type = get_field(fields, "cg-typs");
#
pfs = to_xxx_client_api_types pfs type;
pfs = to_xxx_client_g_pkg_types pfs type;
pfs;
};
#
fun build_mythryl_code { patchfiles, paragraph: plf::Paragraph, x: Builder_Stuff }
=
{
pfs = patchfiles;
fields = paragraph.fields;
code = get_field(fields, "cg-funs");
#
pfs = to_xxx_client_g_pkg_funs pfs code;
pfs;
};
fn_doc__definition
=
{ name => "fn_doc",
do => build_function_doc,
fields => [ { fieldname => "fn-name", traits => [] },
{ fieldname => "fn-type", traits => [] },
{ fieldname => "doc-fn", traits => [ plf::OPTIONAL ] },
{ fieldname => "url", traits => [ plf::OPTIONAL ] }
]
};
plain_fn__definition
=
{ name => "plain_fn",
do => build_plain_function,
fields => [ { fieldname => "fn-name", traits => [] },
{ fieldname => "fn-type", traits => [] },
{ fieldname => "libcall", traits => [] },
{ fieldname => "libcal+", traits => [ plf::OPTIONAL, plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] },
{ fieldname => "lowtype", traits => [ plf::OPTIONAL ] },
{ fieldname => "result", traits => [ plf::OPTIONAL ] },
{ fieldname => "api-doc", traits => [ plf::OPTIONAL ] },
{ fieldname => "doc-fn", traits => [ plf::OPTIONAL ] },
{ fieldname => "url", traits => [ plf::OPTIONAL ] },
{ fieldname => "cg-funs", traits => [ plf::OPTIONAL, plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
mythryl_code__definition
=
{ name => "mythryl_code",
do => build_mythryl_code,
fields => [ { fieldname => "cg-funs", traits => [ plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
mythryl_type__definition
=
{ name => "mythryl_type",
do => build_mythryl_type,
fields => [ { fieldname => "cg-typs", traits => [ plf::DO_NOT_TRIM_WHITESPACE, plf::ALLOW_MULTIPLE_LINES ] }
]
};
builder_stuff = { path,
#
maybe_get_field,
get_field,
get_field_location,
#
build_table_entry_for_'libmythryl_xxx_c',
build_trie_entry_for_'mythryl_xxx_library_in_c_subprocess_c',
#
build_fun_declaration_for_'xxx_client_api',
build_fun_declaration_for_'xxx_client_driver_api',
build_fun_definition_for_'xxx_client_driver_for_library_in_c_subprocess_pkg',
build_fun_definition_for_'xxx_client_driver_for_library_in_main_process_pkg',
to_xxx_client_driver_api,
to_xxx_client_driver_for_library_in_c_subprocess_pkg,
to_xxx_client_driver_for_library_in_main_process_pkg,
to_xxx_client_g_pkg_funs,
to_xxx_client_g_pkg_types,
to_xxx_client_api_funs,
to_xxx_client_api_types,
to_mythryl_xxx_library_in_c_subprocess_c_funs,
to_mythryl_xxx_library_in_c_subprocess_c_trie,
to_libmythryl_xxx_c_table,
to_libmythryl_xxx_c_funs,
to_section_libref_xxx_tex_apitable,
to_section_libref_xxx_tex_libtable,
custom_fns_codebuilt_for_'libmythryl_xxx_c',
custom_fns_codebuilt_for_'mythryl_xxx_library_in_c_subprocess_c',
callback_fns_handbuilt_for_'xxx_client_g_pkg',
note__section_libref_xxx_tex__entry
};
paragraph_defs
=
plf::digest_paragraph_definitions sm::empty "make-library-glue.pkg"
#
( paragraph_definitions
@
[
fn_doc__definition,
plain_fn__definition,
mythryl_code__definition,
mythryl_type__definition
]
);
end;
};
end;
###################################################################################
# Note[1]: Format of xxx-construction.plan files
#
# These notes are outdated; should look at
# src/lib/make-library-glue/planfile.api
# and
# *__definition
# above. Should write more docs, too. :-)
#
#
# An xxx-construction.plan file is broken
# into logical paragraphs separated by blank lines.
#
# In general each paragraph describes one end-user-callable
# function in (say) the Gtk API.
#
# Each paragraph consists of one or more lines;
# each line begins with a colon-delimited type
# field determining its semantics.
#
# Supported line types are:
#
# do: Must appear in every paragraph.
# Determines which make-library-glue function processes the paragraph:
# plain_fn build_plain_function # The usual case.
# callback_fn build_callback_function # Special-purpose variant.
# fn_doc build_function_doc # Document fn without code generation, e.g. for Mythryl-only fns.
# mythryl_code build_mythryl_code # Special hack to deposit verbatim Mythryl code.
# mythryl_type build_mythryl_type # Special hack to deposit verbatim Mythryl declarations.
#
# The 'do' line determines which other
# lines may appear in the paragraph, per the
# following table. ("X" == mandatory, "O" == optional):
#
# callback_fn fn_doc plain_fn mythryl_code mythryl_type
# ----------- ------ -------- ------------ -----------
#
# fn-name: X X X
# fn-type: X X X
# lowtype: X X
# libcall: X
# libcal+: O
# result: O
# api-doc: O
# doc-fn: O O O
# url: O O O
# cg-funs: O O X
# cg-typs: X
#
#
# fn-name: Name of the end-user-callable Mythryl function, e.g. halt_and_catch_fire
# fn-type: Mythryl type for the function, e.g. Int -> Void
# url: URL documenting the underlying C Gtk function, e.g. http://library.gnome.org/devel/gtk/stable/gtk-General.html#gtk-init
# cg-funs: Literal Mythryl code to be inserted near bottom of xxx-client-g.pkg
# cg-typs: Literal Mythryl code to be inserted near top of xxx-client-g.pkg and also in xxx-client.api
# lowtype: Gtk cast macro for widget: Usually G_OBJECT, occasionally GTK_MENU_ITEM or such.
#
# doc-fn: Usually name of fn for documentation purposes is obtained from 'libcall' line,
# but this line may be used to specify it explicitly.
#
# api-doc: Comment line(s) to be appended to fn declaration in xxx-client.api.
#
# libcall: C-level library call to make e.g. gtk_layout_put( GTK_LAYOUT(w0), GTK_WIDGET(w1), i2, i3)
#
# libcall contains embedded arguments like w0, i1, f2, b3, s4.
#
# The argument letter gives us the argument type:
#
# w == widget
# i == int
# f == double (Mythryl "Float")
# b == bool
# s == string
#
# The argument digit gives us the argument order:
#
# 0 == first arg
# 1 == second arg
# ...
#
# libcal+: More code to be inserted immediately after the 'libcall' code
# in libmythryl-xxx.c and mythryl-xxx-library-in-c-subprocess.c.
#
# result: C-level result type for call. In practice we always default
# this and make-library-glue deduces it from the Mythryl type.
# #
# Can be one of "Int", "String", "Bool", "Float" or "Void".
# #
# Can also be "Widget" or "new Widget", the difference being
# that in the former case the mythryl-gtk-server.c logic can merely
# fetch it out of its array widget[], whereas in the latter a
# new entry is being created in widget[].
# #
# We can usually deduce the difference: If fn_name starts with
# "make_" then we have the "new Widget" case, otherwise we have
# the "Widget" case:
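# For example, a typical 'plain_fn' paragraph might look something
# like this (a hypothetical sketch: the field values are illustrative,
# echoing the examples in the comments above, rather than copied from
# a real xxx-construction.plan file):
#
# do: plain_fn
# fn-name: make_window
# fn-type: Session -> Widget
# libcall: gtk_window_new( GTK_WINDOW_TOPLEVEL )
# url: http://library.gnome.org/devel/gtk/stable/GtkWindow.html
#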
##########################################################################
# The following is support for outline-minor-mode in emacs. #
# ^C @ ^T hides all Text. (Leaves all headings.) #
# ^C @ ^A shows All of file. #
# ^C @ ^Q Quickfolds entire file. (Leaves only top-level headings.) #
# ^C @ ^I shows Immediate children of node. #
# ^C @ ^S Shows all of a node. #
# ^C @ ^D hiDes all of a node. #
# ^HFoutline-mode gives more details. #
# (Or do ^HI and read emacs:outline mode.) #
# #
# Local variables: #
# mode: outline-minor #
# outline-regexp: "[{ \t]*\\(fun \\)" #
# End: #
##########################################################################
## Code by Jeff Prothero: Copyright (c) 2010-2015,
## released per terms of SMLNJ-COPYRIGHT.
{
"title": "SETTINGS",
"groups": [
{
"title": "EDITOR",
"items": [
{
"title": "THEME",
"type": "list",
"key": "editor.theme",
"value": 38,
"items": [
"3024-day",
"3024-night",
"abcdef",
"ambiance-mobile",
"ambiance",
"base16-dark",
"base16-light",
"bespin",
"blackboard",
"cobalt",
"colorforth",
"darcula",
"dracula",
"duotone-dark",
"duotone-light",
"eclipse",
"elegant",
"erlang-dark",
"gruvbox-dark",
"hopscotch",
"icecoder",
"idea",
"isotope",
"lesser-dark",
"liquibyte",
"lucario",
"material-darker",
"material-ocean",
"material-palenight",
"material",
"mbo",
"mdn-like",
"midnight",
"monokai",
"moxer",
"neat",
"neo",
"night",
"nord",
"oceanic-next",
"panda-syntax",
"paraiso-dark",
"paraiso-light",
"pastel-on-dark",
"railscasts",
"rubyblue",
"seti",
"shadowfox",
"solarized",
"ssms",
"the-matrix",
"tomorrow-night-bright",
"tomorrow-night-eighties",
"ttcn",
"twilight",
"vibrant-ink",
"xq-dark",
"xq-light",
"yeti",
"yonce",
"zenburn"
]
},
{
"title": "FONT_NAME",
"type": "list",
"key": "editor.font.name",
"value": 0,
"items": [
"Menlo",
"Source Code Pro",
"Monaco",
"Iosevka",
"Ubuntu Mono",
"Hack",
"Cascadia Code"
]
},
{
"title": "FONT_SIZE",
"type": "number",
"key": "editor.font.size",
"value": 15
},
{
"title": "LINE_HEIGHT",
"type": "number",
"key": "editor.line.height",
"value": 20
},
{
"title": "SHOWS_LINE_NUMBERS",
"type": "boolean",
"key": "editor.line.numbers",
"value": false
},
{
"title": "SHOWS_INVISIBLES",
"type": "boolean",
"key": "editor.invisibles",
"value": false
},
{
"title": "RUN_PRETTIER",
"type": "script",
"value": "$app.notify({'name': 'prettify'})"
}
]
},
{
"title": "WINDOW",
"items": [
{
"title": "MAX_WIDTH",
"type": "string",
"key": "window.width.max",
"value": "150%"
},
{
"title": "HORIZONTAL_PADDING",
"type": "number",
"key": "window.padding.x",
"value": 10
},
{
"title": "VERTICAL_PADDING",
"type": "number",
"key": "window.padding.y",
"value": 10
},
{
"title": "SHADOW_OFFSET_X",
"type": "number",
"key": "window.shadow.x",
"value": 10
},
{
"title": "SHADOW_OFFSET_Y",
"type": "number",
"key": "window.shadow.y",
"value": 10
},
{
"title": "SHADOW_RADIUS",
"type": "number",
"key": "window.shadow.radius",
"value": 15
},
{
"title": "BACKGROUND_COLOR",
"type": "string",
"key": "window.bg.color",
"value": "#FFFFFF"
},
{
"title": "BACKGROUND_ALPHA",
"type": "number",
"key": "window.bg.alpha",
"value": 0
}
]
},
{
"title": "",
"items": [
{
"title": "ABOUT",
"type": "script",
"value": "require('scripts/readme').open();"
}
]
}
]
}
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 15 2018 10:31:50).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import <objc/NSObject.h>
#import <PassKitCore/NSCopying-Protocol.h>
#import <PassKitCore/NSSecureCoding-Protocol.h>
@class NSArray, NSDate, NSDecimalNumber, NSNumber, NSString, PKFelicaTransitAppletState;
@interface PKTransitAppletState : NSObject <NSCopying, NSSecureCoding>
{
_Bool _blacklisted;
_Bool _needsStationProcessing;
_Bool _appletStateDirty;
NSNumber *_historySequenceNumber;
NSNumber *_serverRefreshIdentifier;
NSDecimalNumber *_balance;
NSNumber *_loyaltyBalance;
NSString *_currency;
NSDate *_expirationDate;
NSArray *_balances;
NSArray *_enrouteTransitTypes;
}
+ (BOOL)supportsSecureCoding;
- (void).cxx_destruct;
@property(nonatomic) _Bool appletStateDirty; // @synthesize appletStateDirty=_appletStateDirty;
@property(nonatomic) _Bool needsStationProcessing; // @synthesize needsStationProcessing=_needsStationProcessing;
@property(copy, nonatomic) NSArray *enrouteTransitTypes; // @synthesize enrouteTransitTypes=_enrouteTransitTypes;
@property(copy, nonatomic) NSArray *balances; // @synthesize balances=_balances;
@property(copy, nonatomic) NSDate *expirationDate; // @synthesize expirationDate=_expirationDate;
@property(copy, nonatomic) NSString *currency; // @synthesize currency=_currency;
@property(copy, nonatomic) NSNumber *loyaltyBalance; // @synthesize loyaltyBalance=_loyaltyBalance;
@property(copy, nonatomic) NSDecimalNumber *balance; // @synthesize balance=_balance;
@property(copy, nonatomic) NSNumber *serverRefreshIdentifier; // @synthesize serverRefreshIdentifier=_serverRefreshIdentifier;
@property(copy, nonatomic) NSNumber *historySequenceNumber; // @synthesize historySequenceNumber=_historySequenceNumber;
@property(nonatomic, getter=isBlacklisted) _Bool blacklisted; // @synthesize blacklisted=_blacklisted;
- (void)addEnrouteTransitType:(id)arg1;
- (id)transitPassPropertiesWithPaymentApplication:(id)arg1;
- (void)_resolveTransactionsFromState:(id)arg1 toState:(id)arg2 withHistoryRecords:(id)arg3 concreteTransactions:(id *)arg4 ephemeralTransaction:(id *)arg5 balanceLabels:(id)arg6 unitDictionary:(id)arg7;
- (id)updatedEnrouteTransitTypesFromExistingTypes:(id)arg1 newTypes:(id)arg2;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4 balanceLabelDictionary:(id)arg5 unitDictionary:(id)arg6;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4 balanceLabelDictionary:(id)arg5;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3 mutatedBalances:(id *)arg4;
- (id)processUpdateWithAppletHistory:(id)arg1 concreteTransactions:(id *)arg2 ephemeralTransaction:(id *)arg3;
- (unsigned long long)hash;
- (BOOL)isEqual:(id)arg1;
@property(readonly, nonatomic, getter=isInStation) _Bool inStation; // @dynamic inStation;
- (id)copyWithZone:(struct _NSZone *)arg1;
- (void)encodeWithCoder:(id)arg1;
- (id)initWithCoder:(id)arg1;
@property(readonly, nonatomic) PKFelicaTransitAppletState *felicaState;
@end
<?php
/*
* This file is part of the Symfony package.
*
* (c) Fabien Potencier <[email protected]>
*
* For the full copyright and license information, please view the LICENSE
* file that was distributed with this source code.
*/
namespace Symfony\Component\Translation\Tests\Loader;
use PHPUnit\Framework\TestCase;
abstract class LocalizedTestCase extends TestCase
{
protected function setUp()
{
if (!\extension_loaded('intl')) {
$this->markTestSkipped('Extension intl is required.');
}
}
}
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!1 &100002
GameObject:
m_ObjectHideFlags: 0
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
serializedVersion: 4
m_Component:
- 4: {fileID: 400002}
- 119: {fileID: 11900000}
m_Layer: 0
m_Name: GridProjector
m_TagString: Untagged
m_Icon: {fileID: 0}
m_NavMeshLayer: 0
m_StaticEditorFlags: 0
m_IsActive: 1
--- !u!1002 &100003
EditorExtensionImpl:
serializedVersion: 6
--- !u!4 &400002
Transform:
m_ObjectHideFlags: 1
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
m_GameObject: {fileID: 100002}
m_LocalRotation: {x: .707106829, y: 0, z: 0, w: .707106709}
m_LocalPosition: {x: -2.12590003, y: 1.97899997, z: -.925180018}
m_LocalScale: {x: 1, y: 1, z: 1}
m_Children: []
m_Father: {fileID: 0}
m_RootOrder: 0
--- !u!1002 &400003
EditorExtensionImpl:
serializedVersion: 6
--- !u!119 &11900000
Projector:
m_ObjectHideFlags: 1
m_PrefabParentObject: {fileID: 0}
m_PrefabInternal: {fileID: 100100000}
m_GameObject: {fileID: 100002}
m_Enabled: 1
serializedVersion: 2
m_NearClipPlane: .100000001
m_FarClipPlane: 50
m_FieldOfView: 30
m_AspectRatio: 1
m_Orthographic: 1
m_OrthographicSize: .25
m_Material: {fileID: 2100000, guid: c7d1a73cf0f423947bae4e238665d9c5, type: 2}
m_IgnoreLayers:
serializedVersion: 2
m_Bits: 0
--- !u!1002 &11900001
EditorExtensionImpl:
serializedVersion: 6
--- !u!1001 &100100000
Prefab:
m_ObjectHideFlags: 1
serializedVersion: 2
m_Modification:
m_TransformParent: {fileID: 0}
m_Modifications: []
m_RemovedComponents: []
m_ParentPrefab: {fileID: 0}
m_RootGameObject: {fileID: 100002}
m_IsPrefabParent: 1
--- !u!1002 &100100001
EditorExtensionImpl:
serializedVersion: 6
# test wildcard certificates
import json
import os
import pytest
import re
import socket
import ssl
import sys
import time
from datetime import datetime
from TestEnv import TestEnv
from TestHttpdConf import HttpdConf
from TestCertUtil import CertUtil
def setup_module(module):
print("setup_module module:%s" % module.__name__)
TestEnv.initv2()
TestEnv.APACHE_CONF_SRC = "data/test_auto"
TestEnv.check_acme()
TestEnv.clear_store()
HttpdConf().install()
assert TestEnv.apache_start() == 0
def teardown_module(module):
print("teardown_module module:%s" % module.__name__)
assert TestEnv.apache_stop() == 0
class TestWildcard:
def setup_method(self, method):
print("setup_method: %s" % method.__name__)
TestEnv.initv2()
TestEnv.clear_store()
self.test_domain = TestEnv.get_method_domain(method)
def teardown_method(self, method):
print("teardown_method: %s" % method.__name__)
#-----------------------------------------------------------------------------------------------
# test case: a wildcard certificate with ACMEv1
#
def test_720_000(self):
domain = self.test_domain
# switch to ACMEv1
TestEnv.initv1()
# generate config with DNS wildcard
domains = [ domain, "*." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive error as ACMEv1 does not accept DNS wildcards
md = TestEnv.await_error(domain)
assert md
assert md['renewal']['errors'] > 0
assert md['renewal']['last']['problem'] == 'urn:acme:error:malformed'
#-----------------------------------------------------------------------------------------------
# test case: a wildcard certificate with ACMEv2, no dns-01 supported
#
def test_720_001(self):
domain = self.test_domain
# generate config with DNS wildcard
domains = [ domain, "*." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
md = TestEnv.await_error(domain)
assert md
assert md['renewal']['errors'] > 0
assert md['renewal']['last']['problem'] == 'challenge-mismatch'
#-----------------------------------------------------------------------------------------------
# test case: a wildcard certificate with ACMEv2, only dns-01 configured, invalid command path
#
def test_720_002(self):
dns01cmd = ("%s/dns01-not-found.py" % TestEnv.TESTROOT)
domain = self.test_domain
domains = [ domain, "*." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_ca_challenges( [ "dns-01" ] )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
md = TestEnv.await_error(domain)
assert md
assert md['renewal']['errors'] > 0
assert md['renewal']['last']['problem'] == 'challenge-setup-failure'
# variation, invalid cmd path, other challenges still get certificate for non-wildcard
def test_720_002b(self):
dns01cmd = ("%s/dns01-not-found.py" % TestEnv.TESTROOT)
domain = self.test_domain
domains = [ domain, "xxx." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
assert TestEnv.await_completion( [ domain ] )
TestEnv.check_md_complete(domain)
# check: SSL is running OK
certA = TestEnv.get_cert(domain)
altnames = certA.get_san_list()
for domain in domains:
assert domain in altnames
#-----------------------------------------------------------------------------------------------
# test case: a wildcard certificate with ACMEv2, only dns-01 configured, invalid command option
#
def test_720_003(self):
dns01cmd = ("%s/dns01.py fail" % TestEnv.TESTROOT)
domain = self.test_domain
domains = [ domain, "*." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_ca_challenges( [ "dns-01" ] )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
md = TestEnv.await_error(domain)
assert md
assert md['renewal']['errors'] > 0
assert md['renewal']['last']['problem'] == 'challenge-setup-failure'
#-----------------------------------------------------------------------------------------------
# test case: a wildcard name certificate with ACMEv2, only dns-01 configured
#
def test_720_004(self):
dns01cmd = ("%s/dns01.py" % TestEnv.TESTROOT)
domain = self.test_domain
domains = [ domain, "*." + domain ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_ca_challenges( [ "dns-01" ] )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
assert TestEnv.await_completion( [ domain ] )
TestEnv.check_md_complete(domain)
# check: SSL is running OK
certA = TestEnv.get_cert(domain)
altnames = certA.get_san_list()
for domain in domains:
assert domain in altnames
#-----------------------------------------------------------------------------------------------
# test case: a wildcard name and 2nd normal vhost, not overlapping
#
def test_720_005(self):
dns01cmd = ("%s/dns01.py" % TestEnv.TESTROOT)
domain = self.test_domain
domain2 = "www.x" + domain
domains = [ domain, "*." + domain, domain2 ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_ca_challenges( [ "dns-01" ] )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domain2)
conf.add_vhost(domains)
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
assert TestEnv.await_completion( [ domain ] )
TestEnv.check_md_complete(domain)
# check: SSL is running OK
certA = TestEnv.get_cert(domain)
altnames = certA.get_san_list()
for domain in domains:
assert domain in altnames
#-----------------------------------------------------------------------------------------------
# test case: a wildcard name and 2nd normal vhost, overlapping
#
def test_720_006(self):
dns01cmd = ("%s/dns01.py" % TestEnv.TESTROOT)
domain = self.test_domain
dwild = "*." + domain
domain2 = "www." + domain
domains = [ domain, dwild, domain2 ]
conf = HttpdConf()
conf.add_admin( "[email protected]" )
conf.add_ca_challenges( [ "dns-01" ] )
conf.add_dns01_cmd( dns01cmd )
conf.add_md( domains )
conf.add_vhost(domain2)
conf.add_vhost([ domain, dwild ])
conf.install()
# restart, check that md is in store
assert TestEnv.apache_restart() == 0
TestEnv.check_md( domains )
# await drive completion
assert TestEnv.await_completion( [ domain ] )
TestEnv.check_md_complete(domain)
# check: SSL is running OK
certA = TestEnv.get_cert(domain)
altnames = certA.get_san_list()
for domain in [ domain, dwild ]:
assert domain in altnames
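The assertions above only check that both names appear in the certificate's SAN
list. The matching semantics of a `*.domain` SAN entry — the wildcard covers
exactly one left-most label, and never the bare domain — can be sketched
independently. This helper is illustrative only and is not part of the test
suite:

```python
# Illustrative sketch of wildcard SAN matching per RFC 6125:
# "*.example.org" matches exactly one extra left-most label.
def wildcard_matches(pattern, hostname):
    if not pattern.startswith("*."):
        return pattern.lower() == hostname.lower()
    suffix = pattern[1:].lower()          # ".example.org"
    host = hostname.lower()
    if not host.endswith(suffix):
        return False
    # the part replaced by "*" must be exactly one non-empty label
    label = host[:-len(suffix)]
    return bool(label) and "." not in label

assert wildcard_matches("*.example.org", "www.example.org")
assert not wildcard_matches("*.example.org", "example.org")
assert not wildcard_matches("*.example.org", "a.b.example.org")
```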
/*
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package testing
import (
"io/ioutil"
"os"
"sync"
yaml "gopkg.in/yaml.v2"
"github.com/googleapis/gnostic/OpenAPIv2"
"github.com/googleapis/gnostic/compiler"
)
// Fake opens and returns an openapi swagger from a file Path. It will
// parse the file only once and then return the same copy every time.
type Fake struct {
Path string
once sync.Once
document *openapi_v2.Document
err error
}
// OpenAPISchema returns the openapi document and a potential error.
func (f *Fake) OpenAPISchema() (*openapi_v2.Document, error) {
f.once.Do(func() {
_, err := os.Stat(f.Path)
if err != nil {
f.err = err
return
}
spec, err := ioutil.ReadFile(f.Path)
if err != nil {
f.err = err
return
}
var info yaml.MapSlice
err = yaml.Unmarshal(spec, &info)
if err != nil {
f.err = err
return
}
f.document, f.err = openapi_v2.NewDocument(info, compiler.NewContext("$root", nil))
})
return f.document, f.err
}
type Empty struct{}
func (Empty) OpenAPISchema() (*openapi_v2.Document, error) {
return nil, nil
}
<?php
/**
* @package Freemius
* @copyright Copyright (c) 2015, Freemius, Inc.
* @license https://www.gnu.org/licenses/gpl-3.0.html GNU General Public License Version 3
* @since 1.0.6
*/
if ( ! defined( 'ABSPATH' ) ) {
exit;
}
class FS_Plugin_Manager {
/**
* @since 1.2.2
*
* @var string|number
*/
protected $_module_id;
/**
* @since 1.2.2
*
* @var FS_Plugin
*/
protected $_module;
/**
* @var FS_Plugin_Manager[]
*/
private static $_instances = array();
/**
* @var FS_Logger
*/
protected $_logger;
/**
* Option names
*
* @author Leo Fajardo (@leorw)
* @since 1.2.2
*/
const OPTION_NAME_PLUGINS = 'plugins';
const OPTION_NAME_THEMES = 'themes';
/**
* @param string|number $module_id
*
* @return FS_Plugin_Manager
*/
static function instance( $module_id ) {
$key = 'm_' . $module_id;
if ( ! isset( self::$_instances[ $key ] ) ) {
self::$_instances[ $key ] = new FS_Plugin_Manager( $module_id );
}
return self::$_instances[ $key ];
}
/**
* @param string|number $module_id
*/
protected function __construct( $module_id ) {
$this->_logger = FS_Logger::get_logger( WP_FS__SLUG . '_' . $module_id . '_' . 'plugins', WP_FS__DEBUG_SDK, WP_FS__ECHO_DEBUG_SDK );
$this->_module_id = $module_id;
$this->load();
}
protected function get_option_manager() {
return FS_Option_Manager::get_manager( WP_FS__ACCOUNTS_OPTION_NAME, true, true );
}
/**
* @author Leo Fajardo (@leorw)
* @since 1.2.2
*
* @param string|bool $module_type "plugin", "theme", or "false" for all modules.
*
* @return array
*/
protected function get_all_modules( $module_type = false ) {
$option_manager = $this->get_option_manager();
if ( false !== $module_type ) {
return fs_get_entities( $option_manager->get_option( $module_type . 's', array() ), FS_Plugin::get_class_name() );
}
return array(
self::OPTION_NAME_PLUGINS => fs_get_entities( $option_manager->get_option( self::OPTION_NAME_PLUGINS, array() ), FS_Plugin::get_class_name() ),
self::OPTION_NAME_THEMES => fs_get_entities( $option_manager->get_option( self::OPTION_NAME_THEMES, array() ), FS_Plugin::get_class_name() ),
);
}
/**
* Load plugin data from local DB.
*
* @author Vova Feldman (@svovaf)
* @since 1.0.6
*/
function load() {
$all_modules = $this->get_all_modules();
if ( ! is_numeric( $this->_module_id ) ) {
unset( $all_modules[ self::OPTION_NAME_THEMES ] );
}
foreach ( $all_modules as $modules ) {
/**
* @since 1.2.2
*
* @var $modules FS_Plugin[]
*/
foreach ( $modules as $module ) {
$found_module = false;
/**
* If module ID is not numeric, it must be a plugin's slug.
*
* @author Leo Fajardo (@leorw)
* @since 1.2.2
*/
if ( ! is_numeric( $this->_module_id ) ) {
if ( $this->_module_id === $module->slug ) {
$this->_module_id = $module->id;
$found_module = true;
}
} else if ( $this->_module_id == $module->id ) {
$found_module = true;
}
if ( $found_module ) {
$this->_module = $module;
break;
}
}
}
}
/**
* Store plugin on local DB.
*
* @author Vova Feldman (@svovaf)
* @since 1.0.6
*
* @param bool|FS_Plugin $module
* @param bool $flush
*
* @return bool|\FS_Plugin
*/
function store( $module = false, $flush = true ) {
if ( false !== $module ) {
$this->_module = $module;
}
$all_modules = $this->get_all_modules( $this->_module->type );
$all_modules[ $this->_module->slug ] = $this->_module;
$options_manager = $this->get_option_manager();
$options_manager->set_option( $this->_module->type . 's', $all_modules, $flush );
return $this->_module;
}
/**
* Update local plugin data if different.
*
* @author Vova Feldman (@svovaf)
* @since 1.0.6
*
* @param \FS_Plugin $plugin
* @param bool $store
*
* @return bool True if plugin was updated.
*/
function update( FS_Plugin $plugin, $store = true ) {
if ( ! ($this->_module instanceof FS_Plugin ) ||
$this->_module->slug != $plugin->slug ||
$this->_module->public_key != $plugin->public_key ||
$this->_module->secret_key != $plugin->secret_key ||
$this->_module->parent_plugin_id != $plugin->parent_plugin_id ||
$this->_module->title != $plugin->title
) {
$this->store( $plugin, $store );
return true;
}
return false;
}
/**
* @author Vova Feldman (@svovaf)
* @since 1.0.6
*
* @param FS_Plugin $plugin
* @param bool $store
*/
function set( FS_Plugin $plugin, $store = false ) {
$this->_module = $plugin;
if ( $store ) {
$this->store();
}
}
/**
* @author Vova Feldman (@svovaf)
* @since 1.0.6
*
* @return bool|\FS_Plugin
*/
function get() {
return isset( $this->_module ) ?
$this->_module :
false;
}
}
#!/usr/bin/ruby
# encoding: UTF-8
#
# BigBlueButton open source conferencing system - http://www.bigbluebutton.org/
#
# Copyright (c) 2012 BigBlueButton Inc. and by respective authors (see below).
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 3.0 of the License, or (at your option)
# any later version.
#
# BigBlueButton is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License along
# with BigBlueButton; if not, see <http://www.gnu.org/licenses/>.
#
require "trollop"
require File.expand_path('../../../lib/recordandplayback', __FILE__)
opts = Trollop::options do
opt :meeting_id, "Meeting id to archive", :type => String
opt :format, "Playback format name", :type => String
end
meeting_id = opts[:meeting_id]
logger = Logger.new("/var/log/bigbluebutton/post_process.log", 'weekly' )
logger.level = Logger::INFO
BigBlueButton.logger = logger
processed_files = "/var/bigbluebutton/recording/process/presentation/#{meeting_id}"
meeting_metadata = BigBlueButton::Events.get_meeting_metadata("/var/bigbluebutton/recording/raw/#{meeting_id}/events.xml")
#
# Put your code here
#
exit 0
import os
import webapp2 as webapp
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.ext import blobstore
from google.appengine.api import images
from google.appengine.api import mail
from google.appengine.api import users
from datetime import datetime, timedelta
import json
#from qrcode import main as qr
import handlers
from model import blog
import model.ping
class HomePage(handlers.BaseHandler):
def get(self):
self.render("index.html", {})
class PrivacyPolicyPage(handlers.BaseHandler):
def get(self):
self.render("privacy-policy.html", {})
class TermsOfServicePage(handlers.BaseHandler):
def get(self):
self.render("terms-of-service.html", {})
class RulesPage(handlers.BaseHandler):
def get(self):
self.render("rules.html", {})
class BlobUploadUrlPage(handlers.BaseHandler):
def get(self):
"""Gets a new upload URL for uploading blobs."""
if not users.get_current_user():
self.response.set_status(403)
return
data = {'upload_url': blobstore.create_upload_url('/blob/upload-complete')}
self.response.headers["Content-Type"] = "application/json"
self.response.write(json.dumps(data))
class BlobUploadCompletePage(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
blob_info = self.get_uploads('file')[0]
response = {'success': True,
'blob_key': str(blob_info.key()),
'size': blob_info.size,
'filename': blob_info.filename}
if "X-Blob" in self.request.headers:
response['content_type'] = blob_info.content_type
response['url'] = '/blob/' + str(blob_info.key())
else:
img = images.Image(blob_key=blob_info)
img.im_feeling_lucky() # we have to do a transform in order to get dimensions
img.execute_transforms()
response['width'] = img.width
response['height'] = img.height
response['url'] = images.get_serving_url(blob_info.key(), 100, 0)
self.response.headers["Content-Type"] = "application/json"
self.response.write(json.dumps(response))
class BlobPage(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, blob_key):
if not blobstore.get(blob_key):
self.error(404)
else:
self.response.headers["Cache-Control"] = "public, max-age="+str(30*24*60*60) # 30 days
self.response.headers["Expires"] = (datetime.now() + timedelta(days=30)).strftime("%a, %d %b %Y %H:%M:%S GMT")
self.send_blob(blob_key)
class BlobDownloadPage(blobstore_handlers.BlobstoreDownloadHandler):
def get(self, blob_key):
blob_info = blobstore.get(blob_key)
if not blob_info:
self.error(404)
else:
self.response.headers["Cache-Control"] = "public, max-age="+str(30*24*60*60) # 30 days
self.response.headers["Expires"] = (datetime.now() + timedelta(days=30)).strftime("%a, %d %b %Y %H:%M:%S GMT")
self.response.headers["Content-Type"] = blob_info.content_type
self.response.headers["Content-Disposition"] = str("attachment; filename=" + blob_info.filename)
self.send_blob(blob_info)
class BlobInfoPage(handlers.BaseHandler):
def get(self, blob_key):
info = self._getBlobInfo(blob_key)
if not info:
self.error(404)
self.response.headers["Content-Type"] = "application/json"
self.response.write(json.dumps(info))
def post(self, blob_key):
info = self._getBlobInfo(blob_key)
if not info:
self.error(404)
self.response.headers["Content-Type"] = "application/json"
self.response.write(json.dumps(info))
def _getBlobInfo(self, blob_key):
blob_info = blobstore.get(blob_key)
if not blob_info:
return None
size = None
crop = None
if self.request.POST.get("size"):
size = int(self.request.POST.get("size"))
if self.request.POST.get("crop"):
crop = int(self.request.POST.get("crop"))
return {"size": blob_info.size,
"filename": blob_info.filename,
"url": images.get_serving_url(blob_key, size, crop)}
class StatusPage(handlers.BaseHandler):
def get(self):
min_date = datetime.now() - timedelta(hours=24)
last_ping = None
pings = []
# TODO: cache(?)
query = model.ping.Ping.all().filter("date >", min_date).order("date")
for ping in query:
if not last_ping or last_ping.date < ping.date:
last_ping = ping
pings.append(ping)
self.render("status.html", {"pings": pings, "last_ping": last_ping})
class SitemapPage(handlers.BaseHandler):
def get(self):
pages = []
pages.append({"url": "/", "lastmod": "2012-05-14", "changefreq": "monthly", "priority": "0.8"})
for post in blog.Post.all():
# change frequency is normally going to be monthly, except for recent posts which
# we'll set to daily (to account for possible comments)
changefreq = "monthly"
if post.posted + timedelta(days=30) > datetime.now():
changefreq = "daily"
pages.append({"url": ("/blog/%04d/%02d/%s" % (post.posted.year, post.posted.month, post.slug)),
"lastmod": ("%04d-%02d-%02d" % (post.updated.year, post.updated.month, post.updated.day)),
"changefreq": changefreq,
"priority": "1.0"})
data = {"pages": pages, "base_url": "http://www.war-worlds.com"}
self.response.headers["Content-Type"] = "text/xml"
self.render("sitemap.xml", data)
class PlayStorePage(handlers.BaseHandler):
def get(self):
self.redirect("https://play.google.com/store/apps/details?id=au.com.codeka.warworlds")
class DonateThanksPage(handlers.BaseHandler):
def get(self):
self.render("donate-thanks.html", {})
app = webapp.WSGIApplication([("/", HomePage),
("/privacy-policy", PrivacyPolicyPage),
("/terms-of-service", TermsOfServicePage),
("/rules", RulesPage),
("/blob/upload-url", BlobUploadUrlPage),
("/blob/upload-complete", BlobUploadCompletePage),
("/blob/([^/]+)", BlobPage),
("/blob/([^/]+)/download", BlobDownloadPage),
("/blob/([^/]+)/info", BlobInfoPage),
("/status/?", StatusPage),
("/play-store", PlayStorePage),
("/donate-thanks", DonateThanksPage),
("/sitemap.xml", SitemapPage)],
debug=os.environ["SERVER_SOFTWARE"].startswith("Development"))
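The route table above maps URL regex patterns to handler classes; the capture
group in patterns like `/blob/([^/]+)/info` becomes the handler method's
positional argument, and routes are tried in order. A framework-free sketch of
that dispatch idea (handlers here are stand-in lambdas, not the real classes):

```python
import re

# Minimal sketch of first-match regex routing as used by the webapp2
# table above; each capture group is passed to the handler positionally.
routes = [
    (r"^/blob/([^/]+)/info$", lambda key: "info:" + key),
    (r"^/blob/([^/]+)$",      lambda key: "blob:" + key),
    (r"^/$",                  lambda: "home"),
]

def dispatch(path):
    for pattern, handler in routes:
        m = re.match(pattern, path)
        if m:
            return handler(*m.groups())
    return "404"

assert dispatch("/blob/abc123/info") == "info:abc123"
assert dispatch("/blob/abc123") == "blob:abc123"
assert dispatch("/") == "home"
assert dispatch("/nope") == "404"
```

Note that the more specific `/info` pattern is listed before the bare blob
pattern, so ordering decides which handler wins — the same property the real
table relies on.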
Name: Documentation Man Page
Type: design decision
Invented on: 2010-02-12
Invented by: flonatel
Owner: development
Description: \textsl{rmtoo} \textbf{must} come with a (*nix) man page
describing the basic behaviour.
Rationale: This typically describes the input and output and all the
parameters needed (but not the ideas behind).
Status: not done
Priority: development:10 customers:5
Class: detailable
Effort estimation: 5
Topic: Documentation
Solved by: ManEmacsMode ManOverview ManSecAnalytics ManSecArtifacts ManSecFileFormats ManSecGeneric
# Added by rmtoo-normalize-dependencies
{
"name": "magento/module-webapi",
"description": "N/A",
"config": {
"sort-packages": true
},
"require": {
"php": "~7.3.0||~7.4.0",
"magento/framework": "*",
"magento/module-authorization": "*",
"magento/module-backend": "*",
"magento/module-integration": "*",
"magento/module-store": "*"
},
"suggest": {
"magento/module-user": "*",
"magento/module-customer": "*"
},
"type": "magento2-module",
"license": [
"OSL-3.0",
"AFL-3.0"
],
"autoload": {
"files": [
"registration.php"
],
"psr-4": {
"Magento\\Webapi\\": ""
}
}
}
/* WARNING: This file was put in the LibPNG distribution for convenience only.
It is expected to be part of the next zlib release under
"projects\visualc71\README.txt." */
Microsoft Developer Studio Project File, Format Version 7.10 for zlib.
Copyright (C) 2004 Simon-Pierre Cadieux.
Copyright (C) 2004 Cosmin Truta.
This code is released under the libpng license.
For conditions of distribution and use, see copyright notice in zlib.h.
NOTE: This project will be removed from libpng-1.5.0. It has
been replaced with the "vstudio" project.
To use:
1) On the main menu, select "File | Open Solution".
Open "zlib.sln".
2) Display the Solution Explorer view (Ctrl+Alt+L)
3) Set one of the projects as the StartUp project. If you just want to build the
   binaries, set "zlib" as the startup project (Select "zlib" tree view item +
   Project | Set as StartUp project). If you want to build and test the
   binaries, set it to "example" (Select "example" tree view item + Project |
   Set as StartUp project). If you want to build the minigzip utility, set it to
   "minigzip" (Select "minigzip" tree view item + Project | Set as StartUp
   project).
4) Select "Build | Configuration Manager...".
Choose the configuration you wish to build.
5) Select "Build | Clean Solution".
6) Select "Build | Build Solution (Ctrl-Shift-B)"
This project builds the zlib binaries as follows:
* Win32_DLL_Release\zlib1.dll DLL build
* Win32_DLL_Debug\zlib1d.dll DLL build (debug version)
* Win32_LIB_Release\zlib.lib static build
* Win32_LIB_Debug\zlibd.lib static build (debug version)
<?php
/**
* Some comment
*/
class Foo{function foo(){}
/**
* @param Baz $baz
*/
public function bar(Baz $baz)
{
}
/**
* @param Foobar $foobar
*/
static public function foobar(Foobar $foobar)
{
}
public function barfoo(Barfoo $barfoo)
{
}
/**
* This docblock does not belong to the baz function
*/
public function baz()
{
}
}
mandoc: spacing.in:16:2: WARNING: empty block: D1
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package packet implements parsing and serialization of OpenPGP packets, as
// specified in RFC 4880.
package packet // import "golang.org/x/crypto/openpgp/packet"
import (
"bufio"
"crypto/aes"
"crypto/cipher"
"crypto/des"
"golang.org/x/crypto/cast5"
"golang.org/x/crypto/openpgp/errors"
"io"
"math/big"
)
// readFull is the same as io.ReadFull except that reading zero bytes returns
// ErrUnexpectedEOF rather than EOF.
func readFull(r io.Reader, buf []byte) (n int, err error) {
n, err = io.ReadFull(r, buf)
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
return
}
// readLength reads an OpenPGP length from r. See RFC 4880, section 4.2.2.
func readLength(r io.Reader) (length int64, isPartial bool, err error) {
var buf [4]byte
_, err = readFull(r, buf[:1])
if err != nil {
return
}
switch {
case buf[0] < 192:
length = int64(buf[0])
case buf[0] < 224:
length = int64(buf[0]-192) << 8
_, err = readFull(r, buf[0:1])
if err != nil {
return
}
length += int64(buf[0]) + 192
case buf[0] < 255:
length = int64(1) << (buf[0] & 0x1f)
isPartial = true
default:
_, err = readFull(r, buf[0:4])
if err != nil {
return
}
length = int64(buf[0])<<24 |
int64(buf[1])<<16 |
int64(buf[2])<<8 |
int64(buf[3])
}
return
}
// partialLengthReader wraps an io.Reader and handles OpenPGP partial lengths.
// The continuation lengths are parsed and removed from the stream and EOF is
// returned at the end of the packet. See RFC 4880, section 4.2.2.4.
type partialLengthReader struct {
r io.Reader
remaining int64
isPartial bool
}
func (r *partialLengthReader) Read(p []byte) (n int, err error) {
for r.remaining == 0 {
if !r.isPartial {
return 0, io.EOF
}
r.remaining, r.isPartial, err = readLength(r.r)
if err != nil {
return 0, err
}
}
toRead := int64(len(p))
if toRead > r.remaining {
toRead = r.remaining
}
n, err = r.r.Read(p[:int(toRead)])
r.remaining -= int64(n)
if n < int(toRead) && err == io.EOF {
err = io.ErrUnexpectedEOF
}
return
}
// partialLengthWriter writes a stream of data using OpenPGP partial lengths.
// See RFC 4880, section 4.2.2.4.
type partialLengthWriter struct {
w io.WriteCloser
lengthByte [1]byte
}
func (w *partialLengthWriter) Write(p []byte) (n int, err error) {
for len(p) > 0 {
for power := uint(14); power < 32; power-- {
l := 1 << power
if len(p) >= l {
w.lengthByte[0] = 224 + uint8(power)
_, err = w.w.Write(w.lengthByte[:])
if err != nil {
return
}
var m int
m, err = w.w.Write(p[:l])
n += m
if err != nil {
return
}
p = p[l:]
break
}
}
}
return
}
func (w *partialLengthWriter) Close() error {
w.lengthByte[0] = 0
_, err := w.w.Write(w.lengthByte[:])
if err != nil {
return err
}
return w.w.Close()
}
// A spanReader is an io.LimitReader, but it returns ErrUnexpectedEOF if the
// underlying Reader returns EOF before the limit has been reached.
type spanReader struct {
r io.Reader
n int64
}
func (l *spanReader) Read(p []byte) (n int, err error) {
if l.n <= 0 {
return 0, io.EOF
}
if int64(len(p)) > l.n {
p = p[0:l.n]
}
n, err = l.r.Read(p)
l.n -= int64(n)
if l.n > 0 && err == io.EOF {
err = io.ErrUnexpectedEOF
}
return
}
// readHeader parses a packet header and returns an io.Reader which will return
// the contents of the packet. See RFC 4880, section 4.2.
func readHeader(r io.Reader) (tag packetType, length int64, contents io.Reader, err error) {
var buf [4]byte
_, err = io.ReadFull(r, buf[:1])
if err != nil {
return
}
if buf[0]&0x80 == 0 {
err = errors.StructuralError("tag byte does not have MSB set")
return
}
if buf[0]&0x40 == 0 {
// Old format packet
tag = packetType((buf[0] & 0x3f) >> 2)
lengthType := buf[0] & 3
if lengthType == 3 {
length = -1
contents = r
return
}
lengthBytes := 1 << lengthType
_, err = readFull(r, buf[0:lengthBytes])
if err != nil {
return
}
for i := 0; i < lengthBytes; i++ {
length <<= 8
length |= int64(buf[i])
}
contents = &spanReader{r, length}
return
}
// New format packet
tag = packetType(buf[0] & 0x3f)
length, isPartial, err := readLength(r)
if err != nil {
return
}
if isPartial {
contents = &partialLengthReader{
remaining: length,
isPartial: true,
r: r,
}
length = -1
} else {
contents = &spanReader{r, length}
}
return
}
// serializeHeader writes an OpenPGP packet header to w. See RFC 4880, section
// 4.2.
func serializeHeader(w io.Writer, ptype packetType, length int) (err error) {
var buf [6]byte
var n int
buf[0] = 0x80 | 0x40 | byte(ptype)
if length < 192 {
buf[1] = byte(length)
n = 2
} else if length < 8384 {
length -= 192
buf[1] = 192 + byte(length>>8)
buf[2] = byte(length)
n = 3
} else {
buf[1] = 255
buf[2] = byte(length >> 24)
buf[3] = byte(length >> 16)
buf[4] = byte(length >> 8)
buf[5] = byte(length)
n = 6
}
_, err = w.Write(buf[:n])
return
}
// serializeStreamHeader writes an OpenPGP packet header to w where the
// length of the packet is unknown. It returns a io.WriteCloser which can be
// used to write the contents of the packet. See RFC 4880, section 4.2.
func serializeStreamHeader(w io.WriteCloser, ptype packetType) (out io.WriteCloser, err error) {
var buf [1]byte
buf[0] = 0x80 | 0x40 | byte(ptype)
_, err = w.Write(buf[:])
if err != nil {
return
}
out = &partialLengthWriter{w: w}
return
}
// Packet represents an OpenPGP packet. Users are expected to try casting
// instances of this interface to specific packet types.
type Packet interface {
parse(io.Reader) error
}
// consumeAll reads from the given Reader until error, returning the number of
// bytes read.
func consumeAll(r io.Reader) (n int64, err error) {
var m int
var buf [1024]byte
for {
m, err = r.Read(buf[:])
n += int64(m)
if err == io.EOF {
err = nil
return
}
if err != nil {
return
}
}
}
// packetType represents the numeric ids of the different OpenPGP packet types. See
// http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-2
type packetType uint8
const (
packetTypeEncryptedKey packetType = 1
packetTypeSignature packetType = 2
packetTypeSymmetricKeyEncrypted packetType = 3
packetTypeOnePassSignature packetType = 4
packetTypePrivateKey packetType = 5
packetTypePublicKey packetType = 6
packetTypePrivateSubkey packetType = 7
packetTypeCompressed packetType = 8
packetTypeSymmetricallyEncrypted packetType = 9
packetTypeLiteralData packetType = 11
packetTypeUserId packetType = 13
packetTypePublicSubkey packetType = 14
packetTypeUserAttribute packetType = 17
packetTypeSymmetricallyEncryptedMDC packetType = 18
)
// peekVersion detects the version of a public key packet about to
// be read. A bufio.Reader at the original position of the io.Reader
// is returned.
func peekVersion(r io.Reader) (bufr *bufio.Reader, ver byte, err error) {
bufr = bufio.NewReader(r)
var verBuf []byte
if verBuf, err = bufr.Peek(1); err != nil {
return
}
ver = verBuf[0]
return
}
// Read reads a single OpenPGP packet from the given io.Reader. If there is an
// error parsing a packet, the whole packet is consumed from the input.
func Read(r io.Reader) (p Packet, err error) {
tag, _, contents, err := readHeader(r)
if err != nil {
return
}
switch tag {
case packetTypeEncryptedKey:
p = new(EncryptedKey)
case packetTypeSignature:
var version byte
// Detect signature version
if contents, version, err = peekVersion(contents); err != nil {
return
}
if version < 4 {
p = new(SignatureV3)
} else {
p = new(Signature)
}
case packetTypeSymmetricKeyEncrypted:
p = new(SymmetricKeyEncrypted)
case packetTypeOnePassSignature:
p = new(OnePassSignature)
case packetTypePrivateKey, packetTypePrivateSubkey:
pk := new(PrivateKey)
if tag == packetTypePrivateSubkey {
pk.IsSubkey = true
}
p = pk
case packetTypePublicKey, packetTypePublicSubkey:
var version byte
if contents, version, err = peekVersion(contents); err != nil {
return
}
isSubkey := tag == packetTypePublicSubkey
if version < 4 {
p = &PublicKeyV3{IsSubkey: isSubkey}
} else {
p = &PublicKey{IsSubkey: isSubkey}
}
case packetTypeCompressed:
p = new(Compressed)
case packetTypeSymmetricallyEncrypted:
p = new(SymmetricallyEncrypted)
case packetTypeLiteralData:
p = new(LiteralData)
case packetTypeUserId:
p = new(UserId)
case packetTypeUserAttribute:
p = new(UserAttribute)
case packetTypeSymmetricallyEncryptedMDC:
se := new(SymmetricallyEncrypted)
se.MDC = true
p = se
default:
err = errors.UnknownPacketTypeError(tag)
}
if p != nil {
err = p.parse(contents)
}
if err != nil {
consumeAll(contents)
}
return
}
// SignatureType represents the different semantic meanings of an OpenPGP
// signature. See RFC 4880, section 5.2.1.
type SignatureType uint8
const (
SigTypeBinary SignatureType = 0
SigTypeText = 1
SigTypeGenericCert = 0x10
SigTypePersonaCert = 0x11
SigTypeCasualCert = 0x12
SigTypePositiveCert = 0x13
SigTypeSubkeyBinding = 0x18
SigTypePrimaryKeyBinding = 0x19
SigTypeDirectSignature = 0x1F
SigTypeKeyRevocation = 0x20
SigTypeSubkeyRevocation = 0x28
)
// PublicKeyAlgorithm represents the different public key system specified for
// OpenPGP. See
// http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-12
type PublicKeyAlgorithm uint8
const (
PubKeyAlgoRSA PublicKeyAlgorithm = 1
PubKeyAlgoRSAEncryptOnly PublicKeyAlgorithm = 2
PubKeyAlgoRSASignOnly PublicKeyAlgorithm = 3
PubKeyAlgoElGamal PublicKeyAlgorithm = 16
PubKeyAlgoDSA PublicKeyAlgorithm = 17
// RFC 6637, Section 5.
PubKeyAlgoECDH PublicKeyAlgorithm = 18
PubKeyAlgoECDSA PublicKeyAlgorithm = 19
)
// CanEncrypt returns true if it's possible to encrypt a message to a public
// key of the given type.
func (pka PublicKeyAlgorithm) CanEncrypt() bool {
switch pka {
case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoElGamal:
return true
}
return false
}
// CanSign returns true if it's possible for a public key of the given type to
// sign a message.
func (pka PublicKeyAlgorithm) CanSign() bool {
switch pka {
case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA:
return true
}
return false
}
// CipherFunction represents the different block ciphers specified for OpenPGP. See
// http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-13
type CipherFunction uint8
const (
Cipher3DES CipherFunction = 2
CipherCAST5 CipherFunction = 3
CipherAES128 CipherFunction = 7
CipherAES192 CipherFunction = 8
CipherAES256 CipherFunction = 9
)
// KeySize returns the key size, in bytes, of cipher.
func (cipher CipherFunction) KeySize() int {
switch cipher {
case Cipher3DES:
return 24
case CipherCAST5:
return cast5.KeySize
case CipherAES128:
return 16
case CipherAES192:
return 24
case CipherAES256:
return 32
}
return 0
}
// blockSize returns the block size, in bytes, of cipher.
func (cipher CipherFunction) blockSize() int {
switch cipher {
case Cipher3DES:
return des.BlockSize
case CipherCAST5:
return 8
case CipherAES128, CipherAES192, CipherAES256:
return 16
}
return 0
}
// new returns a fresh instance of the given cipher.
func (cipher CipherFunction) new(key []byte) (block cipher.Block) {
switch cipher {
case Cipher3DES:
block, _ = des.NewTripleDESCipher(key)
case CipherCAST5:
block, _ = cast5.NewCipher(key)
case CipherAES128, CipherAES192, CipherAES256:
block, _ = aes.NewCipher(key)
}
return
}
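The `KeySize`/`blockSize` helpers above hard-code the RFC 4880 cipher parameters (with `cast5.KeySize` being 16 and `des.BlockSize` being 8 in the Go libraries). As a cross-check, the same table as a standalone Python sketch; the names are illustrative, not part of any package:

```python
# OpenPGP cipher ID -> (key size, block size), both in bytes.
# Mirrors KeySize()/blockSize() above; unknown IDs fall through to 0.
CIPHER_PARAMS = {
    2: (24, 8),   # Cipher3DES
    3: (16, 8),   # CipherCAST5 (cast5.KeySize == 16)
    7: (16, 16),  # CipherAES128
    8: (24, 16),  # CipherAES192
    9: (32, 16),  # CipherAES256
}

def key_size(cipher_id: int) -> int:
    # Returns 0 for unknown IDs, matching the Go default case.
    return CIPHER_PARAMS.get(cipher_id, (0, 0))[0]

def block_size(cipher_id: int) -> int:
    return CIPHER_PARAMS.get(cipher_id, (0, 0))[1]
```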
// readMPI reads a big integer from r. The bit length returned is the bit
// length that was specified in r. This is preserved so that the integer can be
// reserialized exactly.
func readMPI(r io.Reader) (mpi []byte, bitLength uint16, err error) {
var buf [2]byte
_, err = readFull(r, buf[0:])
if err != nil {
return
}
bitLength = uint16(buf[0])<<8 | uint16(buf[1])
numBytes := (int(bitLength) + 7) / 8
mpi = make([]byte, numBytes)
_, err = readFull(r, mpi)
return
}
// mpiLength returns the length of the given *big.Int when serialized as an
// MPI.
func mpiLength(n *big.Int) (mpiLengthInBytes int) {
mpiLengthInBytes = 2 /* MPI length */
mpiLengthInBytes += (n.BitLen() + 7) / 8
return
}
// writeMPI serializes a big integer to w.
func writeMPI(w io.Writer, bitLength uint16, mpiBytes []byte) (err error) {
_, err = w.Write([]byte{byte(bitLength >> 8), byte(bitLength)})
if err == nil {
_, err = w.Write(mpiBytes)
}
return
}
// writeBig serializes a *big.Int to w.
func writeBig(w io.Writer, i *big.Int) error {
return writeMPI(w, uint16(i.BitLen()), i.Bytes())
}
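The MPI helpers above implement the RFC 4880 section 3.2 multiprecision-integer wire format: a two-byte big-endian bit count followed by the minimal big-endian magnitude bytes. A standalone Python sketch of the same round trip (function names are illustrative, not from any library):

```python
import struct

def write_mpi(n: int) -> bytes:
    """Serialize an integer as an OpenPGP MPI: 2-byte big-endian bit
    length, then the minimal big-endian magnitude (RFC 4880 sec. 3.2)."""
    bit_len = n.bit_length()
    body = n.to_bytes((bit_len + 7) // 8, "big")
    return struct.pack(">H", bit_len) + body

def read_mpi(data: bytes):
    """Parse an MPI, returning (value, bit_length, bytes_consumed).
    The bit length is preserved so the MPI can be reserialized exactly."""
    (bit_len,) = struct.unpack(">H", data[:2])
    num_bytes = (bit_len + 7) // 8  # same rounding as readMPI above
    value = int.from_bytes(data[2:2 + num_bytes], "big")
    return value, bit_len, 2 + num_bytes

encoded = write_mpi(65537)  # 0x010001: 17 bits -> 3 magnitude bytes
value, bits, used = read_mpi(encoded)
```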
// CompressionAlgo represents the different compression algorithms
// supported by OpenPGP (except for BZIP2, which is not currently
// supported). See Section 9.3 of RFC 4880.
type CompressionAlgo uint8
const (
CompressionNone CompressionAlgo = 0
CompressionZIP CompressionAlgo = 1
CompressionZLIB CompressionAlgo = 2
)
import ShikiUploader from 'shiki-uploader';
import csrf from 'helpers/csrf';
export class FileUploader extends ShikiUploader {
constructor(node, options = {}) {
super({
...options,
node,
locale: I18n.locale,
xhrEndpoint: node.getAttribute('data-upload_url'),
xhrHeaders: () => csrf().headers
});
this.node.classList.remove('b-ajax');
this._scheduleUnbind();
}
_scheduleUnbind() {
$(document).one('turbolinks:before-cache', this.destroy);
}
}
//
// BearConstants.h
// Bear
//
// Created by Bear on 30/12/24.
// Copyright © 2015 Bear. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import "UIView+BearSet.h"
#import <MBProgressHUD/MBProgressHUD.h>
// HUD
static MBProgressHUD *_stateHud;
// NotificationCenter keys
static NSString *NotificationTest = @"NotificationTest";
// UserDefaults keys
static NSString *usTest = @"usTest";
@interface BearConstants : NSObject
// Get the current time/date string
+ (NSString *)getCurrentTimeStr;
// Fetch a value from a dict, checking for null
+ (id)setDataWithDict:(NSDictionary *)dict keyStr:(NSString *)keyStr;
// Fetch a value from a dict, checking for null (string-specific)
+ (NSString *)setStringWithDict:(NSDictionary *)dict keyStr:(NSString *)keyStr;
// Guard against a string being <null>
+ (NSString *)avoidStringCrash:(id)string;
// Check whether a string is non-empty
+ (BOOL)judgeStringExist:(id)string;
// Check that every string in the array exists
+ (BOOL)judgeStringExistFromArray:(NSArray *)array;
// Check whether a dict contains a given key
+ (BOOL)judgeDictHaveStr:(NSString *)keyStr dict:(NSDictionary *)dict;
// Fetch an image from a URL
+ (UIImage *)getImageFromURL:(NSString *)imageURL;
// Resize an image
+ (UIImage *)scaleToSize:(UIImage *)img size:(CGSize)newsize;
+ (UIImage *)scaleToSize:(UIImage *)img size:(CGSize)newsize opaque:(BOOL)opaque;
// Snapshot a view into an image
+ (UIImage *)convertViewToImage:(UIView *)view;
// Composite two images
+ (UIImage *)imageSynthesisWithImage1:(UIImage *)image1 image2:(UIImage *)image2 size:(CGSize)size;
// Validate a name
+ (BOOL)validateNameString:(NSString *)nameStr;
// Validate a phone number
+ (BOOL)validatePhoneString:(NSString *)phoneStr;
/**
* Block Demo
*/
+ (void)requestClearMessage:(NSNumber *)notificationId success:(void (^) (void))success failure:(void (^) (void))failure;
// Run a block after a delay
+ (void)delayAfter:(CGFloat)delayTime dealBlock:(void (^)(void))dealBlock;
// Get a random color
+ (UIColor *)randomColor;
+ (UIColor *)randomColorWithAlpha:(CGFloat)alpha;
// Get the currently visible view controller
+ (id)getCurrentViewController;
/***** Nav Push *****/
// Fetch the VC of the given class from naviVC's stack
+ (id)fetchVCWithClassName:(NSString *)className inNaviVC:(UINavigationController *)naviVC;
// Pop to the given VC; if it is not on the stack, pop to the root VC
+ (void)popToDestinationVC:(UIViewController *)destionationVC inVC:(UIViewController *)nowVC;
// Pop to the given VC; if it is not on the stack, pop to the root VC
+ (void)popToDestinationVCClassName:(NSString *)destionationVCClassName inVC:(UIViewController *)nowVC;
// Pop to the given VC; if it is not on the stack, pop to the root VC
+ (BOOL)findAndpopToDestinationVCClassName:(NSString *)destionationVCClassName inVC:(UIViewController *)nowVC;
// Pop back num controllers; if num exceeds the stack depth, pop to the root VC
+ (void)popOverNum:(int)num inVC:(UIViewController *)nowVC;
// Get the previous VC in the same navigation stack as the given VC
+ (id)getAheadVCInVC:(UIViewController *)inVC;
// Check whether one string contains another
+ (BOOL)theString:(NSString *)string containsString:(NSString*)other;
/**
 *  Remove the given VC from the navigation controller's stack
 *
 *  @param removeVC  the VC to remove, or its class name
 *  @param navVC     the navigation controller
 */
+ (void)removeVC:(id)removeVC inNavVC:(UINavigationController *)navVC;
/**
 *  Insert the given VC into the navigation controller's stack
 *
 *  @param insertVC  the VC to insert
 *  @param navVC     the navigation controller
 */
+ (void)insertVC:(UIViewController *)insertVC inNavVC:(UINavigationController *)navVC atIndex:(NSInteger)index;
/** Parse a parameter string into a dictionary
 *
 *  Example input:
 *  para_1=1&para_2=2
 */
+ (NSDictionary *)convertParaStrToDict_paraStr:(NSString *)paraStr;
// Convert a frame to bounds
+ (CGRect)convertFrameToBounds_frame:(CGRect)frame;
/**
 *  Loop test
 *
 *  @param during     interval between iterations
 *  @param eventBlock block run on each iteration
 */
+ (void)loopTestDuring:(CGFloat)during eventBlock:(void (^)(void))eventBlock;
// Apply a tint color to an image view
+ (void)imageView:(UIImageView *)imageView setImage:(UIImage *)image tintColor:(UIColor *)tintColor;
/**
 *  Create a gradient layer
 *
 *  @param fromColor gradient start color
 *  @param toColor   gradient end color
 *  @param axis      gradient direction
 *                   kLAYOUT_AXIS_Y: top to bottom
 *                   kLAYOUT_AXIS_X: left to right
 */
+ (CAGradientLayer *)generateGradientLayerWithRect:(CGRect)rect
fromColor:(UIColor *)fromColor
toColor:(UIColor *)toColor
axis:(kLAYOUT_AXIS)axis;
// Compute the difference between two dates
+ (NSDateComponents *)caculateDateDValueFromDate:(NSDate *)fromDate toDate:(NSDate *)toDate;
// Validate that an index is within the array's bounds
+ (BOOL)validateArray:(NSArray *)array index:(NSInteger)index;
// Run a block on the main thread
+ (void)processInMainThreadWithBlock:(void (^)(void))block;
+ (BOOL)isDebug;
+ (BOOL)isRelease;
+ (void)debug:(void (^)(void))debug release:(void (^)(void))release;
+ (void)resignCurrentFirstResponder;
// Check whether the device is an iPad
+ (BOOL)getIsIpad;
// Check whether the device is an X-series (notched) iPhone
+ (BOOL)getIsXSeries;
// Get the launch image
+ (UIImage *)getLaunchImage;
/** Get the app icon image; supports both iPhone and iPad */
+ (UIImage *)getAppIconImage;
// Parse parameters out of a URL string
+ (NSDictionary *)getUrlParams:(NSString *)urlStr;
/**
 *  Scale an image to a target size
 *  @param targetSize  the size of the target image
 *  @param sourceImage the source image
 *  @return the resized image
 */
+ (UIImage*)imageByScalingAndCroppingForSize:(CGSize)targetSize withSourceImage:(UIImage *)sourceImage;
// Build the basic DeviceInfo dictionary
+ (void)generateDeviceInfoBaseInDict:(NSMutableDictionary *)finalParaDict;
@end
#!/bin/bash
PMS_LIB_DIR=/usr/lib/plexmediaserver
PMS_DOWNLOAD_DIR=/tmp/plex_tmp_download
get_pms_build(){
local cpuinfo=$(grep -i features /proc/cpuinfo)
local cpuarch=$(grep -i architecture /proc/cpuinfo | cut -d' ' -f3 | head -n1)
local bios_board_vendor=""
if [[ -e /sys/class/dmi/id/board_vendor ]]; then
bios_board_vendor=$(cat /sys/class/dmi/id/board_vendor)
elif [[ -e /sys/devices/virtual/dmi/id/board_vendor ]]; then
bios_board_vendor=$(cat /sys/devices/virtual/dmi/id/board_vendor)
fi
if [[ $bios_board_vendor == "AMD" ]]; then
echo "synology"
elif [[ $cpuinfo =~ .*neon.* ]] && [[ $cpuinfo =~ .*vfpv4.* ]] && [[ $cpuinfo =~ .*thumb.* ]] && [[ $cpuinfo =~ .*idiva.* ]]; then
echo "netgear"
elif [[ $cpuarch == "8" ]]; then
echo "netgear"
else
echo "synology"
fi
}
install_netgear(){
local PMS_URL='https://downloads.plex.tv/plex-media-server-new/1.15.3.876-ad6e39743/netgear/plexmediaserver-annapurna_1.15.3.876-ad6e39743_armel.deb'
local PMS_HASH='f1477376d7d1e58810bf8b89f868c2eee7ddd7eb49010ec52e07126a31c20c0a'
echo "Downloading readynas package ..."
cd $PMS_DOWNLOAD_DIR
curl --progress-bar -o readynas.deb $PMS_URL
local PMS_DOWNLOAD_HASH=`sha256sum readynas.deb | cut -d' ' -f1`
if [ "$PMS_HASH" != "$PMS_DOWNLOAD_HASH" ]
then
echo "Checksum mismatch. Downloaded file does not match this package."
exit 1
else
echo "Passed checksum test."
fi
echo "Extracting readynas.deb ..."
dpkg-deb --fsys-tarfile readynas.deb | tar -xf - -C $PMS_LIB_DIR/ --strip-components=4 ./apps/plexmediaserver-annapurna/Binaries
}
install_synology(){
local PMS_URL='https://downloads.plex.tv/plex-media-server-new/1.15.3.876-ad6e39743/synology/PlexMediaServer-1.15.3.876-ad6e39743-armv7hf.spk'
local PMS_HASH='1d6a1df921ac53896b6dff55bb5869c5f9c27ace5cf26806189b54fbbdae5c54'
echo "Downloading synology package ..."
cd $PMS_DOWNLOAD_DIR
curl --progress-bar -o synology.tar $PMS_URL
local PMS_DOWNLOAD_HASH=`sha256sum synology.tar | cut -d' ' -f1`
if [ "$PMS_HASH" != "$PMS_DOWNLOAD_HASH" ]
then
echo "Checksum mismatch. Downloaded file does not match this package."
exit 1
else
echo "Passed checksum test."
fi
echo "Extracting synology.tar ..."
tar -xOf synology.tar package.tgz | tar -xzf - -C $PMS_LIB_DIR/
# remove not used files
rm -r $PMS_LIB_DIR/dsm_config
}
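Both installers follow the same download-and-verify pattern: pin a SHA-256, hash the downloaded file with `sha256sum file | cut -d' ' -f1`, and abort on mismatch. The equivalent check, sketched in Python against in-memory bytes (names are illustrative):

```python
import hashlib

def verify_download(payload: bytes, expected_sha256: str) -> bool:
    """Return True if payload hashes to the pinned SHA-256 hex digest,
    mirroring the script's sha256sum comparison."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Hypothetical payload standing in for the downloaded .deb/.spk bytes.
pkg = b"example package bytes"
pinned = hashlib.sha256(pkg).hexdigest()
ok = verify_download(pkg, pinned)        # checksum matches
bad = verify_download(b"tampered", pinned)  # checksum mismatch
```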
case "$1" in
configure)
adduser --quiet --system --shell /bin/bash --home /var/lib/plexmediaserver --group plex
# fix missing plex group in the old package
addgroup --quiet --system plex
usermod -g plex plex &> /dev/null
# add plex to the video group
gpasswd -a plex video
# create dirs
mkdir -p $PMS_DOWNLOAD_DIR
mkdir -p $PMS_LIB_DIR
pmsbuild=$(get_pms_build)
if [[ $pmsbuild == "netgear" ]]; then
install_netgear
else
install_synology
fi
# remove tmp data
cd /tmp
rm -r $PMS_DOWNLOAD_DIR/
# Ensure we load the udevrule and trigger for any already inserted USB device
if [ -f /sbin/udevadm ]; then
udevadm control --reload-rules || :
udevadm trigger
else
echo
echo "##################################################################"
echo "# NOTE: Your system does not have udev installed. Without udev #"
echo "# you won't be able to use DVBLogic's TVButler for DVR #"
echo "# or for LiveTV #"
echo "# #"
echo "# Please install udev and reinstall Plex Media Server    #"
echo "# to enable TV Butler support in Plex Media Server. #"
echo "# #"
echo "# To install udev run: sudo apt-get install udev #"
echo "# #"
echo "##################################################################"
echo
fi
if [ -f /proc/1/comm ]; then
if [ "`cat /proc/1/comm`" = "systemd" ]; then
# Initiate config consolidation and pull overrides into the correct location.
if [ -f /etc/default/plexmediaserver ]; then
/usr/lib/plexmediaserver/MigratePlexServerConfig.sh
fi
systemctl daemon-reload
systemctl enable plexmediaserver
systemctl start plexmediaserver
else
update-rc.d plexmediaserver defaults
/etc/init.d/plexmediaserver start
fi
fi
cat <<- EOF
##### ATTENTION #####
THIS IS THE LAST UPDATE OF THE dev2day.de PACKAGE!!! UPDATE TO THE OFFICIAL PACKAGE NOW!!!
Finally, official ARMv7 and ARMv8 Plex packages are available:
https://www.plex.tv/media-server-downloads/
The migration is simple and your Plex library will be preserved.
First, remove the dev2day.de package:
sudo apt-get remove plexmediaserver-installer
Then, remove the dev2day.de repo, e.g. with:
sudo rm /etc/apt/sources.list.d/pms.list
Now, download the appropriate package from https://www.plex.tv/media-server-downloads/ and put it on your device, e.g.:
wget https://downloads.plex.tv/plex-media-server-new/1.15.3.876-ad6e39743/debian/plexmediaserver_1.15.3.876-ad6e39743_armhf.deb
Finally, install the package, e.g.:
sudo dpkg -i plexmediaserver_1.15.3.876-ad6e39743_armhf.deb
Also, the official Plex repository is now available:
https://support.plex.tv/articles/235974187-enable-repository-updating-for-supported-linux-server-distributions/
The following thread has more information: https://forums.plex.tv/t/read-me-first-about-server-armv7-and-armv8-ubuntu-debian/226567/
Post your questions here: https://forums.plex.tv/tags/server-linux-arm
####################
EOF
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
exit 0
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.
/**
*
* Apache License
* Version 2.0, January 2004
* http://www.apache.org/licenses/
*
* TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
*
* 1. Definitions.
*
* "License" shall mean the terms and conditions for use, reproduction,
* and distribution as defined by Sections 1 through 9 of this document.
*
* "Licensor" shall mean the copyright owner or entity authorized by
* the copyright owner that is granting the License.
*
* "Legal Entity" shall mean the union of the acting entity and all
* other entities that control, are controlled by, or are under common
* control with that entity. For the purposes of this definition,
* "control" means (i) the power, direct or indirect, to cause the
* direction or management of such entity, whether by contract or
* otherwise, or (ii) ownership of fifty percent (50%) or more of the
* outstanding shares, or (iii) beneficial ownership of such entity.
*
* "You" (or "Your") shall mean an individual or Legal Entity
* exercising permissions granted by this License.
*
* "Source" form shall mean the preferred form for making modifications,
* including but not limited to software source code, documentation
* source, and configuration files.
*
* "Object" form shall mean any form resulting from mechanical
* transformation or translation of a Source form, including but
* not limited to compiled object code, generated documentation,
* and conversions to other media types.
*
* "Work" shall mean the work of authorship, whether in Source or
* Object form, made available under the License, as indicated by a
* copyright notice that is included in or attached to the work
* (an example is provided in the Appendix below).
*
* "Derivative Works" shall mean any work, whether in Source or Object
* form, that is based on (or derived from) the Work and for which the
* editorial revisions, annotations, elaborations, or other modifications
* represent, as a whole, an original work of authorship. For the purposes
* of this License, Derivative Works shall not include works that remain
* separable from, or merely link (or bind by name) to the interfaces of,
* the Work and Derivative Works thereof.
*
* "Contribution" shall mean any work of authorship, including
* the original version of the Work and any modifications or additions
* to that Work or Derivative Works thereof, that is intentionally
* submitted to Licensor for inclusion in the Work by the copyright owner
* or by an individual or Legal Entity authorized to submit on behalf of
* the copyright owner. For the purposes of this definition, "submitted"
* means any form of electronic, verbal, or written communication sent
* to the Licensor or its representatives, including but not limited to
* communication on electronic mailing lists, source code control systems,
* and issue tracking systems that are managed by, or on behalf of, the
* Licensor for the purpose of discussing and improving the Work, but
* excluding communication that is conspicuously marked or otherwise
* designated in writing by the copyright owner as "Not a Contribution."
*
* "Contributor" shall mean Licensor and any individual or Legal Entity
* on behalf of whom a Contribution has been received by Licensor and
* subsequently incorporated within the Work.
*
* 2. Grant of Copyright License. Subject to the terms and conditions of
* this License, each Contributor hereby grants to You a perpetual,
* worldwide, non-exclusive, no-charge, royalty-free, irrevocable
* copyright license to reproduce, prepare Derivative Works of,
* publicly display, publicly perform, sublicense, and distribute the
* Work and such Derivative Works in Source or Object form.
*
* 3. Grant of Patent License. Subject to the terms and conditions of
* this License, each Contributor hereby grants to You a perpetual,
* worldwide, non-exclusive, no-charge, royalty-free, irrevocable
* (except as stated in this section) patent license to make, have made,
* use, offer to sell, sell, import, and otherwise transfer the Work,
* where such license applies only to those patent claims licensable
* by such Contributor that are necessarily infringed by their
* Contribution(s) alone or by combination of their Contribution(s)
* with the Work to which such Contribution(s) was submitted. If You
* institute patent litigation against any entity (including a
* cross-claim or counterclaim in a lawsuit) alleging that the Work
* or a Contribution incorporated within the Work constitutes direct
* or contributory patent infringement, then any patent licenses
* granted to You under this License for that Work shall terminate
* as of the date such litigation is filed.
*
* 4. Redistribution. You may reproduce and distribute copies of the
* Work or Derivative Works thereof in any medium, with or without
* modifications, and in Source or Object form, provided that You
* meet the following conditions:
*
* (a) You must give any other recipients of the Work or
* Derivative Works a copy of this License; and
*
* (b) You must cause any modified files to carry prominent notices
* stating that You changed the files; and
*
* (c) You must retain, in the Source form of any Derivative Works
* that You distribute, all copyright, patent, trademark, and
* attribution notices from the Source form of the Work,
* excluding those notices that do not pertain to any part of
* the Derivative Works; and
*
* (d) If the Work includes a "NOTICE" text file as part of its
* distribution, then any Derivative Works that You distribute must
* include a readable copy of the attribution notices contained
* within such NOTICE file, excluding those notices that do not
* pertain to any part of the Derivative Works, in at least one
* of the following places: within a NOTICE text file distributed
* as part of the Derivative Works; within the Source form or
* documentation, if provided along with the Derivative Works; or,
* within a display generated by the Derivative Works, if and
* wherever such third-party notices normally appear. The contents
* of the NOTICE file are for informational purposes only and
* do not modify the License. You may add Your own attribution
* notices within Derivative Works that You distribute, alongside
* or as an addendum to the NOTICE text from the Work, provided
* that such additional attribution notices cannot be construed
* as modifying the License.
*
* You may add Your own copyright statement to Your modifications and
* may provide additional or different license terms and conditions
* for use, reproduction, or distribution of Your modifications, or
* for any such Derivative Works as a whole, provided Your use,
* reproduction, and distribution of the Work otherwise complies with
* the conditions stated in this License.
*
* 5. Submission of Contributions. Unless You explicitly state otherwise,
* any Contribution intentionally submitted for inclusion in the Work
* by You to the Licensor shall be under the terms and conditions of
* this License, without any additional terms or conditions.
* Notwithstanding the above, nothing herein shall supersede or modify
* the terms of any separate license agreement you may have executed
* with Licensor regarding such Contributions.
*
* 6. Trademarks. This License does not grant permission to use the trade
* names, trademarks, service marks, or product names of the Licensor,
* except as required for reasonable and customary use in describing the
* origin of the Work and reproducing the content of the NOTICE file.
*
* 7. Disclaimer of Warranty. Unless required by applicable law or
* agreed to in writing, Licensor provides the Work (and each
* Contributor provides its Contributions) on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
* implied, including, without limitation, any warranties or conditions
* of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
* PARTICULAR PURPOSE. You are solely responsible for determining the
* appropriateness of using or redistributing the Work and assume any
* risks associated with Your exercise of permissions under this License.
*
* 8. Limitation of Liability. In no event and under no legal theory,
* whether in tort (including negligence), contract, or otherwise,
* unless required by applicable law (such as deliberate and grossly
* negligent acts) or agreed to in writing, shall any Contributor be
* liable to You for damages, including any direct, indirect, special,
* incidental, or consequential damages of any character arising as a
* result of this License or out of the use or inability to use the
* Work (including but not limited to damages for loss of goodwill,
* work stoppage, computer failure or malfunction, or any and all
* other commercial damages or losses), even if such Contributor
* has been advised of the possibility of such damages.
*
* 9. Accepting Warranty or Additional Liability. While redistributing
* the Work or Derivative Works thereof, You may choose to offer,
* and charge a fee for, acceptance of support, warranty, indemnity,
* or other liability obligations and/or rights consistent with this
* License. However, in accepting such obligations, You may act only
* on Your own behalf and on Your sole responsibility, not on behalf
* of any other Contributor, and only if You agree to indemnify,
* defend, and hold each Contributor harmless for any liability
* incurred by, or claims asserted against, such Contributor by reason
* of your accepting any such warranty or additional liability.
*
* END OF TERMS AND CONDITIONS
*
* APPENDIX: How to apply the Apache License to your work.
*
* To apply the Apache License to your work, attach the following
* boilerplate notice, with the fields enclosed by brackets "[]"
* replaced with your own identifying information. (Don't include
* the brackets!) The text should be enclosed in the appropriate
* comment syntax for the file format. We also recommend that a
* file or class name and description of purpose be included on the
* same "printed page" as the copyright notice for easier
* identification within third-party archives.
*
* Copyright 2016 Alibaba Group
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.taobao.weex;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import com.taobao.weex.utils.WXLogUtils;
import java.util.HashMap;
public class WXGlobalEventReceiver extends BroadcastReceiver {
public static final String EVENT_NAME = "eventName";
public static final String EVENT_PARAMS = "eventParams";
public static final String EVENT_ACTION = "wx_global_action";
public static final String EVENT_WX_INSTANCEID = "wx_instanceid";
private WXSDKInstance mWXSDKInstance;
public WXGlobalEventReceiver() {
}
public WXGlobalEventReceiver(WXSDKInstance instance) {
mWXSDKInstance = instance;
}
@Override
public void onReceive(Context context, Intent intent) {
String eventName = intent.getStringExtra(EVENT_NAME);
String params = intent.getStringExtra(EVENT_PARAMS);
HashMap<String, Object> maps = null;
try {
maps = com.alibaba.fastjson.JSON.parseObject(params, HashMap.class);
mWXSDKInstance.fireGlobalEventCallback(eventName, maps);
} catch (Exception e) {
WXLogUtils.e("global-receive",e);
}
}
}
# Modifiers\.Internal\_Static\(\) Method
[Home](../../../../README.md)
**Containing Type**: [Modifiers](../README.md)
**Assembly**: Roslynator\.CSharp\.dll
\
Creates a list of modifiers that contains "internal static" modifiers\.
```csharp
public static Microsoft.CodeAnalysis.SyntaxTokenList Internal_Static()
```
### Returns
[SyntaxTokenList](https://docs.microsoft.com/en-us/dotnet/api/microsoft.codeanalysis.syntaxtokenlist)
package scala.meta.trees
import com.intellij.openapi.progress.ProgressManager
import com.intellij.psi._
import org.jetbrains.plugins.scala.extensions.PsiElementExt
import org.jetbrains.plugins.scala.lang.psi.api.ScalaPsiElement
import org.jetbrains.plugins.scala.lang.psi.api.base._
import org.jetbrains.plugins.scala.lang.psi.api.expr._
import org.jetbrains.plugins.scala.lang.psi.api.statements._
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.imports.ScImportStmt
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.typedef._
import org.jetbrains.plugins.scala.lang.psi.{api => p, types => ptype}
import scala.collection.immutable.Seq
import scala.language.postfixOps
import scala.meta.Defn
import scala.meta.trees.error._
import scala.{meta => m}
trait TreeAdapter {
self: TreeConverter =>
def ideaToMeta(tree: PsiElement): m.Tree = {
ProgressManager.checkCanceled()
tree match {
case t: ScValueDeclaration => toVal(t)
case t: ScVariableDeclaration => toVar(t)
case t: ScTypeAliasDeclaration => toTypeDecl(t)
case t: ScTypeAliasDefinition => toTypeDefn(t)
case t: ScFunctionDeclaration => toFunDecl(t)
case t: ScPatternDefinition => toPatternDefinition(t)
case t: ScVariableDefinition => toVarDefn(t)
case t: ScFunctionDefinition => toFunDefn(t)
case t: ScMacroDefinition => toMacroDefn(t)
case t: ScPrimaryConstructor => ctor(Some(t))
case t: ScTrait => toTrait(t)
case t: ScClass => toClass(t)
case t: ScObject => toObject(t)
case t: ScAnnotation => toAnnot(t)
case t: ScExpression => expression(Some(t)).get
case t: ScImportStmt => m.Import(Seq(t.importExprs.map(imports):_*))
case t: PsiClass => toClass(t)
case t: PsiMethod => t ???
case other => other ?!
}
}
def toAnnotCtor(annot: ScAnnotation): m.Term.New = {
m.Term.New(m.Template(Nil, Seq(toCtor(annot.constructorInvocation)), m.Term.Param(Nil, m.Name.Anonymous(), None, None), None))
}
def toMacroDefn(t: ScMacroDefinition): m.Defn.Macro =
t.definedReturnType match {
case Right(value) =>
m.Defn.Macro(
convertMods(t), toTermName(t),
Seq(t.typeParameters map toTypeParams: _*),
Seq(t.paramClauses.clauses.map(convertParamClause): _*),
Option(toType(value)),
t.macroImplReference.map(getQualifiedReference).get
)
case _ => unreachable("Macro definition must have return type defined")
}
def toFunDefn(t: ScFunctionDefinition): m.Defn.Def = {
m.Defn.Def(convertMods(t), toTermName(t),
Seq(t.typeParameters map toTypeParams: _*),
Seq(t.paramClauses.clauses.map(convertParamClause): _*),
t.returnTypeElement.map(toType),
expression(t.body).getOrElse(m.Term.Block(Nil))
)
}
// Java conversion - beware: we cannot convert java method bodies, so just return a lazy ???
def toMethodDefn(t: PsiMethod): m.Defn.Def = {
t ???
}
def toVarDefn(t: ScVariableDefinition): m.Defn.Var = {
m.Defn.Var(convertMods(t), Seq(t.pList.patterns.map(pattern): _*), t.declaredType.map(toType(_)), expression(t.expr))
}
def toFunDecl(t: ScFunctionDeclaration): m.Decl.Def = {
m.Decl.Def(convertMods(t), toTermName(t), Seq(t.typeParameters map toTypeParams: _*),
Seq(t.paramClauses.clauses.map(convertParamClause): _*),
t.returnTypeElement.map(toType).getOrElse(toStdTypeName(ptype.api.Unit(t.projectContext))))
}
def toTypeDefn(t: ScTypeAliasDefinition): m.Defn.Type = {
m.Defn.Type(convertMods(t), toTypeName(t), Seq(t.typeParameters map toTypeParams: _*), toType(t.aliasedTypeElement.get))
}
def toTypeDecl(t: ScTypeAliasDeclaration): m.Decl.Type = {
m.Decl.Type(convertMods(t), toTypeName(t), Seq(t.typeParameters map toTypeParams: _*), typeBounds(t))
}
def toVar(t: ScVariableDeclaration): m.Decl.Var = {
m.Decl.Var(convertMods(t), Seq(t.getIdList.fieldIds map { it => m.Pat.Var.Term(toTermName(it)) }: _*), toType(t.typeElement.get))
}
def toVal(t: ScValueDeclaration): m.Decl.Val = {
m.Decl.Val(convertMods(t), Seq(t.getIdList.fieldIds map { it => m.Pat.Var.Term(toTermName(it)) }: _*), toType(t.typeElement.get))
}
def toTrait(t: ScTrait): m.Tree = {
val defn = m.Defn.Trait(
convertMods(t),
toTypeName(t),
Seq(t.typeParameters map toTypeParams: _*),
m.Ctor.Primary(Nil, m.Ctor.Ref.Name("this"), Nil),
template(t.physicalExtendsBlock)
)
t.baseCompanionModule match {
case Some(obj: ScObject) => m.Term.Block(Seq(defn, toObject(obj)))
case _ => defn
}
}
def toClass(c: ScClass): m.Tree = {
val defn = m.Defn.Class(
convertMods(c),
toTypeName(c),
Seq(c.typeParameters map toTypeParams: _*),
ctor(c.constructor),
template(c.physicalExtendsBlock)
)
c.baseCompanionModule match {
case Some(obj: ScObject) => m.Term.Block(Seq(defn, toObject(obj)))
case _ => defn
}
}
def toClass(c: PsiClass): Defn.Class = m.Defn.Class(
convertMods(c.getModifierList),
toTypeName(c),
Seq(c.getTypeParameters map toTypeParams:_*),
ctor(c),
template(c.getAllMethods)
)
def toObject(o: ScObject): Defn.Object = m.Defn.Object(
convertMods(o),
toTermName(o),
template(o.physicalExtendsBlock)
)
def ctor(pc: Option[ScPrimaryConstructor]): m.Ctor.Primary = {
pc match {
case Some(ctor) => m.Ctor.Primary(convertMods(ctor), toPrimaryCtorName(ctor), Seq(ctor.parameterList.clauses.map(convertParamClause):_*))
case None => unreachable("no primary constructor in class")
}
}
// FIXME: we don't have explicit information on what ctor has been used, so just select first one
def ctor(c: PsiClass): m.Ctor.Primary = {
// m.Ctor.Primary(Seq.empty, m.Ctor.Ref.Name(c.getName).withDenot())
c ???
}
def caseClause(c: patterns.ScCaseClause): m.Case = {
m.Case(pattern(c.pattern.get), c.guard.map(it => expression(it.expr.get)), expression(c.expr.get).asInstanceOf[m.Term.Block])
}
def pattern(pt: patterns.ScPattern): m.Pat = {
import p.base.patterns._
import m.Pat._
def compose(lst: Seq[ScPattern]): m.Pat = lst match {
case x :: Nil => pattern(x)
case x :: xs => Alternative(pattern(x), compose(xs))
}
// WHY??(((
def arg(pt: patterns.ScPattern): m.Pat.Arg = pt match {
case _: ScSeqWildcard => Arg.SeqWildcard()
case _: ScWildcardPattern => Wildcard()
case ScStableReferencePattern(reference) => toTermName(reference)
case t: ScPattern => pattern(t)
}
pt match {
case t: ScReferencePattern => Var.Term(toTermName(t))
case t: ScConstructorPattern=> Extract(toTermName(t.ref), Nil, Seq(t.args.patterns.map(arg):_*))
case t: ScNamingPattern => Bind(Var.Term(toTermName(t)), arg(t.named))
case t@ ScTypedPattern(_: types.ScWildcardTypeElement) => Typed(if (t.isWildcard) Wildcard() else Var.Term(toTermName(t)), Type.Wildcard())
case t@ ScTypedPattern(te) => Typed(if (t.isWildcard) Wildcard() else Var.Term(toTermName(t)), toType(te).patTpe)
case ScLiteralPattern(scLiteral) => literal(scLiteral)
case t: ScTuplePattern => Tuple(Seq(t.patternList.get.patterns.map(pattern):_*))
case t: ScWildcardPattern => Wildcard()
case t: ScCompositePattern => compose(Seq(t.subpatterns : _*))
case t: ScInfixPattern => ExtractInfix(pattern(t.left), toTermName(t.operation), t.rightOption.map(pt=>Seq(pattern(pt))).getOrElse(Nil))
case ScStableReferencePattern(reference) => toTermName(reference)
case t: ScPattern => t ?!
}
}
def template(t: p.toplevel.templates.ScExtendsBlock): m.Template = {
val exprs = t.templateBody map (it => Seq(it.exprs.map(expression): _*))
val members = t.templateBody map (it => Seq(it.members.map(ideaToMeta(_).asInstanceOf[m.Stat]): _*))
val early = t.earlyDefinitions map (it => Seq(it.members.map(ideaToMeta(_).asInstanceOf[m.Stat]):_*)) getOrElse Seq.empty
val ctor = t.templateParents
.flatMap(_.children.find(_.isInstanceOf[ScConstructorInvocation]))
.map(c=>toCtor(c.asInstanceOf[ScConstructorInvocation]))
.toSeq
val mixins = t.templateParents.map(x=>x.typeElementsWithoutConstructor.map(toType).map(toCtor)).getOrElse(Seq.empty)
val self = t.selfType match {
case Some(tpe: ptype.ScType) => m.Term.Param(Nil, m.Term.Name("self"), Some(toType(tpe)), None)
case None => m.Term.Param(Nil, m.Name.Anonymous(), None, None)
}
// FIXME: preserve expression and member order
val stats = (exprs, members) match {
case (Some(exp), Some(hld)) => Some(hld ++ exp)
case (Some(exp), None) => Some(exp)
case (None, Some(hld)) => Some(hld)
case (None, None) => None
}
m.Template(early, Seq(ctor:_*) ++ mixins, self, stats)
}
// Java conversion
def template(arr: Array[PsiMethod]): m.Template = {
???
}
def newTemplate(t: ScTemplateDefinition): m.Template = {
val early = t.extendsBlock.earlyDefinitions map (it => Seq(it.members.map(ideaToMeta(_).asInstanceOf[m.Stat]):_*)) getOrElse Seq.empty
val ctor = t.extendsBlock.templateParents match {
case Some(parents) =>
parents.constructorInvocation match {
case Some(constrInvocation) => toCtor(constrInvocation)
case None => unreachable(s"no constructor found in class ${t.qualifiedName}")
}
case None => unreachable(s"Class ${t.qualifiedName} has no parents")
}
val self = t.selfType match {
case Some(tpe: ptype.ScType) =>
m.Term.Param(Nil, m.Term.Name("self")/*.withAttrsFor(t)*/, Some(toType(tpe)), None)
case None =>
m.Term.Param(Nil, m.Name.Anonymous(), None, None)
}
m.Template(early, Seq(ctor), self, None)
}
def toCtor(constrInvocation: ScConstructorInvocation): m.Ctor.Call = {
val ctorCall@m.Term.Apply(ctorRef, _) = toCtor(toType(constrInvocation.typeElement))
if (constrInvocation.arguments.isEmpty) { ctorCall }
else {
val head = m.Term.Apply(ctorRef, Seq(constrInvocation.arguments.head.exprs.map(callArgs): _*))
constrInvocation.arguments.tail.foldLeft(head)((term, exprList) => m.Term.Apply(term, Seq(exprList.exprs.map(callArgs): _*)))
}
}
private def toCtor(tp: m.Type): m.Term.Apply = {
val ctor = toCtorRef(tp)
m.Term.Apply(ctor, Nil)
}
private def toCtorRef(tp: m.Type): m.Ctor.Call = {
tp match {
case m.Type.Name(value) => m.Ctor.Ref.Name(value)
case m.Type.Select(qual, name) => m.Ctor.Ref.Select(qual, m.Ctor.Ref.Name(name.value))
case m.Type.Project(qual, name) => m.Ctor.Ref.Project(qual, m.Ctor.Ref.Name(name.value))
case m.Type.Apply(tpe, args) => m.Term.ApplyType(toCtorRef(tpe), args)
case other => unreachable(s"Unexpected type in constructor type element - $other")
}
}
def toAnnot(annot: ScAnnotation): m.Mod.Annot = {
m.Mod.Annot(toCtor(annot.constructorInvocation))
}
def expression(e: ScExpression): m.Term = {
import p.expr._
import p.expr.xml._
e match {
case t: ScLiteral =>
import m.Lit
literal(t) match {
case value: Lit.Symbol if paradiseCompatibilityHacks => // apparently, Lit.Symbol is no more in paradise
m.Term.Apply(
m.Term.Select(
m.Term.Name("scala"),
m.Term.Name("Symbol")
),
Seq(Lit.String(value.toString)) // symbol literals in meta contain a string as their value
)
case value => value
}
case _: ScUnitExpr =>
m.Lit.Unit(())
case t: ScReturn =>
m.Term.Return(expression(t.expr).getOrElse(m.Lit.Unit(())))
case t: ScBlockExpr if t.hasCaseClauses =>
m.Term.PartialFunction(Seq(t.caseClauses.get.caseClauses.map(caseClause):_*))
case t: ScBlock =>
m.Term.Block(Seq(t.statements.map(ideaToMeta(_).asInstanceOf[m.Stat]):_*))
case t: ScMethodCall =>
t.withSubstitutionCaching { tp =>
m.Term.Apply(expression(t.getInvokedExpr), Seq(t.args.exprs.map(callArgs): _*))
}
case ScInfixExpr.withAssoc(base, operation, argument) =>
m.Term.ApplyInfix(expression(base), toTermName(operation), Nil, Seq(expression(argument)))
case t: ScPrefixExpr =>
m.Term.ApplyUnary(toTermName(t.operation), expression(t.operand))
case t: ScPostfixExpr =>
t.withSubstitutionCaching { tp =>
m.Term.Apply(m.Term.Select(expression(t.operand), toTermName(t.operation)), Nil)
}
case t: ScIf =>
val unit = m.Lit.Unit(())
m.Term.If(expression(t.condition.get),
t.thenExpression.map(expression).getOrElse(unit), t.elseExpression.map(expression).getOrElse(unit))
case t: ScDo =>
m.Term.Do(t.body.map(expression).getOrElse(m.Term.Placeholder()),
t.condition.map(expression).getOrElse(m.Term.Placeholder()))
case t: ScWhile =>
m.Term.While(t.condition.map(expression).getOrElse(throw new AbortException(Some(t), "Empty while condition")),
t.expression.map(expression).getOrElse(m.Term.Block(Seq.empty)))
case t: ScFor =>
m.Term.For(t.enumerators.map(enumerators).getOrElse(Seq.empty),
t.body.map(expression).getOrElse(m.Term.Block(Seq.empty)))
case t: ScMatch =>
m.Term.Match(expression(t.expression.get), Seq(t.clauses.map(caseClause):_*))
case t: ScReferenceExpression if t.qualifier.isDefined =>
m.Term.Select(expression(t.qualifier.get), toTermName(t))
case t: ScReferenceExpression =>
toTermName(t)
case t: ScSuperReference =>
m.Term.Super(t.drvTemplate.map(ind).getOrElse(m.Name.Anonymous()), getSuperName(t))
case t: ScThisReference =>
m.Term.This(t.reference.map(ind).getOrElse(m.Name.Anonymous()))
case t: ScNewTemplateDefinition =>
m.Term.New(newTemplate(t))
case t: ScFunctionExpr =>
m.Term.Function(Seq(t.parameters.map(convertParam):_*), expression(t.result).get)
case t: ScTuple =>
m.Term.Tuple(Seq(t.exprs.map(expression): _*))
case t: ScThrow =>
m.Term.Throw(expression(t.expression).getOrElse(throw new AbortException(t, "Empty throw expression")))
case t@ScTry(tryBlock, catchBlock, finallyBlock) =>
val fblk = finallyBlock.collect {
case ScFinallyBlock(expr) => expression(expr)
}
def tryTerm = expression(tryBlock).getOrElse(unreachable)
val res = catchBlock match {
case Some(ScCatchBlock(clauses)) if clauses.caseClauses.size == 1 =>
m.Term.TryWithTerm(tryTerm, clauses.caseClause.expr.map(expression).getOrElse(unreachable), fblk)
case Some(ScCatchBlock(clauses)) =>
m.Term.TryWithCases(tryTerm, Seq(clauses.caseClauses.map(caseClause):_*), fblk)
case None =>
m.Term.TryWithCases(tryTerm, Seq.empty, fblk)
case _ => unreachable
}
res
case t: ScGenericCall =>
m.Term.ApplyType(ideaToMeta(t.referencedExpr).asInstanceOf[m.Term], Seq(t.arguments.map(toType):_*))
case t: ScParenthesisedExpr =>
t.innerElement.map(expression).getOrElse(unreachable)
case t: ScAssignment =>
m.Term.Assign(expression(t.leftExpression).asInstanceOf[m.Term.Ref], expression(t.rightExpression.get))
case _: ScUnderscoreSection =>
m.Term.Placeholder()
case t: ScTypedExpression =>
m.Term.Ascribe(expression(t.expr), t.typeElement.map(toType).getOrElse(unreachable))
case t: ScConstrExpr =>
t ???
case t: ScXmlExpr =>
t ???
case other: ScalaPsiElement => other ?!
}
}
def callArgs(e: ScExpression): m.Term.Arg = {
e match {
case t: ScAssignment => m.Term.Arg.Named(toTermName(t.leftExpression), expression(t.rightExpression).get)
case _: ScUnderscoreSection => m.Term.Placeholder()
case t: ScTypedExpression if t.isSequenceArg => m.Term.Arg.Repeated(expression(t.expr))
case other => expression(e)
}
}
def enumerators(en: ScEnumerators): Seq[m.Enumerator] = {
def toEnumerator(nm: ScEnumerator): m.Enumerator = {
nm match {
case e: ScGenerator =>
val expr = e.expr.getOrElse(unreachable("generator has no expression"))
m.Enumerator.Generator(pattern(e.pattern), expression(expr))
case e: ScGuard =>
val expr = e.expr.getOrElse(unreachable("guard has no condition"))
m.Enumerator.Guard(expression(expr))
case e: ScForBinding =>
val expr = e.expr.getOrElse(unreachable("forBinding has no expression"))
m.Enumerator.Val(pattern(e.pattern), expression(expr))
case _ => unreachable
}
}
Seq(en.children.collect { case enum: ScEnumerator => enum }.map(toEnumerator).toSeq:_*)
}
def toParams(argss: Seq[ScArgumentExprList]): Seq[Seq[m.Term.Param]] = {
argss.toStream map { args =>
args.matchedParameters.toStream map { case (_, param) =>
m.Term.Param(param.psiParam.map(p => convertMods(p.getModifierList)).getOrElse(Seq.empty), toParamName(param), Some(toType(param.paramType)), None)
}
}
}
def expression(tree: Option[ScExpression]): Option[m.Term] = {
tree match {
case Some(_: ScUnderscoreSection) => None
case Some(expr) => Some(expression(expr))
case None => None
}
}
def getQualifier(q: ScStableCodeReference): m.Term.Ref = {
q.pathQualifier match {
case Some(_: ScSuperReference) =>
m.Term.Select(m.Term.Super(m.Name.Anonymous(), m.Name.Anonymous()), toTermName(q))
case Some(_: ScThisReference) =>
m.Term.Select(m.Term.This(m.Name.Anonymous()), toTermName(q))
case Some(parent: ScStableCodeReference) =>
m.Term.Select(getQualifier(parent), toTermName(q))
case None => toTermName(q)
case Some(other) => other ?!
}
}
def getQualifiedReference(q: ScStableCodeReference): m.Term.Ref = {
q.pathQualifier match {
case None => toTermName(q)
case Some(_) => m.Term.Select(getQualifier(q), toTermName(q))
}
}
// TODO: WHY?!
def getCtorRef(q: ScStableCodeReference): m.Ctor.Ref = {
q.pathQualifier match {
case Some(_: ScSuperReference) =>
m.Ctor.Ref.Select(m.Term.Super(m.Name.Anonymous(), m.Name.Anonymous()), toCtorName(q))
case Some(_: ScThisReference) =>
m.Ctor.Ref.Select(m.Term.This(m.Name.Anonymous()), toCtorName(q))
case Some(parent: ScStableCodeReference) =>
m.Ctor.Ref.Select(getQualifier(parent), toCtorName(q))
case None => toCtorName(q)
case Some(other) => other ?!
}
}
def imports(t: p.toplevel.imports.ScImportExpr):m.Importer = {
def selector(sel: p.toplevel.imports.ScImportSelector): m.Importee = {
val importedName = sel.importedName.getOrElse {
throw new AbortException("Imported name is null")
}
val reference = sel.reference.getOrElse {
throw new AbortException("Reference is null")
}
if (sel.isAliasedImport && importedName == "_")
m.Importee.Unimport(ind(reference))
else if (sel.isAliasedImport)
m.Importee.Rename(m.Name.Indeterminate(reference.qualName), m.Name.Indeterminate(importedName))
else
m.Importee.Name(m.Name.Indeterminate(importedName))
}
if (t.selectors.nonEmpty)
m.Importer(getQualifier(t.qualifier), Seq(t.selectors.map(selector): _*) ++ (if (t.isSingleWildcard) Seq(m.Importee.Wildcard()) else Seq.empty))
else if (t.isSingleWildcard)
m.Importer(getQualifier(t.qualifier), Seq(m.Importee.Wildcard()))
else
m.Importer(getQualifier(t.qualifier), Seq(m.Importee.Name(m.Name.Indeterminate(t.importedNames.head))))
}
def literal(literal: ScLiteral): m.Lit = {
import m.Lit._
literal.getValue match {
case value: Integer => Int(value)
case value: java.lang.Long => Long(value)
case value: java.lang.Float => Float(value)
case value: java.lang.Double => Double(value)
case value: java.lang.Boolean => Boolean(value)
case value: Character => Char(value)
case value: java.lang.Byte => Byte(value)
case value: java.lang.String => String(value)
case value: scala.Symbol => Symbol(value)
case null => Null(())
case _ => literal ?!
}
}
def toPatternDefinition(t: ScPatternDefinition): m.Tree = {
if (t.bindings.exists(_.isVal))
m.Defn.Val(convertMods(t), Seq(t.pList.patterns.map(pattern):_*), t.typeElement.map(toType), expression(t.expr).get)
else if (t.bindings.exists(_.isVar))
m.Defn.Var(convertMods(t), Seq(t.pList.patterns.map(pattern):_*), t.typeElement.map(toType), expression(t.expr))
else unreachable
}
def convertMods(t: p.toplevel.ScModifierListOwner): Seq[m.Mod] = {
def extractClassParameter(param: params.ScClassParameter): Seq[m.Mod] = {
if (param.isVar) Seq(m.Mod.VarParam())
else if (param.isVal) Seq(m.Mod.ValParam())
else Seq.empty
}
if (t.getModifierList == null) return Nil // workaround for 9ec0b8a44
val name = t.getModifierList.accessModifier match {
case Some(mod) => mod.idText match {
case Some(qual) => m.Name.Indeterminate(qual)
case None => m.Name.Anonymous()
}
case None => m.Name.Anonymous()
}
val classParam = t match {
case param: params.ScClassParameter => extractClassParameter(param)
case _ => Seq.empty
}
val common = for {
modifier <- t.getModifierList.accessModifier.toSeq
term = if (modifier.isThis) m.Term.This(name)
else name
newTerm = if (modifier.isPrivate) m.Mod.Private(term)
else m.Mod.Protected(term)
} yield newTerm
val caseMod = if (t.hasModifierPropertyScala("case")) Seq(m.Mod.Case()) else Nil
val finalMod = if (t.hasModifierPropertyScala("final")) Seq(m.Mod.Final()) else Nil
val implicitMod = if (t.hasModifierPropertyScala("implicit")) Seq(m.Mod.Implicit()) else Nil
val sealedMod = if (t.hasModifierPropertyScala("sealed")) Seq(m.Mod.Sealed()) else Nil
val annotations: Seq[m.Mod.Annot] = t match {
case ah: ScAnnotationsHolder => Seq(ah.annotations.filterNot(_ == annotationToSkip).map(toAnnot):_*)
case _ => Seq.empty
}
val overrideMod = if (t.hasModifierProperty("override")) Seq(m.Mod.Override()) else Nil
annotations ++ implicitMod ++ sealedMod ++ finalMod ++ caseMod ++ overrideMod ++ common ++ classParam
}
// Java conversion
def convertMods(t: PsiModifierList): Seq[m.Mod] = {
val mods = scala.collection.mutable.ListBuffer[m.Mod]()
if (t.hasModifierProperty("private")) mods += m.Mod.Private(m.Name.Indeterminate.apply("this"))
if (t.hasModifierProperty("protected")) mods += m.Mod.Protected(m.Name.Indeterminate.apply("this"))
if (t.hasModifierProperty("abstract")) mods += m.Mod.Abstract()
Seq(mods:_*)
}
def convertParamClause(paramss: params.ScParameterClause): Seq[m.Term.Param] = {
Seq(paramss.parameters.map(convertParam):_*)
}
protected def convertParam(param: params.ScParameter): m.Term.Param = {
val mods = convertMods(param) ++ (if (param.isImplicitParameter) Seq(m.Mod.Implicit()) else Seq.empty)
val default = param.getActualDefaultExpression.map(expression)
if (param.isVarArgs)
m.Term.Param(mods, toTermName(param), param.typeElement.map(tp => m.Type.Arg.Repeated(toType(tp))), default)
else
m.Term.Param(mods, toTermName(param), param.typeElement.map(toType), default)
}
}
local mod = DBM:NewMod(2030, "DBM-Party-BfA", 1, 968)
local L = mod:GetLocalizedStrings()
mod:SetRevision("20200803045206")
mod:SetCreatureID(122968)
mod:SetEncounterID(2087)
mod:RegisterCombat("combat")
mod:RegisterEventsInCombat(
"SPELL_AURA_APPLIED 250036",
"SPELL_CAST_START 249923 259187 250096 249919 250050",
"SPELL_PERIODIC_DAMAGE 250036",
"SPELL_PERIODIC_MISSED 250036",
"CHAT_MSG_RAID_BOSS_EMOTE"
)
--(ability.id = 249923 or ability.id = 250096 or ability.id = 250050 or ability.id = 249919) and type = "begincast"
--TODO: Verify CHAT_MSG_RAID_BOSS_EMOTE for Soulrend. I know I saw it, but I'm not sure I got the spellId right, since the chat log only grabs the parsed name
local warnSoulRend = mod:NewTargetAnnounce(249923, 4)
local specWarnSoulRend = mod:NewSpecialWarningRun(249923, nil, nil, nil, 4, 2)
local yellSoulRend = mod:NewYell(249923)
local specWarnWrackingPain = mod:NewSpecialWarningInterrupt(250096, "HasInterrupt", nil, nil, 1, 2)
local specWarnSkewer = mod:NewSpecialWarningDefensive(249919, "Tank", nil, nil, 1, 2)
local specWarnEchoes = mod:NewSpecialWarningDodge(250050, nil, nil, nil, 2, 2)
local specWarnGTFO = mod:NewSpecialWarningGTFO(250036, nil, nil, nil, 1, 8)
local timerSoulrendCD = mod:NewCDTimer(41.3, 249923, nil, nil, nil, 3, nil, DBM_CORE_L.DAMAGE_ICON)
local timerWrackingPainCD = mod:NewCDTimer(17, 250096, nil, "HasInterrupt", nil, 4, nil, DBM_CORE_L.INTERRUPT_ICON)--17-23
local timerSkewerCD = mod:NewCDTimer(12, 249919, nil, "Tank", nil, 5, nil, DBM_CORE_L.TANK_ICON)
local timerEchoesCD = mod:NewCDTimer(32.8, 250050, nil, nil, nil, 3)
function mod:OnCombatStart(delay)
timerWrackingPainCD:Start(3.5-delay)
timerSkewerCD:Start(5-delay)
timerSoulrendCD:Start(10-delay)
timerEchoesCD:Start(16.9-delay)
end
function mod:SPELL_AURA_APPLIED(args)
local spellId = args.spellId
if spellId == 250036 and args:IsPlayer() and self:AntiSpam(2, 1) then
specWarnGTFO:Show(args.spellName)
specWarnGTFO:Play("watchfeet")
end
end
function mod:SPELL_CAST_START(args)
local spellId = args.spellId
if spellId == 249923 or spellId == 259187 then
timerSoulrendCD:Start()
if not self:IsNormal() and not self:IsTank() then
specWarnSoulRend:Show()
specWarnSoulRend:Play("runout")
end
elseif spellId == 250096 then
timerWrackingPainCD:Start()
if self:CheckInterruptFilter(args.sourceGUID, false, true) then
specWarnWrackingPain:Show(args.sourceName)
specWarnWrackingPain:Play("kickcast")
end
elseif spellId == 249919 then
specWarnSkewer:Show()
specWarnSkewer:Play("defensive")
timerSkewerCD:Start()
elseif spellId == 250050 then
specWarnEchoes:Show()
specWarnEchoes:Play("watchstep")
timerEchoesCD:Start()
end
end
--Same time as SPELL_CAST_START but has target information on normal
function mod:CHAT_MSG_RAID_BOSS_EMOTE(msg, _, _, _, targetname)
if msg:find("spell:249924") then
if targetname then--Normal, only one person affected, name in emote
if targetname == UnitName("player") then
specWarnSoulRend:Show()
specWarnSoulRend:Play("runout")
yellSoulRend:Yell()
else
warnSoulRend:Show(targetname)
end
end
end
end
function mod:SPELL_PERIODIC_DAMAGE(_, _, _, _, destGUID, _, _, _, spellId, spellName)
if spellId == 250036 and destGUID == UnitGUID("player") and self:AntiSpam(2, 1) then
specWarnGTFO:Show(spellName)
specWarnGTFO:Play("watchfeet")
end
end
mod.SPELL_PERIODIC_MISSED = mod.SPELL_PERIODIC_DAMAGE
<?php
$this->css($this->assetModule('admin/admin.css'));
$this->angular([
'angular-route.js',
'angular-translate.js',
]);
/* This file was generated automatically by the Snowball to ANSI C compiler */
#include "../runtime/header.h"
#ifdef __cplusplus
extern "C" {
#endif
extern int german_UTF_8_stem(struct SN_env * z);
#ifdef __cplusplus
}
#endif
static int r_standard_suffix(struct SN_env * z);
static int r_R2(struct SN_env * z);
static int r_R1(struct SN_env * z);
static int r_mark_regions(struct SN_env * z);
static int r_postlude(struct SN_env * z);
static int r_prelude(struct SN_env * z);
#ifdef __cplusplus
extern "C" {
#endif
extern struct SN_env * german_UTF_8_create_env(void);
extern void german_UTF_8_close_env(struct SN_env * z);
#ifdef __cplusplus
}
#endif
static const symbol s_0_1[1] = { 'U' };
static const symbol s_0_2[1] = { 'Y' };
static const symbol s_0_3[2] = { 0xC3, 0xA4 };
static const symbol s_0_4[2] = { 0xC3, 0xB6 };
static const symbol s_0_5[2] = { 0xC3, 0xBC };
static const struct among a_0[6] =
{
/* 0 */ { 0, 0, -1, 6, 0},
/* 1 */ { 1, s_0_1, 0, 2, 0},
/* 2 */ { 1, s_0_2, 0, 1, 0},
/* 3 */ { 2, s_0_3, 0, 3, 0},
/* 4 */ { 2, s_0_4, 0, 4, 0},
/* 5 */ { 2, s_0_5, 0, 5, 0}
};
static const symbol s_1_0[1] = { 'e' };
static const symbol s_1_1[2] = { 'e', 'm' };
static const symbol s_1_2[2] = { 'e', 'n' };
static const symbol s_1_3[3] = { 'e', 'r', 'n' };
static const symbol s_1_4[2] = { 'e', 'r' };
static const symbol s_1_5[1] = { 's' };
static const symbol s_1_6[2] = { 'e', 's' };
static const struct among a_1[7] =
{
/* 0 */ { 1, s_1_0, -1, 2, 0},
/* 1 */ { 2, s_1_1, -1, 1, 0},
/* 2 */ { 2, s_1_2, -1, 2, 0},
/* 3 */ { 3, s_1_3, -1, 1, 0},
/* 4 */ { 2, s_1_4, -1, 1, 0},
/* 5 */ { 1, s_1_5, -1, 3, 0},
/* 6 */ { 2, s_1_6, 5, 2, 0}
};
static const symbol s_2_0[2] = { 'e', 'n' };
static const symbol s_2_1[2] = { 'e', 'r' };
static const symbol s_2_2[2] = { 's', 't' };
static const symbol s_2_3[3] = { 'e', 's', 't' };
static const struct among a_2[4] =
{
/* 0 */ { 2, s_2_0, -1, 1, 0},
/* 1 */ { 2, s_2_1, -1, 1, 0},
/* 2 */ { 2, s_2_2, -1, 2, 0},
/* 3 */ { 3, s_2_3, 2, 1, 0}
};
static const symbol s_3_0[2] = { 'i', 'g' };
static const symbol s_3_1[4] = { 'l', 'i', 'c', 'h' };
static const struct among a_3[2] =
{
/* 0 */ { 2, s_3_0, -1, 1, 0},
/* 1 */ { 4, s_3_1, -1, 1, 0}
};
static const symbol s_4_0[3] = { 'e', 'n', 'd' };
static const symbol s_4_1[2] = { 'i', 'g' };
static const symbol s_4_2[3] = { 'u', 'n', 'g' };
static const symbol s_4_3[4] = { 'l', 'i', 'c', 'h' };
static const symbol s_4_4[4] = { 'i', 's', 'c', 'h' };
static const symbol s_4_5[2] = { 'i', 'k' };
static const symbol s_4_6[4] = { 'h', 'e', 'i', 't' };
static const symbol s_4_7[4] = { 'k', 'e', 'i', 't' };
static const struct among a_4[8] =
{
/* 0 */ { 3, s_4_0, -1, 1, 0},
/* 1 */ { 2, s_4_1, -1, 2, 0},
/* 2 */ { 3, s_4_2, -1, 1, 0},
/* 3 */ { 4, s_4_3, -1, 3, 0},
/* 4 */ { 4, s_4_4, -1, 2, 0},
/* 5 */ { 2, s_4_5, -1, 2, 0},
/* 6 */ { 4, s_4_6, -1, 3, 0},
/* 7 */ { 4, s_4_7, -1, 4, 0}
};
static const unsigned char g_v[] = { 17, 65, 16, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 32, 8 };
static const unsigned char g_s_ending[] = { 117, 30, 5 };
static const unsigned char g_st_ending[] = { 117, 30, 4 };
static const symbol s_0[] = { 0xC3, 0x9F };
static const symbol s_1[] = { 's', 's' };
static const symbol s_2[] = { 'u' };
static const symbol s_3[] = { 'U' };
static const symbol s_4[] = { 'y' };
static const symbol s_5[] = { 'Y' };
static const symbol s_6[] = { 'y' };
static const symbol s_7[] = { 'u' };
static const symbol s_8[] = { 'a' };
static const symbol s_9[] = { 'o' };
static const symbol s_10[] = { 'u' };
static const symbol s_11[] = { 's' };
static const symbol s_12[] = { 'n', 'i', 's' };
static const symbol s_13[] = { 'i', 'g' };
static const symbol s_14[] = { 'e' };
static const symbol s_15[] = { 'e' };
static const symbol s_16[] = { 'e', 'r' };
static const symbol s_17[] = { 'e', 'n' };
static int r_prelude(struct SN_env * z) {
{ int c_test = z->c; /* test, line 35 */
while(1) { /* repeat, line 35 */
int c1 = z->c;
{ int c2 = z->c; /* or, line 38 */
z->bra = z->c; /* [, line 37 */
if (!(eq_s(z, 2, s_0))) goto lab2;
z->ket = z->c; /* ], line 37 */
{ int ret = slice_from_s(z, 2, s_1); /* <-, line 37 */
if (ret < 0) return ret;
}
goto lab1;
lab2:
z->c = c2;
{ int ret = skip_utf8(z->p, z->c, 0, z->l, 1);
if (ret < 0) goto lab0;
z->c = ret; /* next, line 38 */
}
}
lab1:
continue;
lab0:
z->c = c1;
break;
}
z->c = c_test;
}
while(1) { /* repeat, line 41 */
int c3 = z->c;
while(1) { /* goto, line 41 */
int c4 = z->c;
if (in_grouping_U(z, g_v, 97, 252, 0)) goto lab4;
z->bra = z->c; /* [, line 42 */
{ int c5 = z->c; /* or, line 42 */
if (!(eq_s(z, 1, s_2))) goto lab6;
z->ket = z->c; /* ], line 42 */
if (in_grouping_U(z, g_v, 97, 252, 0)) goto lab6;
{ int ret = slice_from_s(z, 1, s_3); /* <-, line 42 */
if (ret < 0) return ret;
}
goto lab5;
lab6:
z->c = c5;
if (!(eq_s(z, 1, s_4))) goto lab4;
z->ket = z->c; /* ], line 43 */
if (in_grouping_U(z, g_v, 97, 252, 0)) goto lab4;
{ int ret = slice_from_s(z, 1, s_5); /* <-, line 43 */
if (ret < 0) return ret;
}
}
lab5:
z->c = c4;
break;
lab4:
z->c = c4;
{ int ret = skip_utf8(z->p, z->c, 0, z->l, 1);
if (ret < 0) goto lab3;
z->c = ret; /* goto, line 41 */
}
}
continue;
lab3:
z->c = c3;
break;
}
return 1;
}
static int r_mark_regions(struct SN_env * z) {
z->I[0] = z->l;
z->I[1] = z->l;
{ int c_test = z->c; /* test, line 52 */
{ int ret = skip_utf8(z->p, z->c, 0, z->l, + 3);
if (ret < 0) return 0;
z->c = ret; /* hop, line 52 */
}
z->I[2] = z->c; /* setmark x, line 52 */
z->c = c_test;
}
{ /* gopast */ /* grouping v, line 54 */
int ret = out_grouping_U(z, g_v, 97, 252, 1);
if (ret < 0) return 0;
z->c += ret;
}
{ /* gopast */ /* non v, line 54 */
int ret = in_grouping_U(z, g_v, 97, 252, 1);
if (ret < 0) return 0;
z->c += ret;
}
z->I[0] = z->c; /* setmark p1, line 54 */
/* try, line 55 */
if (!(z->I[0] < z->I[2])) goto lab0;
z->I[0] = z->I[2];
lab0:
{ /* gopast */ /* grouping v, line 56 */
int ret = out_grouping_U(z, g_v, 97, 252, 1);
if (ret < 0) return 0;
z->c += ret;
}
{ /* gopast */ /* non v, line 56 */
int ret = in_grouping_U(z, g_v, 97, 252, 1);
if (ret < 0) return 0;
z->c += ret;
}
z->I[1] = z->c; /* setmark p2, line 56 */
return 1;
}
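/* The r_mark_regions routine above computes the classic Snowball R1/R2
   regions: go past the first vowel, then past the first following
   non-vowel, and mark that spot. Below is a minimal standalone sketch of
   that scan (assumptions: ASCII vowels only, so the umlauts handled by
   g_v are ignored, and without the German-specific "hop 3" rule that
   shifts R1 to at least three characters in). */

```c
#include <assert.h>
#include <string.h>

static int is_vowel(char c) { return strchr("aeiouy", c) != NULL; }

/* Return the region start after position `from`: advance past the first
   vowel, then past the first non-vowel that follows it (the two gopast
   steps in r_mark_regions). Calling it once gives R1, calling it again
   from R1 gives R2. Returns strlen(w) if no such position exists. */
static int region_start(const char *w, int from) {
    int i = from, n = (int)strlen(w);
    while (i < n && !is_vowel(w[i])) i++; /* gopast grouping v */
    if (i < n) i++;                       /* consume the vowel */
    while (i < n && is_vowel(w[i])) i++;  /* gopast non v */
    if (i < n) i++;                       /* consume the non-vowel */
    return i;                             /* setmark p1 / p2 */
}
```

/* For "beautiful" this yields R1 = "iful" and R2 = "ul", matching the
   standard Snowball region definitions. */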
static int r_postlude(struct SN_env * z) {
int among_var;
while(1) { /* repeat, line 60 */
int c1 = z->c;
z->bra = z->c; /* [, line 62 */
among_var = find_among(z, a_0, 6); /* substring, line 62 */
if (!(among_var)) goto lab0;
z->ket = z->c; /* ], line 62 */
switch(among_var) {
case 0: goto lab0;
case 1:
{ int ret = slice_from_s(z, 1, s_6); /* <-, line 63 */
if (ret < 0) return ret;
}
break;
case 2:
{ int ret = slice_from_s(z, 1, s_7); /* <-, line 64 */
if (ret < 0) return ret;
}
break;
case 3:
{ int ret = slice_from_s(z, 1, s_8); /* <-, line 65 */
if (ret < 0) return ret;
}
break;
case 4:
{ int ret = slice_from_s(z, 1, s_9); /* <-, line 66 */
if (ret < 0) return ret;
}
break;
case 5:
{ int ret = slice_from_s(z, 1, s_10); /* <-, line 67 */
if (ret < 0) return ret;
}
break;
case 6:
{ int ret = skip_utf8(z->p, z->c, 0, z->l, 1);
if (ret < 0) goto lab0;
z->c = ret; /* next, line 68 */
}
break;
}
continue;
lab0:
z->c = c1;
break;
}
return 1;
}
static int r_R1(struct SN_env * z) {
if (!(z->I[0] <= z->c)) return 0;
return 1;
}
static int r_R2(struct SN_env * z) {
if (!(z->I[1] <= z->c)) return 0;
return 1;
}
static int r_standard_suffix(struct SN_env * z) {
int among_var;
{ int m1 = z->l - z->c; (void)m1; /* do, line 79 */
z->ket = z->c; /* [, line 80 */
if (z->c <= z->lb || z->p[z->c - 1] >> 5 != 3 || !((811040 >> (z->p[z->c - 1] & 0x1f)) & 1)) goto lab0;
among_var = find_among_b(z, a_1, 7); /* substring, line 80 */
if (!(among_var)) goto lab0;
z->bra = z->c; /* ], line 80 */
{ int ret = r_R1(z);
if (ret == 0) goto lab0; /* call R1, line 80 */
if (ret < 0) return ret;
}
switch(among_var) {
case 0: goto lab0;
case 1:
{ int ret = slice_del(z); /* delete, line 82 */
if (ret < 0) return ret;
}
break;
case 2:
{ int ret = slice_del(z); /* delete, line 85 */
if (ret < 0) return ret;
}
{ int m_keep = z->l - z->c;/* (void) m_keep;*/ /* try, line 86 */
z->ket = z->c; /* [, line 86 */
if (!(eq_s_b(z, 1, s_11))) { z->c = z->l - m_keep; goto lab1; }
z->bra = z->c; /* ], line 86 */
if (!(eq_s_b(z, 3, s_12))) { z->c = z->l - m_keep; goto lab1; }
{ int ret = slice_del(z); /* delete, line 86 */
if (ret < 0) return ret;
}
lab1:
;
}
break;
case 3:
if (in_grouping_b_U(z, g_s_ending, 98, 116, 0)) goto lab0;
{ int ret = slice_del(z); /* delete, line 89 */
if (ret < 0) return ret;
}
break;
}
lab0:
z->c = z->l - m1;
}
{ int m2 = z->l - z->c; (void)m2; /* do, line 93 */
z->ket = z->c; /* [, line 94 */
if (z->c - 1 <= z->lb || z->p[z->c - 1] >> 5 != 3 || !((1327104 >> (z->p[z->c - 1] & 0x1f)) & 1)) goto lab2;
among_var = find_among_b(z, a_2, 4); /* substring, line 94 */
if (!(among_var)) goto lab2;
z->bra = z->c; /* ], line 94 */
{ int ret = r_R1(z);
if (ret == 0) goto lab2; /* call R1, line 94 */
if (ret < 0) return ret;
}
switch(among_var) {
case 0: goto lab2;
case 1:
{ int ret = slice_del(z); /* delete, line 96 */
if (ret < 0) return ret;
}
break;
case 2:
if (in_grouping_b_U(z, g_st_ending, 98, 116, 0)) goto lab2;
{ int ret = skip_utf8(z->p, z->c, z->lb, z->l, - 3);
if (ret < 0) goto lab2;
z->c = ret; /* hop, line 99 */
}
{ int ret = slice_del(z); /* delete, line 99 */
if (ret < 0) return ret;
}
break;
}
lab2:
z->c = z->l - m2;
}
{ int m3 = z->l - z->c; (void)m3; /* do, line 103 */
z->ket = z->c; /* [, line 104 */
if (z->c - 1 <= z->lb || z->p[z->c - 1] >> 5 != 3 || !((1051024 >> (z->p[z->c - 1] & 0x1f)) & 1)) goto lab3;
among_var = find_among_b(z, a_4, 8); /* substring, line 104 */
if (!(among_var)) goto lab3;
z->bra = z->c; /* ], line 104 */
{ int ret = r_R2(z);
if (ret == 0) goto lab3; /* call R2, line 104 */
if (ret < 0) return ret;
}
switch(among_var) {
case 0: goto lab3;
case 1:
{ int ret = slice_del(z); /* delete, line 106 */
if (ret < 0) return ret;
}
{ int m_keep = z->l - z->c;/* (void) m_keep;*/ /* try, line 107 */
z->ket = z->c; /* [, line 107 */
if (!(eq_s_b(z, 2, s_13))) { z->c = z->l - m_keep; goto lab4; }
z->bra = z->c; /* ], line 107 */
{ int m4 = z->l - z->c; (void)m4; /* not, line 107 */
if (!(eq_s_b(z, 1, s_14))) goto lab5;
{ z->c = z->l - m_keep; goto lab4; }
lab5:
z->c = z->l - m4;
}
{ int ret = r_R2(z);
if (ret == 0) { z->c = z->l - m_keep; goto lab4; } /* call R2, line 107 */
if (ret < 0) return ret;
}
{ int ret = slice_del(z); /* delete, line 107 */
if (ret < 0) return ret;
}
lab4:
;
}
break;
case 2:
{ int m5 = z->l - z->c; (void)m5; /* not, line 110 */
if (!(eq_s_b(z, 1, s_15))) goto lab6;
goto lab3;
lab6:
z->c = z->l - m5;
}
{ int ret = slice_del(z); /* delete, line 110 */
if (ret < 0) return ret;
}
break;
case 3:
{ int ret = slice_del(z); /* delete, line 113 */
if (ret < 0) return ret;
}
{ int m_keep = z->l - z->c;/* (void) m_keep;*/ /* try, line 114 */
z->ket = z->c; /* [, line 115 */
{ int m6 = z->l - z->c; (void)m6; /* or, line 115 */
if (!(eq_s_b(z, 2, s_16))) goto lab9;
goto lab8;
lab9:
z->c = z->l - m6;
if (!(eq_s_b(z, 2, s_17))) { z->c = z->l - m_keep; goto lab7; }
}
lab8:
z->bra = z->c; /* ], line 115 */
{ int ret = r_R1(z);
if (ret == 0) { z->c = z->l - m_keep; goto lab7; } /* call R1, line 115 */
if (ret < 0) return ret;
}
{ int ret = slice_del(z); /* delete, line 115 */
if (ret < 0) return ret;
}
lab7:
;
}
break;
case 4:
{ int ret = slice_del(z); /* delete, line 119 */
if (ret < 0) return ret;
}
{ int m_keep = z->l - z->c;/* (void) m_keep;*/ /* try, line 120 */
z->ket = z->c; /* [, line 121 */
if (z->c - 1 <= z->lb || (z->p[z->c - 1] != 103 && z->p[z->c - 1] != 104)) { z->c = z->l - m_keep; goto lab10; }
among_var = find_among_b(z, a_3, 2); /* substring, line 121 */
if (!(among_var)) { z->c = z->l - m_keep; goto lab10; }
z->bra = z->c; /* ], line 121 */
{ int ret = r_R2(z);
if (ret == 0) { z->c = z->l - m_keep; goto lab10; } /* call R2, line 121 */
if (ret < 0) return ret;
}
switch(among_var) {
case 0: { z->c = z->l - m_keep; goto lab10; }
case 1:
{ int ret = slice_del(z); /* delete, line 123 */
if (ret < 0) return ret;
}
break;
}
lab10:
;
}
break;
}
lab3:
z->c = z->l - m3;
}
return 1;
}
extern int german_UTF_8_stem(struct SN_env * z) {
{ int c1 = z->c; /* do, line 134 */
{ int ret = r_prelude(z);
if (ret == 0) goto lab0; /* call prelude, line 134 */
if (ret < 0) return ret;
}
lab0:
z->c = c1;
}
{ int c2 = z->c; /* do, line 135 */
{ int ret = r_mark_regions(z);
if (ret == 0) goto lab1; /* call mark_regions, line 135 */
if (ret < 0) return ret;
}
lab1:
z->c = c2;
}
z->lb = z->c; z->c = z->l; /* backwards, line 136 */
{ int m3 = z->l - z->c; (void)m3; /* do, line 137 */
{ int ret = r_standard_suffix(z);
if (ret == 0) goto lab2; /* call standard_suffix, line 137 */
if (ret < 0) return ret;
}
lab2:
z->c = z->l - m3;
}
z->c = z->lb;
{ int c4 = z->c; /* do, line 138 */
{ int ret = r_postlude(z);
if (ret == 0) goto lab3; /* call postlude, line 138 */
if (ret < 0) return ret;
}
lab3:
z->c = c4;
}
return 1;
}
extern struct SN_env * german_UTF_8_create_env(void) { return SN_create_env(0, 3, 0); }
extern void german_UTF_8_close_env(struct SN_env * z) { SN_close_env(z, 0); }
# makefile for libpng
# Copyright (C) 2002, 2006 Glenn Randers-Pehrson
# Copyright (C) 1995 Guy Eric Schalnat, Group 42, Inc.
#
# This code is released under the libpng license.
# For conditions of distribution and use, see the disclaimer
# and license in png.h
# where make install puts libpng.a and png.h
prefix=/usr/local
INCPATH=$(prefix)/include
LIBPATH=$(prefix)/lib
# override DESTDIR= on the make install command line to easily support
# installing into a temporary location. Example:
#
# make install DESTDIR=/tmp/build/libpng
#
# If you're going to install into a temporary location
# via DESTDIR, $(DESTDIR)$(prefix) must already exist before
# you execute make install.
DESTDIR=
# Where the zlib library and include files are located
#ZLIBLIB=/usr/local/lib
#ZLIBINC=/usr/local/include
ZLIBLIB=../zlib
ZLIBINC=../zlib
WARNMORE=-Wwrite-strings -Wpointer-arith -Wshadow -Wconversion \
-Wmissing-declarations -Wtraditional -Wcast-align \
-Wstrict-prototypes -Wmissing-prototypes
CC=gcc
AR_RC=ar rc
MKDIR_P=mkdir -p
LN_SF=ln -f -s
RANLIB=ranlib
RM_F=/bin/rm -f
CFLAGS=-I$(ZLIBINC) -O # $(WARNMORE) -DPNG_DEBUG=5
LDFLAGS=-L. -L$(ZLIBLIB) -lpng -lz -lm
OBJS = png.o pngset.o pngget.o pngrutil.o pngtrans.o pngwutil.o \
pngread.o pngrio.o pngwio.o pngwrite.o pngrtran.o \
pngwtran.o pngmem.o pngerror.o pngpread.o
all: libpng.a pngtest
# see scripts/pnglibconf.mak for more options
pnglibconf.h: scripts/pnglibconf.h.prebuilt
	cp scripts/pnglibconf.h.prebuilt $@

libpng.a: $(OBJS)
	$(AR_RC) $@ $(OBJS)
	$(RANLIB) $@

pngtest: pngtest.o libpng.a
	$(CC) -o pngtest $(CFLAGS) pngtest.o $(LDFLAGS)

test: pngtest
	./pngtest

install: libpng.a
	-@$(MKDIR_P) $(DESTDIR)$(INCPATH)
	-@$(MKDIR_P) $(DESTDIR)$(INCPATH)/libpng
	-@$(MKDIR_P) $(DESTDIR)$(LIBPATH)
	-@$(RM_F) $(DESTDIR)$(INCPATH)/png.h
	-@$(RM_F) $(DESTDIR)$(INCPATH)/pngconf.h
	-@$(RM_F) $(DESTDIR)$(INCPATH)/pnglibconf.h
	cp png.h $(DESTDIR)$(INCPATH)/libpng
	cp pngconf.h $(DESTDIR)$(INCPATH)/libpng
	cp pnglibconf.h $(DESTDIR)$(INCPATH)/libpng
	chmod 644 $(DESTDIR)$(INCPATH)/libpng/png.h
	chmod 644 $(DESTDIR)$(INCPATH)/libpng/pngconf.h
	chmod 644 $(DESTDIR)$(INCPATH)/libpng/pnglibconf.h
	(cd $(DESTDIR)$(INCPATH); $(LN_SF) libpng/* .)
	cp libpng.a $(DESTDIR)$(LIBPATH)
	chmod 644 $(DESTDIR)$(LIBPATH)/libpng.a

clean:
	$(RM_F) *.o libpng.a pngtest pngout.png pnglibconf.h

DOCS = ANNOUNCE CHANGES INSTALL KNOWNBUG LICENSE README TODO Y2KINFO
writelock:
	chmod a-w *.[ch35] $(DOCS) scripts/*
# DO NOT DELETE THIS LINE -- make depend depends on it.
png.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngerror.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngrio.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngwio.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngmem.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngset.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngget.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngread.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngrtran.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngrutil.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngtrans.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngwrite.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngwtran.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngwutil.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngpread.o: png.h pngconf.h pnglibconf.h pngpriv.h pngstruct.h pnginfo.h pngdebug.h
pngtest.o: png.h pngconf.h pnglibconf.h
package main
import (
	"flag"
	"fmt"
	"os/exec"
)

func BuildUnfocusCommand() *Command {
	return &Command{
		Name:         "unfocus",
		AltName:      "blur",
		FlagSet:      flag.NewFlagSet("unfocus", flag.ExitOnError),
		UsageCommand: "ginkgo unfocus (or ginkgo blur)",
		Usage: []string{
			"Recursively unfocuses any focused tests under the current directory",
		},
		Command: unfocusSpecs,
	}
}

func unfocusSpecs([]string, []string) {
	unfocus("Describe")
	unfocus("Context")
	unfocus("It")
	unfocus("Measure")
}

func unfocus(component string) {
	fmt.Printf("Removing F%s...\n", component)
	cmd := exec.Command("gofmt", fmt.Sprintf("-r=F%s -> %s", component, component), "-w", ".")
	out, _ := cmd.CombinedOutput()
	if string(out) != "" {
		println(string(out))
	}
}
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.
import {GraphicalStaffEntry} from "./GraphicalStaffEntry";
import {GraphicalObject} from "./GraphicalObject";

export abstract class AbstractGraphicalInstruction extends GraphicalObject {
    protected parent: GraphicalStaffEntry;

    constructor(parent: GraphicalStaffEntry) {
        super();
        this.parent = parent;
    }

    public get Parent(): GraphicalStaffEntry {
        return this.parent;
    }

    public set Parent(value: GraphicalStaffEntry) {
        this.parent = value;
    }
}
/**
* @license AngularJS v1.6.2
* (c) 2010-2017 Google, Inc. http://angularjs.org
* License: MIT
*/
(function(window, angular) {'use strict';
/* global ngTouchClickDirectiveFactory: false */
/**
* @ngdoc module
* @name ngTouch
* @description
*
* # ngTouch
*
* The `ngTouch` module provides touch events and other helpers for touch-enabled devices.
* The implementation is based on jQuery Mobile touch event handling
* ([jquerymobile.com](http://jquerymobile.com/)).
*
*
* See {@link ngTouch.$swipe `$swipe`} for usage.
*
* <div doc-module-components="ngTouch"></div>
*
*/
// define ngTouch module
/* global -ngTouch */
var ngTouch = angular.module('ngTouch', []);
ngTouch.provider('$touch', $TouchProvider);
function nodeName_(element) {
return angular.lowercase(element.nodeName || (element[0] && element[0].nodeName));
}
/**
* @ngdoc provider
* @name $touchProvider
*
* @description
* The `$touchProvider` allows enabling / disabling {@link ngTouch.ngClick ngTouch's ngClick directive}.
*/
$TouchProvider.$inject = ['$provide', '$compileProvider'];
function $TouchProvider($provide, $compileProvider) {
/**
* @ngdoc method
* @name $touchProvider#ngClickOverrideEnabled
*
* @param {boolean=} enabled update the ngClickOverrideEnabled state if provided, otherwise just return the
* current ngClickOverrideEnabled state
* @returns {*} current value if used as getter or itself (chaining) if used as setter
*
* @kind function
*
* @description
* Call this method to enable/disable {@link ngTouch.ngClick ngTouch's ngClick directive}. If enabled,
* the default ngClick directive will be replaced by a version that eliminates the 300ms delay for
* click events on browser for touch-devices.
*
* The default is `false`.
*
*/
var ngClickOverrideEnabled = false;
var ngClickDirectiveAdded = false;
// eslint-disable-next-line no-invalid-this
this.ngClickOverrideEnabled = function(enabled) {
if (angular.isDefined(enabled)) {
if (enabled && !ngClickDirectiveAdded) {
ngClickDirectiveAdded = true;
// Use this to identify the correct directive in the delegate
ngTouchClickDirectiveFactory.$$moduleName = 'ngTouch';
$compileProvider.directive('ngClick', ngTouchClickDirectiveFactory);
$provide.decorator('ngClickDirective', ['$delegate', function($delegate) {
if (ngClickOverrideEnabled) {
// drop the default ngClick directive
$delegate.shift();
} else {
// drop the ngTouch ngClick directive if the override has been re-disabled (because
// we cannot de-register added directives)
var i = $delegate.length - 1;
while (i >= 0) {
if ($delegate[i].$$moduleName === 'ngTouch') {
$delegate.splice(i, 1);
break;
}
i--;
}
}
return $delegate;
}]);
}
ngClickOverrideEnabled = enabled;
return this;
}
return ngClickOverrideEnabled;
};
/**
* @ngdoc service
* @name $touch
* @kind object
*
* @description
* Provides the {@link ngTouch.$touch#ngClickOverrideEnabled `ngClickOverrideEnabled`} method.
*
*/
// eslint-disable-next-line no-invalid-this
this.$get = function() {
return {
/**
* @ngdoc method
* @name $touch#ngClickOverrideEnabled
*
* @returns {*} current value of `ngClickOverrideEnabled` set in the {@link ngTouch.$touchProvider $touchProvider},
* i.e. if {@link ngTouch.ngClick ngTouch's ngClick} directive is enabled.
*
* @kind function
*/
ngClickOverrideEnabled: function() {
return ngClickOverrideEnabled;
}
};
};
}
/* global ngTouch: false */
/**
* @ngdoc service
* @name $swipe
*
* @description
* The `$swipe` service is a service that abstracts the messier details of hold-and-drag swipe
* behavior, to make implementing swipe-related directives more convenient.
*
* Requires the {@link ngTouch `ngTouch`} module to be installed.
*
* `$swipe` is used by the `ngSwipeLeft` and `ngSwipeRight` directives in `ngTouch`.
*
* # Usage
* The `$swipe` service is an object with a single method: `bind`. `bind` takes an element
* which is to be watched for swipes, and an object with four handler functions. See the
* documentation for `bind` below.
*/
ngTouch.factory('$swipe', [function() {
// The total distance in any direction before we make the call on swipe vs. scroll.
var MOVE_BUFFER_RADIUS = 10;
var POINTER_EVENTS = {
'mouse': {
start: 'mousedown',
move: 'mousemove',
end: 'mouseup'
},
'touch': {
start: 'touchstart',
move: 'touchmove',
end: 'touchend',
cancel: 'touchcancel'
},
'pointer': {
start: 'pointerdown',
move: 'pointermove',
end: 'pointerup',
cancel: 'pointercancel'
}
};
function getCoordinates(event) {
var originalEvent = event.originalEvent || event;
var touches = originalEvent.touches && originalEvent.touches.length ? originalEvent.touches : [originalEvent];
var e = (originalEvent.changedTouches && originalEvent.changedTouches[0]) || touches[0];
return {
x: e.clientX,
y: e.clientY
};
}
function getEvents(pointerTypes, eventType) {
var res = [];
angular.forEach(pointerTypes, function(pointerType) {
var eventName = POINTER_EVENTS[pointerType][eventType];
if (eventName) {
res.push(eventName);
}
});
return res.join(' ');
}
return {
/**
* @ngdoc method
* @name $swipe#bind
*
* @description
* The main method of `$swipe`. It takes an element to be watched for swipe motions, and an
* object containing event handlers.
* The pointer types that should be used can be specified via the optional
* third argument, which is an array of strings `'mouse'`, `'touch'` and `'pointer'`. By default,
* `$swipe` will listen for `mouse`, `touch` and `pointer` events.
*
* The four events are `start`, `move`, `end`, and `cancel`. `start`, `move`, and `end`
* receive as a parameter a coordinates object of the form `{ x: 150, y: 310 }` and the raw
* `event`. `cancel` receives the raw `event` as its single parameter.
*
* `start` is called on either `mousedown`, `touchstart` or `pointerdown`. After this event, `$swipe` is
* watching for `touchmove`, `mousemove` or `pointermove` events. These events are ignored until the total
* distance moved in either dimension exceeds a small threshold.
*
 * Once this threshold is exceeded, we compare the horizontal and vertical deltas:
* - If the horizontal distance is greater, this is a swipe and `move` and `end` events follow.
* - If the vertical distance is greater, this is a scroll, and we let the browser take over.
* A `cancel` event is sent.
*
* `move` is called on `mousemove`, `touchmove` and `pointermove` after the above logic has determined that
* a swipe is in progress.
*
* `end` is called when a swipe is successfully completed with a `touchend`, `mouseup` or `pointerup`.
*
* `cancel` is called either on a `touchcancel` or `pointercancel` from the browser, or when we begin scrolling
* as described above.
*
*/
bind: function(element, eventHandlers, pointerTypes) {
// Absolute total movement, used to control swipe vs. scroll.
var totalX, totalY;
// Coordinates of the start position.
var startCoords;
// Last event's position.
var lastPos;
// Whether a swipe is active.
var active = false;
pointerTypes = pointerTypes || ['mouse', 'touch', 'pointer'];
element.on(getEvents(pointerTypes, 'start'), function(event) {
startCoords = getCoordinates(event);
active = true;
totalX = 0;
totalY = 0;
lastPos = startCoords;
if (eventHandlers['start']) {
eventHandlers['start'](startCoords, event);
}
});
var events = getEvents(pointerTypes, 'cancel');
if (events) {
element.on(events, function(event) {
active = false;
if (eventHandlers['cancel']) {
eventHandlers['cancel'](event);
}
});
}
element.on(getEvents(pointerTypes, 'move'), function(event) {
if (!active) return;
// Android will send a touchcancel if it thinks we're starting to scroll.
// So when the total distance (+ or - or both) exceeds 10px in either direction,
// we either:
// - On totalX > totalY, we send preventDefault() and treat this as a swipe.
// - On totalY > totalX, we let the browser handle it as a scroll.
if (!startCoords) return;
var coords = getCoordinates(event);
totalX += Math.abs(coords.x - lastPos.x);
totalY += Math.abs(coords.y - lastPos.y);
lastPos = coords;
if (totalX < MOVE_BUFFER_RADIUS && totalY < MOVE_BUFFER_RADIUS) {
return;
}
// One of totalX or totalY has exceeded the buffer, so decide on swipe vs. scroll.
if (totalY > totalX) {
// Allow native scrolling to take over.
active = false;
if (eventHandlers['cancel']) {
eventHandlers['cancel'](event);
}
return;
} else {
// Prevent the browser from scrolling.
event.preventDefault();
if (eventHandlers['move']) {
eventHandlers['move'](coords, event);
}
}
});
element.on(getEvents(pointerTypes, 'end'), function(event) {
if (!active) return;
active = false;
if (eventHandlers['end']) {
eventHandlers['end'](getCoordinates(event), event);
}
});
}
};
}]);
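The swipe-versus-scroll decision documented for `$swipe#bind` above can be sketched as a standalone function. This is a toy illustration only, not part of ngTouch; the function name and return values are invented:

```javascript
// Mirrors the decision in $swipe's move handler: below the buffer radius in
// both axes we cannot tell yet; more vertical than horizontal accumulated
// motion means the browser should scroll; otherwise it is a swipe.
var MOVE_BUFFER_RADIUS = 10;

function classifyMove(totalX, totalY) {
  if (totalX < MOVE_BUFFER_RADIUS && totalY < MOVE_BUFFER_RADIUS) {
    return 'undecided';
  }
  return totalY > totalX ? 'scroll' : 'swipe';
}

classifyMove(3, 4);  // → 'undecided' (still inside the buffer)
classifyMove(5, 20); // → 'scroll' (vertical delta wins)
classifyMove(25, 6); // → 'swipe' (horizontal delta wins)
```

In the real service the totals are sums of absolute per-event deltas, so a wandering finger cannot reset the decision by returning to its start point.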
/* global ngTouch: false,
nodeName_: false
*/
/**
* @ngdoc directive
* @name ngClick
* @deprecated
* sinceVersion="v1.5.0"
* This directive is deprecated and **disabled** by default.
* The directive will receive no further support and might be removed from future releases.
* If you need the directive, you can enable it with the {@link ngTouch.$touchProvider $touchProvider#ngClickOverrideEnabled}
* function. We also recommend that you migrate to [FastClick](https://github.com/ftlabs/fastclick).
* To learn more about the 300ms delay, this [Telerik article](http://developer.telerik.com/featured/300-ms-click-delay-ios-8/)
* gives a good overview.
*
* @description
* A more powerful replacement for the default ngClick designed to be used on touchscreen
* devices. Most mobile browsers wait about 300ms after a tap-and-release before sending
* the click event. This version handles them immediately, and then prevents the
* following click event from propagating.
*
* Requires the {@link ngTouch `ngTouch`} module to be installed.
*
* This directive can fall back to using an ordinary click event, and so works on desktop
* browsers as well as mobile.
*
* This directive also sets the CSS class `ng-click-active` while the element is being held
* down (by a mouse click or touch) so you can restyle the depressed element if you wish.
*
* @element ANY
* @param {expression} ngClick {@link guide/expression Expression} to evaluate
* upon tap. (Event object is available as `$event`)
*
* @example
<example module="ngClickExample" deps="angular-touch.js" name="ng-touch-ng-click">
<file name="index.html">
<button ng-click="count = count + 1" ng-init="count=0">
Increment
</button>
count: {{ count }}
</file>
<file name="script.js">
angular.module('ngClickExample', ['ngTouch']);
</file>
</example>
*/
var ngTouchClickDirectiveFactory = ['$parse', '$timeout', '$rootElement',
function($parse, $timeout, $rootElement) {
var TAP_DURATION = 750; // Shorter than 750ms is a tap, longer is a taphold or drag.
var MOVE_TOLERANCE = 12; // 12px seems to work in most mobile browsers.
var PREVENT_DURATION = 2500; // 2.5 seconds maximum from preventGhostClick call to click
var CLICKBUSTER_THRESHOLD = 25; // 25 pixels in any dimension is the limit for busting clicks.
var ACTIVE_CLASS_NAME = 'ng-click-active';
var lastPreventedTime;
var touchCoordinates;
var lastLabelClickCoordinates;
// TAP EVENTS AND GHOST CLICKS
//
// Why tap events?
// Mobile browsers detect a tap, then wait a moment (usually ~300ms) to see if you're
// double-tapping, and then fire a click event.
//
// This delay sucks and makes mobile apps feel unresponsive.
// So we detect touchstart, touchcancel and touchend ourselves and determine when
// the user has tapped on something.
//
// What happens when the browser then generates a click event?
// The browser, of course, also detects the tap and fires a click after a delay. This results in
// tapping/clicking twice. We do "clickbusting" to prevent it.
//
// How does it work?
// We attach global touchstart and click handlers, that run during the capture (early) phase.
// So the sequence for a tap is:
// - global touchstart: Sets an "allowable region" at the point touched.
// - element's touchstart: Starts a touch
// (- touchcancel ends the touch, no click follows)
// - element's touchend: Determines if the tap is valid (didn't move too far away, didn't hold
// too long) and fires the user's tap handler. The touchend also calls preventGhostClick().
// - preventGhostClick() removes the allowable region the global touchstart created.
// - The browser generates a click event.
// - The global click handler catches the click, and checks whether it was in an allowable region.
// - If preventGhostClick was called, the region will have been removed, the click is busted.
// - If the region is still there, the click proceeds normally. Therefore clicks on links and
// other elements without ngTap on them work normally.
//
// This is an ugly, terrible hack!
// Yeah, tell me about it. The alternatives are using the slow click events, or making our users
// deal with the ghost clicks, so I consider this the least of evils. Fortunately Angular
// encapsulates this ugly logic away from the user.
//
// Why not just put click handlers on the element?
// We do that too, just to be sure. If the tap event caused the DOM to change,
// it is possible another element is now in that position. To account for these possibly
// distinct elements, the handlers are global and care only about coordinates.
// Checks if the coordinates are close enough to be within the region.
function hit(x1, y1, x2, y2) {
return Math.abs(x1 - x2) < CLICKBUSTER_THRESHOLD && Math.abs(y1 - y2) < CLICKBUSTER_THRESHOLD;
}
// Checks a list of allowable regions against a click location.
// Returns true if the click should be allowed.
// Splices out the allowable region from the list after it has been used.
function checkAllowableRegions(touchCoordinates, x, y) {
for (var i = 0; i < touchCoordinates.length; i += 2) {
if (hit(touchCoordinates[i], touchCoordinates[i + 1], x, y)) {
touchCoordinates.splice(i, 2); // remove the matched (x, y) pair
return true; // allowable region
}
}
return false; // No allowable region; bust it.
}
// Global click handler that prevents the click if it's in a bustable zone and preventGhostClick
// was called recently.
function onClick(event) {
if (Date.now() - lastPreventedTime > PREVENT_DURATION) {
return; // Too old.
}
var touches = event.touches && event.touches.length ? event.touches : [event];
var x = touches[0].clientX;
var y = touches[0].clientY;
// Work around desktop Webkit quirk where clicking a label will fire two clicks (on the label
// and on the input element). Depending on the exact browser, this second click we don't want
// to bust has either (0,0), negative coordinates, or coordinates equal to triggering label
// click event
if (x < 1 && y < 1) {
return; // offscreen
}
if (lastLabelClickCoordinates &&
lastLabelClickCoordinates[0] === x && lastLabelClickCoordinates[1] === y) {
return; // input click triggered by label click
}
// reset label click coordinates on first subsequent click
if (lastLabelClickCoordinates) {
lastLabelClickCoordinates = null;
}
// remember label click coordinates to prevent click busting of trigger click event on input
if (nodeName_(event.target) === 'label') {
lastLabelClickCoordinates = [x, y];
}
// Look for an allowable region containing this click.
// If we find one, that means it was created by touchstart and not removed by
// preventGhostClick, so we don't bust it.
if (checkAllowableRegions(touchCoordinates, x, y)) {
return;
}
// If we didn't find an allowable region, bust the click.
event.stopPropagation();
event.preventDefault();
// Blur focused form elements
if (event.target && event.target.blur) {
event.target.blur();
}
}
// Global touchstart handler that creates an allowable region for a click event.
// This allowable region can be removed by preventGhostClick if we want to bust it.
function onTouchStart(event) {
var touches = event.touches && event.touches.length ? event.touches : [event];
var x = touches[0].clientX;
var y = touches[0].clientY;
touchCoordinates.push(x, y);
$timeout(function() {
// Remove the allowable region.
for (var i = 0; i < touchCoordinates.length; i += 2) {
if (touchCoordinates[i] === x && touchCoordinates[i + 1] === y) {
touchCoordinates.splice(i, 2); // remove this (x, y) pair
return;
}
}
}, PREVENT_DURATION, false);
}
// On the first call, attaches some event handlers. Then whenever it gets called, it creates a
// zone around the touchstart where clicks will get busted.
function preventGhostClick(x, y) {
if (!touchCoordinates) {
$rootElement[0].addEventListener('click', onClick, true);
$rootElement[0].addEventListener('touchstart', onTouchStart, true);
touchCoordinates = [];
}
lastPreventedTime = Date.now();
checkAllowableRegions(touchCoordinates, x, y);
}
// Actual linking function.
return function(scope, element, attr) {
var clickHandler = $parse(attr.ngClick),
tapping = false,
tapElement, // Used to blur the element after a tap.
startTime, // Used to check if the tap was held too long.
touchStartX,
touchStartY;
function resetState() {
tapping = false;
element.removeClass(ACTIVE_CLASS_NAME);
}
element.on('touchstart', function(event) {
tapping = true;
tapElement = event.target ? event.target : event.srcElement; // IE uses srcElement.
// Hack for Safari, which can target text nodes instead of containers.
if (tapElement.nodeType === 3) {
tapElement = tapElement.parentNode;
}
element.addClass(ACTIVE_CLASS_NAME);
startTime = Date.now();
// Use jQuery originalEvent
var originalEvent = event.originalEvent || event;
var touches = originalEvent.touches && originalEvent.touches.length ? originalEvent.touches : [originalEvent];
var e = touches[0];
touchStartX = e.clientX;
touchStartY = e.clientY;
});
element.on('touchcancel', function(event) {
resetState();
});
element.on('touchend', function(event) {
var diff = Date.now() - startTime;
// Use jQuery originalEvent
var originalEvent = event.originalEvent || event;
var touches = (originalEvent.changedTouches && originalEvent.changedTouches.length) ?
originalEvent.changedTouches :
((originalEvent.touches && originalEvent.touches.length) ? originalEvent.touches : [originalEvent]);
var e = touches[0];
var x = e.clientX;
var y = e.clientY;
var dist = Math.sqrt(Math.pow(x - touchStartX, 2) + Math.pow(y - touchStartY, 2));
if (tapping && diff < TAP_DURATION && dist < MOVE_TOLERANCE) {
// Call preventGhostClick so the clickbuster will catch the corresponding click.
preventGhostClick(x, y);
// Blur the focused element (the button, probably) before firing the callback.
// This doesn't work perfectly on Android Chrome, but seems to work elsewhere.
// I couldn't get anything to work reliably on Android Chrome.
if (tapElement) {
tapElement.blur();
}
if (!angular.isDefined(attr.disabled) || attr.disabled === false) {
element.triggerHandler('click', [event]);
}
}
resetState();
});
// Hack for iOS Safari's benefit. It goes searching for onclick handlers and is liable to click
// something else nearby.
element.onclick = function(event) { };
// Actual click handler.
// There are three different kinds of clicks, only two of which reach this point.
// - On desktop browsers without touch events, their clicks will always come here.
// - On mobile browsers, the simulated "fast" click will call this.
// - But the browser's follow-up slow click will be "busted" before it reaches this handler.
// Therefore it's safe to use this directive on both mobile and desktop.
element.on('click', function(event, touchend) {
scope.$apply(function() {
clickHandler(scope, {$event: (touchend || event)});
});
});
element.on('mousedown', function(event) {
element.addClass(ACTIVE_CLASS_NAME);
});
element.on('mousemove mouseup', function(event) {
element.removeClass(ACTIVE_CLASS_NAME);
});
};
}];
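The tap test applied in the `touchend` handler above — the touch must be short and must not travel far — can be sketched in isolation. This is a hypothetical helper, not part of the directive; the constants mirror `TAP_DURATION` and `MOVE_TOLERANCE`:

```javascript
var TAP_DURATION = 750;   // ms: longer than this is a tap-hold or drag
var MOVE_TOLERANCE = 12;  // px: further than this is a swipe or scroll

// A touch is a tap when it was released quickly and the finger stayed
// within a small radius of where it went down.
function isTap(elapsedMs, startX, startY, endX, endY) {
  var dist = Math.sqrt(Math.pow(endX - startX, 2) + Math.pow(endY - startY, 2));
  return elapsedMs < TAP_DURATION && dist < MOVE_TOLERANCE;
}

isTap(100, 10, 10, 15, 10); // → true: quick, moved only 5px
isTap(900, 10, 10, 15, 10); // → false: held too long
isTap(100, 10, 10, 40, 10); // → false: moved 30px
```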
/* global ngTouch: false */
/**
* @ngdoc directive
* @name ngSwipeLeft
*
* @description
* Specify custom behavior when an element is swiped to the left on a touchscreen device.
* A leftward swipe is a quick, right-to-left slide of the finger.
* Though ngSwipeLeft is designed for touch-based devices, it will work with a mouse click and drag
* too.
*
* To disable the mouse click and drag functionality, add `ng-swipe-disable-mouse` to
* the `ng-swipe-left` or `ng-swipe-right` DOM Element.
*
* Requires the {@link ngTouch `ngTouch`} module to be installed.
*
* @element ANY
* @param {expression} ngSwipeLeft {@link guide/expression Expression} to evaluate
* upon left swipe. (Event object is available as `$event`)
*
* @example
<example module="ngSwipeLeftExample" deps="angular-touch.js" name="ng-swipe-left">
<file name="index.html">
<div ng-show="!showActions" ng-swipe-left="showActions = true">
Some list content, like an email in the inbox
</div>
<div ng-show="showActions" ng-swipe-right="showActions = false">
<button ng-click="reply()">Reply</button>
<button ng-click="delete()">Delete</button>
</div>
</file>
<file name="script.js">
angular.module('ngSwipeLeftExample', ['ngTouch']);
</file>
</example>
*/
/**
* @ngdoc directive
* @name ngSwipeRight
*
* @description
* Specify custom behavior when an element is swiped to the right on a touchscreen device.
* A rightward swipe is a quick, left-to-right slide of the finger.
* Though ngSwipeRight is designed for touch-based devices, it will work with a mouse click and drag
* too.
*
* Requires the {@link ngTouch `ngTouch`} module to be installed.
*
* @element ANY
* @param {expression} ngSwipeRight {@link guide/expression Expression} to evaluate
* upon right swipe. (Event object is available as `$event`)
*
* @example
<example module="ngSwipeRightExample" deps="angular-touch.js" name="ng-swipe-right">
<file name="index.html">
<div ng-show="!showActions" ng-swipe-left="showActions = true">
Some list content, like an email in the inbox
</div>
<div ng-show="showActions" ng-swipe-right="showActions = false">
<button ng-click="reply()">Reply</button>
<button ng-click="delete()">Delete</button>
</div>
</file>
<file name="script.js">
angular.module('ngSwipeRightExample', ['ngTouch']);
</file>
</example>
*/
function makeSwipeDirective(directiveName, direction, eventName) {
ngTouch.directive(directiveName, ['$parse', '$swipe', function($parse, $swipe) {
// The maximum vertical delta for a swipe should be less than 75px.
var MAX_VERTICAL_DISTANCE = 75;
// Vertical distance should not be more than a fraction of the horizontal distance.
var MAX_VERTICAL_RATIO = 0.3;
// At least 30px of lateral motion is necessary for a swipe.
var MIN_HORIZONTAL_DISTANCE = 30;
return function(scope, element, attr) {
var swipeHandler = $parse(attr[directiveName]);
var startCoords, valid;
function validSwipe(coords) {
// Check that it's within the coordinates.
// Absolute vertical distance must be within tolerances.
// Horizontal distance, we take the current X - the starting X.
// This is negative for leftward swipes and positive for rightward swipes.
// After multiplying by the direction (-1 for left, +1 for right), legal swipes
// (ie. same direction as the directive wants) will have a positive delta and
// illegal ones a negative delta.
// Therefore this delta must be positive, and larger than the minimum.
if (!startCoords) return false;
var deltaY = Math.abs(coords.y - startCoords.y);
var deltaX = (coords.x - startCoords.x) * direction;
return valid && // Short circuit for already-invalidated swipes.
deltaY < MAX_VERTICAL_DISTANCE &&
deltaX > 0 &&
deltaX > MIN_HORIZONTAL_DISTANCE &&
deltaY / deltaX < MAX_VERTICAL_RATIO;
}
var pointerTypes = ['touch'];
if (!angular.isDefined(attr['ngSwipeDisableMouse'])) {
pointerTypes.push('mouse');
}
$swipe.bind(element, {
'start': function(coords, event) {
startCoords = coords;
valid = true;
},
'cancel': function(event) {
valid = false;
},
'end': function(coords, event) {
if (validSwipe(coords)) {
scope.$apply(function() {
element.triggerHandler(eventName);
swipeHandler(scope, {$event: event});
});
}
}
}, pointerTypes);
};
}]);
}
// Left is negative X-coordinate, right is positive.
makeSwipeDirective('ngSwipeLeft', -1, 'swipeleft');
makeSwipeDirective('ngSwipeRight', 1, 'swiperight');
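The arithmetic in `validSwipe` above can be illustrated standalone. This is a simplified sketch (the real code also short-circuits on a `valid` flag set by the `start`/`cancel` handlers); the function name is invented:

```javascript
// Constants mirror those in makeSwipeDirective.
var MAX_VERTICAL_DISTANCE = 75;   // px of vertical drift allowed
var MAX_VERTICAL_RATIO = 0.3;     // vertical drift relative to horizontal travel
var MIN_HORIZONTAL_DISTANCE = 30; // minimum lateral travel

// Multiplying the horizontal delta by the direction (-1 for left, +1 for
// right) makes legal swipes positive, so one comparison also rejects swipes
// in the wrong direction.
function isValidSwipe(start, end, direction) {
  var deltaY = Math.abs(end.y - start.y);
  var deltaX = (end.x - start.x) * direction;
  return deltaY < MAX_VERTICAL_DISTANCE &&
    deltaX > MIN_HORIZONTAL_DISTANCE &&
    deltaY / deltaX < MAX_VERTICAL_RATIO;
}

// A 100px right-to-left drag satisfies ngSwipeLeft (direction -1) only:
isValidSwipe({x: 200, y: 50}, {x: 100, y: 55}, -1); // → true
isValidSwipe({x: 200, y: 50}, {x: 100, y: 55}, 1);  // → false
```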
})(window, window.angular);
#import "Three20/TTViewController.h"
#import "Three20/TTPhotoSource.h"
#import "Three20/TTScrollView.h"
#import "Three20/TTThumbsViewController.h"
@class TTScrollView, TTPhotoView;
@interface TTPhotoViewController : TTViewController
<TTScrollViewDelegate, TTScrollViewDataSource, TTPhotoSourceDelegate,
TTThumbsViewControllerDelegate> {
id<TTPhotoSource> _photoSource;
id<TTPhoto> _centerPhoto;
NSUInteger _centerPhotoIndex;
UIView* _innerView;
TTScrollView* _scrollView;
TTPhotoView* _photoStatusView;
UIToolbar* _toolbar;
UIBarButtonItem* _nextButton;
UIBarButtonItem* _previousButton;
UIImage* _defaultImage;
NSString* _statusText;
TTThumbsViewController* _thumbsController;
NSTimer* _slideshowTimer;
NSTimer* _loadTimer;
BOOL _delayLoad;
}
/**
* The source of a sequential photo collection that will be displayed.
*/
@property(nonatomic,retain) id<TTPhotoSource> photoSource;
/**
* The photo that is currently visible and centered.
*
* You can assign this directly to change the photoSource to the one that contains the photo.
*/
@property(nonatomic,retain) id<TTPhoto> centerPhoto;
/**
* The index of the currently visible photo.
*
* Because centerPhoto can be nil while waiting for the source to load the photo, this property
* must be maintained even though centerPhoto has its own index property.
*/
@property(nonatomic,readonly) NSUInteger centerPhotoIndex;
/**
* The default image to show before a photo has been loaded.
*/
@property(nonatomic,retain) UIImage* defaultImage;
/**
* Creates a photo view for a new page.
*
 * Do not call this directly. It is meant to be overridden by subclasses.
*/
- (TTPhotoView*)createPhotoView;
/**
* Creates the thumbnail controller used by the "See All" button.
*
* Do not call this directly. It is meant to be overriden by subclasses.
*/
- (TTThumbsViewController*)createThumbsViewController;
@end
// -------------------------------------------------------------------------------------------------
// <copyright file="IKernel.cs" company="Ninject Project Contributors">
// Copyright (c) 2007-2010 Enkari, Ltd. All rights reserved.
// Copyright (c) 2010-2020 Ninject Project Contributors. All rights reserved.
//
// Dual-licensed under the Apache License, Version 2.0, and the Microsoft Public License (Ms-PL).
// You may not use this file except in compliance with one of the Licenses.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// or
// http://www.microsoft.com/opensource/licenses.mspx
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// </copyright>
// -------------------------------------------------------------------------------------------------
namespace Ninject
{
using System;
using Ninject.Planning.Bindings;
/// <summary>
/// A super-factory that can create objects of all kinds, following hints provided by <see cref="IBinding"/>s.
/// </summary>
[Obsolete("Use IKernelConfiguration and IReadOnlyKernel")]
public interface IKernel : IKernelConfiguration, IReadOnlyKernel
{
/// <summary>
/// Gets the ninject settings.
/// </summary>
[Obsolete]
INinjectSettings Settings { get; }
}
}
# React + MobX TodoMVC Example
This repository provides a reference implementation of the [TodoMVC](http://todomvc.com) application written using [MobX](https://github.com/mobxjs/mobx), [React](https://facebook.github.io/react) JSX and ES6.
## Running the example
```
npm install
npm start
open http://localhost:3000
```
The example requires node 4.0 or higher

## Changing the example
If you are new to MobX, take a look at the [ten minutes, interactive introduction](https://mobxjs.github.io/mobx/getting-started.html) to MobX and React. MobX provides a refreshing way to manage your app state by combining mutable data structures with transparent reactive programming.
The state and actions of this app are defined in two stores: `todoModel` and `viewModel`.
This is not strictly necessary, but it provides a nice separation of concerns between data that affects the domain of the application and data that affects its user interface.
This is a useful distinction for testing, reuse in backend services, etc.
The project uses hot-reloading, so most changes made to the app will be picked up automatically.
By default the `mobx-react-devtools` are enabled as well. During each rendering, a small render report is printed for all updated components.
The dev-tools can be disabled by commenting the `import` statement in `src/index.js`.
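To make the two-store split concrete, here is a minimal, framework-free sketch in plain ES6. This is an illustrative outline only, not the app's actual sources — in the real project these fields are wrapped in MobX observables and computeds, and the class/method names here are hypothetical:

```javascript
// Domain store: holds the todos themselves and domain logic.
class TodoModel {
  constructor() {
    this.todos = []; // in the real app: @observable
  }
  addTodo(title) {
    this.todos.push({ title, completed: false });
  }
  get activeCount() { // in the real app: @computed
    return this.todos.filter(t => !t.completed).length;
  }
}

// UI store: holds state that only matters to the view layer.
class ViewModel {
  constructor() {
    this.todoBeingEdited = null;
    this.filter = 'all'; // 'all' | 'active' | 'completed'
  }
}

const todoModel = new TodoModel();
const viewModel = new ViewModel();
todoModel.addTodo('write docs');
```

Because the domain store knows nothing about the UI store, `TodoModel` can be unit-tested or reused on a server without pulling in any view concerns.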
// Convenience method, almost need to turn it into a middleware of sorts
import { Meteor } from 'meteor/meteor';
import { Users } from '../../../models';
import { API } from '../api';
API.helperMethods.set('getUserFromParams', function _getUserFromParams() {
const doesntExist = { _doesntExist: true };
let user;
const params = this.requestParams();
if (params.userId && params.userId.trim()) {
user = Users.findOneById(params.userId) || doesntExist;
} else if (params.username && params.username.trim()) {
user = Users.findOneByUsernameIgnoringCase(params.username) || doesntExist;
} else if (params.user && params.user.trim()) {
user = Users.findOneByUsernameIgnoringCase(params.user) || doesntExist;
} else {
throw new Meteor.Error('error-user-param-not-provided', 'The required "userId" or "username" param was not provided');
}
if (user._doesntExist) {
throw new Meteor.Error('error-invalid-user', 'The required "userId" or "username" param provided does not match any users');
}
return user;
});
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Copyright 2012 Freescale Semiconductor, Inc.
*/
#ifndef __PINCTRL_MXS_H
#define __PINCTRL_MXS_H
#include <linux/platform_device.h>
#include <linux/pinctrl/pinctrl.h>
#define SET 0x4
#define CLR 0x8
#define TOG 0xc
#define MXS_PINCTRL_PIN(pin) PINCTRL_PIN(pin, #pin)
#define PINID(bank, pin) ((bank) * 32 + (pin))
/*
* pinmux-id bit field definitions
*
* bank: 15..12 (4)
* pin: 11..4 (8)
* muxsel: 3..0 (4)
*/
#define MUXID_TO_PINID(m) PINID((m) >> 12 & 0xf, (m) >> 4 & 0xff)
#define MUXID_TO_MUXSEL(m) ((m) & 0xf)
#define PINID_TO_BANK(p) ((p) >> 5)
#define PINID_TO_PIN(p) ((p) % 32)
/*
* pin config bit field definitions
*
* pull-up: 6..5 (2)
* voltage: 4..3 (2)
* mA: 2..0 (3)
*
* MSB of each field is presence bit for the config.
*/
#define PULL_PRESENT (1 << 6)
#define PULL_SHIFT 5
#define VOL_PRESENT (1 << 4)
#define VOL_SHIFT 3
#define MA_PRESENT (1 << 2)
#define MA_SHIFT 0
#define CONFIG_TO_PULL(c) ((c) >> PULL_SHIFT & 0x1)
#define CONFIG_TO_VOL(c) ((c) >> VOL_SHIFT & 0x1)
#define CONFIG_TO_MA(c) ((c) >> MA_SHIFT & 0x3)
struct mxs_function {
const char *name;
const char **groups;
unsigned ngroups;
};
struct mxs_group {
const char *name;
unsigned int *pins;
unsigned npins;
u8 *muxsel;
u8 config;
};
struct mxs_regs {
u16 muxsel;
u16 drive;
u16 pull;
};
struct mxs_pinctrl_soc_data {
const struct mxs_regs *regs;
const struct pinctrl_pin_desc *pins;
unsigned npins;
struct mxs_function *functions;
unsigned nfunctions;
struct mxs_group *groups;
unsigned ngroups;
};
int mxs_pinctrl_probe(struct platform_device *pdev,
struct mxs_pinctrl_soc_data *soc);
#endif /* __PINCTRL_MXS_H */
<?php
/**
* ConfigValue.php
* 29-May-2020
*
* PHP Version 7
*
* @category Services
* @package Services_OpenStreetMap
* @author Ken Guest <[email protected]>
* @license BSD http://www.opensource.org/licenses/bsd-license.php
* @version Release: @package_version@
* @link ConfigValue.php
*/
/**
* Services_OpenStreetMap_Validator_ConfigValue
*
* @category Services
* @package Services_OpenStreetMap
* @author Ken Guest <[email protected]>
* @license BSD http://www.opensource.org/licenses/bsd-license.php
* @link ConfigValue.php
*/
class Services_OpenStreetMap_Validator_ConfigValue
{
/**
* __construct
*
* @param string $value Possible valid config key
* @param array $config Array of valid config keys
*
* @return void
*/
public function __construct($value = '', array $config = [])
{
if ($value === '') {
return;
}
$this->validate($value, $config);
}
/**
* Validate potential config key
*
* @param string $value Possible valid config key
* @param array $config Array of valid config keys
*
* @return void
*/
public function validate(string $value, array $config): void
{
if (!array_key_exists($value, $config)) {
throw new Services_OpenStreetMap_InvalidArgumentException(
"Unknown config parameter '$value'"
);
}
}
}
/*
* Copyright (c) 2004-2011 Atheros Communications Inc.
* Copyright (c) 2011-2012 Qualcomm Atheros, Inc.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#include "core.h"
#include "hif-ops.h"
#include "target.h"
#include "debug.h"
int ath6kl_bmi_done(struct ath6kl *ar)
{
int ret;
u32 cid = BMI_DONE;
if (ar->bmi.done_sent) {
ath6kl_dbg(ATH6KL_DBG_BMI, "bmi done skipped\n");
return 0;
}
ar->bmi.done_sent = true;
ret = ath6kl_hif_bmi_write(ar, (u8 *)&cid, sizeof(cid));
if (ret) {
ath6kl_err("Unable to send bmi done: %d\n", ret);
return ret;
}
return 0;
}
int ath6kl_bmi_get_target_info(struct ath6kl *ar,
struct ath6kl_bmi_target_info *targ_info)
{
int ret;
u32 cid = BMI_GET_TARGET_INFO;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
ret = ath6kl_hif_bmi_write(ar, (u8 *)&cid, sizeof(cid));
if (ret) {
ath6kl_err("Unable to send get target info: %d\n", ret);
return ret;
}
if (ar->hif_type == ATH6KL_HIF_TYPE_USB) {
ret = ath6kl_hif_bmi_read(ar, (u8 *)targ_info,
sizeof(*targ_info));
} else {
ret = ath6kl_hif_bmi_read(ar, (u8 *)&targ_info->version,
sizeof(targ_info->version));
}
if (ret) {
ath6kl_err("Unable to recv target info: %d\n", ret);
return ret;
}
if (le32_to_cpu(targ_info->version) == TARGET_VERSION_SENTINAL) {
/* Determine how many bytes are in the Target's targ_info */
ret = ath6kl_hif_bmi_read(ar,
(u8 *)&targ_info->byte_count,
sizeof(targ_info->byte_count));
if (ret) {
ath6kl_err("unable to read target info byte count: %d\n",
ret);
return ret;
}
/*
* The target's targ_info doesn't match the host's targ_info.
* We need to do some backwards compatibility to make this work.
*/
if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
WARN_ON(1);
return -EINVAL;
}
/* Read the remainder of the targ_info */
ret = ath6kl_hif_bmi_read(ar,
((u8 *)targ_info) +
sizeof(targ_info->byte_count),
sizeof(*targ_info) -
sizeof(targ_info->byte_count));
if (ret) {
ath6kl_err("Unable to read target info (%d bytes): %d\n",
targ_info->byte_count, ret);
return ret;
}
}
ath6kl_dbg(ATH6KL_DBG_BMI, "target info (ver: 0x%x type: 0x%x)\n",
targ_info->version, targ_info->type);
return 0;
}
int ath6kl_bmi_read(struct ath6kl *ar, u32 addr, u8 *buf, u32 len)
{
u32 cid = BMI_READ_MEMORY;
int ret;
u32 offset;
u32 len_remain, rx_len;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = ar->bmi.max_data_size + sizeof(cid) + sizeof(addr) + sizeof(len);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI,
"bmi read memory: device: addr: 0x%x, len: %d\n",
addr, len);
len_remain = len;
while (len_remain) {
rx_len = (len_remain < ar->bmi.max_data_size) ?
len_remain : ar->bmi.max_data_size;
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
memcpy(&(ar->bmi.cmd_buf[offset]), &rx_len, sizeof(rx_len));
offset += sizeof(len);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n",
ret);
return ret;
}
ret = ath6kl_hif_bmi_read(ar, ar->bmi.cmd_buf, rx_len);
if (ret) {
ath6kl_err("Unable to read from the device: %d\n",
ret);
return ret;
}
memcpy(&buf[len - len_remain], ar->bmi.cmd_buf, rx_len);
len_remain -= rx_len; addr += rx_len;
}
return 0;
}
int ath6kl_bmi_write(struct ath6kl *ar, u32 addr, u8 *buf, u32 len)
{
u32 cid = BMI_WRITE_MEMORY;
int ret;
u32 offset;
u32 len_remain, tx_len;
const u32 header = sizeof(cid) + sizeof(addr) + sizeof(len);
u8 aligned_buf[400];
u8 *src;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
if ((ar->bmi.max_data_size + header) > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
if (WARN_ON(ar->bmi.max_data_size > sizeof(aligned_buf)))
return -E2BIG;
memset(ar->bmi.cmd_buf, 0, ar->bmi.max_data_size + header);
ath6kl_dbg(ATH6KL_DBG_BMI,
"bmi write memory: addr: 0x%x, len: %d\n", addr, len);
len_remain = len;
while (len_remain) {
src = &buf[len - len_remain];
if (len_remain < (ar->bmi.max_data_size - header)) {
if (len_remain & 3) {
/* align it with 4 bytes */
len_remain = len_remain +
(4 - (len_remain & 3));
memcpy(aligned_buf, src, len_remain);
src = aligned_buf;
}
tx_len = len_remain;
} else {
tx_len = (ar->bmi.max_data_size - header);
}
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
memcpy(&(ar->bmi.cmd_buf[offset]), &tx_len, sizeof(tx_len));
offset += sizeof(tx_len);
memcpy(&(ar->bmi.cmd_buf[offset]), src, tx_len);
offset += tx_len;
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n",
ret);
return ret;
}
len_remain -= tx_len; addr += tx_len;
}
return 0;
}
int ath6kl_bmi_execute(struct ath6kl *ar, u32 addr, u32 *param)
{
u32 cid = BMI_EXECUTE;
int ret;
u32 offset;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = sizeof(cid) + sizeof(addr) + sizeof(param);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI, "bmi execute: addr: 0x%x, param: %d)\n",
addr, *param);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
memcpy(&(ar->bmi.cmd_buf[offset]), param, sizeof(*param));
offset += sizeof(*param);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n", ret);
return ret;
}
ret = ath6kl_hif_bmi_read(ar, ar->bmi.cmd_buf, sizeof(*param));
if (ret) {
ath6kl_err("Unable to read from the device: %d\n", ret);
return ret;
}
memcpy(param, ar->bmi.cmd_buf, sizeof(*param));
return 0;
}
int ath6kl_bmi_set_app_start(struct ath6kl *ar, u32 addr)
{
u32 cid = BMI_SET_APP_START;
int ret;
u32 offset;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = sizeof(cid) + sizeof(addr);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI, "bmi set app start: addr: 0x%x\n", addr);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n", ret);
return ret;
}
return 0;
}
int ath6kl_bmi_reg_read(struct ath6kl *ar, u32 addr, u32 *param)
{
u32 cid = BMI_READ_SOC_REGISTER;
int ret;
u32 offset;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = sizeof(cid) + sizeof(addr);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI, "bmi read SOC reg: addr: 0x%x\n", addr);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n", ret);
return ret;
}
ret = ath6kl_hif_bmi_read(ar, ar->bmi.cmd_buf, sizeof(*param));
if (ret) {
ath6kl_err("Unable to read from the device: %d\n", ret);
return ret;
}
memcpy(param, ar->bmi.cmd_buf, sizeof(*param));
return 0;
}
int ath6kl_bmi_reg_write(struct ath6kl *ar, u32 addr, u32 param)
{
u32 cid = BMI_WRITE_SOC_REGISTER;
int ret;
u32 offset;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = sizeof(cid) + sizeof(addr) + sizeof(param);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI,
"bmi write SOC reg: addr: 0x%x, param: %d\n",
addr, param);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
memcpy(&(ar->bmi.cmd_buf[offset]), &param, sizeof(param));
offset += sizeof(param);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n", ret);
return ret;
}
return 0;
}
int ath6kl_bmi_lz_data(struct ath6kl *ar, u8 *buf, u32 len)
{
u32 cid = BMI_LZ_DATA;
int ret;
u32 offset;
u32 len_remain, tx_len;
const u32 header = sizeof(cid) + sizeof(len);
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = ar->bmi.max_data_size + header;
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI, "bmi send LZ data: len: %d)\n",
len);
len_remain = len;
while (len_remain) {
tx_len = (len_remain < (ar->bmi.max_data_size - header)) ?
len_remain : (ar->bmi.max_data_size - header);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &tx_len, sizeof(tx_len));
offset += sizeof(tx_len);
memcpy(&(ar->bmi.cmd_buf[offset]), &buf[len - len_remain],
tx_len);
offset += tx_len;
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to write to the device: %d\n",
ret);
return ret;
}
len_remain -= tx_len;
}
return 0;
}
int ath6kl_bmi_lz_stream_start(struct ath6kl *ar, u32 addr)
{
u32 cid = BMI_LZ_STREAM_START;
int ret;
u32 offset;
u16 size;
if (ar->bmi.done_sent) {
ath6kl_err("bmi done sent already, cmd %d disallowed\n", cid);
return -EACCES;
}
size = sizeof(cid) + sizeof(addr);
if (size > ar->bmi.max_cmd_size) {
WARN_ON(1);
return -EINVAL;
}
memset(ar->bmi.cmd_buf, 0, size);
ath6kl_dbg(ATH6KL_DBG_BMI,
"bmi LZ stream start: addr: 0x%x)\n",
addr);
offset = 0;
memcpy(&(ar->bmi.cmd_buf[offset]), &cid, sizeof(cid));
offset += sizeof(cid);
memcpy(&(ar->bmi.cmd_buf[offset]), &addr, sizeof(addr));
offset += sizeof(addr);
ret = ath6kl_hif_bmi_write(ar, ar->bmi.cmd_buf, offset);
if (ret) {
ath6kl_err("Unable to start LZ stream to the device: %d\n",
ret);
return ret;
}
return 0;
}
int ath6kl_bmi_fast_download(struct ath6kl *ar, u32 addr, u8 *buf, u32 len)
{
int ret;
u32 last_word = 0;
u32 last_word_offset = len & ~0x3;
u32 unaligned_bytes = len & 0x3;
ret = ath6kl_bmi_lz_stream_start(ar, addr);
if (ret)
return ret;
if (unaligned_bytes) {
/* copy the last word into a zero padded buffer */
memcpy(&last_word, &buf[last_word_offset], unaligned_bytes);
}
ret = ath6kl_bmi_lz_data(ar, buf, last_word_offset);
if (ret)
return ret;
if (unaligned_bytes)
ret = ath6kl_bmi_lz_data(ar, (u8 *)&last_word, 4);
if (!ret) {
/* Close compressed stream and open a new (fake) one.
* This serves mainly to flush Target caches. */
ret = ath6kl_bmi_lz_stream_start(ar, 0x00);
}
return ret;
}
void ath6kl_bmi_reset(struct ath6kl *ar)
{
ar->bmi.done_sent = false;
}
int ath6kl_bmi_init(struct ath6kl *ar)
{
if (WARN_ON(ar->bmi.max_data_size == 0))
return -EINVAL;
/* cmd + addr + len + data_size */
ar->bmi.max_cmd_size = ar->bmi.max_data_size + (sizeof(u32) * 3);
ar->bmi.cmd_buf = kzalloc(ar->bmi.max_cmd_size, GFP_ATOMIC);
if (!ar->bmi.cmd_buf)
return -ENOMEM;
return 0;
}
void ath6kl_bmi_cleanup(struct ath6kl *ar)
{
kfree(ar->bmi.cmd_buf);
ar->bmi.cmd_buf = NULL;
}
{
"domain": "go.th",
"nameServers": [
"a.thains.co.th",
"b.thains.co.th",
"c.thains.co.th",
"ns.thnic.net",
"p.thains.co.th"
],
"policies": [
{
"type": "idn-disallowed"
}
]
}
<?php
/**
* Open Source Social Network
*
* A generic parent class for Class exceptions
*
* @package (softlab24.com).ossn
* @author OSSN Core Team <[email protected]>
* @copyright (C) SOFTLAB24 LIMITED
* @license Open Source Social Network License (OSSN LICENSE) http://www.opensource-socialnetwork.org/licence
* @link https://www.opensource-socialnetwork.org/
*/
class OssnDatabaseException extends Exception {}
<?php
namespace GuzzleHttp\Promise;
/**
* Represents a promise that iterates over many promises and invokes
* side-effect functions in the process.
*/
class EachPromise implements PromisorInterface
{
private $pending = [];
/** @var \Iterator */
private $iterable;
/** @var callable|int */
private $concurrency;
/** @var callable */
private $onFulfilled;
/** @var callable */
private $onRejected;
/** @var Promise */
private $aggregate;
/** @var bool */
private $mutex;
/**
* Configuration hash can include the following key value pairs:
*
* - fulfilled: (callable) Invoked when a promise fulfills. The function
* is invoked with three arguments: the fulfillment value, the index
* position from the iterable list of the promise, and the aggregate
* promise that manages all of the promises. The aggregate promise may
* be resolved from within the callback to short-circuit the promise.
* - rejected: (callable) Invoked when a promise is rejected. The
* function is invoked with three arguments: the rejection reason, the
* index position from the iterable list of the promise, and the
* aggregate promise that manages all of the promises. The aggregate
* promise may be resolved from within the callback to short-circuit
* the promise.
* - concurrency: (integer) Pass this configuration option to limit the
* allowed number of outstanding concurrently executing promises,
* creating a capped pool of promises. There is no limit by default.
*
* @param mixed $iterable Promises or values to iterate.
* @param array $config Configuration options
*/
public function __construct($iterable, array $config = [])
{
$this->iterable = iter_for($iterable);
if (isset($config['concurrency'])) {
$this->concurrency = $config['concurrency'];
}
if (isset($config['fulfilled'])) {
$this->onFulfilled = $config['fulfilled'];
}
if (isset($config['rejected'])) {
$this->onRejected = $config['rejected'];
}
}
public function promise()
{
if ($this->aggregate) {
return $this->aggregate;
}
try {
$this->createPromise();
$this->iterable->rewind();
$this->refillPending();
} catch (\Throwable $e) {
$this->aggregate->reject($e);
} catch (\Exception $e) {
$this->aggregate->reject($e);
}
return $this->aggregate;
}
private function createPromise()
{
$this->mutex = false;
$this->aggregate = new Promise(function () {
reset($this->pending);
if (empty($this->pending) && !$this->iterable->valid()) {
$this->aggregate->resolve(null);
return;
}
// Consume a potentially fluctuating list of promises while
// ensuring that indexes are maintained (precluding array_shift).
while ($promise = current($this->pending)) {
next($this->pending);
$promise->wait();
if ($this->aggregate->getState() !== PromiseInterface::PENDING) {
return;
}
}
});
// Clear the references when the promise is resolved.
$clearFn = function () {
$this->iterable = $this->concurrency = $this->pending = null;
$this->onFulfilled = $this->onRejected = null;
};
$this->aggregate->then($clearFn, $clearFn);
}
private function refillPending()
{
if (!$this->concurrency) {
// Add all pending promises.
while ($this->addPending() && $this->advanceIterator());
return;
}
// Add only up to N pending promises.
$concurrency = is_callable($this->concurrency)
? call_user_func($this->concurrency, count($this->pending))
: $this->concurrency;
$concurrency = max($concurrency - count($this->pending), 0);
// Concurrency may be set to 0 to disallow new promises.
if (!$concurrency) {
return;
}
// Add the first pending promise.
$this->addPending();
// Note this is special handling for concurrency=1 so that we do
// not advance the iterator after adding the first promise. This
// helps work around issues with generators that might not have the
// next value to yield until promise callbacks are called.
while (--$concurrency
&& $this->advanceIterator()
&& $this->addPending());
}
private function addPending()
{
if (!$this->iterable || !$this->iterable->valid()) {
return false;
}
$promise = promise_for($this->iterable->current());
$idx = $this->iterable->key();
$this->pending[$idx] = $promise->then(
function ($value) use ($idx) {
if ($this->onFulfilled) {
call_user_func(
$this->onFulfilled, $value, $idx, $this->aggregate
);
}
$this->step($idx);
},
function ($reason) use ($idx) {
if ($this->onRejected) {
call_user_func(
$this->onRejected, $reason, $idx, $this->aggregate
);
}
$this->step($idx);
}
);
return true;
}
private function advanceIterator()
{
// Place a lock on the iterator so that we ensure to not recurse,
// preventing fatal generator errors.
if ($this->mutex) {
return false;
}
$this->mutex = true;
try {
$this->iterable->next();
$this->mutex = false;
return true;
} catch (\Throwable $e) {
$this->aggregate->reject($e);
$this->mutex = false;
return false;
} catch (\Exception $e) {
$this->aggregate->reject($e);
$this->mutex = false;
return false;
}
}
private function step($idx)
{
// If the promise was already resolved, then ignore this step.
if ($this->aggregate->getState() !== PromiseInterface::PENDING) {
return;
}
unset($this->pending[$idx]);
        // Only refill pending promises if we are not locked, preventing the
        // EachPromise from recursively invoking the provided iterator, which
        // would cause a fatal error: "Cannot resume an already running generator"
if ($this->advanceIterator() && !$this->checkIfFinished()) {
// Add more pending promises if possible.
$this->refillPending();
}
}
private function checkIfFinished()
{
if (!$this->pending && !$this->iterable->valid()) {
// Resolve the promise if there's nothing left to do.
$this->aggregate->resolve(null);
return true;
}
return false;
}
}
// @flow
export default {
LIGHT: 'light',
DARK: 'dark',
};