Integrated human physiology: breathing, blood pressure and blood flow to the brain

The cerebral vasculature rapidly adapts to changes in perfusion pressure (cerebral autoregulation), regional metabolic requirements of the brain (neurovascular coupling), autonomic neural activity, and humoral factors (cerebrovascular reactivity). Regulation of cerebral blood flow (CBF) is therefore highly controlled and involves a wide spectrum of regulatory mechanisms that together work to maintain optimum oxygen and nutrient supply. It is well established that the cerebral vasculature is highly sensitive to changes in arterial blood gases, in particular the partial pressure of arterial carbon dioxide (PaCO2). The teleological relevance of this unique feature of the brain likely lies in the need to tightly control brain pH and its related impact on ventilatory control at the level of the central chemoreceptors. Changes in arterial blood gases, in particular those that cause hypoxaemia and hypercapnia, also have widespread effects on the systemic vasculature, often leading to sympathoexcitation and related blood pressure (BP) elevations via vasoconstriction (Ainslie et al. 2005). In this issue of The Journal of Physiology, an elegant study by Battisti-Charbonney and co-workers provides a relevant example of integrative human physiology (Battisti-Charbonney et al. 2011). Using continuous bilateral measurements of blood flow velocity in the middle cerebral arteries (as a surrogate index of CBF) and BP, the authors gauged the CBF responses to CO2 changes under a background condition of either hyperoxia or hypoxia. The key finding is that the relationship between CBF velocity and end-tidal PCO2 (PETCO2), over a wide range of PETCO2 values spanning hypocapnia (PETCO2: ∼25 mmHg) and hyperoxic or hypoxic rebreathing (PETCO2: 55–60 mmHg and 45–50 mmHg, respectively), is optimally fitted by a sigmoid (logistic) curve rather than a linear one. Above the upper limit of CO2 reactivity, i.e. near the threshold (∼55–60 mmHg) where CBF velocity has plateaued despite further elevations in PETCO2, BP then rose linearly, presumably via chemoreflex-induced elevations in sympathetic nerve activity (SNA). Notably, the authors are the first to integrate this logistic and linear fitting approach to document the influence of PETCO2, and of related changes in mean arterial pressure (MAP), on CBF. Collectively, these experiments demonstrate that rebreathing tests, when analysed as described, may provide an estimate of the cerebrovascular response to CO2 (and O2) at constant BP, as well as an estimate of the passive cerebrovascular response to both BP and CO2. In the broader context of integrative physiology, these findings are noteworthy on many levels. For example, impaired cerebrovascular reactivity to CO2, together with a failure to effectively counter-regulate (or autoregulate) against systemic BP fluctuations, could predispose to adverse cerebrovascular events such as stroke, infarct extension and haemorrhagic transformation of existing strokes (Aries et al. 2010). The critical physiological and methodological consideration, however, is that traditional tests of cerebrovascular reactivity to CO2 or of cerebrovascular autoregulation treat these factors as separate entities. Clearly they are not: elevations in PaCO2 will lead to sympathoexcitation and increases in BP via vasoconstriction (Ainslie et al. 2005). The latter, as exemplified by Battisti-Charbonney and co-workers, will have effects on CBF independent of those of PaCO2 (Lucas et al. 2010). Conversely, emerging evidence indicates that acute changes in BP may in turn affect alveolar ventilation, and thus PaCO2, in part via the aptly named 'ventilatory baroreflex' (Stewart et al. 2011). Moreover, because the brain is relatively pressure-passive (Lucas et al. 2010), and since elevations in PaCO2 also 'impair' the brain's capability to defend against BP changes (Panerai et al. 1999), consideration of BP as a critical determinant of CBF is warranted in these conditions. Examples of these integrated changes in PaCO2 and BP occur in a myriad of everyday activities: postural change, coughing, laughing, defecation, exercise and sexual activity, to name but a few. The merit of the newly proposed method as a clinical tool for the separate and combined quantification of cerebrovascular reactivity to CO2 and BP remains to be established. However, consideration of the combined influence of both PaCO2 and BP on the brain would seem meritorious from a systems physiology viewpoint. In summary, in view of the article by Battisti-Charbonney et al., we have attempted to highlight some of the common factors that independently, synergistically and often antagonistically participate in the regulation of CBF. Research exploring these complex interactions is currently lacking. Future studies with particular focus on these integrative physiological mechanisms are clearly warranted in both health and disease states.
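The sigmoid-versus-linear comparison at the heart of the analysis is straightforward to reproduce. Below is a minimal Python sketch (synthetic data and an assumed four-parameter logistic form, purely illustrative; this is not the authors' code or exact model) that fits both a logistic and a linear model of CBF velocity against PETCO2 and compares the residuals.

import numpy as np
from scipy.optimize import curve_fit

def logistic(petco2, lo, hi, mid, slope):
    # Sigmoid CBF-velocity response: plateaus at 'lo' and 'hi',
    # steepest at PETCO2 = 'mid'
    return lo + (hi - lo) / (1.0 + np.exp(-(petco2 - mid) / slope))

# Illustrative synthetic data: PETCO2 in mmHg, MCA velocity in cm/s
petco2 = np.linspace(25, 60, 15)
vel = logistic(petco2, 45, 95, 45, 4) + np.random.default_rng(1).normal(0, 2, 15)

popt, _ = curve_fit(logistic, petco2, vel, p0=[40, 100, 45, 5])
lin = np.polyfit(petco2, vel, 1)

rss_sig = np.sum((vel - logistic(petco2, *popt)) ** 2)
rss_lin = np.sum((vel - np.polyval(lin, petco2)) ** 2)
print(f"residual sum of squares: sigmoid {rss_sig:.1f} vs linear {rss_lin:.1f}")

On data with low- and high-PETCO2 plateaus, the logistic fit yields a markedly smaller residual sum of squares than the straight line, mirroring the study's conclusion.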
/*
 * Sets the value of the "stroke-width" attribute of this GraphicalPrimitive1D.
 */
int
GraphicalPrimitive1D::setStrokeWidth(double strokeWidth)
{
  mStrokeWidth = strokeWidth;
  mIsSetStrokeWidth = true;
  return LIBSBML_OPERATION_SUCCESS;
}
The Effect of Organizational Commitment on Higher Education Service Quality. The relationship between employee attitudes and service quality remains unclear. The aim of this study is therefore to uncover the relationship between organizational commitment and service quality by investigating the mechanism through which organizational commitment affects service quality. Data were collected by means of a self-administered survey; complete responses were obtained from 247 faculty members and 1,235 of their students at Aden University. The results show that organizational commitment affects service quality through a social mechanism (social exchange). Important implications, limitations and directions for future research are discussed. Because this study investigates the direct effect of organizational commitment on higher education service quality, the data were collected through questionnaires. Two survey instruments were used to test the hypotheses: the first measured organizational commitment, and the second measured students' perceptions of service quality. The questionnaires were distributed personally to the respondents: each faculty member received an envelope containing one organizational commitment survey together with the surveys for students, and the student questionnaires were hand-distributed in classes before or after lectures. In total, 400 questionnaires were distributed to faculty members; 257 were returned and 247 were usable, an effective response rate of 61.7%. In addition, 2,100 questionnaires were distributed to the faculty's customers (the students), of which 1,250 were returned, a response rate of approximately 60%.
/* * Copyright (c) 2018 Faiz & Siegeln Software GmbH * * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. * * The Software shall be used for Good, not Evil. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ package com.faizsiegeln.njams.messageformat.v3.projectmessageNewNamespace; import com.faizsiegeln.njams.messageformat.v3.projectmessage.IExtract; import com.faizsiegeln.njams.messageformat.v3.projectmessage.IExtractionrules; import com.faizsiegeln.njams.messageformat.v3.projectmessage.IExtractruleType; import java.util.ArrayList; import java.util.List; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlAttribute; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlRootElement; import javax.xml.bind.annotation.XmlType; /** * <p> * Java class for anonymous complex type. * * <p> * The following schema fragment specifies the expected content contained within this class. 
* * <pre> * &lt;complexType&gt; * &lt;complexContent&gt; * &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType"&gt; * &lt;sequence&gt; * &lt;element name="extractionrules"&gt; * &lt;complexType&gt; * &lt;complexContent&gt; * &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType"&gt; * &lt;sequence&gt; * &lt;element name="extractrule" type="{http://www.faizsiegeln.com/schema/njams/extracts/2012-10-22/}extractruleType" maxOccurs="unbounded"/&gt; * &lt;/sequence&gt; * &lt;/restriction&gt; * &lt;/complexContent&gt; * &lt;/complexType&gt; * &lt;/element&gt; * &lt;/sequence&gt; * &lt;attribute name="domain" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;attribute name="deployment" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;attribute name="engine" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;attribute name="process" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;attribute name="activity" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;attribute name="name" type="{http://www.w3.org/2001/XMLSchema}string" /&gt; * &lt;/restriction&gt; * &lt;/complexContent&gt; * &lt;/complexType&gt; * </pre> * * */ @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "extractionrules" }) @XmlRootElement(name = "extract") public class Extract implements IExtract<Extract.Extractionrules> { @XmlElement(required = true) protected Extract.Extractionrules extractionrules; @XmlAttribute(name = "domain") protected String domain; @XmlAttribute(name = "deployment") protected String deployment; @XmlAttribute(name = "engine") protected String engine; @XmlAttribute(name = "process") protected String process; @XmlAttribute(name = "activity") protected String activity; @XmlAttribute(name = "name") protected String name; /** * Gets the value of the extractionrules property. * * @return possible object is {@link Extract.Extractionrules } * */ @Override public Extract.Extractionrules getExtractionrules() { return extractionrules; } /** * Sets the value of the extractionrules property. * * @param value * allowed object is {@link Extract.Extractionrules } * */ @Override public void setExtractionrules(Extract.Extractionrules value) { this.extractionrules = value; } /** * Gets the value of the domain property. * * @return possible object is {@link String } * */ @Override public String getDomain() { return domain; } /** * Sets the value of the domain property. * * @param value * allowed object is {@link String } * */ @Override public void setDomain(String value) { this.domain = value; } /** * Gets the value of the deployment property. * * @return possible object is {@link String } * */ @Override public String getDeployment() { return deployment; } /** * Sets the value of the deployment property. * * @param value * allowed object is {@link String } * */ @Override public void setDeployment(String value) { this.deployment = value; } /** * Gets the value of the engine property. * * @return possible object is {@link String } * */ @Override public String getEngine() { return engine; } /** * Sets the value of the engine property. * * @param value * allowed object is {@link String } * */ @Override public void setEngine(String value) { this.engine = value; } /** * Gets the value of the process property. * * @return possible object is {@link String } * */ @Override public String getProcess() { return process; } /** * Sets the value of the process property. 
* * @param value * allowed object is {@link String } * */ @Override public void setProcess(String value) { this.process = value; } /** * Gets the value of the activity property. * * @return possible object is {@link String } * */ @Override public String getActivity() { return activity; } /** * Sets the value of the activity property. * * @param value * allowed object is {@link String } * */ @Override public void setActivity(String value) { this.activity = value; } /** * Gets the value of the name property. * * @return possible object is {@link String } * */ @Override public String getName() { return name; } /** * Sets the value of the name property. * * @param value * allowed object is {@link String } * */ @Override public void setName(String value) { this.name = value; } @Override public Extract.Extractionrules createExtractionrules() { this.extractionrules = new Extractionrules(); return this.extractionrules; } /** * <p> * Java class for anonymous complex type. * * <p> * The following schema fragment specifies the expected content contained within this class. * * <pre> * &lt;complexType&gt; * &lt;complexContent&gt; * &lt;restriction base="{http://www.w3.org/2001/XMLSchema}anyType"&gt; * &lt;sequence&gt; * &lt;element name="extractrule" type="{http://www.faizsiegeln.com/schema/njams/extracts/2012-10-22/}extractruleType" maxOccurs="unbounded"/&gt; * &lt;/sequence&gt; * &lt;/restriction&gt; * &lt;/complexContent&gt; * &lt;/complexType&gt; * </pre> * * */ @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = {"extractrule"}) public static class Extractionrules implements IExtractionrules<ExtractruleType> { @XmlElement(required = true) protected List<ExtractruleType> extractrule; /** * Gets the value of the extractrule property. * * <p> * This accessor method returns a reference to the live list, not a snapshot. Therefore any modification you make to the * returned list will be present inside the JAXB object. This is why there is not a <CODE>set</CODE> method for the * extractrule property. * * <p> * For example, to add a new item, do as follows: * * <pre> * getExtractrule().add(newItem); * </pre> * * * <p> * Objects of the following type(s) are allowed in the list {@link ExtractruleType } * * */ public List<ExtractruleType> getExtractrule() { if (extractrule == null) { extractrule = new ArrayList<>(); } return this.extractrule; } @Override public ExtractruleType createExtractruleType() { return new ExtractruleType(); } } }
# Copyright (c) 2020 Club Raiders Project
# https://github.com/HausReport/ClubRaiders
#
# SPDX-License-Identifier: BSD-3-Clause
import json

if __name__ == '__main__':
    # A .jsonl file contains one JSON document per line, so parse each
    # line individually; json.load() would fail on a multi-line file.
    with open('addresses.jsonl') as f:
        records = [json.loads(line) for line in f if line.strip()]
    # Print the parsed records, one per line
    for record in records:
        print(record)
package main // Options are things that are specified about the game upon table creation (before the game starts) // All of these are stored in the database as columns of the "games" table // A pointer to these options is copied into the Game struct when the game starts for convenience type Options struct { NumPlayers int `json:"numPlayers"` // StartingPlayer is a legacy field for games prior to April 2020 StartingPlayer int `json:"startingPlayer"` VariantID int `json:"variantID"` VariantName string `json:"variantName"` Timed bool `json:"timed"` TimeBase int `json:"timeBase"` TimePerTurn int `json:"timePerTurn"` Speedrun bool `json:"speedrun"` CardCycle bool `json:"cardCycle"` DeckPlays bool `json:"deckPlays"` EmptyClues bool `json:"emptyClues"` OneExtraCard bool `json:"oneExtraCard"` OneLessCard bool `json:"oneLessCard"` AllOrNothing bool `json:"allOrNothing"` DetrimentalCharacters bool `json:"detrimentalCharacters"` TableName string `json:"tableName,omitempty"` MaxPlayers int `json:"maxPlayers,omitempty"` } // ExtraOptions are extra specifications for the game; they are not recorded in the database // Similar to Options, a pointer to ExtraOptions is copied into the Game struct for convenience type ExtraOptions struct { // -1 if an ongoing game, 0 if a JSON replay, // a positive number if a database replay (or a "!replay" table) DatabaseID int // Normal games are written to the database // Replays are not written to the database NoWriteToDatabase bool JSONReplay bool // Replays have some predetermined values // Some special game types also use these fields (e.g. "!replay" games) CustomNumPlayers int CustomCharacterAssignments []*CharacterAssignment CustomSeed string CustomDeck []*CardIdentity CustomActions []*GameAction Restarted bool // Whether or not this game was created by clicking "Restart" in a shared replay SetSeedSuffix string // Parsed from the game name for "!seed" games SetReplay bool // True during "!replay" games SetReplayTurn int // Parsed from the game name for "!replay" games } // To minimize JSON output, we need to use pointers to each option instead of the normal type type OptionsJSON struct { StartingPlayer *int `json:"startingPlayer,omitempty"` Variant *string `json:"variant,omitempty"` Timed *bool `json:"timed,omitempty"` TimeBase *int `json:"timeBase,omitempty"` TimePerTurn *int `json:"timePerTurn,omitempty"` Speedrun *bool `json:"speedrun,omitempty"` CardCycle *bool `json:"cardCycle,omitempty"` DeckPlays *bool `json:"deckPlays,omitempty"` EmptyClues *bool `json:"emptyClues,omitempty"` OneExtraCard *bool `json:"oneExtraCard,omitempty"` OneLessCard *bool `json:"oneLessCard,omitempty"` AllOrNothing *bool `json:"allOrNothing,omitempty"` DetrimentalCharacters *bool `json:"detrimentalCharacters,omitempty"` } func NewOptions() *Options { return &Options{ NumPlayers: 0, // This will be written when the game starts StartingPlayer: 0, VariantID: 0, VariantName: DefaultVariantName, Timed: false, TimeBase: 0, TimePerTurn: 0, Speedrun: false, CardCycle: false, DeckPlays: false, EmptyClues: false, OneExtraCard: false, OneLessCard: false, AllOrNothing: false, DetrimentalCharacters: false, TableName: "", MaxPlayers: 0, } } // GetModifier computes the integer modifier for the game options, // corresponding to the "ScoreModifier" constants in "constants.go" func (o *Options) GetModifier() Bitmask { var modifier Bitmask if o.DeckPlays { modifier.AddFlag(ScoreModifierDeckPlays) } if o.EmptyClues { modifier.AddFlag(ScoreModifierEmptyClues) } if o.OneExtraCard { 
modifier.AddFlag(ScoreModifierOneExtraCard) } if o.OneLessCard { modifier.AddFlag(ScoreModifierOneLessCard) } if o.AllOrNothing { modifier.AddFlag(ScoreModifierAllOrNothing) } return modifier }
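The GetModifier method above illustrates a common pattern: packing independent boolean options into a single integer by OR-ing together bit flags. A minimal sketch of the same idea follows, written in Python for brevity; the flag values here are hypothetical stand-ins for the ScoreModifier constants defined in constants.go.

from enum import IntFlag

# Hypothetical values; the real constants live in constants.go
class ScoreModifier(IntFlag):
    DECK_PLAYS     = 1 << 0
    EMPTY_CLUES    = 1 << 1
    ONE_EXTRA_CARD = 1 << 2
    ONE_LESS_CARD  = 1 << 3
    ALL_OR_NOTHING = 1 << 4

def get_modifier(deck_plays, empty_clues, one_extra, one_less, all_or_nothing):
    # Mirror of Options.GetModifier: OR together the flags that are set
    m = ScoreModifier(0)
    if deck_plays:
        m |= ScoreModifier.DECK_PLAYS
    if empty_clues:
        m |= ScoreModifier.EMPTY_CLUES
    if one_extra:
        m |= ScoreModifier.ONE_EXTRA_CARD
    if one_less:
        m |= ScoreModifier.ONE_LESS_CARD
    if all_or_nothing:
        m |= ScoreModifier.ALL_OR_NOTHING
    return m

print(int(get_modifier(True, False, True, False, False)))  # 5 = DECK_PLAYS | ONE_EXTRA_CARD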
Instance Optimal Learning

We consider the following basic learning task: given independent draws from an unknown distribution over a discrete support, output an approximation of the distribution that is as accurate as possible in $\ell_1$ distance (equivalently, total variation distance, or "statistical distance"). Perhaps surprisingly, it is often possible to "de-noise" the empirical distribution of the samples to return an approximation of the true distribution that is significantly more accurate than the empirical distribution, without relying on any prior assumptions on the distribution. We present an instance optimal learning algorithm which, up to an additive sub-constant factor, optimally performs this de-noising for every distribution for which such a de-noising is possible. More formally, given $n$ independent draws from a distribution $p$, our algorithm returns a labelled vector whose expected distance from $p$ is equal to the minimum possible expected error that could be obtained by any algorithm that knows the true unlabeled vector of probabilities of distribution $p$ and simply needs to assign labels, up to an additive subconstant term that is independent of $p$ and depends only on the number of samples, $n$. This somewhat surprising result has several conceptual implications, including the fact that, for any large sample, Bayesian assumptions on the "shape" or bounds on the tail probabilities of a distribution over discrete support are not helpful for the task of learning the distribution.

Introduction

Given independent draws from an unknown distribution over an unknown discrete support, what is the best way to aggregate those samples into an approximation of the true distribution? This is, perhaps, the most fundamental learning problem. The most obvious and most widely employed approach is to simply output the empirical distribution of the sample. To what extent can one improve over this naive approach? To what extent can one "de-noise" the empirical distribution, without relying on any assumptions on the structure of the underlying distribution? Perhaps surprisingly, there are many settings in which de-noising can be done without a priori assumptions on the distribution. We begin by presenting two motivating examples illustrating rather different settings in which significant de-noising of the empirical distribution is possible.

Example 1. Suppose you are given 100,000 independent draws from some unknown distribution, and you find that there are roughly 1,000 distinct elements, each of which appears roughly 100 times. Furthermore, suppose you compute the variance in the number of times the different domain elements occur, and it is close to 100. Based on these samples, you can confidently deduce that the true distribution is very close to a uniform distribution over 1,000 domain elements, and that the true probability of a domain element seen 90 times is roughly the same as that of an element observed 110 times. The basic reasoning is as follows: if the true distribution were the uniform distribution, then the noise from the random sampling would exhibit the observed variance in the number of occurrences; if there were any significant variation in the true probabilities of the different domain elements then, combined with the noise added via the random sampling, the observed variance would be significantly larger than 100. The ℓ1 error of the empirical distribution would be roughly 0.1, whereas the "de-noised" distribution would have error less than 0.01.
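Before the second example, here is a quick numerical check of Example 1's reasoning: a short Python simulation (our own illustration, with an arbitrary seed; not part of the paper).

import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 1_000

samples = rng.integers(0, k, size=n)           # uniform over k elements
counts = np.bincount(samples, minlength=k)

print("mean count:", counts.mean())             # ~100
print("variance of counts:", counts.var())      # ~100: consistent with pure sampling noise

emp = counts / n                                # empirical distribution
print("l1 error of empirical:", np.abs(emp - 1.0 / k).sum())   # ~0.1
# Assigning every observed element probability 1/k (the "de-noised" estimate)
# drives the l1 error toward 0 once the uniform structure has been deduced.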
Example 2. Suppose you are given 1,000 independent draws from an unknown distribution, and all 1,000 samples are unique domain elements. You can safely conclude that the combined probability of all the observed domain elements is likely to be much less than 1/100; if this were not the case, one would expect at least one of the observed elements to occur twice in the sample. Hence the empirical distribution of the samples is likely to have ℓ1 distance nearly 2 from the true distribution, whereas this reasoning would suggest that one should return the zero vector, which would have ℓ1 distance at most 1.

In both of the above examples, the key to the "de-noising" was the realization that the true distributions possessed some structure: structure that was both easily deduced from the samples and, once known, could be leveraged to de-noise the empirical distribution. Our main result is an algorithm which de-noises the empirical distribution as much as is possible, whenever such de-noising is possible. Specifically, our algorithm achieves, up to a subconstant term, the best error that any algorithm could achieve, even an algorithm that is given the unlabeled vector of true probabilities and simply needs to correctly label the probabilities.

Theorem 1. There is a function err(n) that goes to zero as n gets large, and an algorithm which, given n independent draws from any distribution p of discrete support, outputs a labelled vector q such that E[||q − p||_1] ≤ opt(p, n) + err(n), where opt(p, n) is the minimum expected error that any algorithm could achieve on the following learning task: given p, and given n samples drawn independently from a distribution that is identical to p up to an arbitrary relabeling of the domain elements, learn the distribution.

The performance guarantees of the above algorithm can be equivalently stated as follows: let S ←_n p denote that S is a set of n independent draws from distribution p, and let π(p) denote a distribution that is identical to p, up to relabeling the domain elements according to a labeling scheme π chosen from a sufficiently large support. Our algorithm, which maps a set of samples S to a labelled vector q = f(S), satisfies the following: for any distribution p, E_{S ←_n π(p)}[ ||f(S) − π(p)||_1 ] ≤ opt(p, n) + o_n(1), where the o_n(1) term goes to 0 as n → ∞, is independent of p, and depends only on n.

One surprising implication of the above result is that, for large samples, prior knowledge of the "shape" of the distribution, or knowledge of the rate of decay of the tails of the distribution, cannot improve the accuracy of the learning task. For example, typical Bayesian assumptions that word frequencies in natural language follow Zipf distributions, or that the frequencies of different species of bacteria in the human gut follow Gamma distributions or various power-law distributions, etc., can improve the expected error of the learned distribution by at most subconstant factors. The key intuition behind this optimal de-noising, and the core of our algorithm, is the ability to very accurately approximate the unlabeled vector of probabilities of the true distribution, given access to independent samples. In some sense, our result can be interpreted as the following statement: up to an additive subconstant factor, one can always recover an approximation of the unlabeled vector of probabilities more accurately than one can disambiguate and label such a vector.
That is, if one has enough samples to accurately label the unlabeled vector of probabilities, then one also has more than enough samples to accurately learn that unlabeled vector. Of course, this statement can only hold up to some additive error term, as the following example illustrates.

Example 3. Given samples drawn from a distribution supported on two unknown domain elements, if one is told that both probabilities are exactly 1/2, then as soon as one observes both domain elements, one knows the distribution exactly, and thus the expected error given n samples will be O(1/2^n), as this bounds the probability that one of the two domain elements is not observed in a set of n samples. Without the prior knowledge that the two probabilities are 1/2, the best algorithm will have expected error ≈ 1/√n.

The above example illustrates that prior knowledge of the vector of probabilities can be helpful. Our result, however, shows that this phenomenon only occurs to a significant extent for very small sample sizes; for larger samples, no distribution exists for which prior knowledge of the vector of probabilities improves the expected error of a learning algorithm beyond a universal subconstant additive term that goes to zero as a function of the sample size.

Our algorithm proceeds via two steps. In the first step, the samples are used to output an approximation of the vector of true probabilities. We show that, with high probability over the randomness of the n independent draws from the distribution, we accurately recover the portion of the vector of true probabilities consisting of values asymptotically larger than 1/(n log n). The following proposition formally quantifies this initial step:

Proposition 1. There exists an algorithm such that, for any function f(n) = ω_n(1) that goes to infinity as n gets large (e.g. f(n) = log log n), there is a function o_n(1) of n that goes to zero as n gets large, such that given n samples drawn independently from any distribution p, the algorithm outputs an unlabeled vector, q, such that, with probability 1 − e^(−n^Ω(1)), there exists a labeling π(q) of the vector q whose ℓ1 distance from p, restricted to the domain elements x with p(x) ≥ f(n)/(n log n), is at most the o_n(1) term, where p(x) denotes the true probability of domain element x in distribution p.

The power of the above proposition lies in the following trivial observation: for any function g(n) = o(1/n), the domain elements x that both occur in the n samples and have true probability p(x) < g(n) can account for at most o(1) probability mass, in aggregate. Hence the fact that Proposition 1 only guarantees that we are learning the probabilities above 1/(n log n) = o(1/n) gives rise to, at most, an ℓ1 error of o(1) in our final returned vector. The second step of our algorithm leverages the accurate approximation of the unlabeled vector of probabilities to optimally assign probability values to each of the observed domain elements. This step of the algorithm can be interpreted as solving the following optimization problem: given n independent draws from a distribution, and an unlabeled vector v representing the true vector of probabilities of distribution p, for each observed domain element x, assign the probability q(x) that minimizes the expected ℓ1 distance |q(x) − p(x)|. This optimization task is well-defined, though computationally intractable.
Nevertheless, we show that a very natural and computationally tractable scheme, which assigns a probability q(x) that is a function of only v and the number of occurrences of x, incurs an expected error within o(1) of the expected error of the optimal scheme (which assigns q(x) as a function of v and the entire set of samples). Beyond yielding a near optimal learning algorithm, there are several additional benefits to our approach of first accurately reconstructing the unlabeled vector of probabilities. For instance, such an unlabeled vector allows us to estimate properties of the underlying distribution, including the error of our returned vector and the error in our estimate of each observed domain element's probability.

Related Work

Perhaps the first work on correcting the empirical distribution, which serves as the jumping-off point for nearly all of the subsequent work on this problem that we are aware of, is the work of Turing and I. J. Good (see also ). In the context of their work at Bletchley Park as part of the British WWII effort to crack the German Enigma machine ciphers, Turing and Good developed a simple estimator that corrected the empirical distribution, in some sense to capture the "missing" probability mass of the distribution. This estimator and its variants have been employed widely, particularly in the context of natural language processing and other settings in which significant portions of the distribution are comprised of domain elements with small probabilities (e.g. ). In its most simple form, the Good-Turing frequency estimation scheme estimates the total probability of all domain elements that appear exactly i times in a set of n samples as (i+1)·F_{i+1}/n, where F_j is the total number of species that occur exactly j times in the samples. The total probability mass consisting of domain elements that are not seen in the samples (the "missing" mass or, equivalently, the probability that the next sample drawn will be a new domain element that has not been seen previously) can be estimated via this formula as F_1/n, namely the fraction of the samples consisting of domain elements seen exactly once. The Good-Turing estimate is especially suited to estimating the total mass of elements that appear few times; for more frequently occurring domain elements, this estimate has high variance: for example, if F_{i+1} = 0, as will be the case for most large i, then the estimate is 0. However, for frequently occurring domain elements, the empirical distribution will give an accurate estimate of their probability mass. There is an extremely long and successful line of work, spanning the past 60 years, from the computer science, statistics, and information theory communities, proposing approaches to "smoothing" the Good-Turing estimates, and combining such smoothed estimates with the empirical distribution (e.g. ). Our approach, which first recovers an estimate of the unlabeled vector of probabilities of the true distribution, deviates fundamentally from this previous work, which all attempts to accurately estimate the total probability mass of the domain elements observed i times. As the following example illustrates, even if one knows the exact total probability comprised of the elements observed i times, for all i, such knowledge cannot be used to yield an optimal learning algorithm, and could result in an ℓ1 error that is a factor of two larger than that of our approach.
Example 4. Consider n independent draws from a distribution in which 90% of the domain elements occur with probability 10/n, and the remaining 10% occur with probability 11/n. All variants of the Good-Turing frequency estimation scheme would end up, at best, assigning probability 10.1/n to most of the domain elements, incurring an ℓ1 error of roughly 0.2. This is because, for elements seen roughly 10 times, the scheme would first calculate that the average mass of such elements is 10.1/n, and then assign this probability to all such elements. Our scheme, on the other hand, would realize that approximately 90% of such elements have probability 10/n and 10% have probability 11/n, and would then assign the probability minimizing the expected error; namely, in this case, our algorithm would assign the median probability, 10/n, to all such elements, incurring an ℓ1 error of approximately 0.1.

Worst-case vs Instance Optimal Testing and Learning

Sparked by the seminal work of Goldreich, Goldwasser and Ron, and that of Batu et al., there has been a long line of work considering distributional property testing, estimation, and learning questions from a worst-case standpoint, typically parameterized via an upper bound on the support size of the distribution from which the samples are drawn (e.g. ). The desire to go beyond this type of worst-case analysis and develop algorithms which provably perform better on "easy" distributions has led to two different veins of further work. One vein considers different common types of structure that a distribution might possess, such as monotonicity, unimodality, skinny tails, etc., and how such structure can be leveraged to yield improved algorithms. While this direction is still within the framework of worst-case analysis, the emphasis is on developing a more nuanced understanding of why "easy" instances are easy. Another vein of very recent work beyond worst-case analysis (of which this paper is an example) seeks to develop "instance-optimal" algorithms that are capable of exploiting whatever structure is present in the instance. For the problem of identity testing (given the explicit description of a distribution p, deciding whether a set of samples was drawn from p versus from a distribution with ℓ1 distance at least ε from p), recent work gave an algorithm and an explicit function of p and ε that represents the sample complexity of this task, for each p. In a similar spirit, with the dual goals of developing optimal algorithms as well as understanding the fundamental limits of when such instance-optimality is not possible, Acharya et al. have a line of work from the perspective of competitive analysis. Broadly, this work explores the following question: to what extent can an algorithm perform as well as if it knew, a priori, the structure of the problem instance on which it was run? For example, the work considers the two-distribution identity testing question: given samples drawn from two unknown distributions, p and q, how many samples are required to distinguish the case that p = q from ||p − q||_1 ≥ ε? They show that if n_{p,q} is the number of samples required by an algorithm that knows, ahead of time, the unlabeled vectors of probabilities of p and q, then the sample complexity is bounded by n_{p,q}^{3/2}, and that, in general, a polynomial blowup is necessary: there exist p, q for which no algorithm can perform this task using fewer than n_{p,q}^{7/6} samples.
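To make the Good-Turing quantities from the Related Work discussion concrete, here is a short Python sketch (our own illustration, over an arbitrary Zipf-like toy distribution): F_i is the number of elements seen exactly i times, the missing mass is estimated as F_1/n, and the total mass of elements seen exactly i times as (i+1)·F_{i+1}/n.

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy Zipf-like distribution over 5,000 elements (an arbitrary choice)
k = 5_000
p = 1.0 / np.arange(1, k + 1)
p /= p.sum()

n = 10_000
sample = rng.choice(k, size=n, p=p)

counts = Counter(sample.tolist())          # occurrences of each element
fingerprint = Counter(counts.values())     # F_i = number of elements seen exactly i times

def gt_mass(i):
    # Good-Turing estimate of the total mass of elements seen exactly i times
    return (i + 1) * fingerprint.get(i + 1, 0) / n

seen = np.zeros(k, dtype=bool)
seen[sample] = True
print("Good-Turing missing-mass estimate F_1/n:", fingerprint.get(1, 0) / n)
print("true missing mass:", p[~seen].sum())
print("estimated mass of elements seen once:", gt_mass(1))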
Relation to prior work. This present paper has two technical parts: the first component is recovering an approximation to the unlabeled vector of probabilities, and the second part is a leveraging of the recovered unlabeled vector of probabilities to output a labeled vector. The majority of the approach and technical machinery that we employ for the first part is based on the ideas and techniques in , particularly a Chebyshev polynomial earthmover scheme, which was also repurposed for a rather different purpose in .

Definition 1. The histogram of a distribution p is the function h_p : (0, 1] → N ∪ {0}, where h_p(x) is equal to the number of domain elements that each occur in distribution p with probability x. Formally, h_p(x) = |{α : p(α) = x}|, where p(α) is the probability mass that distribution p assigns to domain element α. We will also allow for "generalized histograms" in which h_p does not necessarily take integral values.

In analogy with the histogram of a distribution, it will be convenient to have an unlabeled representation of the set of samples. We define the fingerprint of a set of samples, which essentially removes all the label-information:

Definition 2. Given samples X = (x_1, . . . , x_n), the associated fingerprint, F = (F_1, F_2, . . .), is the "histogram of the histogram" of the sample. Formally, F is the vector whose i-th component, F_i, is the number of elements in the domain that occur exactly i times in X. We note that in some of the literature, the fingerprint is alternately termed the pattern, histogram, histogram of the histogram or collision statistics of the samples.

The following metric will be useful for comparing histograms:

Definition 3. For two distributions p_1, p_2 with respective histograms h_1, h_2, and a real number τ ≥ 0, we define the τ-truncated relative earthmover distance between them, R_τ(p_1, p_2) := R_τ(h_1, h_2), as the minimum over all schemes of moving the probability mass in the first histogram to yield the second histogram, where the cost per unit mass of moving from probability x to probability y is |log max(x, τ) − log max(y, τ)|.

The following fact, whose proof is contained in Appendix A, relates the τ-truncated relative earthmover distance between two distributions, p_1, p_2, to an analogous but weaker statement about the ℓ1 distance between p_1 and a distribution obtained from p_2 by choosing an optimal relabeling of the support:

Fact 1. Given two distributions p_1, p_2 satisfying R_τ(p_1, p_2) ≤ ε, there exists a relabeling π of the support of p_2 such that Σ_i |max(p_1(i), τ) − max(p_2(π(i)), τ)| ≤ 2ε.

Recovering the histogram

For clarity of exposition, we state the algorithm and its analysis in terms of two positive constants, B, C, which can be defined arbitrarily provided they satisfy certain inequalities. Algorithm 1 proceeds as follows:

• Define the set X := {1/n^2, 2/n^2, 3/n^2, . . . , (n^B + n^C)/n}.
• For each x ∈ X, define the associated variable v_x, and consider the solution to the following linear program.
• Let (v_x) be the solution to the linear program; form the generalized histogram h_LP by setting h_LP(x) := v_x for each x ∈ X, and then, for each integer i > n^B + 2n^C, incrementing h_LP(i/n) by F_i.

The following theorem quantifies the performance of the above algorithm:

Theorem 2. There exists an absolute constant c such that for sufficiently large n and any w, given n independent draws from a distribution p with histogram h, with probability 1 − e^(−n^Ω(1)) the generalized histogram h_LP returned by Algorithm 1 satisfies R_{w/(n log n)}(h, h_LP) ≤ c/√w.

By Fact 1, this theorem is stronger than Proposition 1, modulo the fact that the entries of the histogram returned by the above algorithm are non-integral.
In Appendix C we provide a simple algorithm that rounds a generalized histogram to an (integral) histogram while changing it very little in relative earthmover distance R_0(·, ·). Together with the above theorem, this yields the specific statement of Proposition 1. The proof of the above theorem relies on an explicit earthmover scheme that leverages a Chebyshev polynomial construction similar to that employed in . The two key properties of the scheme are: 1) the truncated relative earthmover cost of the scheme is small; and 2) given two histograms that have similar expected fingerprints, the results of applying the scheme to the pair of histograms will be very close to each other in truncated relative earthmover distance. The technical details differ slightly from those in .

By the first condition of "faithful", none of these probabilities are above (2 log^2 n)/n for large enough n. Further, let S_{j,k} be the multiset of probabilities of those domain elements from bucket k of h that each get seen exactly j times in the sample. The total error of our estimate m_j on bucket k is thus Σ_{x ∈ S_{j,k}} |m_j − x|, which, since buckets have width 1/(n log^2 n), is within |S_{j,k}|/(n log^2 n) of |S_{j,k}| · |m_j − k/(n log^2 n)|, where we have approximated each x by the left endpoint of the bucket containing x. By the second condition of "faithful", |S_{j,k}| is within n^0.6 of its expectation, B_poi(j, k), and since by assumption m_j < (2 log^2 n)/n, we have that our previous error bound |S_{j,k}| · |m_j − k/(n log^2 n)| is within (2 log^2 n)/n^0.4 of B_poi(j, k) · |m_j − k/(n log^2 n)|. We rewrite this final expression via the definition of B_poi as Σ_{x : h_k(x) ≠ 0} |m_j − k/(n log^2 n)| · h(x) · poi(nx, j). We compare this final expression to the portion of the deviation dev_{j,n}(h, m_j) that comes from bucket k, namely Σ_{x : h_k(x) ≠ 0} |m_j − x| · h(x) · poi(nx, j); since Σ_{x : h_k(x) ≠ 0} h(x) · poi(nx, j) = B_poi(j, k) and x is within 1/(n log^2 n) of k/(n log^2 n), the difference between them is clearly bounded by B_poi(j, k)/(n log^2 n). Using the triangle inequality to add up the three error terms we have accrued yields that our estimate for the ℓ1 error we make for elements seen j times from bucket k is accurate to within |S_{j,k}|/(n log^2 n) + (2 log^2 n)/n^0.4 + B_poi(j, k)/(n log^2 n). We sum this error bound over all 2 log^4 n buckets k and all indices j < log^2 n. The middle term, (2 log^2 n)/n^0.4, clearly sums up to o(1) over all j, k pairs. Further, since |S_{j,k}| is within n^0.6 of B_poi(j, k) by the definition of faithful, the sum of the first term is within o(1) of the sum of the third term, and it remains only to analyze the third term involving B_poi(j, k). From its definition, Σ_{j,k} B_poi(j, k) is the expected number of distinct items seen when making Poi(n) draws from the distribution, throwing out those elements which violate the j and k constraints; hence this sum over all j, k pairs is at most n, bounding the total error of our "dev" estimates by O(1/log^2 n), as desired.

Proof of Theorem 1

We now assemble the pieces and prove Theorem 1.

Proof of Theorem 1. Consider the output of Algorithm 1 as run in the first step of Algorithm 2. Proposition 1 outlines two cases: with o(1) probability the closeness property outlined in the proposition fails to hold, and in this case Algorithm 2 may output a distribution up to ℓ1 distance 2 from the true distribution; because this is a low-probability event, it contributes 2·o(1) = o(1) to the expected error.
Otherwise, u is close to h, and the fattened version ū is similarly close, which lets us apply Lemma 4 to conclude that Σ_{j<log^2 n} dev_{j,n}(h, m_{ū,j,n}) ≤ o(1) + Σ_{j<log^2 n} dev_{j,n}(h, m_{h,j,n}). Corollary 1 says that Σ_{j<log^2 n} dev_{j,n}(h, m_{h,j,n}) essentially lower-bounds the optimal error opt(h, n), and we combine this with the previous bound. Lemma 1 guarantees that the samples will be faithful except with o(1) probability, which, as above, means that even if these unfaithful cases contribute the maximum possible distance 2 to the ℓ1 error, the expected contribution from these cases is still o(1), and thus we will assume a faithful set of samples below. Lemmas 5 and 6 imply that, for any faithful sample, the error made by Algorithm 2 in attributing those elements seen fewer than log^2 n times is within o(1) of Σ_{j<log^2 n} dev_{j,n}(h, m_{ū,j,n}), and hence at most o(1) worse than opt(h, n). Condition 1 of the definition of faithful (Definition 6) implies that all of the elements seen at least log^2 n times originally had probability at least (log^2 n − log^1.75 n)/n, and that the relative error between the number of times each of these elements is seen and its expectation is thus at most log^{−1/4} n. Thus using the empirical estimate on those elements appearing at least log^2 n times, as Algorithm 2 does, contributes O(log^{−1/4} n) total error on these elements. Thus all the sources of error add up to at most o(1) worse than opt(h, n) in expectation, yielding the theorem.

A Proof of Fact 1

For convenience, we restate Fact 1:

Fact 1. Given two distributions p_1, p_2 satisfying R_τ(p_1, p_2) ≤ ε, there exists a relabeling π of the support of p_2 such that Σ_i |max(p_1(i), τ) − max(p_2(π(i)), τ)| ≤ 2ε.

Proof of Fact 1. We relate relative earthmover distance to the minimum ℓ1 distance between relabeled histograms, with a proof that extends to the case where both distances are defined above a cutoff threshold τ. The main idea is to point out that "minimum rearranged" ℓ1 distance can be expressed in a very similar form to earthmover distance. Given two histograms h_1, h_2, the minimum ℓ1 distance between any labelings of h_1 and h_2 is clearly the ℓ1 distance between the labelings where we match up elements of the two histograms in sorted order. Further, this is seen to equal the (regular, not relative) earthmover distance between the histograms h_1 and h_2, where we consider there to be h_1(x) "histogram mass" at each location x (instead of h_1(x)·x "probability mass" as we did for relative earthmover distance), and place extra histogram entries at 0 as needed so the two histograms have the same total mass. Given this correspondence, consider an optimal relative earthmoving scheme between h_1 and h_2, and in particular, consider an arbitrary component of this scheme, where some probability mass α gets moved from some location x in one of the distributions to some location y in the other, at cost α·|log(max(x, τ)/max(y, τ))|, and suppose without loss of generality that x ≥ y. We now reinterpret this move in the ℓ1 sense, translating from moving probability mass to moving histogram mass. In the non-relative earthmover problem, α probability mass at location x corresponds to α/x "histogram mass" at x, which we then move to y at cost (max(x, τ) − max(y, τ))·(α/x); however, to simulate the relative earthmover scheme, we need the full α/y mass to appear at y, so we move the remaining α/y − α/x mass up from 0, at cost (α/y − α/x)·(max(y, τ) − τ).
To relate these 3 costs (the original relative earthmover cost, and the two components of the non-relative histogram earthmover cost), we note that if both x and y are less than or equal to τ, then all 3 costs are 0. Otherwise, if x, y > τ, then the first component of the histogram cost equals (1 − y/x)·α, and the second is bounded by this, as (α/y − α/x)·(max(y, τ) − τ) < (α/y − α/x)·y = (1 − y/x)·α. Further, for the case under consideration, where τ < y ≤ x, we have (1 − y/x)·α ≤ α·log(x/y), which equals the relative earthmover cost. Thus the histogram cost in this case is at most twice the relative earthmover cost. In the remaining case, y ≤ τ < x, and the second component of the histogram cost equals 0 because max(y, τ) − τ = 0. The first component simplifies to (max(x, τ) − max(y, τ))·(α/x) = (1 − τ/x)·α ≤ α·log(x/τ), where this last expression is the relative earthmover cost. Thus in all cases, the histogram cost is at most twice the relative earthmoving cost. Since the histogram cost was one particular "histogram moving scheme", and, as we argued above, the "minimum permuted ℓ1 distance" is the minimum over all such schemes, we conclude that this ℓ1 distance is at most twice the relative earthmover distance, as desired.

B Proof of Theorem 2

In this section we prove Theorem 2, characterizing the performance of Algorithm 1, which recovers an accurate approximation of the histogram of the true distribution. For convenience, we restate Theorem 2:

Theorem 2. There exists an absolute constant c such that for sufficiently large n and any w, given n independent draws from a distribution p with histogram h, with probability 1 − e^(−n^Ω(1)) the generalized histogram h_LP returned by Algorithm 1 satisfies R_{w/(n log n)}(h, h_LP) ≤ c/√w.

The proof decomposes into three parts. In Appendix B.1 we compartmentalize the probabilistic portion of the proof by defining a set of conditions that are satisfied with high probability, such that if the samples in question satisfy the properties, then the algorithm will succeed. This section is analogous to the definition of a "faithful" set of samples of Definition 6, and we re-use the terminology of "faithful". In Appendix B.2 we show that, provided the samples in question are "faithful", there exists a feasible solution to the linear program defined in Algorithm 1 which 1) has small objective function value, and 2) is very close to the true histogram from which the samples were drawn, in terms of τ-truncated relative earthmover distance, for an appropriate choice of τ. In Appendix B.3 we show that if two feasible solutions to the linear program defined in Algorithm 1 both have small objective function value, then they are close in τ-truncated relative earthmover distance. The key tool here is a Chebyshev polynomial earthmover scheme. Finally, in Appendix B.4, we put together the above pieces to prove Theorem 2: given the existence of a feasible point that has low objective function value and is close to the true histogram, and the fact that any two solutions that both have low objective function value must be close to each other, it follows that the solution to the linear program that is found in Algorithm 1 must be close to the true histogram.

B.1 Compartmentalizing the Probabilistic Portion

The following condition defines what it means for a set of samples drawn from a distribution to be "faithful" with respect to positive constants B, D ∈ (0, 1):
Definition 9. A set of n samples with fingerprint F, drawn from a distribution p with histogram h, is said to be faithful with respect to positive constants B, D ∈ (0, 1) if the following conditions hold:

• For all i, the fingerprint entry F_i differs from its expectation by at most max(E[F_i]^(1/2+D), n^(B(1/2+D))).
• For all domain elements i, letting p(i) denote the true probability of i, the number of times i occurs in the samples from p differs from n·p(i) by at most max((n·p(i))^(1/2+D), n^(B(1/2+D))).
• The "large" portion of the fingerprint F does not contain too many more samples than expected: specifically, the total number of samples falling on domain elements observed more than n^B + 2n^C times exceeds its expectation by at most n^(1/2+D).

The following proposition is proven via the standard "Poissonization" technique and Chernoff bounds.

Proposition 2. For any constants B, D ∈ (0, 1), there is a constant α > 0 and an integer n_0 such that for any n ≥ n_0, a set of n samples consisting of independent draws from a distribution is "faithful" with respect to B, D with probability at least 1 − e^(−n^α).

Proof. We first analyze the case of a Poi(n)-sized sample drawn from a distribution with histogram h. In this case, the number of times each domain element occurs is independent of the number of times the other domain elements occur, and thus each fingerprint entry F_i is the sum of independent random 0/1 variables, representing whether each domain element occurred exactly i times in the samples (i.e. contributing 1 towards F_i). By independence, Chernoff bounds apply. We split the analysis into two cases, according to whether E[F_i] ≥ n^B. In the case that E[F_i] < n^B, we leverage the basic Chernoff bound that if X is the sum of independent 0/1 random variables with E[X] ≤ S, then for any δ ∈ (0, 1), Pr[|X − E[X]| ≥ δS] ≤ 2e^(−δ^2 S/3). Applied to our present setting, where F_i is a sum of independent 0/1 random variables and E[F_i] < n^B, taking S = n^B bounds the probability that F_i deviates from its expectation by more than n^(B(1/2+D)) by 2e^(−n^(2BD)/3). In the case that E[F_i] ≥ n^B, the same Chernoff bound, with S = E[F_i], yields the corresponding bound for a deviation of E[F_i]^(1/2+D). A union bound over the first n fingerprint entries shows that, for a set of samples consisting of Poi(n) draws, the probability that any of the fingerprint entries violate the first condition of faithful is at most n·2e^(−n^(2BD)/3) ≤ e^(−n^Ω(1)), as desired.

For the second condition of "faithful", in analogy with the above argument, an analogous tail bound holds for Poisson random variables: for any λ and δ ∈ (0, 1), Pr[|Poi(λ) − λ| ≥ δλ] ≤ 2e^(−δ^2 λ/3). Hence for x = n·p(i) ≥ n^B, the probability that the number of occurrences of domain element i differs from its expectation of n·p(i) by at least (n·p(i))^(1/2+D) is bounded by 2e^(−(n·p(i))^(2D)/3) ≤ e^(−n^Ω(1)); similarly in the case that x = n·p(i) < n^B, with the deviation n^(B(1/2+D)).

For the third condition, by the Poisson tail bounds of the previous paragraph, the total aggregate number of occurrences of all elements with probability greater than (n^B + n^C)/n will differ from its expectation by at most n^(1/2+D), with probability 1 − e^(−n^Ω(1)). Additionally, by the first condition of "faithful", with probability 1 − e^(−n^Ω(1)) no domain element i with p(i) < (n^B + n^C)/n will appear more than n^B + 2n^C times. Hence with probability 1 − e^(−n^Ω(1)) all elements that contribute to the sum Σ_{i > n^B + 2n^C} F_i will have probability greater than (n^B + n^C)/n. The third condition then follows by a union bound over these two e^(−n^Ω(1)) failure probabilities. Thus we have shown that, provided we are considering a sample size of Poi(n), the probability that the conditions hold is at least 1 − e^(−n^Ω(1)).
To conclude, note that Pr[Poi(n) = n] > 1/(3√n), and hence the probability that the conditions do not hold for a set of exactly n samples (namely, the probability that they do not hold for a set of Poi(n) samples, conditioned on the sample size being exactly n) is at most a factor of 3√n larger; hence this probability of failure is still e^(−n^Ω(1)), as desired.

B.2 Existence of a Good Feasible Point

Proof. Let (v_x) be defined as follows: initialize (v_x) to be identically zero. For each y ≤ (n^B + n^C)/n s.t. h(y) > 0, increment v_x by h(y)·y/x, where x = min{x ∈ X : x ≥ y}. Finally, let m denote the remaining discrepancy in probability mass: if m > 0, increment v_x by m/x for x = (n^B + n^C)/n; if m < 0, arbitrarily reduce entries of (v_x) until a total of |m| units of mass have been removed.

We first argue that the τ-truncated relative earthmover distance is small, and then will argue about the objective function value. Let h′ denote the histogram obtained by appending the empirical fingerprint entries F_i, for i > n^B + 2n^C, to (v_x). We construct an earthmoving scheme between h and h′ as follows: 1) for all y ≤ (n^B + n^C)/n s.t. h(y) > 0, we move h(y)·y mass to location x = min{x ∈ X : x ≥ y}; 2) for each domain element i that occurs more than n^B + 2n^C times, we move p(i) mass from location p(i) to X_i/n, where X_i denotes the number of occurrences of the i-th domain element; 3) finally, whatever discrepancy remains between h and h′ after the first two earthmoving phases, we move to probability n^B/n. Clearly this is an earthmoving scheme. For τ ≥ 1/n^(3/2), the τ-truncated relative earthmover cost of the first phase is trivially at most log((1/n^(3/2) + 1/n^2)/(1/n^(3/2))) = O(1/√n). By the second condition of "faithful", the relative earthmover cost of the second phase of the scheme is bounded by O(n^(−B(1/2−D))). To bound the cost of the third phase, note that the first phase equates the two histograms below probability n^B/n. By the second condition of "faithful", after the second phase there is at most O(n^(−B(1/2−D))) unmatched probability caused by the discrepancy between X_i/n and p(i) for elements observed at least n^B + 2n^C times. Hence after this O(n^(−B(1/2−D))) discrepancy is moved to probability n^B/n, the entirety of the remaining discrepancy lies in this intermediate probability range.

Definition 10. A bump earthmoving scheme is defined by a sequence of "bump centers" c_0, c_1, . . ., together with functions f_0, f_1, . . . : (0, 1] → R such that Σ_{i=0}^∞ f_i(x) = 1 for each x, and each function f_i may be expressed as a linear combination of Poisson functions, f_i(x) = Σ_{j=0}^∞ a_{ij}·poi(nx, j), such that Σ_{j=0}^∞ |a_{ij}| ≤ β. Given a generalized histogram h, the scheme works as follows: for each x such that h(x) ≠ 0, and each integer i ≥ 0, move x·h(x)·f_i(x) units of probability mass from x to c_i. We denote the histogram resulting from this scheme by (c, f)(h).

Definition 11. A bump earthmoving scheme (c, f) is (ε, τ)-good if for any generalized histogram h the τ-truncated relative earthmover distance between h and (c, f)(h) is at most ε.

Below we define the Chebyshev bumps to be a "third order" trigonometric construction:

Definition 12. The Chebyshev bumps are defined in terms of n as follows. Let s = 0.2 log n. Define g_1(y) = Σ_{j=−s}^{s−1} cos(jy), let g_2 denote the corresponding smoothed bump obtained from g_1, and, for i ∈ {1, . . . , s − 1}, define g_3^i(y) := g_2(y − iπ/s) + g_2(y + iπ/s), with g_3^0 = g_2(y) and g_3^s = g_2(y + π). Let t_i(x) be the linear combination of Chebyshev polynomials so that t_i(cos(y)) = g_3^i(y). We thus define s + 1 functions, the "skinny bumps", to be B_i(x) = t_i(1 − xn/(2s))·Σ_{j=0}^{s−1} poi(xn, j), for i ∈ {0, . . . , s}.
That is, B_i(x) is related to g_3^i(y) by the coordinate transformation x = (2s/n)(1 − cos(y)), and scaling by Σ_{j=0}^{s−1} poi(xn, j). The following proposition characterizes the key properties of the Chebyshev earthmoving scheme: namely, that the scheme is, in fact, an earthmoving scheme, that each bump can be expressed as a low-weight linear combination of Poisson functions, and that the scheme incurs a small truncated relative earthmover cost.

Proposition 5. The Chebyshev earthmoving scheme of Definition 13, defined in terms of n, has the following properties:

• The bumps sum to one, so the Chebyshev earthmoving scheme is a valid earthmoving scheme.
• Each bump may be expressed as a low-weight linear combination of Poisson functions.
• The Chebyshev earthmoving scheme is (O(1/√w), w/(n log n))-good, for any w, where the O notation hides an absolute constant factor.

The proof of the first two bullets of the proposition closely follows the arguments in . For the final bullet, we seek to bound the per-unit-mass relative earthmover cost of, for each i ≥ 0, moving g_3^i(y) mass from (2s/n)(1 − cos(y)) to c_i. By the above comments, it suffices to consider y ∈ [0, π]. We claim that |log(1 − cos(y)) − log(1 − cos(iπ/s))| ≤ 2·|log y − log(iπ/s)|. Indeed, this holds because the derivative of log(1 − cos(y)) is positive, and strictly less than the derivative of 2 log y; this can be seen by noting that the respective derivatives are sin(y)/(1 − cos(y)) and 2/y, and we claim that the second expression is always greater. To compare the two expressions, cross-multiply and take the difference, to yield y·sin(y) − 2 + 2·cos(y), which we show is always at most 0 by noting that it is 0 when y = 0 and has derivative y·cos(y) − sin(y), which is negative since y < tan(y). Thus we have that |log(1 − cos(y)) − log(1 − cos(iπ/s))| ≤ 2·|log y − log(iπ/s)|; we use this bound in all but the last step of the analysis. Additionally, we ignore the Σ_{j=0}^{s−1} poi(xn, j) term, as it is always at most 1. We now bound the resulting per-unit-mass cost, where the first term is the contribution from f_0, c_0. For i such that y ∈ ((i−3)π/s, (i+3)π/s), by the second bound on |g_2| in the statement of Lemma 9, g_3^i(y) < 1, and for each of the at most 6 such i, |log y − log(max{1, i}π/s)| < 1/(sy), yielding a contribution of O(1/(sy)). For the contribution from i such that y ≤ (i−3)π/s or y ≥ (i+3)π/s, the first bound of Lemma 9 yields |g_3^i(y)| = O(1/(ys − iπ)^4). Roughly, the bound follows from noting that this sum of inverse fourth powers is dominated by its first few terms; formally, we split up the sum over the near and far values of i and bound each part.

Consider two feasible points h_1 and h_2 of the linear program that each have small objective function value and are both equal to the empirical distribution of the samples above this region. We will now leverage the Chebyshev earthmoving scheme, via Proposition 5, to argue that for any w, R_{w/(n log n)}(h_1, h_2) ≤ O(1/√w), and hence, by the triangle inequality, R_{w/(n log n)}(h, h_2) ≤ O(1/√w). To leverage the Chebyshev earthmoving scheme, recall that it moves all the probability mass of a histogram to a discrete set of "bump centers" (c_i), that it incurs a small truncated relative earthmover distance, and that it has the property that, when applied to any histogram g, the amount of probability mass that ends up at each bump center c_i is given as Σ_{j≥0} α_{i,j} Σ_{x : g(x)≠0} poi(nx, j)·x·g(x), for some set of coefficients α_{i,j} satisfying, for all i, Σ_{j≥0} |α_{i,j}| ≤ 2n^0.3. Consider the results of applying the Chebyshev earthmoving scheme to histograms h_1 and h_2. We first argue that the discrepancy in the amount of probability mass that results at the i-th bump center will be negligible for any i ≥ n^B + 2n^C.
Indeed, since $h_1$ and $h_2$ are identical above probability $\frac{n^B + n^C}{n}$, and $\sum_{i \ge n^B + 2n^C} \mathrm{poi}(\lambda, i) = e^{-n^{\Omega(1)}}$ for $\lambda \le n^B + n^C$, the discrepancy in the mass at all bump centers $c_i$ for $i \ge n^B + 2n^C$ is trivially bounded by $o(1/n)$. We now address the discrepancy in the mass at the bump centers $c_i$ for $i < n^B + 2n^C$. For any such $i$, the discrepancy is bounded by a chain of inequalities whose third step leverages the bound $\sum_j |\alpha_{i,j}| \le 2n^{0.3}$ together with the bound of $O(n^{\frac{1}{2} + B + C})$ on the linear program objective function corresponding to $h_1$ and $h_2$, which measures the discrepancies between $\sum_x \mathrm{poi}(nx, j)\,h_\cdot(x)$ and the corresponding fingerprint entries. Note that the entirety of this discrepancy can be trivially equalized at a relative earthmover cost of $O(n^{0.3 + \frac{1}{2} + 3B + C - 1} \log n)$ by, for example, moving this discrepancy to probability value 1.

To complete the proof, by the triangle inequality we have that for any $w$, letting $g_1$ and $g_2$ denote the respective results of applying the Chebyshev earthmoving scheme to histograms $h_1$ and $h_2$:

$R^{\frac{w}{n \log n}}(h, h_2) \le R^{\frac{w}{n \log n}}(h, h_1) + R^{\frac{w}{n \log n}}(h_1, g_1) + R^{\frac{w}{n \log n}}(g_1, g_2) + R^{\frac{w}{n \log n}}(g_2, h_2) \le O\big(\max(n^{-B(\frac{1}{2} - D)}, n^{-(B - C)})\big) + O(1/\sqrt{w}) + O(n^{0.3 + \frac{1}{2} + 3B + C - 1} \log n) + O(1/\sqrt{w}).$

C Rounding a Generalized Histogram

Algorithm 1 returns a generalized histogram. Recall that generalized histograms are histograms without the condition that their values are integers, and thus may not correspond to actual distributions, whose histogram entries are always integral. While a generalized histogram suffices to establish Theorem 1, we observe that it is possible to round a generalized histogram without significantly altering it in truncated relative earthmover distance. The following algorithm, and the lemma characterizing its performance, show one way to round a generalized histogram to obtain a histogram that is close in truncated relative earthmover distance. This, together with Theorem 2, establishes Proposition 1.

Algorithm 3. Round to Histogram
Input: Generalized histogram $g$.
Output: Histogram $h$.
• Initialize $h$ to consist of the integral elements of $g$.
• For each integer $j \ge 0$:
– Let $x_{j1}, x_{j2}, \dots, x_{j\ell}$ be the elements of the support of $g$ that lie in the range $(2^{-(j+1)}, 2^{-j}]$ and that have non-integral histogram entries; let $m := \sum_{i=1}^{\ell} x_{ji}\, g(x_{ji})$ be the total mass represented; initialize histogram $h'$ to be identically 0 and set the variable $\mathit{diff} := 0$.

Lemma 11. Let $h$ be the output of running Algorithm 3 on generalized histogram $g$. The following conditions hold:
• For all $x$, $h(x) \in \mathbb{N} \cup \{0\}$, and $\sum_{x : h(x) \neq 0} x\,h(x) = 1$; hence $h$ is a histogram of a distribution.

Proof. For each stage $j$ of Algorithm 3, the algorithm goes through each of the histogram entries $g(x_{ji})$, rounding them up or down to corresponding values $h'(x_{ji})$ and storing the cumulative difference in probability mass in the variable $\mathit{diff}$. Thus if this region of $g$ initially had probability mass $m$, then $h'$ will have probability mass $m + \mathit{diff}$. We bound this by noting that since the first element of each stage is always rounded up, and $2^{-(j+1)}$ is the smallest possible coordinate in this stage, the mass of $h'$, namely $m + \mathit{diff}$, is always at least $2^{-(j+1)}$. Since each element of $h'$ is scaled by $\frac{m}{m + \mathit{diff}}$ before being added to $h$, the total mass contributed by stage $j$ to $h$ is exactly $m$, meaning that each stage of rounding is "mass-preserving".
Denoting by $g_j$ the portion of $g$ considered in stage $j$, and denoting by $h_j$ this stage's contribution to $h$, we now seek to bound $R(h_j, g_j)$. Recall the cumulative distribution, which for any distribution over the reals and any number $y$ is the total amount of probability mass in the distribution between 0 and $y$. Given a generalized histogram $g$, we can define its (generalized) cumulative distribution by $c(g)(y) := \sum_{x \le y : g(x) \neq 0} x\,g(x)$. We note that at each stage $j$ of Algorithm 3, and in each iteration $i$ of the inner loop, the variable $\mathit{diff}$ equals the difference between the cumulative distributions of $h'$ and $g_j$ at $x_{ji}$, and hence also on the region immediately to the right of $x_{ji}$. Further, we note that at iteration $i$, $|\mathit{diff}|$ is bounded by $x_{ji}$, since at each iteration, if $\mathit{diff}$ is positive it will decrease and if it is negative it will increase, and since $h'(x_{ji})$ is a rounded version of $g(x_{ji})$, $\mathit{diff}$ will be changed by $x_{ji}(h'(x_{ji}) - g(x_{ji}))$, which has magnitude at most $x_{ji}$. Combining these two observations yields that for all $x$, $|c(h')(x) - c(g_j)(x)| \le x$.

To bound the relative earthmover distance, we note that for distributions over the reals, the earthmover distance between two distributions can be expressed as the integral of the absolute value of the difference between their cumulative distributions; since relative earthmover distance can be related to the standard earthmover distance by changing each $x$ value to $\log x$, the change of variables theorem gives us that $R(a, b) = \int \frac{1}{x}\,|c(b)(x) - c(a)(x)|\,dx$. We can thus use the bound from the previous paragraph in this equation after one modification: since $h'$ has total probability mass $m + \mathit{diff}$, its relative earthmover distance to $g_j$, which has probability mass $m$, is undefined; we thus define $h''$ to be $h'$ with the modification that we subtract $\mathit{diff}$ probability mass from location $2^{-j}$ (it does not matter to this formalism if $\mathit{diff}$ is negative, or if this makes $h''(2^{-j})$ negative). We thus have that

$R(h'', g_j) = \int_{2^{-(j+1)}}^{2^{-j}} \frac{1}{x}\,|c(h')(x) - c(g_j)(x)|\,dx \le \int_{2^{-(j+1)}}^{2^{-j}} \frac{1}{x}\cdot x\,dx = 2^{-(j+1)}.$

We now bound the relative earthmover distance from $h''$ to $h_j$ via the following two-part earthmoving scheme: all of the mass in $h''$ that comes from $h'$ (specifically, all the mass except the $-\mathit{diff}$ mass added at $2^{-j}$) is moved to a $\frac{m}{m + \mathit{diff}}$ fraction of its original location, at a relative earthmover cost $(m + \mathit{diff})\cdot|\log\frac{m}{m + \mathit{diff}}|$; the remaining $-\mathit{diff}$ mass is moved wherever needed, involving changing its location by a factor of at most $2\cdot\max\{\frac{m}{m + \mathit{diff}}, \frac{m + \mathit{diff}}{m}\}$, at a relative earthmover cost of at most $|\mathit{diff}|\cdot(\log 2 + |\log\frac{m}{m + \mathit{diff}}|)$. Thus our total bound on $R(g_j, h_j)$, by the triangle inequality, is $2^{-(j+1)} + (m + \mathit{diff})\cdot|\log\frac{m}{m + \mathit{diff}}| + |\mathit{diff}|\cdot(\log 2 + |\log\frac{m}{m + \mathit{diff}}|)$, which we use when $m \ge 2^{-j}$, in conjunction with the two bounds derived above, that $|\mathit{diff}| \le 2^{-j}$ and that $m + \mathit{diff} \ge 2^{-(j+1)}$, yielding a total bound on the earthmover distance of $5\cdot 2^{-j}$ for the $j$th stage when $m \ge 2^{-j}$. When $m \le 2^{-j}$ we note directly that $m$ mass is being moved a relative distance of at most $2\cdot\max\{\frac{m}{m + \mathit{diff}}, \frac{m + \mathit{diff}}{m}\}$, at a cost of $m\cdot(\log 2 + |\log\frac{m}{m + \mathit{diff}}|)$, which we again bound by $5\cdot 2^{-j}$. Thus, summing over all $j \ge \lfloor|\log_2 \alpha|\rfloor$ yields a bound of $20\alpha$.
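Since Algorithm 3's inner loop is only summarized above, the following Python sketch (ours) implements the rounding behaviour as the proof of Lemma 11 describes it: round the first entry of each dyadic stage up, round later entries so the running discrepancy $\mathit{diff}$ stays small, then rescale the stage's locations by $m/(m + \mathit{diff})$ so each stage is mass-preserving. The dict representation and the exact tie-breaking rule are our assumptions.

```python
import math

def round_to_histogram(g):
    """Sketch of Algorithm 3 (Round to Histogram).

    `g` maps a probability x to a possibly non-integral count g[x].
    The first entry of each dyadic stage is rounded up, later entries
    are rounded so the running mass discrepancy `diff` stays small,
    and the stage's locations are rescaled by m / (m + diff), which
    makes each stage mass-preserving.
    """
    h = {x: int(c) for x, c in g.items() if c == int(c)}
    fractional = {x: c for x, c in g.items() if c != int(c)}
    if not fractional:
        return h
    max_j = int(-math.log2(min(fractional))) + 1
    for j in range(max_j + 1):
        lo, hi = 2.0 ** -(j + 1), 2.0 ** -j
        xs = sorted(x for x in fractional if lo < x <= hi)
        if not xs:
            continue
        m = sum(x * fractional[x] for x in xs)
        diff = 0.0
        h_prime = {}
        for i, x in enumerate(xs):
            c = fractional[x]
            # Round up first (guaranteeing m + diff >= 2^-(j+1)),
            # then round toward cancelling the running discrepancy.
            r = math.ceil(c) if i == 0 or diff < 0 else math.floor(c)
            diff += x * (r - c)
            h_prime[x] = r
        scale = m / (m + diff)  # rescale locations, not counts
        for x, r in h_prime.items():
            if r > 0:
                h[x * scale] = h.get(x * scale, 0) + r
    return h
```

For instance, on `{0.3: 2, 0.1: 3.5, 0.05: 1.0}` the integral entries pass through unchanged, while the 3.5 entry is rounded up to 4 and its location rescaled from 0.1 to 0.0875, preserving that stage's 0.35 units of mass.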
/** * This class is part of JCodec ( www.jcodec.org ) This software is distributed * under FreeBSD License * * @author The JCodec project * */ public class MTSUtils { public static enum StreamType { RESERVED(0x0, false, false), VIDEO_MPEG1(0x01, true, false), VIDEO_MPEG2(0x02, true, false), AUDIO_MPEG1(0x03, false, true), AUDIO_MPEG2(0x04, false, true), PRIVATE_SECTION(0x05, false, false), PRIVATE_DATA(0x06, false, false), MHEG(0x7, false, false), DSM_CC(0x8, false, false), ATM_SYNC(0x9, false, false), DSM_CC_A(0xa, false, false), DSM_CC_B(0xb, false, false), DSM_CC_C(0xc, false, false), DSM_CC_D(0xd, false, false), MPEG_AUX(0xe, false, false), AUDIO_AAC_ADTS(0x0f, false, true), VIDEO_MPEG4(0x10, true, false), AUDIO_AAC_LATM(0x11, false, true), FLEXMUX_PES(0x12, false, false), FLEXMUX_SEC(0x13, false, false), DSM_CC_SDP(0x14, false, false), META_PES(0x15, false, false), META_SEC(0x16, false, false), DSM_CC_DATA_CAROUSEL(0x17, false, false), DSM_CC_OBJ_CAROUSEL(0x18, false, false), DSM_CC_SDP1(0x19, false, false), IPMP(0x1a, false, false), VIDEO_H264(0x1b, true, false), AUDIO_AAC_RAW(0x1c, false, true), SUBS(0x1d, false, false), AUX_3D(0x1e, false, false), VIDEO_AVC_SVC(0x1f, true, false), VIDEO_AVC_MVC(0x20, true, false), VIDEO_J2K(0x21, true, false), VIDEO_MPEG2_3D(0x22, true, false), VIDEO_H264_3D(0x23, true, false), VIDEO_CAVS(0x42, false, true), IPMP_STREAM(0x7f, false, false), AUDIO_AC3(0x81, false, true), AUDIO_DTS(0x8a, false, true); private int tag; private boolean video; private boolean audio; private static EnumSet<StreamType> typeEnum = EnumSet.allOf(StreamType.class); private StreamType(int tag, boolean video, boolean audio) { this.tag = tag; this.video = video; this.audio = audio; } public static StreamType fromTag(int streamTypeTag) { for (StreamType streamType : typeEnum) { if (streamType.tag == streamTypeTag) return streamType; } return null; } public int getTag() { return tag; } public boolean isVideo() { return video; } public boolean isAudio() { return audio; } }; public static class Section { private int sectionNumber; private int lastSectionNumber; public int getSectionNumber() { return sectionNumber; } public int getLastSectionNumber() { return lastSectionNumber; } } public static class PMT extends Section { private int pcrPid; private List<Tag> tags; private List<PMTStream> streams; public PMT(int pcrPid, List<Tag> tags, List<PMTStream> streams) { this.pcrPid = pcrPid; this.tags = tags; this.streams = streams; } public int getPcrPid() { return pcrPid; } public List<Tag> getTags() { return tags; } public List<PMTStream> getStreams() { return streams; } } public static class Tag { private int tag; private ByteBuffer content; public Tag(int tag, ByteBuffer content) { this.tag = tag; this.content = content; } public int getTag() { return tag; } public ByteBuffer getContent() { return content; } } public static class PMTStream { private int streamTypeTag; private int pid; private List<MPEGMediaDescriptor> descriptors; private StreamType streamType; public PMTStream(int streamTypeTag, int pid, List<MPEGMediaDescriptor> descriptors) { this.streamTypeTag = streamTypeTag; this.pid = pid; this.descriptors = descriptors; this.streamType = StreamType.fromTag(streamTypeTag); } public int getStreamTypeTag() { return streamTypeTag; } public StreamType getStreamType() { return streamType; } public int getPid() { return pid; } public List<MPEGMediaDescriptor> getDesctiptors() { return descriptors; } } public static int parsePAT(ByteBuffer data) { parseSection(data); int pmtPid = -1; 
while (data.remaining() > 4) { int programNum = data.getShort() & 0xffff; int w = data.getShort(); if (programNum != 0) pmtPid = w & 0x1fff; } return pmtPid; } public static PMT parsePMT(ByteBuffer data) { parseSection(data); // PMT itself int w1 = data.getShort() & 0xffff; int pcrPid = w1 & 0x1fff; int w2 = data.getShort() & 0xffff; int programInfoLength = w2 & 0xfff; List<Tag> tags = parseTags(NIOUtils.read(data, programInfoLength)); List<PMTStream> streams = new ArrayList<PMTStream>(); while (data.remaining() > 4) { int streamType = data.get() & 0xff; int wn = data.getShort() & 0xffff; int elementaryPid = wn & 0x1fff; System.out.println(String.format("Elementary stream: [%d,%d]", streamType, elementaryPid)); int wn1 = data.getShort() & 0xffff; int esInfoLength = wn1 & 0xfff; ByteBuffer read = NIOUtils.read(data, esInfoLength); streams.add(new PMTStream(streamType, elementaryPid, MPSUtils.parseDescriptors(read))); } return new PMT(pcrPid, tags, streams); } public static void parseSection(ByteBuffer data) { int tableId = data.get() & 0xff; int w0 = data.getShort() & 0xffff; int sectionSyntaxIndicator = w0 >> 15; if (((w0 >> 14) & 1) != 0) throw new RuntimeException("Invalid PMT"); int sectionLength = w0 & 0xfff; data.limit(data.position() + sectionLength); int programNumber = data.getShort() & 0xffff; int b0 = data.get() & 0xff; int versionNumber = (b0 >> 1) & 0x1f; int currentNextIndicator = b0 & 1; int sectionNumber = data.get() & 0xff; int lastSectionNumber = data.get() & 0xff; } private static void parseEsInfo(ByteBuffer read) { } private static List<Tag> parseTags(ByteBuffer bb) { List<Tag> tags = new ArrayList<Tag>(); while (bb.hasRemaining()) { int tag = bb.get(); int tagLen = bb.get(); System.out.println(String.format("TAG: [0x%x, 0x%x]", tag, tagLen)); tags.add(new Tag(tag, NIOUtils.read(bb, tagLen))); } return tags; } public static List<PMTStream> getPrograms(File src) throws IOException { SeekableByteChannel ch = null; try { ch = NIOUtils.readableFileChannel(src); return getProgramGuids(ch); } finally { NIOUtils.closeQuietly(ch); } } public static List<PMTStream> getProgramGuids(SeekableByteChannel in) throws IOException { PMTExtractor ex = new PMTExtractor(); ex.readTsFile(in); PMT pmt = ex.getPmt(); return pmt.getStreams(); } private static class PMTExtractor extends TSReader { private int pmtGuid = -1; private PMT pmt; @Override protected boolean onPkt(int guid, boolean payloadStart, ByteBuffer tsBuf, long filePos) { if (guid == 0) { pmtGuid = parsePAT(tsBuf); } else if (pmtGuid != -1 && guid == pmtGuid) { pmt = parsePMT(tsBuf); return false; } return true; } public PMT getPmt() { return pmt; } }; public abstract static class TSReader { // Buffer must have an integral number of MPEG TS packets public static final int BUFFER_SIZE = 188 << 9; public void readTsFile(SeekableByteChannel ch) throws IOException { ch.position(0); ByteBuffer buf = ByteBuffer.allocate(BUFFER_SIZE); for (long pos = ch.position(); ch.read(buf) != -1; pos = ch.position()) { buf.flip(); while (buf.hasRemaining()) { ByteBuffer tsBuf = NIOUtils.read(buf, 188); pos += 188; Assert.assertEquals(0x47, tsBuf.get() & 0xff); int guidFlags = ((tsBuf.get() & 0xff) << 8) | (tsBuf.get() & 0xff); int guid = (int) guidFlags & 0x1fff; int payloadStart = (guidFlags >> 14) & 0x1; int b0 = tsBuf.get() & 0xff; int counter = b0 & 0xf; if ((b0 & 0x20) != 0) { NIOUtils.skip(tsBuf, tsBuf.get() & 0xff); } boolean sectionSyntax = payloadStart == 1 && (getRel(tsBuf, getRel(tsBuf, 0) + 2) & 0x80) == 0x80; if (sectionSyntax) { 
NIOUtils.skip(tsBuf, tsBuf.get() & 0xff); } if (!onPkt(guid, payloadStart == 1, tsBuf, pos - tsBuf.remaining())) return; } buf.flip(); } } protected abstract boolean onPkt(int guid, boolean payloadStart, ByteBuffer tsBuf, long filePos); } public static int getVideoPid(File src) throws IOException { List<PMTStream> streams = MTSUtils.getPrograms(src); for (PMTStream stream : streams) { if (stream.getStreamType().isVideo()) return stream.getPid(); } throw new RuntimeException("No video stream"); } public static int getAudioPid(File src) throws IOException { List<PMTStream> streams = MTSUtils.getPrograms(src); for (PMTStream stream : streams) { // Filter on audio streams (not video) and report the right error if none is found if (stream.getStreamType().isAudio()) return stream.getPid(); } throw new RuntimeException("No audio stream"); } public static int[] getMediaPids(File src) throws IOException { IntArrayList result = new IntArrayList(); List<PMTStream> streams = MTSUtils.getPrograms(src); for (PMTStream stream : streams) { if (stream.getStreamType().isVideo() || stream.getStreamType().isAudio()) result.add(stream.getPid()); } return result.toArray(); } }
/** * <p>The device has presented incorrect entropy</p> */ public void incorrectEntropy() { Preconditions.checkState(SwingUtilities.isEventDispatchThread(), "Must be on EDT"); setOperationText(MessageKey.TREZOR_FAILURE_OPERATION); setDisplayVisible(false); setSpinnerVisible(false); }
package br.com.medical_sistema.servlets; import java.io.IOException; import javax.servlet.RequestDispatcher; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import br.com.medical_sistema.controlador.Acao; @WebServlet("/entrada") public class ServletControlador extends HttpServlet { private static final long serialVersionUID = 1L; protected void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String acao = request.getParameter("acao"); String nameClass = "br.com.medical_sistema.controlador." + acao; String name; try { Class<?> classe = Class.forName(nameClass); // getDeclaredConstructor().newInstance() avoids the deprecated Class.newInstance() Acao action = (Acao) classe.getDeclaredConstructor().newInstance(); name = action.executar(request, response); } catch (ReflectiveOperationException e) { // Fail fast: swallowing the exception would leave name null and trigger a NullPointerException below throw new ServletException("Unable to instantiate action: " + nameClass, e); } String[] link = name.split(":"); if (link[0].equals("redirect")) { response.sendRedirect(link[1]); } else { RequestDispatcher rd = request.getRequestDispatcher("/WEB-INF/views/" + link[1]); rd.forward(request, response); } } }
/** * Test whether a URL identifies a Fuseki server. This operation can not guarantee to * detect a Fuseki server - for example, it may be behind a reverse proxy that masks * the signature. */ public static boolean isFuseki(String datasetURL) { HttpRequest.Builder builder = HttpRequest.newBuilder().uri(toRequestURI(datasetURL)).method(HttpNames.METHOD_HEAD, BodyPublishers.noBody()); HttpRequest request = builder.build(); HttpClient httpClient = HttpEnv.getDftHttpClient(); HttpResponse<InputStream> response = execute(httpClient, request); handleResponseNoBody(response); Optional<String> value1 = response.headers().firstValue(FusekiRequestIdHeader); if ( value1.isPresent() ) return true; Optional<String> value2 = response.headers().firstValue("Server"); if ( value2.isEmpty() ) return false; String headerValue = value2.get(); boolean isFuseki = headerValue.startsWith("Apache Jena Fuseki") || headerValue.toLowerCase().contains("fuseki"); return isFuseki; }
package saga.eventuate.tram.hotelservice.model; import javax.persistence.*; import java.util.Objects; @Embeddable public class HotelBookingInformation { @Embedded private Destination destination; @Embedded private StayDuration duration; private String boardType; private final long tripId; private HotelBookingInformation() { tripId = -1; // no trip assigned to this booking } public HotelBookingInformation(final Destination destination, final StayDuration duration, final String boardType) { tripId = -1; // no trip assigned to this booking this.destination = destination; this.duration = duration; this.boardType = boardType; } public HotelBookingInformation(final long tripId, final Destination destination, final StayDuration duration, final String boardType) { this.tripId = tripId; this.destination = destination; this.duration = duration; this.boardType = boardType; } public void setDestination(final Destination destination) { this.destination = destination; } public Destination getDestination() { return destination; } public void setDuration(final StayDuration duration) { this.duration = duration; } public StayDuration getDuration() { return duration; } public void setBoardType(final String boardType) { this.boardType = boardType; } public String getBoardType() { return boardType; } public long getTripId() { return tripId; } @Override public String toString() { return "HotelBookingInformation{" + "destination=" + destination + ", duration=" + duration + ", boardType=" + boardType + ", tripId=" + tripId + '}'; } @Override public boolean equals(Object o) { if (this == o) { return true; } if (!(o instanceof HotelBookingInformation)) { return false; } HotelBookingInformation hotelInfo = (HotelBookingInformation) o; if (!Objects.equals(hotelInfo.getDuration(), this.getDuration())) { return false; } // Null-safe, case-insensitive comparison of the board type if (hotelInfo.getBoardType() == null ? this.getBoardType() != null : !hotelInfo.getBoardType().equalsIgnoreCase(this.getBoardType())) { return false; } if (!Objects.equals(hotelInfo.getDestination(), this.getDestination())) { return false; } return hotelInfo.getTripId() == this.getTripId(); } @Override public int hashCode() { // Required by the Object contract since equals is overridden; boardType is lower-cased because equals compares it case-insensitively return Objects.hash(destination, duration, boardType == null ? null : boardType.toLowerCase(), tripId); } }
The Reconceptualization of the City’s Ugliness Between the 1950s and 1970s in the British, Italian, and Australian Milieus The paper examines the reorientations of the appreciation of ugliness within different national contexts in a comparative or relational frame, juxtaposing the British, Italian, and Australian milieus, and relates them to the ways in which the transformation of the urban fabric and the effect of suburbanization were perceived in those contexts. Special attention is paid to how conceptualizations of the city’s uglification were produced and disseminated between the 1950s and 1970s. Pivotal for the issues that this paper addresses are Ian Nairn’s Outrage: On the Disfigurement of Town and Countryside (1956) and Robin Boyd’s Australian Ugliness (1960), and the way the phenomenon of urban expansion is treated in these books in comparison with other books from the national contexts under study, such as Ludovico Quaroni’s La torre di Babele (1967) and Reyner Banham’s The New Brutalism: Ethic or Aesthetic? (1966).
// Copyright 2019 The WPT Dashboard Project. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. //go:generate packr2 package webapp import ( "net/http" "text/template" "github.com/gobuffalo/packr/v2" "github.com/web-platform-tests/wpt.fyi/shared" ) var componentTemplates *template.Template func init() { box := packr.New("dynamic components", "./dynamic-components/templates/") componentTemplates = template.New("all.js") for _, t := range box.List() { tmpl := componentTemplates.New(t) body, err := box.FindString(t) if err != nil { panic(err) } else if _, err = tmpl.Parse(body); err != nil { panic(err) } } } func flagsComponentHandler(w http.ResponseWriter, r *http.Request) { w.Header().Add("content-type", "text/javascript") ctx := r.Context() ds := shared.NewAppEngineDatastore(ctx, false) flags, err := shared.GetFeatureFlags(ds) if err != nil { // Errors aren't a big deal; log them and ignore. log := shared.GetLogger(ctx) log.Errorf("Error loading flags: %s", err.Error()) } data := struct{ Flags []shared.Flag }{flags} // Check the error from executing the template itself, not the stale err above. if err := componentTemplates.ExecuteTemplate(w, "wpt-env-flags.js", data); err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) } }
n, a = map(int, input().split())
l = list(map(int, input().split()))
MOD = 998244353
k = n - a
# The maximum possible sum picks the a largest values, i.e. (k+1) + ... + n.
best = n * (n + 1) // 2 - k * (k + 1) // 2
# Positions of the values that must be picked (those greater than k).
positions = [i for i, v in enumerate(l) if v > k]
# The number of optimal picks is the product of the gaps between
# consecutive mandatory positions, taken modulo MOD.
ways = 1
for prev, cur in zip(positions, positions[1:]):
    ways = ways * (cur - prev) % MOD
print(best, ways)
/** * Checks whether a transaction is active. * * @return {@code true} if the transaction is active. */ static boolean isActive() { try { return UserTransaction.userTransaction().getStatus() != Status.STATUS_NO_TRANSACTION; } catch (SystemException e) { throw new QuarkusTransactionException(e); } }
// This file was generated by gir (https://github.com/gtk-rs/gir) // from gir-files (https://github.com/gtk-rs/gir-files) // from gst-gir-files (https://gitlab.freedesktop.org/gstreamer/gir-files-rs.git) // DO NOT EDIT use crate::BaseEffect; use crate::Extractable; use crate::MetaContainer; use crate::Operation; use crate::TimelineElement; use crate::TrackElement; use glib::object::IsA; use glib::translate::*; use glib::StaticType; glib::wrapper! { #[doc(alias = "GESEffect")] pub struct Effect(Object<ffi::GESEffect, ffi::GESEffectClass>) @extends BaseEffect, Operation, TrackElement, TimelineElement, @implements Extractable, MetaContainer; match fn { type_ => || ffi::ges_effect_get_type(), } } impl Effect { pub const NONE: Option<&'static Effect> = None; #[doc(alias = "ges_effect_new")] pub fn new(bin_description: &str) -> Result<Effect, glib::BoolError> { assert_initialized_main_thread!(); unsafe { Option::<_>::from_glib_none(ffi::ges_effect_new(bin_description.to_glib_none().0)) .ok_or_else(|| glib::bool_error!("Failed to create effect from description")) } } } pub trait EffectExt: 'static { #[doc(alias = "bin-description")] fn bin_description(&self) -> Option<glib::GString>; } impl<O: IsA<Effect>> EffectExt for O { fn bin_description(&self) -> Option<glib::GString> { glib::ObjectExt::property(self.as_ref(), "bin-description") } }
/*--------------------------------------------------------------------------------------------- * Copyright (c) Microsoft Corporation. All rights reserved. * Licensed under the MIT License. See License.txt in the project root for license information. *--------------------------------------------------------------------------------------------*/ import * as nls from 'vs/nls'; import * as lifecycle from 'vs/base/common/lifecycle'; import * as errors from 'vs/base/common/errors'; import { IAction, IActionRunner } from 'vs/base/common/actions'; import { KeyCode } from 'vs/base/common/keyCodes'; import * as dom from 'vs/base/browser/dom'; import { StandardKeyboardEvent } from 'vs/base/browser/keyboardEvent'; import { SelectBox } from 'vs/base/browser/ui/selectBox/selectBox'; import { SelectActionItem, IActionItem } from 'vs/base/browser/ui/actionbar/actionbar'; import { EventEmitter } from 'vs/base/common/eventEmitter'; import { IConfigurationService } from 'vs/platform/configuration/common/configuration'; import { IDebugService } from 'vs/workbench/parts/debug/common/debug'; const $ = dom.$; export class StartDebugActionItem extends EventEmitter implements IActionItem { public actionRunner: IActionRunner; private container: HTMLElement; private start: HTMLElement; private selectBox: SelectBox; private toDispose: lifecycle.IDisposable[]; constructor( private context: any, private action: IAction, @IDebugService private debugService: IDebugService, @IConfigurationService private configurationService: IConfigurationService ) { super(); this.toDispose = []; this.selectBox = new SelectBox([], -1); this.registerListeners(); } private registerListeners(): void { this.toDispose.push(this.configurationService.onDidUpdateConfiguration(e => { if (e.sourceConfig.launch) { this.updateOptions(); } })); this.toDispose.push(this.selectBox.onDidSelect(configurationName => { this.debugService.getViewModel().setSelectedConfigurationName(configurationName); })); } public render(container: HTMLElement): void { this.container = container; dom.addClass(container, 'start-debug-action-item'); this.start = dom.append(container, $('.icon')); this.start.title = this.action.label; this.start.tabIndex = 0; this.toDispose.push(dom.addDisposableListener(this.start, dom.EventType.CLICK, () => { this.start.blur(); this.actionRunner.run(this.action, this.context).done(null, errors.onUnexpectedError); })); this.toDispose.push(dom.addDisposableListener(this.start, dom.EventType.MOUSE_DOWN, () => { if (this.selectBox.enabled) { dom.addClass(this.start, 'active'); } })); this.toDispose.push(dom.addDisposableListener(this.start, dom.EventType.MOUSE_UP, () => { dom.removeClass(this.start, 'active'); })); this.toDispose.push(dom.addDisposableListener(this.start, dom.EventType.MOUSE_OUT, () => { dom.removeClass(this.start, 'active'); })); this.toDispose.push(dom.addDisposableListener(this.start, dom.EventType.KEY_UP, (e: KeyboardEvent) => { let event = new StandardKeyboardEvent(e); if (event.equals(KeyCode.Enter)) { this.actionRunner.run(this.action, this.context).done(null, errors.onUnexpectedError); } })); this.selectBox.render(dom.append(container, $('.configuration'))); this.updateOptions(); } public setActionContext(context: any): void { this.context = context; } public isEnabled(): boolean { return this.selectBox.enabled; } public focus(): void { this.start.focus(); } public blur(): void { this.container.blur(); } public dispose(): void { this.toDispose = lifecycle.dispose(this.toDispose); } private setEnabled(enabled: boolean): 
void { this.selectBox.enabled = enabled; if (!enabled) { this.selectBox.setOptions([nls.localize('noConfigurations', "No Configurations")], 0); } } private updateOptions(): void { const options = this.debugService.getConfigurationManager().getConfigurationNames(); if (options.length === 0) { this.setEnabled(false); } else { this.setEnabled(true); const selected = options.indexOf(this.debugService.getViewModel().selectedConfigurationName); this.selectBox.setOptions(options, selected); } } } export class FocusProcessActionItem extends SelectActionItem { constructor( action: IAction, @IDebugService private debugService: IDebugService ) { super(null, action, [], -1); this.debugService.getViewModel().onDidFocusStackFrame(() => { const process = this.debugService.getViewModel().focusedProcess; if (process) { const names = this.debugService.getModel().getProcesses().map(p => p.name); this.select(names.indexOf(process.name)); } }); this.debugService.getModel().onDidChangeCallStack(() => { this.setOptions(this.debugService.getModel().getProcesses().map(p => p.name)); }); } }
The ecology of anxiety: situational stress and rate of self-stimulation in Turkey. Twelve community behavior settings in Ankara, Turkey, were ranked with high agreement by 30 judges (average r = .80, p < .001) according to the amount of situational stress, defined as evaluative apprehension and uncertainty, generated by each setting. The rate of hand-to-face or hand-to-body (self-stimulation) behavior was observed systematically in stressful compared to relaxed settings. Analysis by a stepwise multiple regression procedure showed higher rates in stressful settings, F(1, 587) = 9.33, p < .01. This finding was successfully replicated one year after the original study with new samples of settings, observers, and observed individuals, F(1, 351) = 7.38, p < .01. Sex of the observed individuals had no relationship to rate of self-stimulation, and smoking appeared to act as a suppressor variable. These results suggest that other sources of variance from persons and Person × Situation interactions can be safely ignored if one's purpose in an investigation is to make ecological comparisons in anxiety rates.
/** * Tests for AWSEnvironment */ @RunWith(MockitoJUnitRunner.class) public class AWSEnvironmentTest { @Test public void testConstructorGetters() { String account = "Hello"; String region = "World"; AWSEnvironment awsEnvironment = new AWSEnvironment(account, region); Assert.assertEquals("Test Account:", account, awsEnvironment.getAccount()); Assert.assertEquals("Test Region:", region, awsEnvironment.getRegion()); } @Test public void testEquals(){ String account = "Hello"; String region = "World"; AWSEnvironment awsEnvironment1 = new AWSEnvironment(account, region); AWSEnvironment awsEnvironment2 = new AWSEnvironment(account, region); String account2 = "Hellos"; String region2 = "Worlds"; AWSEnvironment awsEnvironment3 = new AWSEnvironment(account2, region2); AWSEnvironment awsEnvironment4 = new AWSEnvironment(account, region2); Assert.assertEquals("Test Equals on self", awsEnvironment1, awsEnvironment1); Assert.assertNotEquals("Test Equals on different object", awsEnvironment1, new ArrayList()); Assert.assertEquals("Test Equals on 2 different, but equal objects", awsEnvironment1, awsEnvironment2); Assert.assertNotEquals("Test Equals on different Account", awsEnvironment1, awsEnvironment3); Assert.assertNotEquals("Test Equals on different Region", awsEnvironment2, awsEnvironment4); } @Test public void testHashCode(){ String account = "Hello"; String region = "World"; AWSEnvironment awsEnvironment1 = new AWSEnvironment(account, region); Assert.assertEquals("Test hashCode() method", Objects.hash(account, region), awsEnvironment1.hashCode()); } @Test public void testToString(){ String account = "Hello"; String region = "World"; AWSEnvironment awsEnvironment1 = new AWSEnvironment(account, region); Assert.assertEquals("Test toString() Method", MoreObjects.toStringHelper(AWSEnvironment.class).add("account", account).add("region", region).toString(), awsEnvironment1.toString()); } }
package alertmanager import ( "net/http" "text/template" "github.com/go-kit/log/level" util_log "github.com/cortexproject/cortex/pkg/util/log" "github.com/cortexproject/cortex/pkg/util/services" ) var ( ringStatusPageTemplate = template.Must(template.New("ringStatusPage").Parse(` <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Cortex Alertmanager Ring</title> </head> <body> <h1>Cortex Alertmanager Ring</h1> <p>{{ .Message }}</p> </body> </html>`)) statusTemplate = template.Must(template.New("statusPage").Parse(` <!doctype html> <html> <head><title>Cortex Alertmanager Status</title></head> <body> <h1>Cortex Alertmanager Status</h1> {{ if not .ClusterInfo }} <p>Alertmanager gossip-based clustering is disabled.</p> {{ else }} <h2>Node</h2> <dl> <dt>Name</dt><dd>{{.ClusterInfo.self.Name}}</dd> <dt>Addr</dt><dd>{{.ClusterInfo.self.Addr}}</dd> <dt>Port</dt><dd>{{.ClusterInfo.self.Port}}</dd> </dl> <h3>Members</h3> {{ with .ClusterInfo.members }} <table> <tr><th>Name</th><th>Addr</th></tr> {{ range . }} <tr><td>{{ .Name }}</td><td>{{ .Addr }}</td></tr> {{ end }} </table> {{ else }} <p>No peers</p> {{ end }} {{ end }} </body> </html>`)) ) func writeRingStatusMessage(w http.ResponseWriter, message string) { w.WriteHeader(http.StatusOK) err := ringStatusPageTemplate.Execute(w, struct { Message string }{Message: message}) if err != nil { level.Error(util_log.Logger).Log("msg", "unable to serve alertmanager ring page", "err", err) } } func (am *MultitenantAlertmanager) RingHandler(w http.ResponseWriter, req *http.Request) { if !am.cfg.ShardingEnabled { writeRingStatusMessage(w, "Alertmanager has no ring because sharding is disabled.") return } if am.State() != services.Running { // we cannot read the ring before the alertmanager is in Running state, // because that would lead to race condition. writeRingStatusMessage(w, "Alertmanager is not running yet.") return } am.ring.ServeHTTP(w, req) } // GetStatusHandler returns the status handler for this multi-tenant // alertmanager. func (am *MultitenantAlertmanager) GetStatusHandler() StatusHandler { return StatusHandler{ am: am, } } // StatusHandler shows the status of the alertmanager. type StatusHandler struct { am *MultitenantAlertmanager } // ServeHTTP serves the status of the alertmanager. func (s StatusHandler) ServeHTTP(w http.ResponseWriter, _ *http.Request) { var clusterInfo map[string]interface{} if s.am.peer != nil { clusterInfo = s.am.peer.Info() } err := statusTemplate.Execute(w, struct { ClusterInfo map[string]interface{} }{ ClusterInfo: clusterInfo, }) if err != nil { level.Error(util_log.Logger).Log("msg", "unable to serve alertmanager status page", "err", err) http.Error(w, err.Error(), http.StatusInternalServerError) } }
def process(self, beam_stack): self.beam_stack[beam_stack.attrs["source_name"]] = beam_stack return None
import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; import { UsageComponent } from './usage.component'; import { UsageDownloadsComponent } from './pages/usage-downloads/usage-downloads.component'; import { UsageByOrganizationComponent } from './pages/usage-by-organization/usage-by-organization.component'; import { UsageMyDownloadsComponent } from './pages/usage-my-downloads/usage-my-downloads.component'; const routes: Routes = [ { path: '', pathMatch: 'full', component: UsageComponent }, { path: 'by-organization', pathMatch: 'full', component: UsageByOrganizationComponent }, { path: 'my-downloads', pathMatch: 'full', component: UsageMyDownloadsComponent }, { path: 'downloads', pathMatch: 'full', component: UsageDownloadsComponent } ]; @NgModule({ imports: [RouterModule.forChild(routes)], exports: [RouterModule] }) export class UsageRoutingModule { }
package com.newer.kt.entity; import java.io.Serializable; import java.util.List; /** * Created by huangbo on 2016/10/3. */ public class AllSchoolBean implements Serializable{ /** * response : success * school_clubs_count : 12 * school_clubs_count_list : [{"name":"上海","count":3},{"name":"河南","count":1},{"name":"北京","count":8}] */ private String response; private int school_clubs_count; /** * name : 上海 * count : 3 */ private List<SchoolClubsCountListBean> school_clubs_count_list; private List<SchoolClubsCountListBean> school_clubs_list; public List<SchoolClubsCountListBean> getSchool_clubs_list() { return school_clubs_list; } public void setSchool_clubs_list(List<SchoolClubsCountListBean> school_clubs_list) { this.school_clubs_list = school_clubs_list; } public String getResponse() { return response; } public void setResponse(String response) { this.response = response; } public int getSchool_clubs_count() { return school_clubs_count; } public void setSchool_clubs_count(int school_clubs_count) { this.school_clubs_count = school_clubs_count; } public List<SchoolClubsCountListBean> getSchool_clubs_count_list() { return school_clubs_count_list; } public void setSchool_clubs_count_list(List<SchoolClubsCountListBean> school_clubs_count_list) { this.school_clubs_count_list = school_clubs_count_list; } public static class SchoolClubsCountListBean implements Serializable{ private String name; private int count; private int id; public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getCount() { return count; } public void setCount(int count) { this.count = count; } } }
import sklearn.preprocessing

def normalize(matrix):
    # Scale each column of `matrix` to unit L1 norm (columns sum to 1).
    return sklearn.preprocessing.normalize(matrix, norm="l1", axis=0)
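A quick usage sketch (assuming NumPy and scikit-learn are installed) showing the column-wise L1 semantics of axis=0:

```python
import numpy as np

m = np.array([[1.0, 2.0],
              [3.0, 2.0]])
print(normalize(m))              # [[0.25 0.5 ]
                                 #  [0.75 0.5 ]]
print(normalize(m).sum(axis=0))  # [1. 1.]  (each column now sums to 1)
```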
import { Action, createReducer, on } from '@ngrx/store'; import * as ScoreboardPageActions from '../actions/scoreboard-page.actions'; export interface State { home: number; away: number; } export const initialState: State = { home: 0, away: 0, }; export const scoreboardReducer = createReducer( initialState, on(ScoreboardPageActions.homeScore, (state) => ({ ...state, home: state.home + 1, })), on(ScoreboardPageActions.awayScore, (state) => ({ ...state, away: state.away + 1, })), on(ScoreboardPageActions.resetScore, () => ({ home: 0, away: 0 })) );
Friend, do you have doubts in your heart about the afterlife? Have you ever wondered if heaven is a real place? I myself had questions like these once, but a near-death experience erased my doubts forever. Why? Because during those few minutes when my heart stopped and the doctors worked to revive me, I saw God’s glorious kingdom with my own eyes, and while I was there I abducted an angel. Heaven is real, my friend, and so is the immortal being of pure light and love I kidnapped from it!

I hope that by hearing my story, you can experience a small part of the joy and exultation I felt the night of that car accident, when I left my earthly body, ascended into the divine light of eternal paradise, tackled an angel, and dragged it back with me to this world. God’s blessed realm is more beautiful than anything you’ve ever imagined, and it’s just as real as you or me or the radiant angel that is being held in my home against its will. When you get to heaven, the first thing you notice is a brilliant, shining light more magnificent than any sight on earth. It’s impossible to describe to someone who hasn’t seen it for himself or, at the very least, taken a good look at the glowing aura of the angel I forcibly abducted and now keep locked in my hallway closet. In that heavenly light, I felt a love so complete and unconditional and unlike anything I’ve felt before that I might be tempted to doubt my memory if I didn’t have living, breathing proof of it.

Instead, whenever I ask myself, “Did that really happen?” all I have to do is crack open that closet door and peer in at the angel I kidnapped. If I had the choice, I would’ve stayed there at the foot of God’s throne forever, listening to the majestic choir of angels singing endless hymns to His glory, but when I heard the voices of the ER doctors calling me back, I knew it wasn’t my time yet. I also knew I had to act quickly if I wanted to prove to everyone just how glorious heaven really is. That’s when I decided to grab the nearest angel and hold on with all my strength as it kicked and struggled so that I could bring it back to earth with me. And now I’m here to attest that His heavenly creation is real—so real that you can reach out and touch a small portion of it, just as long as it’s securely tied down or you hold it firmly by its wings to make sure that it doesn’t escape!

Since waking up after that accident with a memory of heaven etched indelibly in my mind and a frightened seraphim clutched tightly in my arms, I’ve known a new kind of peace and serenity. I no longer fear death, because I know that when I die, I will experience an eternity of bliss just like what I felt in God’s everlasting paradise while I repeatedly punched one of His angels in the face in order to subdue it. It was the most beautiful thing I’ve ever experienced. I hope my testament is enough to convince you that heaven is real, but if you find it hard to believe every detail of my account, don’t be troubled. One day, you too will see what I have seen and know what I know. Until then, try to love God in your own way, and know that He has promised all of us a place by His side in the next world. Or better yet, just come on over to my house and see this angel I’ve got.
I definitely won’t be letting it go!
My kids LOVE their DIY Marshmallow Guns!! My sister made them for my kids, and every day they ask to have a Marshmallow Gun Fight. We may have had a few in the last week and I’m still randomly finding marshmallows throughout the house. 😉 I convinced her to do a tutorial for you guys so that you could make some too. They are so cheap to make and can be customized by the color you paint them. AND they make great gifts! In fact, I wanted to try and save these guns to give for Christmas, but just had to give them early so we could have a little fun before the 25th!

These Marshmallow Guns are so simple and will definitely be loved by the kids. Here is what you’ll need to make them:

SUPPLIES:
14 inches of 1/2″ SCH40 PVC pipe
PVC cutters
Spray paint, gloss or semi-gloss
Latex gloves
1/2″ PVC fittings: (2) 1/2″ Elbows, (1) 1/2″ Tee, (1) 1/2″ Coupling, (1) 1/2″ Cap

INSTRUCTIONS:
1. Gather your materials at a nearby hardware store. Both Lowes and Home Depot sell the PVC fittings individually or in packs of 10. Be sure to grab an oil-based spray paint so that you can put your mouth on the gun (once it has dried for a week).
2. Cut your PVC as follows: (3) 2″ pieces, (1) 3″ piece and (1) 4″ piece. We’ve used a PVC cutter (around $10) or a saw. The PVC cutter will give you a smoother finish. You can cut the pieces longer if you would like a larger gun. We found that these sizes worked perfectly for small kids. Otherwise it might be too hard for them to blow out the marshmallows.
3. Begin with a two-inch piece and connect an elbow fitting. This 2″ PVC piece is where you will insert the mallow and will serve as your mouthpiece. Add another 2″ piece to the other end of your elbow piece. Add your second elbow fitting to the 2nd 2″ piece. Attach the last 2″ piece to the other elbow opening. Next will come your tee, and then your 4″ PVC out the other straight end, finished with the coupling fitting (this is where the marshmallow will come out). Connect the 3″ piece to the bottom of the tee and complete with the cap (this will serve as your handle). Be sure that all pieces are pushed tightly together.
4. Before you begin spray painting, you’ll want to make sure that you have a spot to hang the gun to dry. Anything with a hook that you can rest the mouthpiece opening on works well. It will need at least 24 hours to dry.
5. Spray the tip of the gun first (the coupling, where the marshmallow will exit). Let dry.
6. With a latex glove on, insert your index finger into the coupling opening. This will allow you to hold the gun while spraying all the different angles of your gun. Hang to dry. Your gun should be dry within 24 hours.

*For a cute gift idea, attach a small bag of marshmallows. For a fun white elephant gift, include two guns with a bag of marshmallows. My husband and I (6 years into marriage) spent a date building, then shooting, mallow guns. It was the best date we’ve ever had!

For more great gifts my kids love, check out these: DIY Lava Lamps, Homemade Gak, DIY Dinosaur Fossils. SO many great ideas for kids. Hope you like some of them! For all Gift Ideas on the site go HERE. For all kid activities go HERE. And get weekly emails with monthly freebies by signing up for the Lil’ Luna newsletter. 🙂 For even more great ideas follow me on Facebook – Pinterest – Instagram – Twitter – Periscope – Snapchat. ENJOY!
/* Copyright 2020 <NAME> (https://github.com/adamgreen/) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ /* Declares registers, bit fields, and inline routines to utilize the debug hardware on the Cortex-M architecture. */ #ifndef DEBUG_CM3_H_ #define DEBUG_CM3_H_ #include <cmsis.h> #include <stdio.h> #include <core/try_catch.h> /* Data Watchpoint and Trace Registers */ typedef struct { /* Comparator register. */ __IO uint32_t COMP; /* Comparator Mask register. */ __IO uint32_t MASK; /* Comparator Function register. */ __IO uint32_t FUNCTION; /* Reserved 4 bytes to pad struct size out to 16 bytes. */ __I uint32_t Reserved; } DWT_COMP_Type; /* Flash Patch and Breakpoint Registers */ typedef struct { /* FlashPatch Control Register. */ __IO uint32_t CTRL; /* FlashPatch Remap Register. */ __IO uint32_t REMAP; } FPB_Type; /* Memory mapping of Cortex-M3 Debug Hardware */ #define DWT_COMP_BASE (0xE0001020) #define DWT_COMP_ARRAY ((DWT_COMP_Type*) DWT_COMP_BASE) #define FPB_BASE (0xE0002000) #define FPB_COMP_BASE (0xE0002008) #define FPB ((FPB_Type*) FPB_BASE) #define FPB_COMP_ARRAY ((uint32_t*) FPB_COMP_BASE) /* Debug Halting Control and Status Register Bits */ /* Enable halt mode debug. If set to 1 then JTAG debugging is being used. */ #define CoreDebug_DHCSR_C_DEBUGEN (1 << 0) /* Debug Exception and Monitor Control Registers Bits */ /* Global enable for all DWT and ITM features. */ #define CoreDebug_DEMCR_TRCENA (1 << 24) /* Monitor Single Step. Set to 1 to single step instruction when exiting monitor. */ #define CoreDebug_DEMCR_MON_STEP (1 << 18) /* Monitor Pending. Set to 1 to pend a monitor exception. */ #define CoreDebug_DEMCR_MON_PEND (1 << 17) /* Monitor Enable. Set to 1 to enable the debug monitor exception. */ #define CoreDebug_DEMCR_MON_EN (1 << 16) /* Debug Fault Status Register Bits. Clear a bit by writing a 1 to it. */ /* Indicates that EDBGRQ was asserted. */ #define SCB_DFSR_EXTERNAL (1 << 4) /* Indicates that a vector catch was triggered. */ #define SCB_DFSR_VCATCH (1 << 3) /* Indicates that a DWT debug event was triggered. */ #define SCB_DFSR_DWTTRAP (1 << 2) /* Indicates a BKPT instruction or FPB match was encountered. */ #define SCB_DFSR_BKPT (1 << 1) /* Indicates that a single step has occurred. 
*/ #define SCB_DFSR_HALTED 1 static __INLINE int isDebuggerAttached(void) { return (CoreDebug->DHCSR & CoreDebug_DHCSR_C_DEBUGEN); } static __INLINE void waitForDebuggerToDetach(uint32_t timeOut) { while (timeOut-- > 0 && isDebuggerAttached()) { } if (isDebuggerAttached()) __throw(timeoutException); } static __INLINE void enableDebugMonitor() { CoreDebug->DEMCR |= CoreDebug_DEMCR_MON_EN; } static __INLINE void enableDWTandITM(void) { CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA; } static __INLINE void disableDWTandITM(void) { CoreDebug->DEMCR &= ~CoreDebug_DEMCR_TRCENA; } static __INLINE void disableSingleStep(void) { CoreDebug->DEMCR &= ~CoreDebug_DEMCR_MON_STEP; } static __INLINE void enableSingleStep(void) { CoreDebug->DEMCR |= CoreDebug_DEMCR_MON_STEP; } static __INLINE void clearMonitorPending(void) { CoreDebug->DEMCR &= ~CoreDebug_DEMCR_MON_PEND; } static __INLINE void setMonitorPending(void) { CoreDebug->DEMCR |= CoreDebug_DEMCR_MON_PEND; } static __INLINE uint32_t isMonitorPending(void) { return CoreDebug->DEMCR & CoreDebug_DEMCR_MON_PEND; } /* Data Watchpoint and Trace Comparator Function Bits. */ /* Matched. Read-only. Set to 1 to indicate that this comparator has been matched. Cleared on read. */ #define DWT_COMP_FUNCTION_MATCHED (1 << 24) /* Data Address Linked Index 1. */ #define DWT_COMP_FUNCTION_DATAVADDR1 (0xF << 16) /* Data Address Linked Index 0. */ #define DWT_COMP_FUNCTION_DATAVADDR0 (0xF << 12) /* Selects size for data value matches. */ #define DWT_COMP_FUNCTION_DATAVSIZE_MASK (3 << 10) /* Byte */ #define DWT_COMP_FUNCTION_DATAVSIZE_BYTE (0 << 10) /* Halfword */ #define DWT_COMP_FUNCTION_DATAVSIZE_HALFWORD (1 << 10) /* Word */ #define DWT_COMP_FUNCTION_DATAVSIZE_WORD (2 << 10) /* Data Value Match. Set to 0 for address compare and 1 for data value compare. */ #define DWT_COMP_FUNCTION_DATAVMATCH (1 << 8) /* Cycle Count Match. Set to 1 for enabling cycle count match and 0 otherwise. Only valid on comparator 0. */ #define DWT_COMP_FUNCTION_CYCMATCH (1 << 7) /* Enable Data Trace Address offset packets. 0 to disable. */ #define DWT_COMP_FUNCTION_EMITRANGE (1 << 5) /* Selects action to be taken on match. 
*/ #define DWT_COMP_FUNCTION_FUNCTION_MASK 0xF /* Disabled */ #define DWT_COMP_FUNCTION_FUNCTION_DISABLED 0x0 /* Instruction Watchpoint */ #define DWT_COMP_FUNCTION_FUNCTION_INSTRUCTION 0x4 /* Data Read Watchpoint */ #define DWT_COMP_FUNCTION_FUNCTION_DATA_READ 0x5 /* Data Write Watchpoint */ #define DWT_COMP_FUNCTION_FUNCTION_DATA_WRITE 0x6 /* Data Read/Write Watchpoint */ #define DWT_COMP_FUNCTION_FUNCTION_DATA_READWRITE 0x7 /* DWT - Data Watchpoint Trace Routines */ static __INLINE uint32_t getDWTComparatorCount(void) { return (DWT->CTRL >> 28); } static __INLINE void clearDWTComparator(DWT_COMP_Type* pComparatorStruct) { pComparatorStruct->COMP = 0; pComparatorStruct->MASK = 0; pComparatorStruct->FUNCTION &= ~(DWT_COMP_FUNCTION_DATAVMATCH | DWT_COMP_FUNCTION_CYCMATCH | DWT_COMP_FUNCTION_EMITRANGE | DWT_COMP_FUNCTION_FUNCTION_MASK); } static __INLINE void clearDWTComparators(void) { DWT_COMP_Type* pComparatorStruct = DWT_COMP_ARRAY; uint32_t comparatorCount; uint32_t i; comparatorCount = getDWTComparatorCount(); for (i = 0 ; i < comparatorCount ; i++) { clearDWTComparator(pComparatorStruct); pComparatorStruct++; } } static __INLINE void initDWT(void) { clearDWTComparators(); } static __INLINE uint32_t maskOffDWTFunctionBits(uint32_t functionValue) { return functionValue & (DWT_COMP_FUNCTION_DATAVADDR1 | DWT_COMP_FUNCTION_DATAVADDR0 | DWT_COMP_FUNCTION_DATAVSIZE_MASK | DWT_COMP_FUNCTION_DATAVMATCH | DWT_COMP_FUNCTION_CYCMATCH | DWT_COMP_FUNCTION_EMITRANGE | DWT_COMP_FUNCTION_FUNCTION_MASK); } static __INLINE int doesDWTComparatorAddressMatch(DWT_COMP_Type* pComparator, uint32_t address) { return pComparator->COMP == address; } static __INLINE uint32_t calculateLog2(uint32_t value) { uint32_t log2 = 0; while (value > 1) { value >>= 1; log2++; } return log2; } static __INLINE int doesDWTComparatorMaskMatch(DWT_COMP_Type* pComparator, uint32_t size) { return pComparator->MASK == calculateLog2(size); } static __INLINE int doesDWTComparatorFunctionMatch(DWT_COMP_Type* pComparator, uint32_t function) { uint32_t importantFunctionBits = maskOffDWTFunctionBits(pComparator->FUNCTION); return importantFunctionBits == function; } static __INLINE int doesDWTComparatorMatch(DWT_COMP_Type* pComparator, uint32_t address, uint32_t size, uint32_t function) { return doesDWTComparatorFunctionMatch(pComparator, function) && doesDWTComparatorAddressMatch(pComparator, address) && doesDWTComparatorMaskMatch(pComparator, size); } static __INLINE DWT_COMP_Type* findDWTComparator(uint32_t watchpointAddress, uint32_t watchpointSize, uint32_t watchpointType) { DWT_COMP_Type* pCurrentComparator = DWT_COMP_ARRAY; uint32_t comparatorCount; uint32_t i; comparatorCount = getDWTComparatorCount(); for (i = 0 ; i < comparatorCount ; i++) { if (doesDWTComparatorMatch(pCurrentComparator, watchpointAddress, watchpointSize, watchpointType)) return pCurrentComparator; pCurrentComparator++; } /* Return NULL if no DWT comparator is already enabled for this watchpoint. 
*/ return NULL; } static __INLINE int isDWTComparatorFree(DWT_COMP_Type* pComparator) { return (pComparator->FUNCTION & DWT_COMP_FUNCTION_FUNCTION_MASK) == DWT_COMP_FUNCTION_FUNCTION_DISABLED; } static __INLINE DWT_COMP_Type* findFreeDWTComparator(void) { DWT_COMP_Type* pCurrentComparator = DWT_COMP_ARRAY; uint32_t comparatorCount; uint32_t i; comparatorCount = getDWTComparatorCount(); for (i = 0 ; i < comparatorCount ; i++) { if (isDWTComparatorFree(pCurrentComparator)) { return pCurrentComparator; } pCurrentComparator++; } /* Return NULL if there are no free DWT comparators. */ return NULL; } static __INLINE int isPowerOf2(uint32_t value) { return (value & (value - 1)) == 0; } static __INLINE int isAddressAlignedToSize(uint32_t address, uint32_t size) { uint32_t addressMask = ~(size - 1); return address == (address & addressMask); } static __INLINE int isValidDWTComparatorSize(uint32_t watchpointSize) { return isPowerOf2(watchpointSize); } static __INLINE int isValidDWTComparatorAddress(uint32_t watchpointAddress, uint32_t watchpointSize) { return isAddressAlignedToSize(watchpointAddress, watchpointSize); } static __INLINE int isValidDWTComparatorType(uint32_t watchpointType) { return (watchpointType == DWT_COMP_FUNCTION_FUNCTION_DATA_READ) || (watchpointType == DWT_COMP_FUNCTION_FUNCTION_DATA_WRITE) || (watchpointType == DWT_COMP_FUNCTION_FUNCTION_DATA_READWRITE); } static __INLINE int isValidDWTComparatorSetting(uint32_t watchpointAddress, uint32_t watchpointSize, uint32_t watchpointType) { return isValidDWTComparatorSize(watchpointSize) && isValidDWTComparatorAddress(watchpointAddress, watchpointSize) && isValidDWTComparatorType(watchpointType); } static __INLINE int attemptToSetDWTComparatorMask(DWT_COMP_Type* pComparator, uint32_t watchpointSize) { uint32_t maskBitCount; maskBitCount = calculateLog2(watchpointSize); pComparator->MASK = maskBitCount; /* Processor may limit number of bits to be masked off so check. */ return pComparator->MASK == maskBitCount; } static __INLINE int attemptToSetDWTComparator(DWT_COMP_Type* pComparator, uint32_t watchpointAddress, uint32_t watchpointSize, uint32_t watchpointType) { if (!attemptToSetDWTComparatorMask(pComparator, watchpointSize)) return 0; pComparator->COMP = watchpointAddress; pComparator->FUNCTION = watchpointType; return 1; } static __INLINE DWT_COMP_Type* enableDWTWatchpoint(uint32_t watchpointAddress, uint32_t watchpointSize, uint32_t watchpointType) { DWT_COMP_Type* pComparator = NULL; pComparator = findDWTComparator(watchpointAddress, watchpointSize, watchpointType); if (pComparator) { /* This watchpoint has already been set so return a pointer to it. */ return pComparator; } pComparator = findFreeDWTComparator(); if (!pComparator) { /* There are no free comparators left. */ return NULL; } if (!attemptToSetDWTComparator(pComparator, watchpointAddress, watchpointSize, watchpointType)) { /* Failed set due to the size being larger than supported by CPU. */ return NULL; } /* Successfully configured a free comparator for this watchpoint. */ return pComparator; } static __INLINE DWT_COMP_Type* disableDWTWatchpoint(uint32_t watchpointAddress, uint32_t watchpointSize, uint32_t watchpointType) { DWT_COMP_Type* pComparator = NULL; pComparator = findDWTComparator(watchpointAddress, watchpointSize, watchpointType); if (!pComparator) { /* This watchpoint not set so return NULL. */ return NULL; } clearDWTComparator(pComparator); return pComparator; } /* FlashPatch Control Register Bits. */ /* Flash Patch breakpoint architecture revision. 
0 for revision 1 and 1 for revision 2. */ #define FP_CTRL_REV_SHIFT 28 #define FP_CTRL_REV_MASK (0xF << FP_CTRL_REV_SHIFT) #define FP_CTRL_REVISION2 0x1 /* Most significant bits of number of instruction address comparators. Read-only */ #define FP_CTRL_NUM_CODE_MSB_SHIFT 12 #define FP_CTRL_NUM_CODE_MSB_MASK (0x7 << FP_CTRL_NUM_CODE_MSB_SHIFT) /* Least significant bits of number of instruction address comparators. Read-only */ #define FP_CTRL_NUM_CODE_LSB_SHIFT 4 #define FP_CTRL_NUM_CODE_LSB_MASK (0xF << FP_CTRL_NUM_CODE_LSB_SHIFT) /* Number of instruction literal address comparators. Read only */ #define FP_CTRL_NUM_LIT_SHIFT 8 #define FP_CTRL_NUM_LIT_MASK (0xF << FP_CTRL_NUM_LIT_SHIFT) /* This Key field must be set to 1 when writing or the write will be ignored. */ #define FP_CTRL_KEY (1 << 1) /* Enable bit for the FPB. Set to 1 to enable FPB. */ #define FP_CTRL_ENABLE 1 /* FlashPatch Comparator Register Bits for revision 1. */ /* Defines the behaviour for code address comparators. */ #define FP_COMP_REPLACE_SHIFT 30 #define FP_COMP_REPLACE_MASK (0x3U << FP_COMP_REPLACE_SHIFT) /* Remap to specified address in SRAM. */ #define FP_COMP_REPLACE_REMAP (0x0U << FP_COMP_REPLACE_SHIFT) /* Breakpoint on lower halfword. */ #define FP_COMP_REPLACE_BREAK_LOWER (0x1U << FP_COMP_REPLACE_SHIFT) /* Breakpoint on upper halfword. */ #define FP_COMP_REPLACE_BREAK_UPPER (0x2U << FP_COMP_REPLACE_SHIFT) /* Breakpoint on word. */ #define FP_COMP_REPLACE_BREAK (0x3U << FP_COMP_REPLACE_SHIFT) /* Specified bits 28:2 of the address to be use for match on this comparator. */ #define FP_COMP_COMP_SHIFT 2 #define FP_COMP_COMP_MASK (0x07FFFFFF << FP_COMP_COMP_SHIFT) /* Enables this comparator. Set to 1 to enable. */ #define FP_COMP_ENABLE 1 /* FlashPatch Comparator Register Bits for revision 2. */ /* Enables this comparator for flash patching when FP_COMP_BE is 0. Set to 1 to enable. */ #define FP_COMP_FE (1 << 31) /* Enables this comparator as a breakpoint. Set to 1 to enable. */ #define FP_COMP_BE 1 /* FPB - Flash Patch Breakpoint Routines. */ static __INLINE uint32_t getFPBRevision(void) { uint32_t controlValue = FPB->CTRL; return ((controlValue & FP_CTRL_REV_MASK) >> FP_CTRL_REV_SHIFT); } static __INLINE uint32_t getFPBCodeComparatorCount(void) { uint32_t controlValue = FPB->CTRL; return (((controlValue & FP_CTRL_NUM_CODE_MSB_MASK) >> 8) | ((controlValue & FP_CTRL_NUM_CODE_LSB_MASK) >> 4)); } static __INLINE uint32_t getFPBLiteralComparatorCount(void) { uint32_t controlValue = FPB->CTRL; return ((controlValue & FP_CTRL_NUM_LIT_MASK) >> FP_CTRL_NUM_LIT_SHIFT); } static __INLINE void clearFPBComparator(uint32_t* pComparator) { *pComparator = 0; } static __INLINE int isAddressAboveLowestHalfGig(uint32_t address) { return (int)(address & 0xE0000000); } static __INLINE int isAddressOdd(uint32_t address) { return (int)(address & 0x1); } static __INLINE int isBreakpointAddressInvalid(uint32_t breakpointAddress) { if (getFPBRevision() == FP_CTRL_REVISION2) { /* On revision 2, can set breakpoint at any address in the 4GB range, except for at an odd addresses. 
*/ return isAddressOdd(breakpointAddress); } else { /* On revision 1, can only set a breakpoint on addresses where the upper 3-bits are all 0 (upper 3.5GB is off limits) and the address is half-word aligned */ return (isAddressAboveLowestHalfGig(breakpointAddress) || isAddressOdd(breakpointAddress)); } } static __INLINE int isAddressInUpperHalfword(uint32_t address) { return (int)(address & 0x2); } static __INLINE uint32_t calculateFPBComparatorReplaceValue(uint32_t breakpointAddress, int32_t is32BitInstruction) { if (is32BitInstruction) return FP_COMP_REPLACE_BREAK; else if (isAddressInUpperHalfword(breakpointAddress)) return FP_COMP_REPLACE_BREAK_UPPER; else return FP_COMP_REPLACE_BREAK_LOWER; } static __INLINE uint32_t calculateFPBComparatorValueRevision1(uint32_t breakpointAddress, int32_t is32BitInstruction) { uint32_t comparatorValue; comparatorValue = (breakpointAddress & FP_COMP_COMP_MASK); comparatorValue |= FP_COMP_ENABLE; comparatorValue |= calculateFPBComparatorReplaceValue(breakpointAddress, is32BitInstruction); return comparatorValue; } static __INLINE uint32_t calculateFPBComparatorValueRevision2(uint32_t breakpointAddress) { return breakpointAddress | FP_COMP_BE; } static __INLINE uint32_t calculateFPBComparatorValue(uint32_t breakpointAddress, int32_t is32BitInstruction) { if (isBreakpointAddressInvalid(breakpointAddress)) return (uint32_t)~0U; if (getFPBRevision() == FP_CTRL_REVISION2) return calculateFPBComparatorValueRevision2(breakpointAddress); else return calculateFPBComparatorValueRevision1(breakpointAddress, is32BitInstruction); } static __INLINE uint32_t maskOffFPBComparatorReservedBits(uint32_t comparatorValue) { if (getFPBRevision() == FP_CTRL_REVISION2) return comparatorValue; else return (comparatorValue & (FP_COMP_REPLACE_MASK | FP_COMP_COMP_MASK | FP_COMP_ENABLE)); } static __INLINE int isFPBComparatorEnabledRevision1(uint32_t comparator) { return (int)(comparator & FP_COMP_ENABLE); } static __INLINE int isFPBComparatorEnabledRevision2(uint32_t comparator) { return (int)((comparator & FP_COMP_BE) || (comparator & FP_COMP_FE)); } static __INLINE int isFPBComparatorEnabled(uint32_t comparator) { if (getFPBRevision() == FP_CTRL_REVISION2) return isFPBComparatorEnabledRevision2(comparator); else return isFPBComparatorEnabledRevision1(comparator); } static __INLINE uint32_t* findFPBBreakpointComparator(uint32_t breakpointAddress, int32_t is32BitInstruction) { uint32_t* pCurrentComparator = FPB_COMP_ARRAY; uint32_t comparatorValueForThisBreakpoint; uint32_t codeComparatorCount; uint32_t i; comparatorValueForThisBreakpoint = calculateFPBComparatorValue(breakpointAddress, is32BitInstruction); codeComparatorCount = getFPBCodeComparatorCount(); for (i = 0 ; i < codeComparatorCount ; i++) { uint32_t maskOffReservedBits; maskOffReservedBits = maskOffFPBComparatorReservedBits(*pCurrentComparator); if (comparatorValueForThisBreakpoint == maskOffReservedBits) return pCurrentComparator; pCurrentComparator++; } /* Return NULL if no FPB comparator is already enabled for this breakpoint. */ return NULL; } static __INLINE uint32_t* findFreeFPBBreakpointComparator(void) { uint32_t* pCurrentComparator = FPB_COMP_ARRAY; uint32_t codeComparatorCount; uint32_t i; codeComparatorCount = getFPBCodeComparatorCount(); for (i = 0 ; i < codeComparatorCount ; i++) { if (!isFPBComparatorEnabled(*pCurrentComparator)) return pCurrentComparator; pCurrentComparator++; } /* Return NULL if no FPB breakpoint comparators are free. 
*/ return NULL; } static __INLINE uint32_t* enableFPBBreakpoint(uint32_t breakpointAddress, int32_t is32BitInstruction) { uint32_t* pExistingFPBBreakpoint; uint32_t* pFreeFPBBreakpointComparator; pExistingFPBBreakpoint = findFPBBreakpointComparator(breakpointAddress, is32BitInstruction); if (pExistingFPBBreakpoint) { /* This breakpoint is already set to just return pointer to existing comparator. */ return pExistingFPBBreakpoint; } pFreeFPBBreakpointComparator = findFreeFPBBreakpointComparator(); if (!pFreeFPBBreakpointComparator) { /* All FPB breakpoint comparator slots are used so return NULL as error indicator. */ return NULL; } *pFreeFPBBreakpointComparator = calculateFPBComparatorValue(breakpointAddress, is32BitInstruction); return pFreeFPBBreakpointComparator; } static __INLINE uint32_t* disableFPBBreakpointComparator(uint32_t breakpointAddress, int32_t is32BitInstruction) { uint32_t* pExistingFPBBreakpoint; pExistingFPBBreakpoint = findFPBBreakpointComparator(breakpointAddress, is32BitInstruction); if (pExistingFPBBreakpoint) clearFPBComparator(pExistingFPBBreakpoint); return pExistingFPBBreakpoint; } static __INLINE void clearFPBComparators(void) { uint32_t* pCurrentComparator = FPB_COMP_ARRAY; uint32_t codeComparatorCount; uint32_t literalComparatorCount; uint32_t totalComparatorCount; uint32_t i; codeComparatorCount = getFPBCodeComparatorCount(); literalComparatorCount = getFPBLiteralComparatorCount(); totalComparatorCount = codeComparatorCount + literalComparatorCount; for (i = 0 ; i < totalComparatorCount ; i++) { clearFPBComparator(pCurrentComparator); pCurrentComparator++; } } static __INLINE void enableFPB(void) { FPB->CTRL |= (FP_CTRL_KEY | FP_CTRL_ENABLE); } static __INLINE void disableFPB(void) { FPB->CTRL = FP_CTRL_KEY | (FPB->CTRL & ~FP_CTRL_ENABLE); } static __INLINE void initFPB(void) { clearFPBComparators(); enableFPB(); } /* Memory Protection Unit Type Register Bits. */ /* Number of instruction regions supported by MPU. 0 for Cortex-M3 */ #define MPU_TYPE_IREGION_SHIFT 16 #define MPU_TYPE_IREGION_MASK (0xFF << MPU_TYPE_IREGION_SHIFT) /* Number of data regions supported by MPU. */ #define MPU_TYPE_DREGION_SHIFT 8 #define MPU_TYPE_DREGION_MASK (0xFF << MPU_TYPE_DREGION_SHIFT) /* Are instruction and data regions configured separately? 1 for yes and 0 otherwise. */ #define MPU_TYPE_SEPARATE 0x1 /* Memory Protection Unit Control Register Bits. */ /* Default memory map as background region for privileged access. 1 enables. */ #define MPU_CTRL_PRIVDEFENA (1 << 2) /* Hard fault and NMI exceptions to use MPU. 0 disables MPU for these handlers. */ #define MPU_CTRL_HFNMIENA (1 << 1) /* MPU Enable. 1 enables and disabled otherwise. */ #define MPU_CTRL_ENABLE 1 /* Memory Protection Unit Region Region Number Register Bits. */ #define MPU_RNR_REGION_MASK 0xFF /* Memory Protection Unit Region Base Address Register Bits. */ /* Base address of this region. */ #define MPU_RBAR_ADDR_SHIFT 5 #define MPU_RBAR_ADDR_MASK (0x7FFFFFF << MPU_RBAR_ADDR_SHIFT) /* Are the region bits in this register valid or should RNR be used instead. */ #define MPU_RBAR_VALID (1 << 4) /* The region number. Only used when MPU_RBAR_VALID is one. */ #define MPU_RBAR_REGION_MASK 0xF /* Memory Protection Unit Region Attribute and Size Register Bits. */ /* eXecute Never bit. 1 means code can't execute from this region. */ #define MPU_RASR_XN (1 << 28) /* Access permission bits. */ #define MPU_RASR_AP_SHIFT 24 #define MPU_RASR_AP_MASK (0x7 << MPU_RASR_AP_SHIFT) /* TEX, C, and B bits together determine memory type. 
*/ #define MPU_RASR_TEX_SHIFT 19 #define MPU_RASR_TEX_MASK (0x7 << MPU_RASR_TEX_SHIFT) #define MPU_RASR_S (1 << 18) #define MPU_RASR_C (1 << 17) #define MPU_RASR_B (1 << 16) /* Sub-region disable bits. */ #define MPU_RASR_SRD_SHIFT 8 #define MPU_RASR_SRD_MASK (0xff << MPU_RASR_SRD_SHIFT) /* Region size in 2^(value + 1) */ #define MPU_RASR_SIZE_SHIFT 1 #define MPU_RASR_SIZE_MASK (0x1F << MPU_RASR_SIZE_SHIFT) /* Region enable. 1 enables. */ #define MPU_RASR_ENABLE 1 /* MPU - Memory Protection Unit Routines. */ static __INLINE uint32_t getMPUDataRegionCount(void) { return (MPU->TYPE & MPU_TYPE_DREGION_MASK) >> MPU_TYPE_DREGION_SHIFT; } static __INLINE uint32_t getHighestMPUDataRegionIndex(void) { return getMPUDataRegionCount() - 1; } static __INLINE int isMPURegionNumberValid(uint32_t regionNumber) { return regionNumber < getMPUDataRegionCount(); } static __INLINE int isMPUNotPresent(void) { return getMPUDataRegionCount() == 0; } static __INLINE uint32_t getMPUControlValue(void) { if (isMPUNotPresent()) return ~0U; return (MPU->CTRL); } static __INLINE void setMPUControlValue(uint32_t newControlValue) { if (isMPUNotPresent()) return; MPU->CTRL = newControlValue; __DSB(); __ISB(); } static __INLINE void disableMPU(void) { if (isMPUNotPresent()) return; MPU->CTRL &= ~MPU_CTRL_ENABLE; __DSB(); __ISB(); } static __INLINE void enableMPU(void) { if (isMPUNotPresent()) return; MPU->CTRL |= MPU_CTRL_ENABLE; __DSB(); __ISB(); } static __INLINE void enableMPUWithHardAndNMIFaults(void) { if (isMPUNotPresent()) return; MPU->CTRL |= MPU_CTRL_ENABLE | MPU_CTRL_HFNMIENA; __DSB(); __ISB(); } static __INLINE int prepareToAccessMPURegion(uint32_t regionNumber) { if (!isMPURegionNumberValid(regionNumber)) return 0; MPU->RNR = regionNumber; return 1; } static __INLINE uint32_t getCurrentMPURegionNumber(void) { return MPU->RNR; } static __INLINE void setMPURegionAddress(uint32_t address) { if (isMPUNotPresent()) return; MPU->RBAR = address & MPU_RBAR_ADDR_MASK; } static __INLINE uint32_t getMPURegionAddress(void) { if (isMPUNotPresent()) return 0; return MPU->RBAR & MPU_RBAR_ADDR_MASK; } static __INLINE void setMPURegionAttributeAndSize(uint32_t attributeAndSize) { if (isMPUNotPresent()) return; MPU->RASR = attributeAndSize; } static __INLINE uint32_t getMPURegionAttributeAndSize(void) { if (isMPUNotPresent()) return 0; return MPU->RASR; } static __INLINE uint32_t getCurrentlyExecutingExceptionNumber(void) { return (__get_IPSR() & 0xFF); } static __INLINE uint32_t getCurrentSysTickControlValue(void) { return SysTick->CTRL; } static __INLINE uint32_t getCurrentSysTickReloadValue(void) { return SysTick->LOAD; } static __INLINE void setSysTickControlValue(uint32_t controlValue) { SysTick->CTRL = controlValue; } static __INLINE void setSysTickReloadValue(uint32_t reloadValue) { SysTick->LOAD = reloadValue & SysTick_LOAD_RELOAD_Msk; } static __INLINE uint32_t getSysTick10MillisecondInterval(void) { return SysTick->CALIB & SysTick_CALIB_TENMS_Msk; } static __INLINE void disableSysTick(void) { SysTick->CTRL = 0; } static __INLINE void enableSysTickWithCClkNoInterrupt(void) { SysTick->VAL = 0; SysTick->CTRL = SysTick_CTRL_ENABLE_Msk | SysTick_CTRL_CLKSOURCE_Msk; } static __INLINE void start10MillisecondSysTick(void) { if (getSysTick10MillisecondInterval() == 0) return; disableSysTick(); setSysTickReloadValue(getSysTick10MillisecondInterval()); enableSysTickWithCClkNoInterrupt(); } static __INLINE int has10MillisecondSysTickExpired(void) { if (getSysTick10MillisecondInterval() == 0) return 1; return SysTick->CTRL & 
SysTick_CTRL_COUNTFLAG_Msk; } /* Program Status Register Bits. */ /* Was the stack 8-byte aligned during auto stacking. */ #define PSR_STACK_ALIGN (1 << 9) #endif /* DEBUG_CM3_H_ */
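The header above only declares inline helpers, so nothing runs until application code calls them. Below is a minimal, hypothetical usage sketch: the filename debug_cm3.h is inferred from the include guard, g_buffer and the breakpoint address are invented for illustration, and the routines called (initFPB, enableFPBBreakpoint, enableDWTWatchpoint) plus the DWT_COMP_FUNCTION_FUNCTION_DATA_WRITE constant are the ones referenced in the header itself.

/* Hypothetical sketch, not part of the original header. Assumes
   debug_cm3.h pulls in the CMSIS core definitions it relies on, and that
   the DWT unit is already enabled (e.g. DEMCR.TRCENA set by the debugger). */
#include "debug_cm3.h"

static uint32_t g_buffer[4]; /* Illustrative data to watch. */

void setupDebugTraps(void)
{
    /* Clear all FPB comparators and enable the unit. */
    initFPB();

    /* Breakpoint on a 16-bit instruction at an even, low-memory address
       (valid for both FPB revisions). */
    if (enableFPBBreakpoint(0x00000400, 0) == NULL)
    {
        /* No free FPB comparator was available. */
    }

    /* Watchpoint on writes to a power-of-2 sized, size-aligned region. */
    if (enableDWTWatchpoint((uint32_t)&g_buffer[0], sizeof(uint32_t),
                            DWT_COMP_FUNCTION_FUNCTION_DATA_WRITE) == NULL)
    {
        /* No free DWT comparator, or the mask size is unsupported. */
    }
}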
/** * Updates the "Press Menu for more options" hint based on the current * state of the Phone. */ private void updateMenuButtonHint() { if (VDBG) log("updateMenuButtonHint()..."); boolean hintVisible = true; final boolean hasRingingCall = !mRingingCall.isIdle(); final boolean hasActiveCall = !mForegroundCall.isIdle(); final boolean hasHoldingCall = !mBackgroundCall.isIdle(); if (mInCallScreenMode == InCallScreenMode.CALL_ENDED) { hintVisible = false; } else if (hasRingingCall && !(hasActiveCall && !hasHoldingCall)) { hintVisible = false; } else if (!phoneIsInUse()) { hintVisible = false; } if (isTouchUiEnabled()) { hintVisible = false; } int hintVisibility = (hintVisible) ? View.VISIBLE : View.GONE; mCallCard.getMenuButtonHint().setVisibility(hintVisibility); }
// repo: pwcong/rc-component-x
import React, { useState } from 'react';
import Icon from '@rc-x/icon';
import { classNames, getPrefixCls } from '@rc-x/utils';
import Input, { IInputProps } from '@rc-x/input';

import './style.scss';

const baseCls = getPrefixCls('input-number');

const defaultStep = 1;
const defaultParser = (value: string) => Number(value);
const defaultFormatter = (value: number | string) => value.toString();
const defaultDecimalSeparator = '.';

export interface IInputNumberProps extends IInputProps {
  /** Maximum value */
  max?: number;
  /** Minimum value */
  min?: number;
  /** Step size */
  step?: number;
  /** Default value */
  defaultValue?: number;
  /** Current (controlled) value */
  value?: number;
  /** Numeric precision (decimal places) */
  precision?: number;
  /** Decimal separator */
  decimalSeparator?: string;
  /** Change callback */
  onChange?: (value: number) => void;
  /** Formats a number into its display string */
  formatter?: (value: number | string) => string;
  /** Parses a display string back into a number */
  parser?: (value: string) => number;
}

interface IForwardRefProps extends IInputNumberProps {
  forwardedRef?: React.Ref<any>;
}

const InputNumber: React.FunctionComponent<IForwardRefProps> = props => {
  const {
    className,
    forwardedRef,
    wrapperClassName,
    innerClassName,
    defaultValue,
    value: customValue,
    min,
    max,
    step = defaultStep,
    precision,
    decimalSeparator = defaultDecimalSeparator,
    formatter: customFormatter = defaultFormatter,
    parser: customParser = defaultParser,
    onChange
  } = props;

  const formatter = (value: number) => {
    if (precision !== undefined) {
      return customFormatter(
        !decimalSeparator || decimalSeparator === defaultDecimalSeparator
          ? value.toFixed(precision)
          : value.toFixed(precision).replace('.', decimalSeparator)
      );
    }
    return customFormatter(
      !decimalSeparator || decimalSeparator === defaultDecimalSeparator
        ? value.toString()
        : value.toString().replace('.', decimalSeparator)
    );
  };

  const valid = (value: number) => {
    if (min !== undefined && value < min) {
      return min;
    }
    if (max !== undefined && value > max) {
      return max;
    }
    return value;
  };

  const parser = (value: string) => {
    const v = customParser(
      !decimalSeparator || decimalSeparator === defaultDecimalSeparator
        ? value
        : value.replace(decimalSeparator, '.')
    );
    if (precision !== undefined) {
      return Number(v.toFixed(precision));
    }
    return v;
  };

  const arrowCls = getPrefixCls('arrow', baseCls);

  const [stateValue, setStateValue] = useState(
    defaultValue !== undefined ? defaultValue : 0
  );
  const [displayValue, setDisplayValue] = useState(
    formatter(customValue !== undefined ? customValue : stateValue)
  );

  // Shared by the up/down arrows: step the current value, clamp it, and
  // propagate the result to state, the onChange callback and the display.
  const handleStep = (direction: 1 | -1) => {
    const base = customValue !== undefined ? customValue : stateValue;
    const nextValue = base + direction * step;
    const v = valid(
      precision !== undefined ? Number(nextValue.toFixed(precision)) : nextValue
    );
    setStateValue(v);
    onChange && onChange(v);
    setDisplayValue(formatter(v));
  };

  return (
    <Input
      {...props}
      value={displayValue}
      className={classNames(baseCls, className)}
      wrapperClassName={classNames(
        getPrefixCls('wrapper', baseCls),
        wrapperClassName
      )}
      innerClassName={classNames(
        getPrefixCls('inner', baseCls),
        innerClassName
      )}
      ref={forwardedRef}
      slot={
        <div className={getPrefixCls('tools', baseCls)}>
          <div
            className={classNames(arrowCls, getPrefixCls('up', arrowCls))}
            onClick={() => handleStep(1)}
          >
            <Icon type="chevron-up" />
          </div>
          <div
            className={classNames(arrowCls, getPrefixCls('down', arrowCls))}
            onClick={() => handleStep(-1)}
          >
            <Icon type="chevron-down" />
          </div>
        </div>
      }
      onChange={value => {
        if (value === undefined || value === '') {
          setStateValue(0);
          onChange && onChange(0);
          setDisplayValue(formatter(0));
        } else if (/^.*\.$/.test(value)) {
          // Keep a trailing decimal point while the user is still typing.
          setDisplayValue(value);
        } else {
          const res = parser(value);
          if (!isNaN(res)) {
            const v = valid(res);
            setStateValue(v);
            onChange && onChange(v);
            setDisplayValue(formatter(v));
          } else {
            setDisplayValue(value);
          }
        }
      }}
    />
  );
};

InputNumber.defaultProps = {
  step: defaultStep,
  decimalSeparator: defaultDecimalSeparator,
  formatter: defaultFormatter,
  parser: defaultParser
};

export default React.forwardRef<any, IInputNumberProps>((props, ref) => {
  return <InputNumber {...props} forwardedRef={ref} />;
});
Senior members of the Polish government have warned the European Union that the Islamist terror attack in Barcelona is further proof that encouraging millions of migrants to settle in its territory has undermined public safety.

Polish Minister of the Interior Mariusz Błaszczak reminded listeners that “Poland is safe” — at least relative to Western and Northern European countries — because it has resisted attempts by the bloc to impose compulsory migrant quotas on it.

He explained that, unlike Spain, his country has “no enclaves where people do not integrate to the country where they emigrated”.

For the continent at large, however, he warned that the Barcelona atrocity was simply the “tragic end” of policies “inciting millions of people to cross the sea to come to Europe”.

He added: “We are dealing with a clash of civilizations, it must be said openly and this is the problem of all Europe.”

Polish Deputy Minister of Defence Michał Dworczyk echoed the interior minister’s sentiments, saying the attack on the Catalan capital was “another proof that migration policy and security policy must be conducted in a very thoughtful and responsible way”.

He also credited the relatively superior security situation in Poland as “among other things, the result of the government’s consistent policy” of opposition to mass immigration.

Dworczyk urged EU leaders to “review their ideas on migration policy” in light of the attack.

“We are all shaken by the information that comes to us from Spain, and we share the pain of the families of the victims.

“However, we cannot ignore the fact that we have a serious problem in Europe with the influx of illegal immigrants. It is a very bad idea to invite people who can not be controlled [and can] be said to pose a threat to EU citizens.”

Underlining the fact that “we do not accept and we will not accept that enclaves [are established] with people who do not assimilate, who do not want to belong to the society”, the defence minister reiterated his government’s opposition to migrant quotas.

“I hope that these dramatic events will also be an occasion for reflection for certain officials of the European Commission and some EU political leaders,” he said.

He suggested Barcelona might provide these individuals with “the opportunity to review their ideas on migration policy and the forced relocation of people whose identity can not be established with certainty”.

Follow Jack Montgomery on Twitter: @JackBMontgomery
/** * * @author Ikasan Development Team * */ public class ExclusionEventActionImpl implements ExclusionEventAction<byte[]> { public static final String RESUBMIT = "re-submitted"; public static final String IGNORED = "ignored"; private Long id; private String moduleName; private String flowName; private String errorUri; private String actionedBy; private String action; private byte[] event; private long timestamp; /** * Default constructor for Hibernate */ @SuppressWarnings("unused") private ExclusionEventActionImpl() { } /** * Constructor * * @param errorUri * @param actionedBy * @param action * @param event * @param moduleName * @param flowName */ public ExclusionEventActionImpl(String errorUri, String actionedBy, String action, byte[] event, String moduleName, String flowName) { super(); this.errorUri = errorUri; this.actionedBy = actionedBy; this.action = action; this.event = event; this.moduleName = moduleName; this.flowName = flowName; this.timestamp = new Date().getTime(); } /** * @return the id */ public Long getId() { return id; } /** * @param id the id to set */ public void setId(Long id) { this.id = id; } /** * @return the errorUri */ public String getErrorUri() { return errorUri; } /** * @param errorUri the errorUri to set */ public void setErrorUri(String errorUri) { this.errorUri = errorUri; } /** * @return the actionedBy */ public String getActionedBy() { return actionedBy; } /** * @param actionedBy the actionedBy to set */ public void setActionedBy(String actionedBy) { this.actionedBy = actionedBy; } /** * @return the action */ public String getAction() { return action; } /** * @param action the action to set */ public void setAction(String action) { this.action = action; } /** * @return the event */ public byte[] getEvent() { return event; } /** * @param event the event to set */ public void setEvent(byte[] event) { this.event = event; } /** * @return the moduleName */ public String getModuleName() { return moduleName; } /** * @param moduleName the moduleName to set */ public void setModuleName(String moduleName) { this.moduleName = moduleName; } /** * @return the flowName */ public String getFlowName() { return flowName; } /** * @param flowName the flowName to set */ public void setFlowName(String flowName) { this.flowName = flowName; } /** * @param timestamp the timestamp to set */ public void setTimestamp(long timestamp) { this.timestamp = timestamp; } /** * @return the timestamp */ public long getTimestamp() { return timestamp; } @Override public void setComment(String comment) { // not required for relational DB implementation. } @Override public String getComment() { // not required for relational DB implementation. return null; } /* (non-Javadoc) * @see java.lang.Object#hashCode() */ @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + ((action == null) ? 0 : action.hashCode()); result = prime * result + ((actionedBy == null) ? 0 : actionedBy.hashCode()); result = prime * result + ((errorUri == null) ? 0 : errorUri.hashCode()); result = prime * result + Arrays.hashCode(event); result = prime * result + ((flowName == null) ? 0 : flowName.hashCode()); result = prime * result + ((id == null) ? 0 : id.hashCode()); result = prime * result + ((moduleName == null) ? 
0 : moduleName.hashCode()); result = prime * result + (int) (timestamp ^ (timestamp >>> 32)); return result; } /* (non-Javadoc) * @see java.lang.Object#equals(java.lang.Object) */ @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; ExclusionEventActionImpl other = (ExclusionEventActionImpl) obj; if (action == null) { if (other.action != null) return false; } else if (!action.equals(other.action)) return false; if (actionedBy == null) { if (other.actionedBy != null) return false; } else if (!actionedBy.equals(other.actionedBy)) return false; if (errorUri == null) { if (other.errorUri != null) return false; } else if (!errorUri.equals(other.errorUri)) return false; if (!Arrays.equals(event, other.event)) return false; if (flowName == null) { if (other.flowName != null) return false; } else if (!flowName.equals(other.flowName)) return false; if (id == null) { if (other.id != null) return false; } else if (!id.equals(other.id)) return false; if (moduleName == null) { if (other.moduleName != null) return false; } else if (!moduleName.equals(other.moduleName)) return false; if (timestamp != other.timestamp) return false; return true; } /* (non-Javadoc) * @see java.lang.Object#toString() */ @Override public String toString() { return "ExclusionEventAction [id=" + id + ", moduleName=" + moduleName + ", flowName=" + flowName + ", errorUri=" + errorUri + ", actionedBy=" + actionedBy + ", action=" + action + ", event=" + Arrays.toString(event) + ", timestamp=" + timestamp + "]"; } }
/// Add range constraints to the model. pub fn add_ranges(&mut self, names: &[&str], expr: &[LinExpr], lb: &[f64], ub: &[f64]) -> Result<(Vec<Var>, Vec<Constr>)> { let mut constrnames = Vec::with_capacity(names.len()); for &s in names.iter() { let name = try!(CString::new(s)); constrnames.push(name.as_ptr()); } let expr: Vec<(_, _, _)> = expr.into_iter().cloned().map(|e| e.into()).collect_vec(); let lhs = Zip::new((lb, &expr)).map(|(lb, expr)| lb - expr.2).collect_vec(); let rhs = Zip::new((ub, &expr)).map(|(ub, expr)| ub - expr.2).collect_vec(); let mut beg = Vec::with_capacity(expr.len()); let numnz = expr.iter().map(|expr| expr.0.len()).sum(); let mut ind = Vec::with_capacity(numnz); let mut val = Vec::with_capacity(numnz); for expr in expr.iter() { let nz = ind.len(); beg.push(nz as i32); ind.extend(&expr.0); val.extend(&expr.1); } try!(self.check_apicall(unsafe { ffi::GRBaddrangeconstrs(self.model, constrnames.len() as ffi::c_int, beg.len() as ffi::c_int, beg.as_ptr(), ind.as_ptr(), val.as_ptr(), lhs.as_ptr(), rhs.as_ptr(), constrnames.as_ptr()) })); let mode = try!(self.get_update_mode()); let xcols = self.vars.len(); let cols = self.vars.len() + names.len(); for col_no in xcols..cols { self.vars.push(Var::new(if mode != 0 { col_no as i32 } else { -1 })); } let xrows = self.constrs.len(); let rows = self.constrs.len() + constrnames.len(); for row_no in xrows..rows { self.constrs.push(Constr::new(if mode != 0 { row_no as i32 } else { -1 })); } Ok((self.vars[xcols..].iter().cloned().collect_vec(), self.constrs[xrows..].iter().cloned().collect_vec())) }
# codeforces/round-760-d3/src/bin/f.py
def main():
    a, b = map(int, input().split())
    ba = bin(a)[2:]
    bb = bin(b)[2:]
    if ba == bb:
        return True
    if bb[-1] == "0":
        return False
    rba = ba[::-1]
    for ta in (ba, rba, "1" + rba):
        ta = ta.lstrip("0")
        if ta not in bb:
            continue
        start = bb.index(ta)
        if (not bb[:start] or set(bb[:start]) == {"1"}) and (
            not bb[start + len(ta) :] or set(bb[start + len(ta) :]) == {"1"}
        ):
            return True
    return False


print("YES" if main() else "NO")
package org.systems.dipe.srs.person; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import org.systems.dipe.srs.SrsDbTest; import org.systems.dipe.srs.person.config.TestConfig; import org.systems.dipe.srs.person.roles.Role; import org.systems.dipe.srs.person.roles.RolesClient; import java.util.Collection; @SpringBootTest(classes = TestConfig.class) class RolesClientImplTest extends SrsDbTest { @Autowired private RolesClient rolesClient; @Test void create() { Collection<Role> roles = rolesClient.all(); Assertions.assertThat(roles).isNotEmpty(); } }
/**
 * {@link BluetoothDevice} wrapper supporting extras available from Intents during specific
 * actions. For example, RSSI becomes available during discovery.
 *
 * @author kendavidson
 */
public class NativeDevice implements MapWritable {

    private BluetoothDevice device;
    private Map<String, Object> extra;

    public NativeDevice(BluetoothDevice device) {
        this.device = device;
        this.extra = new HashMap<String, Object>();
    }

    public BluetoothDevice getDevice() {
        return device;
    }

    public void addExtra(String name, Object value) {
        extra.put(name, value);
    }

    @SuppressWarnings("unchecked")
    public <T> T getExtra(String name) {
        return (T) extra.get(name);
    }

    public WritableMap map() {
        if (device == null) return null;

        WritableMap map = Arguments.createMap();
        map.putString("name", device.getName());
        map.putString("address", device.getAddress());
        map.putString("id", device.getAddress());
        map.putInt("class", (device.getBluetoothClass() != null)
                ? device.getBluetoothClass().getDeviceClass() : -1);
        map.putMap("extra", Arguments.makeNativeMap(extra));
        return map;
    }
}
/** * Update or initialize the records of the file's state. * * @param fireEvent if {@code true}, fire a change event on the listener * if state has changed */ protected void updateFileState(boolean fireEvent) { boolean newExists; long newTimeStamp; if (monitoredFile.exists()) { newExists = true; newTimeStamp = monitoredFile.lastModified(); } else { newExists = false; newTimeStamp = 0L; } if (fireEvent) { FileChangeListener listener = this.listener.get(); if (listener != null) { if (newExists != exists) { if (newExists) { listener.fileChanged(monitoredFile, ChangeType.CREATED); } else { listener.fileChanged(monitoredFile, ChangeType.DELETED); } } else if (newTimeStamp != timeStamp) { listener.fileChanged(monitoredFile, ChangeType.MODIFIED); } } } exists = newExists; timeStamp = newTimeStamp; }
// SaveStatus saves a status file to the root of both replicas. func (r *Result) SaveStatus() error { for i, fs := range []tree.Tree{r.fs1, r.fs2} { replica := r.rs.Get(i) if !replica.ChangedAny() { continue } err := r.serializeStatus(replica, fs) if err != nil { return err } } return nil }
Sporadic Intradural Extramedullary Hemangioblastoma Not Associated with von Hippel-Lindau Syndrome: A Case Report and Literature Review

Hemangioblastomas are low-grade, highly vascular tumors that are usually associated with von Hippel-Lindau syndrome. Hemangioblastomas most commonly occur in the cerebellum, and intradural extramedullary hemangioblastoma of the cauda equina is very rare, especially in patients without von Hippel-Lindau syndrome. Herein, we report a case of intradural extramedullary hemangioblastoma of the cauda equina that was not associated with von Hippel-Lindau syndrome, with a focus on its imaging characteristics and differential diagnoses. We compared the clinical presentation and imaging features of our case with those of previously reported cases in our review of the literature.

INTRODUCTION
Hemangioblastomas are low-grade, highly vascular tumors that account for 1-3% of all central nervous system tumors, and they most often occur in the cerebellum. Hemangioblastomas are rarely observed in the spine, with an incidence of only 1-5% of all spinal cord tumors (1). Intradural extramedullary (IDEM) hemangioblastomas, especially in the lumbar spine, are also very uncommon. Most cases of IDEM hemangioblastoma affect the cervical or thoracic spine (2). In addition, many spinal hemangioblastomas are associated with von Hippel-Lindau (VHL) syndrome. In fact, isolated IDEM hemangioblastoma of the cauda equina without VHL syndrome is extremely rare, with only 24 cases reported to date. The radiologic features of isolated IDEM hemangioblastoma of the cauda equina are difficult to differentiate from those of other hypervascular tumors in the lower lumbar spine, especially those not associated with VHL syndrome. Moreover, none of the previous case reports focused on the imaging characteristics and differential diagnosis of hemangioblastoma. Herein, we present a rare case of a sporadic IDEM hemangioblastoma in a patient without a clinical diagnosis of VHL syndrome, focusing on the imaging features and differential diagnosis. In addition, we compared the clinical and radiologic features of our case with those of the previously diagnosed 24 cases.

CASE REPORT
A 70-year-old female presented with pain in the left buttock and in the posterior part of the lower limb for 1 year. The pain had increased in intensity and did not improve despite medications and epidural injections. The patient's family history was unremarkable. Physical examination revealed paresthesia in the left S1 and S2 dermatomes. However, there was no sign of motor weakness. Lumbosacral spinal radiography revealed thoracolumbar scoliosis and degenerative changes. MRI revealed a 2.7-cm well-defined IDEM mass at the L2-3 spinal level (Fig. 1A). This mass was characterized by isointensity on T1-weighted imaging (T1WI) and heterogeneous hyperintensity on T2-weighted imaging (T2WI), compared to the intensity of the spinal cord (Fig. 1A). The nerve roots of the cauda equina were peripherally displaced on axial scans (Fig. 1A 4th, arrowheads). Post-contrast fat-saturated T1WI revealed intense enhancement of the lesion (Fig. 1A 5th, 6th). Multiple dilated, tortuous vessels with signal voids were observed in the IDEM compartment extending from the T11 to the L2 level (Fig. 1A 3rd, arrow). These imaging features indicated the presence of a hypervascular tumor. On the basis of these findings, we considered a differential diagnosis of ependymoma, paraganglioma, or hemangioblastoma.
It was thought to be a hypervascular tumor, and angiography was planned. The patient underwent bilateral L1-L4 lumbar artery arteriography for preoperative embolization, and no feeders to the tumor were detected. The patient then underwent L2 laminectomy and L3 partial laminectomy through a midline incision. Durotomy revealed a well-defined, firm, lobulated mass with an orange-red hue. The mass was intermingled with the nerves of the cauda equina, and there were dilated vascular channels at both poles of the tumor (Fig. 1B 1st, 2nd). The tumor was dissected circumferentially by preserving all nerve roots of the cauda equina, and en bloc resection was performed. The dilated venous channels that entered and left the tumor capsule were coagulated and sharply divided. Frozen section biopsy initially indicated a diagnosis of paraganglioma. Histopathology revealed a highly vascular tumor composed of different-sized vascular channels with intervening stromal cells (Fig. 1C 1st). No atypia or mitotic figures were observed. Accordingly, a histopathologic diagnosis of hemangioblastoma was made, and the diagnosis was confirmed on the basis of immunohistochemistry findings. Immunohistochemical staining revealed that the tumor was positive for S100 (Fig. 1C 2nd), and the Ki-67 proliferative index was 3%. Among ependymoma, paraganglioma, and hemangioblastoma, only paraganglioma is positive for synaptophysin, which was negative in the current case; accordingly, the tumor was diagnosed as a hemangioblastoma. After treatment, the patient did not have any pain in the left buttock and lower extremity.

[Fig. 1A caption] MRI shows a well-defined intradural extramedullary mass at the L2-3 vertebral level. The mass is isointense on T1-weighted imaging and heterogeneously hyperintense on T2-weighted imaging (A1-A4). A signal void of the tortuous feeding vessel is observed in the proximal portion of the mass (A3, arrow). An axial T2-weighted image shows the intradural mass filling most of the thecal sac with peripheral displacement of the nerve roots (A4, arrowheads). Post-contrast fat-saturated T1-weighted imaging shows intense enhancement of the lesion (A5, A6).
// Converts output artifacts into expected command-line arguments.
private List<String> outputArgs(Set<Artifact> outputs) {
    ImmutableList.Builder<String> result = new ImmutableList.Builder<>();
    for (String output : Artifact.toExecPaths(outputs)) {
        if (output.endsWith(".o")) {
            result.add("-o", output);
        } else if (output.endsWith(".d")) {
            result.add("-MD", "-MF", output);
        } else {
            throw new IllegalArgumentException(
                "output " + output + " has unknown ending (not in (.d, .o))");
        }
    }
    return result.build();
}
/**
 * Filter to log responses.
 */
public class ResponseLoggingFilter extends AbstractLoggingFilter {

    private static final Logger LOGGER = LoggerFactory.getLogger(ResponseLoggingFilter.class);

    @Override
    public String filterType() {
        return "post";
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        HttpServletResponse response = ctx.getResponse();

        String headers = response.getHeaderNames()
                .stream()
                .map(headerName -> headerName + "=" + response.getHeader(headerName))
                .collect(Collectors.joining("; "));

        LOGGER.info(String.format("Response for %s request to %s has status %s and headers %s",
                request.getMethod(), request.getRequestURL().toString(), response.getStatus(), headers));

        return null;
    }
}
/** * dequeue - A function that fetches first item in queue * @q: queue * @peek: whether to perform dequeue or just a peek * Return: first item in queue on success, -1 on failure */ int dequeue(queue_t *q, int peek) { int item; if (q->rear == -1) { printf("queue_t is empty"); item = -1; } else { if (peek) return (q->items[q->front]); item = q->items[q->front]; q->front++; if (q->front > q->rear) { q->front = q->rear = -1; } } return (item); }
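As a quick illustration of the empty-queue convention above (rear == -1 after the last item is consumed) and of the peek flag, here is a small hypothetical harness in C. The queue_t layout is an assumption inferred from the fields the function touches (items, front, rear); the real definition is not shown in the source.

/* Hypothetical harness; queue_t's definition is assumed from usage. */
#include <stdio.h>

#define QUEUE_CAP 16

typedef struct
{
    int items[QUEUE_CAP]; /* backing storage */
    int front;            /* index of first item */
    int rear;             /* index of last item, -1 when empty */
} queue_t;

int dequeue(queue_t *q, int peek);

int main(void)
{
    /* Two items already enqueued at indices 0 and 1. */
    queue_t q = { .items = { 10, 20 }, .front = 0, .rear = 1 };

    printf("%d\n", dequeue(&q, 1)); /* peek: prints 10, front stays at 0 */
    printf("%d\n", dequeue(&q, 0)); /* prints 10, front advances to 1 */
    printf("%d\n", dequeue(&q, 0)); /* prints 20, queue resets to empty */
    return 0;
}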
/* api.h - Copyright (c) 2018, <NAME> (see LICENSE.md) */ #define TT_NLINES 24 #define TT_NCOLS 40 /* 3 color RGB */ enum ttcolor { TT_BLACK, TT_RED, TT_GREEN, TT_YELLOW, TT_BLUE, TT_MAGENTA, TT_CYAN, TT_WHITE }; enum tterr { TT_OK, TT_EARG, TT_ECURL, TT_EAPI, TT_EDATA }; struct ttattrs { enum ttcolor fg; enum ttcolor bg; }; /* * Note on block drawing characters: * * Teletext supports 6-cell (2x3) block drawing characters. The NOS * viewer and API use a custom font with these characters in the 0xF000 * Unicode range ('private use'). * * These are all mapped to SUBST_CHAR, defined in api.c */ struct ttpage { wchar_t chars[TT_NLINES][TT_NCOLS]; struct ttattrs attrs[TT_NLINES][TT_NCOLS]; char id[6]; char nextpage[6]; char nextsub[6]; }; enum tterr tt_get(const char *id, struct ttpage *page); const char *tt_errstr(enum tterr err);
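To show how the declarations above fit together, here is a short hypothetical caller: the page id "100" is illustrative, and the header is assumed to be included as api.h. tt_get, tt_errstr, TT_NLINES and TT_NCOLS are the names declared above.

/* Hypothetical sketch: fetch a teletext page and dump its 24x40 grid. */
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include "api.h"

int main(void)
{
    struct ttpage page;
    enum tterr err;

    setlocale(LC_ALL, ""); /* so %lc can print wide characters */

    err = tt_get("100", &page);
    if (err != TT_OK)
    {
        fprintf(stderr, "tt_get failed: %s\n", tt_errstr(err));
        return 1;
    }

    for (int line = 0; line < TT_NLINES; line++)
    {
        for (int col = 0; col < TT_NCOLS; col++)
            printf("%lc", (wint_t)page.chars[line][col]);
        putchar('\n');
    }
    return 0;
}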
from collections import OrderedDict
from typing import List

# Checkpoint, KEY_CHECKPOINT and KEY_CHECKPOINT_SLOTS are defined elsewhere
# in the surrounding project.
def process_checkpoints(checkpoints: List[Checkpoint]) -> List[OrderedDict]:
    result = []
    for checkpoint in checkpoints:
        next_checkpoint = OrderedDict([(KEY_CHECKPOINT, checkpoint.name)])
        if checkpoint.conditions:
            next_checkpoint[KEY_CHECKPOINT_SLOTS] = [
                {key: value} for key, value in checkpoint.conditions.items()
            ]
        result.append(next_checkpoint)
    return result
// Copyright (C) 2018-2019, Cloudflare, Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // * Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS // IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, // THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR // PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR // CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, // EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR // PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF // LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING // NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS // SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. use std::cmp; use std::convert::TryInto; use std::time; use std::time::Duration; use std::time::Instant; use std::collections::BTreeMap; use std::collections::HashMap; use crate::Config; use crate::Error; use crate::Result; use crate::cc; use crate::fec; use crate::frame; use crate::minmax; use crate::packet; use crate::ranges; // Loss Recovery const PACKET_THRESHOLD: u64 = 3; const TIME_THRESHOLD: f64 = 9.0 / 8.0; const GRANULARITY: Duration = Duration::from_millis(1); const INITIAL_RTT: Duration = Duration::from_millis(500); const PERSISTENT_CONGESTION_THRESHOLD: u32 = 3; #[derive(Debug)] pub struct Sent { pub pkt_num: u64, pub frames: Vec<frame::Frame>, pub time: Instant, pub size: usize, pub ack_eliciting: bool, pub in_flight: bool, pub fec_info: fec::FecStatus, } pub struct Recovery { loss_detection_timer: Option<Instant>, pto_count: u32, time_of_last_sent_ack_eliciting_pkt: [Option<Instant>; packet::EPOCH_COUNT], largest_acked_pkt: [u64; packet::EPOCH_COUNT], largest_sent_pkt: [u64; packet::EPOCH_COUNT], latest_rtt: Duration, smoothed_rtt: Option<Duration>, rttvar: Duration, min_rtt: Duration, pub max_ack_delay: Duration, loss_time: [Option<Instant>; packet::EPOCH_COUNT], sent: [BTreeMap<u64, Sent>; packet::EPOCH_COUNT], pub lost: [Vec<frame::Frame>; packet::EPOCH_COUNT], pub acked: [Vec<frame::Frame>; packet::EPOCH_COUNT], pub total_pkt_nums: usize, pub lost_count: usize, pub loss_probes: [usize; packet::EPOCH_COUNT], pub cc: Box<dyn cc::CongestionControl>, app_limited: bool, /// fec group packet numbers pns_in_fec_group: HashMap<u32, fec::FecPacketNumbers>, pub m: u8, pub n: u8, last_adjust_time: Option<Instant>, min_filter: minmax::Minmax<i8>, delta: i8, } impl Recovery { pub fn new(config: &Config) -> Self { Recovery { loss_detection_timer: None, pto_count: 0, time_of_last_sent_ack_eliciting_pkt: [None; packet::EPOCH_COUNT], largest_acked_pkt: [std::u64::MAX; packet::EPOCH_COUNT], largest_sent_pkt: [0; packet::EPOCH_COUNT], latest_rtt: Duration::new(0, 0), smoothed_rtt: None, min_rtt: Duration::new(0, 0), rttvar: Duration::new(0, 0), max_ack_delay: Duration::from_millis(25), loss_time: [None; packet::EPOCH_COUNT], sent: [BTreeMap::new(), 
BTreeMap::new(), BTreeMap::new()], lost: [Vec::new(), Vec::new(), Vec::new()], acked: [Vec::new(), Vec::new(), Vec::new()], total_pkt_nums: 0, lost_count: 0, loss_probes: [0; packet::EPOCH_COUNT], cc: cc::new_congestion_control( config.cc_algorithm, config.init_cwnd, config.init_pacing_rate, ), app_limited: false, pns_in_fec_group: Default::default(), m: 10, n: 1, last_adjust_time: None, min_filter: minmax::Minmax::<i8>::new(), delta: 0, } } pub fn on_packet_sent( &mut self, pkt: Sent, epoch: packet::Epoch, handshake_completed: bool, now: Instant, trace_id: &str, ) { // Process fec group packet number list. if pkt.fec_info.group_id != 0 { trace!("send: {} {} {} {} {}", pkt.fec_info.group_id, pkt.fec_info.m, pkt.fec_info.n, pkt.size, pkt.fec_info.index); let pns_group = self .pns_in_fec_group .entry(pkt.fec_info.group_id) .or_insert(fec::FecPacketNumbers::new( pkt.fec_info.m, pkt.fec_info.n, )); pns_group.packet_sent(pkt.fec_info.index, pkt.pkt_num); } else { trace!("send: {} {} {} {} {}", pkt.fec_info.group_id, pkt.fec_info.m, pkt.fec_info.n, pkt.size, pkt.fec_info.index); } let ack_eliciting = pkt.ack_eliciting; let in_flight = pkt.in_flight; let sent_bytes = pkt.size; self.largest_sent_pkt[epoch] = cmp::max(self.largest_sent_pkt[epoch], pkt.pkt_num); // self.sent[epoch].insert(pkt.pkt_num, pkt); // TODO: posision 1 self.total_pkt_nums += 1; if in_flight { if ack_eliciting { self.time_of_last_sent_ack_eliciting_pkt[epoch] = Some(now); } self.app_limited = (self.cc.bytes_in_flight() + sent_bytes + 1350) < self.cc.cwnd(); // OnPacketSentCC if epoch >= 2 { self.cc.on_packet_sent_cc(&pkt, sent_bytes, trace_id); } self.set_loss_detection_timer(handshake_completed); } self.sent[epoch].insert(pkt.pkt_num, pkt); // TODO: from position1 is valid ? trace!("{} {:?}", trace_id, self); } pub fn on_ack_received( &mut self, ranges: &ranges::RangeSet, ack_delay: u64, epoch: packet::Epoch, handshake_completed: bool, now: Instant, trace_id: &str, ) -> Result<()> { //println!("receive"); self.cc.cc_bbr_begin_ack(now); let largest_acked = ranges.largest().unwrap(); // If the largest packet number acked exceeds any packet number we have // sent, then the ACK is obviously invalid, so there's no need to // continue further. if largest_acked > self.largest_sent_pkt[epoch] { if cfg!(feature = "fuzzing") { return Ok(()); } return Err(Error::InvalidPacket); } if self.largest_acked_pkt[epoch] == std::u64::MAX { self.largest_acked_pkt[epoch] = largest_acked; } else { self.largest_acked_pkt[epoch] = cmp::max(self.largest_acked_pkt[epoch], largest_acked); } if let Some(pkt) = self.sent[epoch].get(&self.largest_acked_pkt[epoch]) { if pkt.ack_eliciting { debug!( "recovery cal rtt : {}", Instant::now().duration_since(pkt.time).as_millis() ); let latest_rtt = now - pkt.time; let ack_delay = if epoch == packet::EPOCH_APPLICATION { Duration::from_micros(ack_delay) } else { Duration::from_micros(0) }; self.update_rtt(latest_rtt, ack_delay); } } let mut has_newly_acked = false; // Processing acked packets in reverse order (from largest to smallest) // appears to be faster, possibly due to the BTreeMap implementation. for pn in ranges.flatten().rev() { // If the acked packet number is lower than the lowest unacked packet // number it means that the packet is not newly acked, so return // early. // // Since we process acked packets from largest to lowest, this means // that as soon as we see an already-acked packet number // all following packet numbers will also be already // acked. 
if let Some(lowest) = self.sent[epoch].values().nth(0) { if pn < lowest.pkt_num { break; } } let newly_acked = self.on_packet_acked(pn, epoch, trace_id); has_newly_acked = cmp::max(has_newly_acked, newly_acked); if newly_acked { trace!("{} packet newly acked {}", trace_id, pn); } } if !has_newly_acked { self.cc.cc_bbr_end_ack(); return Ok(()); } self.cc.cc_bbr_end_ack(); self.detect_lost_packets(epoch, now, trace_id); self.pto_count = 0; self.set_loss_detection_timer(handshake_completed); trace!("{} {:?}", trace_id, self); Ok(()) } pub fn on_loss_detection_timeout( &mut self, handshake_completed: bool, now: Instant, trace_id: &str, ) { let (earliest_loss_time, epoch) = self.earliest_loss_time(self.loss_time, handshake_completed); if earliest_loss_time.is_some() { self.detect_lost_packets(epoch, now, trace_id); self.set_loss_detection_timer(handshake_completed); trace!("{} {:?}", trace_id, self); return; } // TODO: handle client without 1-RTT keys case. let (_, epoch) = self.earliest_loss_time( self.time_of_last_sent_ack_eliciting_pkt, handshake_completed, ); self.loss_probes[epoch] = 2; self.pto_count += 1; self.set_loss_detection_timer(handshake_completed); trace!("{} {:?}", trace_id, self); } pub fn drop_unacked_data(&mut self, epoch: packet::Epoch) { let mut unacked_bytes = 0; for p in self.sent[epoch].values_mut().filter(|p| p.in_flight) { unacked_bytes += p.size; } debug!("drop_unacked_data"); if epoch >= 2 { self.cc.decrease_bytes_in_flight(unacked_bytes); } self.loss_time[epoch] = None; self.loss_probes[epoch] = 0; self.time_of_last_sent_ack_eliciting_pkt[epoch] = None; self.sent[epoch].clear(); self.lost[epoch].clear(); self.acked[epoch].clear(); } pub fn loss_detection_timer(&self) -> Option<Instant> { self.loss_detection_timer } pub fn cwnd_available(&self) -> usize { // Ignore cwnd when sending probe packets. if self.loss_probes.iter().any(|&x| x > 0) { return std::usize::MAX; } let now_time_ms = match time::SystemTime::now() .duration_since(time::SystemTime::UNIX_EPOCH) { Ok(n) => n.as_millis(), Err(_) => panic!("SystemTime before UNIX EPOCH!"), }; debug!( "timestamp: {} ms; cwnd {} bytes; bytes_in_flight {} bytes ; available {} bytes", now_time_ms, self.cc.cwnd(), self.cc.bytes_in_flight(), self.cc.cwnd().saturating_sub(self.cc.bytes_in_flight()), ); self.cc.cwnd().saturating_sub(self.cc.bytes_in_flight()) } pub fn rtt(&self) -> Duration { self.smoothed_rtt.unwrap_or(INITIAL_RTT) } pub fn pto(&self) -> Duration { self.rtt() + cmp::max(self.rttvar * 4, GRANULARITY) + self.max_ack_delay } fn update_rtt(&mut self, latest_rtt: Duration, ack_delay: Duration) { self.latest_rtt = latest_rtt; match self.smoothed_rtt { // First RTT sample. None => { self.min_rtt = latest_rtt; self.smoothed_rtt = Some(latest_rtt); self.rttvar = latest_rtt / 2; }, Some(srtt) => { self.min_rtt = cmp::min(self.min_rtt, latest_rtt); let ack_delay = cmp::min(self.max_ack_delay, ack_delay); // Adjust for ack delay if plausible. let adjusted_rtt = if latest_rtt > self.min_rtt + ack_delay { latest_rtt - ack_delay } else { latest_rtt }; self.rttvar = self.rttvar.mul_f64(3.0 / 4.0) + sub_abs(srtt, adjusted_rtt).mul_f64(1.0 / 4.0); self.smoothed_rtt = Some( srtt.mul_f64(7.0 / 8.0) + adjusted_rtt.mul_f64(1.0 / 8.0), ); }, } } fn earliest_loss_time( &mut self, times: [Option<Instant>; packet::EPOCH_COUNT], handshake_completed: bool, ) -> (Option<Instant>, packet::Epoch) { let mut epoch = packet::EPOCH_INITIAL; let mut time = times[epoch]; // Iterate over all packet number spaces starting from Handshake. 
#[allow(clippy::needless_range_loop)] for e in packet::EPOCH_HANDSHAKE..packet::EPOCH_COUNT { let new_time = times[e]; if e == packet::EPOCH_APPLICATION && !handshake_completed { continue; } if new_time.is_some() && (time.is_none() || new_time < time) { time = new_time; epoch = e; } } (time, epoch) } fn set_loss_detection_timer(&mut self, handshake_completed: bool) { let (earliest_loss_time, _) = self.earliest_loss_time(self.loss_time, handshake_completed); if earliest_loss_time.is_some() { // Time threshold loss detection. self.loss_detection_timer = earliest_loss_time; return; } if self.cc.bytes_in_flight() == 0 { // TODO: check if peer is awaiting address validation. self.loss_detection_timer = None; return; } // PTO timer. let timeout = match self.smoothed_rtt { None => INITIAL_RTT * 2, Some(_) => self.pto() * 2_u32.pow(self.pto_count), }; let (sent_time, _) = self.earliest_loss_time( self.time_of_last_sent_ack_eliciting_pkt, handshake_completed, ); if let Some(sent_time) = sent_time { self.loss_detection_timer = Some(sent_time + timeout); } } fn detect_lost_packets( &mut self, epoch: packet::Epoch, now: Instant, trace_id: &str, ) { let largest_acked = self.largest_acked_pkt[epoch]; let mut lost_pkt: Vec<u64> = Vec::new(); self.loss_time[epoch] = None; let loss_delay = cmp::max(self.latest_rtt, self.rtt()).mul_f64(TIME_THRESHOLD); // Minimum time of kGranularity before packets are deemed lost. let loss_delay = cmp::max(loss_delay, GRANULARITY); // Packets sent before this time are deemed lost. let lost_send_time = now - loss_delay; for (_, unacked) in self.sent[epoch].range(..=largest_acked) { // Mark packet as lost, or set time when it should be marked. if unacked.time <= lost_send_time || largest_acked >= unacked.pkt_num + PACKET_THRESHOLD { if unacked.in_flight { trace!( "{} packet {} lost on epoch {}", trace_id, unacked.pkt_num, epoch ); } // We can't remove the lost packet from |self.sent| here, so // simply keep track of the number so it can be removed later. lost_pkt.push(unacked.pkt_num); } else { let loss_time = match self.loss_time[epoch] { None => unacked.time + loss_delay, Some(loss_time) => cmp::min(loss_time, unacked.time + loss_delay), }; self.loss_time[epoch] = Some(loss_time); } } if !lost_pkt.is_empty() { self.on_packets_lost(lost_pkt, epoch, now, trace_id); } } fn on_packet_acked( &mut self, pkt_num: u64, epoch: packet::Epoch, trace_id: &str, ) -> bool { // Check if packet is newly acked. if let Some(mut p) = self.sent[epoch].remove(&pkt_num) { self.acked[epoch].append(&mut p.frames); debug!("packet inflight : {}", p.in_flight); if p.fec_info.group_id != 0 { let pns_group = self.pns_in_fec_group.get_mut(&p.fec_info.group_id).unwrap(); pns_group.packet_acked(p.pkt_num); // begin collect sample. if let Some(delta) = pns_group.get_delta() { let now = Instant::now(); match self.last_adjust_time { None => { self.last_adjust_time = Some(now); self.delta = self.min_filter.reset(now, delta); }, Some(last_time) => { debug!("collect new sample {} at {:?}, last time is: {:?}", delta, now, last_time); self.delta = self.min_filter.running_min( self.min_rtt * 2, now, delta, ); if last_time + self.min_rtt * 2 > now { // begin adjustment. 
let adjusted_n = self.n as i8 - self.delta; if adjusted_n < 1 { self.n = 1; } else { self.n = adjusted_n as u8; } self.m = (self.cc.pacing_rate() * self.min_rtt.as_millis() as u64 / 8000 / 1350) .try_into() .unwrap_or(std::u8::MAX); debug!( "m: {} , n: {} , delta: {} ", self.m, self.n, self.delta ); // reset last_time stamp self.last_adjust_time = Some(now); } }, } } if let Some(recovered_pns) = pns_group.get_recovered_pns() { for recovered_pn in recovered_pns { self.on_packet_recovered(recovered_pn); } } } if p.in_flight { // OnPacketAckedCC(acked_packet) if epoch >= 2 { self.cc.on_packet_acked_cc( &p, self.rtt(), self.min_rtt, self.latest_rtt, self.app_limited, trace_id, epoch, self.lost_count ); } } return true; } // Is not newly acked. false } fn on_packet_recovered(&mut self, pkt_num: u64) { if let Some(p) = self.sent[packet::EPOCH_APPLICATION].get_mut(&pkt_num) { self.acked[packet::EPOCH_APPLICATION].append(&mut p.frames); } } // TODO: move to Congestion Control and implement draft 24 fn in_persistent_congestion(&mut self, _largest_lost_pkt: &Sent) -> bool { let _congestion_period = self.pto() * PERSISTENT_CONGESTION_THRESHOLD; // TODO: properly detect persistent congestion false } // TODO: move to Congestion Control fn on_packets_lost( &mut self, lost_pkt: Vec<u64>, epoch: packet::Epoch, now: Instant, trace_id: &str, ) { // Differently from OnPacketsLost(), we need to handle both // in-flight and non-in-flight packets, so need to keep track // of whether we saw any lost in-flight packet to trigger the // congestion event later. let mut largest_lost_pkt: Option<Sent> = None; for lost in lost_pkt { let mut p = self.sent[epoch].remove(&lost).unwrap(); self.lost_count += 1; if !p.in_flight { continue; } debug!("on_packet_lost"); if epoch >= 2 { self.cc.decrease_bytes_in_flight(p.size); } if p.fec_info.group_id != 0 { let pns_group = self.pns_in_fec_group.get_mut(&p.fec_info.group_id).unwrap(); pns_group.packet_lost(p.pkt_num); } self.lost[epoch].append(&mut p.frames); largest_lost_pkt = Some(p); } if let Some(largest_lost_pkt) = largest_lost_pkt { // CongestionEvent self.cc.congestion_event( self.rtt(), largest_lost_pkt.time, now, trace_id, largest_lost_pkt.pkt_num, epoch, self.lost_count ); if self.in_persistent_congestion(&largest_lost_pkt) { self.cc.collapse_cwnd(); } } } } impl std::fmt::Debug for Recovery { fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result { match self.loss_detection_timer { Some(v) => { let now = Instant::now(); if v > now { let d = v.duration_since(now); write!(f, "timer={:?} ", d)?; } else { write!(f, "timer=exp ")?; } }, None => { write!(f, "timer=none ")?; }, }; write!(f, "latest_rtt={:?} ", self.latest_rtt)?; write!(f, "srtt={:?} ", self.smoothed_rtt)?; write!(f, "min_rtt={:?} ", self.min_rtt)?; write!(f, "rttvar={:?} ", self.rttvar)?; write!(f, "loss_time={:?} ", self.loss_time)?; write!(f, "loss_probes={:?} ", self.loss_probes)?; write!(f, "{:?} ", self.cc)?; Ok(()) } } fn sub_abs(lhs: Duration, rhs: Duration) -> Duration { if lhs > rhs { lhs - rhs } else { rhs - lhs } }
/* * Copyright 2017 ThoughtWorks, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package cd.go.contrib.elasticagent.model; import com.google.gson.annotations.Expose; import com.google.gson.annotations.SerializedName; import static cd.go.contrib.elasticagent.utils.Util.GSON; public class ServerInfo { @Expose @SerializedName("server_id") private String serverId; @Expose @SerializedName("site_url") private String siteUrl; @Expose @SerializedName("secure_site_url") private String secureSiteUrl; public String getServerId() { return serverId; } public String getSiteUrl() { return siteUrl; } public String getSecureSiteUrl() { return secureSiteUrl; } public void setSecureSiteUrl(String secureSiteUrl) { this.secureSiteUrl = secureSiteUrl; } public static ServerInfo fromJSON(String json) { return GSON.fromJson(json, ServerInfo.class); } public String toJSON() { return GSON.toJson(this); } }
// packages/contracts/test/connect-contracts.spec.ts
import { ethers } from 'hardhat'
import { Signer, Contract } from 'ethers'
import {
  connectL1Contracts,
  connectL2Contracts,
} from '../dist/connect-contracts'
import { expect } from './setup'

describe('connectL1Contracts', () => {
  let user: Signer

  const l1ContractNames = [
    'addressManager',
    'canonicalTransactionChain',
    'executionManager',
    'fraudVerifier',
    'multiMessageRelayer',
    'stateCommitmentChain',
    'xDomainMessengerProxy',
    'bondManager',
  ]

  const l2ContractNames = [
    'eth',
    'xDomainMessenger',
    'messagePasser',
    'messageSender',
    'deployerWhiteList',
    'ecdsaContractAccount',
    'sequencerEntrypoint',
    'erc1820Registry',
    'addressManager',
  ]

  before(async () => {
    ;[user] = await ethers.getSigners()
  })

  it(`connectL1Contracts should throw error if signer or provider isn't provided.`, async () => {
    try {
      await connectL1Contracts(undefined, 'mainnet')
    } catch (err) {
      expect(err.message).to.be.equal('signerOrProvider argument is undefined')
    }
  })

  for (const name of l1ContractNames) {
    it(`connectL1Contracts should return a contract assigned to a field named "${name}"`, async () => {
      const l1Contracts = await connectL1Contracts(user, 'mainnet')
      expect(l1Contracts[name]).to.be.an.instanceOf(Contract)
    })
  }

  for (const name of l2ContractNames) {
    it(`connectL2Contracts should return a contract assigned to a field named "${name}"`, async () => {
      const l2Contracts = await connectL2Contracts(user)
      expect(l2Contracts[name]).to.be.an.instanceOf(Contract)
    })
  }
})
/** * This class holds O3 line from source file. * * @author Hudson Schumaker */ @Data public final class O3FileLine { private String data; private boolean classHeader; private boolean functionHeader; private boolean conditionalStatement; private boolean loopStatement; private boolean variableDeclaration; private boolean constantDeclaration; private boolean returnStatement; private boolean functionCall; private Integer originalNumber; private Integer internalNumber; public O3FileLine(String data, Integer originalNumber) { this.data = data; this.originalNumber = originalNumber; } }
package main const Master = `<!DOCTYPE html> <html> <head> <title>{{ or .Title "Template Generation Demo"}}</title> </head> <body> <div id="topbar"> [[ call .Features "topbar" ]] </div> <div id="navbar"> [[ call .Features "navbar" ]] </div> {{ .Content }} <div id="footer"> [[ call .Features "footer" ]] </div> </body> </html>` var ( Tops = map[string]string{ "topbar.one": `<div class="topbar-item">One</div>`, "topbar.two": `<div class="topbar-item">Two</div>`, "topbar.three": `<div class="topbar-item">Three</div>`, } Navs = map[string]string{ "navbar.one": `<div class="navbar-item">One</div>`, "navbar.two": `<div class="navbar-item">Two</div>`, "navbar.three": `<div class="navbar-item">Three</div>`, } Foots = map[string]string{ "footer.one": `<div class="footer-item">One</div>`, "footer.two": `<div class="footer-item">Two</div>`, "footer.three": `<div class="footer-item">Three</div>`, } )
<reponame>brianmacdonald/ilodestone
use std::fs::File;
use std::io::Read;

extern "C" {
    fn lodestone_main();
}

#[no_mangle]
pub extern "C" fn fopen(filename: String) -> String {
    // Read the whole file into a String, panicking with a clear message
    // if the file is missing or unreadable.
    let mut f = File::open(filename).expect("file not found");
    let mut contents = String::new();
    f.read_to_string(&mut contents)
        .expect("something went wrong reading the file");
    contents
}

#[no_mangle]
pub extern "C" fn fprint(message: &str) {
    // Print the message itself; the original printed the literal `&str`
    // and declared an unused String return value.
    println!("{}", message);
}

fn main() {
    println!("Calling lodestone from rust...");
    unsafe { lodestone_main() }
}
package com.script.fairy; import com.script.framework.AtFairyApp; import com.umeng.commonsdk.UMConfigure; public class YpApplication extends AtFairyApp { @Override public void onCreate() { super.onCreate(); UMConfigure.init(getApplicationContext(),UMConfigure.DEVICE_TYPE_PHONE,"611345aebc78af7b6753dfc6"); } }
A Common Endocrine Signature Marks the Convergent Evolution of an Elaborate Dance Display in Frogs Unrelated species often evolve similar phenotypic solutions to the same environmental problem, a phenomenon known as convergent evolution. But how do these common traits arise? We address this question from a physiological perspective by assessing how convergence of an elaborate gestural display in frogs (foot-flagging) is linked to changes in the androgenic hormone systems that underlie it. We show that the emergence of this rare display in unrelated anuran taxa is marked by a robust increase in the expression of androgen receptor (AR) messenger RNA in the musculature that actuates leg and foot movements, but we find no evidence of changes in the abundance of AR expression in these frogs’ central nervous systems. Meanwhile, the magnitude of the evolutionary change in muscular AR and its association with the origin of foot-flagging differ among clades, suggesting that these variables evolve together in a mosaic fashion. Finally, while gestural displays do differ between species, variation in the complexity of a foot-flagging routine does not predict differences in muscular AR. Altogether, these findings suggest that androgen-muscle interactions provide a conduit for convergence in sexual display behavior, potentially providing a path of least resistance for the evolution of motor performance.
McAfee on Thursday announced its annual Threat Predictions report, highlighting the top security worries it predicts for 2013. Most of the forecasts are completely expected (mobile malware will become a bigger focus, crimeware and hacking as a service will expand, and large-scale attacks will increase) but one of them stood out like a sore thumb: “the influence of the hacktivist group ‘Anonymous’ will decline.” This is a very bold claim, and frankly one that we think McAfee shouldn’t be making. Nevertheless, the security firm offered the following reasoning behind its prediction: Due to many uncoordinated and unclear operations and false claims, the Anonymous hacktivist movement will slow down in 2013. Anonymous’ level of technical sophistication has stagnated and its tactics are better understood by its potential victims, and as such, the group’s level of success will decline. While hacktivist attacks won’t end in 2013, if ever, they are expected to decline in number and sophistication. One could just as easily claim the opposite, that the Anonymous hacktivist movement will accelerate in 2013 given the number of successful attacks it has made, and that claim would be every bit as plausible as McAfee’s. The security company doesn’t appear to have any data to back up its claims, meaning its conclusions are merely conjecture. In fact, we have seen nothing to suggest the group is declining, regardless of whether or not one believes they are using sophisticated methods of attack. Indeed, I would argue that Anonymous is becoming more and more influential, as we’ve seen the movement growing throughout 2012. While the group still largely performs rather simple DDoS (overloading websites with traffic to take them down) and doxxing (publicly posting private information about a targeted individual) attacks, at least some of its members have shown the technical know-how to hack sites, deface them, and more importantly, steal and leak sensitive data. Whether or not this is being done largely by people who some dismiss as “script kiddies” doesn’t really matter. The damage is being done, and Anonymous is likely to keep doing it. See also – Anonymous attacks over 650 Israeli sites, wipes databases, leaks email addresses and passwords and Biggest Anonymous Twitter account suspended for posting image of a private email, then reinstated
from pathlib import WindowsPath

import pandas as pd


def load_config(filename: str, config_dir: WindowsPath) -> pd.DataFrame:
    """Read a CSV config file from config_dir, appending '.csv' when missing."""
    filename = filename + '.csv' if '.csv' not in filename else filename
    config_path = config_dir.joinpath(filename)
    return pd.read_csv(config_path)
// Helper function converting a seed string to a 128 bit binary key.
string GenerateBinaryKey(const string& seed)
{
    const unsigned char kBlockTEA_Salt[] = {
        0x2A, 0x0C, 0x84, 0x24, 0x5B, 0x0D, 0x85, 0x26, 0x72, 0x40,
        0xBC, 0x38, 0xD3, 0x43, 0x63, 0x9E, 0x8E, 0x56, 0xF9, 0xD7, 0x00
    };
    string hash = seed + (char*)kBlockTEA_Salt;
    int len = (int)hash.size();
    char digest[37];
    memcpy(digest + 16, kBlockTEA_Salt, 21);
    CalcMD5(hash.c_str(), hash.size(), (unsigned char*)digest);
    for (int i = 0; i < len; i++) {
        CalcMD5(digest, 36, (unsigned char*)digest);
    }
    return string(digest, kBlockTEA_KeySize*sizeof(Int4));
}
package de.polocloud.wrapper.network;

import de.polocloud.api.CloudAPI;
import de.polocloud.network.NetworkType;
import de.polocloud.network.client.NettyClient;
import de.polocloud.network.packet.PacketHandler;
import io.netty.channel.ChannelHandlerContext;

public final class WrapperClient extends NettyClient {

    public WrapperClient(final PacketHandler packetHandler, final String name, final String hostname, final int port) {
        super(packetHandler, name, NetworkType.WRAPPER);
        this.connect(hostname, port);
        CloudAPI.getInstance().getLogger().log("§7The service successfully started the network service.");
    }

    @Override
    public void onActivated(ChannelHandlerContext channelHandlerContext) {
        CloudAPI.getInstance().getLogger().log("This service successfully connected to the cluster.");
    }

    @Override
    public void onClose(ChannelHandlerContext channelHandlerContext) {
        CloudAPI.getInstance().getLogger().log("This service disconnected from the cluster.");
    }
}
def stop(self) -> None: if self.server_thread and self.server_thread.is_alive(): pexit = self.pilot_process.poll() if pexit is None: self.pilot_process.terminate() pexit = self.pilot_process.wait() self.logging_actor.debug.remote(self.worker_id, f"Payload return code: {pexit}", time.asctime()) asyncio.run_coroutine_threadsafe(self.notify_stop_server_task(), self.loop) self.server_thread.join() self.logging_actor.info.remote(self.worker_id, "Communicator stopped", time.asctime())
def class_data(inputmodel_file, dataset, outfile=None): model, label_encoder, scaler, model_feature_names = load_model_from_file(inputmodel_file) if outfile: f = open(outfile, 'w') else: f = sys.stdout for sequences_file in dataset: res = write_csv(extract_features(sequences_file, classformat=None)) features = res[0] accessions = res[1] data_feature_names = res[-1] features = match_features(model_feature_names, features, data_feature_names) features = remove_nan_vals(features) scaled_features = scaler.transform(features) labels_idx = model.predict(scaled_features) labels = label_encoder.inverse_transform(labels_idx) for acc, label in zip(accessions.values, labels): f.write('%s\t%s\n' % (acc, label)) if outfile: f.close()
Frontiers of marine science

On 9–13 October 2010 early career scientists from the UK and Australia across marine research fields were given the opportunity to come together in Perth, Australia to discuss the frontiers of marine research and exchange ideas.

INTRODUCTION

Many of the challenges that face twenty-first-century scientists, such as climate change and ecosystem research, are inherently interdisciplinary in nature. Perhaps nowhere is this better illustrated than in marine science, where the physics and chemistry of the medium are inextricably linked with the biology and ecology of ecosystems. Numerous feedback loops exist within and between biology and marine and atmospheric climate, which we are only beginning to understand. In addition, our marine environment is under considerable stress, with every square kilometre of the global ocean affected by anthropogenic drivers of ecological change. Climate change is fundamentally altering marine systems, bringing challenges and costs for human societies and placing urgency on the science community to provide the information and understanding to drive policy and management responses. Synergistic effects between climate and other anthropogenic stressors such as pollution and exploitation are likely to exacerbate climate change impacts in the oceans. Marine systems also face the unique threat of ocean acidification as atmospheric CO2 levels increase. The UK-Australia Frontiers of Science conference was held in October 2010 in Perth, Western Australia, supported by the UK's Royal Society and the Australian Academy of Science. The meeting brought together 70 early career scientists (35 from each country) over 3 days to present the latest advances in their fields, learn about research at the cutting edge of other disciplines, and explore new opportunities for international and multidisciplinary collaboration. Australia and the UK have an extensive and interlinked history, and both countries are considered maritime nations with their oceans contributing substantial social and economic wealth. It is therefore appropriate that these two countries came together to consider the interconnectedness of the world's marine ecosystems, and the interdependence of methods used to study and manage these environments.

PATHWAYS TO UNDERSTANDING THE MARINE ENVIRONMENT

Frontiers of Science meetings are structured around a series of discipline-themed sessions, with three presentations setting out the state of the art in a given subject, and a strong emphasis on discussion among the multidisciplinary audience. Each member of the organizing committee proposed important topics relevant to their theme for wider consideration and one topic was then selected for each disciplinary session. Even at the planning stage of this meeting, however, the inherent interdisciplinarity of marine science was evident. For example, ocean acidification was formally presented in the macrobiology session, but could equally have been placed in any of a number of different sessions, from climatology to chemistry to applied ecology. This problem-centred approach to science is typically indifferent to traditional disciplinary boundaries (figure 1), and in addition blurs the distinction between 'pure' and 'applied' research. As well as the interdisciplinary nature of topics such as ocean acidification, ocean circulation and geoengineering, a number of other common themes linked the diverse sessions in the meeting.
In particular, these included the interdependence of physical, chemical and biological processes across spatial and temporal scales, as well as the consequent complementarity of the methodological approaches applied by researchers.

(a) Space

The title of the mathematics session, Small things matter, referred specifically to the role of eddy-scale turbulence within physical oceanography, and in particular to the importance of considering such small-scale processes in regional and global climate models. But the same sentiment summed up the microbiology session on Symbioses, which highlighted the vital, and often poorly understood, role that microbes play in ocean ecosystems, in part through their intricate relationships with multicellular organisms. Likewise, the GEOTRACES programme (www.geotraces.org), introduced in the chemistry session, seeks to understand the distribution of minute concentrations of the trace metals which underpin global biogeochemical cycles. Of course, macro-scale processes also exert a powerful influence on local phenomena (e.g. low-frequency climate signals) and studies at this scale provide unique but necessary understanding in the context of global change.

(b) Time

The meeting involved delegates with primary interests in documenting the past, understanding the present and predicting the future of the marine environment. The interdependence of these three perspectives was abundantly clear. For instance, information from the past can be used to inform our predictions of the future and develop hypotheses for testing in models and experiments. Retrospective data, for example from sediments and coral cores, can provide evidence of past climate or ecosystem states. However, palaeo-ecologists also require information from analogous extant species to inform their understanding of the fossil and sub-fossil record. One of the challenges highlighted at the meeting is the need to extend the temporal and spatial coverage of retrospective data. Such data are required by climate system models, in order to enhance our understanding of climate system dynamics, and by ecosystem models which aim to predict climate impacts. Whole ecosystem models can also be used to simulate conditions in the past or produce predictions of the future. Such models can simulate the state of ecosystems without exploitation or other anthropogenic pressures, conditions often beyond our data records, so expanding our understanding of key processes.

(c) Methods

A major message from this meeting was the interdependence of theoretical and empirical approaches. Models have been developed across a variety of scales to give a global picture (e.g. physical and biological oceanography, modelling ocean- or global-scale circulation) or more complex local detail (e.g. eddy-scale processes, ecosystem dynamics). But the importance of empirical studies remains key, for verifying model predictions, for deriving parameter estimates and for suggesting future theoretical developments. Coordinated large-scale empirical programmes in the marine environment have been designed to improve the spatial coverage of our understanding of patterns in the biodiversity (e.g. the Census of Marine Life, www.coml.org) and chemical composition of the oceans (e.g. GEOTRACES, www.geotraces.org) as well as to document the history of the Earth system (e.g. the Integrated Ocean Drilling Program, www.iodp.org).
Each of these multinational, multidisciplinary initiatives blurs the boundaries between theory and empiricism, with models driving empirical questions, and the resultant data feeding back into improved models of marine systems. A strength of the Frontiers of Science was to bring together modellers and empiricists in discussions focused on generic problems, rather than on specific methodologies. This approach offers the best pathway to understanding the marine environment.

PUSHING THE FRONTIERS

Marine scientists face the challenges of working in a medium that can be difficult to access and sample, with large areas of the ocean still almost untouched by scientific surveys. Nonetheless, this conference highlighted how collaborations and technological advances are pushing the frontiers of marine science. Cutting-edge technologies are allowing us to collect information over large areas (e.g. ocean colour by satellite remote-sensing), thus allowing automated observing of marine life. Metagenomics, which link 'old' single-species empirical technologies and 'new' molecular biologies at community and ecosystem levels, have the potential to integrate across diverse fields that may have previously lacked a genetics perspective. The development of the kinds of multinational, interdisciplinary networks of marine researchers described above is rapidly advancing our understanding of the exchanges between oceanic physical and biological processes. Finally, the way we manage the marine environment is changing, moving from single-species management to whole-ecosystem management, and ecosystem models which have the capacity to link physics, biology and societal goals can provide unique insight for managers.

[Figure 1. How will fish populations respond to climate change? Reconstruction of past climate, instrumental records, and global and regional climate model projections combine with understanding of other pressures on fish populations, which may include past and present exploitation in fisheries as well as probable responses to a range of policy scenarios, and may involve the complementary expertise of archaeologists, historians, social and political scientists. Each of these different disciplines will bring its own methods, including both empirical and modelling approaches. This interdisciplinary approach covers the requisite range of scales in space (from individual fish populations to global climate) and time (from deep time to the near future).]

TO CONCLUDE

Our oceans cover 70 per cent of the Earth's surface and provide a suite of ecosystem services that are essential to human societies, economies and well-being but are increasingly under threat. Understanding marine systems, and in particular predicting how they will respond to environmental change, demands cooperation across disciplines. Marine scientists, with already shared vocabulary, and some history of collaboration (e.g. shared cruises), are well placed to pioneer partnerships across the natural and physical sciences. The Frontiers of Marine Science meeting not only encouraged the participants to think more broadly across disciplines, but hopefully fostered new collaborations and new thinking both within and between the two countries.
package com.nsfocus.orchestration.entity;

import java.io.Serializable;

/**
 * TriggerAppEntity: the trigger app.
 * EventAppEntity: the event app.
 * @author xpn
 */
public class OrchestrationNode implements Serializable {

    private static final long serialVersionUID = 8147419497697842072L;

    private TriggerAppEntity triggerApp;
    private EventAppEntity eventApp;
    private TimerEntity timer;

    public TriggerAppEntity getTriggerApp() {
        return triggerApp;
    }

    public void setTriggerApp(TriggerAppEntity triggerApp) {
        this.triggerApp = triggerApp;
    }

    public EventAppEntity getEventApp() {
        return eventApp;
    }

    public void setEventApp(EventAppEntity eventApp) {
        this.eventApp = eventApp;
    }

    public TimerEntity getTimerEntity() {
        return timer;
    }

    public void setTimer(TimerEntity timer) {
        this.timer = timer;
    }
}
export default class SecureCookies { private identifier; private secretKey; private enablementKey; private cryptography; private enablementCryptography; private options; private status; constructor(identifier: any, secretKey: any); readonly isEnabled: boolean; readonly cookies: any; enable(): void; disable(): void; resetOptions(): void; setOption(property: any, option: any): void; getOption(property: any): any; private static execute(secureCookiesObject, callback, args?, enablementKey?); private store(object, enablementKey?); private all(enablementKey?); put(name: any, value: any): any; get(name: any): any; exists(name: any): any; has(name: any): any; forget(name: any): any; flush(): any; regenerate(): any; }
A scale and rotation invariant scheme for multi-oriented Character Recognition In printed stylized documents, text lines may be curved in shape and as a result characters of a single line may be multi-oriented. This paper presents a multi-scale and multi-oriented character recognition scheme using foreground as well as background information. Here each character is partitioned into multiple circular zones. For each zone, three centroids are computed by grouping the constituent character segments (components) of each zone into two clusters. As a result, we obtain one global centroid for all the components in the zone, and further two centroids for the two generated clusters. The above method is repeated for both foreground and background information. The features are generated by encoding the spatial distribution of these centroids through their relative angular information. These features are then fed into an SVM classifier. A PCA-based feature selection phase has also been applied. Detailed experiments on Bangla and Devanagari datasets have been performed. It has been seen that the proposed methodology outperforms a recent competing method.
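To make the zoning-and-centroid idea concrete, here is a minimal sketch of one plausible reading of the feature stage. The zone radii, the use of k-means for the two-cluster grouping, the angle encoding about the image centre, and the function name zone_centroid_features are all illustrative assumptions, not the authors' exact formulation.

# Illustrative sketch only: zone radii, k-means grouping and the angular
# encoding are assumptions standing in for the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans

def zone_centroid_features(img, n_zones=4):
    """img: 2-D binary array with at least one foreground pixel."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.nonzero(img)                         # foreground pixel coordinates
    r = np.hypot(ys - cy, xs - cx)                   # radius of each pixel
    edges = np.linspace(0.0, r.max() + 1e-9, n_zones + 1)
    feats = []
    for i in range(n_zones):                         # one circular band per zone
        mask = (r >= edges[i]) & (r < edges[i + 1])
        pts = np.column_stack([ys[mask], xs[mask]])
        if len(pts) < 2:                             # zone too sparse to cluster
            feats.extend([0.0, 0.0, 0.0])
            continue
        global_c = pts.mean(axis=0)                  # global centroid of the zone
        km = KMeans(n_clusters=2, n_init=5).fit(pts)
        c1, c2 = km.cluster_centers_                 # two cluster centroids
        for c in (global_c, c1, c2):                 # relative angular encoding
            feats.append(np.arctan2(c[0] - cy, c[1] - cx))
    return np.asarray(feats)

# Foreground plus background features, assuming img holds values in {0, 1}:
# features = np.concatenate([zone_centroid_features(img),
#                            zone_centroid_features(1 - img)])

In the spirit of the abstract's final stage, such features could then be reduced with sklearn.decomposition.PCA and classified with sklearn.svm.SVC; again, the exact zoning, clustering and angle conventions above are stand-ins rather than the published method.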
def convert_string(x):
    """Keep lowercase letters, digits, '-', '_' and spaces; fold A-Z to
    lowercase; drop every other character."""
    wanted = set()
    wanted.update(range(97, 123))  # a-z
    wanted.update(range(48, 58))   # 0-9
    wanted.update({45, 95})        # '-' and '_'
    wanted.add(32)                 # space
    s = ''
    for c in x:
        if ord(c) in wanted:
            s += c
        elif 65 <= ord(c) <= 90:   # A-Z: shift into the lowercase range
            s += chr(ord(c) + 32)
    return s
<reponame>Jeanmilost/Visual-Mercutio /**************************************************************************** * ==> PSS_SelectPropertyDlg -----------------------------------------------* **************************************************************************** * Description : Provides a dialog box to select a property * * Developer : Processsoft * ****************************************************************************/ #ifndef PSS_SelectPropertyDlgH #define PSS_SelectPropertyDlgH #if _MSC_VER > 1000 #pragma once #endif // change the definition of AFX_EXT... to make it import #undef AFX_EXT_CLASS #undef AFX_EXT_API #undef AFX_EXT_DATA #define AFX_EXT_CLASS AFX_CLASS_IMPORT #define AFX_EXT_API AFX_API_IMPORT #define AFX_EXT_DATA AFX_DATA_IMPORT // processsoft #include "zPtyMgr\zPtyMgrRes.h" #include "PSS_PropertyListCtrl.h" // class name mapping #ifndef PSS_DynamicPropertiesManager #define PSS_DynamicPropertiesManager ZBDynamicPropertiesManager #endif #ifndef PSS_ProcessGraphModelMdl #define PSS_ProcessGraphModelMdl ZDProcessGraphModelMdl #endif // forward class declaration class PSS_DynamicPropertiesManager; class PSS_ProcessGraphModelMdl; #ifdef _ZPTYMGREXPORT // put the values back to make AFX_EXT_CLASS export again #undef AFX_EXT_CLASS #undef AFX_EXT_API #undef AFX_EXT_DATA #define AFX_EXT_CLASS AFX_CLASS_EXPORT #define AFX_EXT_API AFX_API_EXPORT #define AFX_EXT_DATA AFX_DATA_EXPORT #endif /** * Select a property dialog box *@author <NAME>, <NAME> */ class AFX_EXT_CLASS PSS_SelectPropertyDlg : public CDialog { public: /** * Constructor *@param pProps - the properties *@param showType - the view type to show *@param selection - if true, the selection is allowed *@param allowItemSelection - if true, the item selection is allowed *@param allowCategorySelection - if true, the category selection is allowed *@param pPropManager - the property manager *@param pModel - the model *@param pParent - the parent window, can be NULL */ PSS_SelectPropertyDlg(PSS_Properties* pProps, int showType = 0, bool selection = true, bool allowItemSelection = true, bool allowCategorySelection = false, PSS_DynamicPropertiesManager* pPropManager = NULL, PSS_ProcessGraphModelMdl* pModel = NULL, CWnd* pParent = NULL); /** * Constructor *@param pPropSet - the property set *@param showType - the view type to show *@param selection - if true, the selection is allowed *@param allowItemSelection - if true, the item selection is allowed *@param allowCategorySelection - if true, the category selection is allowed *@param pPropManager - the property manager *@param pModel - the model *@param pParent - the parent window, can be NULL */ PSS_SelectPropertyDlg(PSS_Properties::IPropertySet* pPropSet, int showType = 0, bool selection = true, bool allowItemSelection = true, bool allowCategorySelection = false, PSS_DynamicPropertiesManager* pPropManager = NULL, PSS_ProcessGraphModelMdl* pModel = NULL, CWnd* pParent = NULL); /** * Gets the selected property *@return the selected property */ virtual inline PSS_Property* GetSelectedProperty(); /** * Gets the selected property item *@return the selected property item */ virtual inline PSS_PropertyItem* GetSelectedPropertyItem() const; protected: /// ClassWizard generated virtual function overrides //{{AFX_VIRTUAL(PSS_SelectPropertyDlg) virtual void DoDataExchange(CDataExchange* pDX); //}}AFX_VIRTUAL // Generated message map functions //{{AFX_MSG(PSS_SelectPropertyDlg) virtual BOOL OnInitDialog(); afx_msg void OnProptype(); afx_msg void OnRenameAttribute(); afx_msg void 
OnDeleteAttribute();
    virtual void OnOK();
    //}}AFX_MSG
    DECLARE_MESSAGE_MAP()

    private:
        /**
        * Dialog resources
        */
        enum
        {
            IDD = IDD_ALL_PROPERTIES
        };

        PSS_PropertyListCtrl m_PropertyList;
        PSS_Properties* m_pProperties;
        PSS_Properties::IPropertySet* m_pPropSet;
        PSS_PropertyItem* m_pSelectedProperty;
        PSS_DynamicPropertiesManager* m_pPropManager;
        PSS_ProcessGraphModelMdl* m_pModel;
        int m_PropType;
        bool m_AllowItemSelection;
        bool m_AllowCategorySelection;
        bool m_Selection;

        /**
        * Checks the control state
        */
        void CheckControlStates();
};

//---------------------------------------------------------------------------
// PSS_SelectPropertyDlg
//---------------------------------------------------------------------------
// NOTE: "inline" is required on these out-of-class definitions; without it,
// including this header in several translation units would violate the one
// definition rule at link time.
inline PSS_Property* PSS_SelectPropertyDlg::GetSelectedProperty()
{
    if (m_pSelectedProperty)
        return m_PropertyList.GetMatchingProperty(m_pSelectedProperty);

    return NULL;
}
//---------------------------------------------------------------------------
inline PSS_PropertyItem* PSS_SelectPropertyDlg::GetSelectedPropertyItem() const
{
    return m_pSelectedProperty;
}
//---------------------------------------------------------------------------
#endif
async def lp_info(ctx, name0, name1): umanager = ctx.obj['umanager'] token0 = umanager.sl.get(name0 if name0 != 'OLT' else 'WOLT') token1 = umanager.sl.get(name1 if name1 != 'OLT' else 'WOLT') reserves = await umanager.get_reserves(token0, token1) if not reserves[0] or not reserves[1]: click.secho(f'LP does not have initial reserve', fg='red') return token0, token1 = UniswapUtils.sort_tokens(token0, token1) name0 = await umanager.get_token_name(token0) name1 = await umanager.get_token_name(token1) fee_rate = await umanager.get_pool_fee_rate() click.echo(f'Current LP [{name0} - {name1}] info:') click.echo(f'Reserve {name0}: {reserves[0]}') click.echo(f'Reserve {name1}: {reserves[1]}') click.echo(f'{name0} -> {name1}: {reserves[1] / reserves[0]}') click.echo(f'{name1} -> {name0}: {reserves[0] / reserves[1]}') click.echo(f'Liquidity: {reserves[0] * reserves[1]}') click.echo(f'Pool fee rate (%): {pretty_float(fee_rate * 100, 0)}')
About

Hi, I'm Chase, but many of you may know me as retroshark. I created and have been hosting GBJAM for 5 years now, and it has been the coolest thing for me to see how it has grown over the years. I have been hoping to create a whole new platform for GBJAM, and it has been hard to do without the funds. I want the all-new GBJam to be entirely self-contained on a single website with a game submission system, forum and store where everyone can purchase sweet merch!

What is GBJAM?

If you are new and are unsure of what GBJAM is, read ahead! GBJAM is a videogame development jam, or in other words a friendly competition to create a game based on a theme and rules in an allotted period of time.

Funds Breakdown

Curious about where your pledge is going? I'm completely transparent, so here's a breakdown.

Development of a website, game submission system and forum - $1000
Webhosting and Domain fees - $500
Storefront Hosting - $600
GBJAM Merchandise - $2000

(All hosting fees are for the next 5 years)

The remainder of the funds not listed is a fallback for expenses I may not have anticipated, as well as a way of allowing GBJAM to continue to expand well into the future.

Credit

Music in video: DOCTOR VOX - Frontier
​Editor’s Note: Peter King and the staff of The MMQB took over this week’s issue of Sports Illustrated magazine. In it, you’ll read the kind of feature stories that you’ve come to expect from The MMQB, plus an all-access behind-the-scenes look at one day in the life of the NFL, from inside the Texans’ team meeting on the night before the season opener to the Patriots’ celebrations at the end of their surprise win in Arizona, and much more. Pick up the magazine on newsstands, or subscribe here. Justice Cunningham was on the sideline when he heard it. Some of his South Carolina teammates said it sounded like a gunshot. The 54,527 fans at the 2013 Outback Bowl in Tampa looked around as one, scanning the field for the source. “It got electric,” says Cunningham, a tight end. “I was on the sideline trying to get my head together. You heard that pop, looked up and said, What happened?” Sophomore Jadeveon Clowney had just skated between blockers and struck Michigan tailback Vincent Smith in the backfield, dislodging the ball and Smith’s helmet, and scooped up the fumble in almost the same motion. The play would become known as “the Hit.” Ten months later and 1,200 miles north, a smattering of fans dotted the seats at University of Buffalo Stadium as a light rain fell during UB’s game against UMass. The week before, defensive line coach Jappy Oliver had been goading fifth-year linebacker Khalil Mack about Minutemen offensive tackle Anthony Dima, who was pegged as a potential NFL draft pick. On second‑and‑10 Mack motioned as though he would sprint around the edge, then abruptly planted his foot and shoved the 300-pound Dima, lifting him skyward. “I swear to you,” says Oliver, “he lifted this kid totally off the ground. We froze the tape the next day to make sure.” Clowney, once the No. 1 recruit in the nation, had long been the sort of athlete you might design in a video game, an illogical combination of size (6' 6", 270) and speed (4.5 40 time). He was a legend before he ever stepped foot onto the field in Columbia, S.C., and for three years his biggest moments were broadcast nationally (and, in the case of the Hit, endlessly) on highlight shows. An edge rusher at South Pointe High (Rock Hill, S.C.), he had been recruited by every institution of higher education from Tuscaloosa to Cambridge. (Yes, Harvard threw its hat into the ring.) When he chose South Carolina over Alabama and Clemson, it was broadcast on national television. Mack was an under-recruited workout warrior at Fort Pierce (Fla.) Westwood High, a two-star prospect who didn’t get a single offer from a Power 5 conference school. He settled for Buffalo of the Mid-American Conference. Clowney’s and Mack’s paths didn’t converge until draft night 2014, when they were the top two defensive selections in the draft. One NFL general manager whose team had a top 10 pick that year put the contrast this way: “One of them you felt had all the intangibles you look for, and the other was an absolute freak.” Three years later, as a new NFL season begins, the paths of Clowney and Mack have diverged. Mack, a Raider, is an unquestioned superstar. Clowney, in Houston, is a question mark. Were those high school and college years, full of adulation, enabling and the shortcuts made possible by his rare physical gifts, a detriment to Clowney’s development? Conversely, did a five-year grind in anonymity make Mack the player he is today? What’s the difference between a college career that quietly crescendos and one that roars unceasingly?
And for NFL teams, is there a lesson to be learned? * * * “In many ways it’s a classic tale,” says Phil Savage, former Browns general manager who is now executive director of the Senior Bowl. “Here’s one player who has been rated at the top all the way through his career, and he was not really pushed to become a great player in college. He had some great flashes, but he was physically more developed than everybody. On the other hand, here’s a guy who is completely underrated in high school, excels without the benefit of a top-notch strength-and-conditioning program, and you say, ‘Wow, we think this guy’s got a huge ceiling.’ ” Clowney and Mack met at the NFL scouting combine in Indianapolis before the 2014 draft. There was a recurring joke among the high-profile prospects that year, at the expense of Clowney. A rumor being fanned by NFL writers at the time had the Bills trading up to the No. 1 spot. Clowney, the consensus prediction to be the first pick and a lifelong resident of South Carolina, batted away taunts from his fellow prospects about his future in winters of lake-effect snow. “There was all the talk about different people moving up and down, and everybody kept messing with Clowney,” Mack says. “He was like, ‘Bruh, I’m not going to Buffalo.’ ” Mack, of course, had already made the move north. A native of Fort Pierce, Fla., he had spent the previous five years attending college in the city that experiences upward of 90 inches of snow annually. Clowney had been productive in his freshman and sophomore seasons with the Gamecocks, with 21 sacks and an eye-popping 35.5 tackles for loss, but the Hit brought something new. When he returned to campus for the spring semester after the Outback Bowl, students mobbed him, asking for selfies. Friends and confidants told him he should have been the No. 1 pick in 2013, forget 2014. In March, Clowney took out a $5 million insurance policy on his body to guard against injury in his final collegiate season. That summer, coaches grew concerned he might seriously consider sitting out the season. Lorenzo Ward, then the Gamecocks’ defensive coordinator, volunteered to visit Clowney’s family in their hometown of Rock Hill, S.C. By the time he left dinner with Clowney’s mother and grandparents at a Cracker Barrel, he was satisfied that Clowney would play. “You’re talking about a kid who was 20 years old,” Ward says. “Did he buy into all that hype? I’m sure it affected him some.” Says Clowney, “I always believed in myself.” By the time Clowney returned for his junior season, opponents were ready for him. Their plan: throw bodies at him. Between his breakout sophomore season and his whirlwind junior year, Clowney had not learned the art of rushing the passer, and coaches never pressed him to learn it. Ward and his assistants had a phrase they’d repeat to linebackers and safeties playing behind Clowney—a mantra for operating around a player who relied on instinct: Make him right. In other words, if Clowney’s assignment was to set the edge but instead he darted inside, adjust your assignment. Reminded of those three words after a recent practice on a steamy August day in Houston, Clowney flashed a wide grin. “ ‘Make Jadeveon right’ means, Just play ball,” he said. “I mean, I line up in the defense and try to get to the ball, make a play, going inside, outside. Me and my linebackers had a good feel off each other. In the NFL, everything works together.
If one guy’s wrong in the defense, [the opponent] can break out.” * * * Mack’s time at Buffalo began with considerably less fanfare than Clowney’s in Columbia—and a lot more pain. After a string of injuries in high school, including a torn left patellar tendon, he played one full prep season and led the team in tackles. Coaches at Buffalo stuck the 210‑pounder at middle linebacker in 2009. Mack, who has since grown to a ripped 250 pounds, recalls playing on the scout team, stepping up to meet runs up the gut and taking daily punishment from 250-pound senior fullback Lawrence Rolle. “Playing scout against him, I definitely got my mind right,” Mack says. “They just ran ISO over and over and over. I’m like hnnnnngggg. Then one day he came up to me before practice and said, ‘All right, it’s Thursday. Just relax.’ I’m like, ‘Damn. O.K., cool.’ ” The following summer Mack cracked open the latest in a favorite video game series—EA Sports’s NCAA Football 11—and discovered his overall rating was a paltry 46 out of 99, matching his jersey number. He kept that number throughout his college career as a reminder of those who had doubted his ability. Mack was growing into a rush linebacker and taking an interest in breaking down tendencies of offensive tackles he would face in coming weeks. Date night was a visit to the weight room. “I would pop in the office on a Sunday to pick up something, and Mack would be in the gym,” says Oliver. “I saw him in there with a lady friend one time, just doing sit‑ups, push‑ups, working with a medicine ball.” During the off‑season Mack and his roommate Branden Oliver worked out feverishly and talked about making it to the NFL. Their schedules left little time for summer jobs, and they only received per diems during the school year, so they came to rely on a friend, wide receiver Fred Lee, who was a late-night manager at Taco Bell. “Fred got to manager status, so he would bring home leftover Taco Bell,” says Oliver, now a running back with the Chargers. “Summer was the hardest time for us. If there was an event going on and food was there, we were there. Sometimes we were just having MET‑Rx shakes for breakfast, lunch and dinner.” As a senior, in the season opener against Ohio State, Mack returned an interception for a touchdown and had 2.5 sacks in a 40–20 loss. Scouts and GMs took notice, and Khalil Mack became a big name in NFL scouting circles. “Here’s the thing about going to a small school versus going to a big school,” Mack says. “There are still opportunities, and you’ve got to make the most of them. Whether it was against Ohio State or Miami [Ohio], I was going to make the most of that opportunity.” * * * For the Texans, Clowney and Mack had something—someone—in common: Ed Lambert, a 15-year scouting veteran, is pals with both Ward, the former South Carolina defensive coordinator, and Jappy Oliver, the former Buffalo defensive line coach. Lambert and Oliver coached together at Vanderbilt in the early 1990s before Lambert became a scout. Oliver told Lambert about the can’t-miss kid who worked out religiously, didn’t speak much and only played one year of prep football before coming to the MAC. “Ed and I went round and round about it,” Oliver says. “I told him, You’re making a mistake, Mack’s work ethic is second to none.” Says Lambert, “Being a coach for a while, I know exactly where Clowney’s coming from. To me as a coach, if I draft this player [Clowney], he’s gonna bring his ass to meetings, workouts, all that.
It’s a matter of getting him with the right position coach and the right regimen.” At the South Carolina pro day, new Texans coach Bill O’Brien spent the day trying to get a read on Clowney. The workout mattered little; Clowney had already performed at the combine and, at 266 pounds, run a 4.53 40-yard dash. O’Brien wanted to know who Clowney was when the cameras weren’t rolling. Former South Carolina quarterback Connor Shaw says O’Brien approached him and asked what kind of teammate Clowney had been. “I told him the truth,” Shaw says. “He’s a freak athlete, and when he wanted to in college, he dominated the game. His work ethic was there. It was just about getting him in the building.” O’Brien says he doesn’t remember the conversation with Shaw, but he says he does remember the knocks on Clowney that emerged that spring. There were his declining stats, and he had become known as a guy who would pick his spots when he would practice and play hard. “Teams were running the ball away from him, or he was getting double‑teamed or triple‑teamed in pass protection,” O’Brien says. “Just like any player, could he have played better at certain moments? Sure.” Many evaluators with a top pick ask themselves one question: Given the right circumstances, if paired with the right coaches and the appropriate scheme, what might this player become? Physical tools play heavily into that analysis. A top five pick, in theory, ought to possess rare physical traits that would justify resources spent in acquisition and cultivation. “What God gave him, it’s rare. We call him a once-in-a-generation kind of athlete,” says Texans GM Rick Smith. “We knew, from the standpoint of learning how to be a professional, that he would have to cover some ground. What you expect is that the athleticism is such and the instincts are such that, as he learns how to do that, there’s still production along the way. Injuries have gotten in the way.” Oakland’s scouting of Mack began with general manager Reggie McKenzie’s twin brother, Raleigh, who covered the Northeast and brought the Buffalo pass rusher to Reggie’s attention after the Ohio State game. The McKenzies watched Mack closely after that game to see if he would play down to the level of competition in the MAC. “That was the key, and he did not,” Reggie says. “And when he had the chance to play a big school, he dominated there too. You gotta look past the level of competition. You can’t beat him up because of that.” Looking away from the Power 5, Reggie McKenzie found not only Mack but also Fresno State quarterback Derek Carr one round later. “For both of those guys, I felt strongly about their character, both on the field and off,” McKenzie says. “It’s hard to put certain specifics on what you’re looking for other than your ability to foresee what a guy will be. I don’t know if analytics, height, weight, speed can judge that.” The coach who would eventually become Mack’s defensive coordinator in 2015, Ken Norton Jr., evaluated Mack while a member of the Seahawks’ staff in 2013 and saw his humble beginnings as a positive. “When I was at USC we studied this. We were always trying to recruit five-stars because those are your better players,” says Norton, who coached linebackers at USC under Pete Carroll from 2004 to ’09. 
“What we found is that your two- and three-star players end up being pros because they’re so upset, so offended, and they’re trying to spend the rest of their career proving they’re five-stars.” * * * If psychologist Abraham Maslow were to apply his hierarchy of needs to the football field, he might say that Mack has vaulted beyond basic and psychological needs and is on the path to self-actualization. His rookie season was so widely overlooked for its low sack total (four) that the Raiders’ media relations staff began emailing reporters weekly updates from Pro Football Focus on Mack’s unheralded exploits. By 2015, there would be no need. Mack truly arrived when he single-handedly changed the game in a Week 14 visit to Denver by sacking quarterback Brock Osweiler five times. The Raiders won 15–12, and the NFL got a taste of the former psychology major’s brutal philosophy on rushing the passer. “The mind‑set is, You know what I’m going to do, but can you stop it? I’m going to keep doing it over and over and over until it breaks you,” Mack says. “And then I switch it up. If I can hit him in the mouth over and over and over, then I might switch it up and fake him out. Then I can do whatever I want.” At Buffalo, Mack’s favorite class was Psych 431, an upper-level course for psychology majors that explores how the body responds to psychological processes. Mack homed in on the importance of mental preparation as it pertains to the body’s release of cortisol, an adrenal hormone that can provide energy in moments of stress, but also comes with deleterious effects in large quantities. “They talked about the value of a challenge versus a threat,” Mack says. “When you go into a test prepared, you feel like it’s a challenge, and you can do it with confidence. When you’re not prepared, you go into the test thinking you’re going to fail, and the stress goes to your stomach, and you actually do fail because of that.” Clowney’s road to self-actualization in the NFL has been slower. Before he ever stepped onto the field in 2014, he required surgery for a hernia, in June, a month after the draft. He suffered a torn right meniscus in his NFL debut, and complications led to season-ending microfracture surgery three months later. When he came back last season, eight months after the December surgery, he didn’t feel quite right, he later acknowledged. He played in 13 games with nine starts, injuring his right ankle in Week 5, then suffering a back injury in Week 8. Entering ’16, he had 4.5 sacks in 17 career games and, according to Stats Inc., eight hurries. * * * Through those first two disappointing seasons, Clowney has discovered that he can’t bulldoze blockers the way he did in the SEC. Not only that, but he’s had to learn how to operate in Romeo Crennel’s hybrid 3–4 defense, which requires outside linebackers to drop into coverage. Clowney hadn’t learned that at South Carolina simply because he didn’t have to. Ditto for weight training, which had been an afterthought in college. “He thought natural ability would get him by in every situation, so he never really bought into the weight room,” says Ward. “He did just enough to get by. And I told him, ‘You’re gonna have to learn to develop your body because at that next level, all those guys are big, strong and fast just like you.’ I think he’s realized that now.” Lambert, the scout who talked with Ward about Clowney’s prospects, retired after the 2016 draft.
He says it’s time for Clowney to “s‑‑‑ or get off the pot.” For O’Brien, Clowney’s strong preseason performance and steady play in the season-opening win over the Bears (including a sack) are evidence the man now gets it. “I think he’s definitely a guy who really understands now how important it is to do all those little things,” O’Brien says. Most, if not all, teams would have taken Clowney with the first pick in 2014. Injuries have slowed his development, just like injuries slowed Mack’s development in high school. Before the book is closed, Clowney could very well emerge as the league’s next great rush linebacker, and Mack could very well get hurt and miss time like Clowney has. Yet the way they arrived in the league—in disparate circumstances—remains a valuable case study in draft strategy for NFL teams. What if, Khalil Mack, you had been more highly touted? If you had gone to Alabama, Miami or Florida, do you think you would’ve gotten here? “That’s a tough question, man. Honestly, knowing the work ethic that I have and knowing how competitive I am, it’s easy to say, yes. But in those situations you can get comfortable going to those big schools, eating good, getting things that work in your favor.” And what if, Jadeveon Clowney, you had been a one- or two-star recruit and walked on at some small school? “I’d still be a bad mother------.” Question or comment? Email us at [email protected].
use claim::assert_some_eq; use testutils::assert_empty; use super::*; fn history_item_to_string(item: &HistoryItem) -> String { let range = if item.start_index == item.end_index { item.start_index.to_string() } else { format!("{}-{}", item.start_index, item.end_index) }; format!( "{:?}[{}] {}", item.operation, range, item.lines.iter().map(Line::to_text).collect::<Vec<String>>().join(", ") ) } fn _assert_history_items(actual: &[HistoryItem], expected: &[HistoryItem]) { let actual_strings: Vec<String> = actual.iter().map(history_item_to_string).collect(); let expected_strings: Vec<String> = expected.iter().map(history_item_to_string).collect(); pretty_assertions::assert_str_eq!(actual_strings.join("\n"), expected_strings.join("\n")); } macro_rules! assert_history_items { ($history_items:expr, $($arg:expr),*) => { let expected = &vec![$( $arg, )*]; _assert_history_items(&Vec::from($history_items), &expected); }; } fn create_lines() -> Vec<Line> { vec![ Line::new("pick aaa c1").unwrap(), Line::new("pick bbb c2").unwrap(), Line::new("pick ccc c3").unwrap(), Line::new("pick ddd c4").unwrap(), Line::new("pick eee c5").unwrap(), ] } macro_rules! assert_todo_lines { ($lines:expr, $($arg:expr),*) => { let expected = vec![$( Line::new($arg).unwrap(), )*]; pretty_assertions::assert_str_eq!( $lines.iter().map(Line::to_text).collect::<Vec<String>>().join("\n"), expected.iter().map(Line::to_text).collect::<Vec<String>>().join("\n") ); }; } #[test] fn new() { let history = History::new(100); assert_eq!(history.limit, 100); assert_empty!(history.undo_history); assert_empty!(history.redo_history); } #[test] fn record_history() { let mut history = History::new(5); history.redo_history.push_front(HistoryItem::new_add(1, 1)); history.record(HistoryItem::new_add(1, 1)); assert_history_items!(history.undo_history, HistoryItem::new_add(1, 1)); assert_empty!(history.redo_history); } #[test] fn record_history_overflow_limit() { let mut history = History::new(3); history.record(HistoryItem::new_add(1, 1)); history.record(HistoryItem::new_add(2, 2)); history.record(HistoryItem::new_add(3, 3)); history.record(HistoryItem::new_add(4, 4)); assert_history_items!( history.undo_history, HistoryItem::new_add(2, 2), HistoryItem::new_add(3, 3), HistoryItem::new_add(4, 4) ); assert_empty!(history.redo_history); } #[test] fn undo_redo_add_start() { let mut history = History::new(10); history.record(HistoryItem::new_add(0, 0)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 0)); assert_todo_lines!(lines, "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5"); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_end() { let mut history = History::new(10); history.record(HistoryItem::new_add(4, 4)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (3, 3)); assert_todo_lines!(lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4"); assert_some_eq!(history.redo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_middle() { let mut history = History::new(10); history.record(HistoryItem::new_add(2, 2)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!(lines, "pick aaa c1", "pick bbb c2", "pick ddd c4", "pick eee c5"); assert_some_eq!(history.redo(&mut lines), (2, 2)); 
assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_range_start_index_at_top() { let mut history = History::new(10); history.record(HistoryItem::new_add(0, 1)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 0)); assert_todo_lines!(lines, "pick ccc c3", "pick ddd c4", "pick eee c5"); assert_some_eq!(history.redo(&mut lines), (0, 1)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_range_end_index_at_top() { let mut history = History::new(10); history.record(HistoryItem::new_add(1, 0)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 0)); assert_todo_lines!(lines, "pick ccc c3", "pick ddd c4", "pick eee c5"); assert_some_eq!(history.redo(&mut lines), (1, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_range_start_index_at_bottom() { let mut history = History::new(10); history.record(HistoryItem::new_add(4, 3)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!(lines, "pick aaa c1", "pick bbb c2", "pick ccc c3"); assert_some_eq!(history.redo(&mut lines), (4, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_add_range_end_index_at_bottom() { let mut history = History::new(10); history.record(HistoryItem::new_add(3, 4)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!(lines, "pick aaa c1", "pick bbb c2", "pick ccc c3"); assert_some_eq!(history.redo(&mut lines), (3, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_start() { let mut history = History::new(10); history.record(HistoryItem::new_remove(0, 0, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 0)); assert_todo_lines!( lines, "drop xxx cx", "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_end() { let mut history = History::new(10); history.record(HistoryItem::new_remove(5, 5, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (5, 5)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5", "drop xxx cx" ); assert_some_eq!(history.redo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_middle() { let mut history = History::new(10); history.record(HistoryItem::new_remove(2, 2, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "drop xxx cx", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (2, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_range_start_index_top() { let mut history = History::new(10); 
history.record(HistoryItem::new_remove(0, 1, vec![ Line::new("drop xxx cx").unwrap(), Line::new("drop yyy cy").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 1)); assert_todo_lines!( lines, "drop xxx cx", "drop yyy cy", "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_range_start_index_bottom() { let mut history = History::new(10); history.record(HistoryItem::new_remove(6, 5, vec![ Line::new("drop xxx cx").unwrap(), Line::new("drop yyy cy").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (6, 5)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5", "drop xxx cx", "drop yyy cy" ); assert_some_eq!(history.redo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_range_end_index_top() { let mut history = History::new(10); history.record(HistoryItem::new_remove(1, 0, vec![ Line::new("drop xxx cx").unwrap(), Line::new("drop yyy cy").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (1, 0)); assert_todo_lines!( lines, "drop xxx cx", "drop yyy cy", "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_remove_range_end_index_bottom() { let mut history = History::new(10); history.record(HistoryItem::new_remove(5, 6, vec![ Line::new("drop xxx cx").unwrap(), Line::new("drop yyy cy").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (5, 6)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5", "drop xxx cx", "drop yyy cy" ); assert_some_eq!(history.redo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_single_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(1, 1)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (1, 1)); assert_todo_lines!( lines, "pick bbb c2", "pick aaa c1", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_single_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(4, 4)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick eee c5", "pick ddd c4" ); assert_some_eq!(history.redo(&mut lines), (3, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_single_index_middle() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(2, 2)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick ccc c3", "pick bbb c2", "pick ddd c4", "pick eee c5" ); 
assert_some_eq!(history.redo(&mut lines), (1, 1)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_range_down_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(1, 2)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (1, 2)); assert_todo_lines!( lines, "pick ccc c3", "pick aaa c1", "pick bbb c2", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 1)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_range_down_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(3, 4)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (3, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick eee c5", "pick ccc c3", "pick ddd c4" ); assert_some_eq!(history.redo(&mut lines), (2, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_range_up_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(2, 1)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 1)); assert_todo_lines!( lines, "pick ccc c3", "pick aaa c1", "pick bbb c2", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (1, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_up_range_up_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_swap_up(4, 3)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (4, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick eee c5", "pick ccc c3", "pick ddd c4" ); assert_some_eq!(history.redo(&mut lines), (3, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_down_range_down_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_swap_down(0, 1)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 1)); assert_todo_lines!( lines, "pick bbb c2", "pick ccc c3", "pick aaa c1", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (1, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_down_range_down_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_swap_down(2, 3)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ddd c4", "pick eee c5", "pick ccc c3" ); assert_some_eq!(history.redo(&mut lines), (3, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_swap_down_range_up_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_swap_down(1, 0)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (1, 0)); assert_todo_lines!( lines, "pick bbb c2", "pick ccc c3", "pick aaa c1", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (2, 1)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] 
fn undo_redo_swap_down_range_up_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_swap_down(3, 2)); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (3, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ddd c4", "pick eee c5", "pick ccc c3" ); assert_some_eq!(history.redo(&mut lines), (4, 3)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_single_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_modify(0, 0, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 0)); assert_todo_lines!( lines, "drop xxx cx", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_single_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_modify(4, 4, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "drop xxx cx" ); assert_some_eq!(history.redo(&mut lines), (4, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_single_index_middle() { let mut history = History::new(10); history.record(HistoryItem::new_modify(2, 2, vec![Line::new("drop xxx cx").unwrap()])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "drop xxx cx", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (2, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_range_down_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_modify(0, 2, vec![ Line::new("drop xx1 c1").unwrap(), Line::new("drop xx2 c2").unwrap(), Line::new("drop xx3 c3").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (0, 2)); assert_todo_lines!( lines, "drop xx1 c1", "drop xx2 c2", "drop xx3 c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (0, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_range_down_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_modify(2, 4, vec![ Line::new("drop xx1 c1").unwrap(), Line::new("drop xx2 c2").unwrap(), Line::new("drop xx3 c3").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "drop xx1 c1", "drop xx2 c2", "drop xx3 c3" ); assert_some_eq!(history.redo(&mut lines), (2, 4)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_range_up_index_start() { let mut history = History::new(10); history.record(HistoryItem::new_modify(2, 0, vec![ Line::new("drop xx1 c1").unwrap(), Line::new("drop xx2 c2").unwrap(), Line::new("drop xx3 c3").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (2, 0)); 
assert_todo_lines!( lines, "drop xx1 c1", "drop xx2 c2", "drop xx3 c3", "pick ddd c4", "pick eee c5" ); assert_some_eq!(history.redo(&mut lines), (2, 0)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn undo_redo_modify_range_up_index_end() { let mut history = History::new(10); history.record(HistoryItem::new_modify(4, 2, vec![ Line::new("drop xx1 c1").unwrap(), Line::new("drop xx2 c2").unwrap(), Line::new("drop xx3 c3").unwrap(), ])); let mut lines = create_lines(); assert_some_eq!(history.undo(&mut lines), (4, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "drop xx1 c1", "drop xx2 c2", "drop xx3 c3" ); assert_some_eq!(history.redo(&mut lines), (4, 2)); assert_todo_lines!( lines, "pick aaa c1", "pick bbb c2", "pick ccc c3", "pick ddd c4", "pick eee c5" ); } #[test] fn reset() { let mut history = History::new(3); history.redo_history.push_front(HistoryItem::new_add(1, 1)); history.undo_history.push_front(HistoryItem::new_add(1, 1)); history.reset(); assert_empty!(history.undo_history); assert_empty!(history.redo_history); }
def projectpoints(P, X):
    # Project the points X (Cartesian world coordinates) with the camera
    # matrix P, then convert the homogeneous result back to Cartesian form.
    return hom2cart(P.dot(cart2hom(X)))
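# hom2cart and cart2hom are not defined in this snippet; the sketch below
# shows the conventional column-wise NumPy implementations plus a
# hypothetical call, purely for context (an assumption, not the original
# exercise's reference solution).
import numpy as np

def cart2hom(X):
    # Append a row of ones: (d, n) Cartesian -> (d+1, n) homogeneous.
    return np.vstack([X, np.ones((1, X.shape[1]))])

def hom2cart(Xh):
    # Divide by the last row and drop it: (d+1, n) -> (d, n).
    return Xh[:-1] / Xh[-1]

# Hypothetical example: a 3x4 camera matrix and two 3-D points (one per column).
P = np.hstack([np.eye(3), np.zeros((3, 1))])
X = np.array([[0.0, 1.0], [0.0, 2.0], [1.0, 4.0]])
print(projectpoints(P, X))  # 2-D image coordinates, one column per point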
// source: johnnywww/swd, server/src/sysware.com/ivideo/server/http/template_func.go
package http import ( "fmt" "html/template" "strings" "sysware.com/ivideo/common" ) func getPageStartIndex(pageNo int) int { return pageNo*10 + 1 } func getPageEndIndex(pageNo int) int { return (pageNo + 1) * 10 } func getPageHref(title string, pageNo int, href string) string { if pageNo < 0 { return fmt.Sprintf("<a href=\"#\">%s</a>", title) } else { return fmt.Sprintf("<a href=\"%s?pageNo=%d\">%s</a>", href, pageNo, title) } } func getPageLiPageHref(liClass string, title string, pageNo int, href string) string { if pageNo < 0 { return fmt.Sprintf("<li class=\"%s disabled \">%s</li>", liClass, getPageHref(title, pageNo, href)) } else { return fmt.Sprintf("<li class=\"%s\">%s</li>", liClass, getPageHref(title, pageNo, href)) } } func getPageIndexHtml(pageNo int, totalPage int, href string) template.HTML { pageHtml := []string{} if pageNo < 1 { pageHtml = append(pageHtml, getPageLiPageHref("prev disabled", common.PAGE_TITLE_PREV, -1, "#")) } else { pageHtml = append(pageHtml, getPageLiPageHref("prev", common.PAGE_TITLE_PREV, pageNo-1, href)) } pageHtml = append(pageHtml, getPageLiPageHref("active", fmt.Sprintf("%d", pageNo+1), pageNo, href)) endPageNo := totalPage - 1 if (pageNo + 5) < (totalPage - 1) { endPageNo = pageNo + 5 } for i := pageNo + 1; i <= endPageNo; i++ { pageHtml = append(pageHtml, getPageLiPageHref("", fmt.Sprintf("%d", i), i, href)) } if endPageNo < totalPage-1 { pageHtml = append(pageHtml, getPageLiPageHref("next", common.PAGE_TITLE_NEXT, endPageNo, href)) } else { pageHtml = append(pageHtml, getPageLiPageHref("next", common.PAGE_TITLE_NEXT, -1, "#")) } return template.HTML(strings.Join(pageHtml, "\n")) } func getServerType(serverType int) string { switch serverType { case common.SERVER_TYPE_SIP: return "SIP服务器" case common.SERVER_TYPE_CMS: return "中心管理服务器" case common.SERVER_TYPE_MTS: return "转发服务器" case common.SERVER_TYPE_APS: return "报警服务器" default: return "" } }
package net.minecraft.block; import net.minecraft.creativetab.CreativeTabs; import net.minecraft.init.Blocks; import net.minecraft.item.Item; import net.minecraft.world.IBlockAccess; import net.minecraft.world.World; import java.util.*; public class BlockRedstoneTorch extends BlockTorch { private boolean field_150113_a; private static Map field_150112_b = new HashMap(); private boolean func_150111_a(World p_150111_1_, int p_150111_2_, int p_150111_3_, int p_150111_4_, boolean p_150111_5_) { if (!field_150112_b.containsKey(p_150111_1_)) { field_150112_b.put(p_150111_1_, new ArrayList()); } List var6 = (List)field_150112_b.get(p_150111_1_); if (p_150111_5_) { var6.add(new BlockRedstoneTorch.Toggle(p_150111_2_, p_150111_3_, p_150111_4_, p_150111_1_.getTotalWorldTime())); } int var7 = 0; for (int var8 = 0; var8 < var6.size(); ++var8) { BlockRedstoneTorch.Toggle var9 = (BlockRedstoneTorch.Toggle)var6.get(var8); if (var9.field_150847_a == p_150111_2_ && var9.field_150845_b == p_150111_3_ && var9.field_150846_c == p_150111_4_) { ++var7; if (var7 >= 8) { return true; } } } return false; } protected BlockRedstoneTorch(boolean p_i45423_1_) { this.field_150113_a = p_i45423_1_; this.setTickRandomly(true); this.setCreativeTab((CreativeTabs)null); } public int func_149738_a(World p_149738_1_) { return 2; } public void onBlockAdded(World p_149726_1_, int p_149726_2_, int p_149726_3_, int p_149726_4_) { if (p_149726_1_.getBlockMetadata(p_149726_2_, p_149726_3_, p_149726_4_) == 0) { super.onBlockAdded(p_149726_1_, p_149726_2_, p_149726_3_, p_149726_4_); } if (this.field_150113_a) { p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_, p_149726_3_ - 1, p_149726_4_, this); p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_, p_149726_3_ + 1, p_149726_4_, this); p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_ - 1, p_149726_3_, p_149726_4_, this); p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_ + 1, p_149726_3_, p_149726_4_, this); p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_, p_149726_3_, p_149726_4_ - 1, this); p_149726_1_.notifyBlocksOfNeighborChange(p_149726_2_, p_149726_3_, p_149726_4_ + 1, this); } } public void breakBlock(World p_149749_1_, int p_149749_2_, int p_149749_3_, int p_149749_4_, Block p_149749_5_, int p_149749_6_) { if (this.field_150113_a) { p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_, p_149749_3_ - 1, p_149749_4_, this); p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_, p_149749_3_ + 1, p_149749_4_, this); p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_ - 1, p_149749_3_, p_149749_4_, this); p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_ + 1, p_149749_3_, p_149749_4_, this); p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_, p_149749_3_, p_149749_4_ - 1, this); p_149749_1_.notifyBlocksOfNeighborChange(p_149749_2_, p_149749_3_, p_149749_4_ + 1, this); } } public int isProvidingWeakPower(IBlockAccess p_149709_1_, int p_149709_2_, int p_149709_3_, int p_149709_4_, int p_149709_5_) { if (!this.field_150113_a) { return 0; } else { int var6 = p_149709_1_.getBlockMetadata(p_149709_2_, p_149709_3_, p_149709_4_); return var6 == 5 && p_149709_5_ == 1 ? 0 : (var6 == 3 && p_149709_5_ == 3 ? 0 : (var6 == 4 && p_149709_5_ == 2 ? 0 : (var6 == 1 && p_149709_5_ == 5 ? 0 : (var6 == 2 && p_149709_5_ == 4 ? 
0 : 15)))); } } private boolean func_150110_m(World p_150110_1_, int p_150110_2_, int p_150110_3_, int p_150110_4_) { int var5 = p_150110_1_.getBlockMetadata(p_150110_2_, p_150110_3_, p_150110_4_); return var5 == 5 && p_150110_1_.getIndirectPowerOutput(p_150110_2_, p_150110_3_ - 1, p_150110_4_, 0) ? true : (var5 == 3 && p_150110_1_.getIndirectPowerOutput(p_150110_2_, p_150110_3_, p_150110_4_ - 1, 2) ? true : (var5 == 4 && p_150110_1_.getIndirectPowerOutput(p_150110_2_, p_150110_3_, p_150110_4_ + 1, 3) ? true : (var5 == 1 && p_150110_1_.getIndirectPowerOutput(p_150110_2_ - 1, p_150110_3_, p_150110_4_, 4) ? true : var5 == 2 && p_150110_1_.getIndirectPowerOutput(p_150110_2_ + 1, p_150110_3_, p_150110_4_, 5)))); } /** * Ticks the block if it's been scheduled */ public void updateTick(World p_149674_1_, int p_149674_2_, int p_149674_3_, int p_149674_4_, Random p_149674_5_) { boolean var6 = this.func_150110_m(p_149674_1_, p_149674_2_, p_149674_3_, p_149674_4_); List var7 = (List)field_150112_b.get(p_149674_1_); while (var7 != null && !var7.isEmpty() && p_149674_1_.getTotalWorldTime() - ((BlockRedstoneTorch.Toggle)var7.get(0)).field_150844_d > 60L) { var7.remove(0); } if (this.field_150113_a) { if (var6) { p_149674_1_.setBlock(p_149674_2_, p_149674_3_, p_149674_4_, Blocks.unlit_redstone_torch, p_149674_1_.getBlockMetadata(p_149674_2_, p_149674_3_, p_149674_4_), 3); if (this.func_150111_a(p_149674_1_, p_149674_2_, p_149674_3_, p_149674_4_, true)) { p_149674_1_.playSoundEffect((double)((float)p_149674_2_ + 0.5F), (double)((float)p_149674_3_ + 0.5F), (double)((float)p_149674_4_ + 0.5F), "random.fizz", 0.5F, 2.6F + (p_149674_1_.rand.nextFloat() - p_149674_1_.rand.nextFloat()) * 0.8F); for (int var8 = 0; var8 < 5; ++var8) { double var9 = (double)p_149674_2_ + p_149674_5_.nextDouble() * 0.6D + 0.2D; double var11 = (double)p_149674_3_ + p_149674_5_.nextDouble() * 0.6D + 0.2D; double var13 = (double)p_149674_4_ + p_149674_5_.nextDouble() * 0.6D + 0.2D; p_149674_1_.spawnParticle("smoke", var9, var11, var13, 0.0D, 0.0D, 0.0D); } } } } else if (!var6 && !this.func_150111_a(p_149674_1_, p_149674_2_, p_149674_3_, p_149674_4_, false)) { p_149674_1_.setBlock(p_149674_2_, p_149674_3_, p_149674_4_, Blocks.redstone_torch, p_149674_1_.getBlockMetadata(p_149674_2_, p_149674_3_, p_149674_4_), 3); } } public void onNeighborBlockChange(World p_149695_1_, int p_149695_2_, int p_149695_3_, int p_149695_4_, Block p_149695_5_) { if (!this.func_150108_b(p_149695_1_, p_149695_2_, p_149695_3_, p_149695_4_, p_149695_5_)) { boolean var6 = this.func_150110_m(p_149695_1_, p_149695_2_, p_149695_3_, p_149695_4_); if (this.field_150113_a && var6 || !this.field_150113_a && !var6) { p_149695_1_.scheduleBlockUpdate(p_149695_2_, p_149695_3_, p_149695_4_, this, this.func_149738_a(p_149695_1_)); } } } public int isProvidingStrongPower(IBlockAccess p_149748_1_, int p_149748_2_, int p_149748_3_, int p_149748_4_, int p_149748_5_) { return p_149748_5_ == 0 ? this.isProvidingWeakPower(p_149748_1_, p_149748_2_, p_149748_3_, p_149748_4_, p_149748_5_) : 0; } public Item getItemDropped(int p_149650_1_, Random p_149650_2_, int p_149650_3_) { return Item.getItemFromBlock(Blocks.redstone_torch); } /** * Can this block provide power. Only wire currently seems to have this change based on its state. 
*/ public boolean canProvidePower() { return true; } /** * A randomly called display update to be able to addSmelting particles or other items for display */ public void randomDisplayTick(World p_149734_1_, int p_149734_2_, int p_149734_3_, int p_149734_4_, Random p_149734_5_) { if (this.field_150113_a) { int var6 = p_149734_1_.getBlockMetadata(p_149734_2_, p_149734_3_, p_149734_4_); double var7 = (double)((float)p_149734_2_ + 0.5F) + (double)(p_149734_5_.nextFloat() - 0.5F) * 0.2D; double var9 = (double)((float)p_149734_3_ + 0.7F) + (double)(p_149734_5_.nextFloat() - 0.5F) * 0.2D; double var11 = (double)((float)p_149734_4_ + 0.5F) + (double)(p_149734_5_.nextFloat() - 0.5F) * 0.2D; double var13 = 0.2199999988079071D; double var15 = 0.27000001072883606D; if (var6 == 1) { p_149734_1_.spawnParticle("reddust", var7 - var15, var9 + var13, var11, 0.0D, 0.0D, 0.0D); } else if (var6 == 2) { p_149734_1_.spawnParticle("reddust", var7 + var15, var9 + var13, var11, 0.0D, 0.0D, 0.0D); } else if (var6 == 3) { p_149734_1_.spawnParticle("reddust", var7, var9 + var13, var11 - var15, 0.0D, 0.0D, 0.0D); } else if (var6 == 4) { p_149734_1_.spawnParticle("reddust", var7, var9 + var13, var11 + var15, 0.0D, 0.0D, 0.0D); } else { p_149734_1_.spawnParticle("reddust", var7, var9, var11, 0.0D, 0.0D, 0.0D); } } } /** * Gets an item for the block being called on. Args: world, x, y, z */ public Item getItem(World p_149694_1_, int p_149694_2_, int p_149694_3_, int p_149694_4_) { return Item.getItemFromBlock(Blocks.redstone_torch); } public boolean func_149667_c(Block p_149667_1_) { return p_149667_1_ == Blocks.unlit_redstone_torch || p_149667_1_ == Blocks.redstone_torch; } static class Toggle { int field_150847_a; int field_150845_b; int field_150846_c; long field_150844_d; public Toggle(int p_i45422_1_, int p_i45422_2_, int p_i45422_3_, long p_i45422_4_) { this.field_150847_a = p_i45422_1_; this.field_150845_b = p_i45422_2_; this.field_150846_c = p_i45422_3_; this.field_150844_d = p_i45422_4_; } } }
/** * The <i>rui_dlmodGetSymbol()</i> function shall locate symbol * information for target symbol in specified module. This will be the * mechanism for locating a target function within a module. The target * library module is specified by the ID/handle returned from the "open" * operation. * * @param dlmodId is the identifier of the target module. * @param symbol is a string containing name of the symbol for which * to perform the search/lookup. * @param value is a void pointer for returning the associated value of * the target symbol. * @return FALSE if fails to get symbol, otherwise TRUE is returned. */ gboolean rui_dlmodGetSymbol(rui_Dlmod dlmodId, const char *symbol, void **value) { char *checkRet = NULL; (void) dlerror(); *value = dlsym(dlmodId, symbol); checkRet = dlerror(); if (NULL != checkRet) { GST_ERROR("Failed to find symbol \"%s\" [dll_handle %p]. Reason: %s", symbol, dlmodId, checkRet); return FALSE; } GST_DEBUG("Found symbol \"%s\".", symbol); return TRUE; }
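/* The doc comment above fully specifies the lookup contract. Since the
 * matching open/close entry points are not shown in this excerpt, the
 * sketch below obtains the handle straight from dlopen() and assumes
 * rui_Dlmod is (or wraps) that raw handle; the library and symbol names
 * are placeholders, not part of the original module. */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*init_fn_t)(void);

int call_module_init(void)
{
    void *handle = dlopen("libexample_module.so", RTLD_NOW);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }

    void *sym = NULL;
    if (!rui_dlmodGetSymbol(handle, "module_init", &sym)) {
        dlclose(handle);
        return -1;   /* lookup failure was already logged by the wrapper */
    }

    init_fn_t init = (init_fn_t)sym;  /* cast the resolved address to its type */
    int rc = init();

    dlclose(handle);
    return rc;
}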
module AOC.BSTree.Strict where import Prelude hiding (elem) import qualified Data.List as DL import Data.Foldable (toList) data BSTree a = Empty | Branch !(BSTree a) !a !(BSTree a) deriving (Show, Eq) instance Foldable BSTree where foldMap f Empty = mempty foldMap f (Branch l o r) = foldMap f l <> f o <> foldMap f r insert :: Ord a => a -> BSTree a -> BSTree a insert a Empty = Branch Empty a Empty insert a (Branch l o r) | a <= o = Branch (insert a l) o r | otherwise = Branch l o (insert a r) elem :: Ord a => a -> BSTree a -> Bool elem _ Empty = False elem a (Branch l o r) | a == o = True | a < o = elem a l | otherwise = elem a r fromList :: Ord a => [a] -> BSTree a fromList = DL.foldl' (flip insert) Empty merge :: Ord a => BSTree a -> BSTree a -> BSTree a merge Empty b = b merge a Empty = a merge a@Branch {} b@Branch {} = DL.foldl' (flip insert) a (toList b) btTails :: Ord a => BSTree a -> [(a, BSTree a)] btTails Empty = [] btTails (Branch l o r) = (o, merge l r) : btTails (merge l r)
-- source: setrar/ghc
-- LML original: <NAME>, 1990 -- Haskell translation: <NAME>, May 1991 module Geomfuns( mapx, mapy, col, row, lrinvert, antirotate, place, rotatecw, tbinvert, tile, t4, xymax) where import Mgrfuns import Drawfuns --CR strange instructions here! -- xymax should be in layout.m, and the functions like t4 in -- a module specific to the program that #includes "layout.t" swapxy :: [Int] -> [Int] --xs [x1,y1,x2,y2] = [x1,x2] --ys [x1,y1,x2,y2] = [y1,y2] swapxy [x1,y1,x2,y2] = [y1,x1,y2,x2] mapx, mapy :: (Int -> Int) -> [Int] -> [Int] mapx f [x1,y1,x2,y2] = [f x1, y1, f x2, y2] mapy f [x1,y1,x2,y2] = [x1, f y1, x2, f y2] toright, down :: Int -> [[Int]] -> [[Int]] toright = map . mapx . (+) down = map . mapy . (+) origin :: Int -> Int -> [[Int]] -> [[Int]] origin x y = (toright x) . (down y) -- place x y takes a print and outputs a string that -- is interpreted by MGR with the result that -- the print is drawn at x y place :: Int -> Int -> [[Int]] -> [Char] place x y = drawlines . (origin x y) -- 72 is the size of the square in the big tile xymax :: Int xymax = 72 -- lrinvert etc still need the size of the square in which to do it -- so have not yet reverted to their original generality lrinvert, tbinvert, rotatecw, antirotate :: Int -> [[Int]] -> [[Int]] lrinvert m = map (mapx (\x -> m-x)) tbinvert m = map (mapy (\x -> m-x)) rotatecw m = map (swapxy . (mapy (\x -> m-x))) antirotate m = map (swapxy . (mapx (\x -> m-x))) --CR this doesn't really belong here - redefinition (cf postscript)! -- a function specifically for the potatoprinting program -- ss is the square size t4 :: [[[Int]]] -> [[Int]] t4 [c1,c2,c3,c4] = c1 ++ toright ss c2 ++ down ss c3 ++ (down ss . toright ss) c4 where ss = xymax -- a tile function specifically for use with t4 --CR ditto tile :: Int -> Int -> Int -> Int -> [[Int]] -> [Char] tile _ _ _ 0 coords = "" tile _ _ 0 _ coords = "" tile x y c r coords = col x y r coords ++ row (x + 2*xymax) y (c-1) coords ++ tile (x + 2*xymax) (y + 2*xymax) (c-1)(r-1) coords col, row :: Int -> Int -> Int -> [[Int]] -> [Char] col x y 0 coords = "" col x y n coords = place x y coords ++ col x y' (n-1) coords where y' = y + (2 * xymax) row x y 0 coords = "" row x y n coords = place x y coords ++ row x' y (n-1) coords where x' = x + (2 * xymax)
#include <random>

// Global Mersenne Twister engine, seeded once from the system random device.
// (Assumed here: the original snippet referenced an external `mersenne`
// object without defining it.)
std::mt19937 mersenne{ std::random_device{}() };

// Return a random integer between limits (inclusive).
// Uses the Mersenne Twister algorithm via std::uniform_int_distribution.
int genRandInt(int min, int max)
{
    std::uniform_int_distribution<int> dist{ min, max };
    return dist(mersenne);
}
import { common } from '@app/helpers' import { Address, EthValue, Hash, Hex, HexNumber, HexTime, Tx } from '@app/models' import { Block as BlockLayout, BlockStats } from 'ethvm-common' import bn from 'bignumber.js' export class Block { public readonly id: string private readonly block: BlockLayout private cache: any constructor(block: BlockLayout) { this.cache = {} this.block = block this.id = this.block.hash } public getId(): string { return this.id } // only tx hashes are stored here public setTransactions(txs: string[]): void { this.block.transactions = txs } public addUncle(uncle: Block): void { if (!this.block.uncles) { this.block.uncles = [] } //this.block.uncles.push(uncle) } public getIsUncle(): boolean { if (!this.cache.isUncle) { if (this.block.uncles.length == 0) { return (this.cache.isUncle = false) } return (this.cache.isUncle = true) } return this.cache.isUncle } public getUncles(): string[] { return this.block.uncles } // public getUncleHashes(): Hash[] { // return this.block.uncleHashes.map(_uncle => { // return common.Hash(_uncle) // }) // } // public setUncleHashes(hashes: Hash[]): void { // this.block.uncleHashes = hashes // } public getHash(): string { return '0x' + this.block.hash } public getNumber(): number { return this.block.number } public getTransactionCount(): number { return this.block.transactions.length } public getTotalBlockReward(): EthValue { if (!this.cache.totalBlockReward) { let total = 0 for (let address in this.block.header.rewards) { total = this.block.header.rewards[address] + total } this.cache.totalBlockReward = total } return this.cache.totalBlockReward } public getParentHash(): string { if (!this.cache.parentHash) { this.cache.parentHash = '0x' + this.block.header.parentHash } return this.cache.parentHash } public getNonce(): Hex { if (!this.cache.nonce) { this.cache.nonce = this.block.header.nonce } return this.cache.nonce } // public getMixHash(): Hash { // if (!this.cache.mixHash) { // this.cache.mixHash = common.Hash(this.block.mixHash) // } // return this.cache.mixHash // } public getSha3Uncles(): string { if (!this.cache.sha3Uncles) { this.cache.sha3Uncles = '0x' + this.block.header.unclesHash } return this.cache.sha3Uncles } public getLogsBloom(): Hex { if (!this.cache.logsBloom) { this.cache.logsBloom = this.block.header.logsBloom } return this.cache.logsBloom } public getStateRoot(): Hash { if (!this.cache.stateRoot) { this.cache.stateRoot = this.block.header.stateRoot } return this.cache.stateRoot } public getMiner(): string { if (!this.cache.miner) { this.cache.miner = '0x' + this.block.header.miner } return this.cache.miner } public getMinerBalance(): EthValue { if (!this.cache.minerBalance) { this.cache.minerBalance = common.EthValue(this.block.header.rewards[this.block.header.miner]) } return this.cache.minerBalance } public getDifficulty(): number { if (!this.cache.difficulty) { this.cache.difficulty = this.block.header.difficulty } return this.cache.difficulty } public getTotalDifficulty(): number { if (!this.cache.totalDifficulty) { this.cache.totalDifficulty = this.block.header.totalDifficulty } return this.cache.totalDifficulty } public getExtraData(): Hex { if (!this.cache.extraData) { this.cache.extraData = common.Hex(this.block.header.extraData) } return this.cache.extraData } // public getSize(): HexNumber { // if (!this.cache.size) { // this.cache.size = common.HexNumber(this.block.header.)
// } // return this.cache.size // } public getGasLimit(): number { if (!this.cache.gasLimit) { this.cache.gasLimit = this.block.header.gasLimit } return this.cache.gasLimit } public getGasUsed(): number { if (!this.cache.gasUsed) { this.cache.gasUsed = this.block.header.gasUsed } return this.cache.gasUsed } public getTimestamp(): Date { if (!this.cache.timestamp) { this.cache.timestamp = this.block.header.timestamp } return new Date(this.cache.timestamp * 1000) } public getTransactionsRoot(): Hash { if (!this.cache.transactionsRoot) { this.cache.transactionsRoot = common.Hash(this.block.header.transactionsRoot) } return this.cache.transactionsRoot } public getReceiptsRoot(): Hash { if (!this.cache.receiptsRoot) { this.cache.receiptsRoot = common.Hash(this.block.header.receiptsRoot) } return this.cache.receiptsRoot } public getTransactions(): Tx[] { return [] } public getTransactionHashes(): string[] { if (!this.cache.transactions) { this.cache.transactions = this.block.transactions } return this.cache.transactions } public getTxFees(): number { if (!this.cache.txFees) { this.cache.txFees = this.block.stats.totalTxsFees } return this.cache.txFees } public getBlockReward(): number { const rewards = this.block.header.rewards if (!this.cache.blockReward) { this.cache.blockReward = rewards[this.block.header.miner] } return this.cache.blockReward } public getUncleReward(): number { if (!this.cache.uncleReward) { let total = 0 if (this.block.header.rewards[this.block.header.unclesHash]) { return this.cache.uncleReward = total } for (let address in this.block.header.rewards) { if (address === this.block.header.miner) continue total = this.block.header.rewards[address] + total } this.cache.uncleReward = total } return this.cache.uncleReward } public getStats(): BlockStats { return this.block.stats } }
/** * {@link ConnectionStringBuilder} can be used to construct a connection string which can establish communication with ServiceBus entities. * It can also be used to perform basic validation on an existing connection string. * <p> Sample Code: * <pre>{@code * ConnectionStringBuilder connectionStringBuilder = new ConnectionStringBuilder( * "ServiceBusNamespaceName", * "ServiceBusEntityName", // eventHubName or QueueName or TopicName * "SharedAccessSignatureKeyName", * "SharedAccessSignatureKey"); * * String connectionString = connectionStringBuilder.toString(); * }</pre> * <p> * A connection string is basically a string consisted of key-value pair separated by ";". * Basic format is {{@literal <}key{@literal >}={@literal <}value{@literal >}[;{@literal <}key{@literal >}={@literal <}value{@literal >}]} where supported key name are as follow: * <ul> * <li> Endpoint - the URL that contains the servicebus namespace * <li> EntityPath - the path to the service bus entity (queue/topic/eventhub/subscription/consumergroup/partition) * <li> SharedAccessKeyName - the key name to the corresponding shared access policy rule for the namespace, or entity. * <li> SharedAccessKey - the key for the corresponding shared access policy rule of the namespace or entity. * </ul> */ public class ConnectionStringBuilder { final static String endpointFormat = "amqps://%s.servicebus.windows.net"; final static String endpointRawFormat = "amqps://%s"; final static String HostnameConfigName = "Hostname"; final static String EndpointConfigName = "Endpoint"; final static String SharedAccessKeyNameConfigName = "SharedAccessKeyName"; final static String SharedAccessKeyConfigName = "SharedAccessKey"; final static String SharedAccessSignatureConfigName = "SharedAccessSignature"; final static String EntityPathConfigName = "EntityPath"; final static String OperationTimeoutConfigName = "OperationTimeout"; final static String RetryPolicyConfigName = "RetryPolicy"; final static String KeyValueSeparator = "="; final static String KeyValuePairDelimiter = ";"; private static final String AllKeyEnumerateRegex = "(" + HostnameConfigName + "|" + EndpointConfigName + "|" + SharedAccessKeyNameConfigName + "|" + SharedAccessKeyConfigName + "|" + SharedAccessSignatureConfigName + "|" + EntityPathConfigName + "|" + OperationTimeoutConfigName + "|" + RetryPolicyConfigName + ")"; private static final String KeysWithDelimitersRegex = KeyValuePairDelimiter + AllKeyEnumerateRegex + KeyValueSeparator; private URI endpoint; private String sharedAccessKeyName; private String sharedAccessKey; private String entityPath; private String sharedAccessSignature; private Duration operationTimeout; private RetryPolicy retryPolicy; private ConnectionStringBuilder( final URI endpointAddress, final String entityPath, final String sharedAccessKeyName, final String sharedAccessKey, final Duration operationTimeout, final RetryPolicy retryPolicy) { this.endpoint = endpointAddress; this.sharedAccessKey = sharedAccessKey; this.sharedAccessKeyName = sharedAccessKeyName; this.operationTimeout = operationTimeout; this.retryPolicy = retryPolicy; this.entityPath = entityPath; } private ConnectionStringBuilder( final URI endpointAddress, final String entityPath, final String sharedAccessSignature, final Duration operationTimeout, final RetryPolicy retryPolicy) { this.endpoint = endpointAddress; this.sharedAccessSignature = sharedAccessSignature; this.operationTimeout = operationTimeout; this.retryPolicy = retryPolicy; this.entityPath = entityPath; } private 
ConnectionStringBuilder( final String namespaceName, final String entityPath, final String sharedAccessKeyName, final String sharedAccessKey, final Duration operationTimeout, final RetryPolicy retryPolicy) { try { this.endpoint = new URI(String.format(Locale.US, endpointFormat, namespaceName)); } catch (URISyntaxException exception) { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Invalid namespace name: %s", namespaceName), exception); } this.sharedAccessKey = sharedAccessKey; this.sharedAccessKeyName = sharedAccessKeyName; this.operationTimeout = operationTimeout; this.retryPolicy = retryPolicy; this.entityPath = entityPath; } /** * Build a connection string consumable by {@link com.microsoft.azure.eventhubs.EventHubClient#createFromConnectionString(String)} * * @param namespaceName Namespace name (dns suffix - ex: .servicebus.windows.net is not required) * @param entityPath Entity path. For eventHubs case specify - eventHub name. * @param sharedAccessKeyName Shared Access Key name * @param sharedAccessKey Shared Access Key */ public ConnectionStringBuilder( final String namespaceName, final String entityPath, final String sharedAccessKeyName, final String sharedAccessKey) { this(namespaceName, entityPath, sharedAccessKeyName, sharedAccessKey, MessagingFactory.DefaultOperationTimeout, RetryPolicy.getDefault()); } /** * Build a connection string consumable by {@link com.microsoft.azure.eventhubs.EventHubClient#createFromConnectionString(String)} * * @param endpointAddress namespace level endpoint. This needs to be in the format of scheme://fullyQualifiedServiceBusNamespaceEndpointName * @param entityPath Entity path. For eventHubs case specify - eventHub name. * @param sharedAccessKeyName Shared Access Key name * @param sharedAccessKey Shared Access Key */ public ConnectionStringBuilder( final URI endpointAddress, final String entityPath, final String sharedAccessKeyName, final String sharedAccessKey) { this(endpointAddress, entityPath, sharedAccessKeyName, sharedAccessKey, MessagingFactory.DefaultOperationTimeout, RetryPolicy.getDefault()); } /** * Build a connection string consumable by {@link com.microsoft.azure.eventhubs.EventHubClient#createFromConnectionString(String)} * * @param endpointAddress namespace level endpoint. This needs to be in the format of scheme://fullyQualifiedServiceBusNamespaceEndpointName * @param entityPath Entity path. For eventHubs case specify - eventHub name. 
* @param sharedAccessSignature Shared Access Signature */ public ConnectionStringBuilder( final URI endpointAddress, final String entityPath, final String sharedAccessSignature) { this(endpointAddress, entityPath, sharedAccessSignature, MessagingFactory.DefaultOperationTimeout, RetryPolicy.getDefault()); } /** * ConnectionString format: * Endpoint=sb://namespace_DNS_Name;EntityPath=EVENT_HUB_NAME;SharedAccessKeyName=SHARED_ACCESS_KEY_NAME;SharedAccessKey=SHARED_ACCESS_KEY * * @param connectionString ServiceBus ConnectionString * @throws IllegalConnectionStringFormatException when the format of the ConnectionString is not valid */ public ConnectionStringBuilder(String connectionString) { this.parseConnectionString(connectionString); } /** * Get the endpoint which can be used to connect to the ServiceBus Namespace * * @return Endpoint */ public URI getEndpoint() { return this.endpoint; } /** * Get the shared access policy key value from the connection string * * @return Shared Access Signature key */ public String getSasKey() { return this.sharedAccessKey; } /** * Get the shared access policy owner name from the connection string * * @return Shared Access Signature key name. */ public String getSasKeyName() { return this.sharedAccessKeyName; } /** * Get the shared access signature (also referred as SAS Token) from the connection string * * @return Shared Access Signature */ public String getSharedAccessSignature() { return this.sharedAccessSignature; } /** * Get the entity path value from the connection string * * @return Entity Path */ public String getEntityPath() { return this.entityPath; } /** * OperationTimeout is applied in erroneous situations to notify the caller about the relevant {@link ServiceBusException} * * @return operationTimeout */ public Duration getOperationTimeout() { return (this.operationTimeout == null ? MessagingFactory.DefaultOperationTimeout : this.operationTimeout); } /** * Set the OperationTimeout value in the Connection String. This value will be used by all operations which uses this {@link ConnectionStringBuilder}, unless explicitly over-ridden. * <p>ConnectionString with operationTimeout is not inter-operable between java and clients in other platforms. * * @param operationTimeout Operation Timeout */ public void setOperationTimeout(final Duration operationTimeout) { this.operationTimeout = operationTimeout; } /** * Get the retry policy instance that was created as part of this builder's creation. * * @return RetryPolicy applied for any operation performed using this ConnectionString */ @Deprecated public RetryPolicy getRetryPolicy() { return (this.retryPolicy == null ? RetryPolicy.getDefault() : this.retryPolicy); } /** * Set the retry policy. * <p>RetryPolicy is not inter-operable with ServiceBus clients in other platforms. 
* * @param retryPolicy RetryPolicy applied for any operation performed using this ConnectionString */ @Deprecated public void setRetryPolicy(final RetryPolicy retryPolicy) { this.retryPolicy = retryPolicy; } /** * Returns an inter-operable connection string that can be used to connect to ServiceBus Namespace * * @return connection string */ @Override public String toString() { final StringBuilder connectionStringBuilder = new StringBuilder(); if (this.endpoint != null) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", EndpointConfigName, KeyValueSeparator, this.endpoint.toString(), KeyValuePairDelimiter)); } if (!StringUtil.isNullOrWhiteSpace(this.entityPath)) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", EntityPathConfigName, KeyValueSeparator, this.entityPath, KeyValuePairDelimiter)); } if (!StringUtil.isNullOrWhiteSpace(this.sharedAccessKeyName)) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", SharedAccessKeyNameConfigName, KeyValueSeparator, this.sharedAccessKeyName, KeyValuePairDelimiter)); } if (!StringUtil.isNullOrWhiteSpace(this.sharedAccessKey)) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", SharedAccessKeyConfigName, KeyValueSeparator, this.sharedAccessKey, KeyValuePairDelimiter)); } if (!StringUtil.isNullOrWhiteSpace(this.sharedAccessSignature)) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", SharedAccessSignatureConfigName, KeyValueSeparator, this.sharedAccessSignature, KeyValuePairDelimiter)); } if (this.operationTimeout != null) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", OperationTimeoutConfigName, KeyValueSeparator, this.operationTimeout.toString(), KeyValuePairDelimiter)); } if (this.retryPolicy != null) { connectionStringBuilder.append(String.format(Locale.US, "%s%s%s%s", RetryPolicyConfigName, KeyValueSeparator, this.retryPolicy.toString(), KeyValuePairDelimiter)); } connectionStringBuilder.deleteCharAt(connectionStringBuilder.length() - 1); return connectionStringBuilder.toString(); } private void parseConnectionString(final String connectionString) { if (StringUtil.isNullOrWhiteSpace(connectionString)) { throw new IllegalConnectionStringFormatException(String.format("connectionString cannot be empty")); } final String connection = KeyValuePairDelimiter + connectionString; final Pattern keyValuePattern = Pattern.compile(KeysWithDelimitersRegex, Pattern.CASE_INSENSITIVE); final String[] values = keyValuePattern.split(connection); final Matcher keys = keyValuePattern.matcher(connection); if (values == null || values.length <= 1 || keys.groupCount() == 0) { throw new IllegalConnectionStringFormatException("Connection String cannot be parsed."); } if (!StringUtil.isNullOrWhiteSpace((values[0]))) { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Cannot parse part of ConnectionString: %s", values[0])); } int valueIndex = 0; while (keys.find()) { valueIndex++; String key = keys.group(); key = key.substring(1, key.length() - 1); if (values.length < valueIndex + 1) { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Value for the connection string parameter name: %s, not found", key)); } if (key.equalsIgnoreCase(EndpointConfigName)) { if (this.endpoint != null) { // we have parsed the endpoint once, which means we have multiple config which is not allowed throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Multiple %s and/or %s detected. 
Make sure only one is defined", EndpointConfigName, HostnameConfigName)); } try { this.endpoint = new URI(values[valueIndex]); } catch (URISyntaxException exception) { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "%s should be in format scheme://fullyQualifiedServiceBusNamespaceEndpointName", EndpointConfigName), exception); } } else if (key.equalsIgnoreCase(HostnameConfigName)) { if (this.endpoint != null) { // we have parsed the endpoint once, which means we have multiple config which is not allowed throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Multiple %s and/or %s detected. Make sure only one is defined", EndpointConfigName, HostnameConfigName)); } try { this.endpoint = new URI(String.format(Locale.US, endpointRawFormat, values[valueIndex])); } catch (URISyntaxException exception) { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "%s should be a fully quantified host name address", HostnameConfigName), exception); } } else if (key.equalsIgnoreCase(SharedAccessKeyNameConfigName)) { this.sharedAccessKeyName = values[valueIndex]; } else if (key.equalsIgnoreCase(SharedAccessKeyConfigName)) { this.sharedAccessKey = values[valueIndex]; } else if (key.equalsIgnoreCase(SharedAccessSignatureConfigName)) { this.sharedAccessSignature = values[valueIndex]; } else if (key.equalsIgnoreCase(EntityPathConfigName)) { this.entityPath = values[valueIndex]; } else if (key.equalsIgnoreCase(OperationTimeoutConfigName)) { try { this.operationTimeout = Duration.parse(values[valueIndex]); } catch (DateTimeParseException exception) { throw new IllegalConnectionStringFormatException("Invalid value specified for property 'Duration' in the ConnectionString.", exception); } } else if (key.equalsIgnoreCase(RetryPolicyConfigName)) { this.retryPolicy = values[valueIndex].equals(ClientConstants.DEFAULT_RETRY) ? RetryPolicy.getDefault() : (values[valueIndex].equals(ClientConstants.NO_RETRY) ? RetryPolicy.getNoRetry() : null); if (this.retryPolicy == null) throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Connection string parameter '%s'='%s' is not recognized", RetryPolicyConfigName, values[valueIndex])); } else { throw new IllegalConnectionStringFormatException( String.format(Locale.US, "Illegal connection string parameter name: %s", key)); } } } }
from flask import Flask,render_template from article import Article #Router app = Flask("hello_world") posts = [ Article('title', 'subtitle' , 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.', 'shashank'), Article('title', 'subtitle' , 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.', 'shashank'), Article('title', 'subtitle' , 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.', 'shashank'), Article('title', 'subtitle' , 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.', 'shashank') ] @app.route('/') def index(): return render_template('index.html') @app.route('/articles') def articles(): return render_template('articles.html', articles=posts) @app.route('/articles/<int:id>') def article(id): try: post = posts[id-1] return render_template('article.html',article=post) except IndexError: return render_template('404.html') if __name__ == "__main__": app.run(port=8000, debug = True)
package bolt import ( "fmt" "os" "sort" "unsafe" ) var lmbytes uint32 = 0 var smbytes uint32 = 0 var ombytes uint32 = 0 const pageHeaderSize = int(unsafe.Offsetof(((*page)(nil)).ptr)) const minKeysPerPage = 2 const branchPageElementSize = int(unsafe.Sizeof(branchPageElement{})) const leafPageElementSize = int(unsafe.Sizeof(leafPageElement{})) const ( branchPageFlag = 0x01 leafPageFlag = 0x02 metaPageFlag = 0x04 freelistPageFlag = 0x10 ) const ( bucketLeafFlag = 0x01 ) type pgid uint64 type page struct { id pgid flags uint16 count uint16 overflow uint32 ptr uintptr } // typ returns a human readable page type string used for debugging. func (p *page) typ() string { if (p.flags & branchPageFlag) != 0 { return "branch" } else if (p.flags & leafPageFlag) != 0 { return "leaf" } else if (p.flags & metaPageFlag) != 0 { return "meta" } else if (p.flags & freelistPageFlag) != 0 { return "freelist" } return fmt.Sprintf("unknown<%02x>", p.flags) } // meta returns a pointer to the metadata section of the page. func (p *page) meta() *meta { return (*meta)(unsafe.Pointer(&p.ptr)) } // leafPageElement retrieves the leaf node by index func (p *page) leafPageElement(index uint16) *leafPageElement { n := &((*[0x7FFFFFF]leafPageElement)(unsafe.Pointer(&p.ptr)))[index] return n } // leafPageElementLimit retrieves the leaf node by index func (p *page) leafPageElementLimit(index uint16, lbytes uint32) (*leafPageElementLimit) { n := &((*[0x7FFFFFF]leafPageElementLimit)(unsafe.Pointer(&p.ptr)))[index] lmbytes = lbytes return n } // leafPageElementOffset retrieves the leaf node by index func (p *page) leafPageElementOffset(index uint16, obytes uint32) (*leafPageElementOffset) { n := &((*[0x7FFFFFF]leafPageElementOffset)(unsafe.Pointer(&p.ptr)))[index] ombytes = obytes return n } // leafPageElementRange retrieves the leaf node by index func (p *page) leafPageElementRange(index uint16, sbytes uint32, lbytes uint32) (*leafPageElementRange) { n := &((*[0x7FFFFFF]leafPageElementRange)(unsafe.Pointer(&p.ptr)))[index] lmbytes = lbytes smbytes = sbytes return n } // leafPageElements retrieves a list of leaf nodes. func (p *page) leafPageElements() []leafPageElement { if p.count == 0 { return nil } return ((*[0x7FFFFFF]leafPageElement)(unsafe.Pointer(&p.ptr)))[:] } // branchPageElement retrieves the branch node by index func (p *page) branchPageElement(index uint16) *branchPageElement { return &((*[0x7FFFFFF]branchPageElement)(unsafe.Pointer(&p.ptr)))[index] } // branchPageElements retrieves a list of branch nodes. func (p *page) branchPageElements() []branchPageElement { if p.count == 0 { return nil } return ((*[0x7FFFFFF]branchPageElement)(unsafe.Pointer(&p.ptr)))[:] } // dump writes n bytes of the page to STDERR as hex output. func (p *page) hexdump(n int) { buf := (*[maxAllocSize]byte)(unsafe.Pointer(p))[:n] fmt.Fprintf(os.Stderr, "%x\n", buf) } type pages []*page func (s pages) Len() int { return len(s) } func (s pages) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s pages) Less(i, j int) bool { return s[i].id < s[j].id } // branchPageElement represents a node on a branch page. type branchPageElement struct { pos uint32 ksize uint32 pgid pgid } // key returns a byte slice of the node key. func (n *branchPageElement) key() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos]))[:n.ksize] } // leafPageElement represents a node on a leaf page.
type leafPageElement struct { flags uint32 pos uint32 ksize uint32 vsize uint32 } // leafPageElementLimit represents a node on a leaf page. type leafPageElementLimit struct { flags uint32 pos uint32 ksize uint32 vsize uint32 } // leafPageElementOffset represents a node on a leaf page. type leafPageElementOffset struct { flags uint32 pos uint32 ksize uint32 vsize uint32 } // leafPageElementRange represents a node on a leaf page. type leafPageElementRange struct { flags uint32 pos uint32 ksize uint32 vsize uint32 } // key returns a byte slice of the node key. func (n *leafPageElement) key() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos]))[:n.ksize:n.ksize] } // key returns a byte slice of the node key. func (n *leafPageElementLimit) key() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos]))[:n.ksize:n.ksize] } // key returns a byte slice of the node key. func (n *leafPageElementOffset) key() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos]))[:n.ksize:n.ksize] } // key returns a byte slice of the node key. func (n *leafPageElementRange) key() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos]))[:n.ksize:n.ksize] } // value returns a byte slice of the node value. func (n *leafPageElement) value() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos+n.ksize]))[:n.vsize:n.vsize] } // value returns a byte slice of the node value limited by bytes count. func (n *leafPageElementLimit) value() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) if lmbytes > n.vsize { lmbytes = n.vsize } return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos+n.ksize]))[:lmbytes:lmbytes] } // value returns a byte slice of the node value with skipped bytes count. func (n *leafPageElementOffset) value() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) if ombytes >= n.vsize { ombytes = 0 } embytes := n.vsize - ombytes return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos+n.ksize+ombytes]))[:embytes:embytes] } // value returns a byte slice of the node value limited by bytes range. func (n *leafPageElementRange) value() []byte { buf := (*[maxAllocSize]byte)(unsafe.Pointer(n)) if smbytes >= n.vsize { smbytes = 0 } if lmbytes > n.vsize { lmbytes = n.vsize } return (*[maxAllocSize]byte)(unsafe.Pointer(&buf[n.pos+n.ksize+smbytes]))[:lmbytes:lmbytes] } // PageInfo represents human readable information about a page. type PageInfo struct { ID int Type string Count int OverflowCount int } type pgids []pgid func (s pgids) Len() int { return len(s) } func (s pgids) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s pgids) Less(i, j int) bool { return s[i] < s[j] } // merge returns the sorted union of a and b. func (a pgids) merge(b pgids) pgids { // Return the opposite slice if one is nil. if len(a) == 0 { return b } if len(b) == 0 { return a } merged := make(pgids, len(a)+len(b)) mergepgids(merged, a, b) return merged } // mergepgids copies the sorted union of a and b into dst. // If dst is too small, it panics. func mergepgids(dst, a, b pgids) { if len(dst) < len(a)+len(b) { panic(fmt.Errorf("mergepgids bad len %d < %d + %d", len(dst), len(a), len(b))) } // Copy in the opposite slice if one is nil. 
if len(a) == 0 { copy(dst, b) return } if len(b) == 0 { copy(dst, a) return } // Merged will hold all elements from both lists. merged := dst[:0] // Assign lead to the slice with a lower starting value, follow to the higher value. lead, follow := a, b if b[0] < a[0] { lead, follow = b, a } // Continue while there are elements in the lead. for len(lead) > 0 { // Merge largest prefix of lead that is ahead of follow[0]. n := sort.Search(len(lead), func(i int) bool { return lead[i] > follow[0] }) merged = append(merged, lead[:n]...) if n >= len(lead) { break } // Swap lead and follow. lead, follow = follow, lead[n:] } // Append what's left in follow. _ = append(merged, follow...) }
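// The merge/mergepgids pair above performs a galloping merge (via
// sort.Search) of two sorted pgid slices. The standalone sketch below
// illustrates the same contract (sorted inputs in, sorted union out,
// duplicates kept) using plain uint64 in place of the package's pgid type;
// it is an illustration, not the library's implementation.
package main

import "fmt"

func mergeSorted(a, b []uint64) []uint64 {
	out := make([]uint64, 0, len(a)+len(b))
	for len(a) > 0 && len(b) > 0 {
		if a[0] <= b[0] {
			out = append(out, a[0])
			a = a[1:]
		} else {
			out = append(out, b[0])
			b = b[1:]
		}
	}
	out = append(out, a...)
	return append(out, b...)
}

func main() {
	fmt.Println(mergeSorted([]uint64{1, 4, 7}, []uint64{2, 4, 9}))
	// Output: [1 2 4 4 7 9]
}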
#include <stdio.h>

int main()
{
    int n, p, a, b;
    scanf("%d%d%d", &n, &p, &a);

    /* s tracks the largest drop between consecutive readings. */
    int s = 0;
    for (int i = 1; i < n; i++) {
        scanf("%d", &b);
        s = a - b > s ? a - b : s;
        a = b;
        //printf ("%d\n",s);
    }

    /* Subtract p, and clamp the printed answer at zero. */
    s -= p;
    printf("%d\n", s > 0 ? s : 0);
    return 0;
}
#!/usr/bin/env python3 import rxcclib.io.Gaussian as rxgau import rxcclib.io.mol2 as rxmol2file import rxcclib.geometry.molecules as rxmol from rxcclib.ff.mol2 import Mol2 import subprocess import unittest, os, logging import numpy as np from io import StringIO rxgau.GauCOM.g09rt = 'g09' os.system( 'rm A* q* Q* p* esout *Gaussian* samples/bencom.fchk samples/bencom.chk samples/bencom.log' ) class TestFile(unittest.TestCase): def test_comfchk(self): file = rxgau.GauFile('samples/bencom') Mol2(file) self.assertIsInstance(file, rxgau.GauFile) self.assertIsInstance(file.com, rxgau.GauCOM) self.assertIsInstance(file.log, rxgau.GauLOG) self.assertIsInstance(file.fchk, rxgau.GauFCHK) self.assertIsInstance(file.mol2, Mol2) file.com.Popen() file.com.wait() file.chk.formchk() self.assertEqual(file.fchk.read(), True) self.assertEqual(file.fchk.natom, 12) self.assertEqual(file.fchk.mult, 1) self.assertEqual(file.fchk.charge, 0) self.assertIsInstance(file.fchk.xyz, str) hess = file.fchk.find33Hessian(3, 5) self.assertAlmostEqual(hess[0][0], -2.62909045e-2) self.assertAlmostEqual(hess[1][1], 3.38743754e-2) self.assertAlmostEqual(hess[2][2], 7.19580040e-3) def test_logmol2(self): file = rxgau.GauFile('samples/benresp') Mol2(file) args = 'antechamber -i {} -fi gout -o {} -fo mol2 -c resp'.format( file.log.abspath, file.mol2.abspath) args = args.split() run = subprocess.run(args) file.mol2.read() self.assertEqual(file.mol2.atomtypelist[0], None) self.assertEqual(file.mol2.atomtypelist[1], 'ca') self.assertEqual(file.mol2.atomtypelist[12], 'ha') self.assertEqual(file.mol2.atomchargelist[0], None) self.assertEqual(file.mol2.atomchargelist[1], -0.117738) self.assertEqual(file.mol2.atomchargelist[12], 0.117738) # def test_MMcom(self): # mmfile=rxgau.File('samples/mmfile') # mmfile.com.read() # xyz=StringIO(mmfile.com.xyz) # self.assertEqual() if __name__ == '__main__': logging.basicConfig(level=logging.DEBUG) unittest.main()
/* HPGL Command ER (Edge rectangle Relative) */ int hpgs_reader_do_ER (hpgs_reader *reader) { hpgs_point p,pp,cp; if (hpgs_reader_read_point(reader,&p,-1)) return -1; p.x += reader->current_point.x; p.y += reader->current_point.y; if (hpgs_reader_checkpath(reader)) return -1; reader->poly_buffer_size = 0; reader->polygon_mode = 1; cp = reader->current_point; if (hpgs_reader_moveto(reader,&cp)) return -1; pp.x = cp.x; pp.y = p.y; if (hpgs_reader_lineto(reader,&pp)) return -1; if (hpgs_reader_lineto(reader,&p)) return -1; pp.x = p.x; pp.y = cp.y; if (hpgs_reader_lineto(reader,&pp)) return -1; if (hpgs_reader_closepath(reader)) return -1; switch (do_polygon(reader,HPGS_FALSE)) { case 1: if (hpgs_reader_stroke(reader)) return -1; case 0: reader->polygon_mode = 0; return 0; default: return -1; } }
/* * CDDL HEADER START * * The contents of this file are subject to the terms of the * Common Development and Distribution License (the "License"). * You may not use this file except in compliance with the License. * * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE * or http://www.opensolaris.org/os/licensing. * See the License for the specific language governing permissions * and limitations under the License. * * When distributing Covered Code, include this CDDL HEADER in each * file and include the License file at usr/src/OPENSOLARIS.LICENSE. * If applicable, add the following below this CDDL HEADER, with the * fields enclosed by brackets "[]" replaced with your own identifying * information: Portions Copyright [yyyy] [name of copyright owner] * * CDDL HEADER END */ /* * Copyright 2009 Sun Microsystems, Inc. All rights reserved. * Use is subject to license terms. */ /* * Open Host Controller Driver (OHCI) * * The USB Open Host Controller driver is a software driver which interfaces * to the Universal Serial Bus layer (USBA) and the USB Open Host Controller. * The interface to USB Open Host Controller is defined by the OpenHCI Host * Controller Interface. * * NOTE: * * Currently OHCI driver does not support the following features * * - Handle request with multiple TDs under short xfer conditions except for * bulk transfers. */ #include <sys/usb/hcd/openhci/ohcid.h> #include <sys/disp.h> #include <sys/strsun.h> /* Pointer to the state structure */ static void *ohci_statep; int force_ohci_off = 1; /* Number of instances */ #define OHCI_INSTS 1 /* Adjustable variables for the size of the pools */ int ohci_ed_pool_size = OHCI_ED_POOL_SIZE; int ohci_td_pool_size = OHCI_TD_POOL_SIZE; /* * Initialize the values which are used for setting up head pointers for * the 32ms scheduling lists which starts from the HCCA. */ static uchar_t ohci_index[NUM_INTR_ED_LISTS / 2] = {0x0, 0x8, 0x4, 0xc, 0x2, 0xa, 0x6, 0xe, 0x1, 0x9, 0x5, 0xd, 0x3, 0xb, 0x7, 0xf}; /* Debugging information */ uint_t ohci_errmask = (uint_t)PRINT_MASK_ALL; uint_t ohci_errlevel = USB_LOG_L2; uint_t ohci_instance_debug = (uint_t)-1; /* * OHCI MSI tunable: * * By default MSI is enabled on all supported platforms. */ boolean_t ohci_enable_msi = B_TRUE; /* * HCDI entry points * * The Host Controller Driver Interfaces (HCDI) are the software interfaces * between the Universal Serial Bus Driver (USBA) and the Host Controller * Driver (HCD). The HCDI interfaces or entry points are subject to change.
*/ static int ohci_hcdi_pipe_open( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags); static int ohci_hcdi_pipe_close( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags); static int ohci_hcdi_pipe_reset( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags); static void ohci_hcdi_pipe_reset_data_toggle( usba_pipe_handle_data_t *ph); static int ohci_hcdi_pipe_ctrl_xfer( usba_pipe_handle_data_t *ph, usb_ctrl_req_t *ctrl_reqp, usb_flags_t usb_flags); static int ohci_hcdi_bulk_transfer_size( usba_device_t *usba_device, size_t *size); static int ohci_hcdi_pipe_bulk_xfer( usba_pipe_handle_data_t *ph, usb_bulk_req_t *bulk_reqp, usb_flags_t usb_flags); static int ohci_hcdi_pipe_intr_xfer( usba_pipe_handle_data_t *ph, usb_intr_req_t *intr_req, usb_flags_t usb_flags); static int ohci_hcdi_pipe_stop_intr_polling( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags); static int ohci_hcdi_get_current_frame_number( usba_device_t *usba_device, usb_frame_number_t *frame_number); static int ohci_hcdi_get_max_isoc_pkts( usba_device_t *usba_device, uint_t *max_isoc_pkts_per_request); static int ohci_hcdi_pipe_isoc_xfer( usba_pipe_handle_data_t *ph, usb_isoc_req_t *isoc_reqp, usb_flags_t usb_flags); static int ohci_hcdi_pipe_stop_isoc_polling( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags); /* * Internal Function Prototypes */ /* Host Controller Driver (HCD) initialization functions */ static void ohci_set_dma_attributes(ohci_state_t *ohcip); static int ohci_allocate_pools(ohci_state_t *ohcip); static void ohci_decode_ddi_dma_addr_bind_handle_result( ohci_state_t *ohcip, int result); static int ohci_map_regs(ohci_state_t *ohcip); static int ohci_register_intrs_and_init_mutex( ohci_state_t *ohcip); static int ohci_add_intrs(ohci_state_t *ohcip, int intr_type); static int ohci_init_ctlr(ohci_state_t *ohcip); static int ohci_init_hcca(ohci_state_t *ohcip); static void ohci_build_interrupt_lattice( ohci_state_t *ohcip); static int ohci_take_control(ohci_state_t *ohcip); static usba_hcdi_ops_t *ohci_alloc_hcdi_ops( ohci_state_t *ohcip); /* Host Controller Driver (HCD) deinitialization functions */ static int ohci_cleanup(ohci_state_t *ohcip); static void ohci_rem_intrs(ohci_state_t *ohcip); static int ohci_cpr_suspend(ohci_state_t *ohcip); static int ohci_cpr_resume(ohci_state_t *ohcip); /* Bandwidth Allocation functions */ static int ohci_allocate_bandwidth(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, uint_t *node); static void ohci_deallocate_bandwidth(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static int ohci_compute_total_bandwidth( usb_ep_descr_t *endpoint, usb_port_status_t port_status, uint_t *bandwidth); static int ohci_adjust_polling_interval( ohci_state_t *ohcip, usb_ep_descr_t *endpoint, usb_port_status_t port_status); static uint_t ohci_lattice_height(uint_t interval); static uint_t ohci_lattice_parent(uint_t node); static uint_t ohci_leftmost_leaf(uint_t node, uint_t height); static uint_t ohci_hcca_intr_index( uint_t node); static uint_t ohci_hcca_leaf_index( uint_t leaf); static uint_t ohci_pow_2(uint_t x); static uint_t ohci_log_2(uint_t x); /* Endpoint Descriptor (ED) related functions */ static uint_t ohci_unpack_endpoint(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static void ohci_insert_ed(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static void ohci_insert_ctrl_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_insert_bulk_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_insert_intr_ed( ohci_state_t *ohcip, ohci_pipe_private_t 
*pp); static void ohci_insert_isoc_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_modify_sKip_bit(ohci_state_t *ohcip, ohci_pipe_private_t *pp, skip_bit_t action, usb_flags_t flag); static void ohci_remove_ed(ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_remove_ctrl_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_remove_bulk_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_remove_periodic_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_insert_ed_on_reclaim_list( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_detach_ed_from_list( ohci_state_t *ohcip, ohci_ed_t *ept, uint_t ept_type); static ohci_ed_t *ohci_ed_iommu_to_cpu( ohci_state_t *ohcip, uintptr_t addr); /* Transfer Descriptor (TD) related functions */ static int ohci_initialize_dummy(ohci_state_t *ohcip, ohci_ed_t *ept); static ohci_trans_wrapper_t *ohci_allocate_ctrl_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_ctrl_req_t *ctrl_reqp, usb_flags_t usb_flags); static void ohci_insert_ctrl_req( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_ctrl_req_t *ctrl_reqp, ohci_trans_wrapper_t *tw, usb_flags_t usb_flags); static ohci_trans_wrapper_t *ohci_allocate_bulk_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_bulk_req_t *bulk_reqp, usb_flags_t usb_flags); static void ohci_insert_bulk_req(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_bulk_req_t *bulk_reqp, ohci_trans_wrapper_t *tw, usb_flags_t flags); static int ohci_start_pipe_polling(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_flags_t flags); static void ohci_set_periodic_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static ohci_trans_wrapper_t *ohci_allocate_intr_resources( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_intr_req_t *intr_reqp, usb_flags_t usb_flags); static void ohci_insert_intr_req(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, usb_flags_t flags); static int ohci_stop_periodic_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_flags_t flags); static ohci_trans_wrapper_t *ohci_allocate_isoc_resources( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_isoc_req_t *isoc_reqp, usb_flags_t usb_flags); static int ohci_insert_isoc_req(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, uint_t flags); static int ohci_insert_hc_td(ohci_state_t *ohcip, uint_t hctd_ctrl, uint32_t hctd_dma_offs, size_t hctd_length, uint32_t hctd_ctrl_phase, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); static ohci_td_t *ohci_allocate_td_from_pool( ohci_state_t *ohcip); static void ohci_fill_in_td(ohci_state_t *ohcip, ohci_td_t *td, ohci_td_t *new_dummy, uint_t hctd_ctrl, uint32_t hctd_dma_offs, size_t hctd_length, uint32_t hctd_ctrl_phase, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); static void ohci_init_itd( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, uint_t hctd_ctrl, uint32_t index, ohci_td_t *td); static int ohci_insert_td_with_frame_number( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *current_td, ohci_td_t *dummy_td); static void ohci_insert_td_on_tw(ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, ohci_td_t *td); static void ohci_done_list_tds(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); /* Transfer Wrapper (TW) functions */ static ohci_trans_wrapper_t *ohci_create_transfer_wrapper( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t length, uint_t usb_flags); static 
ohci_trans_wrapper_t *ohci_create_isoc_transfer_wrapper( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t length, usb_isoc_pkt_descr_t *descr, ushort_t pkt_count, size_t td_count, uint_t usb_flags); int ohci_allocate_tds_for_tw( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, size_t td_count); static ohci_trans_wrapper_t *ohci_allocate_tw_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t length, usb_flags_t usb_flags, size_t td_count); static void ohci_free_tw_tds_resources( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw); static void ohci_start_xfer_timer( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); static void ohci_stop_xfer_timer( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, uint_t flag); static void ohci_xfer_timeout_handler(void *arg); static void ohci_remove_tw_from_timeout_list( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw); static void ohci_start_timer(ohci_state_t *ohcip); static void ohci_free_dma_resources(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static void ohci_free_tw(ohci_state_t *ohcip, ohci_trans_wrapper_t *tw); static int ohci_tw_rebind_cookie( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); /* Interrupt Handling functions */ static uint_t ohci_intr(caddr_t arg1, caddr_t arg2); static void ohci_handle_missed_intr( ohci_state_t *ohcip); static void ohci_handle_ue(ohci_state_t *ohcip); static void ohci_handle_endpoint_reclaimation( ohci_state_t *ohcip); static void ohci_traverse_done_list( ohci_state_t *ohcip, ohci_td_t *head_done_list); static ohci_td_t *ohci_reverse_done_list( ohci_state_t *ohcip, ohci_td_t *head_done_list); static usb_cr_t ohci_parse_error(ohci_state_t *ohcip, ohci_td_t *td); static void ohci_parse_isoc_error( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td); static usb_cr_t ohci_check_for_error( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, uint_t ctrl); static void ohci_handle_error( ohci_state_t *ohcip, ohci_td_t *td, usb_cr_t error); static int ohci_cleanup_data_underrun( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td); static void ohci_handle_normal_td( ohci_state_t *ohcip, ohci_td_t *td, ohci_trans_wrapper_t *tw); static void ohci_handle_ctrl_td(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *); static void ohci_handle_bulk_td(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *); static void ohci_handle_intr_td(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *); static void ohci_handle_one_xfer_completion( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw); static void ohci_handle_isoc_td(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *); static void ohci_sendup_td_message( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, usb_cr_t error); static int ohci_check_done_head( ohci_state_t *ohcip, ohci_td_t *done_head); /* Miscillaneous functions */ static void ohci_cpr_cleanup( ohci_state_t *ohcip); static usb_req_attrs_t ohci_get_xfer_attrs(ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); static int ohci_allocate_periodic_in_resource( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, usb_flags_t flags); static int ohci_wait_for_sof( ohci_state_t *ohcip); static void ohci_pipe_cleanup( ohci_state_t 
*ohcip, usba_pipe_handle_data_t *ph); static void ohci_wait_for_transfers_completion( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_check_for_transfers_completion( ohci_state_t *ohcip, ohci_pipe_private_t *pp); static void ohci_save_data_toggle(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static void ohci_restore_data_toggle(ohci_state_t *ohcip, usba_pipe_handle_data_t *ph); static void ohci_deallocate_periodic_in_resource( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw); static void ohci_do_client_periodic_in_req_callback( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_cr_t completion_reason); static void ohci_hcdi_callback( usba_pipe_handle_data_t *ph, ohci_trans_wrapper_t *tw, usb_cr_t completion_reason); /* Kstat Support */ static void ohci_create_stats(ohci_state_t *ohcip); static void ohci_destroy_stats(ohci_state_t *ohcip); static void ohci_do_byte_stats( ohci_state_t *ohcip, size_t len, uint8_t attr, uint8_t addr); static void ohci_do_intrs_stats( ohci_state_t *ohcip, int val); static void ohci_print_op_regs(ohci_state_t *ohcip); static void ohci_print_ed(ohci_state_t *ohcip, ohci_ed_t *ed); static void ohci_print_td(ohci_state_t *ohcip, ohci_td_t *td); /* extern */ int usba_hubdi_root_hub_power(dev_info_t *dip, int comp, int level); /* * Device operations (dev_ops) entries function prototypes. * * We use the hub cbops since all nexus ioctl operations defined so far will * be executed by the root hub. The following are the Host Controller Driver * (HCD) entry points. * * the open/close/ioctl functions call the corresponding usba_hubdi_* * calls after looking up the dip thru the dev_t. */ static int ohci_open(dev_t *devp, int flags, int otyp, cred_t *credp); static int ohci_close(dev_t dev, int flag, int otyp, cred_t *credp); static int ohci_ioctl(dev_t dev, int cmd, intptr_t arg, int mode, cred_t *credp, int *rvalp); static int ohci_attach(dev_info_t *dip, ddi_attach_cmd_t cmd); static int ohci_detach(dev_info_t *dip, ddi_detach_cmd_t cmd); static int ohci_quiesce(dev_info_t *dip); static int ohci_info(dev_info_t *dip, ddi_info_cmd_t infocmd, void *arg, void **result); static struct cb_ops ohci_cb_ops = { ohci_open, /* Open */ ohci_close, /* Close */ nodev, /* Strategy */ nodev, /* Print */ nodev, /* Dump */ nodev, /* Read */ nodev, /* Write */ ohci_ioctl, /* Ioctl */ nodev, /* Devmap */ nodev, /* Mmap */ nodev, /* Segmap */ nochpoll, /* Poll */ ddi_prop_op, /* cb_prop_op */ NULL, /* Streamtab */ D_MP /* Driver compatibility flag */ }; static struct dev_ops ohci_ops = { DEVO_REV, /* Devo_rev */ 0, /* Refcnt */ ohci_info, /* Info */ nulldev, /* Identify */ nulldev, /* Probe */ ohci_attach, /* Attach */ ohci_detach, /* Detach */ nodev, /* Reset */ &ohci_cb_ops, /* Driver operations */ &usba_hubdi_busops, /* Bus operations */ usba_hubdi_root_hub_power, /* Power */ ohci_quiesce, /* Quiesce */ }; /* * The USBA library must be loaded for this driver. */ static struct modldrv modldrv = { &mod_driverops, /* Type of module. This one is a driver */ "USB OpenHCI Driver", /* Name of the module. 
*/ &ohci_ops, /* Driver ops */ }; static struct modlinkage modlinkage = { MODREV_1, (void *)&modldrv, NULL }; int _init(void) { int error; /* Initialize the soft state structures */ if ((error = ddi_soft_state_init(&ohci_statep, sizeof (ohci_state_t), OHCI_INSTS)) != 0) { return (error); } /* Install the loadable module */ if ((error = mod_install(&modlinkage)) != 0) { ddi_soft_state_fini(&ohci_statep); } return (error); } int _info(struct modinfo *modinfop) { return (mod_info(&modlinkage, modinfop)); } int _fini(void) { int error; if ((error = mod_remove(&modlinkage)) == 0) { /* Release per module resources */ ddi_soft_state_fini(&ohci_statep); } return (error); } /* * Host Controller Driver (HCD) entry points */ /* * ohci_attach: */ static int ohci_attach(dev_info_t *dip, ddi_attach_cmd_t cmd) { int instance; ohci_state_t *ohcip = NULL; usba_hcdi_register_args_t hcdi_args; switch (cmd) { case DDI_ATTACH: break; case DDI_RESUME: ohcip = ohci_obtain_state(dip); return (ohci_cpr_resume(ohcip)); default: return (DDI_FAILURE); } /* Get the instance and create soft state */ instance = ddi_get_instance(dip); if (ddi_soft_state_zalloc(ohci_statep, instance) != 0) { return (DDI_FAILURE); } ohcip = ddi_get_soft_state(ohci_statep, instance); if (ohcip == NULL) { return (DDI_FAILURE); } ohcip->ohci_flags = OHCI_ATTACH; ohcip->ohci_log_hdl = usb_alloc_log_hdl(dip, "ohci", &ohci_errlevel, &ohci_errmask, &ohci_instance_debug, 0); ohcip->ohci_flags |= OHCI_ZALLOC; /* Set host controller soft state to initilization */ ohcip->ohci_hc_soft_state = OHCI_CTLR_INIT_STATE; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohcip = 0x%p", (void *)ohcip); /* Initialize the DMA attributes */ ohci_set_dma_attributes(ohcip); /* Save the dip and instance */ ohcip->ohci_dip = dip; ohcip->ohci_instance = instance; /* Initialize the kstat structures */ ohci_create_stats(ohcip); /* Create the td and ed pools */ if (ohci_allocate_pools(ohcip) != DDI_SUCCESS) { (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } /* Map the registers */ if (ohci_map_regs(ohcip) != DDI_SUCCESS) { (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } /* Get the ohci chip vendor and device id */ ohcip->ohci_vendor_id = pci_config_get16( ohcip->ohci_config_handle, PCI_CONF_VENID); ohcip->ohci_device_id = pci_config_get16( ohcip->ohci_config_handle, PCI_CONF_DEVID); ohcip->ohci_rev_id = pci_config_get8( ohcip->ohci_config_handle, PCI_CONF_REVID); /* Register interrupts */ if (ohci_register_intrs_and_init_mutex(ohcip) != DDI_SUCCESS) { (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } mutex_enter(&ohcip->ohci_int_mutex); /* Initialize the controller */ if (ohci_init_ctlr(ohcip) != DDI_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } /* * At this point, the hardware wiil be okay. * Initialize the usba_hcdi structure */ ohcip->ohci_hcdi_ops = ohci_alloc_hcdi_ops(ohcip); mutex_exit(&ohcip->ohci_int_mutex); /* * Make this HCD instance known to USBA * (dma_attr must be passed for USBA busctl's) */ hcdi_args.usba_hcdi_register_version = HCDI_REGISTER_VERSION; hcdi_args.usba_hcdi_register_dip = dip; hcdi_args.usba_hcdi_register_ops = ohcip->ohci_hcdi_ops; hcdi_args.usba_hcdi_register_dma_attr = &ohcip->ohci_dma_attr; /* * Priority and iblock_cookie are one and the same * (However, retaining hcdi_soft_iblock_cookie for now * assigning it w/ priority. 
In future all iblock_cookie * could just go) */ hcdi_args.usba_hcdi_register_iblock_cookie = (ddi_iblock_cookie_t)(uintptr_t)ohcip->ohci_intr_pri; if (usba_hcdi_register(&hcdi_args, 0) != DDI_SUCCESS) { (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } ohcip->ohci_flags |= OHCI_USBAREG; mutex_enter(&ohcip->ohci_int_mutex); if ((ohci_init_root_hub(ohcip)) != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } mutex_exit(&ohcip->ohci_int_mutex); /* Finally load the root hub driver */ if (ohci_load_root_hub_driver(ohcip) != USB_SUCCESS) { (void) ohci_cleanup(ohcip); return (DDI_FAILURE); } ohcip->ohci_flags |= OHCI_RHREG; /* Display information in the banner */ ddi_report_dev(dip); mutex_enter(&ohcip->ohci_int_mutex); /* Reset the ohci initilization flag */ ohcip->ohci_flags &= ~OHCI_ATTACH; /* Print the Host Control's Operational registers */ ohci_print_op_regs(ohcip); /* For RIO we need to call pci_report_pmcap */ if (OHCI_IS_RIO(ohcip)) { (void) pci_report_pmcap(dip, PCI_PM_IDLESPEED, (void *)4000); } mutex_exit(&ohcip->ohci_int_mutex); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_attach: dip = 0x%p done", (void *)dip); return (DDI_SUCCESS); } /* * ohci_detach: */ int ohci_detach(dev_info_t *dip, ddi_detach_cmd_t cmd) { ohci_state_t *ohcip = ohci_obtain_state(dip); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_detach:"); switch (cmd) { case DDI_DETACH: return (ohci_cleanup(ohcip)); case DDI_SUSPEND: return (ohci_cpr_suspend(ohcip)); default: return (DDI_FAILURE); } } /* * ohci_info: */ /* ARGSUSED */ static int ohci_info(dev_info_t *dip, ddi_info_cmd_t infocmd, void *arg, void **result) { dev_t dev; ohci_state_t *ohcip; int instance; int error = DDI_FAILURE; switch (infocmd) { case DDI_INFO_DEVT2DEVINFO: dev = (dev_t)arg; instance = OHCI_UNIT(dev); ohcip = ddi_get_soft_state(ohci_statep, instance); if (ohcip != NULL) { *result = (void *)ohcip->ohci_dip; if (*result != NULL) { error = DDI_SUCCESS; } } else { *result = NULL; } break; case DDI_INFO_DEVT2INSTANCE: dev = (dev_t)arg; instance = OHCI_UNIT(dev); *result = (void *)(uintptr_t)instance; error = DDI_SUCCESS; break; default: break; } return (error); } /* * cb_ops entry points */ static dev_info_t * ohci_get_dip(dev_t dev) { int instance = OHCI_UNIT(dev); ohci_state_t *ohcip = ddi_get_soft_state(ohci_statep, instance); if (ohcip) { return (ohcip->ohci_dip); } else { return (NULL); } } static int ohci_open(dev_t *devp, int flags, int otyp, cred_t *credp) { dev_info_t *dip = ohci_get_dip(*devp); return (usba_hubdi_open(dip, devp, flags, otyp, credp)); } static int ohci_close(dev_t dev, int flag, int otyp, cred_t *credp) { dev_info_t *dip = ohci_get_dip(dev); return (usba_hubdi_close(dip, dev, flag, otyp, credp)); } static int ohci_ioctl(dev_t dev, int cmd, intptr_t arg, int mode, cred_t *credp, int *rvalp) { dev_info_t *dip = ohci_get_dip(dev); return (usba_hubdi_ioctl(dip, dev, cmd, arg, mode, credp, rvalp)); } /* * Host Controller Driver (HCD) initialization functions */ /* * ohci_set_dma_attributes: * * Set the limits in the DMA attributes structure. Most of the values used * in the DMA limit structres are the default values as specified by the * Writing PCI device drivers document. 
*/ static void ohci_set_dma_attributes(ohci_state_t *ohcip) { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_set_dma_attributes:"); /* Initialize the DMA attributes */ ohcip->ohci_dma_attr.dma_attr_version = DMA_ATTR_V0; ohcip->ohci_dma_attr.dma_attr_addr_lo = 0x00000000ull; ohcip->ohci_dma_attr.dma_attr_addr_hi = 0xfffffffeull; /* 32 bit addressing */ ohcip->ohci_dma_attr.dma_attr_count_max = OHCI_DMA_ATTR_COUNT_MAX; /* Byte alignment */ ohcip->ohci_dma_attr.dma_attr_align = OHCI_DMA_ATTR_ALIGNMENT; /* * Since PCI specification is byte alignment, the * burstsize field should be set to 1 for PCI devices. */ ohcip->ohci_dma_attr.dma_attr_burstsizes = 0x1; ohcip->ohci_dma_attr.dma_attr_minxfer = 0x1; ohcip->ohci_dma_attr.dma_attr_maxxfer = OHCI_DMA_ATTR_MAX_XFER; ohcip->ohci_dma_attr.dma_attr_seg = 0xffffffffull; ohcip->ohci_dma_attr.dma_attr_sgllen = 1; ohcip->ohci_dma_attr.dma_attr_granular = OHCI_DMA_ATTR_GRANULAR; ohcip->ohci_dma_attr.dma_attr_flags = 0; } /* * ohci_allocate_pools: * * Allocate the system memory for the Endpoint Descriptor (ED) and for the * Transfer Descriptor (TD) pools. Both ED and TD structures must be aligned * to a 16 byte boundary. */ static int ohci_allocate_pools(ohci_state_t *ohcip) { ddi_device_acc_attr_t dev_attr; size_t real_length; int result; uint_t ccount; int i; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_allocate_pools:"); /* The host controller will be little endian */ dev_attr.devacc_attr_version = DDI_DEVICE_ATTR_V0; dev_attr.devacc_attr_endian_flags = DDI_STRUCTURE_LE_ACC; dev_attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC; /* Byte alignment to TD alignment */ ohcip->ohci_dma_attr.dma_attr_align = OHCI_DMA_ATTR_TD_ALIGNMENT; /* Allocate the TD pool DMA handle */ if (ddi_dma_alloc_handle(ohcip->ohci_dip, &ohcip->ohci_dma_attr, DDI_DMA_SLEEP, 0, &ohcip->ohci_td_pool_dma_handle) != DDI_SUCCESS) { return (DDI_FAILURE); } /* Allocate the memory for the TD pool */ if (ddi_dma_mem_alloc(ohcip->ohci_td_pool_dma_handle, ohci_td_pool_size * sizeof (ohci_td_t), &dev_attr, DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, 0, (caddr_t *)&ohcip->ohci_td_pool_addr, &real_length, &ohcip->ohci_td_pool_mem_handle)) { return (DDI_FAILURE); } /* Map the TD pool into the I/O address space */ result = ddi_dma_addr_bind_handle( ohcip->ohci_td_pool_dma_handle, NULL, (caddr_t)ohcip->ohci_td_pool_addr, real_length, DDI_DMA_RDWR | DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, NULL, &ohcip->ohci_td_pool_cookie, &ccount); bzero((void *)ohcip->ohci_td_pool_addr, ohci_td_pool_size * sizeof (ohci_td_t)); /* Process the result */ if (result == DDI_DMA_MAPPED) { /* The cookie count should be 1 */ if (ccount != 1) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_allocate_pools: More than 1 cookie"); return (DDI_FAILURE); } } else { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_allocate_pools: Result = %d", result); ohci_decode_ddi_dma_addr_bind_handle_result(ohcip, result); return (DDI_FAILURE); } /* * DMA addresses for TD pools are bound */ ohcip->ohci_dma_addr_bind_flag |= OHCI_TD_POOL_BOUND; /* Initialize the TD pool */ for (i = 0; i < ohci_td_pool_size; i ++) { Set_TD(ohcip->ohci_td_pool_addr[i].hctd_state, HC_TD_FREE); } /* Byte alignment to ED alignment */ ohcip->ohci_dma_attr.dma_attr_align = OHCI_DMA_ATTR_ED_ALIGNMENT; /* Allocate the ED pool DMA handle */ if (ddi_dma_alloc_handle(ohcip->ohci_dip, &ohcip->ohci_dma_attr, DDI_DMA_SLEEP, 0, &ohcip->ohci_ed_pool_dma_handle) != DDI_SUCCESS) { return (DDI_FAILURE); } /* Allocate the memory for the ED 
pool */ if (ddi_dma_mem_alloc(ohcip->ohci_ed_pool_dma_handle, ohci_ed_pool_size * sizeof (ohci_ed_t), &dev_attr, DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, 0, (caddr_t *)&ohcip->ohci_ed_pool_addr, &real_length, &ohcip->ohci_ed_pool_mem_handle) != DDI_SUCCESS) { return (DDI_FAILURE); } result = ddi_dma_addr_bind_handle(ohcip->ohci_ed_pool_dma_handle, NULL, (caddr_t)ohcip->ohci_ed_pool_addr, real_length, DDI_DMA_RDWR | DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, NULL, &ohcip->ohci_ed_pool_cookie, &ccount); bzero((void *)ohcip->ohci_ed_pool_addr, ohci_ed_pool_size * sizeof (ohci_ed_t)); /* Process the result */ if (result == DDI_DMA_MAPPED) { /* The cookie count should be 1 */ if (ccount != 1) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_allocate_pools: More than 1 cookie"); return (DDI_FAILURE); } } else { ohci_decode_ddi_dma_addr_bind_handle_result(ohcip, result); return (DDI_FAILURE); } /* * DMA addresses for ED pools are bound */ ohcip->ohci_dma_addr_bind_flag |= OHCI_ED_POOL_BOUND; /* Initialize the ED pool */ for (i = 0; i < ohci_ed_pool_size; i ++) { Set_ED(ohcip->ohci_ed_pool_addr[i].hced_state, HC_EPT_FREE); } return (DDI_SUCCESS); } /* * ohci_decode_ddi_dma_addr_bind_handle_result: * * Process the return values of ddi_dma_addr_bind_handle() */ static void ohci_decode_ddi_dma_addr_bind_handle_result( ohci_state_t *ohcip, int result) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_decode_ddi_dma_addr_bind_handle_result:"); switch (result) { case DDI_DMA_PARTIAL_MAP: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "Partial transfers not allowed"); break; case DDI_DMA_INUSE: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "Handle is in use"); break; case DDI_DMA_NORESOURCES: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "No resources"); break; case DDI_DMA_NOMAPPING: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "No mapping"); break; case DDI_DMA_TOOBIG: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "Object is too big"); break; default: USB_DPRINTF_L2(PRINT_MASK_ALL, ohcip->ohci_log_hdl, "Unknown dma error"); } } /* * ohci_map_regs: * * The Host Controller (HC) contains a set of on-chip operational registers * and which should be mapped into a non-cacheable portion of the system * addressable space. 
*/ static int ohci_map_regs(ohci_state_t *ohcip) { ddi_device_acc_attr_t attr; uint16_t cmd_reg; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_map_regs:"); /* The host controller will be little endian */ attr.devacc_attr_version = DDI_DEVICE_ATTR_V0; attr.devacc_attr_endian_flags = DDI_STRUCTURE_LE_ACC; attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC; /* Map in operational registers */ if (ddi_regs_map_setup(ohcip->ohci_dip, 1, (caddr_t *)&ohcip->ohci_regsp, 0, sizeof (ohci_regs_t), &attr, &ohcip->ohci_regs_handle) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_map_regs: Map setup error"); return (DDI_FAILURE); } if (pci_config_setup(ohcip->ohci_dip, &ohcip->ohci_config_handle) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_map_regs: Config error"); return (DDI_FAILURE); } /* Make sure Memory Access Enable and Master Enable are set */ cmd_reg = pci_config_get16(ohcip->ohci_config_handle, PCI_CONF_COMM); if (!(cmd_reg & PCI_COMM_MAE)) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_map_regs: Memory base address access disabled"); return (DDI_FAILURE); } cmd_reg |= (PCI_COMM_MAE | PCI_COMM_ME); pci_config_put16(ohcip->ohci_config_handle, PCI_CONF_COMM, cmd_reg); return (DDI_SUCCESS); } /* * The following simulated polling is for debugging purposes only. * It is activated on x86 by setting usb-polling=true in GRUB or ohci.conf. */ static int ohci_is_polled(dev_info_t *dip) { int ret; char *propval; if (ddi_prop_lookup_string(DDI_DEV_T_ANY, dip, 0, "usb-polling", &propval) != DDI_SUCCESS) return (0); ret = (strcmp(propval, "true") == 0); ddi_prop_free(propval); return (ret); } static void ohci_poll_intr(void *arg) { /* poll every millisecond */ for (;;) { (void) ohci_intr(arg, NULL); delay(drv_usectohz(1000)); } } /* * ohci_register_intrs_and_init_mutex: * * Register interrupts and initialize each mutex and condition variables */ static int ohci_register_intrs_and_init_mutex(ohci_state_t *ohcip) { int intr_types; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex:"); /* * Sometimes the OHCI controller of ULI1575 southbridge * could not receive SOF intrs when enable MSI. Hence * MSI is disabled for this chip. 
*/ if ((ohcip->ohci_vendor_id == PCI_ULI1575_VENID) && (ohcip->ohci_device_id == PCI_ULI1575_DEVID)) { ohcip->ohci_msi_enabled = B_FALSE; } else { ohcip->ohci_msi_enabled = ohci_enable_msi; } if (ohci_is_polled(ohcip->ohci_dip)) { extern pri_t maxclsyspri; USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "running in simulated polled mode"); (void) thread_create(NULL, 0, ohci_poll_intr, ohcip, 0, &p0, TS_RUN, maxclsyspri); goto skip_intr; } /* Get supported interrupt types */ if (ddi_intr_get_supported_types(ohcip->ohci_dip, &intr_types) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "ddi_intr_get_supported_types failed"); return (DDI_FAILURE); } USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "supported interrupt types 0x%x", intr_types); if ((intr_types & DDI_INTR_TYPE_MSI) && ohcip->ohci_msi_enabled) { if (ohci_add_intrs(ohcip, DDI_INTR_TYPE_MSI) != DDI_SUCCESS) { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: MSI " "registration failed, trying FIXED interrupt \n"); } else { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "Using MSI interrupt type\n"); ohcip->ohci_intr_type = DDI_INTR_TYPE_MSI; ohcip->ohci_flags |= OHCI_INTR; } } if ((!(ohcip->ohci_flags & OHCI_INTR)) && (intr_types & DDI_INTR_TYPE_FIXED)) { if (ohci_add_intrs(ohcip, DDI_INTR_TYPE_FIXED) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "FIXED interrupt registration failed\n"); return (DDI_FAILURE); } USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_register_intrs_and_init_mutex: " "Using FIXED interrupt type\n"); ohcip->ohci_intr_type = DDI_INTR_TYPE_FIXED; ohcip->ohci_flags |= OHCI_INTR; } skip_intr: /* Create prototype for SOF condition variable */ cv_init(&ohcip->ohci_SOF_cv, NULL, CV_DRIVER, NULL); /* Semaphore to serialize opens and closes */ sema_init(&ohcip->ohci_ocsem, 1, NULL, SEMA_DRIVER, NULL); return (DDI_SUCCESS); } /* * ohci_add_intrs: * * Register FIXED or MSI interrupts. */ static int ohci_add_intrs(ohci_state_t *ohcip, int intr_type) { int actual, avail, intr_size, count = 0; int i, flag, ret; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: interrupt type 0x%x", intr_type); /* Get number of interrupts */ ret = ddi_intr_get_nintrs(ohcip->ohci_dip, intr_type, &count); if ((ret != DDI_SUCCESS) || (count == 0)) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_get_nintrs() failure, " "ret: %d, count: %d", ret, count); return (DDI_FAILURE); } /* Get number of available interrupts */ ret = ddi_intr_get_navail(ohcip->ohci_dip, intr_type, &avail); if ((ret != DDI_SUCCESS) || (avail == 0)) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_get_navail() failure, " "ret: %d, count: %d", ret, count); return (DDI_FAILURE); } if (avail < count) { USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ohci_add_intrs: nintrs () " "returned %d, navail returned %d\n", count, avail); } /* Allocate an array of interrupt handles */ intr_size = count * sizeof (ddi_intr_handle_t); ohcip->ohci_htable = kmem_zalloc(intr_size, KM_SLEEP); flag = (intr_type == DDI_INTR_TYPE_MSI) ? 
DDI_INTR_ALLOC_STRICT:DDI_INTR_ALLOC_NORMAL; /* call ddi_intr_alloc() */ ret = ddi_intr_alloc(ohcip->ohci_dip, ohcip->ohci_htable, intr_type, 0, count, &actual, flag); if ((ret != DDI_SUCCESS) || (actual == 0)) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_alloc() failed %d", ret); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } if (actual < count) { USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: Requested: %d, Received: %d\n", count, actual); for (i = 0; i < actual; i++) (void) ddi_intr_free(ohcip->ohci_htable[i]); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } ohcip->ohci_intr_cnt = actual; if ((ret = ddi_intr_get_pri(ohcip->ohci_htable[0], &ohcip->ohci_intr_pri)) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_get_pri() failed %d", ret); for (i = 0; i < actual; i++) (void) ddi_intr_free(ohcip->ohci_htable[i]); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: Supported Interrupt priority 0x%x", ohcip->ohci_intr_pri); /* Test for high level mutex */ if (ohcip->ohci_intr_pri >= ddi_intr_get_hilevel_pri()) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: Hi level interrupt not supported"); for (i = 0; i < actual; i++) (void) ddi_intr_free(ohcip->ohci_htable[i]); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } /* Initialize the mutex */ mutex_init(&ohcip->ohci_int_mutex, NULL, MUTEX_DRIVER, DDI_INTR_PRI(ohcip->ohci_intr_pri)); /* Call ddi_intr_add_handler() */ for (i = 0; i < actual; i++) { if ((ret = ddi_intr_add_handler(ohcip->ohci_htable[i], ohci_intr, (caddr_t)ohcip, (caddr_t)(uintptr_t)i)) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_add_handler() " "failed %d", ret); for (i = 0; i < actual; i++) (void) ddi_intr_free(ohcip->ohci_htable[i]); mutex_destroy(&ohcip->ohci_int_mutex); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } } if ((ret = ddi_intr_get_cap(ohcip->ohci_htable[0], &ohcip->ohci_intr_cap)) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_add_intrs: ddi_intr_get_cap() failed %d", ret); for (i = 0; i < actual; i++) { (void) ddi_intr_remove_handler(ohcip->ohci_htable[i]); (void) ddi_intr_free(ohcip->ohci_htable[i]); } mutex_destroy(&ohcip->ohci_int_mutex); kmem_free(ohcip->ohci_htable, intr_size); return (DDI_FAILURE); } /* Enable all interrupts */ if (ohcip->ohci_intr_cap & DDI_INTR_FLAG_BLOCK) { /* Call ddi_intr_block_enable() for MSI interrupts */ (void) ddi_intr_block_enable(ohcip->ohci_htable, ohcip->ohci_intr_cnt); } else { /* Call ddi_intr_enable for MSI or FIXED interrupts */ for (i = 0; i < ohcip->ohci_intr_cnt; i++) (void) ddi_intr_enable(ohcip->ohci_htable[i]); } return (DDI_SUCCESS); } /* * ohci_init_ctlr: * * Initialize the Host Controller (HC). */ static int ohci_init_ctlr(ohci_state_t *ohcip) { int revision, curr_control, max_packet = 0; clock_t sof_time_wait; int retry = 0; int ohci_frame_interval; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_ctlr:"); if (ohci_take_control(ohcip) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_ctlr: ohci_take_control failed\n"); return (DDI_FAILURE); } /* * Soft reset the host controller. 
 *
 * On soft reset, the ohci host controller moves to the
 * USB Suspend state in which most of the ohci operational
 * registers are reset except stated ones. The soft reset
 * doesn't cause a reset of the ohci root hub, and no
 * subsequent reset signaling should be asserted to its
 * downstream ports.
 */
Set_OpReg(hcr_cmd_status, HCR_STATUS_RESET);

mutex_exit(&ohcip->ohci_int_mutex);

/* Wait 10ms for reset to complete */
delay(drv_usectohz(OHCI_RESET_TIMEWAIT));

mutex_enter(&ohcip->ohci_int_mutex);

/*
 * Now hard reset the host controller by performing a USB
 * reset, in order to reset the ohci root hub.
 */
Set_OpReg(hcr_control, HCR_CONTROL_RESET);

/*
 * According to Section 5.1.2.3 of the specification, the
 * host controller will go into suspend state immediately
 * after the reset.
 */

/* Verify the version number */
revision = Get_OpReg(hcr_revision);

if ((revision & HCR_REVISION_MASK) != HCR_REVISION_1_0) {

	return (DDI_FAILURE);
}

USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl,
    "ohci_init_ctlr: Revision verified");

/* hcca area need not be initialized on resume */
if (ohcip->ohci_hc_soft_state == OHCI_CTLR_INIT_STATE) {

	/* Initialize the hcca area */
	if (ohci_init_hcca(ohcip) != DDI_SUCCESS) {

		return (DDI_FAILURE);
	}
}

/*
 * Workaround for the ULI1575 chipset. The following OHCI Operational
 * Memory Registers are not cleared to their default value on reset.
 * Explicitly set the registers to their default value.
 */
if (ohcip->ohci_vendor_id == PCI_ULI1575_VENID &&
    ohcip->ohci_device_id == PCI_ULI1575_DEVID) {
	Set_OpReg(hcr_control, HCR_CONTROL_DEFAULT);
	Set_OpReg(hcr_intr_enable, HCR_INT_ENABLE_DEFAULT);
	Set_OpReg(hcr_HCCA, HCR_HCCA_DEFAULT);
	Set_OpReg(hcr_ctrl_head, HCR_CONTROL_HEAD_ED_DEFAULT);
	Set_OpReg(hcr_bulk_head, HCR_BULK_HEAD_ED_DEFAULT);
	Set_OpReg(hcr_frame_interval, HCR_FRAME_INTERVAL_DEFAULT);
	Set_OpReg(hcr_periodic_strt, HCR_PERIODIC_START_DEFAULT);
}

/* Set the HcHCCA to the physical address of the HCCA block */
Set_OpReg(hcr_HCCA, (uint_t)ohcip->ohci_hcca_cookie.dmac_address);

/*
 * Set HcInterruptEnable to enable all interrupts except Root
 * Hub Status change and SOF interrupts.
 */
Set_OpReg(hcr_intr_enable, HCR_INTR_SO | HCR_INTR_WDH | HCR_INTR_RD |
    HCR_INTR_UE | HCR_INTR_FNO | HCR_INTR_MIE);

/*
 * For non-periodic transfers, reserve at least enough time for one
 * low-speed device transaction. According to the USB Bandwidth
 * Analysis white paper, and as per OHCI Specification 1.0a, section
 * 7.3.5, page 123, one low-speed transaction takes 0x628 full speed
 * bits (197 bytes), which comes to around 13% of USB frame time.
 *
 * The periodic transfers will get around 87% of USB frame time.
 */
Set_OpReg(hcr_periodic_strt,
    ((PERIODIC_XFER_STARTS * BITS_PER_BYTE) - 1));

/* Save the contents of the Frame Interval Registers */
ohcip->ohci_frame_interval = Get_OpReg(hcr_frame_interval);

/*
 * Initialize the FSLargestDataPacket value in the frame interval
 * register. The controller compares the value of MaxPacketSize to
 * this value to see if the entire packet may be sent out before
 * the EOF. (For instance, assuming the nominal frame interval of
 * 11,999 bit times and a MAX_OVERHEAD of 210, this works out to
 * (11999 - 210) * 6 / 7 = 10,104 bit times.)
 */
max_packet = ((((ohcip->ohci_frame_interval -
    MAX_OVERHEAD) * 6) / 7) << HCR_FRME_FSMPS_SHFT);

Set_OpReg(hcr_frame_interval,
    (max_packet | ohcip->ohci_frame_interval));

/*
 * Sometimes the HcFmInterval register in the OHCI controller does
 * not maintain its value after the first write. This problem is
 * found on the ULI M1575 South Bridge. To work around the hardware
 * problem, check the value after the write and retry if the last
 * write failed.
*/ if (ohcip->ohci_vendor_id == PCI_ULI1575_VENID && ohcip->ohci_device_id == PCI_ULI1575_DEVID) { ohci_frame_interval = Get_OpReg(hcr_frame_interval); while ((ohci_frame_interval != (max_packet | ohcip->ohci_frame_interval))) { if (retry >= 10) { USB_DPRINTF_L1(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "Failed to program" " Frame Interval Register."); return (DDI_FAILURE); } retry++; USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_ctlr: Failed to program Frame" " Interval Register, retry=%d", retry); Set_OpReg(hcr_frame_interval, (max_packet | ohcip->ohci_frame_interval)); ohci_frame_interval = Get_OpReg(hcr_frame_interval); } } /* Begin sending SOFs */ curr_control = Get_OpReg(hcr_control); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_ctlr: curr_control=0x%x", curr_control); /* Set the state to operational */ curr_control = (curr_control & (~HCR_CONTROL_HCFS)) | HCR_CONTROL_OPERAT; Set_OpReg(hcr_control, curr_control); ASSERT((Get_OpReg(hcr_control) & HCR_CONTROL_HCFS) == HCR_CONTROL_OPERAT); /* Set host controller soft state to operational */ ohcip->ohci_hc_soft_state = OHCI_CTLR_OPERATIONAL_STATE; /* Get the number of clock ticks to wait */ sof_time_wait = drv_usectohz(OHCI_MAX_SOF_TIMEWAIT * 1000000); /* Clear ohci_sof_flag indicating waiting for SOF interrupt */ ohcip->ohci_sof_flag = B_FALSE; /* Enable the SOF interrupt */ Set_OpReg(hcr_intr_enable, HCR_INTR_SOF); ASSERT(Get_OpReg(hcr_intr_enable) & HCR_INTR_SOF); (void) cv_reltimedwait(&ohcip->ohci_SOF_cv, &ohcip->ohci_int_mutex, sof_time_wait, TR_CLOCK_TICK); /* Wait for the SOF or timeout event */ if (ohcip->ohci_sof_flag == B_FALSE) { /* Set host controller soft state to error */ ohcip->ohci_hc_soft_state = OHCI_CTLR_ERROR_STATE; USB_DPRINTF_L0(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "No SOF interrupts have been received, this USB OHCI host" "controller is unusable"); return (DDI_FAILURE); } USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_ctlr: SOF's have started"); return (DDI_SUCCESS); } /* * ohci_init_hcca: * * Allocate the system memory and initialize Host Controller Communication * Area (HCCA). The HCCA structure must be aligned to a 256-byte boundary. */ static int ohci_init_hcca(ohci_state_t *ohcip) { ddi_device_acc_attr_t dev_attr; size_t real_length; uint_t mask, ccount; int result; uintptr_t addr; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca:"); /* The host controller will be little endian */ dev_attr.devacc_attr_version = DDI_DEVICE_ATTR_V0; dev_attr.devacc_attr_endian_flags = DDI_STRUCTURE_LE_ACC; dev_attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC; /* Byte alignment to HCCA alignment */ ohcip->ohci_dma_attr.dma_attr_align = OHCI_DMA_ATTR_HCCA_ALIGNMENT; /* Create space for the HCCA block */ if (ddi_dma_alloc_handle(ohcip->ohci_dip, &ohcip->ohci_dma_attr, DDI_DMA_SLEEP, 0, &ohcip->ohci_hcca_dma_handle) != DDI_SUCCESS) { return (DDI_FAILURE); } if (ddi_dma_mem_alloc(ohcip->ohci_hcca_dma_handle, 2 * sizeof (ohci_hcca_t), &dev_attr, DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, 0, (caddr_t *)&ohcip->ohci_hccap, &real_length, &ohcip->ohci_hcca_mem_handle)) { return (DDI_FAILURE); } bzero((void *)ohcip->ohci_hccap, real_length); /* Figure out the alignment requirements */ Set_OpReg(hcr_HCCA, 0xFFFFFFFF); /* * Read the hcr_HCCA register until * contenets are non-zero. 
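 *
 * The value read back is a mask of the HcHCCA address bits the
 * controller implements; a part that returns 0xffffff00, for
 * example, requires 256-byte alignment. The loop below simply
 * rounds the CPU address of the HCCA buffer up to the next
 * boundary permitted by that mask.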
*/ mask = Get_OpReg(hcr_HCCA); mutex_exit(&ohcip->ohci_int_mutex); while (mask == 0) { delay(drv_usectohz(OHCI_TIMEWAIT)); mask = Get_OpReg(hcr_HCCA); } mutex_enter(&ohcip->ohci_int_mutex); ASSERT(mask != 0); addr = (uintptr_t)ohcip->ohci_hccap; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: addr=0x%lx, mask=0x%x", addr, mask); while (addr & (~mask)) { addr++; } ohcip->ohci_hccap = (ohci_hcca_t *)addr; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: Real length %lu", real_length); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: virtual hcca 0x%p", (void *)ohcip->ohci_hccap); /* Map the whole HCCA into the I/O address space */ result = ddi_dma_addr_bind_handle(ohcip->ohci_hcca_dma_handle, NULL, (caddr_t)ohcip->ohci_hccap, real_length, DDI_DMA_RDWR | DDI_DMA_CONSISTENT, DDI_DMA_SLEEP, NULL, &ohcip->ohci_hcca_cookie, &ccount); if (result == DDI_DMA_MAPPED) { /* The cookie count should be 1 */ if (ccount != 1) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: More than 1 cookie"); return (DDI_FAILURE); } } else { ohci_decode_ddi_dma_addr_bind_handle_result(ohcip, result); return (DDI_FAILURE); } /* * DMA addresses for HCCA are bound */ ohcip->ohci_dma_addr_bind_flag |= OHCI_HCCA_DMA_BOUND; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: physical 0x%p", (void *)(uintptr_t)ohcip->ohci_hcca_cookie.dmac_address); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: size %lu", ohcip->ohci_hcca_cookie.dmac_size); /* Initialize the interrupt lists */ ohci_build_interrupt_lattice(ohcip); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_init_hcca: End"); return (DDI_SUCCESS); } /* * ohci_build_interrupt_lattice: * * Construct the interrupt lattice tree using static Endpoint Descriptors * (ED). This interrupt lattice tree will have total of 32 interrupt ED * lists and the Host Controller (HC) processes one interrupt ED list in * every frame. The lower five bits of the current frame number indexes * into an array of 32 interrupt Endpoint Descriptor lists found in the * HCCA. */ static void ohci_build_interrupt_lattice(ohci_state_t *ohcip) { ohci_ed_t *list_array = ohcip->ohci_ed_pool_addr; int half_list = NUM_INTR_ED_LISTS / 2; ohci_hcca_t *hccap = ohcip->ohci_hccap; uintptr_t addr; int i; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_build_interrupt_lattice:"); /* * Reserve the first 31 Endpoint Descriptor (ED) structures * in the pool as static endpoints & these are required for * constructing interrupt lattice tree. */ for (i = 0; i < NUM_STATIC_NODES; i++) { Set_ED(list_array[i].hced_ctrl, HC_EPT_sKip); Set_ED(list_array[i].hced_state, HC_EPT_STATIC); } /* Build the interrupt lattice tree */ for (i = 0; i < half_list - 1; i++) { /* * The next pointer in the host controller endpoint * descriptor must contain an iommu address. Calculate * the offset into the cpu address and add this to the * starting iommu address. */ addr = ohci_ed_cpu_to_iommu(ohcip, (ohci_ed_t *)&list_array[i]); Set_ED(list_array[2*i + 1].hced_next, addr); Set_ED(list_array[2*i + 2].hced_next, addr); } /* * Initialize the interrupt list in the HCCA so that it points * to the bottom of the tree. 
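 *
 * The bit-reversed ohci_index[] table spreads consecutive frame
 * numbers evenly across the lattice: with NUM_INTR_ED_LISTS at 32,
 * the 16 leaves occupy pool entries 15 through 30, and, for
 * example, HccaIntTble[0] and HccaIntTble[16] both point at leaf
 * entry 15 + 0x0 while HccaIntTble[1] and HccaIntTble[17] point at
 * leaf entry 15 + 0x8.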
*/ for (i = 0; i < half_list; i++) { addr = ohci_ed_cpu_to_iommu(ohcip, (ohci_ed_t *)&list_array[half_list - 1 + ohci_index[i]]); ASSERT(Get_ED(list_array[half_list - 1 + ohci_index[i]].hced_ctrl)); ASSERT(addr != 0); Set_HCCA(hccap->HccaIntTble[i], addr); Set_HCCA(hccap->HccaIntTble[i + half_list], addr); } } /* * ohci_take_control: * * Take control of the host controller. OpenHCI allows for optional support * of legacy devices through the use of System Management Mode software and * system Management interrupt hardware. See section 5.1.1.3 of the OpenHCI * spec for more details. */ static int ohci_take_control(ohci_state_t *ohcip) { #if defined(__x86) uint32_t hcr_control_val; uint32_t hcr_cmd_status_val; int wait; #endif /* __x86 */ USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control:"); #if defined(__x86) /* * On x86, we must tell the BIOS we want the controller, * and wait for it to respond that we can have it. */ hcr_control_val = Get_OpReg(hcr_control); if ((hcr_control_val & HCR_CONTROL_IR) == 0) { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control: InterruptRouting off\n"); return (DDI_SUCCESS); } /* attempt the OwnershipChange request */ hcr_cmd_status_val = Get_OpReg(hcr_cmd_status); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control: hcr_cmd_status: 0x%x\n", hcr_cmd_status_val); hcr_cmd_status_val |= HCR_STATUS_OCR; Set_OpReg(hcr_cmd_status, hcr_cmd_status_val); mutex_exit(&ohcip->ohci_int_mutex); /* now wait for 5 seconds for InterruptRouting to go away */ for (wait = 0; wait < 5000; wait++) { if ((Get_OpReg(hcr_control) & HCR_CONTROL_IR) == 0) break; delay(drv_usectohz(1000)); } mutex_enter(&ohcip->ohci_int_mutex); if (wait >= 5000) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control: couldn't take control from BIOS\n"); return (DDI_FAILURE); } #else /* __x86 */ /* * On Sparc, there won't be special System Management Mode * hardware for legacy devices, while the x86 platforms may * have to deal with this. This function may be platform * specific. * * The interrupt routing bit should not be set. */ if (Get_OpReg(hcr_control) & HCR_CONTROL_IR) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control: Routing bit set"); return (DDI_FAILURE); } #endif /* __x86 */ USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_take_control: End"); return (DDI_SUCCESS); } /* * ohci_pm_support: * always return success since PM has been quite reliable on ohci */ /*ARGSUSED*/ int ohci_hcdi_pm_support(dev_info_t *dip) { return (USB_SUCCESS); } /* * ohci_alloc_hcdi_ops: * * The HCDI interfaces or entry points are the software interfaces used by * the Universal Serial Bus Driver (USBA) to access the services of the * Host Controller Driver (HCD). During HCD initialization, inform USBA * about all available HCDI interfaces or entry points. 
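 *
 * (The ops vector allocated below is the same one later passed to
 * usba_hcdi_register() from ohci_attach(), so it must be fully
 * populated before the instance is made known to USBA.)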
*/ static usba_hcdi_ops_t * ohci_alloc_hcdi_ops(ohci_state_t *ohcip) { usba_hcdi_ops_t *usba_hcdi_ops; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_alloc_hcdi_ops:"); usba_hcdi_ops = usba_alloc_hcdi_ops(); usba_hcdi_ops->usba_hcdi_ops_version = HCDI_OPS_VERSION; usba_hcdi_ops->usba_hcdi_pm_support = ohci_hcdi_pm_support; usba_hcdi_ops->usba_hcdi_pipe_open = ohci_hcdi_pipe_open; usba_hcdi_ops->usba_hcdi_pipe_close = ohci_hcdi_pipe_close; usba_hcdi_ops->usba_hcdi_pipe_reset = ohci_hcdi_pipe_reset; usba_hcdi_ops->usba_hcdi_pipe_reset_data_toggle = ohci_hcdi_pipe_reset_data_toggle; usba_hcdi_ops->usba_hcdi_pipe_ctrl_xfer = ohci_hcdi_pipe_ctrl_xfer; usba_hcdi_ops->usba_hcdi_pipe_bulk_xfer = ohci_hcdi_pipe_bulk_xfer; usba_hcdi_ops->usba_hcdi_pipe_intr_xfer = ohci_hcdi_pipe_intr_xfer; usba_hcdi_ops->usba_hcdi_pipe_isoc_xfer = ohci_hcdi_pipe_isoc_xfer; usba_hcdi_ops->usba_hcdi_bulk_transfer_size = ohci_hcdi_bulk_transfer_size; usba_hcdi_ops->usba_hcdi_pipe_stop_intr_polling = ohci_hcdi_pipe_stop_intr_polling; usba_hcdi_ops->usba_hcdi_pipe_stop_isoc_polling = ohci_hcdi_pipe_stop_isoc_polling; usba_hcdi_ops->usba_hcdi_get_current_frame_number = ohci_hcdi_get_current_frame_number; usba_hcdi_ops->usba_hcdi_get_max_isoc_pkts = ohci_hcdi_get_max_isoc_pkts; usba_hcdi_ops->usba_hcdi_console_input_init = ohci_hcdi_polled_input_init; usba_hcdi_ops->usba_hcdi_console_input_enter = ohci_hcdi_polled_input_enter; usba_hcdi_ops->usba_hcdi_console_read = ohci_hcdi_polled_read; usba_hcdi_ops->usba_hcdi_console_input_exit = ohci_hcdi_polled_input_exit; usba_hcdi_ops->usba_hcdi_console_input_fini = ohci_hcdi_polled_input_fini; usba_hcdi_ops->usba_hcdi_console_output_init = ohci_hcdi_polled_output_init; usba_hcdi_ops->usba_hcdi_console_output_enter = ohci_hcdi_polled_output_enter; usba_hcdi_ops->usba_hcdi_console_write = ohci_hcdi_polled_write; usba_hcdi_ops->usba_hcdi_console_output_exit = ohci_hcdi_polled_output_exit; usba_hcdi_ops->usba_hcdi_console_output_fini = ohci_hcdi_polled_output_fini; return (usba_hcdi_ops); } /* * Host Controller Driver (HCD) deinitialization functions */ /* * ohci_cleanup: * * Cleanup on attach failure or detach */ static int ohci_cleanup(ohci_state_t *ohcip) { ohci_trans_wrapper_t *tw; ohci_pipe_private_t *pp; ohci_td_t *td; int i, state, rval; int flags = ohcip->ohci_flags; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cleanup:"); if (flags & OHCI_RHREG) { /* Unload the root hub driver */ if (ohci_unload_root_hub_driver(ohcip) != USB_SUCCESS) { return (DDI_FAILURE); } } if (flags & OHCI_USBAREG) { /* Unregister this HCD instance with USBA */ usba_hcdi_unregister(ohcip->ohci_dip); } if (flags & OHCI_INTR) { mutex_enter(&ohcip->ohci_int_mutex); /* Disable all HC ED list processing */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_CLE | HCR_CONTROL_BLE | HCR_CONTROL_PLE | HCR_CONTROL_IE))); /* Disable all HC interrupts */ Set_OpReg(hcr_intr_disable, (HCR_INTR_SO | HCR_INTR_WDH | HCR_INTR_RD | HCR_INTR_UE)); /* Wait for the next SOF */ (void) ohci_wait_for_sof(ohcip); /* Disable Master and SOF interrupts */ Set_OpReg(hcr_intr_disable, (HCR_INTR_MIE | HCR_INTR_SOF)); /* Set the Host Controller Functional State to Reset */ Set_OpReg(hcr_control, ((Get_OpReg(hcr_control) & (~HCR_CONTROL_HCFS)) | HCR_CONTROL_RESET)); mutex_exit(&ohcip->ohci_int_mutex); /* Wait for sometime */ delay(drv_usectohz(OHCI_TIMEWAIT)); mutex_enter(&ohcip->ohci_int_mutex); /* * Workaround for ULI1575 chipset. 
Following OHCI Operational * Memory Registers are not cleared to their default value * on reset. Explicitly set the registers to default value. */ if (ohcip->ohci_vendor_id == PCI_ULI1575_VENID && ohcip->ohci_device_id == PCI_ULI1575_DEVID) { Set_OpReg(hcr_control, HCR_CONTROL_DEFAULT); Set_OpReg(hcr_intr_enable, HCR_INT_ENABLE_DEFAULT); Set_OpReg(hcr_HCCA, HCR_HCCA_DEFAULT); Set_OpReg(hcr_ctrl_head, HCR_CONTROL_HEAD_ED_DEFAULT); Set_OpReg(hcr_bulk_head, HCR_BULK_HEAD_ED_DEFAULT); Set_OpReg(hcr_frame_interval, HCR_FRAME_INTERVAL_DEFAULT); Set_OpReg(hcr_periodic_strt, HCR_PERIODIC_START_DEFAULT); } mutex_exit(&ohcip->ohci_int_mutex); ohci_rem_intrs(ohcip); } /* Unmap the OHCI registers */ if (ohcip->ohci_regs_handle) { /* Reset the host controller */ Set_OpReg(hcr_cmd_status, HCR_STATUS_RESET); ddi_regs_map_free(&ohcip->ohci_regs_handle); } if (ohcip->ohci_config_handle) { pci_config_teardown(&ohcip->ohci_config_handle); } /* Free all the buffers */ if (ohcip->ohci_td_pool_addr && ohcip->ohci_td_pool_mem_handle) { for (i = 0; i < ohci_td_pool_size; i ++) { td = &ohcip->ohci_td_pool_addr[i]; state = Get_TD(ohcip->ohci_td_pool_addr[i].hctd_state); if ((state != HC_TD_FREE) && (state != HC_TD_DUMMY) && (td->hctd_trans_wrapper)) { mutex_enter(&ohcip->ohci_int_mutex); tw = (ohci_trans_wrapper_t *) OHCI_LOOKUP_ID((uint32_t) Get_TD(td->hctd_trans_wrapper)); /* Obtain the pipe private structure */ pp = tw->tw_pipe_private; /* Stop the the transfer timer */ ohci_stop_xfer_timer(ohcip, tw, OHCI_REMOVE_XFER_ALWAYS); ohci_deallocate_tw_resources(ohcip, pp, tw); mutex_exit(&ohcip->ohci_int_mutex); } } /* * If OHCI_TD_POOL_BOUND flag is set, then unbind * the handle for TD pools. */ if ((ohcip->ohci_dma_addr_bind_flag & OHCI_TD_POOL_BOUND) == OHCI_TD_POOL_BOUND) { rval = ddi_dma_unbind_handle( ohcip->ohci_td_pool_dma_handle); ASSERT(rval == DDI_SUCCESS); } ddi_dma_mem_free(&ohcip->ohci_td_pool_mem_handle); } /* Free the TD pool */ if (ohcip->ohci_td_pool_dma_handle) { ddi_dma_free_handle(&ohcip->ohci_td_pool_dma_handle); } if (ohcip->ohci_ed_pool_addr && ohcip->ohci_ed_pool_mem_handle) { /* * If OHCI_ED_POOL_BOUND flag is set, then unbind * the handle for ED pools. */ if ((ohcip->ohci_dma_addr_bind_flag & OHCI_ED_POOL_BOUND) == OHCI_ED_POOL_BOUND) { rval = ddi_dma_unbind_handle( ohcip->ohci_ed_pool_dma_handle); ASSERT(rval == DDI_SUCCESS); } ddi_dma_mem_free(&ohcip->ohci_ed_pool_mem_handle); } /* Free the ED pool */ if (ohcip->ohci_ed_pool_dma_handle) { ddi_dma_free_handle(&ohcip->ohci_ed_pool_dma_handle); } /* Free the HCCA area */ if (ohcip->ohci_hccap && ohcip->ohci_hcca_mem_handle) { /* * If OHCI_HCCA_DMA_BOUND flag is set, then unbind * the handle for HCCA. 
*/ if ((ohcip->ohci_dma_addr_bind_flag & OHCI_HCCA_DMA_BOUND) == OHCI_HCCA_DMA_BOUND) { rval = ddi_dma_unbind_handle( ohcip->ohci_hcca_dma_handle); ASSERT(rval == DDI_SUCCESS); } ddi_dma_mem_free(&ohcip->ohci_hcca_mem_handle); } if (ohcip->ohci_hcca_dma_handle) { ddi_dma_free_handle(&ohcip->ohci_hcca_dma_handle); } if (flags & OHCI_INTR) { /* Destroy the mutex */ mutex_destroy(&ohcip->ohci_int_mutex); /* Destroy the SOF condition varibale */ cv_destroy(&ohcip->ohci_SOF_cv); /* Destroy the serialize opens and closes semaphore */ sema_destroy(&ohcip->ohci_ocsem); } /* clean up kstat structs */ ohci_destroy_stats(ohcip); /* Free ohci hcdi ops */ if (ohcip->ohci_hcdi_ops) { usba_free_hcdi_ops(ohcip->ohci_hcdi_ops); } if (flags & OHCI_ZALLOC) { usb_free_log_hdl(ohcip->ohci_log_hdl); /* Remove all properties that might have been created */ ddi_prop_remove_all(ohcip->ohci_dip); /* Free the soft state */ ddi_soft_state_free(ohci_statep, ddi_get_instance(ohcip->ohci_dip)); } return (DDI_SUCCESS); } /* * ohci_rem_intrs: * * Unregister FIXED or MSI interrupts */ static void ohci_rem_intrs(ohci_state_t *ohcip) { int i; USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_rem_intrs: interrupt type 0x%x", ohcip->ohci_intr_type); /* Disable all interrupts */ if (ohcip->ohci_intr_cap & DDI_INTR_FLAG_BLOCK) { (void) ddi_intr_block_disable(ohcip->ohci_htable, ohcip->ohci_intr_cnt); } else { for (i = 0; i < ohcip->ohci_intr_cnt; i++) { (void) ddi_intr_disable(ohcip->ohci_htable[i]); } } /* Call ddi_intr_remove_handler() */ for (i = 0; i < ohcip->ohci_intr_cnt; i++) { (void) ddi_intr_remove_handler(ohcip->ohci_htable[i]); (void) ddi_intr_free(ohcip->ohci_htable[i]); } kmem_free(ohcip->ohci_htable, ohcip->ohci_intr_cnt * sizeof (ddi_intr_handle_t)); } /* * ohci_cpr_suspend */ static int ohci_cpr_suspend(ohci_state_t *ohcip) { USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend:"); /* Call into the root hub and suspend it */ if (usba_hubdi_detach(ohcip->ohci_dip, DDI_SUSPEND) != DDI_SUCCESS) { return (DDI_FAILURE); } /* Only root hub's intr pipe should be open at this time */ mutex_enter(&ohcip->ohci_int_mutex); if (ohcip->ohci_open_pipe_count > 1) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: fails as open pipe count = %d", ohcip->ohci_open_pipe_count); mutex_exit(&ohcip->ohci_int_mutex); return (DDI_FAILURE); } USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: Disable HC ED list processing"); /* Disable all HC ED list processing */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_CLE | HCR_CONTROL_BLE | HCR_CONTROL_PLE | HCR_CONTROL_IE))); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: Disable HC interrupts"); /* Disable all HC interrupts */ Set_OpReg(hcr_intr_disable, ~(HCR_INTR_MIE|HCR_INTR_SOF)); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: Wait for the next SOF"); /* Wait for the next SOF */ if (ohci_wait_for_sof(ohcip) != USB_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: ohci host controller suspend failed"); mutex_exit(&ohcip->ohci_int_mutex); return (DDI_FAILURE); } USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_suspend: Disable Master interrupt"); /* * Disable Master interrupt so that ohci driver don't * get any ohci interrupts. */ Set_OpReg(hcr_intr_disable, HCR_INTR_MIE); /* * Suspend the ohci host controller * if usb keyboard is not connected. 
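 *
 * (Note that force_ohci_off is initialized to 1 at the top of this
 * file, so with the default tunables the controller is suspended
 * here even when a polled keyboard is attached.)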
*/ if (ohcip->ohci_polled_kbd_count == 0 || force_ohci_off != 0) { Set_OpReg(hcr_control, HCR_CONTROL_SUSPD); } /* Set host controller soft state to suspend */ ohcip->ohci_hc_soft_state = OHCI_CTLR_SUSPEND_STATE; mutex_exit(&ohcip->ohci_int_mutex); return (DDI_SUCCESS); } /* * ohci_cpr_resume */ static int ohci_cpr_resume(ohci_state_t *ohcip) { mutex_enter(&ohcip->ohci_int_mutex); USB_DPRINTF_L4(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_resume: Restart the controller"); /* Cleanup ohci specific information across cpr */ ohci_cpr_cleanup(ohcip); /* Restart the controller */ if (ohci_init_ctlr(ohcip) != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "ohci_cpr_resume: ohci host controller resume failed"); mutex_exit(&ohcip->ohci_int_mutex); return (DDI_FAILURE); } mutex_exit(&ohcip->ohci_int_mutex); /* Now resume the root hub */ if (usba_hubdi_attach(ohcip->ohci_dip, DDI_RESUME) != DDI_SUCCESS) { return (DDI_FAILURE); } return (DDI_SUCCESS); } /* * HCDI entry points * * The Host Controller Driver Interfaces (HCDI) are the software interfaces * between the Universal Serial Bus Layer (USBA) and the Host Controller * Driver (HCD). The HCDI interfaces or entry points are subject to change. */ /* * ohci_hcdi_pipe_open: * * Member of HCD Ops structure and called during client specific pipe open. * Add the pipe to the data structure representing the device and allocate * bandwidth for the pipe if it is an interrupt or isochronous endpoint. */ static int ohci_hcdi_pipe_open( usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); usb_ep_descr_t *epdt = &ph->p_ep; int rval, error = USB_SUCCESS; int kmflag = (flags & USB_FLAGS_SLEEP) ? KM_SLEEP : KM_NOSLEEP; uint_t node = 0; ohci_pipe_private_t *pp; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_open: addr = 0x%x, ep%d", ph->p_usba_device->usb_addr, epdt->bEndpointAddress & USB_EP_NUM_MASK); sema_p(&ohcip->ohci_ocsem); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); mutex_exit(&ohcip->ohci_int_mutex); if (rval != USB_SUCCESS) { sema_v(&ohcip->ohci_ocsem); return (rval); } /* * Check and handle root hub pipe open. */ if (ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) { mutex_enter(&ohcip->ohci_int_mutex); error = ohci_handle_root_hub_pipe_open(ph, flags); mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (error); } /* * Opening of pipes other than the root hub pipe is * handled below. Check whether the pipe is already opened. */ if (ph->p_hcd_private) { USB_DPRINTF_L2(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_open: Pipe is already opened"); sema_v(&ohcip->ohci_ocsem); return (USB_FAILURE); } /* * A portion of the bandwidth is reserved for the non-periodic * transfers, i.e. control and bulk transfers, in each one * millisecond frame period & usually it will be 10% of the frame * period. Hence there is no need to check for the available * bandwidth before adding the control or bulk endpoints. * * There is a need to check for the available bandwidth before * adding the periodic transfers, i.e. interrupt & isochronous, * since all these periodic transfers are guaranteed transfers. * Usually 90% of the total frame time is reserved for periodic * transfers.
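 *
 * (Worked numbers, editorial: at full speed the bus moves 12 Mb/s, so
 * a 1ms frame carries 12000 bits = 1500 bytes. Reserving 90% of that
 * for periodic traffic leaves a budget on the order of:
 *
 *	enum { FS_BYTES_PER_FRAME = 1500 };
 *	uint_t periodic_budget = (FS_BYTES_PER_FRAME * 90) / 100;  -- 1350
 *
 * The driver's actual per-frame limit is the MAX_PERIODIC_BANDWIDTH
 * macro checked in ohci_allocate_bandwidth() below; the figure above
 * is only an approximation of where that limit comes from.)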
*/ if (OHCI_PERIODIC_ENDPOINT(epdt)) { mutex_enter(&ohcip->ohci_int_mutex); mutex_enter(&ph->p_mutex); error = ohci_allocate_bandwidth(ohcip, ph, &node); if (error != USB_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_open: Bandwidth allocation failed"); mutex_exit(&ph->p_mutex); mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (error); } mutex_exit(&ph->p_mutex); mutex_exit(&ohcip->ohci_int_mutex); } /* Create the HCD pipe private structure */ pp = kmem_zalloc(sizeof (ohci_pipe_private_t), kmflag); /* * Return failure if ohci pipe private * structure allocation fails. */ if (pp == NULL) { mutex_enter(&ohcip->ohci_int_mutex); /* Deallocate bandwidth */ if (OHCI_PERIODIC_ENDPOINT(epdt)) { mutex_enter(&ph->p_mutex); ohci_deallocate_bandwidth(ohcip, ph); mutex_exit(&ph->p_mutex); } mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (USB_NO_RESOURCES); } mutex_enter(&ohcip->ohci_int_mutex); /* Store the node in the interrupt lattice */ pp->pp_node = node; /* Create prototype for xfer completion condition variable */ cv_init(&pp->pp_xfer_cmpl_cv, NULL, CV_DRIVER, NULL); /* Set the state of pipe as idle */ pp->pp_state = OHCI_PIPE_STATE_IDLE; /* Store a pointer to the pipe handle */ pp->pp_pipe_handle = ph; mutex_enter(&ph->p_mutex); /* Store the pointer in the pipe handle */ ph->p_hcd_private = (usb_opaque_t)pp; /* Store a copy of the pipe policy */ bcopy(&ph->p_policy, &pp->pp_policy, sizeof (usb_pipe_policy_t)); mutex_exit(&ph->p_mutex); /* Allocate the host controller endpoint descriptor */ pp->pp_ept = ohci_alloc_hc_ed(ohcip, ph); if (pp->pp_ept == NULL) { USB_DPRINTF_L2(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_open: ED allocation failed"); mutex_enter(&ph->p_mutex); /* Deallocate bandwidth */ if (OHCI_PERIODIC_ENDPOINT(epdt)) { ohci_deallocate_bandwidth(ohcip, ph); } /* Destroy the xfer completion condition variable */ cv_destroy(&pp->pp_xfer_cmpl_cv); /* * Deallocate the hcd private portion * of the pipe handle. */ kmem_free(ph->p_hcd_private, sizeof (ohci_pipe_private_t)); /* * Set the private structure in the * pipe handle equal to NULL. */ ph->p_hcd_private = NULL; mutex_exit(&ph->p_mutex); mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (USB_NO_RESOURCES); } /* Restore the data toggle information */ ohci_restore_data_toggle(ohcip, ph); /* * Insert the endpoint onto the host controller's * appropriate endpoint list. The host controller * will not schedule this endpoint and will not have * any TD's to process. */ ohci_insert_ed(ohcip, ph); USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_open: ph = 0x%p", (void *)ph); ohcip->ohci_open_pipe_count++; mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (USB_SUCCESS); } /* * ohci_hcdi_pipe_close: * * Member of HCD Ops structure and called during the client specific pipe * close. Remove the pipe and the data structure representing the device. * Deallocate bandwidth for the pipe if it is an interrupt or isochronous * endpoint.
*/ /* ARGSUSED */ static int ohci_hcdi_pipe_close( usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_close: addr = 0x%x, ep%d", ph->p_usba_device->usb_addr, eptd->bEndpointAddress & USB_EP_NUM_MASK); sema_p(&ohcip->ohci_ocsem); /* Check and handle root hub pipe close */ if (ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) { mutex_enter(&ohcip->ohci_int_mutex); error = ohci_handle_root_hub_pipe_close(ph); mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (error); } ASSERT(ph->p_hcd_private != NULL); mutex_enter(&ohcip->ohci_int_mutex); /* Set pipe state to pipe close */ pp->pp_state = OHCI_PIPE_STATE_CLOSE; ohci_pipe_cleanup(ohcip, ph); /* * Remove the endpoint descriptor from the Host * Controller's appropriate endpoint list. */ ohci_remove_ed(ohcip, pp); /* Deallocate bandwidth */ if (OHCI_PERIODIC_ENDPOINT(eptd)) { mutex_enter(&ph->p_mutex); ohci_deallocate_bandwidth(ohcip, ph); mutex_exit(&ph->p_mutex); } mutex_enter(&ph->p_mutex); /* Destroy the xfer completion condition variable */ cv_destroy(&pp->pp_xfer_cmpl_cv); /* * Deallocate the hcd private portion * of the pipe handle. */ kmem_free(ph->p_hcd_private, sizeof (ohci_pipe_private_t)); ph->p_hcd_private = NULL; mutex_exit(&ph->p_mutex); USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_close: ph = 0x%p", (void *)ph); ohcip->ohci_open_pipe_count--; mutex_exit(&ohcip->ohci_int_mutex); sema_v(&ohcip->ohci_ocsem); return (error); } /* * ohci_hcdi_pipe_reset: */ /* ARGSUSED */ static int ohci_hcdi_pipe_reset( usba_pipe_handle_data_t *ph, usb_flags_t usb_flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_reset: ph = 0x%p ", (void *)ph); /* * Check and handle root hub pipe reset.
*/ if (ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) { error = ohci_handle_root_hub_pipe_reset(ph, usb_flags); return (error); } mutex_enter(&ohcip->ohci_int_mutex); /* Set pipe state to pipe reset */ pp->pp_state = OHCI_PIPE_STATE_RESET; ohci_pipe_cleanup(ohcip, ph); mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_pipe_reset_data_toggle: */ void ohci_hcdi_pipe_reset_data_toggle( usba_pipe_handle_data_t *ph) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_reset_data_toggle:"); mutex_enter(&ohcip->ohci_int_mutex); mutex_enter(&ph->p_mutex); usba_hcdi_set_data_toggle(ph->p_usba_device, ph->p_ep.bEndpointAddress, DATA0); mutex_exit(&ph->p_mutex); Set_ED(pp->pp_ept->hced_headp, Get_ED(pp->pp_ept->hced_headp) & (~HC_EPT_Carry)); mutex_exit(&ohcip->ohci_int_mutex); } /* * ohci_hcdi_pipe_ctrl_xfer: */ static int ohci_hcdi_pipe_ctrl_xfer( usba_pipe_handle_data_t *ph, usb_ctrl_req_t *ctrl_reqp, usb_flags_t usb_flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; int rval; int error = USB_SUCCESS; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_ctrl_xfer: ph = 0x%p reqp = 0x%p flags = 0x%x", (void *)ph, (void *)ctrl_reqp, usb_flags); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); mutex_exit(&ohcip->ohci_int_mutex); if (rval != USB_SUCCESS) { return (rval); } /* * Check and handle root hub control request. */ if (ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) { error = ohci_handle_root_hub_request(ohcip, ph, ctrl_reqp); return (error); } mutex_enter(&ohcip->ohci_int_mutex); /* * Check whether pipe is in halted state. 
*/ if (pp->pp_state == OHCI_PIPE_STATE_ERROR) { USB_DPRINTF_L2(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_ctrl_xfer:" "Pipe is in error state, need pipe reset to continue"); mutex_exit(&ohcip->ohci_int_mutex); return (USB_FAILURE); } /* Allocate a transfer wrapper */ if ((tw = ohci_allocate_ctrl_resources(ohcip, pp, ctrl_reqp, usb_flags)) == NULL) { error = USB_NO_RESOURCES; } else { /* Insert the td's on the endpoint */ ohci_insert_ctrl_req(ohcip, ph, ctrl_reqp, tw, usb_flags); } mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_bulk_transfer_size: * * Return maximum bulk transfer size */ /* ARGSUSED */ static int ohci_hcdi_bulk_transfer_size( usba_device_t *usba_device, size_t *size) { ohci_state_t *ohcip = ohci_obtain_state( usba_device->usb_root_hub_dip); int rval; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_bulk_transfer_size:"); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); mutex_exit(&ohcip->ohci_int_mutex); if (rval != USB_SUCCESS) { return (rval); } *size = OHCI_MAX_BULK_XFER_SIZE; return (USB_SUCCESS); } /* * ohci_hcdi_pipe_bulk_xfer: */ static int ohci_hcdi_pipe_bulk_xfer( usba_pipe_handle_data_t *ph, usb_bulk_req_t *bulk_reqp, usb_flags_t usb_flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; int rval, error = USB_SUCCESS; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_bulk_xfer: ph = 0x%p reqp = 0x%p flags = 0x%x", (void *)ph, (void *)bulk_reqp, usb_flags); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); return (rval); } /* * Check whether pipe is in halted state. 
*/ if (pp->pp_state == OHCI_PIPE_STATE_ERROR) { USB_DPRINTF_L2(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_bulk_xfer:" "Pipe is in error state, need pipe reset to continue"); mutex_exit(&ohcip->ohci_int_mutex); return (USB_FAILURE); } /* Allocate a transfer wrapper */ if ((tw = ohci_allocate_bulk_resources(ohcip, pp, bulk_reqp, usb_flags)) == NULL) { error = USB_NO_RESOURCES; } else { /* Add the TD into the Host Controller's bulk list */ ohci_insert_bulk_req(ohcip, ph, bulk_reqp, tw, usb_flags); } mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_pipe_intr_xfer: */ static int ohci_hcdi_pipe_intr_xfer( usba_pipe_handle_data_t *ph, usb_intr_req_t *intr_reqp, usb_flags_t usb_flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); int pipe_dir, rval, error = USB_SUCCESS; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_intr_xfer: ph = 0x%p reqp = 0x%p flags = 0x%x", (void *)ph, (void *)intr_reqp, usb_flags); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); return (rval); } /* Get the pipe direction */ pipe_dir = ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK; if (pipe_dir == USB_EP_DIR_IN) { error = ohci_start_periodic_pipe_polling(ohcip, ph, (usb_opaque_t)intr_reqp, usb_flags); } else { /* Allocate transaction resources */ if ((tw = ohci_allocate_intr_resources(ohcip, ph, intr_reqp, usb_flags)) == NULL) { error = USB_NO_RESOURCES; } else { ohci_insert_intr_req(ohcip, (ohci_pipe_private_t *)ph->p_hcd_private, tw, usb_flags); } } mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_pipe_stop_intr_polling() */ static int ohci_hcdi_pipe_stop_intr_polling( usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_stop_intr_polling: ph = 0x%p fl = 0x%x", (void *)ph, flags); mutex_enter(&ohcip->ohci_int_mutex); error = ohci_stop_periodic_pipe_polling(ohcip, ph, flags); mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_get_current_frame_number: * * Get the current usb frame number. * Return whether the request is handled successfully. */ static int ohci_hcdi_get_current_frame_number( usba_device_t *usba_device, usb_frame_number_t *frame_number) { ohci_state_t *ohcip = ohci_obtain_state( usba_device->usb_root_hub_dip); int rval; mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); return (rval); } *frame_number = ohci_get_current_frame_number(ohcip); mutex_exit(&ohcip->ohci_int_mutex); USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_get_current_frame_number:" "Current frame number 0x%llx", (unsigned long long)(*frame_number)); return (rval); } /* * ohci_hcdi_get_max_isoc_pkts: * * Get maximum isochronous packets per usb isochronous request. * Return whether the request is handled successfully.
*/ static int ohci_hcdi_get_max_isoc_pkts( usba_device_t *usba_device, uint_t *max_isoc_pkts_per_request) { ohci_state_t *ohcip = ohci_obtain_state( usba_device->usb_root_hub_dip); int rval; mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); mutex_exit(&ohcip->ohci_int_mutex); if (rval != USB_SUCCESS) { return (rval); } *max_isoc_pkts_per_request = OHCI_MAX_ISOC_PKTS_PER_XFER; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_get_max_isoc_pkts: maximum isochronous" "packets per usb isochronous request = 0x%x", *max_isoc_pkts_per_request); return (rval); } /* * ohci_hcdi_pipe_isoc_xfer: */ static int ohci_hcdi_pipe_isoc_xfer( usba_pipe_handle_data_t *ph, usb_isoc_req_t *isoc_reqp, usb_flags_t usb_flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); int error = USB_SUCCESS; int pipe_dir, rval; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_isoc_xfer: ph = 0x%p reqp = 0x%p flags = 0x%x", (void *)ph, (void *)isoc_reqp, usb_flags); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); return (rval); } /* Get the isochronous pipe direction */ pipe_dir = ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_isoc_xfer: isoc_reqp = 0x%p, uf = 0x%x", (void *)isoc_reqp, usb_flags); if (pipe_dir == USB_EP_DIR_IN) { error = ohci_start_periodic_pipe_polling(ohcip, ph, (usb_opaque_t)isoc_reqp, usb_flags); } else { /* Allocate transaction resources */ if ((tw = ohci_allocate_isoc_resources(ohcip, ph, isoc_reqp, usb_flags)) == NULL) { error = USB_NO_RESOURCES; } else { error = ohci_insert_isoc_req(ohcip, (ohci_pipe_private_t *)ph->p_hcd_private, tw, usb_flags); } } mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * ohci_hcdi_pipe_stop_isoc_polling() */ static int ohci_hcdi_pipe_stop_isoc_polling( usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); int rval, error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_HCDI, ohcip->ohci_log_hdl, "ohci_hcdi_pipe_stop_isoc_polling: ph = 0x%p fl = 0x%x", (void *)ph, flags); mutex_enter(&ohcip->ohci_int_mutex); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { mutex_exit(&ohcip->ohci_int_mutex); return (rval); } error = ohci_stop_periodic_pipe_polling(ohcip, ph, flags); mutex_exit(&ohcip->ohci_int_mutex); return (error); } /* * Bandwidth Allocation functions */ /* * ohci_allocate_bandwidth: * * Figure out whether or not this interval may be supported. Return the index * into the lattice if it can be supported. Return allocation failure if it * cannot be supported. * * The lattice structure looks like this with the bottom leaf actually * being an array. There is a total of 63 nodes in this tree. The lattice tree * itself is 0 based, and the bottom leaf array is also 0 based. The 0 bucket in * the bottom leaf array is used to store the smallest allocated bandwidth of all * the leaves. * * 0 * 1 2 * 3 4 5 6 * ... * (32 33 ... 62 63) <-- last row does not exist in lattice, but an array * 0 1 2 3 ... 30 31 * * We keep track of the bandwidth that each leaf uses. First we search for the * first leaf with the smallest used bandwidth. Based on that leaf we find the * parent node of that leaf based on the interval time.
* * From the parent node, we find all the leafs of that subtree and update the * additional bandwidth needed. In order to balance the load the leaves are not * executed directly from left to right, but scattered. For a better picture * refer to Section 3.3.2 in the OpenHCI 1.0 spec, there should be a figure * showing the Interrupt ED Structure. */ static int ohci_allocate_bandwidth( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, uint_t *node) { int interval, error, i; uint_t min, min_index, height; uint_t leftmost, list, bandwidth; usb_ep_descr_t *endpoint = &ph->p_ep; /* This routine is protected by the ohci_int_mutex */ ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Calculate the length in bytes of a transaction on this * periodic endpoint. */ mutex_enter(&ph->p_usba_device->usb_mutex); error = ohci_compute_total_bandwidth( endpoint, ph->p_usba_device->usb_port_status, &bandwidth); mutex_exit(&ph->p_usba_device->usb_mutex); /* * If length is zero, then, it means endpoint maximum packet * supported is zero. In that case, return failure without * allocating any bandwidth. */ if (error != USB_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_BW, ohcip->ohci_log_hdl, "ohci_allocate_bandwidth: Periodic endpoint with " "zero endpoint maximum packet size is not supported"); return (USB_NOT_SUPPORTED); } /* * If the length in bytes plus the allocated bandwidth exceeds * the maximum, return bandwidth allocation failure. */ if ((ohcip->ohci_periodic_minimum_bandwidth + bandwidth) > (MAX_PERIODIC_BANDWIDTH)) { USB_DPRINTF_L2(PRINT_MASK_BW, ohcip->ohci_log_hdl, "ohci_allocate_bandwidth: Reached maximum " "bandwidth value and cannot allocate bandwidth " "for a given periodic endpoint"); return (USB_NO_BANDWIDTH); } /* Adjust polling interval to be a power of 2 */ mutex_enter(&ph->p_usba_device->usb_mutex); interval = ohci_adjust_polling_interval(ohcip, endpoint, ph->p_usba_device->usb_port_status); mutex_exit(&ph->p_usba_device->usb_mutex); /* * If this interval can't be supported, * return allocation failure. */ if (interval == USB_FAILURE) { return (USB_FAILURE); } USB_DPRINTF_L4(PRINT_MASK_BW, ohcip->ohci_log_hdl, "The new interval is %d", interval); /* Find the leaf with the smallest allocated bandwidth */ min_index = 0; min = ohcip->ohci_periodic_bandwidth[0]; for (i = 1; i < NUM_INTR_ED_LISTS; i++) { if (ohcip->ohci_periodic_bandwidth[i] < min) { min_index = i; min = ohcip->ohci_periodic_bandwidth[i]; } } USB_DPRINTF_L4(PRINT_MASK_BW, ohcip->ohci_log_hdl, "The leaf %d for minimal bandwidth %d", min_index, min); /* Adjust min for the lattice */ min_index = min_index + NUM_INTR_ED_LISTS - 1; /* * Find the index into the lattice given the * leaf with the smallest allocated bandwidth. */ height = ohci_lattice_height(interval); USB_DPRINTF_L4(PRINT_MASK_BW, ohcip->ohci_log_hdl, "The height is %d", height); *node = min_index; for (i = 0; i < height; i++) { *node = ohci_lattice_parent(*node); } USB_DPRINTF_L4(PRINT_MASK_BW, ohcip->ohci_log_hdl, "Real node is %d", *node); /* * Find the leftmost leaf in the subtree * specified by the node. 
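 *
 * (Worked example, editorial; assumes the usual 32-leaf lattice, i.e.
 * TREE_HEIGHT == 5 and NUM_INTR_ED_LISTS == 32:
 *
 *	interval = 8ms      =>  height = 5 - log2(8) = 2
 *	min leaf index = 5  =>  lattice node = 5 + 32 - 1 = 36
 *	parent(36) = 17, parent(17) = 8       -- walk up "height" times
 *	leftmost_leaf(8, 2) = 2^2 * (8 + 1) - 32 = 4
 *
 * so leaves 4, 5, 6 and 7 -- NUM_INTR_ED_LISTS/interval = 4 of them,
 * after remapping through ohci_hcca_leaf_index() -- are the ones
 * charged with this endpoint's bandwidth below.)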
*/ leftmost = ohci_leftmost_leaf(*node, height); USB_DPRINTF_L4(PRINT_MASK_BW, ohcip->ohci_log_hdl, "Leftmost %d", leftmost); for (i = 0; i < (NUM_INTR_ED_LISTS/interval); i++) { list = ohci_hcca_leaf_index(leftmost + i); if ((ohcip->ohci_periodic_bandwidth[list] + bandwidth) > MAX_PERIODIC_BANDWIDTH) { USB_DPRINTF_L2(PRINT_MASK_BW, ohcip->ohci_log_hdl, "ohci_allocate_bandwidth: Reached maximum " "bandwidth value and cannot allocate bandwidth " "for periodic endpoint"); return (USB_NO_BANDWIDTH); } } /* * All the leaves for this node must be updated with the bandwidth. */ for (i = 0; i < (NUM_INTR_ED_LISTS/interval); i++) { list = ohci_hcca_leaf_index(leftmost + i); ohcip->ohci_periodic_bandwidth[list] += bandwidth; } /* Find the leaf with the smallest allocated bandwidth */ min_index = 0; min = ohcip->ohci_periodic_bandwidth[0]; for (i = 1; i < NUM_INTR_ED_LISTS; i++) { if (ohcip->ohci_periodic_bandwidth[i] < min) { min_index = i; min = ohcip->ohci_periodic_bandwidth[i]; } } /* Save the minimum for later use */ ohcip->ohci_periodic_minimum_bandwidth = min; return (USB_SUCCESS); } /* * ohci_deallocate_bandwidth: * * Deallocate bandwidth for the given node in the lattice and the length * of transfer. */ static void ohci_deallocate_bandwidth( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { uint_t min, node, bandwidth; uint_t height, leftmost, list; int i, interval; usb_ep_descr_t *endpoint = &ph->p_ep; ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; /* This routine is protected by the ohci_int_mutex */ ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Obtain the length */ mutex_enter(&ph->p_usba_device->usb_mutex); (void) ohci_compute_total_bandwidth( endpoint, ph->p_usba_device->usb_port_status, &bandwidth); mutex_exit(&ph->p_usba_device->usb_mutex); /* Obtain the node */ node = pp->pp_node; /* Adjust polling interval to be a power of 2 */ mutex_enter(&ph->p_usba_device->usb_mutex); interval = ohci_adjust_polling_interval(ohcip, endpoint, ph->p_usba_device->usb_port_status); mutex_exit(&ph->p_usba_device->usb_mutex); /* Find the height in the tree */ height = ohci_lattice_height(interval); /* * Find the leftmost leaf in the subtree specified by the node */ leftmost = ohci_leftmost_leaf(node, height); /* Delete the bandwidth from the appropriate lists */ for (i = 0; i < (NUM_INTR_ED_LISTS/interval); i++) { list = ohci_hcca_leaf_index(leftmost + i); ohcip->ohci_periodic_bandwidth[list] -= bandwidth; } min = ohcip->ohci_periodic_bandwidth[0]; /* Recompute the minimum */ for (i = 1; i < NUM_INTR_ED_LISTS; i++) { if (ohcip->ohci_periodic_bandwidth[i] < min) { min = ohcip->ohci_periodic_bandwidth[i]; } } /* Save the minimum for later use */ ohcip->ohci_periodic_minimum_bandwidth = min; } /* * ohci_compute_total_bandwidth: * * Given a periodic endpoint (interrupt or isochronous) determine the total * bandwidth for one transaction. The OpenHCI host controller traverses the * endpoint descriptor lists on a first-come-first-served basis. When the HC * services an endpoint, only a single transaction attempt is made. The HC * moves to the next Endpoint Descriptor after the first transaction attempt * rather than finishing the entire Transfer Descriptor. Therefore, when a * Transfer Descriptor is inserted into the lattice, we will only count the * number of bytes for one transaction. * * The following are the formulas used for calculating bandwidth in terms * of bytes, for a single USB full speed and low speed transaction * respectively.
The protocol overheads will be different for each type * of USB transfer and all these formulas & protocol overheads are derived * from section 5.9.3 of the USB Specification & from the Bandwidth * Analysis white paper which is posted on the USB developer forum. * * Full-Speed: * Protocol overhead + ((MaxPacketSize * 7)/6 ) + Host_Delay * * Low-Speed: * Protocol overhead + Hub LS overhead + * (Low-Speed clock * ((MaxPacketSize * 7)/6 )) + Host_Delay */ static int ohci_compute_total_bandwidth( usb_ep_descr_t *endpoint, usb_port_status_t port_status, uint_t *bandwidth) { ushort_t maxpacketsize = endpoint->wMaxPacketSize; /* * If the endpoint's maximum packet size is zero, then return immediately. */ if (maxpacketsize == 0) { return (USB_NOT_SUPPORTED); } /* Add Host Controller specific delay to required bandwidth */ *bandwidth = HOST_CONTROLLER_DELAY; /* Add bit-stuffing overhead */ maxpacketsize = (ushort_t)((maxpacketsize * 7) / 6); if (port_status == USBA_LOW_SPEED_DEV) { /* Low Speed interrupt transaction */ *bandwidth += (LOW_SPEED_PROTO_OVERHEAD + HUB_LOW_SPEED_PROTO_OVERHEAD + (LOW_SPEED_CLOCK * maxpacketsize)); } else { /* Full Speed transaction */ *bandwidth += maxpacketsize; if ((endpoint->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_INTR) { /* Full Speed interrupt transaction */ *bandwidth += FS_NON_ISOC_PROTO_OVERHEAD; } else { /* Isochronous and input transaction */ if ((endpoint->bEndpointAddress & USB_EP_DIR_MASK) == USB_EP_DIR_IN) { *bandwidth += FS_ISOC_INPUT_PROTO_OVERHEAD; } else { /* Isochronous and output transaction */ *bandwidth += FS_ISOC_OUTPUT_PROTO_OVERHEAD; } } } return (USB_SUCCESS); } /* * ohci_adjust_polling_interval: */ static int ohci_adjust_polling_interval( ohci_state_t *ohcip, usb_ep_descr_t *endpoint, usb_port_status_t port_status) { uint_t interval; int i = 0; /* * Get the polling interval from the endpoint descriptor */ interval = endpoint->bInterval; /* * The bInterval value in the endpoint descriptor can range * from 1 to 255ms. The interrupt lattice has 32 leaf nodes, * and the host controller cycles through these nodes every * 32ms. The longest polling interval that the controller * supports is 32ms. */ /* * Return an error if the polling interval is less than 1ms * or greater than 255ms */ if ((interval < MIN_POLL_INTERVAL) || (interval > MAX_POLL_INTERVAL)) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_adjust_polling_interval: " "Endpoint's poll interval must be between %d and %d ms", MIN_POLL_INTERVAL, MAX_POLL_INTERVAL); return (USB_FAILURE); } /* * According to the USB Specification, a full-speed endpoint can * specify a desired polling interval of 1ms to 255ms and low * speed endpoints are limited to specifying only 10ms to * 255ms. But some old keyboards & mice use a polling interval * of 8ms. For compatibility purposes, we use a polling * interval between 8ms & 255ms for low speed endpoints, and the * ohci driver will round any low speed endpoint polling interval * of less than 8ms up to 8ms. */ if ((port_status == USBA_LOW_SPEED_DEV) && (interval < MIN_LOW_SPEED_POLL_INTERVAL)) { USB_DPRINTF_L2(PRINT_MASK_BW, ohcip->ohci_log_hdl, "ohci_adjust_polling_interval: " "Low speed endpoint's poll interval of %d ms " "is below threshold. Rounding up to %d ms", interval, MIN_LOW_SPEED_POLL_INTERVAL); interval = MIN_LOW_SPEED_POLL_INTERVAL; } /* * If polling interval is greater than 32ms, * adjust polling interval equal to 32ms.
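 *
 * (Editorial example: the clamp below plus the power-of-2 rounding at
 * the end of this routine mean, e.g., bInterval = 25 stays under the
 * 32ms clamp and then rounds down to 16ms, while bInterval = 255 is
 * first clamped to 32 and stays 32ms. The rounding loop is equivalent
 * to this sketch:
 *
 *	uint_t p = 1;
 *	while ((p << 1) <= interval)
 *		p <<= 1;	-- p: largest power of 2 <= interval
 *
 * which is what ohci_pow_2(i - 1) returns below.)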
*/ if (interval > NUM_INTR_ED_LISTS) { interval = NUM_INTR_ED_LISTS; } /* * Find the nearest power of 2 that's less * than or equal to interval. */ while ((ohci_pow_2(i)) <= interval) { i++; } return (ohci_pow_2((i - 1))); } /* * ohci_lattice_height: * * Given the requested bandwidth, find the height in the tree at which the * nodes for this bandwidth fall. The height is measured as the number of * nodes from the leaf to the level specified by the bandwidth. The root of the * tree is at height TREE_HEIGHT. */ static uint_t ohci_lattice_height(uint_t interval) { return (TREE_HEIGHT - (ohci_log_2(interval))); } /* * ohci_lattice_parent: */ static uint_t ohci_lattice_parent(uint_t node) { if ((node % 2) == 0) { return ((node/2) - 1); } else { return ((node + 1)/2 - 1); } } /* * ohci_leftmost_leaf: * * Find the leftmost leaf in the subtree specified by the node. Height refers * to number of nodes from the bottom of the tree to the node, including the * node. * * The formula for a zero based tree is: * 2^H * Node + 2^H - 1 * The leaf of the tree is an array, convert the number for the array. * Subtract the size of nodes not in the array * 2^H * Node + 2^H - 1 - (NUM_INTR_ED_LIST - 1) = * 2^H * Node + 2^H - NUM_INTR_ED_LIST = * 2^H * (Node + 1) - NUM_INTR_ED_LIST * 0 * 1 2 * 0 1 2 3 */ static uint_t ohci_leftmost_leaf( uint_t node, uint_t height) { return ((ohci_pow_2(height) * (node + 1)) - NUM_INTR_ED_LISTS); } /* * ohci_hcca_intr_index: * * Given a node in the lattice, find the index for the hcca interrupt table */ static uint_t ohci_hcca_intr_index(uint_t node) { /* * Adjust the node to the array representing * the bottom of the tree. */ node = node - NUM_STATIC_NODES; if ((node % 2) == 0) { return (ohci_index[node / 2]); } else { return (ohci_index[node / 2] + (NUM_INTR_ED_LISTS / 2)); } } /* * ohci_hcca_leaf_index: * * Given a node in the bottom leaf array of the lattice, find the index * for the hcca interrupt table */ static uint_t ohci_hcca_leaf_index(uint_t leaf) { if ((leaf % 2) == 0) { return (ohci_index[leaf / 2]); } else { return (ohci_index[leaf / 2] + (NUM_INTR_ED_LISTS / 2)); } } /* * ohci_pow_2: * * Compute 2 to the power x */ static uint_t ohci_pow_2(uint_t x) { if (x == 0) { return (1); } else { return (2 << (x - 1)); } } /* * ohci_log_2: * * Compute log base 2 of x */ static uint_t ohci_log_2(uint_t x) { int i = 0; while (x != 1) { x = x >> 1; i++; } return (i); } /* * Endpoint Descriptor (ED) manipulation functions */ /* * ohci_alloc_hc_ed: * NOTE: This function is also called from POLLED MODE. * * Allocate an endpoint descriptor (ED) */ ohci_ed_t * ohci_alloc_hc_ed( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { int i, state; ohci_ed_t *hc_ed; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_alloc_hc_ed: ph = 0x%p", (void *)ph); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * The first 31 endpoints in the Endpoint Descriptor (ED) * buffer pool are reserved for building the interrupt lattice * tree. Search for a blank endpoint descriptor in the ED * buffer pool.
*/ for (i = NUM_STATIC_NODES; i < ohci_ed_pool_size; i ++) { state = Get_ED(ohcip->ohci_ed_pool_addr[i].hced_state); if (state == HC_EPT_FREE) { break; } } USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_alloc_hc_ed: Allocated %d", i); if (i == ohci_ed_pool_size) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_alloc_hc_ed: ED exhausted"); return (NULL); } else { hc_ed = &ohcip->ohci_ed_pool_addr[i]; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_alloc_hc_ed: Allocated address 0x%p", (void *)hc_ed); ohci_print_ed(ohcip, hc_ed); /* Unpack the endpoint descriptor into a control field */ if (ph) { if ((ohci_initialize_dummy(ohcip, hc_ed)) == USB_NO_RESOURCES) { bzero((void *)hc_ed, sizeof (ohci_ed_t)); Set_ED(hc_ed->hced_state, HC_EPT_FREE); return (NULL); } Set_ED(hc_ed->hced_prev, 0); Set_ED(hc_ed->hced_next, 0); /* Change ED's state Active */ Set_ED(hc_ed->hced_state, HC_EPT_ACTIVE); Set_ED(hc_ed->hced_ctrl, ohci_unpack_endpoint(ohcip, ph)); } else { Set_ED(hc_ed->hced_ctrl, HC_EPT_sKip); /* Change ED's state Static */ Set_ED(hc_ed->hced_state, HC_EPT_STATIC); } return (hc_ed); } } /* * ohci_unpack_endpoint: * * Unpack the information in the pipe handle and create the first byte * of the Host Controller's (HC) Endpoint Descriptor (ED). */ static uint_t ohci_unpack_endpoint( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { usb_ep_descr_t *endpoint = &ph->p_ep; uint_t maxpacketsize, addr, ctrl = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_unpack_endpoint:"); ctrl = ph->p_usba_device->usb_addr; addr = endpoint->bEndpointAddress; /* Assign the endpoint's address */ ctrl = ctrl | ((addr & USB_EP_NUM_MASK) << HC_EPT_EP_SHFT); /* * Assign the direction. If the endpoint is a control endpoint, * the direction is assigned by the Transfer Descriptor (TD). */ if ((endpoint->bmAttributes & USB_EP_ATTR_MASK) != USB_EP_ATTR_CONTROL) { if (addr & USB_EP_DIR_MASK) { /* The direction is IN */ ctrl = ctrl | HC_EPT_DF_IN; } else { /* The direction is OUT */ ctrl = ctrl | HC_EPT_DF_OUT; } } /* Assign the speed */ mutex_enter(&ph->p_usba_device->usb_mutex); if (ph->p_usba_device->usb_port_status == USBA_LOW_SPEED_DEV) { ctrl = ctrl | HC_EPT_Speed; } mutex_exit(&ph->p_usba_device->usb_mutex); /* Assign the format */ if ((endpoint->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { ctrl = ctrl | HC_EPT_Format; } maxpacketsize = endpoint->wMaxPacketSize; maxpacketsize = maxpacketsize << HC_EPT_MAXPKTSZ; ctrl = ctrl | (maxpacketsize & HC_EPT_MPS); return (ctrl); } /* * ohci_insert_ed: * * Add the Endpoint Descriptor (ED) into the Host Controller's * (HC) appropriate endpoint list. */ static void ohci_insert_ed( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); switch (ph->p_ep.bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_CONTROL: ohci_insert_ctrl_ed(ohcip, pp); break; case USB_EP_ATTR_BULK: ohci_insert_bulk_ed(ohcip, pp); break; case USB_EP_ATTR_INTR: ohci_insert_intr_ed(ohcip, pp); break; case USB_EP_ATTR_ISOCH: ohci_insert_isoc_ed(ohcip, pp); break; } } /* * ohci_insert_ctrl_ed: * * Insert a control endpoint into the Host Controller's (HC) * control endpoint list. 
*/ static void ohci_insert_ctrl_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; ohci_ed_t *prev_ept; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_ctrl_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Obtain a ptr to the head of the list */ if (Get_OpReg(hcr_ctrl_head)) { prev_ept = ohci_ed_iommu_to_cpu(ohcip, Get_OpReg(hcr_ctrl_head)); /* Set up the backwards pointer */ Set_ED(prev_ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, ept)); } /* The new endpoint points to the head of the list */ Set_ED(ept->hced_next, Get_OpReg(hcr_ctrl_head)); /* Set the head ptr to the new endpoint */ Set_OpReg(hcr_ctrl_head, ohci_ed_cpu_to_iommu(ohcip, ept)); /* * Enable Control list processing if control open * pipe count is zero. */ if (!ohcip->ohci_open_ctrl_pipe_count) { /* Start Control list processing */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_CLE)); } ohcip->ohci_open_ctrl_pipe_count++; } /* * ohci_insert_bulk_ed: * * Insert a bulk endpoint into the Host Controller's (HC) bulk endpoint list. */ static void ohci_insert_bulk_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; ohci_ed_t *prev_ept; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_bulk_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Obtain a ptr to the head of the Bulk list */ if (Get_OpReg(hcr_bulk_head)) { prev_ept = ohci_ed_iommu_to_cpu(ohcip, Get_OpReg(hcr_bulk_head)); /* Set up the backwards pointer */ Set_ED(prev_ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, ept)); } /* The new endpoint points to the head of the Bulk list */ Set_ED(ept->hced_next, Get_OpReg(hcr_bulk_head)); /* Set the Bulk head ptr to the new endpoint */ Set_OpReg(hcr_bulk_head, ohci_ed_cpu_to_iommu(ohcip, ept)); /* * Enable Bulk list processing if bulk open pipe * count is zero. */ if (!ohcip->ohci_open_bulk_pipe_count) { /* Start Bulk list processing */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_BLE)); } ohcip->ohci_open_bulk_pipe_count++; } /* * ohci_insert_intr_ed: * * Insert an interrupt endpoint into the Host Controller's (HC) interrupt * lattice tree. */ static void ohci_insert_intr_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; ohci_ed_t *next_lattice_ept, *lattice_ept; uint_t node; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_intr_ed:"); /* * The appropriate node was found * during the opening of the pipe.
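 *
 * (Editorial note: pp_node is a lattice node; ohci_hcca_intr_index()
 * maps it to one of the 32 HccaIntTble slots. The assumption here --
 * the ohci_index table itself is defined elsewhere in the driver -- is
 * that it implements the usual OHCI interleaving, essentially a
 * bit-reversed ordering of the leaf number, so that endpoints with the
 * same interval are spread evenly across the 32 frame slots. A 5-bit
 * reversal would look like:
 *
 *	uint_t rev = 0, bit;
 *	for (bit = 0; bit < 5; bit++)
 *		rev |= ((leaf >> bit) & 1) << (4 - bit);
 *
 * See Section 3.3.2 of the OpenHCI 1.0 spec for the authoritative
 * figure.)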
*/ node = pp->pp_node; if (node >= NUM_STATIC_NODES) { /* Get the hcca interrupt table index */ node = ohci_hcca_intr_index(node); /* Get the first endpoint on the list */ next_lattice_ept = ohci_ed_iommu_to_cpu(ohcip, Get_HCCA(ohcip->ohci_hccap->HccaIntTble[node])); /* Update this endpoint to point to it */ Set_ED(ept->hced_next, ohci_ed_cpu_to_iommu(ohcip, next_lattice_ept)); /* Put this endpoint at the head of the list */ Set_HCCA(ohcip->ohci_hccap->HccaIntTble[node], ohci_ed_cpu_to_iommu(ohcip, ept)); /* The previous pointer is NULL */ Set_ED(ept->hced_prev, 0); /* Update the previous pointer of ept->hced_next */ if (Get_ED(next_lattice_ept->hced_state) != HC_EPT_STATIC) { Set_ED(next_lattice_ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, ept)); } } else { /* Find the lattice endpoint */ lattice_ept = &ohcip->ohci_ed_pool_addr[node]; /* Find the next lattice endpoint */ next_lattice_ept = ohci_ed_iommu_to_cpu( ohcip, Get_ED(lattice_ept->hced_next)); /* * Update this endpoint to point to the next one in the * lattice. */ Set_ED(ept->hced_next, Get_ED(lattice_ept->hced_next)); /* Insert this endpoint into the lattice */ Set_ED(lattice_ept->hced_next, ohci_ed_cpu_to_iommu(ohcip, ept)); /* Update the previous pointer */ Set_ED(ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, lattice_ept)); /* Update the previous pointer of ept->hced_next */ if ((next_lattice_ept) && (Get_ED(next_lattice_ept->hced_state) != HC_EPT_STATIC)) { Set_ED(next_lattice_ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, ept)); } } /* * Enable periodic list processing if periodic (interrupt * and isochronous) open pipe count is zero. */ if (!ohcip->ohci_open_periodic_pipe_count) { ASSERT(!ohcip->ohci_open_isoch_pipe_count); Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_PLE)); } ohcip->ohci_open_periodic_pipe_count++; } /* * ohci_insert_isoc_ed: * * Insert an isochronous endpoint into the Host Controller's (HC) interrupt * lattice tree. An isochronous endpoint will be inserted at the end of the * 1ms interrupt endpoint list. */ static void ohci_insert_isoc_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *next_lattice_ept, *lattice_ept; ohci_ed_t *ept = pp->pp_ept; uint_t node; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_isoc_ed:"); /* * The appropriate node was found during the opening of the pipe. * This node must be root of the interrupt lattice tree. */ node = pp->pp_node; ASSERT(node == 0); /* Find the 1ms interrupt lattice endpoint */ lattice_ept = &ohcip->ohci_ed_pool_addr[node]; /* Find the next lattice endpoint */ next_lattice_ept = ohci_ed_iommu_to_cpu( ohcip, Get_ED(lattice_ept->hced_next)); while (next_lattice_ept) { lattice_ept = next_lattice_ept; /* Find the next lattice endpoint */ next_lattice_ept = ohci_ed_iommu_to_cpu( ohcip, Get_ED(lattice_ept->hced_next)); } /* The next pointer is NULL */ Set_ED(ept->hced_next, 0); /* Update the previous pointer */ Set_ED(ept->hced_prev, ohci_ed_cpu_to_iommu(ohcip, lattice_ept)); /* Insert this endpoint into the lattice */ Set_ED(lattice_ept->hced_next, ohci_ed_cpu_to_iommu(ohcip, ept)); /* * Enable periodic and isoch list processing if isoch * open pipe count is zero. */ if (!ohcip->ohci_open_isoch_pipe_count) { Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_PLE | HCR_CONTROL_IE)); } ohcip->ohci_open_periodic_pipe_count++; ohcip->ohci_open_isoch_pipe_count++; } /* * ohci_modify_sKip_bit: * * Modify the sKip bit on the Host Controller (HC) Endpoint Descriptor (ED).
*/ static void ohci_modify_sKip_bit( ohci_state_t *ohcip, ohci_pipe_private_t *pp, skip_bit_t action, usb_flags_t flag) { ohci_ed_t *ept = pp->pp_ept; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_modify_sKip_bit: action = 0x%x flag = 0x%x", action, flag); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); if (action == CLEAR_sKip) { /* * If the skip bit is to be cleared, just clear it. * There shouldn't be any race condition problems. * If the host controller reads the bit before the * driver has a chance to set the bit, the bit will * be reread on the next frame. */ Set_ED(ept->hced_ctrl, (Get_ED(ept->hced_ctrl) & ~HC_EPT_sKip)); } else { /* Sync ED and TD pool */ if (flag & OHCI_FLAGS_DMA_SYNC) { Sync_ED_TD_Pool(ohcip); } /* Check Halt or Skip bit is already set */ if ((Get_ED(ept->hced_headp) & HC_EPT_Halt) || (Get_ED(ept->hced_ctrl) & HC_EPT_sKip)) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_modify_sKip_bit: " "Halt or Skip bit is already set"); } else { /* * The action is to set the skip bit. In order to * be sure that the HC has seen the sKip bit, wait * for the next start of frame. */ Set_ED(ept->hced_ctrl, (Get_ED(ept->hced_ctrl) | HC_EPT_sKip)); if (flag & OHCI_FLAGS_SLEEP) { /* Wait for the next SOF */ (void) ohci_wait_for_sof(ohcip); /* Sync ED and TD pool */ if (flag & OHCI_FLAGS_DMA_SYNC) { Sync_ED_TD_Pool(ohcip); } } } } } /* * ohci_remove_ed: * * Remove the Endpoint Descriptor (ED) from the Host Controller's appropriate * endpoint list. */ static void ohci_remove_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { uchar_t attributes; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_remove_ed:"); attributes = pp->pp_pipe_handle->p_ep.bmAttributes & USB_EP_ATTR_MASK; switch (attributes) { case USB_EP_ATTR_CONTROL: ohci_remove_ctrl_ed(ohcip, pp); break; case USB_EP_ATTR_BULK: ohci_remove_bulk_ed(ohcip, pp); break; case USB_EP_ATTR_INTR: case USB_EP_ATTR_ISOCH: ohci_remove_periodic_ed(ohcip, pp); break; } } /* * ohci_remove_ctrl_ed: * * Remove a control Endpoint Descriptor (ED) from the Host Controller's (HC) * control endpoint list. */ static void ohci_remove_ctrl_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; /* ept to be removed */ USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_remove_ctrl_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* The control list should already be stopped */ ASSERT(!(Get_OpReg(hcr_control) & HCR_CONTROL_CLE)); ohcip->ohci_open_ctrl_pipe_count--; /* Detach the endpoint from the list that it's on */ ohci_detach_ed_from_list(ohcip, ept, USB_EP_ATTR_CONTROL); /* * If next endpoint pointed by endpoint to be removed is not NULL * then set current control pointer to the next endpoint pointed by * endpoint to be removed. Otherwise set current control pointer to * the beginning of the control list. */ if (Get_ED(ept->hced_next)) { Set_OpReg(hcr_ctrl_curr, Get_ED(ept->hced_next)); } else { Set_OpReg(hcr_ctrl_curr, Get_OpReg(hcr_ctrl_head)); } if (ohcip->ohci_open_ctrl_pipe_count) { ASSERT(Get_OpReg(hcr_ctrl_head)); /* Re-enable the control list */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_CLE)); } ohci_insert_ed_on_reclaim_list(ohcip, pp); } /* * ohci_remove_bulk_ed: * * Remove the bulk Endpoint Descriptor (ED) from the Host Controller's * (HC) bulk endpoint list.
*/ static void ohci_remove_bulk_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; /* ept to be removed */ USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_remove_bulk_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* The bulk list should already be stopped */ ASSERT(!(Get_OpReg(hcr_control) & HCR_CONTROL_BLE)); ohcip->ohci_open_bulk_pipe_count--; /* Detach the endpoint from the bulk list */ ohci_detach_ed_from_list(ohcip, ept, USB_EP_ATTR_BULK); /* * If next endpoint pointed by endpoint to be removed is not NULL * then set current bulk pointer to the next endpoint pointed by * endpoint to be removed. Otherwise set current bulk pointer to * the beginning of the bulk list. */ if (Get_ED(ept->hced_next)) { Set_OpReg(hcr_bulk_curr, Get_ED(ept->hced_next)); } else { Set_OpReg(hcr_bulk_curr, Get_OpReg(hcr_bulk_head)); } if (ohcip->ohci_open_bulk_pipe_count) { ASSERT(Get_OpReg(hcr_bulk_head)); /* Re-enable the bulk list */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) | HCR_CONTROL_BLE)); } ohci_insert_ed_on_reclaim_list(ohcip, pp); } /* * ohci_remove_periodic_ed: * * Set up a periodic endpoint to be removed from the Host Controller's (HC) * interrupt lattice tree. The Endpoint Descriptor (ED) will be freed in the * interrupt handler. */ static void ohci_remove_periodic_ed( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; /* ept to be removed */ uint_t ept_type; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_remove_periodic_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT((Get_ED(ept->hced_tailp) & HC_EPT_TD_TAIL) == (Get_ED(ept->hced_headp) & HC_EPT_TD_HEAD)); ohcip->ohci_open_periodic_pipe_count--; ept_type = pp->pp_pipe_handle-> p_ep.bmAttributes & USB_EP_ATTR_MASK; if (ept_type == USB_EP_ATTR_ISOCH) { ohcip->ohci_open_isoch_pipe_count--; } /* Store the node number */ Set_ED(ept->hced_node, pp->pp_node); /* Remove the endpoint from interrupt lattice tree */ ohci_detach_ed_from_list(ohcip, ept, ept_type); /* * Disable isoch list processing if isoch open pipe count * is zero. */ if (!ohcip->ohci_open_isoch_pipe_count) { Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_IE))); } /* * Disable periodic list processing if periodic (interrupt * and isochronous) open pipe count is zero. */ if (!ohcip->ohci_open_periodic_pipe_count) { ASSERT(!ohcip->ohci_open_isoch_pipe_count); Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_PLE))); } ohci_insert_ed_on_reclaim_list(ohcip, pp); } /* * ohci_detach_ed_from_list: * * Remove the Endpoint Descriptor (ED) from the appropriate Host Controller's * (HC) endpoint list. */ static void ohci_detach_ed_from_list( ohci_state_t *ohcip, ohci_ed_t *ept, uint_t ept_type) { ohci_ed_t *prev_ept; /* Previous endpoint */ ohci_ed_t *next_ept; /* Endpoint after one to be removed */ uint_t node; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_detach_ed_from_list:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); prev_ept = ohci_ed_iommu_to_cpu(ohcip, Get_ED(ept->hced_prev)); next_ept = ohci_ed_iommu_to_cpu(ohcip, Get_ED(ept->hced_next)); /* * If there is no previous endpoint, then this * endpoint is at the head of the endpoint list. */ if (prev_ept == NULL) { if (next_ept) { /* * If this endpoint is the first element of the * list and there is more than one endpoint on * the list then perform specific actions based * on the type of endpoint list.
*/ switch (ept_type) { case USB_EP_ATTR_CONTROL: /* Set the head of list to next ept */ Set_OpReg(hcr_ctrl_head, Get_ED(ept->hced_next)); /* Clear prev ptr of next endpoint */ Set_ED(next_ept->hced_prev, 0); break; case USB_EP_ATTR_BULK: /* Set the head of list to next ept */ Set_OpReg(hcr_bulk_head, Get_ED(ept->hced_next)); /* Clear prev ptr of next endpoint */ Set_ED(next_ept->hced_prev, 0); break; case USB_EP_ATTR_INTR: /* * HCCA area should point * directly to this ept. */ ASSERT(Get_ED(ept->hced_node) >= NUM_STATIC_NODES); /* Get the hcca interrupt table index */ node = ohci_hcca_intr_index( Get_ED(ept->hced_node)); /* * Delete the ept from the * bottom of the tree. */ Set_HCCA(ohcip->ohci_hccap-> HccaIntTble[node], Get_ED(ept->hced_next)); /* * Update the previous pointer * of ept->hced_next */ if (Get_ED(next_ept->hced_state) != HC_EPT_STATIC) { Set_ED(next_ept->hced_prev, 0); } break; case USB_EP_ATTR_ISOCH: default: break; } } else { /* * If there was only one element on the list * perform specific actions based on the type * of the list. */ switch (ept_type) { case USB_EP_ATTR_CONTROL: /* Set the head to NULL */ Set_OpReg(hcr_ctrl_head, 0); break; case USB_EP_ATTR_BULK: /* Set the head to NULL */ Set_OpReg(hcr_bulk_head, 0); break; case USB_EP_ATTR_INTR: case USB_EP_ATTR_ISOCH: default: break; } } } else { /* The previous ept points to the next one */ Set_ED(prev_ept->hced_next, Get_ED(ept->hced_next)); /* * Set the previous ptr of the next_ept to prev_ept * if this isn't the last endpoint on the list */ if ((next_ept) && (Get_ED(next_ept->hced_state) != HC_EPT_STATIC)) { /* Set the previous ptr of the next one */ Set_ED(next_ept->hced_prev, Get_ED(ept->hced_prev)); } } } /* * ohci_insert_ed_on_reclaim_list: * * Insert Endpoint onto the reclaim list */ static void ohci_insert_ed_on_reclaim_list( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_ed_t *ept = pp->pp_ept; /* ept to be removed */ ohci_ed_t *next_ept, *prev_ept; usb_frame_number_t frame_number; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Read the current usb frame number and add the appropriate number of * usb frames that must pass before reclaiming the current endpoint. */ frame_number = ohci_get_current_frame_number(ohcip) + MAX_SOF_WAIT_COUNT; /* Store 32bit ID */ Set_ED(ept->hced_reclaim_frame, ((uint32_t)(OHCI_GET_ID((void *)(uintptr_t)frame_number)))); /* Insert the endpoint onto the reclamation list */ if (ohcip->ohci_reclaim_list) { next_ept = ohcip->ohci_reclaim_list; while (next_ept) { prev_ept = next_ept; next_ept = ohci_ed_iommu_to_cpu(ohcip, Get_ED(next_ept->hced_reclaim_next)); } Set_ED(prev_ept->hced_reclaim_next, ohci_ed_cpu_to_iommu(ohcip, ept)); } else { ohcip->ohci_reclaim_list = ept; } ASSERT(Get_ED(ept->hced_reclaim_next) == 0); /* Enable the SOF interrupt */ Set_OpReg(hcr_intr_enable, HCR_INTR_SOF); } /* * ohci_deallocate_ed: * NOTE: This function is also called from POLLED MODE. * * Deallocate a Host Controller's (HC) Endpoint Descriptor (ED).
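 *
 * (Editorial note: EDs normally arrive here via the reclaim list above.
 * The ED is stamped with the current frame number plus
 * MAX_SOF_WAIT_COUNT, and the SOF interrupt handler -- elsewhere in
 * the driver -- only calls this routine once that many frames have
 * passed, so the hardware can no longer be referencing the ED. A
 * sketch of that consumer-side check, with hypothetical field names:
 *
 *	if (current_frame >= ed->reclaim_frame)
 *		ohci_deallocate_ed(ohcip, ed);
 * )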
*/ void ohci_deallocate_ed( ohci_state_t *ohcip, ohci_ed_t *old_ed) { ohci_td_t *dummy_td; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_deallocate_ed:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); dummy_td = ohci_td_iommu_to_cpu(ohcip, Get_ED(old_ed->hced_headp)); if (dummy_td) { ASSERT(Get_TD(dummy_td->hctd_state) == HC_TD_DUMMY); ohci_deallocate_td(ohcip, dummy_td); } USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_deallocate_ed: Deallocated 0x%p", (void *)old_ed); bzero((void *)old_ed, sizeof (ohci_ed_t)); Set_ED(old_ed->hced_state, HC_EPT_FREE); } /* * ohci_ed_cpu_to_iommu: * NOTE: This function is also called from POLLED MODE. * * This function converts the given Endpoint Descriptor (ED) CPU address * to an IO address. */ uint32_t ohci_ed_cpu_to_iommu( ohci_state_t *ohcip, ohci_ed_t *addr) { uint32_t ed; ed = (uint32_t)ohcip->ohci_ed_pool_cookie.dmac_address + (uint32_t)((uintptr_t)addr - (uintptr_t)(ohcip->ohci_ed_pool_addr)); ASSERT(ed >= ohcip->ohci_ed_pool_cookie.dmac_address); ASSERT(ed <= ohcip->ohci_ed_pool_cookie.dmac_address + sizeof (ohci_ed_t) * ohci_ed_pool_size); return (ed); } /* * ohci_ed_iommu_to_cpu: * * This function converts the given Endpoint Descriptor (ED) IO address * to a CPU address. */ static ohci_ed_t * ohci_ed_iommu_to_cpu( ohci_state_t *ohcip, uintptr_t addr) { ohci_ed_t *ed; if (addr == 0) return (NULL); ed = (ohci_ed_t *)((uintptr_t) (addr - ohcip->ohci_ed_pool_cookie.dmac_address) + (uintptr_t)ohcip->ohci_ed_pool_addr); ASSERT(ed >= ohcip->ohci_ed_pool_addr); ASSERT((uintptr_t)ed <= (uintptr_t)ohcip->ohci_ed_pool_addr + (uintptr_t)(sizeof (ohci_ed_t) * ohci_ed_pool_size)); return (ed); } /* * Transfer Descriptor manipulation functions */ /* * ohci_initialize_dummy: * * An Endpoint Descriptor (ED) has a dummy Transfer Descriptor (TD) on the * end of its TD list. Initially, both the head and tail pointers of the ED * point to the dummy TD. */ static int ohci_initialize_dummy( ohci_state_t *ohcip, ohci_ed_t *ept) { ohci_td_t *dummy; /* Obtain a dummy TD */ dummy = ohci_allocate_td_from_pool(ohcip); if (dummy == NULL) { return (USB_NO_RESOURCES); } /* * Both the head and tail pointers of an ED point * to this new dummy TD. */ Set_ED(ept->hced_headp, (ohci_td_cpu_to_iommu(ohcip, dummy))); Set_ED(ept->hced_tailp, (ohci_td_cpu_to_iommu(ohcip, dummy))); return (USB_SUCCESS); } /* * ohci_allocate_ctrl_resources: * * Calculates the number of tds necessary for a ctrl transfer, and allocates * all the resources necessary. * * Returns NULL if there are insufficient resources, otherwise the TW. */ static ohci_trans_wrapper_t * ohci_allocate_ctrl_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_ctrl_req_t *ctrl_reqp, usb_flags_t usb_flags) { size_t td_count = 2; size_t ctrl_buf_size; ohci_trans_wrapper_t *tw; /* Add one more td for data phase */ if (ctrl_reqp->ctrl_wLength) { td_count++; } /* * If we have a control data phase, the data buffer starts * on the next 4K page boundary. So the TW buffer is allocated * to be larger than required. The buffer in the range of * [SETUP_SIZE, OHCI_MAX_TD_BUF_SIZE) is just for padding * and not to be transferred. */ if (ctrl_reqp->ctrl_wLength) { ctrl_buf_size = OHCI_MAX_TD_BUF_SIZE + ctrl_reqp->ctrl_wLength; } else { ctrl_buf_size = SETUP_SIZE; } tw = ohci_allocate_tw_resources(ohcip, pp, ctrl_buf_size, usb_flags, td_count); return (tw); } /* * ohci_insert_ctrl_req: * * Create a Transfer Descriptor (TD) and a data buffer for a control endpoint.
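 *
 * (Worked example, editorial: the two 32-bit words built below pack the
 * standard 8-byte SETUP packet. For GET_DESCRIPTOR(Device, 18 bytes):
 *
 *	bmRequestType = 0x80, bRequest = 0x06,
 *	wValue = 0x0100, wIndex = 0x0000, wLength = 0x0012
 *	first word  = 0x80060001
 *	second word = 0x00001200
 *
 * which lands on the bus as 80 06 00 01 00 00 12 00. The 16-bit byte
 * swaps assume the TW access handle performs big-endian stores, so the
 * multi-byte fields come out little-endian on the wire as the USB spec
 * requires.)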
*/ /* ARGSUSED */ static void ohci_insert_ctrl_req( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_ctrl_req_t *ctrl_reqp, ohci_trans_wrapper_t *tw, usb_flags_t usb_flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; uchar_t bmRequestType = ctrl_reqp->ctrl_bmRequestType; uchar_t bRequest = ctrl_reqp->ctrl_bRequest; uint16_t wValue = ctrl_reqp->ctrl_wValue; uint16_t wIndex = ctrl_reqp->ctrl_wIndex; uint16_t wLength = ctrl_reqp->ctrl_wLength; mblk_t *data = ctrl_reqp->ctrl_data; uint32_t ctrl = 0; int sdata; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_ctrl_req:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Save current control request pointer and timeout values * in transfer wrapper. */ tw->tw_curr_xfer_reqp = (usb_opaque_t)ctrl_reqp; tw->tw_timeout = ctrl_reqp->ctrl_timeout ? ctrl_reqp->ctrl_timeout : OHCI_DEFAULT_XFER_TIMEOUT; /* * Initialize the callback and any callback data for when * the td completes. */ tw->tw_handle_td = ohci_handle_ctrl_td; tw->tw_handle_callback_value = NULL; /* Create the first four bytes of the setup packet */ sdata = (bmRequestType << 24) | (bRequest << 16) | (((wValue >> 8) | (wValue << 8)) & 0x0000FFFF); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_create_setup_pkt: sdata = 0x%x", sdata); ddi_put32(tw->tw_accesshandle, (uint_t *)tw->tw_buf, sdata); /* Create the second four bytes */ sdata = (uint32_t)(((((wIndex >> 8) | (wIndex << 8)) << 16) & 0xFFFF0000) | (((wLength >> 8) | (wLength << 8)) & 0x0000FFFF)); USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_setup_pkt: sdata = 0x%x", sdata); ddi_put32(tw->tw_accesshandle, (uint_t *)((uintptr_t)tw->tw_buf + sizeof (uint_t)), sdata); ctrl = HC_TD_SETUP|HC_TD_MS_DT|HC_TD_DT_0|HC_TD_6I; /* * The TD's are placed on the ED one at a time. * Once this TD is placed on the done list, the * data or status phase TD will be enqueued. */ (void) ohci_insert_hc_td(ohcip, ctrl, 0, SETUP_SIZE, OHCI_CTRL_SETUP_PHASE, pp, tw); USB_DPRINTF_L3(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "Create_setup: pp 0x%p", (void *)pp); /* * If this control transfer has a data phase, record the * direction. If the data phase is an OUT transaction, * copy the data into the buffer of the transfer wrapper. */ if (wLength != 0) { /* There is a data stage. Find the direction */ if (bmRequestType & USB_DEV_REQ_DEV_TO_HOST) { tw->tw_direction = HC_TD_IN; } else { tw->tw_direction = HC_TD_OUT; /* Copy the data into the message */ ddi_rep_put8(tw->tw_accesshandle, data->b_rptr, (uint8_t *)(tw->tw_buf + OHCI_MAX_TD_BUF_SIZE), wLength, DDI_DEV_AUTOINCR); } ctrl = (ctrl_reqp->ctrl_attributes & USB_ATTRS_SHORT_XFER_OK) ? HC_TD_R : 0; /* * There is a data stage. * Find the direction. */ if (tw->tw_direction == HC_TD_IN) { ctrl = ctrl|HC_TD_IN|HC_TD_MS_DT|HC_TD_DT_1|HC_TD_6I; } else { ctrl = ctrl|HC_TD_OUT|HC_TD_MS_DT|HC_TD_DT_1|HC_TD_6I; } /* * Create the TD. If this is an OUT transaction, * the data is already in the buffer of the TW. */ (void) ohci_insert_hc_td(ohcip, ctrl, OHCI_MAX_TD_BUF_SIZE, wLength, OHCI_CTRL_DATA_PHASE, pp, tw); /* * The direction of the STATUS TD depends on * the direction of the transfer. 
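 *
 * (Summary, editorial -- this mirrors the branches just below and the
 * usual USB control-transfer rules from chapter 8 of the spec:
 *
 *	data stage          status stage
 *	IN  (dev -> host)   OUT, DATA1
 *	OUT (host -> dev)   IN,  DATA1
 *	none                IN,  DATA1
 * )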
*/ if (tw->tw_direction == HC_TD_IN) { ctrl = HC_TD_OUT|HC_TD_MS_DT|HC_TD_DT_1|HC_TD_1I; } else { ctrl = HC_TD_IN|HC_TD_MS_DT|HC_TD_DT_1|HC_TD_1I; } } else { ctrl = HC_TD_IN|HC_TD_MS_DT|HC_TD_DT_1|HC_TD_1I; } /* Status stage */ (void) ohci_insert_hc_td(ohcip, ctrl, 0, 0, OHCI_CTRL_STATUS_PHASE, pp, tw); /* Indicate that the control list is filled */ Set_OpReg(hcr_cmd_status, HCR_STATUS_CLF); /* Start the timer for this control transfer */ ohci_start_xfer_timer(ohcip, pp, tw); } /* * ohci_allocate_bulk_resources: * * Calculates the number of TDs necessary for a bulk transfer, and allocates * all the resources necessary. * * Returns NULL if there are insufficient resources, otherwise the TW. */ static ohci_trans_wrapper_t * ohci_allocate_bulk_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_bulk_req_t *bulk_reqp, usb_flags_t usb_flags) { size_t td_count = 0; ohci_trans_wrapper_t *tw; /* Check the size of bulk request */ if (bulk_reqp->bulk_len > OHCI_MAX_BULK_XFER_SIZE) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_bulk_resources: Bulk request size 0x%x is " "more than 0x%x", bulk_reqp->bulk_len, OHCI_MAX_BULK_XFER_SIZE); return (NULL); } /* Get the required bulk packet size */ td_count = bulk_reqp->bulk_len / OHCI_MAX_TD_XFER_SIZE; if (bulk_reqp->bulk_len % OHCI_MAX_TD_XFER_SIZE || bulk_reqp->bulk_len == 0) { td_count++; } tw = ohci_allocate_tw_resources(ohcip, pp, bulk_reqp->bulk_len, usb_flags, td_count); return (tw); } /* * ohci_insert_bulk_req: * * Create a Transfer Descriptor (TD) and a data buffer for a bulk * endpoint. */ /* ARGSUSED */ static void ohci_insert_bulk_req( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_bulk_req_t *bulk_reqp, ohci_trans_wrapper_t *tw, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; uint_t bulk_pkt_size, count; size_t residue = 0, len = 0; uint32_t ctrl = 0; int pipe_dir; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_bulk_req: bulk_reqp = 0x%p flags = 0x%x", (void *)bulk_reqp, flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Get the bulk pipe direction */ pipe_dir = ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK; /* Get the required bulk packet size */ bulk_pkt_size = min(bulk_reqp->bulk_len, OHCI_MAX_TD_XFER_SIZE); if (bulk_pkt_size) residue = tw->tw_length % bulk_pkt_size; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_bulk_req: bulk_pkt_size = %d", bulk_pkt_size); /* * Save current bulk request pointer and timeout values * in transfer wrapper. */ tw->tw_curr_xfer_reqp = (usb_opaque_t)bulk_reqp; tw->tw_timeout = bulk_reqp->bulk_timeout; /* * Initialize the callback and any callback * data required when the td completes. */ tw->tw_handle_td = ohci_handle_bulk_td; tw->tw_handle_callback_value = NULL; tw->tw_direction = (pipe_dir == USB_EP_DIR_OUT) ?
HC_TD_OUT : HC_TD_IN; if (tw->tw_direction == HC_TD_OUT && bulk_reqp->bulk_len) { ASSERT(bulk_reqp->bulk_data != NULL); /* Copy the data into the message */ ddi_rep_put8(tw->tw_accesshandle, bulk_reqp->bulk_data->b_rptr, (uint8_t *)tw->tw_buf, bulk_reqp->bulk_len, DDI_DEV_AUTOINCR); } ctrl = tw->tw_direction|HC_TD_DT_0|HC_TD_6I; /* Insert all the bulk TDs */ for (count = 0; count < tw->tw_num_tds; count++) { /* Check for last td */ if (count == (tw->tw_num_tds - 1)) { ctrl = ((ctrl & ~HC_TD_DI) | HC_TD_1I); /* Check for inserting residue data */ if (residue) { bulk_pkt_size = (uint_t)residue; } /* * Only set the round bit on the last TD, to ensure * the controller will always HALT the ED in case of * a short transfer. */ if (bulk_reqp->bulk_attributes & USB_ATTRS_SHORT_XFER_OK) { ctrl |= HC_TD_R; } } /* Insert the TD onto the endpoint */ (void) ohci_insert_hc_td(ohcip, ctrl, len, bulk_pkt_size, 0, pp, tw); len = len + bulk_pkt_size; } /* Indicate that the bulk list is filled */ Set_OpReg(hcr_cmd_status, HCR_STATUS_BLF); /* Start the timer for this bulk transfer */ ohci_start_xfer_timer(ohcip, pp, tw); } /* * ohci_start_periodic_pipe_polling: * NOTE: This function is also called from POLLED MODE. */ int ohci_start_periodic_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_opaque_t periodic_in_reqp, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: ep%d", ph->p_ep.bEndpointAddress & USB_EP_NUM_MASK); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check and handle start polling on root hub interrupt pipe. */ if ((ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) && ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_INTR)) { error = ohci_handle_root_hub_pipe_start_intr_polling(ph, (usb_intr_req_t *)periodic_in_reqp, flags); return (error); } switch (pp->pp_state) { case OHCI_PIPE_STATE_IDLE: /* Save the Original client's Periodic IN request */ pp->pp_client_periodic_in_reqp = periodic_in_reqp; /* * This pipe is uninitialized or if a valid TD is * not found then insert a TD on the interrupt or * isochronous IN endpoint. */ error = ohci_start_pipe_polling(ohcip, ph, flags); if (error != USB_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: " "Start polling failed"); pp->pp_client_periodic_in_reqp = NULL; return (error); } USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: PP = 0x%p", (void *)pp); ASSERT((pp->pp_tw_head != NULL) && (pp->pp_tw_tail != NULL)); break; case OHCI_PIPE_STATE_ACTIVE: USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: " "Polling is already in progress"); error = USB_FAILURE; break; case OHCI_PIPE_STATE_ERROR: USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: " "Pipe is halted and perform reset before restart polling"); error = USB_FAILURE; break; default: USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_periodic_pipe_polling: Undefined state"); error = USB_FAILURE; break; } return (error); } /* * ohci_start_pipe_polling: * * Insert the number of periodic requests corresponding to polling * interval as calculated during pipe open. 
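 * For example, an interrupt IN endpoint polled every 1 ms keeps four
 * requests outstanding (see the table above ohci_set_periodic_pipe_polling),
 * so four transfer wrappers are primed here before polling begins.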
*/ static int ohci_start_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; ohci_trans_wrapper_t *tw_list, *tw; int i, total_tws; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_pipe_polling:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * For the start polling, pp_max_periodic_req_cnt will be zero * and for the restart polling request, it will be non zero. * * In case of start polling request, find out number of requests * required for the Interrupt IN endpoints corresponding to the * endpoint polling interval. For Isochronous IN endpoints, it is * always fixed since its polling interval will be one ms. */ if (pp->pp_max_periodic_req_cnt == 0) { ohci_set_periodic_pipe_polling(ohcip, ph); } ASSERT(pp->pp_max_periodic_req_cnt != 0); /* Allocate all the necessary resources for the IN transfer */ tw_list = NULL; total_tws = pp->pp_max_periodic_req_cnt - pp->pp_cur_periodic_req_cnt; for (i = 0; i < total_tws; i++) { switch (eptd->bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_INTR: tw = ohci_allocate_intr_resources( ohcip, ph, NULL, flags); break; case USB_EP_ATTR_ISOCH: tw = ohci_allocate_isoc_resources( ohcip, ph, NULL, flags); break; } if (tw == NULL) { error = USB_NO_RESOURCES; /* There are not enough resources, deallocate the TWs */ tw = tw_list; while (tw != NULL) { tw_list = tw->tw_next; ohci_deallocate_periodic_in_resource( ohcip, pp, tw); ohci_deallocate_tw_resources(ohcip, pp, tw); tw = tw_list; } return (error); } else { if (tw_list == NULL) { tw_list = tw; } } } i = 0; while (pp->pp_cur_periodic_req_cnt < pp->pp_max_periodic_req_cnt) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_pipe_polling: max = %d curr = %d tw = %p:", pp->pp_max_periodic_req_cnt, pp->pp_cur_periodic_req_cnt, (void *)tw_list); tw = tw_list; tw_list = tw->tw_next; switch (eptd->bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_INTR: ohci_insert_intr_req(ohcip, pp, tw, flags); break; case USB_EP_ATTR_ISOCH: error = ohci_insert_isoc_req(ohcip, pp, tw, flags); break; } if (error == USB_SUCCESS) { pp->pp_cur_periodic_req_cnt++; } else { /* * Deallocate the remaining tw * The current tw should have already been deallocated */ tw = tw_list; while (tw != NULL) { tw_list = tw->tw_next; ohci_deallocate_periodic_in_resource( ohcip, pp, tw); ohci_deallocate_tw_resources(ohcip, pp, tw); tw = tw_list; } /* * If this is the first req return an error. * Otherwise return success. */ if (i != 0) { error = USB_SUCCESS; } break; } i++; } return (error); } /* * ohci_set_periodic_pipe_polling: * * Calculate the number of periodic requests needed corresponding to the * interrupt/isochronous IN endpoints polling interval. Table below gives * the number of periodic requests needed for the interrupt/isochronous * IN endpoints according to endpoint polling interval. 
 * * Polling interval Number of periodic requests * * 1ms 4 * 2ms 2 * 4ms to 32ms 1 */ static void ohci_set_periodic_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *endpoint = &ph->p_ep; uchar_t ep_attr = endpoint->bmAttributes; uint_t interval; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_set_periodic_pipe_polling:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); pp->pp_cur_periodic_req_cnt = 0; /* * Check whether the client has requested a one time poll * (USB_ATTRS_ONE_XFER) and, if so, limit * pp->pp_max_periodic_req_cnt to a single request (INTR_XMS_REQS). */ if (((ep_attr & USB_EP_ATTR_MASK) == USB_EP_ATTR_INTR) && (pp->pp_client_periodic_in_reqp)) { usb_intr_req_t *intr_reqp = (usb_intr_req_t *)pp->pp_client_periodic_in_reqp; if (intr_reqp->intr_attributes & USB_ATTRS_ONE_XFER) { pp->pp_max_periodic_req_cnt = INTR_XMS_REQS; return; } } mutex_enter(&ph->p_usba_device->usb_mutex); /* * The ohci_adjust_polling_interval function will not fail * at this instance since bandwidth allocation is already * done. Here we are getting only the periodic interval. */ interval = ohci_adjust_polling_interval(ohcip, endpoint, ph->p_usba_device->usb_port_status); mutex_exit(&ph->p_usba_device->usb_mutex); switch (interval) { case INTR_1MS_POLL: pp->pp_max_periodic_req_cnt = INTR_1MS_REQS; break; case INTR_2MS_POLL: pp->pp_max_periodic_req_cnt = INTR_2MS_REQS; break; default: pp->pp_max_periodic_req_cnt = INTR_XMS_REQS; break; } USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_set_periodic_pipe_polling: Max periodic requests = %d", pp->pp_max_periodic_req_cnt); } /* * ohci_allocate_intr_resources: * * Calculates the number of TDs necessary for an intr transfer, and allocates * all the necessary resources. * * Returns NULL if there are insufficient resources, otherwise the TW. */ static ohci_trans_wrapper_t * ohci_allocate_intr_resources( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_intr_req_t *intr_reqp, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; int pipe_dir; size_t td_count = 1; size_t tw_length; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_intr_resources:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); pipe_dir = ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK; /* Get the length of interrupt transfer & alloc data */ if (intr_reqp) { tw_length = intr_reqp->intr_len; } else { ASSERT(pipe_dir == USB_EP_DIR_IN); tw_length = (pp->pp_client_periodic_in_reqp) ?
(((usb_intr_req_t *)pp-> pp_client_periodic_in_reqp)->intr_len) : ph->p_ep.wMaxPacketSize; } /* Check the size of interrupt request */ if (tw_length > OHCI_MAX_TD_XFER_SIZE) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_intr_resources: Intr request size 0x%lx is " "more than 0x%x", tw_length, OHCI_MAX_TD_XFER_SIZE); return (NULL); } if ((tw = ohci_allocate_tw_resources(ohcip, pp, tw_length, flags, td_count)) == NULL) { return (NULL); } if (pipe_dir == USB_EP_DIR_IN) { if (ohci_allocate_periodic_in_resource(ohcip, pp, tw, flags) != USB_SUCCESS) { ohci_deallocate_tw_resources(ohcip, pp, tw); return (NULL); } tw->tw_direction = HC_TD_IN; } else { if (tw_length) { ASSERT(intr_reqp->intr_data != NULL); /* Copy the data into the message */ ddi_rep_put8(tw->tw_accesshandle, intr_reqp->intr_data->b_rptr, (uint8_t *)tw->tw_buf, intr_reqp->intr_len, DDI_DEV_AUTOINCR); } tw->tw_curr_xfer_reqp = (usb_opaque_t)intr_reqp; tw->tw_direction = HC_TD_OUT; } if (intr_reqp) { tw->tw_timeout = intr_reqp->intr_timeout; } /* * Initialize the callback and any callback * data required when the td completes. */ tw->tw_handle_td = ohci_handle_intr_td; tw->tw_handle_callback_value = NULL; return (tw); } /* * ohci_insert_intr_req: * * Insert an Interrupt request into the Host Controller's periodic list. */ /* ARGSUSED */ static void ohci_insert_intr_req( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, usb_flags_t flags) { usb_intr_req_t *curr_intr_reqp = NULL; uint_t ctrl = 0; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(tw->tw_curr_xfer_reqp != NULL); /* Get the current interrupt request pointer */ curr_intr_reqp = (usb_intr_req_t *)tw->tw_curr_xfer_reqp; ctrl = tw->tw_direction | HC_TD_DT_0 | HC_TD_1I; if (curr_intr_reqp->intr_attributes & USB_ATTRS_SHORT_XFER_OK) { ctrl |= HC_TD_R; } /* Insert another interrupt TD */ (void) ohci_insert_hc_td(ohcip, ctrl, 0, tw->tw_length, 0, pp, tw); /* Start the timer for this Interrupt transfer */ ohci_start_xfer_timer(ohcip, pp, tw); } /* * ohci_stop_periodic_pipe_polling: */ /* ARGSUSED */ static int ohci_stop_periodic_pipe_polling( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_stop_periodic_pipe_polling: Flags = 0x%x", flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check and handle stop polling on root hub interrupt pipe. */ if ((ph->p_usba_device->usb_addr == ROOT_HUB_ADDR) && ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_INTR)) { ohci_handle_root_hub_pipe_stop_intr_polling( ph, flags); return (USB_SUCCESS); } if (pp->pp_state != OHCI_PIPE_STATE_ACTIVE) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_stop_periodic_pipe_polling: Polling already stopped"); return (USB_SUCCESS); } /* Set pipe state to pipe stop polling */ pp->pp_state = OHCI_PIPE_STATE_STOP_POLLING; ohci_pipe_cleanup(ohcip, ph); return (USB_SUCCESS); } /* * ohci_allocate_isoc_resources: * * Calculates the number of TDs necessary for an isoc transfer, and allocates * all the necessary resources. * * Returns NULL if there are insufficient resources, otherwise the TW.
*/ static ohci_trans_wrapper_t * ohci_allocate_isoc_resources( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph, usb_isoc_req_t *isoc_reqp, usb_flags_t flags) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; int pipe_dir; uint_t max_pkt_size = ph->p_ep.wMaxPacketSize; uint_t max_isoc_xfer_size; usb_isoc_pkt_descr_t *isoc_pkt_descr, *start_isoc_pkt_descr; ushort_t isoc_pkt_count; size_t count, td_count; size_t tw_length; size_t isoc_pkts_length; ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_isoc_resources: flags = 0x%x", flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check whether the pipe is in the halted state. */ if (pp->pp_state == OHCI_PIPE_STATE_ERROR) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_isoc_resources: " "Pipe is in error state, need pipe reset to continue"); return (NULL); } pipe_dir = ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK; /* Calculate the maximum isochronous transfer size */ max_isoc_xfer_size = OHCI_MAX_ISOC_PKTS_PER_XFER * max_pkt_size; if (isoc_reqp) { isoc_pkt_descr = isoc_reqp->isoc_pkt_descr; isoc_pkt_count = isoc_reqp->isoc_pkts_count; isoc_pkts_length = isoc_reqp->isoc_pkts_length; } else { isoc_pkt_descr = ((usb_isoc_req_t *) pp->pp_client_periodic_in_reqp)->isoc_pkt_descr; isoc_pkt_count = ((usb_isoc_req_t *) pp->pp_client_periodic_in_reqp)->isoc_pkts_count; isoc_pkts_length = ((usb_isoc_req_t *) pp->pp_client_periodic_in_reqp)->isoc_pkts_length; } start_isoc_pkt_descr = isoc_pkt_descr; /* * For an isochronous IN pipe, compute the transfer length as the * sum of the lengths of all isochronous packets in the usb * isochronous request. */ if (pipe_dir == USB_EP_DIR_IN) { for (count = 0, tw_length = 0; count < isoc_pkt_count; count++) { tw_length += isoc_pkt_descr->isoc_pkt_length; isoc_pkt_descr++; } if ((isoc_pkts_length) && (isoc_pkts_length != tw_length)) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_isoc_resources: " "isoc_pkts_length 0x%lx is not equal to the sum of " "all pkt lengths 0x%lx in an isoc request", isoc_pkts_length, tw_length); return (NULL); } } else { ASSERT(isoc_reqp != NULL); tw_length = MBLKL(isoc_reqp->isoc_data); } USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_isoc_resources: length = 0x%lx", tw_length); /* Check the size of isochronous request */ if (tw_length > max_isoc_xfer_size) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_isoc_resources: Maximum isoc request " "size 0x%x Given isoc request size 0x%lx", max_isoc_xfer_size, tw_length); return (NULL); } /* * Each isochronous TD can hold data for up to eight isochronous * data packets. Calculate the number of isochronous TDs that need * to be inserted to complete the current isochronous request.
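 * For example, a request carrying 20 isochronous packets needs two full
 * TDs (2 x 8 packets) plus a third TD for the remaining four packets, so
 * td_count below works out to 3.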
*/ td_count = isoc_pkt_count / OHCI_ISOC_PKTS_PER_TD; if (isoc_pkt_count % OHCI_ISOC_PKTS_PER_TD) { td_count++; } tw = ohci_create_isoc_transfer_wrapper(ohcip, pp, tw_length, start_isoc_pkt_descr, isoc_pkt_count, td_count, flags); if (tw == NULL) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: " "Unable to allocate TW"); return (NULL); } if (ohci_allocate_tds_for_tw(ohcip, tw, td_count) == USB_SUCCESS) { tw->tw_num_tds = (uint_t)td_count; } else { ohci_deallocate_tw_resources(ohcip, pp, tw); return (NULL); } if (pipe_dir == USB_EP_DIR_IN) { if (ohci_allocate_periodic_in_resource(ohcip, pp, tw, flags) != USB_SUCCESS) { ohci_deallocate_tw_resources(ohcip, pp, tw); return (NULL); } tw->tw_direction = HC_TD_IN; } else { if (tw->tw_length) { uchar_t *p; int i; ASSERT(isoc_reqp->isoc_data != NULL); p = isoc_reqp->isoc_data->b_rptr; /* Copy the data into the message */ for (i = 0; i < td_count; i++) { ddi_rep_put8( tw->tw_isoc_bufs[i].mem_handle, p, (uint8_t *)tw->tw_isoc_bufs[i].buf_addr, tw->tw_isoc_bufs[i].length, DDI_DEV_AUTOINCR); p += tw->tw_isoc_bufs[i].length; } } tw->tw_curr_xfer_reqp = (usb_opaque_t)isoc_reqp; tw->tw_direction = HC_TD_OUT; } /* * Initialize the callback and any callback * data required when the td completes. */ tw->tw_handle_td = ohci_handle_isoc_td; tw->tw_handle_callback_value = NULL; return (tw); } /* * ohci_insert_isoc_req: * * Insert an isochronous request into the Host Controller's * isochronous list. If there is an error, it will appropriately * deallocate the unused resources. */ static int ohci_insert_isoc_req( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, uint_t flags) { size_t curr_isoc_xfer_offset, curr_isoc_xfer_len; uint_t isoc_pkts, residue, count; uint_t i, ctrl, frame_count; uint_t error = USB_SUCCESS; usb_isoc_req_t *curr_isoc_reqp; usb_isoc_pkt_descr_t *curr_isoc_pkt_descr; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_isoc_req: flags = 0x%x", flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Get the current isochronous request and packet * descriptor pointers. */ curr_isoc_reqp = (usb_isoc_req_t *)tw->tw_curr_xfer_reqp; curr_isoc_pkt_descr = curr_isoc_reqp->isoc_pkt_descr; ASSERT(curr_isoc_reqp != NULL); ASSERT(curr_isoc_reqp->isoc_pkt_descr != NULL); /* * Save address of first usb isochronous packet descriptor. */ tw->tw_curr_isoc_pktp = curr_isoc_reqp->isoc_pkt_descr; /* Insert all the isochronous TDs */ for (count = 0, curr_isoc_xfer_offset = 0, isoc_pkts = 0; count < tw->tw_num_tds; count++) { residue = curr_isoc_reqp->isoc_pkts_count - isoc_pkts; /* Check for inserting residue data */ if ((count == (tw->tw_num_tds - 1)) && (residue < OHCI_ISOC_PKTS_PER_TD)) { frame_count = residue; } else { frame_count = OHCI_ISOC_PKTS_PER_TD; } curr_isoc_pkt_descr = tw->tw_curr_isoc_pktp; /* * Calculate length of isochronous transfer * for the current TD. */ for (i = 0, curr_isoc_xfer_len = 0; i < frame_count; i++, curr_isoc_pkt_descr++) { curr_isoc_xfer_len += curr_isoc_pkt_descr->isoc_pkt_length; } /* * Program the td control field by checking whether this * is the last td.
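 * Only the final TD asks for a prompt completion interrupt (HC_TD_0I);
 * the intermediate TDs use HC_TD_6I, letting the OHCI DelayInterrupt
 * mechanism coalesce their completions instead of interrupting once per
 * TD (assuming, as the names suggest, that HC_TD_0I and HC_TD_6I encode
 * DelayInterrupt values of 0 and 6 frames respectively).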
*/ if (count == (tw->tw_num_tds - 1)) { ctrl = ((((frame_count - 1) << HC_ITD_FC_SHIFT) & HC_ITD_FC) | HC_TD_DT_0 | HC_TD_0I); } else { ctrl = ((((frame_count - 1) << HC_ITD_FC_SHIFT) & HC_ITD_FC) | HC_TD_DT_0 | HC_TD_6I); } /* Insert the TD into the endpoint */ if ((error = ohci_insert_hc_td(ohcip, ctrl, count, curr_isoc_xfer_len, 0, pp, tw)) != USB_SUCCESS) { tw->tw_num_tds = count; tw->tw_length = curr_isoc_xfer_offset; break; } isoc_pkts += frame_count; tw->tw_curr_isoc_pktp += frame_count; curr_isoc_xfer_offset += curr_isoc_xfer_len; } if (error != USB_SUCCESS) { /* Free periodic in resources */ if (tw->tw_direction == USB_EP_DIR_IN) { ohci_deallocate_periodic_in_resource(ohcip, pp, tw); } /* Free all resources if IN or if count == 0 (for both IN/OUT) */ if (tw->tw_direction == USB_EP_DIR_IN || count == 0) { ohci_deallocate_tw_resources(ohcip, pp, tw); if (pp->pp_cur_periodic_req_cnt) { /* * Set pipe state to stop polling and * error to no resource. Don't insert * any more isochronous polling requests. */ pp->pp_state = OHCI_PIPE_STATE_STOP_POLLING; pp->pp_error = error; } else { /* Set periodic in pipe state to idle */ pp->pp_state = OHCI_PIPE_STATE_IDLE; } } } else { /* * Reset back to the address of first usb isochronous * packet descriptor. */ tw->tw_curr_isoc_pktp = curr_isoc_reqp->isoc_pkt_descr; /* Reset the CONTINUE flag */ pp->pp_flag &= ~OHCI_ISOC_XFER_CONTINUE; } return (error); } /* * ohci_insert_hc_td: * * Insert a Transfer Descriptor (TD) on an Endpoint Descriptor (ED). * Always returns USB_SUCCESS, except for ISOCH, where programming the * starting frame number may fail. */ static int ohci_insert_hc_td( ohci_state_t *ohcip, uint_t hctd_ctrl, uint32_t hctd_dma_offs, size_t hctd_length, uint32_t hctd_ctrl_phase, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { ohci_td_t *new_dummy; ohci_td_t *cpu_current_dummy; ohci_ed_t *ept = pp->pp_ept; int error; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Retrieve preallocated td from the TW */ new_dummy = tw->tw_hctd_free_list; ASSERT(new_dummy != NULL); tw->tw_hctd_free_list = ohci_td_iommu_to_cpu(ohcip, Get_TD(new_dummy->hctd_tw_next_td)); Set_TD(new_dummy->hctd_tw_next_td, NULL); /* Fill in the current dummy */ cpu_current_dummy = (ohci_td_t *) (ohci_td_iommu_to_cpu(ohcip, Get_ED(ept->hced_tailp))); /* * Fill in the current dummy td and * add the new dummy to the end. */ ohci_fill_in_td(ohcip, cpu_current_dummy, new_dummy, hctd_ctrl, hctd_dma_offs, hctd_length, hctd_ctrl_phase, pp, tw); /* * If this is an isochronous TD, first write the proper * starting usb frame number in which this TD must * be processed. After writing the frame number, * insert this TD into the ED's list. */ if ((pp->pp_pipe_handle->p_ep.bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { error = ohci_insert_td_with_frame_number( ohcip, pp, tw, cpu_current_dummy, new_dummy); if (error != USB_SUCCESS) { /* Reset the current dummy back to a dummy */ bzero((char *)cpu_current_dummy, sizeof (ohci_td_t)); Set_TD(cpu_current_dummy->hctd_state, HC_TD_DUMMY); /* return the new dummy back to the free list */ bzero((char *)new_dummy, sizeof (ohci_td_t)); Set_TD(new_dummy->hctd_state, HC_TD_DUMMY); if (tw->tw_hctd_free_list != NULL) { Set_TD(new_dummy->hctd_tw_next_td, ohci_td_cpu_to_iommu(ohcip, tw->tw_hctd_free_list)); } tw->tw_hctd_free_list = new_dummy; return (error); } } else { /* * For control, bulk and interrupt TD, just * add the new dummy to the ED's list. When * this occurs, the Host Controller will see * the newly filled-in dummy TD.
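 * This is the standard OHCI enqueue discipline: the ED's tail always
 * points at a dummy TD, the dummy is filled in with the real work, and a
 * fresh dummy is appended by advancing TailP, so the controller never
 * observes a partially initialized descriptor.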
*/ Set_ED(ept->hced_tailp, (ohci_td_cpu_to_iommu(ohcip, new_dummy))); } /* Insert this td onto the tw */ ohci_insert_td_on_tw(ohcip, tw, cpu_current_dummy); return (USB_SUCCESS); } /* * ohci_allocate_td_from_pool: * * Allocate a Transfer Descriptor (TD) from the TD buffer pool. */ static ohci_td_t * ohci_allocate_td_from_pool(ohci_state_t *ohcip) { int i, state; ohci_td_t *td; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Search for a blank Transfer Descriptor (TD) * in the TD buffer pool. */ for (i = 0; i < ohci_td_pool_size; i ++) { state = Get_TD(ohcip->ohci_td_pool_addr[i].hctd_state); if (state == HC_TD_FREE) { break; } } if (i >= ohci_td_pool_size) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_allocate_td_from_pool: TD exhausted"); return (NULL); } USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_allocate_td_from_pool: Allocated %d", i); /* Create a new dummy for the end of the TD list */ td = &ohcip->ohci_td_pool_addr[i]; USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_td_from_pool: td 0x%p", (void *)td); /* Mark the newly allocated TD as a dummy */ Set_TD(td->hctd_state, HC_TD_DUMMY); return (td); } /* * ohci_fill_in_td: * * Fill in the fields of a Transfer Descriptor (TD). * * hctd_dma_offs - different meanings for non-isoc and isoc TDs: * starting offset into the TW buffer for a non-isoc TD * and the index into the isoc TD list for an isoc TD. * For non-isoc TDs, the starting offset should be 4k * aligned and the TDs in one transfer must be filled in * increasing order. */ static void ohci_fill_in_td( ohci_state_t *ohcip, ohci_td_t *td, ohci_td_t *new_dummy, uint_t hctd_ctrl, uint32_t hctd_dma_offs, size_t hctd_length, uint32_t hctd_ctrl_phase, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_fill_in_td: td 0x%p bufoffs 0x%x len 0x%lx", (void *)td, hctd_dma_offs, hctd_length); /* Assert that the td to be filled in is a dummy */ ASSERT(Get_TD(td->hctd_state) == HC_TD_DUMMY); /* Change TD's state Active */ Set_TD(td->hctd_state, HC_TD_ACTIVE); /* Update the TD special fields */ if ((pp->pp_pipe_handle->p_ep.bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { ohci_init_itd(ohcip, tw, hctd_ctrl, hctd_dma_offs, td); } else { /* Update the dummy with control information */ Set_TD(td->hctd_ctrl, (hctd_ctrl | HC_TD_CC_NA)); ohci_init_td(ohcip, tw, hctd_dma_offs, hctd_length, td); } /* The current dummy now points to the new dummy */ Set_TD(td->hctd_next_td, (ohci_td_cpu_to_iommu(ohcip, new_dummy))); /* * For Control transfer, hctd_ctrl_phase is a valid field. */ if (hctd_ctrl_phase) { Set_TD(td->hctd_ctrl_phase, hctd_ctrl_phase); } /* Print the td */ ohci_print_td(ohcip, td); /* Fill in the wrapper portion of the TD */ /* Set the transfer wrapper */ ASSERT(tw != NULL); ASSERT(tw->tw_id != 0); Set_TD(td->hctd_trans_wrapper, tw->tw_id); Set_TD(td->hctd_tw_next_td, NULL); } /* * ohci_init_td: * * Initialize the buffer address portion of non-isoc Transfer * Descriptor (TD). */ void ohci_init_td( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, uint32_t hctd_dma_offs, size_t hctd_length, ohci_td_t *td) { uint32_t page_addr, start_addr = 0, end_addr = 0; size_t buf_len = hctd_length; int rem_len, i; /* * TDs must be filled in increasing DMA offset order. * tw_dma_offs is initialized to be 0 at TW creation and * is only increased in this function. 
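 * A general OHCI TD can address at most two physical pages (its
 * CurrentBufferPointer and BufferEnd may straddle a single 4K boundary),
 * which is why the cookie walk below iterates at most twice.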
*/ ASSERT(buf_len == 0 || hctd_dma_offs >= tw->tw_dma_offs); Set_TD(td->hctd_xfer_offs, hctd_dma_offs); Set_TD(td->hctd_xfer_len, buf_len); /* Computing the starting buffer address and end buffer address */ for (i = 0; (i < 2) && (buf_len > 0); i++) { /* Advance to the next DMA cookie if necessary */ if ((tw->tw_dma_offs + tw->tw_cookie.dmac_size) <= hctd_dma_offs) { /* * tw_dma_offs always points to the starting offset * of a cookie */ tw->tw_dma_offs += tw->tw_cookie.dmac_size; ddi_dma_nextcookie(tw->tw_dmahandle, &tw->tw_cookie); tw->tw_cookie_idx++; ASSERT(tw->tw_cookie_idx < tw->tw_ncookies); } ASSERT((tw->tw_dma_offs + tw->tw_cookie.dmac_size) > hctd_dma_offs); /* * Counting the remained buffer length to be filled in * the TD for current DMA cookie */ rem_len = (tw->tw_dma_offs + tw->tw_cookie.dmac_size) - hctd_dma_offs; /* Get the beginning address of the buffer */ page_addr = (hctd_dma_offs - tw->tw_dma_offs) + tw->tw_cookie.dmac_address; ASSERT((page_addr % OHCI_4K_ALIGN) == 0); if (i == 0) { start_addr = page_addr; } USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_init_td: page_addr 0x%x dmac_size " "0x%lx idx %d", page_addr, tw->tw_cookie.dmac_size, tw->tw_cookie_idx); if (buf_len <= OHCI_MAX_TD_BUF_SIZE) { ASSERT(buf_len <= rem_len); end_addr = page_addr + buf_len - 1; buf_len = 0; break; } else { ASSERT(rem_len >= OHCI_MAX_TD_BUF_SIZE); buf_len -= OHCI_MAX_TD_BUF_SIZE; hctd_dma_offs += OHCI_MAX_TD_BUF_SIZE; } } ASSERT(buf_len == 0); Set_TD(td->hctd_cbp, start_addr); Set_TD(td->hctd_buf_end, end_addr); } /* * ohci_init_itd: * * Initialize the buffer address portion of isoc Transfer Descriptor (TD). */ static void ohci_init_itd( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, uint_t hctd_ctrl, uint32_t index, ohci_td_t *td) { uint32_t start_addr, end_addr, offset, offset_addr; ohci_isoc_buf_t *bufp; size_t buf_len; uint_t buf, fc, toggle, flag; usb_isoc_pkt_descr_t *temp_pkt_descr; int i; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_init_itd: ctrl = 0x%x", hctd_ctrl); /* * Write control information except starting * usb frame number. */ Set_TD(td->hctd_ctrl, (hctd_ctrl | HC_TD_CC_NA)); bufp = &tw->tw_isoc_bufs[index]; Set_TD(td->hctd_xfer_offs, index); Set_TD(td->hctd_xfer_len, bufp->length); start_addr = bufp->cookie.dmac_address; ASSERT((start_addr % OHCI_4K_ALIGN) == 0); buf_len = bufp->length; if (bufp->ncookies == OHCI_DMA_ATTR_TD_SGLLEN) { buf_len = bufp->length - bufp->cookie.dmac_size; ddi_dma_nextcookie(bufp->dma_handle, &bufp->cookie); } end_addr = bufp->cookie.dmac_address + buf_len - 1; /* * For an isochronous transfer, the hctd_cbp contains, * the 4k page, and not the actual start of the buffer. */ Set_TD(td->hctd_cbp, ((uint32_t)start_addr & HC_ITD_PAGE_MASK)); Set_TD(td->hctd_buf_end, end_addr); fc = (hctd_ctrl & HC_ITD_FC) >> HC_ITD_FC_SHIFT; toggle = 0; buf = start_addr; /* * Get the address of first isochronous data packet * for the current isochronous TD. 
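 * Each isochronous TD carries up to eight 16-bit packet offsets packed
 * two per 32-bit word; the even/odd toggle below selects the low or high
 * half-word for each packet in turn.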
*/ temp_pkt_descr = tw->tw_curr_isoc_pktp; /* The offsets are actually offsets into the page */ for (i = 0; i <= fc; i++) { offset_addr = (uint32_t)((buf & HC_ITD_OFFSET_ADDR) | (HC_ITD_OFFSET_CC)); flag = ((start_addr & HC_ITD_PAGE_MASK) ^ (buf & HC_ITD_PAGE_MASK)); if (flag) { offset_addr |= HC_ITD_4KBOUNDARY_CROSS; } if (toggle) { offset = (uint32_t)((offset_addr << HC_ITD_OFFSET_SHIFT) & HC_ITD_ODD_OFFSET); Set_TD(td->hctd_offsets[i / 2], Get_TD(td->hctd_offsets[i / 2]) | offset); toggle = 0; } else { offset = (uint32_t)(offset_addr & HC_ITD_EVEN_OFFSET); Set_TD(td->hctd_offsets[i / 2], Get_TD(td->hctd_offsets[i / 2]) | offset); toggle = 1; } buf = (uint32_t)(buf + temp_pkt_descr->isoc_pkt_length); temp_pkt_descr++; } } /* * ohci_insert_td_with_frame_number: * * Insert the current isochronous TD into the ED's list, with the proper * usb frame number in which this TD can be processed. */ static int ohci_insert_td_with_frame_number( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *current_td, ohci_td_t *dummy_td) { usb_isoc_req_t *isoc_reqp = (usb_isoc_req_t *)tw->tw_curr_xfer_reqp; usb_frame_number_t current_frame_number, start_frame_number; uint_t ddic, ctrl, isoc_pkts; ohci_ed_t *ept = pp->pp_ept; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_td_with_frame_number:" "isoc flags 0x%x", isoc_reqp->isoc_attributes); /* Get the TD ctrl information */ isoc_pkts = ((Get_TD(current_td->hctd_ctrl) & HC_ITD_FC) >> HC_ITD_FC_SHIFT) + 1; /* * Enter critical, while programming the usb frame number * and inserting current isochronous TD into the ED's list. */ ddic = ddi_enter_critical(); /* Get the current frame number */ current_frame_number = ohci_get_current_frame_number(ohcip); /* Check the given isochronous flags */ switch (isoc_reqp->isoc_attributes & (USB_ATTRS_ISOC_START_FRAME | USB_ATTRS_ISOC_XFER_ASAP)) { case USB_ATTRS_ISOC_START_FRAME: /* Starting frame number is specified */ if (pp->pp_flag & OHCI_ISOC_XFER_CONTINUE) { /* Get the starting usb frame number */ start_frame_number = pp->pp_next_frame_number; } else { /* Check for the Starting usb frame number */ if ((isoc_reqp->isoc_frame_no == 0) || ((isoc_reqp->isoc_frame_no + isoc_reqp->isoc_pkts_count) < current_frame_number)) { /* Exit the critical */ ddi_exit_critical(ddic); USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_td_with_frame_number:" "Invalid starting frame number"); return (USB_INVALID_START_FRAME); } /* Get the starting usb frame number */ start_frame_number = isoc_reqp->isoc_frame_no; pp->pp_next_frame_number = 0; } break; case USB_ATTRS_ISOC_XFER_ASAP: /* ohci has to specify the starting frame number */ if ((pp->pp_next_frame_number) && (pp->pp_next_frame_number > current_frame_number)) { /* * Get the next usb frame number. */ start_frame_number = pp->pp_next_frame_number; } else { /* * Add appropriate offset to the current usb * frame number and use it as a starting frame * number.
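 * The offset gives a few frames of headroom so that the TD can be
 * linked onto the ED before the controller reaches the chosen
 * starting frame.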
*/ start_frame_number = current_frame_number + OHCI_FRAME_OFFSET; } if (!(pp->pp_flag & OHCI_ISOC_XFER_CONTINUE)) { isoc_reqp->isoc_frame_no = start_frame_number; } break; default: /* Exit the critical */ ddi_exit_critical(ddic); USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_td_with_frame_number: Either starting " "frame number or ASAP flags are not set, attrs = 0x%x", isoc_reqp->isoc_attributes); return (USB_NO_FRAME_NUMBER); } /* Get the TD ctrl information */ ctrl = Get_TD(current_td->hctd_ctrl) & (~(HC_ITD_SF)); /* Set the frame number field */ Set_TD(current_td->hctd_ctrl, ctrl | (start_frame_number & HC_ITD_SF)); /* * Add the new dummy to the ED's list. When this occurs, * the Host Controller will see the newly filled-in dummy TD. */ Set_ED(ept->hced_tailp, (ohci_td_cpu_to_iommu(ohcip, dummy_td))); /* Exit the critical */ ddi_exit_critical(ddic); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_insert_td_with_frame_number:" "current frame number 0x%llx start frame number 0x%llx", (unsigned long long)current_frame_number, (unsigned long long)start_frame_number); /* * Increment this saved frame number by the number of data * packets that need to be transferred. */ pp->pp_next_frame_number = start_frame_number + isoc_pkts; /* * Set the OHCI_ISOC_XFER_CONTINUE flag in order to send the * remaining isochronous packets of the current isoc request * in subsequent frames. */ pp->pp_flag |= OHCI_ISOC_XFER_CONTINUE; return (USB_SUCCESS); } /* * ohci_insert_td_on_tw: * * The transfer wrapper keeps a list of all Transfer Descriptors (TD) that * are allocated for this transfer. Insert a TD onto this list. The list * of TD's does not include the dummy TD that is at the end of the list of * TD's for the endpoint. */ static void ohci_insert_td_on_tw( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, ohci_td_t *td) { /* * Set the next pointer to NULL because * this is the last TD on the list. */ Set_TD(td->hctd_tw_next_td, NULL); if (tw->tw_hctd_head == NULL) { ASSERT(tw->tw_hctd_tail == NULL); tw->tw_hctd_head = td; tw->tw_hctd_tail = td; } else { ohci_td_t *dummy = (ohci_td_t *)tw->tw_hctd_tail; ASSERT(dummy != NULL); ASSERT(dummy != td); ASSERT(Get_TD(td->hctd_state) != HC_TD_DUMMY); /* Add the td to the end of the list */ Set_TD(dummy->hctd_tw_next_td, ohci_td_cpu_to_iommu(ohcip, td)); tw->tw_hctd_tail = td; ASSERT(Get_TD(td->hctd_tw_next_td) == 0); } } /* * ohci_traverse_tds: * NOTE: This function is also called from POLLED MODE. * * Traverse the list of TD's for an endpoint. Since the endpoint is marked * as sKipped, the Host Controller (HC) is no longer accessing these TD's. * Remove all the TD's that are attached to the endpoint.
*/ void ohci_traverse_tds( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_trans_wrapper_t *tw; ohci_ed_t *ept; ohci_pipe_private_t *pp; uint32_t addr; ohci_td_t *tailp, *headp, *next; pp = (ohci_pipe_private_t *)ph->p_hcd_private; ept = pp->pp_ept; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: ph = 0x%p ept = 0x%p", (void *)ph, (void *)ept); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); addr = Get_ED(ept->hced_headp) & (uint32_t)HC_EPT_TD_HEAD; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: addr (head) = 0x%x", addr); headp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, addr)); addr = Get_ED(ept->hced_tailp) & (uint32_t)HC_EPT_TD_TAIL; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: addr (tail) = 0x%x", addr); tailp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, addr)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: cpu head = 0x%p cpu tail = 0x%p", (void *)headp, (void *)tailp); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: iommu head = 0x%x iommu tail = 0x%x", ohci_td_cpu_to_iommu(ohcip, headp), ohci_td_cpu_to_iommu(ohcip, tailp)); /* * Traverse the list of TD's that are currently on the endpoint. * These TD's have not been processed and will not be processed * because the endpoint processing is stopped. */ while (headp != tailp) { next = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, (Get_TD(headp->hctd_next_td) & HC_EPT_TD_TAIL))); tw = (ohci_trans_wrapper_t *)OHCI_LOOKUP_ID( (uint32_t)Get_TD(headp->hctd_trans_wrapper)); /* Stop the transfer timer */ ohci_stop_xfer_timer(ohcip, tw, OHCI_REMOVE_XFER_ALWAYS); ohci_deallocate_td(ohcip, headp); headp = next; } /* Both head and tail pointers must be the same */ USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: head = 0x%p tail = 0x%p", (void *)headp, (void *)tailp); /* Update the pointer in the endpoint descriptor */ Set_ED(ept->hced_headp, (ohci_td_cpu_to_iommu(ohcip, headp))); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: new head = 0x%x", (ohci_td_cpu_to_iommu(ohcip, headp))); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_traverse_tds: tailp = 0x%x headp = 0x%x", (Get_ED(ept->hced_tailp) & HC_EPT_TD_TAIL), (Get_ED(ept->hced_headp) & HC_EPT_TD_HEAD)); ASSERT((Get_ED(ept->hced_tailp) & HC_EPT_TD_TAIL) == (Get_ED(ept->hced_headp) & HC_EPT_TD_HEAD)); } /* * ohci_done_list_tds: * * There may be TD's on the done list that have not been processed yet. Walk * through these TD's and mark them as RECLAIM. All the mappings for the TD * will be torn down, so the interrupt handler is alerted of this fact through * the RECLAIM flag. */ static void ohci_done_list_tds( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; ohci_trans_wrapper_t *head_tw = pp->pp_tw_head; ohci_trans_wrapper_t *next_tw; ohci_td_t *head_td, *next_td; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_done_list_tds:"); /* Process the transfer wrappers for this pipe */ next_tw = head_tw; while (next_tw) { head_td = (ohci_td_t *)next_tw->tw_hctd_head; next_td = head_td; if (head_td) { /* * Walk through each TD for this transfer * wrapper. If a TD still exists, then it * is currently on the done list.
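 * Marking a TD HC_TD_RECLAIM tells the done list processing in the
 * interrupt handler to simply free it rather than complete the request.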
*/ while (next_td) { /* To free TD, set TD state to RECLAIM */ Set_TD(next_td->hctd_state, HC_TD_RECLAIM); Set_TD(next_td->hctd_trans_wrapper, NULL); next_td = ohci_td_iommu_to_cpu(ohcip, Get_TD(next_td->hctd_tw_next_td)); } } /* Stop the transfer timer */ ohci_stop_xfer_timer(ohcip, next_tw, OHCI_REMOVE_XFER_ALWAYS); next_tw = next_tw->tw_next; } } /* * Remove old_td from tw and update the links. */ void ohci_unlink_td_from_tw( ohci_state_t *ohcip, ohci_td_t *old_td, ohci_trans_wrapper_t *tw) { ohci_td_t *next, *head, *tail; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_unlink_td_from_tw: ohcip = 0x%p, old_td = 0x%p, tw = 0x%p", (void *)ohcip, (void *)old_td, (void *)tw); if (old_td == NULL || tw == NULL) { return; } head = tw->tw_hctd_head; tail = tw->tw_hctd_tail; if (head == NULL) { return; } /* if this old_td is on head */ if (old_td == head) { if (old_td == tail) { tw->tw_hctd_head = NULL; tw->tw_hctd_tail = NULL; } else { tw->tw_hctd_head = ohci_td_iommu_to_cpu(ohcip, Get_TD(head->hctd_tw_next_td)); } return; } /* find this old_td's position in the tw */ next = ohci_td_iommu_to_cpu(ohcip, Get_TD(head->hctd_tw_next_td)); while (next && (old_td != next)) { head = next; next = ohci_td_iommu_to_cpu(ohcip, Get_TD(next->hctd_tw_next_td)); } /* unlink the found old_td from the tw */ if (old_td == next) { Set_TD(head->hctd_tw_next_td, Get_TD(next->hctd_tw_next_td)); if (old_td == tail) { tw->tw_hctd_tail = head; } } } /* * ohci_deallocate_td: * NOTE: This function is also called from POLLED MODE. * * Deallocate a Host Controller's (HC) Transfer Descriptor (TD). */ void ohci_deallocate_td( ohci_state_t *ohcip, ohci_td_t *old_td) { ohci_trans_wrapper_t *tw; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_deallocate_td: old_td = 0x%p", (void *)old_td); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Obtain the transaction wrapper and tw will be * NULL for the dummy and for the reclaim TD's. */ if ((Get_TD(old_td->hctd_state) == HC_TD_DUMMY) || (Get_TD(old_td->hctd_state) == HC_TD_RECLAIM)) { tw = (ohci_trans_wrapper_t *)((uintptr_t) Get_TD(old_td->hctd_trans_wrapper)); ASSERT(tw == NULL); } else { tw = (ohci_trans_wrapper_t *) OHCI_LOOKUP_ID((uint32_t) Get_TD(old_td->hctd_trans_wrapper)); ASSERT(tw != NULL); } /* * If this TD should be reclaimed, don't try to access its * transfer wrapper. */ if ((Get_TD(old_td->hctd_state) != HC_TD_RECLAIM) && tw) { ohci_unlink_td_from_tw(ohcip, old_td, tw); } bzero((void *)old_td, sizeof (ohci_td_t)); Set_TD(old_td->hctd_state, HC_TD_FREE); USB_DPRINTF_L3(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_deallocate_td: td 0x%p", (void *)old_td); } /* * ohci_td_cpu_to_iommu: * NOTE: This function is also called from POLLED MODE. * * This function converts the given Transfer Descriptor (TD) CPU address * to an IO address. */ uint32_t ohci_td_cpu_to_iommu( ohci_state_t *ohcip, ohci_td_t *addr) { uint32_t td; td = (uint32_t)ohcip->ohci_td_pool_cookie.dmac_address + (uint32_t)((uintptr_t)addr - (uintptr_t)(ohcip->ohci_td_pool_addr)); ASSERT((ohcip->ohci_td_pool_cookie.dmac_address + (uint32_t) (sizeof (ohci_td_t) * (addr - ohcip->ohci_td_pool_addr))) == (ohcip->ohci_td_pool_cookie.dmac_address + (uint32_t)((uintptr_t)addr - (uintptr_t) (ohcip->ohci_td_pool_addr)))); ASSERT(td >= ohcip->ohci_td_pool_cookie.dmac_address); ASSERT(td <= ohcip->ohci_td_pool_cookie.dmac_address + sizeof (ohci_td_t) * ohci_td_pool_size); return (td); } /* * ohci_td_iommu_to_cpu: * NOTE: This function is also called from POLLED MODE.
* * This function converts for the given Transfer Descriptor (TD) IO address * to CPU address. */ ohci_td_t * ohci_td_iommu_to_cpu( ohci_state_t *ohcip, uintptr_t addr) { ohci_td_t *td; if (addr == 0) return (NULL); td = (ohci_td_t *)((uintptr_t) (addr - ohcip->ohci_td_pool_cookie.dmac_address) + (uintptr_t)ohcip->ohci_td_pool_addr); ASSERT(td >= ohcip->ohci_td_pool_addr); ASSERT((uintptr_t)td <= (uintptr_t)ohcip->ohci_td_pool_addr + (uintptr_t)(sizeof (ohci_td_t) * ohci_td_pool_size)); return (td); } /* * ohci_allocate_tds_for_tw: * * Allocate n Transfer Descriptors (TD) from the TD buffer pool and places it * into the TW. * * Returns USB_NO_RESOURCES if it was not able to allocate all the requested TD * otherwise USB_SUCCESS. */ int ohci_allocate_tds_for_tw( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, size_t td_count) { ohci_td_t *td; uint32_t td_addr; int i; int error = USB_SUCCESS; for (i = 0; i < td_count; i++) { td = ohci_allocate_td_from_pool(ohcip); if (td == NULL) { error = USB_NO_RESOURCES; USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_tds_for_tw: " "Unable to allocate %lu TDs", td_count); break; } if (tw->tw_hctd_free_list != NULL) { td_addr = ohci_td_cpu_to_iommu(ohcip, tw->tw_hctd_free_list); Set_TD(td->hctd_tw_next_td, td_addr); } tw->tw_hctd_free_list = td; } return (error); } /* * ohci_allocate_tw_resources: * * Allocate a Transaction Wrapper (TW) and n Transfer Descriptors (TD) * from the TD buffer pool and places it into the TW. It does an all * or nothing transaction. * * Returns NULL if there is insufficient resources otherwise TW. */ static ohci_trans_wrapper_t * ohci_allocate_tw_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t tw_length, usb_flags_t usb_flags, size_t td_count) { ohci_trans_wrapper_t *tw; tw = ohci_create_transfer_wrapper(ohcip, pp, tw_length, usb_flags); if (tw == NULL) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_tw_resources: Unable to allocate TW"); } else { if (ohci_allocate_tds_for_tw(ohcip, tw, td_count) == USB_SUCCESS) { tw->tw_num_tds = (uint_t)td_count; } else { ohci_deallocate_tw_resources(ohcip, pp, tw); tw = NULL; } } return (tw); } /* * ohci_free_tw_tds_resources: * * Free all allocated resources for Transaction Wrapper (TW). * Does not free the TW itself. */ static void ohci_free_tw_tds_resources( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw) { ohci_td_t *td; ohci_td_t *temp_td; td = tw->tw_hctd_free_list; while (td != NULL) { /* Save the pointer to the next td before destroying it */ temp_td = ohci_td_iommu_to_cpu(ohcip, Get_TD(td->hctd_tw_next_td)); ohci_deallocate_td(ohcip, td); td = temp_td; } tw->tw_hctd_free_list = NULL; } /* * Transfer Wrapper functions * * ohci_create_transfer_wrapper: * * Create a Transaction Wrapper (TW) for non-isoc transfer types * and this involves the allocating of DMA resources. 
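 * The standard DDI sequence is used below: ddi_dma_alloc_handle(), then
 * ddi_dma_mem_alloc(), then ddi_dma_addr_bind_handle(); a failure at any
 * step unwinds whatever the earlier steps allocated.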
*/ static ohci_trans_wrapper_t * ohci_create_transfer_wrapper( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t length, uint_t usb_flags) { ddi_device_acc_attr_t dev_attr; int result; size_t real_length; ohci_trans_wrapper_t *tw; ddi_dma_attr_t dma_attr; int kmem_flag; int (*dmamem_wait)(caddr_t); usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: length = 0x%lx flags = 0x%x", length, usb_flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* isochronous pipe should not call into this function */ if ((ph->p_ep.bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { return (NULL); } /* SLEEP flag should not be used while holding mutex */ kmem_flag = KM_NOSLEEP; dmamem_wait = DDI_DMA_DONTWAIT; /* Allocate space for the transfer wrapper */ tw = kmem_zalloc(sizeof (ohci_trans_wrapper_t), kmem_flag); if (tw == NULL) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: kmem_zalloc failed"); return (NULL); } /* zero-length packet doesn't need to allocate dma memory */ if (length == 0) { goto dmadone; } /* allow sg lists for transfer wrapper dma memory */ bcopy(&ohcip->ohci_dma_attr, &dma_attr, sizeof (ddi_dma_attr_t)); dma_attr.dma_attr_sgllen = OHCI_DMA_ATTR_TW_SGLLEN; dma_attr.dma_attr_align = OHCI_DMA_ATTR_ALIGNMENT; /* Allocate the DMA handle */ result = ddi_dma_alloc_handle(ohcip->ohci_dip, &dma_attr, dmamem_wait, 0, &tw->tw_dmahandle); if (result != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: Alloc handle failed"); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } dev_attr.devacc_attr_version = DDI_DEVICE_ATTR_V0; /* The host controller will be little endian */ dev_attr.devacc_attr_endian_flags = DDI_STRUCTURE_BE_ACC; dev_attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC; /* Allocate the memory */ result = ddi_dma_mem_alloc(tw->tw_dmahandle, length, &dev_attr, DDI_DMA_CONSISTENT, dmamem_wait, NULL, (caddr_t *)&tw->tw_buf, &real_length, &tw->tw_accesshandle); if (result != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: dma_mem_alloc fail"); ddi_dma_free_handle(&tw->tw_dmahandle); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } ASSERT(real_length >= length); /* Bind the handle */ result = ddi_dma_addr_bind_handle(tw->tw_dmahandle, NULL, (caddr_t)tw->tw_buf, real_length, DDI_DMA_RDWR|DDI_DMA_CONSISTENT, dmamem_wait, NULL, &tw->tw_cookie, &tw->tw_ncookies); if (result != DDI_DMA_MAPPED) { ohci_decode_ddi_dma_addr_bind_handle_result(ohcip, result); ddi_dma_mem_free(&tw->tw_accesshandle); ddi_dma_free_handle(&tw->tw_dmahandle); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } tw->tw_cookie_idx = 0; tw->tw_dma_offs = 0; dmadone: /* * Only allow one wrapper to be added at a time. Insert the * new transaction wrapper into the list for this pipe. 
*/ if (pp->pp_tw_head == NULL) { pp->pp_tw_head = tw; pp->pp_tw_tail = tw; } else { pp->pp_tw_tail->tw_next = tw; pp->pp_tw_tail = tw; } /* Store the transfer length */ tw->tw_length = length; /* Store a back pointer to the pipe private structure */ tw->tw_pipe_private = pp; /* Store the transfer type - synchronous or asynchronous */ tw->tw_flags = usb_flags; /* Get and Store 32bit ID */ tw->tw_id = OHCI_GET_ID((void *)tw); ASSERT(tw->tw_id != 0); USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: tw = 0x%p, ncookies = %u", (void *)tw, tw->tw_ncookies); return (tw); } /* * Transfer Wrapper functions * * ohci_create_isoc_transfer_wrapper: * * Create a Transaction Wrapper (TW) for isoc transfer * and this involves the allocating of DMA resources. */ static ohci_trans_wrapper_t * ohci_create_isoc_transfer_wrapper( ohci_state_t *ohcip, ohci_pipe_private_t *pp, size_t length, usb_isoc_pkt_descr_t *descr, ushort_t pkt_count, size_t td_count, uint_t usb_flags) { ddi_device_acc_attr_t dev_attr; int result; size_t real_length, xfer_size; uint_t ccount; ohci_trans_wrapper_t *tw; ddi_dma_attr_t dma_attr; int kmem_flag; uint_t i, j, frame_count, residue; int (*dmamem_wait)(caddr_t); usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; usb_isoc_pkt_descr_t *isoc_pkt_descr = descr; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: length = 0x%lx flags = 0x%x", length, usb_flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* non-isochronous pipe should not call into this function */ if ((ph->p_ep.bmAttributes & USB_EP_ATTR_MASK) != USB_EP_ATTR_ISOCH) { return (NULL); } /* SLEEP flag should not be used in interrupt context */ if (servicing_interrupt()) { kmem_flag = KM_NOSLEEP; dmamem_wait = DDI_DMA_DONTWAIT; } else { kmem_flag = KM_SLEEP; dmamem_wait = DDI_DMA_SLEEP; } /* Allocate space for the transfer wrapper */ tw = kmem_zalloc(sizeof (ohci_trans_wrapper_t), kmem_flag); if (tw == NULL) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_transfer_wrapper: kmem_zalloc failed"); return (NULL); } /* Allocate space for the isoc buffer handles */ tw->tw_isoc_strtlen = sizeof (ohci_isoc_buf_t) * td_count; if ((tw->tw_isoc_bufs = kmem_zalloc(tw->tw_isoc_strtlen, kmem_flag)) == NULL) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: kmem_alloc " "isoc buffer failed"); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } /* allow sg lists for transfer wrapper dma memory */ bcopy(&ohcip->ohci_dma_attr, &dma_attr, sizeof (ddi_dma_attr_t)); dma_attr.dma_attr_sgllen = OHCI_DMA_ATTR_TD_SGLLEN; dma_attr.dma_attr_align = OHCI_DMA_ATTR_ALIGNMENT; dev_attr.devacc_attr_version = DDI_DEVICE_ATTR_V0; /* The host controller will be little endian */ dev_attr.devacc_attr_endian_flags = DDI_STRUCTURE_BE_ACC; dev_attr.devacc_attr_dataorder = DDI_STRICTORDER_ACC; residue = pkt_count % OHCI_ISOC_PKTS_PER_TD; for (i = 0; i < td_count; i++) { tw->tw_isoc_bufs[i].index = i; if ((i == (td_count - 1)) && (residue != 0)) { frame_count = residue; } else { frame_count = OHCI_ISOC_PKTS_PER_TD; } /* Allocate the DMA handle */ result = ddi_dma_alloc_handle(ohcip->ohci_dip, &dma_attr, dmamem_wait, 0, &tw->tw_isoc_bufs[i].dma_handle); if (result != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: " "Alloc handle failed"); for (j = 0; j < i; j++) { result = ddi_dma_unbind_handle( tw->tw_isoc_bufs[j].dma_handle); ASSERT(result == USB_SUCCESS); 
ddi_dma_mem_free(&tw->tw_isoc_bufs[j]. mem_handle); ddi_dma_free_handle(&tw->tw_isoc_bufs[j]. dma_handle); } kmem_free(tw->tw_isoc_bufs, tw->tw_isoc_strtlen); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } /* Compute the memory length */ for (xfer_size = 0, j = 0; j < frame_count; j++) { ASSERT(isoc_pkt_descr != NULL); xfer_size += isoc_pkt_descr->isoc_pkt_length; isoc_pkt_descr++; } /* Allocate the memory */ result = ddi_dma_mem_alloc(tw->tw_isoc_bufs[i].dma_handle, xfer_size, &dev_attr, DDI_DMA_CONSISTENT, dmamem_wait, NULL, (caddr_t *)&tw->tw_isoc_bufs[i].buf_addr, &real_length, &tw->tw_isoc_bufs[i].mem_handle); if (result != DDI_SUCCESS) { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: " "dma_mem_alloc %d fail", i); ddi_dma_free_handle(&tw->tw_isoc_bufs[i].dma_handle); for (j = 0; j < i; j++) { result = ddi_dma_unbind_handle( tw->tw_isoc_bufs[j].dma_handle); ASSERT(result == USB_SUCCESS); ddi_dma_mem_free(&tw->tw_isoc_bufs[j]. mem_handle); ddi_dma_free_handle(&tw->tw_isoc_bufs[j]. dma_handle); } kmem_free(tw->tw_isoc_bufs, tw->tw_isoc_strtlen); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } ASSERT(real_length >= xfer_size); /* Bind the handle */ result = ddi_dma_addr_bind_handle( tw->tw_isoc_bufs[i].dma_handle, NULL, (caddr_t)tw->tw_isoc_bufs[i].buf_addr, real_length, DDI_DMA_RDWR|DDI_DMA_CONSISTENT, dmamem_wait, NULL, &tw->tw_isoc_bufs[i].cookie, &ccount); if ((result == DDI_DMA_MAPPED) && (ccount <= OHCI_DMA_ATTR_TD_SGLLEN)) { tw->tw_isoc_bufs[i].length = xfer_size; tw->tw_isoc_bufs[i].ncookies = ccount; continue; } else { USB_DPRINTF_L2(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: " "Bind handle %d failed", i); if (result == DDI_DMA_MAPPED) { result = ddi_dma_unbind_handle( tw->tw_isoc_bufs[i].dma_handle); ASSERT(result == USB_SUCCESS); } ddi_dma_mem_free(&tw->tw_isoc_bufs[i].mem_handle); ddi_dma_free_handle(&tw->tw_isoc_bufs[i].dma_handle); for (j = 0; j < i; j++) { result = ddi_dma_unbind_handle( tw->tw_isoc_bufs[j].dma_handle); ASSERT(result == USB_SUCCESS); ddi_dma_mem_free(&tw->tw_isoc_bufs[j]. mem_handle); ddi_dma_free_handle(&tw->tw_isoc_bufs[j]. dma_handle); } kmem_free(tw->tw_isoc_bufs, tw->tw_isoc_strtlen); kmem_free(tw, sizeof (ohci_trans_wrapper_t)); return (NULL); } } /* * Only allow one wrapper to be added at a time. Insert the * new transaction wrapper into the list for this pipe. */ if (pp->pp_tw_head == NULL) { pp->pp_tw_head = tw; pp->pp_tw_tail = tw; } else { pp->pp_tw_tail->tw_next = tw; pp->pp_tw_tail = tw; } /* Store the transfer length */ tw->tw_length = length; /* Store the td numbers */ tw->tw_ncookies = (uint_t)td_count; /* Store a back pointer to the pipe private structure */ tw->tw_pipe_private = pp; /* Store the transfer type - synchronous or asynchronous */ tw->tw_flags = usb_flags; /* Get and Store 32bit ID */ tw->tw_id = OHCI_GET_ID((void *)tw); ASSERT(tw->tw_id != 0); USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_create_isoc_transfer_wrapper: tw = 0x%p", (void *)tw); return (tw); } /* * ohci_start_xfer_timer: * * Start the timer for the control, bulk and for one time interrupt * transfers. 
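 * Timeouts are driven by a single per-controller callout that fires once
 * a second (see ohci_start_timer) and decrements tw_timeout for every
 * wrapper on the timeout list; a transfer whose count reaches zero is
 * completed with USB_CR_TIMEOUT by ohci_xfer_timeout_handler.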
*/ /* ARGSUSED */ static void ohci_start_xfer_timer( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_xfer_timer: tw = 0x%p", (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * The timeout handling is done only for control, bulk and for * one time Interrupt transfers. * * NOTE: If the timeout is zero, assume an infinite timeout and don't * insert this transfer on the timeout list. */ if (tw->tw_timeout) { /* * Increase the timeout value by one second; this extra * second is used to halt the endpoint if the given transfer * times out. */ tw->tw_timeout++; /* * Add this transfer wrapper into the transfer timeout list. */ if (ohcip->ohci_timeout_list) { tw->tw_timeout_next = ohcip->ohci_timeout_list; } ohcip->ohci_timeout_list = tw; ohci_start_timer(ohcip); } } /* * ohci_stop_xfer_timer: * * Stop the timer for the control, bulk and for one time interrupt * transfers. */ void ohci_stop_xfer_timer( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw, uint_t flag) { timeout_id_t timer_id; USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_stop_xfer_timer: tw = 0x%p", (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * The timeout handling is done only for control, bulk * and for one time Interrupt transfers. */ if (ohcip->ohci_timeout_list == NULL) { return; } switch (flag) { case OHCI_REMOVE_XFER_IFLAST: if (tw->tw_hctd_head != tw->tw_hctd_tail) { break; } /* FALLTHRU */ case OHCI_REMOVE_XFER_ALWAYS: ohci_remove_tw_from_timeout_list(ohcip, tw); if ((ohcip->ohci_timeout_list == NULL) && (ohcip->ohci_timer_id)) { timer_id = ohcip->ohci_timer_id; /* Reset the timer id to zero */ ohcip->ohci_timer_id = 0; mutex_exit(&ohcip->ohci_int_mutex); (void) untimeout(timer_id); mutex_enter(&ohcip->ohci_int_mutex); } break; default: break; } } /* * ohci_xfer_timeout_handler: * * Control or bulk transfer timeout handler. */ static void ohci_xfer_timeout_handler(void *arg) { ohci_state_t *ohcip = (ohci_state_t *)arg; ohci_trans_wrapper_t *exp_xfer_list_head = NULL; ohci_trans_wrapper_t *exp_xfer_list_tail = NULL; ohci_trans_wrapper_t *tw, *next; ohci_td_t *td; usb_flags_t flags; USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_xfer_timeout_handler: ohcip = 0x%p", (void *)ohcip); mutex_enter(&ohcip->ohci_int_mutex); /* Set the required flags */ flags = OHCI_FLAGS_NOSLEEP | OHCI_FLAGS_DMA_SYNC; /* * Check whether the timeout handler is still valid. */ if (ohcip->ohci_timer_id) { /* Reset the timer id to zero */ ohcip->ohci_timer_id = 0; } else { mutex_exit(&ohcip->ohci_int_mutex); return; } /* Get the transfer timeout list head */ tw = ohcip->ohci_timeout_list; /* * Walk the ohci timeout list and check whether the timer * has expired for any transfers. Create a temporary list * of expired transfers and process them later.
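 * Deferring the expired transfers to a second pass keeps the list walk
 * simple and lets the ED and TD pools be synced just once before any of
 * them is completed with an error.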
*/ while (tw) { /* Get the transfer on the timeout list */ next = tw->tw_timeout_next; tw->tw_timeout--; /* * Set the sKip bit to stop all transactions on * this pipe */ if (tw->tw_timeout == 1) { ohci_modify_sKip_bit(ohcip, tw->tw_pipe_private, SET_sKip, flags); /* Reset dma sync flag */ flags &= ~OHCI_FLAGS_DMA_SYNC; } /* Remove tw from the timeout list */ if (tw->tw_timeout == 0) { ohci_remove_tw_from_timeout_list(ohcip, tw); /* Add tw to the end of expire list */ if (exp_xfer_list_head) { exp_xfer_list_tail->tw_timeout_next = tw; } else { exp_xfer_list_head = tw; } exp_xfer_list_tail = tw; tw->tw_timeout_next = NULL; } tw = next; } /* Get the expired transfer timeout list head */ tw = exp_xfer_list_head; if (tw && (flags & OHCI_FLAGS_DMA_SYNC)) { /* Sync ED and TD pool */ Sync_ED_TD_Pool(ohcip); } /* * Process the expired transfers by notifying the corresponding * client driver through the exception callback. */ while (tw) { /* Get the transfer on the expired transfer timeout list */ next = tw->tw_timeout_next; td = tw->tw_hctd_head; while (td) { /* Set TD state to TIMEOUT */ Set_TD(td->hctd_state, HC_TD_TIMEOUT); /* Get the next TD from the wrapper */ td = ohci_td_iommu_to_cpu(ohcip, Get_TD(td->hctd_tw_next_td)); } ohci_handle_error(ohcip, tw->tw_hctd_head, USB_CR_TIMEOUT); tw = next; } ohci_start_timer(ohcip); mutex_exit(&ohcip->ohci_int_mutex); } /* * ohci_remove_tw_from_timeout_list: * * Remove Control or bulk transfer from the timeout list. */ static void ohci_remove_tw_from_timeout_list( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw) { ohci_trans_wrapper_t *prev, *next; USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_remove_tw_from_timeout_list: tw = 0x%p", (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); if (ohcip->ohci_timeout_list == tw) { ohcip->ohci_timeout_list = tw->tw_timeout_next; } else { prev = ohcip->ohci_timeout_list; next = prev->tw_timeout_next; while (next && (next != tw)) { prev = next; next = next->tw_timeout_next; } if (next == tw) { prev->tw_timeout_next = next->tw_timeout_next; } } /* Reset the xfer timeout */ tw->tw_timeout_next = NULL; } /* * ohci_start_timer: * * Start the ohci timer */ static void ohci_start_timer(ohci_state_t *ohcip) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_start_timer: ohcip = 0x%p", (void *)ohcip); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Start the global timer only if the timer is not currently * running and if there are any transfers on the timeout * list. This timer will be per USB Host Controller. */ if ((!ohcip->ohci_timer_id) && (ohcip->ohci_timeout_list)) { ohcip->ohci_timer_id = timeout(ohci_xfer_timeout_handler, (void *)ohcip, drv_usectohz(1000000)); } } /* * ohci_deallocate_tw_resources: * NOTE: This function is also called from POLLED MODE. * * Deallocate a Transaction Wrapper (TW); this involves the freeing * of DMA resources. */ void ohci_deallocate_tw_resources( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { ohci_trans_wrapper_t *prev, *next; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_deallocate_tw_resources: tw = 0x%p", (void *)tw); /* * If the transfer wrapper has no Host Controller (HC) * Transfer Descriptors (TD) associated with it, then * remove the transfer wrapper.
*/ if (tw->tw_hctd_head) { ASSERT(tw->tw_hctd_tail != NULL); return; } ASSERT(tw->tw_hctd_tail == NULL); /* Make sure we return all the unused td's to the pool as well */ ohci_free_tw_tds_resources(ohcip, tw); /* * If pp->pp_tw_head and pp->pp_tw_tail are pointing to * given TW then set the head and tail equal to NULL. * Otherwise search for this TW in the linked TW's list * and then remove this TW from the list. */ if (pp->pp_tw_head == tw) { if (pp->pp_tw_tail == tw) { pp->pp_tw_head = NULL; pp->pp_tw_tail = NULL; } else { pp->pp_tw_head = tw->tw_next; } } else { prev = pp->pp_tw_head; next = prev->tw_next; while (next && (next != tw)) { prev = next; next = next->tw_next; } if (next == tw) { prev->tw_next = next->tw_next; if (pp->pp_tw_tail == tw) { pp->pp_tw_tail = prev; } } } ohci_free_tw(ohcip, tw); } /* * ohci_free_dma_resources: * * Free dma resources of a Transfer Wrapper (TW) and also free the TW. */ static void ohci_free_dma_resources( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; ohci_trans_wrapper_t *head_tw = pp->pp_tw_head; ohci_trans_wrapper_t *next_tw, *tw; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_free_dma_resources: ph = 0x%p", (void *)ph); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Process the Transfer Wrappers */ next_tw = head_tw; while (next_tw) { tw = next_tw; next_tw = tw->tw_next; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_free_dma_resources: Free TW = 0x%p", (void *)tw); ohci_free_tw(ohcip, tw); } /* Adjust the head and tail pointers */ pp->pp_tw_head = NULL; pp->pp_tw_tail = NULL; } /* * ohci_free_tw: * * Free the Transfer Wrapper (TW). */ static void ohci_free_tw( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw) { int rval, i; USB_DPRINTF_L4(PRINT_MASK_ALLOC, ohcip->ohci_log_hdl, "ohci_free_tw: tw = 0x%p", (void *)tw); ASSERT(tw != NULL); ASSERT(tw->tw_id != 0); /* Free 32bit ID */ OHCI_FREE_ID((uint32_t)tw->tw_id); if (tw->tw_isoc_strtlen > 0) { ASSERT(tw->tw_isoc_bufs != NULL); for (i = 0; i < tw->tw_ncookies; i++) { if (tw->tw_isoc_bufs[i].ncookies > 0) { rval = ddi_dma_unbind_handle( tw->tw_isoc_bufs[i].dma_handle); ASSERT(rval == USB_SUCCESS); } ddi_dma_mem_free(&tw->tw_isoc_bufs[i].mem_handle); ddi_dma_free_handle(&tw->tw_isoc_bufs[i].dma_handle); } kmem_free(tw->tw_isoc_bufs, tw->tw_isoc_strtlen); } else if (tw->tw_dmahandle != NULL) { if (tw->tw_ncookies > 0) { rval = ddi_dma_unbind_handle(tw->tw_dmahandle); ASSERT(rval == DDI_SUCCESS); } ddi_dma_mem_free(&tw->tw_accesshandle); ddi_dma_free_handle(&tw->tw_dmahandle); } /* Free transfer wrapper */ kmem_free(tw, sizeof (ohci_trans_wrapper_t)); } /* * Interrupt Handling functions */ /* * ohci_intr: * * OpenHCI (OHCI) interrupt handling routine. */ static uint_t ohci_intr(caddr_t arg1, caddr_t arg2) { ohci_state_t *ohcip = (ohci_state_t *)arg1; uint_t intr; ohci_td_t *done_head = NULL; ohci_save_intr_sts_t *ohci_intr_sts = &ohcip->ohci_save_intr_sts; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Interrupt occurred, arg1 0x%p arg2 0x%p", (void *)arg1, (void *)arg2); mutex_enter(&ohcip->ohci_int_mutex); /* Any interrupt is not handled for the suspended device. 
*/ if (ohcip->ohci_hc_soft_state == OHCI_CTLR_SUSPEND_STATE) { mutex_exit(&ohcip->ohci_int_mutex); return (DDI_INTR_UNCLAIMED); } /* * Suppose we switched to the polled mode from the normal * mode while the interrupt handler is executing; then we need to * save the interrupt status information in the polled mode * to avoid race conditions. The following flag will be set * and reset on entering & exiting of ohci interrupt handler * respectively. This flag will be used in the polled mode * to check whether the interrupt handler was running when we * switched to the polled mode from the normal mode. */ ohci_intr_sts->ohci_intr_flag = OHCI_INTR_HANDLING; /* Temporarily turn off interrupts */ Set_OpReg(hcr_intr_disable, HCR_INTR_MIE); /* * Handle any missed ohci interrupt especially WriteDoneHead * and SOF interrupts because of previous polled mode switch. */ ohci_handle_missed_intr(ohcip); /* * Now process the actual ohci interrupt events that caused * invocation of this ohci interrupt handler. */ /* * Updating the WriteDoneHead interrupt: * * (a) Host Controller * * - First Host controller (HC) checks whether WDH bit * in the interrupt status register is cleared. * * - If WDH bit is cleared then HC writes new done head * list information into the HCCA done head field. * * - Set WDH bit in the interrupt status register. * * (b) Host Controller Driver (HCD) * * - First read the interrupt status register. The HCCA * done head and WDH bit may be set or may not be set * while reading the interrupt status register. * * - Read the HCCA done head list. By this time the HC * may have updated the HCCA done head and WDH bit in * the ohci interrupt status register. * * - If done head is non-null and if WDH bit is not set * then Host Controller has updated HCCA done head & * WDH bit in the interrupt status register in between * reading the interrupt status register & HCCA done * head. In that case, the WDH bit will definitely be * set in the interrupt status register & the driver * can take it for granted. * * Now read the Interrupt Status & Interrupt enable register * to determine the exact interrupt events. */ intr = ohci_intr_sts->ohci_curr_intr_sts = (Get_OpReg(hcr_intr_status) & Get_OpReg(hcr_intr_enable)); if (ohcip->ohci_hccap) { /* Sync HCCA area */ Sync_HCCA(ohcip); /* Read and Save the HCCA DoneHead value */ done_head = ohci_intr_sts->ohci_curr_done_lst = (ohci_td_t *)(uintptr_t) (Get_HCCA(ohcip->ohci_hccap->HccaDoneHead) & HCCA_DONE_HEAD_MASK); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Done head! 0x%p", (void *)done_head); } /* Update kstat values */ ohci_do_intrs_stats(ohcip, intr); /* * Look at the HccaDoneHead, if it is a non-zero valid address, * a done list update interrupt is indicated. Otherwise, this * intr bit is cleared. */ if (ohci_check_done_head(ohcip, done_head) == USB_SUCCESS) { /* Set the WriteDoneHead bit in the interrupt events */ intr |= HCR_INTR_WDH; } else { /* Clear the WriteDoneHead bit */ intr &= ~HCR_INTR_WDH; } /* * We could have gotten a spurious interrupt. If so, do not * claim it. This is quite possible on some architectures * where more than one PCI slot shares the IRQ. If so, the * associated driver's interrupt routine may get called even * if the interrupt is not meant for it. * * By unclaiming the interrupt, the other driver gets a chance * to service its interrupt.
*/ if (!intr) { /* Reset the interrupt handler flag */ ohci_intr_sts->ohci_intr_flag &= ~OHCI_INTR_HANDLING; Set_OpReg(hcr_intr_enable, HCR_INTR_MIE); mutex_exit(&ohcip->ohci_int_mutex); return (DDI_INTR_UNCLAIMED); } USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Interrupt status 0x%x", intr); /* * Check for Frame Number Overflow. */ if (intr & HCR_INTR_FNO) { USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Frame Number Overflow"); ohci_handle_frame_number_overflow(ohcip); } if (intr & HCR_INTR_SOF) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Start of Frame"); /* Set ohci_sof_flag indicating SOF interrupt occurred */ ohcip->ohci_sof_flag = B_TRUE; /* Disable SOF interrupt */ Set_OpReg(hcr_intr_disable, HCR_INTR_SOF); /* * Call cv_broadcast on every SOF interrupt to wakeup * all the threads that are waiting for the SOF. Calling * cv_broadcast on every SOF has no effect even if no * threads are waiting for the SOF. */ cv_broadcast(&ohcip->ohci_SOF_cv); } if (intr & HCR_INTR_SO) { USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Schedule overrun"); ohcip->ohci_so_error++; } if ((intr & HCR_INTR_WDH) && (done_head)) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Done Head"); /* * Currently if we are processing one WriteDoneHead * interrupt and also if we switched to the polled * mode at least once during this time, then there * may be a chance that the Host Controller generates * one more WriteDoneHead or Start of Frame interrupt * for the normal mode since the polled code clears WDH & * SOF interrupt bits before returning to the normal * mode. Under this condition, we must not clear the * HCCA done head field & also we must not clear the WDH * interrupt bit in the interrupt status register. */ if (done_head == (ohci_td_t *)(uintptr_t) (Get_HCCA(ohcip->ohci_hccap->HccaDoneHead) & HCCA_DONE_HEAD_MASK)) { /* Reset the done head to NULL */ Set_HCCA(ohcip->ohci_hccap->HccaDoneHead, 0); } else { intr &= ~HCR_INTR_WDH; } /* Clear the current done head field */ ohci_intr_sts->ohci_curr_done_lst = NULL; ohci_traverse_done_list(ohcip, done_head); } /* Process the endpoint reclamation list */ if (ohcip->ohci_reclaim_list) { ohci_handle_endpoint_reclaimation(ohcip); } if (intr & HCR_INTR_RD) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Resume Detected"); } if (intr & HCR_INTR_RHSC) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Root hub status change"); } if (intr & HCR_INTR_OC) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Change ownership"); } if (intr & HCR_INTR_UE) { USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_intr: Unrecoverable error"); ohci_handle_ue(ohcip); } /* Acknowledge the interrupt */ Set_OpReg(hcr_intr_status, intr); /* Clear the current interrupt event field */ ohci_intr_sts->ohci_curr_intr_sts = 0; /* * Reset the following flag indicating exiting the interrupt * handler and this flag will be used in the polled mode to * do some extra processing. */ ohci_intr_sts->ohci_intr_flag &= ~OHCI_INTR_HANDLING; Set_OpReg(hcr_intr_enable, HCR_INTR_MIE); /* * Read the interrupt status register to make sure that any PIO * store to clear the ISR has made it on the PCI bus before * returning from its interrupt handler. */ (void) Get_OpReg(hcr_intr_status); mutex_exit(&ohcip->ohci_int_mutex); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Interrupt handling completed"); return (DDI_INTR_CLAIMED); } /* * Check whether done_head is a valid TD pointer address. * It should be non-zero, 16-byte aligned, and fall in ohci_td_pool. */
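/*
 * Worked example (hypothetical pool placement, added for
 * illustration): with the TD pool bound at dmac_address 0x40000000
 * and dmac_size 0x8000, a done head of 0x40001230 passes all three
 * tests below, 0x40001234 fails the 16-byte alignment test, and
 * 0x40008000 lies past the end of the pool.
 */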
static int ohci_check_done_head(ohci_state_t *ohcip, ohci_td_t *done_head) { uintptr_t lower, upper, headp; lower = ohcip->ohci_td_pool_cookie.dmac_address; upper = lower + ohcip->ohci_td_pool_cookie.dmac_size; headp = (uintptr_t)done_head; if (headp && !(headp & ~HCCA_DONE_HEAD_MASK) && (headp >= lower) && (headp < upper)) { return (USB_SUCCESS); } else { return (USB_FAILURE); } } /* * ohci_handle_missed_intr: * * Handle any ohci missed interrupts because of polled mode switch. */ static void ohci_handle_missed_intr(ohci_state_t *ohcip) { ohci_save_intr_sts_t *ohci_intr_sts = &ohcip->ohci_save_intr_sts; ohci_td_t *done_head; uint_t intr; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check whether we have missed any ohci interrupts because * of the polled mode switch during previous ohci interrupt * handler execution. Only Write Done Head & SOF interrupts * are saved in the polled mode. First process these interrupts * before processing actual interrupts that caused invocation * of ohci interrupt handler. */ if (!ohci_intr_sts->ohci_missed_intr_sts) { /* No interrupts are missed, simply return */ return; } USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_missed_intr: Handle ohci missed interrupts"); /* * The functionality and importance of critical code section * in the normal mode ohci interrupt handler & its usage in * the polled mode is explained below. * * (a) Normal mode: * * - Set the flag indicating that critical code is being * processed in the ohci interrupt handler. * * - Process the missed ohci interrupts by copying the * missed interrupt events and done head list fields * information to the critical interrupt event & done * list fields. * * - Reset the missed ohci interrupt events & done head * list fields so that the new missed interrupt event * and done head list information can be saved. * * - All above steps will be executed within the critical * section of the interrupt handler. Then the ohci missed * interrupt handler will be called to service missed * ohci interrupts. * * (b) Polled mode: * * - On entering the polled code, it checks for critical * section code execution within the normal mode ohci * interrupt handler. * * - If the critical section code is executing in normal * mode ohci interrupt handler and if copying of ohci * missed interrupt events & done head list fields to * the critical fields is finished then save the "any * missed interrupt events & done head list" because * of current polled mode switch into "critical missed * interrupt events & done list fields" instead of the * actual missed events and done list fields. * * - Otherwise save "any missed interrupt events & done * list" because of this current polled mode switch * in the actual missed interrupt events & done head * list fields. */ /* * Set flag indicating that interrupt handler is processing * critical interrupt code, so that polled mode code checks * for this condition & will do extra processing as explained * above in order to avoid the race conditions.
*/ ohci_intr_sts->ohci_intr_flag |= OHCI_INTR_CRITICAL; ohci_intr_sts->ohci_critical_intr_sts |= ohci_intr_sts->ohci_missed_intr_sts; if (ohci_intr_sts->ohci_missed_done_lst) { ohci_intr_sts->ohci_critical_done_lst = ohci_intr_sts->ohci_missed_done_lst; } ohci_intr_sts->ohci_missed_intr_sts = 0; ohci_intr_sts->ohci_missed_done_lst = NULL; ohci_intr_sts->ohci_intr_flag &= ~OHCI_INTR_CRITICAL; intr = ohci_intr_sts->ohci_critical_intr_sts; done_head = ohci_intr_sts->ohci_critical_done_lst; if (intr & HCR_INTR_SOF) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_missed_intr: Start of Frame"); /* * Call cv_broadcast on every SOF interrupt to wakeup * all the threads that are waiting for the SOF. Calling * cv_broadcast on every SOF has no effect even if no * threads are waiting for the SOF. */ cv_broadcast(&ohcip->ohci_SOF_cv); } if ((intr & HCR_INTR_WDH) && (done_head)) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_missed_intr: Done Head"); /* Clear the critical done head field */ ohci_intr_sts->ohci_critical_done_lst = NULL; ohci_traverse_done_list(ohcip, done_head); } /* Clear the critical interrupt event field */ ohci_intr_sts->ohci_critical_intr_sts = 0; } /* * ohci_handle_ue: * * Handling of Unrecoverable Error interrupt (UE). */ static void ohci_handle_ue(ohci_state_t *ohcip) { usb_frame_number_t before_frame_number, after_frame_number; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_ue: Handling of UE interrupt"); /* * First check whether the current UE error occurred due to USB or * due to some other subsystem. This can be verified by reading * usb frame numbers before & after a delay of a few milliseconds. * If the usb frame number read after the delay is greater than the * one read before the delay, then the USB subsystem is fine. In this * case, disable the UE error interrupt and return without shutting * down the USB subsystem. * * Otherwise, if the usb frame number read after the delay is less * than or equal to the one read before the delay, then the current UE * error occurred from the USB subsystem. In this case, go ahead with * the actual UE error recovery procedure. * * Get the current usb frame number before waiting for a few * milliseconds. */ before_frame_number = ohci_get_current_frame_number(ohcip); /* Wait for a few milliseconds */ drv_usecwait(OHCI_TIMEWAIT); /* * Get the current usb frame number after waiting for a few * milliseconds. */ after_frame_number = ohci_get_current_frame_number(ohcip); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_ue: Before Frm No 0x%llx After Frm No 0x%llx", (unsigned long long)before_frame_number, (unsigned long long)after_frame_number); if (after_frame_number > before_frame_number) { /* Disable UE interrupt */ Set_OpReg(hcr_intr_disable, HCR_INTR_UE); return; } /* * This UE is due to USB hardware error. Reset ohci controller * and reprogram to bring it back to functional state. */ if ((ohci_do_soft_reset(ohcip)) != USB_SUCCESS) { USB_DPRINTF_L0(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Unrecoverable USB Hardware Error"); /* Disable UE interrupt */ Set_OpReg(hcr_intr_disable, HCR_INTR_UE); /* Set host controller soft state to error */ ohcip->ohci_hc_soft_state = OHCI_CTLR_ERROR_STATE; } } /* * ohci_handle_frame_number_overflow: * * Update software based usb frame number part on every frame number * overflow interrupt. * * NOTE: This function is also called from POLLED MODE. * * Refer ohci spec 1.0a, section 5.3, page 81 for more details.
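 *
 * Worked example (hypothetical values, added for illustration): with
 * ohci_fno == 0, the frame number toggling bit 15 at 0x7FFF -> 0x8000
 * raises FNO. If the handler runs promptly and reads
 * HccaFrameNo == 0x8001, bit 15 of the two values differ, so
 * 0x10000 - 0x8000 == 0x8000 is added and ohci_fno becomes 0x8000.
 * If servicing is delayed across a second toggle (HccaFrameNo already
 * back at, say, 0x0002, so bit 15 matches again), the full 0x10000 is
 * added, covering both elapsed half-ranges.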
*/ void ohci_handle_frame_number_overflow(ohci_state_t *ohcip) { USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_frame_number_overflow:"); ohcip->ohci_fno += (0x10000 - (((Get_HCCA(ohcip->ohci_hccap->HccaFrameNo) & 0xFFFF) ^ ohcip->ohci_fno) & 0x8000)); USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_frame_number_overflow:" "Frame Number Higher Part 0x%llx\n", (unsigned long long)(ohcip->ohci_fno)); } /* * ohci_handle_endpoint_reclaimation: * * Reclamation of Host Controller (HC) Endpoint Descriptors (ED). */ static void ohci_handle_endpoint_reclaimation(ohci_state_t *ohcip) { usb_frame_number_t current_frame_number; usb_frame_number_t endpoint_frame_number; ohci_ed_t *reclaim_ed; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_endpoint_reclaimation:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); current_frame_number = ohci_get_current_frame_number(ohcip); /* * Deallocate all Endpoint Descriptors (ED) which are on the * reclamation list. These ED's are already removed from the * interrupt lattice tree. */ while (ohcip->ohci_reclaim_list) { reclaim_ed = ohcip->ohci_reclaim_list; endpoint_frame_number = (usb_frame_number_t)(uintptr_t) (OHCI_LOOKUP_ID(Get_ED(reclaim_ed->hced_reclaim_frame))); USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_endpoint_reclaimation:" "current frame number 0x%llx endpoint frame number 0x%llx", (unsigned long long)current_frame_number, (unsigned long long)endpoint_frame_number); /* * Deallocate the current endpoint only if the endpoint's usb * frame number is less than or equal to the current usb frame * number. * * If the endpoint's usb frame number is greater than the current * usb frame number, ignore the rest of the endpoints in the list * since the rest of the endpoints were inserted into the reclaim * list later than the current reclaim endpoint. */ if (endpoint_frame_number > current_frame_number) { break; } /* Get the next endpoint from the rec. list */ ohcip->ohci_reclaim_list = ohci_ed_iommu_to_cpu(ohcip, Get_ED(reclaim_ed->hced_reclaim_next)); /* Free 32bit ID */ OHCI_FREE_ID((uint32_t)Get_ED(reclaim_ed->hced_reclaim_frame)); /* Deallocate the endpoint */ ohci_deallocate_ed(ohcip, reclaim_ed); } } /* * ohci_traverse_done_list: */ static void ohci_traverse_done_list( ohci_state_t *ohcip, ohci_td_t *head_done_list) { uint_t state; /* TD state */ ohci_td_t *td, *old_td; /* TD pointers */ usb_cr_t error; /* Error from TD */ ohci_trans_wrapper_t *tw = NULL; /* Transfer wrapper */ ohci_pipe_private_t *pp = NULL; /* Pipe private field */ USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_traverse_done_list:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Sync ED and TD pool */ Sync_ED_TD_Pool(ohcip); /* Reverse the done list */ td = ohci_reverse_done_list(ohcip, head_done_list); /* Traverse the list of transfer descriptors */ while (td) { /* Check for TD state */ state = Get_TD(td->hctd_state); USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_traverse_done_list:\n\t" "td = 0x%p state = 0x%x", (void *)td, state); /* * Obtain the transfer wrapper only if the TD is * not marked as RECLAIM. * * A TD that is marked as RECLAIM has had its DMA * mappings, ED, TD and pipe private structure * ripped down. Just deallocate this TD.
*/ if (state != HC_TD_RECLAIM) { tw = (ohci_trans_wrapper_t *)OHCI_LOOKUP_ID( (uint32_t)Get_TD(td->hctd_trans_wrapper)); ASSERT(tw != NULL); pp = tw->tw_pipe_private; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_traverse_done_list: PP = 0x%p TW = 0x%p", (void *)pp, (void *)tw); } /* * Don't process the TD if its state is marked as * either RECLAIM or TIMEOUT. * * A TD that is marked as TIMEOUT has already been * processed by the TD timeout handler & the client driver * has been informed through the exception callback. */ if ((state != HC_TD_RECLAIM) && (state != HC_TD_TIMEOUT)) { /* Look at the error status */ error = ohci_parse_error(ohcip, td); if (error == USB_CR_OK) { ohci_handle_normal_td(ohcip, td, tw); } else { /* handle the error condition */ ohci_handle_error(ohcip, td, error); } } else { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_traverse_done_list: TD State = %d", state); } /* * Save a pointer to the current transfer descriptor */ old_td = td; td = ohci_td_iommu_to_cpu(ohcip, Get_TD(td->hctd_next_td)); /* Deallocate this transfer descriptor */ ohci_deallocate_td(ohcip, old_td); /* * Deallocate the transfer wrapper if there are no more * TD's for the transfer wrapper. ohci_deallocate_tw_resources() * will not deallocate the tw for a periodic endpoint * since it will always have a TD attached to it. * * Do not deallocate the TW if it is an isoc or intr in pipe. * The tw's are reused. * * A TD that is marked as reclaim doesn't have a pipe * or a TW associated with it anymore so don't call this * function. */ if (state != HC_TD_RECLAIM) { ASSERT(tw != NULL); ohci_deallocate_tw_resources(ohcip, pp, tw); } } } /* * ohci_reverse_done_list: * * Reverse the order of the Transfer Descriptor (TD) Done List. */ static ohci_td_t * ohci_reverse_done_list( ohci_state_t *ohcip, ohci_td_t *head_done_list) { ohci_td_t *cpu_new_tail, *cpu_new_head, *cpu_save; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_reverse_done_list:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(head_done_list != NULL); /* At first, both the tail and head pointers point to the same elem */ cpu_new_tail = cpu_new_head = ohci_td_iommu_to_cpu(ohcip, (uintptr_t)head_done_list); /* See if the list has only one element */ if (Get_TD(cpu_new_head->hctd_next_td) == 0) { return (cpu_new_head); } /* Advance the head pointer */ cpu_new_head = (ohci_td_t *) ohci_td_iommu_to_cpu(ohcip, Get_TD(cpu_new_head->hctd_next_td)); /* The new tail now points to nothing */ Set_TD(cpu_new_tail->hctd_next_td, NULL); cpu_save = (ohci_td_t *) ohci_td_iommu_to_cpu(ohcip, Get_TD(cpu_new_head->hctd_next_td)); /* Reverse the list and store the pointers as CPU addresses */ while (cpu_save) { Set_TD(cpu_new_head->hctd_next_td, ohci_td_cpu_to_iommu(ohcip, cpu_new_tail)); cpu_new_tail = cpu_new_head; cpu_new_head = cpu_save; cpu_save = (ohci_td_t *) ohci_td_iommu_to_cpu(ohcip, Get_TD(cpu_new_head->hctd_next_td)); } Set_TD(cpu_new_head->hctd_next_td, ohci_td_cpu_to_iommu(ohcip, cpu_new_tail)); return (cpu_new_head); }
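/*
 * The reversal above is the classic three-pointer, in-place reversal
 * of a singly linked list; the driver version additionally translates
 * every link between IOMMU and CPU addresses via
 * ohci_td_iommu_to_cpu() / ohci_td_cpu_to_iommu(). Below is a minimal
 * standalone sketch of the same idea with a hypothetical node type.
 * It is illustrative only (not part of the original driver) and is
 * guarded by a hypothetical macro so that it stays compiled out.
 */
#ifdef OHCI_ILLUSTRATIVE_SKETCH
typedef struct sketch_node {
	struct sketch_node	*next;
} sketch_node_t;

static sketch_node_t *
sketch_reverse_list(sketch_node_t *head)
{
	sketch_node_t	*prev = NULL, *save;

	while (head != NULL) {
		save = head->next;	/* remember the unreversed tail */
		head->next = prev;	/* point the current node backwards */
		prev = head;		/* current node becomes the new head */
		head = save;
	}

	/* the old tail is the new head */
	return (prev);
}
#endif	/* OHCI_ILLUSTRATIVE_SKETCH */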
/* * ohci_parse_error: * * Parse the result for any errors. */ static usb_cr_t ohci_parse_error( ohci_state_t *ohcip, ohci_td_t *td) { uint_t ctrl; usb_ep_descr_t *eptd; ohci_trans_wrapper_t *tw; ohci_pipe_private_t *pp; uint_t flag; usb_cr_t error; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_parse_error:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(td != NULL); /* Obtain the transfer wrapper from the TD */ tw = (ohci_trans_wrapper_t *) OHCI_LOOKUP_ID((uint32_t)Get_TD(td->hctd_trans_wrapper)); ASSERT(tw != NULL); /* Obtain the pipe private structure */ pp = tw->tw_pipe_private; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_parse_error: PP 0x%p TW 0x%p", (void *)pp, (void *)tw); eptd = &pp->pp_pipe_handle->p_ep; ctrl = (uint_t)Get_TD(td->hctd_ctrl) & (uint32_t)HC_TD_CC; /* * Check the condition code of the completed TD and report errors * if any. This checking will be done both for the general and * the isochronous TDs. */ if ((error = ohci_check_for_error(ohcip, pp, tw, td, ctrl)) != USB_CR_OK) { flag = OHCI_REMOVE_XFER_ALWAYS; } else { flag = OHCI_REMOVE_XFER_IFLAST; } /* Stop the transfer timer */ ohci_stop_xfer_timer(ohcip, tw, flag); /* * The isochronous endpoint needs additional error checking * and special processing. */ if ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { ohci_parse_isoc_error(ohcip, pp, tw, td); /* always reset error */ error = USB_CR_OK; } return (error); } /* * ohci_parse_isoc_error: * * Check for any errors in the isochronous data packets. Also fill in * the status for each of the isochronous data packets. */ void ohci_parse_isoc_error( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td) { usb_isoc_req_t *isoc_reqp; usb_isoc_pkt_descr_t *isoc_pkt_descr; uint_t toggle = 0, fc, ctrl, psw; int i; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_parse_isoc_error: td 0x%p", (void *)td); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); fc = ((uint_t)Get_TD(td->hctd_ctrl) & HC_ITD_FC) >> HC_ITD_FC_SHIFT; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_parse_isoc_error: frame count %d", fc); /* * Get the address of the current usb isochronous request * and the array of packet descriptors. */ isoc_reqp = (usb_isoc_req_t *)tw->tw_curr_xfer_reqp; isoc_pkt_descr = isoc_reqp->isoc_pkt_descr; isoc_pkt_descr += tw->tw_pkt_idx; for (i = 0; i <= fc; i++) { psw = Get_TD(td->hctd_offsets[i / 2]); if (toggle) { ctrl = psw & HC_ITD_ODD_OFFSET; toggle = 0; } else { ctrl = (psw & HC_ITD_EVEN_OFFSET) << HC_ITD_OFFSET_SHIFT; toggle = 1; } isoc_pkt_descr->isoc_pkt_actual_length = (ctrl >> HC_ITD_OFFSET_SHIFT) & HC_ITD_OFFSET_ADDR; ctrl = (uint_t)(ctrl & (uint32_t)HC_TD_CC); /* Write the status of the isoc data packet */ isoc_pkt_descr->isoc_pkt_status = ohci_check_for_error(ohcip, pp, tw, td, ctrl); if (isoc_pkt_descr->isoc_pkt_status) { /* Increment the isoc data packet error count */ isoc_reqp->isoc_error_count++; } /* * Get the address of the next isoc data packet descriptor. */ isoc_pkt_descr++; } tw->tw_pkt_idx = tw->tw_pkt_idx + fc + 1; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_parse_isoc_error: tw_pkt_idx %d", tw->tw_pkt_idx); }
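/*
 * Each 32-bit word of hctd_offsets holds two 16-bit packet status
 * words (PSW), which is why ohci_parse_isoc_error() above indexes
 * hctd_offsets[i / 2] and toggles between the even and odd halves.
 * Below is a standalone sketch of that unpacking, assuming (as the
 * HC_ITD_EVEN_OFFSET / HC_ITD_ODD_OFFSET masks suggest) that
 * even-numbered packets occupy the low half-word. Illustrative only,
 * guarded by a hypothetical macro so that it stays compiled out.
 */
#ifdef OHCI_ILLUSTRATIVE_SKETCH
static uint16_t
sketch_get_psw(const uint32_t *offsets, int pkt)
{
	uint32_t	word = offsets[pkt / 2];

	/* odd packets use bits 31..16, even packets use bits 15..0 */
	return ((pkt & 1) ? (uint16_t)(word >> 16) :
	    (uint16_t)(word & 0xFFFF));
}
#endif	/* OHCI_ILLUSTRATIVE_SKETCH */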
/* * ohci_check_for_error: * * Check for any errors. */ static usb_cr_t ohci_check_for_error( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, uint_t ctrl) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; uchar_t ep_attrs = ph->p_ep.bmAttributes; usb_cr_t error = USB_CR_OK; usb_req_attrs_t xfer_attrs; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: td = 0x%p ctrl = 0x%x", (void *)td, ctrl); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); switch (ctrl) { case HC_TD_CC_NO_E: USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: No Error"); error = USB_CR_OK; break; case HC_TD_CC_CRC: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: CRC error"); error = USB_CR_CRC; break; case HC_TD_CC_BS: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Bit stuffing"); error = USB_CR_BITSTUFFING; break; case HC_TD_CC_DTM: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Data Toggle Mismatch"); error = USB_CR_DATA_TOGGLE_MM; break; case HC_TD_CC_STALL: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Stall"); error = USB_CR_STALL; break; case HC_TD_CC_DNR: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Device not responding"); error = USB_CR_DEV_NOT_RESP; break; case HC_TD_CC_PCF: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: PID check failure"); error = USB_CR_PID_CHECKFAILURE; break; case HC_TD_CC_UPID: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Unexpected PID"); error = USB_CR_UNEXP_PID; break; case HC_TD_CC_DO: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Data overrun"); error = USB_CR_DATA_OVERRUN; break; case HC_TD_CC_DU: /* * Check whether short packets are acceptable. * If so, don't report an error to the client driver * and restart the endpoint. Otherwise report a data * underrun error to the client driver. */ xfer_attrs = ohci_get_xfer_attrs(ohcip, pp, tw); if (xfer_attrs & USB_ATTRS_SHORT_XFER_OK) { error = USB_CR_OK; if ((ep_attrs & USB_EP_ATTR_MASK) != USB_EP_ATTR_ISOCH) { /* * Cleanup the remaining resources that may have * been allocated for this transfer. */ if (ohci_cleanup_data_underrun(ohcip, pp, tw, td) == USB_SUCCESS) { /* Clear the halt bit */ Set_ED(pp->pp_ept->hced_headp, (Get_ED(pp->pp_ept->hced_headp) & ~HC_EPT_Halt)); } else { error = USB_CR_UNSPECIFIED_ERR; } } } else { USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Data underrun"); error = USB_CR_DATA_UNDERRUN; } break; case HC_TD_CC_BO: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Buffer overrun"); error = USB_CR_BUFFER_OVERRUN; break; case HC_TD_CC_BU: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Buffer underrun"); error = USB_CR_BUFFER_UNDERRUN; break; case HC_TD_CC_NA: default: USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Not accessed"); error = USB_CR_NOT_ACCESSED; break; } if (error) { uint_t hced_ctrl = Get_ED(pp->pp_ept->hced_ctrl); USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_check_for_error: Error %d Device address %d " "Endpoint number %d", error, (hced_ctrl & HC_EPT_FUNC), ((hced_ctrl & HC_EPT_EP) >> HC_EPT_EP_SHFT)); } return (error); } /* * ohci_handle_error: * * Inform USBA of transaction errors that occurred by calling the USBA * callback routine.
*/ static void ohci_handle_error( ohci_state_t *ohcip, ohci_td_t *td, usb_cr_t error) { ohci_trans_wrapper_t *tw; usba_pipe_handle_data_t *ph; ohci_pipe_private_t *pp; mblk_t *mp = NULL; size_t length = 0; uchar_t attributes; usb_intr_req_t *curr_intr_reqp; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_error: error = 0x%x", error); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(td != NULL); /* Print the values in the td */ ohci_print_td(ohcip, td); /* Obtain the transfer wrapper from the TD */ tw = (ohci_trans_wrapper_t *) OHCI_LOOKUP_ID((uint32_t)Get_TD(td->hctd_trans_wrapper)); ASSERT(tw != NULL); /* Obtain the pipe private structure */ pp = tw->tw_pipe_private; ph = tw->tw_pipe_private->pp_pipe_handle; attributes = ph->p_ep.bmAttributes & USB_EP_ATTR_MASK; /* * Special error handling */ if (tw->tw_direction == HC_TD_IN) { switch (attributes) { case USB_EP_ATTR_CONTROL: if (((ph->p_ep.bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_CONTROL) && (Get_TD(td->hctd_ctrl_phase) == OHCI_CTRL_SETUP_PHASE)) { break; } /* FALLTHROUGH */ case USB_EP_ATTR_BULK: /* * Call ohci_sendup_td_message * to send message to upstream. */ ohci_sendup_td_message(ohcip, pp, tw, td, error); return; case USB_EP_ATTR_INTR: curr_intr_reqp = (usb_intr_req_t *)tw->tw_curr_xfer_reqp; if (curr_intr_reqp->intr_attributes & USB_ATTRS_ONE_XFER) { ohci_handle_one_xfer_completion(ohcip, tw); } /* Decrement periodic in request count */ pp->pp_cur_periodic_req_cnt--; break; case USB_EP_ATTR_ISOCH: default: break; } } else { switch (attributes) { case USB_EP_ATTR_BULK: case USB_EP_ATTR_INTR: /* * If "CurrentBufferPointer" of Transfer * Descriptor (TD) is not equal to zero, * then we sent less data to the device * than requested by client. In that case, * return the mblk after updating the * data->r_ptr. */ if (Get_TD(td->hctd_cbp)) { usb_opaque_t xfer_reqp = tw->tw_curr_xfer_reqp; size_t residue; residue = ohci_get_td_residue(ohcip, td); length = Get_TD(td->hctd_xfer_offs) + Get_TD(td->hctd_xfer_len) - residue; USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_error: requested data %lu " "sent data %lu", tw->tw_length, length); if (attributes == USB_EP_ATTR_BULK) { mp = (mblk_t *)((usb_bulk_req_t *) (xfer_reqp))->bulk_data; } else { mp = (mblk_t *)((usb_intr_req_t *) (xfer_reqp))->intr_data; } /* Increment the read pointer */ mp->b_rptr = mp->b_rptr + length; } break; default: break; } } /* * Callback the client with the * failure reason. */ ohci_hcdi_callback(ph, tw, error); /* Check anybody is waiting for transfers completion event */ ohci_check_for_transfers_completion(ohcip, pp); } /* * ohci_cleanup_data_underrun: * * Cleans up resources when a short xfer occurs */ static int ohci_cleanup_data_underrun( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td) { ohci_td_t *next_td; ohci_td_t *last_td; ohci_td_t *temp_td; uint32_t last_td_addr; uint_t hced_head; USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_cleanup_data_underrun: td 0x%p, tw 0x%p", (void *)td, (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(tw->tw_hctd_head == td); /* Check if this TD is the last td in the tw */ last_td = tw->tw_hctd_tail; if (td == last_td) { /* There is no need for cleanup */ return (USB_SUCCESS); } /* * Make sure the ED is halted before we change any td's. * If for some reason it is not halted, return error to client * driver so they can reset the port. 
*/ hced_head = Get_ED(pp->pp_ept->hced_headp); if (!(hced_head & HC_EPT_Halt)) { uint_t hced_ctrl = Get_ED(pp->pp_ept->hced_ctrl); USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_cleanup_data_underrun: Unable to clean up a short " "xfer error. Client might send/receive irrelevant data." " Device address %d Endpoint number %d", (hced_ctrl & HC_EPT_FUNC), ((hced_ctrl & HC_EPT_EP) >> HC_EPT_EP_SHFT)); Set_ED(pp->pp_ept->hced_headp, hced_head | HC_EPT_Halt); return (USB_FAILURE); } /* * Get the address of the first td of the next transfer (tw). * This td may currently be a dummy td, but when a new request * arrives, it will be transformed into a regular td. */ last_td_addr = Get_TD(last_td->hctd_next_td); /* Set ED head to this last td */ Set_ED(pp->pp_ept->hced_headp, (last_td_addr & HC_EPT_TD_HEAD) | (hced_head & ~HC_EPT_TD_HEAD)); /* * Start removing all the unused TD's from the TW, * but keep the first one. */ tw->tw_hctd_tail = td; /* * Get the next td in the tw list. Afterwards completely * disassociate the current td from the other tds */ next_td = (ohci_td_t *)ohci_td_iommu_to_cpu(ohcip, Get_TD(td->hctd_tw_next_td)); Set_TD(td->hctd_tw_next_td, NULL); /* * Iterate down the tw list and deallocate them */ while (next_td != NULL) { tw->tw_num_tds--; /* Disassociate this td from its TW and set to RECLAIM */ Set_TD(next_td->hctd_trans_wrapper, NULL); Set_TD(next_td->hctd_state, HC_TD_RECLAIM); temp_td = next_td; next_td = (ohci_td_t *)ohci_td_iommu_to_cpu(ohcip, Get_TD(next_td->hctd_tw_next_td)); ohci_deallocate_td(ohcip, temp_td); } ASSERT(tw->tw_num_tds == 1); return (USB_SUCCESS); } /* * ohci_handle_normal_td: */ static void ohci_handle_normal_td( ohci_state_t *ohcip, ohci_td_t *td, ohci_trans_wrapper_t *tw) { ohci_pipe_private_t *pp; /* Pipe private field */ USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_normal_td:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(tw != NULL); /* Obtain the pipe private structure */ pp = tw->tw_pipe_private; (*tw->tw_handle_td)(ohcip, pp, tw, td, tw->tw_handle_callback_value); /* Check whether anybody is waiting for the transfers completion event */ ohci_check_for_transfers_completion(ohcip, pp); } /* * ohci_handle_ctrl_td: * * Handle a control Transfer Descriptor (TD). */ /* ARGSUSED */ static void ohci_handle_ctrl_td( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *tw_handle_callback_value) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_ctrl_td: pp = 0x%p tw = 0x%p td = 0x%p state = 0x%x", (void *)pp, (void *)tw, (void *)td, Get_TD(td->hctd_ctrl_phase)); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check which control transfer phase got completed. */ tw->tw_num_tds--; switch (Get_TD(td->hctd_ctrl_phase)) { case OHCI_CTRL_SETUP_PHASE: USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Setup complete: pp 0x%p td 0x%p", (void *)pp, (void *)td); break; case OHCI_CTRL_DATA_PHASE: /* * If "CurrentBufferPointer" of Transfer Descriptor (TD) * is not equal to zero, then we received less data from * the device than requested by us. In that case, get the * actual received data size.
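 *
 * Worked example (hypothetical values, added for illustration):
 * for a data phase set up with hctd_xfer_offs == 0 and
 * hctd_xfer_len == 256, a device that returns only 200 bytes
 * leaves a residue of 56 in the TD, so the actual length below
 * computes to 0 + 256 - 56 == 200.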
*/ if (Get_TD(td->hctd_cbp)) { size_t length, residue; residue = ohci_get_td_residue(ohcip, td); length = Get_TD(td->hctd_xfer_offs) + Get_TD(td->hctd_xfer_len) - residue; USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_ctrl_td: requested data %lu " "received data %lu", tw->tw_length, length); /* Save actual received data length */ tw->tw_length = length; } USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Data complete: pp 0x%p td 0x%p", (void *)pp, (void *)td); break; case OHCI_CTRL_STATUS_PHASE: if ((tw->tw_length != 0) && (tw->tw_direction == HC_TD_IN)) { /* * Call ohci_sendup_td_message * to send message to upstream. */ ohci_sendup_td_message(ohcip, pp, tw, td, USB_CR_OK); } else { ohci_do_byte_stats(ohcip, tw->tw_length - OHCI_MAX_TD_BUF_SIZE, ph->p_ep.bmAttributes, ph->p_ep.bEndpointAddress); ohci_hcdi_callback(ph, tw, USB_CR_OK); } USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "Status complete: pp 0x%p td 0x%p", (void *)pp, (void *)td); break; } } /* * ohci_handle_bulk_td: * * Handle a bulk Transfer Descriptor (TD). */ /* ARGSUSED */ static void ohci_handle_bulk_td( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *tw_handle_callback_value) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; usb_ep_descr_t *eptd = &ph->p_ep; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_bulk_td:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Decrement the TDs counter and check whether all the bulk * data has been sent or received. If the TDs counter reaches * zero then inform the client driver about completion of the * current bulk request. Otherwise wait for completion of other * bulk TDs or transactions on this pipe. */ if (--tw->tw_num_tds != 0) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_bulk_td: Number of TDs %d", tw->tw_num_tds); return; } /* * If this is a bulk in pipe, return the data to the client. * For a bulk out pipe, there is no need to do anything. */ if ((eptd->bEndpointAddress & USB_EP_DIR_MASK) == USB_EP_DIR_OUT) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_bulk_td: Bulk out pipe"); ohci_do_byte_stats(ohcip, tw->tw_length, eptd->bmAttributes, eptd->bEndpointAddress); /* Do the callback */ ohci_hcdi_callback(ph, tw, USB_CR_OK); return; } /* Call ohci_sendup_td_message to send message to upstream */ ohci_sendup_td_message(ohcip, pp, tw, td, USB_CR_OK); } /* * ohci_handle_intr_td: * * Handle an interrupt Transfer Descriptor (TD).
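 *
 * For an IN polling pipe, the handler below sends the received
 * data upstream and, while the pipe stays in
 * OHCI_PIPE_STATE_ACTIVE, re-arms the next periodic request; a
 * USB_ATTRS_ONE_XFER request instead idles the pipe after a
 * single completion via ohci_handle_one_xfer_completion().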
*/ /* ARGSUSED */ static void ohci_handle_intr_td( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *tw_handle_callback_value) { usb_intr_req_t *curr_intr_reqp = (usb_intr_req_t *)tw->tw_curr_xfer_reqp; usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; usb_ep_descr_t *eptd = &ph->p_ep; usb_req_attrs_t attrs; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_intr_td: pp=0x%p tw=0x%p td=0x%p" "intr_reqp=0x%p data=0x%p", (void *)pp, (void *)tw, (void *)td, (void *)curr_intr_reqp, (void *)curr_intr_reqp->intr_data); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Get the interrupt xfer attributes */ attrs = curr_intr_reqp->intr_attributes; /* * For an Interrupt OUT pipe, we just callback and we are done */ if ((eptd->bEndpointAddress & USB_EP_DIR_MASK) == USB_EP_DIR_OUT) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_intr_td: Intr out pipe, intr_reqp=0x%p," "data=0x%p", (void *)curr_intr_reqp, (void *)curr_intr_reqp->intr_data); ohci_do_byte_stats(ohcip, tw->tw_length, eptd->bmAttributes, eptd->bEndpointAddress); /* Do the callback */ ohci_hcdi_callback(ph, tw, USB_CR_OK); return; } /* Decrement the interrupt request count */ pp->pp_cur_periodic_req_cnt--; /* * Check whether the USB_ATTRS_ONE_XFER attribute is set * and if so, free the duplicate request. */ if (attrs & USB_ATTRS_ONE_XFER) { ohci_handle_one_xfer_completion(ohcip, tw); } /* Call ohci_sendup_td_message to callback into client */ ohci_sendup_td_message(ohcip, pp, tw, td, USB_CR_OK); /* * If interrupt pipe state is still active, insert next Interrupt * request into the Host Controller's Interrupt list. Otherwise * you are done. */ if (pp->pp_state != OHCI_PIPE_STATE_ACTIVE) { return; } if ((error = ohci_allocate_periodic_in_resource(ohcip, pp, tw, 0)) == USB_SUCCESS) { curr_intr_reqp = (usb_intr_req_t *)tw->tw_curr_xfer_reqp; ASSERT(curr_intr_reqp != NULL); tw->tw_num_tds = 1; if (ohci_tw_rebind_cookie(ohcip, pp, tw) != USB_SUCCESS) { ohci_deallocate_periodic_in_resource(ohcip, pp, tw); error = USB_FAILURE; } else if (ohci_allocate_tds_for_tw(ohcip, tw, tw->tw_num_tds) != USB_SUCCESS) { ohci_deallocate_periodic_in_resource(ohcip, pp, tw); error = USB_FAILURE; } } if (error != USB_SUCCESS) { /* * Set pipe state to stop polling and error to no * resource. Don't insert any more interrupt polling * requests.
*/ pp->pp_state = OHCI_PIPE_STATE_STOP_POLLING; pp->pp_error = USB_CR_NO_RESOURCES; } else { ohci_insert_intr_req(ohcip, pp, tw, 0); /* Increment the interrupt request count */ pp->pp_cur_periodic_req_cnt++; ASSERT(pp->pp_cur_periodic_req_cnt == pp->pp_max_periodic_req_cnt); } } /* * ohci_handle_one_xfer_completion: */ static void ohci_handle_one_xfer_completion( ohci_state_t *ohcip, ohci_trans_wrapper_t *tw) { usba_pipe_handle_data_t *ph = tw->tw_pipe_private->pp_pipe_handle; ohci_pipe_private_t *pp = tw->tw_pipe_private; usb_intr_req_t *curr_intr_reqp = (usb_intr_req_t *)tw->tw_curr_xfer_reqp; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_one_xfer_completion: tw = 0x%p", (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(curr_intr_reqp->intr_attributes & USB_ATTRS_ONE_XFER); pp->pp_state = OHCI_PIPE_STATE_IDLE; /* * For one xfer, we need to copy back data ptr * and free current request */ ((usb_intr_req_t *)(pp->pp_client_periodic_in_reqp))-> intr_data = ((usb_intr_req_t *) (tw->tw_curr_xfer_reqp))->intr_data; ((usb_intr_req_t *)tw->tw_curr_xfer_reqp)->intr_data = NULL; /* Now free duplicate current request */ usb_free_intr_req((usb_intr_req_t *)tw-> tw_curr_xfer_reqp); mutex_enter(&ph->p_mutex); ph->p_req_count--; mutex_exit(&ph->p_mutex); /* Make client's request the current request */ tw->tw_curr_xfer_reqp = pp->pp_client_periodic_in_reqp; pp->pp_client_periodic_in_reqp = NULL; } /* * ohci_handle_isoc_td: * * Handle an isochronous Transfer Descriptor (TD). */ /* ARGSUSED */ static void ohci_handle_isoc_td( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, void *tw_handle_callback_value) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; usb_ep_descr_t *eptd = &ph->p_ep; usb_isoc_req_t *curr_isoc_reqp = (usb_isoc_req_t *)tw->tw_curr_xfer_reqp; int error = USB_SUCCESS; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_isoc_td: pp=0x%p tw=0x%p td=0x%p" "isoc_reqp=0x%p data=0x%p", (void *)pp, (void *)tw, (void *)td, (void *)curr_isoc_reqp, (void *)curr_isoc_reqp->isoc_data); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Decrement the TDs counter and check whether all the isoc * data has been sent or received. If the TDs counter reaches * zero then inform the client driver about completion of the * current isoc request. Otherwise wait for completion of other * isoc TDs or transactions on this pipe. */ if (--tw->tw_num_tds != 0) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_isoc_td: Number of TDs %d", tw->tw_num_tds); return; } /* * If this is an isoc in pipe, return the data to the client. * For an isoc out pipe, there is no need to do anything. */ if ((eptd->bEndpointAddress & USB_EP_DIR_MASK) == USB_EP_DIR_OUT) { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_handle_isoc_td: Isoc out pipe, isoc_reqp=0x%p," "data=0x%p", (void *)curr_isoc_reqp, (void *)curr_isoc_reqp->isoc_data); ohci_do_byte_stats(ohcip, tw->tw_length, eptd->bmAttributes, eptd->bEndpointAddress); /* Do the callback */ ohci_hcdi_callback(ph, tw, USB_CR_OK); return; } /* Decrement the IN isochronous request count */ pp->pp_cur_periodic_req_cnt--; /* Call ohci_sendup_td_message to send message to upstream */ ohci_sendup_td_message(ohcip, pp, tw, td, USB_CR_OK); /* * If isochronous pipe state is still active, insert next isochronous * request into the Host Controller's isochronous list.
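 *
 * The TD count below follows from the packet count: assuming, for
 * illustration only, OHCI_ISOC_PKTS_PER_TD == 8, a request
 * carrying 20 packets needs 20 / 8 == 2 full TDs plus one more TD
 * for the 4 remaining packets, giving tw_num_tds == 3.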
*/ if (pp->pp_state != OHCI_PIPE_STATE_ACTIVE) { return; } if ((error = ohci_allocate_periodic_in_resource(ohcip, pp, tw, 0)) == USB_SUCCESS) { curr_isoc_reqp = (usb_isoc_req_t *)tw->tw_curr_xfer_reqp; ASSERT(curr_isoc_reqp != NULL); tw->tw_num_tds = curr_isoc_reqp->isoc_pkts_count / OHCI_ISOC_PKTS_PER_TD; if (curr_isoc_reqp->isoc_pkts_count % OHCI_ISOC_PKTS_PER_TD) { tw->tw_num_tds++; } if (ohci_tw_rebind_cookie(ohcip, pp, tw) != USB_SUCCESS) { ohci_deallocate_periodic_in_resource(ohcip, pp, tw); error = USB_FAILURE; } else if (ohci_allocate_tds_for_tw(ohcip, tw, tw->tw_num_tds) != USB_SUCCESS) { ohci_deallocate_periodic_in_resource(ohcip, pp, tw); error = USB_FAILURE; } } if (error != USB_SUCCESS || ohci_insert_isoc_req(ohcip, pp, tw, 0) != USB_SUCCESS) { /* * Set pipe state to stop polling and error to no * resource. Don't insert any more isoch polling * requests. */ pp->pp_state = OHCI_PIPE_STATE_STOP_POLLING; pp->pp_error = USB_CR_NO_RESOURCES; } else { /* Increment number of IN isochronous request count */ pp->pp_cur_periodic_req_cnt++; ASSERT(pp->pp_cur_periodic_req_cnt == pp->pp_max_periodic_req_cnt); } } /* * ohci_tw_rebind_cookie: * * If the cookie associated with a DMA buffer has been walked, the cookie * is not usable any longer. To reuse the DMA buffer, the DMA handle needs * to rebind for cookies. */ static int ohci_tw_rebind_cookie( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { usb_ep_descr_t *eptd = &pp->pp_pipe_handle->p_ep; int rval, i; uint_t ccount; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_tw_rebind_cookie:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); if ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { ASSERT(tw->tw_num_tds == tw->tw_ncookies); for (i = 0; i < tw->tw_num_tds; i++) { if (tw->tw_isoc_bufs[i].ncookies == 1) { /* * no need to rebind when there is * only one cookie in a buffer */ continue; } /* unbind the DMA handle before rebinding */ rval = ddi_dma_unbind_handle( tw->tw_isoc_bufs[i].dma_handle); ASSERT(rval == USB_SUCCESS); tw->tw_isoc_bufs[i].ncookies = 0; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "rebind dma_handle %d", i); /* rebind the handle to get cookies */ rval = ddi_dma_addr_bind_handle( tw->tw_isoc_bufs[i].dma_handle, NULL, (caddr_t)tw->tw_isoc_bufs[i].buf_addr, tw->tw_isoc_bufs[i].length, DDI_DMA_RDWR|DDI_DMA_CONSISTENT, DDI_DMA_DONTWAIT, NULL, &tw->tw_isoc_bufs[i].cookie, &ccount); if ((rval == DDI_DMA_MAPPED) && (ccount <= OHCI_DMA_ATTR_TD_SGLLEN)) { tw->tw_isoc_bufs[i].ncookies = ccount; } else { return (USB_NO_RESOURCES); } } } else { if (tw->tw_cookie_idx != 0) { /* unbind the DMA handle before rebinding */ rval = ddi_dma_unbind_handle(tw->tw_dmahandle); ASSERT(rval == DDI_SUCCESS); tw->tw_ncookies = 0; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "rebind dma_handle"); /* rebind the handle to get cookies */ rval = ddi_dma_addr_bind_handle( tw->tw_dmahandle, NULL, (caddr_t)tw->tw_buf, tw->tw_length, DDI_DMA_RDWR|DDI_DMA_CONSISTENT, DDI_DMA_DONTWAIT, NULL, &tw->tw_cookie, &ccount); if (rval == DDI_DMA_MAPPED) { tw->tw_ncookies = ccount; tw->tw_dma_offs = 0; tw->tw_cookie_idx = 0; } else { return (USB_NO_RESOURCES); } } } return (USB_SUCCESS); } /* * ohci_sendup_td_message: * copy data, if necessary and do callback */ static void ohci_sendup_td_message( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, ohci_td_t *td, usb_cr_t error) { usb_ep_descr_t *eptd = &pp->pp_pipe_handle->p_ep; usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; size_t 
length = 0, skip_len = 0, residue; mblk_t *mp; uchar_t *buf; usb_opaque_t curr_xfer_reqp = tw->tw_curr_xfer_reqp; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_sendup_td_message:"); ASSERT(tw != NULL); length = tw->tw_length; switch (eptd->bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_CONTROL: /* * Get the correct length, adjust it for the setup size * which is not part of the data length in control end * points. Update tw->tw_length for future references. */ if (((usb_ctrl_req_t *)curr_xfer_reqp)->ctrl_wLength) { tw->tw_length = length = length - OHCI_MAX_TD_BUF_SIZE; } else { tw->tw_length = length = length - SETUP_SIZE; } /* Set the length of the buffer to skip */ skip_len = OHCI_MAX_TD_BUF_SIZE; if (Get_TD(td->hctd_ctrl_phase) != OHCI_CTRL_DATA_PHASE) { break; } /* FALLTHRU */ case USB_EP_ATTR_BULK: case USB_EP_ATTR_INTR: /* * If error is "data overrun", do not check for the * "CurrentBufferPointer" and return whatever data * received to the client driver. */ if (error == USB_CR_DATA_OVERRUN) { break; } /* * If "CurrentBufferPointer" of Transfer Descriptor * (TD) is not equal to zero, then we received less * data from the device than requested by us. In that * case, get the actual received data size. */ if (Get_TD(td->hctd_cbp)) { residue = ohci_get_td_residue(ohcip, td); length = Get_TD(td->hctd_xfer_offs) + Get_TD(td->hctd_xfer_len) - residue - skip_len; USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_sendup_td_message: requested data %lu " "received data %lu", tw->tw_length, length); } break; case USB_EP_ATTR_ISOCH: default: break; } /* Copy the data into the mblk_t */ buf = (uchar_t *)tw->tw_buf + skip_len; USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_sendup_td_message: length %lu error %d", length, error); /* Get the message block */ switch (eptd->bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_CONTROL: mp = ((usb_ctrl_req_t *)curr_xfer_reqp)->ctrl_data; break; case USB_EP_ATTR_BULK: mp = ((usb_bulk_req_t *)curr_xfer_reqp)->bulk_data; break; case USB_EP_ATTR_INTR: mp = ((usb_intr_req_t *)curr_xfer_reqp)->intr_data; break; case USB_EP_ATTR_ISOCH: mp = ((usb_isoc_req_t *)curr_xfer_reqp)->isoc_data; break; } ASSERT(mp != NULL); if (length) { int i; uchar_t *p = mp->b_rptr; /* * Update kstat byte counts * The control endpoints don't have direction bits so in * order for control stats to be counted correctly an in * bit must be faked on a control read.
*/ if ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_CONTROL) { ohci_do_byte_stats(ohcip, length, eptd->bmAttributes, USB_EP_DIR_IN); } else { ohci_do_byte_stats(ohcip, length, eptd->bmAttributes, eptd->bEndpointAddress); } if ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH) { for (i = 0; i < tw->tw_ncookies; i++) { Sync_IO_Buffer( tw->tw_isoc_bufs[i].dma_handle, tw->tw_isoc_bufs[i].length); ddi_rep_get8(tw->tw_isoc_bufs[i].mem_handle, p, (uint8_t *)tw->tw_isoc_bufs[i].buf_addr, tw->tw_isoc_bufs[i].length, DDI_DEV_AUTOINCR); p += tw->tw_isoc_bufs[i].length; } tw->tw_pkt_idx = 0; } else { /* Sync IO buffer */ Sync_IO_Buffer(tw->tw_dmahandle, (skip_len + length)); /* Copy the data into the message */ ddi_rep_get8(tw->tw_accesshandle, mp->b_rptr, buf, length, DDI_DEV_AUTOINCR); } /* Increment the write pointer */ mp->b_wptr = mp->b_wptr + length; } else { USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_sendup_td_message: Zero length packet"); } ohci_hcdi_callback(ph, tw, error); } /* * ohci_get_td_residue: * * Calculate the bytes not transferred by the TD */ size_t ohci_get_td_residue( ohci_state_t *ohcip, ohci_td_t *td) { uint32_t buf_addr, end_addr; size_t residue; buf_addr = Get_TD(td->hctd_cbp); end_addr = Get_TD(td->hctd_buf_end); if ((buf_addr & 0xfffff000) == (end_addr & 0xfffff000)) { residue = end_addr - buf_addr + 1; } else { residue = OHCI_MAX_TD_BUF_SIZE - (buf_addr & 0x00000fff) + (end_addr & 0x00000fff) + 1; } return (residue); } /* * Miscellaneous functions */ /* * ohci_obtain_state: * NOTE: This function is also called from POLLED MODE. */ ohci_state_t * ohci_obtain_state(dev_info_t *dip) { int instance = ddi_get_instance(dip); ohci_state_t *state = ddi_get_soft_state( ohci_statep, instance); ASSERT(state != NULL); return (state); } /* * ohci_state_is_operational: * * Check the Host controller state and return proper values. */ int ohci_state_is_operational(ohci_state_t *ohcip) { int val; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); switch (ohcip->ohci_hc_soft_state) { case OHCI_CTLR_INIT_STATE: case OHCI_CTLR_SUSPEND_STATE: val = USB_FAILURE; break; case OHCI_CTLR_OPERATIONAL_STATE: val = USB_SUCCESS; break; case OHCI_CTLR_ERROR_STATE: val = USB_HC_HARDWARE_ERROR; break; default: val = USB_FAILURE; break; } return (val); } /* * ohci_do_soft_reset * * Do soft reset of ohci host controller. */ int ohci_do_soft_reset(ohci_state_t *ohcip) { usb_frame_number_t before_frame_number, after_frame_number; timeout_id_t xfer_timer_id, rh_timer_id; ohci_regs_t *ohci_save_regs; ohci_td_t *done_head; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Increment host controller error count */ ohcip->ohci_hc_error++; USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset:" "Reset ohci host controller 0x%x", ohcip->ohci_hc_error); /* * Allocate space for saving current Host Controller * registers. Don't do any recovery if allocation * fails.
*/ ohci_save_regs = (ohci_regs_t *) kmem_zalloc(sizeof (ohci_regs_t), KM_NOSLEEP); if (ohci_save_regs == NULL) { USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset: kmem_zalloc failed"); return (USB_FAILURE); } /* Save current ohci registers */ ohci_save_regs->hcr_control = Get_OpReg(hcr_control); ohci_save_regs->hcr_cmd_status = Get_OpReg(hcr_cmd_status); ohci_save_regs->hcr_intr_enable = Get_OpReg(hcr_intr_enable); ohci_save_regs->hcr_periodic_strt = Get_OpReg(hcr_periodic_strt); ohci_save_regs->hcr_frame_interval = Get_OpReg(hcr_frame_interval); ohci_save_regs->hcr_HCCA = Get_OpReg(hcr_HCCA); ohci_save_regs->hcr_bulk_head = Get_OpReg(hcr_bulk_head); ohci_save_regs->hcr_ctrl_head = Get_OpReg(hcr_ctrl_head); USB_DPRINTF_L4(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset: Save reg = 0x%p", (void *)ohci_save_regs); /* Disable all list processing and interrupts */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_CLE | HCR_CONTROL_BLE | HCR_CONTROL_PLE | HCR_CONTROL_IE))); Set_OpReg(hcr_intr_disable, HCR_INTR_SO | HCR_INTR_WDH | HCR_INTR_RD | HCR_INTR_UE | HCR_INTR_FNO | HCR_INTR_SOF | HCR_INTR_MIE); /* Wait for a few milliseconds */ drv_usecwait(OHCI_TIMEWAIT); /* Root hub interrupt pipe timeout id */ rh_timer_id = ohcip->ohci_root_hub.rh_intr_pipe_timer_id; /* Stop the root hub interrupt timer */ if (rh_timer_id) { ohcip->ohci_root_hub.rh_intr_pipe_timer_id = 0; ohcip->ohci_root_hub.rh_intr_pipe_state = OHCI_PIPE_STATE_IDLE; mutex_exit(&ohcip->ohci_int_mutex); (void) untimeout(rh_timer_id); mutex_enter(&ohcip->ohci_int_mutex); } /* Transfer timeout id */ xfer_timer_id = ohcip->ohci_timer_id; /* Stop the global transfer timer */ if (xfer_timer_id) { ohcip->ohci_timer_id = 0; mutex_exit(&ohcip->ohci_int_mutex); (void) untimeout(xfer_timer_id); mutex_enter(&ohcip->ohci_int_mutex); } /* Process any pending HCCA DoneHead */ done_head = (ohci_td_t *)(uintptr_t) (Get_HCCA(ohcip->ohci_hccap->HccaDoneHead) & HCCA_DONE_HEAD_MASK); if (ohci_check_done_head(ohcip, done_head) == USB_SUCCESS) { /* Reset the done head to NULL */ Set_HCCA(ohcip->ohci_hccap->HccaDoneHead, 0); ohci_traverse_done_list(ohcip, done_head); } /* Process any pending hcr_done_head value */ done_head = (ohci_td_t *)(uintptr_t) (Get_OpReg(hcr_done_head) & HCCA_DONE_HEAD_MASK); if (ohci_check_done_head(ohcip, done_head) == USB_SUCCESS) { ohci_traverse_done_list(ohcip, done_head); } /* Do soft reset of ohci host controller */ Set_OpReg(hcr_cmd_status, HCR_STATUS_RESET); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset: Reset in progress"); /* Wait for reset to complete */ drv_usecwait(OHCI_RESET_TIMEWAIT); /* Reset HCCA HcFrameNumber */ Set_HCCA(ohcip->ohci_hccap->HccaFrameNo, 0x00000000); /* * Restore the previously saved HC register values * into the current HC registers. */ Set_OpReg(hcr_periodic_strt, (uint32_t) ohci_save_regs->hcr_periodic_strt); Set_OpReg(hcr_frame_interval, (uint32_t) ohci_save_regs->hcr_frame_interval); Set_OpReg(hcr_done_head, 0x0); Set_OpReg(hcr_bulk_curr, 0x0); Set_OpReg(hcr_bulk_head, (uint32_t) ohci_save_regs->hcr_bulk_head); Set_OpReg(hcr_ctrl_curr, 0x0); Set_OpReg(hcr_ctrl_head, (uint32_t) ohci_save_regs->hcr_ctrl_head); Set_OpReg(hcr_periodic_curr, 0x0); Set_OpReg(hcr_HCCA, (uint32_t) ohci_save_regs->hcr_HCCA); Set_OpReg(hcr_intr_status, 0x0); /* * Set HcInterruptEnable to enable all interrupts except * Root Hub Status change interrupt.
*/ Set_OpReg(hcr_intr_enable, HCR_INTR_SO | HCR_INTR_WDH | HCR_INTR_RD | HCR_INTR_UE | HCR_INTR_FNO | HCR_INTR_SOF | HCR_INTR_MIE); /* Start Control and Bulk list processing */ Set_OpReg(hcr_cmd_status, (HCR_STATUS_CLF | HCR_STATUS_BLF)); /* * Start up Control, Bulk, Periodic and Isochronous lists * processing. */ Set_OpReg(hcr_control, (uint32_t) (ohci_save_regs->hcr_control & (~HCR_CONTROL_HCFS))); /* * Deallocate the space that was allocated for saving * HC registers. */ kmem_free((void *) ohci_save_regs, sizeof (ohci_regs_t)); /* Resume the host controller */ Set_OpReg(hcr_control, ((Get_OpReg(hcr_control) & (~HCR_CONTROL_HCFS)) | HCR_CONTROL_RESUME)); /* Wait for resume to complete */ drv_usecwait(OHCI_RESUME_TIMEWAIT); /* Set the Host Controller Functional State to Operational */ Set_OpReg(hcr_control, ((Get_OpReg(hcr_control) & (~HCR_CONTROL_HCFS)) | HCR_CONTROL_OPERAT)); /* Wait 10ms for HC to start sending SOF */ drv_usecwait(OHCI_TIMEWAIT); /* * Get the current usb frame number before waiting for a few * milliseconds. */ before_frame_number = ohci_get_current_frame_number(ohcip); /* Wait for a few milliseconds */ drv_usecwait(OHCI_TIMEWAIT); /* * Get the current usb frame number after waiting for a few * milliseconds. */ after_frame_number = ohci_get_current_frame_number(ohcip); USB_DPRINTF_L3(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset: Before Frm No 0x%llx After Frm No 0x%llx", (unsigned long long)before_frame_number, (unsigned long long)after_frame_number); if (after_frame_number <= before_frame_number) { USB_DPRINTF_L2(PRINT_MASK_INTR, ohcip->ohci_log_hdl, "ohci_do_soft_reset: Soft reset failed"); return (USB_FAILURE); } /* Start the timer for the root hub interrupt pipe polling */ if (rh_timer_id) { ohcip->ohci_root_hub.rh_intr_pipe_timer_id = timeout(ohci_handle_root_hub_status_change, (void *)ohcip, drv_usectohz(OHCI_RH_POLL_TIME)); ohcip->ohci_root_hub. rh_intr_pipe_state = OHCI_PIPE_STATE_ACTIVE; } /* Start the global timer */ if (xfer_timer_id) { ohcip->ohci_timer_id = timeout(ohci_xfer_timeout_handler, (void *)ohcip, drv_usectohz(1000000)); } return (USB_SUCCESS); } /* * ohci_get_current_frame_number: * * Get the current software based usb frame number. */ usb_frame_number_t ohci_get_current_frame_number(ohci_state_t *ohcip) { usb_frame_number_t usb_frame_number; usb_frame_number_t ohci_fno, frame_number; ohci_save_intr_sts_t *ohci_intr_sts = &ohcip->ohci_save_intr_sts; ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Sync HCCA area only if this function * is invoked in non-interrupt context. */ if (!(ohci_intr_sts->ohci_intr_flag & OHCI_INTR_HANDLING)) { /* Sync HCCA area */ Sync_HCCA(ohcip); } ohci_fno = ohcip->ohci_fno; frame_number = Get_HCCA(ohcip->ohci_hccap->HccaFrameNo); /* * Calculate current software based usb frame number. * * This code accounts for the fact that the frame number is * updated by the Host Controller before the ohci driver * gets a FrameNumberOverflow (FNO) interrupt that will * adjust the frame number's higher part. * * Refer to the OHCI specification 1.0a, section 5.4, page 86. */ usb_frame_number = ((frame_number & 0x7FFF) | ohci_fno) + (((frame_number & 0xFFFF) ^ ohci_fno) & 0x8000); return (usb_frame_number); } /* * ohci_cpr_cleanup: * * Clean up ohci state and other ohci-specific information across * Checkpoint Resume (CPR).
*/ static void ohci_cpr_cleanup(ohci_state_t *ohcip) { ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Reset software part of usb frame number */ ohcip->ohci_fno = 0; /* Reset Schedule Overrun Error Counter */ ohcip->ohci_so_error = 0; /* Reset HCCA HcFrameNumber */ Set_HCCA(ohcip->ohci_hccap->HccaFrameNo, 0x00000000); } /* * ohci_get_xfer_attrs: * * Get the attributes of a particular xfer. */ static usb_req_attrs_t ohci_get_xfer_attrs( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { usb_ep_descr_t *eptd = &pp->pp_pipe_handle->p_ep; usb_req_attrs_t attrs = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_get_xfer_attrs:"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); switch (eptd->bmAttributes & USB_EP_ATTR_MASK) { case USB_EP_ATTR_CONTROL: attrs = ((usb_ctrl_req_t *) tw->tw_curr_xfer_reqp)->ctrl_attributes; break; case USB_EP_ATTR_BULK: attrs = ((usb_bulk_req_t *) tw->tw_curr_xfer_reqp)->bulk_attributes; break; case USB_EP_ATTR_INTR: attrs = ((usb_intr_req_t *) tw->tw_curr_xfer_reqp)->intr_attributes; break; case USB_EP_ATTR_ISOCH: attrs = ((usb_isoc_req_t *) tw->tw_curr_xfer_reqp)->isoc_attributes; break; } return (attrs); } /* * ohci_allocate_periodic_in_resource * * Allocate interrupt/isochronous request structure for the * interrupt/isochronous IN transfer. */ static int ohci_allocate_periodic_in_resource( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw, usb_flags_t flags) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; uchar_t ep_attr = ph->p_ep.bmAttributes; usb_intr_req_t *curr_intr_reqp; usb_isoc_req_t *curr_isoc_reqp; usb_opaque_t client_periodic_in_reqp; size_t length = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_periodic_in_resource: " "pp = 0x%p tw = 0x%p flags = 0x%x", (void *)pp, (void *)tw, flags); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); ASSERT(tw->tw_curr_xfer_reqp == NULL); /* Get the client periodic in request pointer */ client_periodic_in_reqp = pp->pp_client_periodic_in_reqp; /* * If it is a periodic IN request and the periodic request is NULL, * allocate a corresponding usb periodic IN request for the * current periodic polling request and copy the information * from the saved periodic request structure. */ if ((ep_attr & USB_EP_ATTR_MASK) == USB_EP_ATTR_INTR) { if (client_periodic_in_reqp) { /* Get the interrupt transfer length */ length = ((usb_intr_req_t *) client_periodic_in_reqp)->intr_len; curr_intr_reqp = usba_hcdi_dup_intr_req( ph->p_dip, (usb_intr_req_t *) client_periodic_in_reqp, length, flags); } else { curr_intr_reqp = usb_alloc_intr_req( ph->p_dip, length, flags); } if (curr_intr_reqp == NULL) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_periodic_in_resource: Interrupt " "request structure allocation failed"); return (USB_NO_RESOURCES); } if (client_periodic_in_reqp == NULL) { /* For polled mode */ curr_intr_reqp-> intr_attributes = USB_ATTRS_SHORT_XFER_OK; curr_intr_reqp-> intr_len = ph->p_ep.wMaxPacketSize; } else { /* Check and save the timeout value */ tw->tw_timeout = (curr_intr_reqp->intr_attributes & USB_ATTRS_ONE_XFER) ?
curr_intr_reqp->intr_timeout: 0; } tw->tw_curr_xfer_reqp = (usb_opaque_t)curr_intr_reqp; tw->tw_length = curr_intr_reqp->intr_len; } else { ASSERT(client_periodic_in_reqp != NULL); curr_isoc_reqp = usba_hcdi_dup_isoc_req(ph->p_dip, (usb_isoc_req_t *)client_periodic_in_reqp, flags); if (curr_isoc_reqp == NULL) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_allocate_periodic_in_resource: Isochronous " "request structure allocation failed"); return (USB_NO_RESOURCES); } /* * Save the client's isochronous request pointer and * length of isochronous transfer in transfer wrapper. * The dup'ed request is saved in pp_client_periodic_in_reqp */ tw->tw_curr_xfer_reqp = (usb_opaque_t)pp->pp_client_periodic_in_reqp; pp->pp_client_periodic_in_reqp = (usb_opaque_t)curr_isoc_reqp; } mutex_enter(&ph->p_mutex); ph->p_req_count++; mutex_exit(&ph->p_mutex); pp->pp_state = OHCI_PIPE_STATE_ACTIVE; return (USB_SUCCESS); } /* * ohci_wait_for_sof: * * Wait for a couple of SOF interrupts */ static int ohci_wait_for_sof(ohci_state_t *ohcip) { usb_frame_number_t before_frame_number, after_frame_number; clock_t sof_time_wait; int rval, sof_wait_count; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_wait_for_sof"); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { return (rval); } /* Get the number of clock ticks to wait */ sof_time_wait = drv_usectohz(OHCI_MAX_SOF_TIMEWAIT * 1000000); sof_wait_count = 0; /* * Get the current usb frame number before waiting for the * SOF interrupt event. */ before_frame_number = ohci_get_current_frame_number(ohcip); while (sof_wait_count < MAX_SOF_WAIT_COUNT) { /* Enable the SOF interrupt */ Set_OpReg(hcr_intr_enable, HCR_INTR_SOF); ASSERT(Get_OpReg(hcr_intr_enable) & HCR_INTR_SOF); /* Wait for the SOF or timeout event */ rval = cv_reltimedwait(&ohcip->ohci_SOF_cv, &ohcip->ohci_int_mutex, sof_time_wait, TR_CLOCK_TICK); /* * Get the current usb frame number after being woken up * either by the SOF interrupt or by a timeout. */ after_frame_number = ohci_get_current_frame_number(ohcip); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_wait_for_sof: before 0x%llx, after 0x%llx", (unsigned long long)before_frame_number, (unsigned long long)after_frame_number); /* * Return failure if we are woken up because the timer * expired and the usb frame number has not changed. */ if ((rval == -1) && (after_frame_number <= before_frame_number)) { if ((ohci_do_soft_reset(ohcip)) != USB_SUCCESS) { USB_DPRINTF_L0(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "No SOF interrupts"); /* Set host controller soft state to error */ ohcip->ohci_hc_soft_state = OHCI_CTLR_ERROR_STATE; return (USB_FAILURE); } /* Get new usb frame number */ after_frame_number = before_frame_number = ohci_get_current_frame_number(ohcip); } ASSERT(after_frame_number >= before_frame_number); before_frame_number = after_frame_number; sof_wait_count++; } return (USB_SUCCESS); } /* * ohci_pipe_cleanup * * Cleanup ohci pipe.
*/ static void ohci_pipe_cleanup( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; usb_cr_t completion_reason; uint_t pipe_state = pp->pp_state; uint_t bit = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_pipe_cleanup: ph = 0x%p", (void *)ph); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); switch (pipe_state) { case OHCI_PIPE_STATE_CLOSE: if (OHCI_NON_PERIODIC_ENDPOINT(eptd)) { bit = ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_CONTROL) ? HCR_CONTROL_CLE: HCR_CONTROL_BLE; Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(bit))); /* Wait for the next SOF */ (void) ohci_wait_for_sof(ohcip); break; } /* FALLTHROUGH */ case OHCI_PIPE_STATE_RESET: case OHCI_PIPE_STATE_STOP_POLLING: /* * Set the sKip bit to stop all transactions on * this pipe */ ohci_modify_sKip_bit(ohcip, pp, SET_sKip, OHCI_FLAGS_SLEEP | OHCI_FLAGS_DMA_SYNC); break; default: return; } /* * Wait for all completed transfers to be processed and * their results sent upstream. */ ohci_wait_for_transfers_completion(ohcip, pp); /* Save the data toggle information */ ohci_save_data_toggle(ohcip, ph); /* * Traverse the list of TDs on this endpoint that have * outstanding transfer requests. Since list processing * is stopped, these TDs can be deallocated. */ ohci_traverse_tds(ohcip, ph); /* * If all of the endpoint's TDs have been deallocated, * then the DMA mappings can be torn down. If not, there * are some TDs on the done list that have not been * processed. Tag these TDs so that they are thrown * away when the done list is processed. */ ohci_done_list_tds(ohcip, ph); /* Do callbacks for all unfinished requests */ ohci_handle_outstanding_requests(ohcip, pp); /* Free DMA resources */ ohci_free_dma_resources(ohcip, ph); switch (pipe_state) { case OHCI_PIPE_STATE_CLOSE: completion_reason = USB_CR_PIPE_CLOSING; break; case OHCI_PIPE_STATE_RESET: case OHCI_PIPE_STATE_STOP_POLLING: /* Set completion reason */ completion_reason = (pipe_state == OHCI_PIPE_STATE_RESET) ? USB_CR_PIPE_RESET: USB_CR_STOPPED_POLLING; /* Restore the data toggle information */ ohci_restore_data_toggle(ohcip, ph); /* * Clear the sKip bit to restart all the * transactions on this pipe. */ ohci_modify_sKip_bit(ohcip, pp, CLEAR_sKip, OHCI_FLAGS_NOSLEEP); /* Set pipe state to idle */ pp->pp_state = OHCI_PIPE_STATE_IDLE; break; } ASSERT((Get_ED(pp->pp_ept->hced_tailp) & HC_EPT_TD_TAIL) == (Get_ED(pp->pp_ept->hced_headp) & HC_EPT_TD_HEAD)); ASSERT((pp->pp_tw_head == NULL) && (pp->pp_tw_tail == NULL)); /* * Do the callback for the original client * periodic IN request. */ if ((OHCI_PERIODIC_ENDPOINT(eptd)) && ((ph->p_ep.bEndpointAddress & USB_EP_DIR_MASK) == USB_EP_DIR_IN)) { ohci_do_client_periodic_in_req_callback( ohcip, pp, completion_reason); } } /* * ohci_wait_for_transfers_completion: * * Wait for all completed transfers to be processed and their results * sent upstream.
*/ static void ohci_wait_for_transfers_completion( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { ohci_trans_wrapper_t *head_tw = pp->pp_tw_head; ohci_trans_wrapper_t *next_tw; ohci_td_t *tailp, *headp, *nextp; ohci_td_t *head_td, *next_td; ohci_ed_t *ept = pp->pp_ept; int rval; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_wait_for_transfers_completion: pp = 0x%p", (void *)pp); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); headp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, Get_ED(ept->hced_headp) & (uint32_t)HC_EPT_TD_HEAD)); tailp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, Get_ED(ept->hced_tailp) & (uint32_t)HC_EPT_TD_TAIL)); rval = ohci_state_is_operational(ohcip); if (rval != USB_SUCCESS) { return; } pp->pp_count_done_tds = 0; /* Process the transfer wrappers for this pipe */ next_tw = head_tw; while (next_tw) { head_td = (ohci_td_t *)next_tw->tw_hctd_head; next_td = head_td; if (head_td) { /* * Walk through each TD for this transfer * wrapper. If a TD still exists, then it * is currently on the done list. */ while (next_td) { nextp = headp; while (nextp != tailp) { /* TD is on the ED */ if (nextp == next_td) { break; } nextp = (ohci_td_t *) (ohci_td_iommu_to_cpu(ohcip, (Get_TD(nextp->hctd_next_td) & HC_EPT_TD_TAIL))); } if (nextp == tailp) { pp->pp_count_done_tds++; } next_td = ohci_td_iommu_to_cpu(ohcip, Get_TD(next_td->hctd_tw_next_td)); } } next_tw = next_tw->tw_next; } USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_wait_for_transfers_completion: count_done_tds = 0x%x", pp->pp_count_done_tds); if (!pp->pp_count_done_tds) { return; } (void) cv_reltimedwait(&pp->pp_xfer_cmpl_cv, &ohcip->ohci_int_mutex, drv_usectohz(OHCI_XFER_CMPL_TIMEWAIT * 1000000), TR_CLOCK_TICK); if (pp->pp_count_done_tds) { USB_DPRINTF_L2(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_wait_for_transfers_completion: No transfers " "completion confirmation received for 0x%x requests", pp->pp_count_done_tds); } } /* * ohci_check_for_transfers_completion: * * Check whether anybody is waiting for transfers completion event. If so, send * this event and also stop initiating any new transfers on this pipe. */ static void ohci_check_for_transfers_completion( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_check_for_transfers_completion: pp = 0x%p", (void *)pp); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); if ((pp->pp_state == OHCI_PIPE_STATE_STOP_POLLING) && (pp->pp_error == USB_CR_NO_RESOURCES) && (pp->pp_cur_periodic_req_cnt == 0)) { /* Reset pipe error to zero */ pp->pp_error = 0; /* Do callback for original request */ ohci_do_client_periodic_in_req_callback( ohcip, pp, USB_CR_NO_RESOURCES); } if (pp->pp_count_done_tds) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_check_for_transfers_completion:" "count_done_tds = 0x%x", pp->pp_count_done_tds); /* Decrement the done td count */ pp->pp_count_done_tds--; if (!pp->pp_count_done_tds) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_check_for_transfers_completion:" "Sent transfers completion event pp = 0x%p", (void *)pp); /* Send the transfer completion signal */ cv_signal(&pp->pp_xfer_cmpl_cv); } } } /* * ohci_save_data_toggle: * * Save the data toggle information. 
*/ static void ohci_save_data_toggle( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; uint_t data_toggle; usb_cr_t error = pp->pp_error; ohci_ed_t *ed = pp->pp_ept; ohci_td_t *headp, *tailp; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_save_data_toggle: ph = 0x%p", (void *)ph); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Reset the pipe error value */ pp->pp_error = USB_CR_OK; /* Return immediately if it is a control or isoc pipe */ if (((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_CONTROL) || ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH)) { return; } headp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, Get_ED(ed->hced_headp) & (uint32_t)HC_EPT_TD_HEAD)); tailp = (ohci_td_t *)(ohci_td_iommu_to_cpu(ohcip, Get_ED(ed->hced_tailp) & (uint32_t)HC_EPT_TD_TAIL)); /* * Retrieve the data toggle information either from the endpoint * (ED) or from the transfer descriptor (TD) depending on the * situation. */ if ((Get_ED(ed->hced_headp) & HC_EPT_Halt) || (headp == tailp)) { /* Get the data toggle information from the endpoint */ data_toggle = (Get_ED(ed->hced_headp) & HC_EPT_Carry)? DATA1:DATA0; } else { /* * Retrieve the data toggle information depending on the * master data toggle information saved in the transfer * descriptor (TD) at the head of the endpoint (ED). * * Check for master data toggle information. */ if (Get_TD(headp->hctd_ctrl) & HC_TD_MS_DT) { /* Get the data toggle information from td */ data_toggle = (Get_TD(headp->hctd_ctrl) & HC_TD_DT_1) ? DATA1:DATA0; } else { /* Get the data toggle information from the endpoint */ data_toggle = (Get_ED(ed->hced_headp) & HC_EPT_Carry)? DATA1:DATA0; } } /* * If the error is STALL, then set * the data toggle to zero. */ if (error == USB_CR_STALL) { data_toggle = DATA0; } /* * Save the data toggle information * in the usb device structure. */ mutex_enter(&ph->p_mutex); usba_hcdi_set_data_toggle(ph->p_usba_device, ph->p_ep.bEndpointAddress, data_toggle); mutex_exit(&ph->p_mutex); } /* * ohci_restore_data_toggle: * * Restore the data toggle information. */ static void ohci_restore_data_toggle( ohci_state_t *ohcip, usba_pipe_handle_data_t *ph) { ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_ep_descr_t *eptd = &ph->p_ep; uint_t data_toggle = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_restore_data_toggle: ph = 0x%p", (void *)ph); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Return immediately if it is a control or isoc pipe. */ if (((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_CONTROL) || ((eptd->bmAttributes & USB_EP_ATTR_MASK) == USB_EP_ATTR_ISOCH)) { return; } mutex_enter(&ph->p_mutex); data_toggle = usba_hcdi_get_data_toggle(ph->p_usba_device, ph->p_ep.bEndpointAddress); usba_hcdi_set_data_toggle(ph->p_usba_device, ph->p_ep.bEndpointAddress, 0); mutex_exit(&ph->p_mutex); /* * Restore the data toggle bit depending on the * previous data toggle information. */ if (data_toggle) { Set_ED(pp->pp_ept->hced_headp, Get_ED(pp->pp_ept->hced_headp) | HC_EPT_Carry); } else { Set_ED(pp->pp_ept->hced_headp, Get_ED(pp->pp_ept->hced_headp) & (~HC_EPT_Carry)); } } /* * ohci_handle_outstanding_requests * NOTE: This function is also called from POLLED MODE. * * Deallocate interrupt/isochronous request structure for the * interrupt/isochronous IN transfer. Do the callbacks for all * unfinished requests.
*/ void ohci_handle_outstanding_requests( ohci_state_t *ohcip, ohci_pipe_private_t *pp) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; usb_ep_descr_t *eptd = &ph->p_ep; ohci_trans_wrapper_t *curr_tw; ohci_trans_wrapper_t *next_tw; usb_opaque_t curr_xfer_reqp; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_handle_outstanding_requests: pp = 0x%p", (void *)pp); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Deallocate all the pre-allocated interrupt requests */ next_tw = pp->pp_tw_head; while (next_tw) { curr_tw = next_tw; next_tw = curr_tw->tw_next; curr_xfer_reqp = curr_tw->tw_curr_xfer_reqp; /* Deallocate current interrupt request */ if (curr_xfer_reqp) { if ((OHCI_PERIODIC_ENDPOINT(eptd)) && (curr_tw->tw_direction == HC_TD_IN)) { /* Decrement periodic in request count */ pp->pp_cur_periodic_req_cnt--; ohci_deallocate_periodic_in_resource( ohcip, pp, curr_tw); } else { ohci_hcdi_callback(ph, curr_tw, USB_CR_FLUSHED); } } } } /* * ohci_deallocate_periodic_in_resource * * Deallocate interrupt/isochronous request structure for the * interrupt/isochronous IN transfer. */ static void ohci_deallocate_periodic_in_resource( ohci_state_t *ohcip, ohci_pipe_private_t *pp, ohci_trans_wrapper_t *tw) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; uchar_t ep_attr = ph->p_ep.bmAttributes; usb_opaque_t curr_xfer_reqp; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_deallocate_periodic_in_resource: " "pp = 0x%p tw = 0x%p", (void *)pp, (void *)tw); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); curr_xfer_reqp = tw->tw_curr_xfer_reqp; /* Check the current periodic in request pointer */ if (curr_xfer_reqp) { /* * Reset the periodic IN request and usb isoc * packet request pointers to NULL. */ tw->tw_curr_xfer_reqp = NULL; tw->tw_curr_isoc_pktp = NULL; mutex_enter(&ph->p_mutex); ph->p_req_count--; mutex_exit(&ph->p_mutex); /* * Free pre-allocated interrupt * or isochronous requests. */ switch (ep_attr & USB_EP_ATTR_MASK) { case USB_EP_ATTR_INTR: usb_free_intr_req( (usb_intr_req_t *)curr_xfer_reqp); break; case USB_EP_ATTR_ISOCH: usb_free_isoc_req( (usb_isoc_req_t *)curr_xfer_reqp); break; } } } /* * ohci_do_client_periodic_in_req_callback * * Do callback for the original client periodic IN request. */ static void ohci_do_client_periodic_in_req_callback( ohci_state_t *ohcip, ohci_pipe_private_t *pp, usb_cr_t completion_reason) { usba_pipe_handle_data_t *ph = pp->pp_pipe_handle; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_do_client_periodic_in_req_callback: " "pp = 0x%p cc = 0x%x", (void *)pp, completion_reason); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* * Check for Interrupt/Isochronous IN, whether we need to do * callback for the original client's periodic IN request. */ if (pp->pp_client_periodic_in_reqp) { ASSERT(pp->pp_cur_periodic_req_cnt == 0); ohci_hcdi_callback(ph, NULL, completion_reason); } } /* * ohci_hcdi_callback() * * Convenience wrapper around usba_hcdi_cb() for pipes other than the root hub.
*/ static void ohci_hcdi_callback( usba_pipe_handle_data_t *ph, ohci_trans_wrapper_t *tw, usb_cr_t completion_reason) { ohci_state_t *ohcip = ohci_obtain_state( ph->p_usba_device->usb_root_hub_dip); uchar_t attributes = ph->p_ep.bmAttributes & USB_EP_ATTR_MASK; ohci_pipe_private_t *pp = (ohci_pipe_private_t *)ph->p_hcd_private; usb_opaque_t curr_xfer_reqp; uint_t pipe_state = 0; USB_DPRINTF_L4(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_hcdi_callback: ph = 0x%p, tw = 0x%p, cr = 0x%x", (void *)ph, (void *)tw, completion_reason); ASSERT(mutex_owned(&ohcip->ohci_int_mutex)); /* Set the pipe state as per completion reason */ switch (completion_reason) { case USB_CR_OK: pipe_state = pp->pp_state; break; case USB_CR_NO_RESOURCES: case USB_CR_NOT_SUPPORTED: case USB_CR_STOPPED_POLLING: case USB_CR_PIPE_RESET: pipe_state = OHCI_PIPE_STATE_IDLE; break; case USB_CR_PIPE_CLOSING: break; default: /* * Set the pipe state to error * except for the isoc pipe. */ if (attributes != USB_EP_ATTR_ISOCH) { pipe_state = OHCI_PIPE_STATE_ERROR; pp->pp_error = completion_reason; } break; } pp->pp_state = pipe_state; if (tw && tw->tw_curr_xfer_reqp) { curr_xfer_reqp = tw->tw_curr_xfer_reqp; tw->tw_curr_xfer_reqp = NULL; tw->tw_curr_isoc_pktp = NULL; } else { ASSERT(pp->pp_client_periodic_in_reqp != NULL); curr_xfer_reqp = pp->pp_client_periodic_in_reqp; pp->pp_client_periodic_in_reqp = NULL; } ASSERT(curr_xfer_reqp != NULL); mutex_exit(&ohcip->ohci_int_mutex); usba_hcdi_cb(ph, curr_xfer_reqp, completion_reason); mutex_enter(&ohcip->ohci_int_mutex); } /* * ohci kstat functions */ /* * ohci_create_stats: * * Allocate and initialize the ohci kstat structures */ static void ohci_create_stats(ohci_state_t *ohcip) { char kstatname[KSTAT_STRLEN]; const char *dname = ddi_driver_name(ohcip->ohci_dip); char *usbtypes[USB_N_COUNT_KSTATS] = {"ctrl", "isoch", "bulk", "intr"}; uint_t instance = ohcip->ohci_instance; ohci_intrs_stats_t *isp; int i; if (OHCI_INTRS_STATS(ohcip) == NULL) { (void) snprintf(kstatname, KSTAT_STRLEN, "%s%d,intrs", dname, instance); OHCI_INTRS_STATS(ohcip) = kstat_create("usba", instance, kstatname, "usb_interrupts", KSTAT_TYPE_NAMED, sizeof (ohci_intrs_stats_t) / sizeof (kstat_named_t), KSTAT_FLAG_PERSISTENT); if (OHCI_INTRS_STATS(ohcip)) { isp = OHCI_INTRS_STATS_DATA(ohcip); kstat_named_init(&isp->ohci_hcr_intr_total, "Interrupts Total", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_not_claimed, "Not Claimed", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_so, "Schedule Overruns", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_wdh, "Writeback Done Head", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_sof, "Start Of Frame", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_rd, "Resume Detected", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_ue, "Unrecoverable Error", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_fno, "Frame No. 
Overflow", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_rhsc, "Root Hub Status Change", KSTAT_DATA_UINT64); kstat_named_init(&isp->ohci_hcr_intr_oc, "Change In Ownership", KSTAT_DATA_UINT64); OHCI_INTRS_STATS(ohcip)->ks_private = ohcip; OHCI_INTRS_STATS(ohcip)->ks_update = nulldev; kstat_install(OHCI_INTRS_STATS(ohcip)); } } if (OHCI_TOTAL_STATS(ohcip) == NULL) { (void) snprintf(kstatname, KSTAT_STRLEN, "%s%d,total", dname, instance); OHCI_TOTAL_STATS(ohcip) = kstat_create("usba", instance, kstatname, "usb_byte_count", KSTAT_TYPE_IO, 1, KSTAT_FLAG_PERSISTENT); if (OHCI_TOTAL_STATS(ohcip)) { kstat_install(OHCI_TOTAL_STATS(ohcip)); } } for (i = 0; i < USB_N_COUNT_KSTATS; i++) { if (ohcip->ohci_count_stats[i] == NULL) { (void) snprintf(kstatname, KSTAT_STRLEN, "%s%d,%s", dname, instance, usbtypes[i]); ohcip->ohci_count_stats[i] = kstat_create("usba", instance, kstatname, "usb_byte_count", KSTAT_TYPE_IO, 1, KSTAT_FLAG_PERSISTENT); if (ohcip->ohci_count_stats[i]) { kstat_install(ohcip->ohci_count_stats[i]); } } } } /* * ohci_destroy_stats: * * Clean up ohci kstat structures */ static void ohci_destroy_stats(ohci_state_t *ohcip) { int i; if (OHCI_INTRS_STATS(ohcip)) { kstat_delete(OHCI_INTRS_STATS(ohcip)); OHCI_INTRS_STATS(ohcip) = NULL; } if (OHCI_TOTAL_STATS(ohcip)) { kstat_delete(OHCI_TOTAL_STATS(ohcip)); OHCI_TOTAL_STATS(ohcip) = NULL; } for (i = 0; i < USB_N_COUNT_KSTATS; i++) { if (ohcip->ohci_count_stats[i]) { kstat_delete(ohcip->ohci_count_stats[i]); ohcip->ohci_count_stats[i] = NULL; } } } /* * ohci_do_intrs_stats: * * ohci status information */ static void ohci_do_intrs_stats( ohci_state_t *ohcip, int val) { if (OHCI_INTRS_STATS(ohcip)) { OHCI_INTRS_STATS_DATA(ohcip)->ohci_hcr_intr_total.value.ui64++; switch (val) { case HCR_INTR_SO: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_so.value.ui64++; break; case HCR_INTR_WDH: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_wdh.value.ui64++; break; case HCR_INTR_SOF: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_sof.value.ui64++; break; case HCR_INTR_RD: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_rd.value.ui64++; break; case HCR_INTR_UE: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_ue.value.ui64++; break; case HCR_INTR_FNO: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_fno.value.ui64++; break; case HCR_INTR_RHSC: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_rhsc.value.ui64++; break; case HCR_INTR_OC: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_oc.value.ui64++; break; default: OHCI_INTRS_STATS_DATA(ohcip)-> ohci_hcr_intr_not_claimed.value.ui64++; break; } } } /* * ohci_do_byte_stats: * * ohci data xfer information */ static void ohci_do_byte_stats(ohci_state_t *ohcip, size_t len, uint8_t attr, uint8_t addr) { uint8_t type = attr & USB_EP_ATTR_MASK; uint8_t dir = addr & USB_EP_DIR_MASK; if (dir == USB_EP_DIR_IN) { OHCI_TOTAL_STATS_DATA(ohcip)->reads++; OHCI_TOTAL_STATS_DATA(ohcip)->nread += len; switch (type) { case USB_EP_ATTR_CONTROL: OHCI_CTRL_STATS(ohcip)->reads++; OHCI_CTRL_STATS(ohcip)->nread += len; break; case USB_EP_ATTR_BULK: OHCI_BULK_STATS(ohcip)->reads++; OHCI_BULK_STATS(ohcip)->nread += len; break; case USB_EP_ATTR_INTR: OHCI_INTR_STATS(ohcip)->reads++; OHCI_INTR_STATS(ohcip)->nread += len; break; case USB_EP_ATTR_ISOCH: OHCI_ISOC_STATS(ohcip)->reads++; OHCI_ISOC_STATS(ohcip)->nread += len; break; } } else if (dir == USB_EP_DIR_OUT) { OHCI_TOTAL_STATS_DATA(ohcip)->writes++; OHCI_TOTAL_STATS_DATA(ohcip)->nwritten += len; switch (type) { case USB_EP_ATTR_CONTROL: OHCI_CTRL_STATS(ohcip)->writes++; 
OHCI_CTRL_STATS(ohcip)->nwritten += len; break; case USB_EP_ATTR_BULK: OHCI_BULK_STATS(ohcip)->writes++; OHCI_BULK_STATS(ohcip)->nwritten += len; break; case USB_EP_ATTR_INTR: OHCI_INTR_STATS(ohcip)->writes++; OHCI_INTR_STATS(ohcip)->nwritten += len; break; case USB_EP_ATTR_ISOCH: OHCI_ISOC_STATS(ohcip)->writes++; OHCI_ISOC_STATS(ohcip)->nwritten += len; break; } } } /* * ohci_print_op_regs: * * Print Host Controller's (HC) Operational registers. */ static void ohci_print_op_regs(ohci_state_t *ohcip) { uint_t i; USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\n\tOHCI%d Operational Registers\n", ddi_get_instance(ohcip->ohci_dip)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_revision: 0x%x \t\thcr_control: 0x%x", Get_OpReg(hcr_revision), Get_OpReg(hcr_control)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_cmd_status: 0x%x \t\thcr_intr_enable: 0x%x", Get_OpReg(hcr_cmd_status), Get_OpReg(hcr_intr_enable)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_intr_disable: 0x%x \thcr_HCCA: 0x%x", Get_OpReg(hcr_intr_disable), Get_OpReg(hcr_HCCA)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_periodic_curr: 0x%x \t\thcr_ctrl_head: 0x%x", Get_OpReg(hcr_periodic_curr), Get_OpReg(hcr_ctrl_head)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_ctrl_curr: 0x%x \t\thcr_bulk_head: 0x%x", Get_OpReg(hcr_ctrl_curr), Get_OpReg(hcr_bulk_head)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_bulk_curr: 0x%x \t\thcr_done_head: 0x%x", Get_OpReg(hcr_bulk_curr), Get_OpReg(hcr_done_head)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_frame_interval: 0x%x " "\thcr_frame_remaining: 0x%x", Get_OpReg(hcr_frame_interval), Get_OpReg(hcr_frame_remaining)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_frame_number: 0x%x \thcr_periodic_strt: 0x%x", Get_OpReg(hcr_frame_number), Get_OpReg(hcr_periodic_strt)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_transfer_ls: 0x%x \t\thcr_rh_descriptorA: 0x%x", Get_OpReg(hcr_transfer_ls), Get_OpReg(hcr_rh_descriptorA)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_rh_descriptorB: 0x%x \thcr_rh_status: 0x%x", Get_OpReg(hcr_rh_descriptorB), Get_OpReg(hcr_rh_status)); USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\tRoot hub port status"); for (i = 0; i < (Get_OpReg(hcr_rh_descriptorA) & HCR_RHA_NDP); i++) { USB_DPRINTF_L3(PRINT_MASK_ATTA, ohcip->ohci_log_hdl, "\thcr_rh_portstatus 0x%x: 0x%x ", i, Get_OpReg(hcr_rh_portstatus[i])); } } /* * ohci_print_ed: */ static void ohci_print_ed( ohci_state_t *ohcip, ohci_ed_t *ed) { uint_t ctrl = Get_ED(ed->hced_ctrl); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_print_ed: ed = 0x%p", (void *)ed); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\thced_ctrl: 0x%x %s", ctrl, ((Get_ED(ed->hced_headp) & HC_EPT_Halt) ? 
"halted": "")); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\ttoggle carry: 0x%x", Get_ED(ed->hced_headp) & HC_EPT_Carry); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tctrl: 0x%x", Get_ED(ed->hced_ctrl)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\ttailp: 0x%x", Get_ED(ed->hced_tailp)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\theadp: 0x%x", Get_ED(ed->hced_headp)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tnext: 0x%x", Get_ED(ed->hced_next)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tprev: 0x%x", Get_ED(ed->hced_prev)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tnode: 0x%x", Get_ED(ed->hced_node)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\treclaim_next: 0x%x", Get_ED(ed->hced_reclaim_next)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\treclaim_frame: 0x%x", Get_ED(ed->hced_reclaim_frame)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tstate: 0x%x", Get_ED(ed->hced_state)); } /* * ohci_print_td: */ static void ohci_print_td( ohci_state_t *ohcip, ohci_td_t *td) { uint_t i; uint_t ctrl = Get_TD(td->hctd_ctrl); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "ohci_print_td: td = 0x%p", (void *)td); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tPID: 0x%x ", ctrl & HC_TD_PID); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tDelay Intr: 0x%x ", ctrl & HC_TD_DI); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tData Toggle: 0x%x ", ctrl & HC_TD_DT); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tError Count: 0x%x ", ctrl & HC_TD_EC); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tctrl: 0x%x ", Get_TD(td->hctd_ctrl)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tcbp: 0x%x ", Get_TD(td->hctd_cbp)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tnext_td: 0x%x ", Get_TD(td->hctd_next_td)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tbuf_end: 0x%x ", Get_TD(td->hctd_buf_end)); for (i = 0; i < 4; i++) { USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\toffset[%d]: 0x%x ", i, Get_TD(td->hctd_offsets[i])); } USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\ttrans_wrapper: 0x%x ", Get_TD(td->hctd_trans_wrapper)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tstate: 0x%x ", Get_TD(td->hctd_state)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\ttw_next_td: 0x%x ", Get_TD(td->hctd_tw_next_td)); USB_DPRINTF_L3(PRINT_MASK_LISTS, ohcip->ohci_log_hdl, "\tctrl_phase: 0x%x ", Get_TD(td->hctd_ctrl_phase)); } /* * quiesce(9E) entry point. * * This function is called when the system is single-threaded at high * PIL with preemption disabled. Therefore, this function must not be * blocked. * * This function returns DDI_SUCCESS on success, or DDI_FAILURE on failure. * DDI_FAILURE indicates an error condition and should almost never happen. * * define as a wrapper for sparc, or warlock will complain. 
*/ #ifdef __sparc int ohci_quiesce(dev_info_t *dip) { return (ddi_quiesce_not_supported(dip)); } #else int ohci_quiesce(dev_info_t *dip) { ohci_state_t *ohcip = ohci_obtain_state(dip); if (ohcip == NULL) return (DDI_FAILURE); #ifndef lint _NOTE(NO_COMPETING_THREADS_NOW); #endif if (ohcip->ohci_flags & OHCI_INTR) { /* Disable all HC ED list processing */ Set_OpReg(hcr_control, (Get_OpReg(hcr_control) & ~(HCR_CONTROL_CLE | HCR_CONTROL_BLE | HCR_CONTROL_PLE | HCR_CONTROL_IE))); /* Disable all HC interrupts */ Set_OpReg(hcr_intr_disable, (HCR_INTR_SO | HCR_INTR_WDH | HCR_INTR_RD | HCR_INTR_UE)); /* Disable Master and SOF interrupts */ Set_OpReg(hcr_intr_disable, (HCR_INTR_MIE | HCR_INTR_SOF)); /* Set the Host Controller Functional State to Reset */ Set_OpReg(hcr_control, ((Get_OpReg(hcr_control) & (~HCR_CONTROL_HCFS)) | HCR_CONTROL_RESET)); /* * Workaround for ULI1575 chipset. Following OHCI Operational * Memory Registers are not cleared to their default value * on reset. Explicitly set the registers to default value. */ if (ohcip->ohci_vendor_id == PCI_ULI1575_VENID && ohcip->ohci_device_id == PCI_ULI1575_DEVID) { Set_OpReg(hcr_control, HCR_CONTROL_DEFAULT); Set_OpReg(hcr_intr_enable, HCR_INT_ENABLE_DEFAULT); Set_OpReg(hcr_HCCA, HCR_HCCA_DEFAULT); Set_OpReg(hcr_ctrl_head, HCR_CONTROL_HEAD_ED_DEFAULT); Set_OpReg(hcr_bulk_head, HCR_BULK_HEAD_ED_DEFAULT); Set_OpReg(hcr_frame_interval, HCR_FRAME_INTERVAL_DEFAULT); Set_OpReg(hcr_periodic_strt, HCR_PERIODIC_START_DEFAULT); } ohcip->ohci_hc_soft_state = OHCI_CTLR_SUSPEND_STATE; } /* Unmap the OHCI registers */ if (ohcip->ohci_regs_handle) { /* Reset the host controller */ Set_OpReg(hcr_cmd_status, HCR_STATUS_RESET); } #ifndef lint _NOTE(COMPETING_THREADS_NOW); #endif return (DDI_SUCCESS); } #endif /* __sparc */
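The frame-number arithmetic in ohci_get_current_frame_number() above is dense, so here is a minimal standalone sketch of the same reconstruction (not part of the driver; merge_frame_number and the test values are hypothetical). It shows how the 15 live hardware bits are merged with the software-maintained high part, and how adding 0x8000 compensates when the counter wraps before the FNO interrupt is serviced:

#include <stdio.h>
#include <stdint.h>

/*
 * Standalone illustration of the usb frame number reconstruction used in
 * ohci_get_current_frame_number(). sw_fno models ohcip->ohci_fno (the
 * software copy, updated on FNO interrupts); hw_frame models the 16-bit
 * HccaFrameNo written by the host controller.
 */
static uint64_t
merge_frame_number(uint64_t sw_fno, uint32_t hw_frame)
{
	/* Take the 15 live hardware bits, high part from software */
	uint64_t merged = (hw_frame & 0x7FFF) | sw_fno;

	/*
	 * If bit 15 of hardware and software disagree, the counter wrapped
	 * before the FNO interrupt ran; add one wrap (0x8000) to compensate.
	 */
	return (merged + (((hw_frame & 0xFFFF) ^ sw_fno) & 0x8000));
}

int
main(void)
{
	/* No pending wrap: software and hardware agree on bit 15 */
	printf("0x%llx\n", (unsigned long long)
	    merge_frame_number(0x8000, 0x8003));	/* prints 0x8003 */

	/* Hardware wrapped to 0x0003 before FNO was serviced */
	printf("0x%llx\n", (unsigned long long)
	    merge_frame_number(0x8000, 0x0003));	/* prints 0x10003 */

	return (0);
}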
import platform


def check_os():
    """Return the name of the running OS, e.g. 'Linux', 'Darwin' or 'Windows'."""
    my_system = platform.system()
    return my_system
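A quick usage sketch for check_os() (the __main__ guard is illustrative and not from the original source):

if __name__ == "__main__":
    # platform.system() reports e.g. "Linux", "Darwin" or "Windows"
    print("Running on:", check_os())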
def property_widgets(self, widget=None, current_widget=False):
    """Collect child widgets that define the dynamic property 'prop'."""
    result = list()
    if current_widget:
        # Default to the widget currently shown in the stacked widget
        widget = widget or self._stacked_widget.current_widget()
    else:
        # Default to the stacked widget itself (searches all pages)
        widget = widget or self._stacked_widget
    for child in qtutils.iterate_children(widget, skip='skipChildren'):
        if child.property('prop') is not None:
            result.append(child)
    return result
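property_widgets() depends on project-specific helpers (qtutils.iterate_children, self._stacked_widget). As a rough, hedged sketch of the same idea using only stock Qt APIs (assuming PySide2, and ignoring the original's skipChildren handling), the dynamic-property filter could be expressed as:

from PySide2.QtWidgets import QWidget

def widgets_with_prop(root, prop_name="prop"):
    """Return descendant widgets of root that define the given dynamic property."""
    # findChildren() recurses through the whole child hierarchy;
    # property() returns None when the dynamic property is unset.
    return [w for w in root.findChildren(QWidget)
            if w.property(prop_name) is not None]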
On Monday, November 6, major media-acquisition news landed: 21st Century Fox has reportedly held talks to sell all of its assets to Disney. CNBC's unnamed sources say those talks have since stalled, but the mere possibility got nerd tongues wagging. What would happen if those two media giants joined in unholy matrimony? In addition to questions about Disney and Fox's shared rights to Marvel Comics properties, one franchise stood out: Star Wars. Our own Lee Hutchinson talked at length about how Fox figures into the future of Star Wars' past, so we're resurfacing this 2014 article, which looks at the logistical and legal hurdles that existed on the eve of the original trilogy's first major Blu-ray launch. Until we hear any firmer news about Fox and Disney, of course, this is all a bit of a pipe dream. But who knows? Disney is doing all kinds of things with the Star Wars universe now that it has purchased the franchise away from George Lucas. In addition to the three sequel films, there will be "at least three" spin-off movies, which will likely be origin stories for some of the supporting cast of Star Wars characters. The House of Mouse is pouring a tremendous amount of time and money into Star Wars, and Disney could be the new arbiter of the Holy Grail of Star Wars requests: a remastered release of the unedited, non-special-edition original trilogy. Unadulterated, "pure" versions of the original Star Wars films are difficult to come by. Except for one sad, low-resolution release on DVD in 2006 (which we'll discuss in a moment), the films have only been available in their modified "Special Edition" forms since 1997, when George Lucas re-released the films to theaters with a series of changes. Some of those changes aren't bad at all—the fancy new attack on the Death Star in Episode IV is perfectly cromulent—but others are absolutely terrible. In Return of the Jedi, Jabba's palace gains an asinine CGI-filled song-and-dance interlude. Dialogue is butchered in Empire Strikes Back. And in the first movie, perhaps most famously, Han no longer shoots first. Each subsequent release has piled on more and more changes, culminating in the Star Wars Blu-ray release, which now has Return of the Jedi climaxing with Darth Vader howling "NOOOOOOOO!" as he flings the Emperor into the shaft (spoiler alert from 1983, I guess). For every round of changes, the fan outcry for an unedited original release has grown. And now that Disney has its hands wrapped firmly around the Star Wars steering wheel, the company seems to be in the perfect position to give the fans what they want. But assuming Disney wanted to invest the time and effort into such a release, is it actually possible? Do the original Episodes IV-VI exist in a restorable state, or is the oft-repeated story that they were "destroyed" during the editing of the 1997 Special Edition re-releases actually true? And even if a restoration is actually possible, would Disney be able to do the work and release the movies under the terms of its existing Star Wars license? It turns out that these two questions both have complicated answers. The quick spoiler versions are "almost certainly yes" and "no, at least not for now," but the long answers require going down a number of different rabbit holes. Strap in, because we're about to make the jump to light speed.
Making Han shoot first again The last time George Lucas had anything definitive to say about the original original trilogy appears to have been in an interview with The Today Show, 10 years ago: The special edition, that's the one I wanted out there. The other movie, it's on VHS, if anybody wants it. ... I'm not going to spend the—we're talking millions of dollars here, the money and the time to refurbish that, because, to me, it doesn't really exist anymore. It's like this is the movie I wanted it to be, and I'm sorry you saw half a completed film and fell in love with it. Further, Lucasfilm issued a statement in 2006 that seemed to put to rest any rumors that the original versions of the film exist: As you may know, an enormous amount of effort was put into digitally restoring the negatives for the Special Editions. In one scene alone, nearly one million pieces of dirt had to be removed, and the Special Editions were created through a frame-by-frame digital restoration. The negatives of the movies were permanently altered for the creation of the Special Editions, and existing prints of the first versions are in poor condition. Ars alum Ben Kuchera invested considerable time and effort into debunking those claims back in 2010, enlisting the aid of author and Star Wars expert Michael Kaminski. As Kuchera noted in his 2010 piece (and as many others have noted since), Lucasfilm isn't exactly lying when it says that the original negatives were permanently altered—but it's not being wholly truthful, either. The theatrical releases of the films were last made available to the public as companion features on the DVD special edition releases in 2006. The sources for the DVD transfers were digital videotapes, which, as SaveStarWars.com explains, were created in 1993 via telecine from an interpositive struck from the original negatives back in 1985. The same telecine was later given the THX treatment and used as the source for the 1995 Laserdisc release of the trilogy, which—up until the DVD release in 2006—was considered the definitive reference version of Star Wars on a home video format. This all sounds good, since the DVD release and the previously definitive Laserdisc both come from the same source. But it's not: the quality of the original edits on DVD was vastly inferior to the quality of the special edition versions. The transfer isn't anamorphic, and the audio is compressed Dolby 2.0. Further, as SaveStarWars demonstrates, the telecine source used for the DVDs was subject to a high degree of digital noise removal, which erases fine details. Looking at a few still frames side by side, the difference is quite obvious; it's even more obvious in motion. Fixing this for a new release would require going back to some kind of analog source, like an interpositive or the original negatives. Lucasfilm claims the negatives themselves were "permanently altered" for the special editions, so that's a bust—or is it? Here, it turns out, is where Lucasfilm was twisting the truth. Quoting SaveStarWars.com: The negative is conformed to the Special Edition edit, because there can only be one original negative. So, technically speaking, the negative assembly of the originals does not exist. But it would be very easy to simply put the original pieces back in and conform it to the original versions. Actually, in a theoretical modern restoration, they would just scan the original pieces and make a digital edit, especially since disassembling the negative puts a lot of wear on it. 
There are also secondary sources, such as separation masters and interpositives, both of which were used to make duplicate pieces to repair parts of the original negative for the 1997 release. So, basically, the official Lucasfilm stance is a lot of crap, designed to confuse people who don't have a thorough knowledge of how post-production works. Sounds simple—all Disney would theoretically have to do is grab all the original negatives, scan them in 4k or 8k resolution (which is standard procedure for remastering a film these days), and boom, Star Wars! Right? Things are never that simple. It turns out that the "original" negative is actually in pretty terrible shape. Kaminski's detailed recounting of the restoration process at The Secret History of Star Wars is the definitive one. To summarize, when Lucasfilm employees pulled the original negatives from their storage cans in 1994 to start restoration work for the special editions, they found the film stock had drastically faded colors and exhibited a tremendous amount of damage. A number of different specialist companies were employed by Lucasfilm to carefully clean, re-color, and reconstruct the negatives. There were a number of different film stocks edited together, and so the process included a physical disassembly of the negative into its component stocks before hand-cleaning each section of negative using different stock-appropriate methods. It was a detailed and complex procedure, and not everything that was done to the negative was fully documented. Kaminski notes that, for some of the segments featuring visual effects, Lucasfilm and Industrial Light and Magic went back to original VFX components and re-composited them from scratch, effectively creating new negatives for those sections, and "[w]hen these were finished, they were printed back onto film and cut into the O-neg [the original negative], again replacing the originals. The O-neg was slowly being subsumed by new material." The new alterations also included the updated special-edition VFX sequences, though. The sections of negative those VFX sequences replaced—like huge swaths of the Death Star attack at the end of Episode IV, for example—were almost certainly put back into storage. The broad consensus across numerous expert sources, including Kaminski, is that all except a few minutes' run-time of all three original Star Wars films were painstakingly restored to pristine quality in one way or another. Those segments that weren't fully restored—like Han shooting Greedo first, or the non-CGI dancer sequence at Jabba’s palace—were likely at least partially restored. Even if not, those sequences' negatives are waiting in film cans. Stated simply: the vast majority of the restoration work to release a beautiful HD version of the original trilogy has already been completed.
/**
 * Decode the specified Strings into cell constraints.
 */
private static CellConstraints[] decode(String[] cellConstraints) {
    CellConstraints[] decoded = new CellConstraints[cellConstraints.length];
    for (int c = 0; c < cellConstraints.length; c++) {
        decoded[c] = new CellConstraints(cellConstraints[c]);
    }
    return decoded;
}
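A hedged usage sketch for decode(): assuming CellConstraints is the JGoodies FormLayout class, whose String constructor parses encodings such as "col, row" or "col, row, colSpan, rowSpan":

// Hypothetical call site; the encodings below follow the JGoodies
// "col, row[, colSpan, rowSpan]" convention.
String[] encoded = {"1, 1", "3, 1", "1, 3, 3, 1"};
CellConstraints[] cc = decode(encoded);
// cc[2] places a component at column 1, row 3, spanning three columns.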
/* * File: bam_reader.cc * Author: <NAME> <thomas(at)bioinf.uni-leipzig.de> * * Created on January 20, 2016, 1:13 PM */ #include <deque> #include <string> #include <vector> #include <math.h> #include <queue> #include <boost/unordered/unordered_map.hpp> #include <tuple> #include <stdlib.h> #ifdef _OPENMP #include <omp.h> #endif #include "bam_reader.h" #include "Datatype_Templates/misc_types.h" #include "Chromosome/exon.h" #include "../Datatype_Templates/move.h" #include "Chromosome/connection_iterator.h" #include "../Options/options.h" #include "../Logger/logger.h" #include "Chromosome/read_collection.h" #include "Chromosome/raw_series_counts.h" //#include <chrono> //using namespace std::chrono; bam_reader::bam_reader() { } bam_reader::~bam_reader() { } void bam_reader::finalize(const std::string &chrom_name) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Finalize " + chrom_name + ".\n"); #endif chromosome *chrom_fwd, *chrom_rev; #pragma omp critical(chromosome_map_lock) { chrom_fwd = &chromosome_map_fwd[chrom_name]; if (options::Instance()->is_stranded()) { chrom_rev = &chromosome_map_rev[chrom_name]; } } #pragma omp critical(iterator_map_lock) { #pragma omp critical(chromosome_map_lock) { if (options::Instance()->is_stranded()) { iterator_map[chrom_name] = connection_iterator(&chromosome_map_fwd[chrom_name], &chromosome_map_rev[chrom_name]); } else { iterator_map[chrom_name] = connection_iterator(&chromosome_map_fwd[chrom_name]); } } } } void bam_reader::split_independent_component(connected *conn, greader_list<connected> &all_connected) { // we want to split conn into a suitable list of connected! struct reg_save { greader_refsorted_list<exon*> exons; }; std::list<reg_save> regions; for(greader_refsorted_list<raw_atom* >::iterator raw_it = conn->atoms->begin(); raw_it != conn->atoms->end(); ++raw_it) { if (regions.empty()) { // we add a new one regions.push_back(reg_save()); // we make copies, yeay! std::copy((*raw_it)->exons->begin(), (*raw_it)->exons->end(), std::inserter(regions.back().exons, regions.back().exons.end())); } else { greader_list<reg_save* > matched_regions; for (std::list<reg_save>::iterator reg_it = regions.begin(); reg_it != regions.end(); ++reg_it) { if (!reg_it->exons.is_disjoint((*raw_it)->exons.ref())) { // this is an overlap! matched_regions.push_back(&*reg_it); } } if (matched_regions.size() == 0) { regions.push_back(reg_save()); // we make copies, yeay! std::copy((*raw_it)->exons->begin(), (*raw_it)->exons->end(), std::inserter(regions.back().exons, regions.back().exons.end())); } else if (matched_regions.size() == 1) { std::copy((*raw_it)->exons->begin(), (*raw_it)->exons->end(), std::inserter(matched_regions.back()->exons, matched_regions.back()->exons.begin())); } else { // matched_regions.size() > 1 greader_list<reg_save* >::iterator fm = matched_regions.begin(); greader_list<reg_save* >::iterator match_it = fm; ++match_it; std::copy((*raw_it)->exons->begin(), (*raw_it)->exons->end(), std::inserter((*fm)->exons, (*fm)->exons.begin())); for (; match_it != matched_regions.end(); ++match_it) { std::copy((*match_it)->exons.begin(), (*match_it)->exons.end(), std::inserter((*fm)->exons, (*fm)->exons.begin())); for (std::list<reg_save>::iterator reg_it = regions.begin(); reg_it != regions.end(); ++reg_it) { if (&*reg_it == *match_it) { regions.erase(reg_it); break; } } } } } } // now create connected as separate regions! if (regions.size() < 2) { all_connected.push_back(*conn); return; } connected old_one = *conn; // all is lazy, no problem!
for (std::list<reg_save>::iterator reg_it = regions.begin(); reg_it != regions.end(); ++reg_it) { // logger::Instance()->info("New Region.\n"); all_connected.push_back(connected()); connected* conn_it = &all_connected.back(); conn_it->intel_count = old_one.intel_count; conn_it->avg_split = old_one.avg_split; conn_it->start = (*reg_it->exons.begin())->start; conn_it->end = (*reg_it->exons.rbegin())->end; for (greader_refsorted_list<exon*>::iterator fexi = reg_it->exons.begin(); fexi != reg_it->exons.end(); ++fexi) { conn_it->fossil_exons->push_back(*fexi); } // second pass for matches to sort them in for(greader_refsorted_list<raw_atom* >::iterator raw_it = old_one.atoms->begin(); raw_it != old_one.atoms->end(); ++raw_it) { if (!reg_it->exons.is_disjoint((*raw_it)->exons.ref())) { // this is a match! conn_it->atoms->insert(conn_it->atoms->end(), *raw_it); } } // clean out now missing pairs for(greader_refsorted_list<raw_atom* >::iterator a_it = conn_it->atoms->begin(); a_it != conn_it->atoms->end(); ++a_it) { for (paired_map<raw_atom*, gmap<int, rcount> >::iterator p_it = (*a_it)->paired.begin(); p_it!= (*a_it)->paired.end(); ) { if ( reg_it->exons.is_disjoint((p_it->first)->exons.ref()) ) { p_it = (*a_it)->paired.erase(p_it); } else { ++p_it; } } } } } void bam_reader::discard(const std::string &chrom_name) { #ifdef ALLOW_DEBUG logger::Instance()->info("Discard finished " + chrom_name + ".\n"); #endif #pragma omp critical(iterator_map_lock) { #pragma omp critical(chromosome_map_lock) { if (options::Instance()->is_stranded()) { chromosome_map_fwd.erase(chrom_name); chromosome_map_rev.erase(chrom_name); iterator_map.erase(chrom_name); } else { chromosome_map_fwd.erase(chrom_name); iterator_map.erase(chrom_name); } } } } unsigned int bam_reader::get_num_connected(const std::string &chrom_name) { unsigned int total; #pragma omp critical(iterator_map_lock) { total = iterator_map[chrom_name].total(); } return total; } bool bam_reader::populate_next_group(const std::string &chrom_name, greader_list<connected> &all_connected, exon_meta* meta) { greader_list<connected>::iterator ob; bool exit = false; connection_iterator* it; #pragma omp critical(iterator_map_lock) { it = &iterator_map[chrom_name]; if (!it->next(ob, meta->order_index)) { exit = true; } } if (exit) { #ifdef ALLOW_DEBUG logger::Instance()->info("Exit.\n"); #endif return false; } // general statistics meta->chromosome = chrom_name; if (it->in_fwd) { meta->strand = "+"; } else { meta->strand = "-"; } if (options::Instance()->is_stranded()) { // logger::Instance()->info("Compute Avrg " + std::to_string(it->fwd->average_read_lenghts) + " " + std::to_string(it->bwd->average_read_lenghts) +".\n"); if (it->fwd->average_read_lenghts < 1.0) { meta->avrg_read_length = it->bwd->average_read_lenghts ; } else if (it->bwd->average_read_lenghts < 1.0) { meta->avrg_read_length = it->fwd->average_read_lenghts ; } else { meta->avrg_read_length = it->fwd->average_read_lenghts / 2 + it->bwd->average_read_lenghts / 2 ; } } else { meta->avrg_read_length = it->fwd->average_read_lenghts; } // split up components split_independent_component(&*ob, all_connected); // logger::Instance()->info("Populate Group " + chrom_name + " " + std::to_string((*ob->fossil_exons.ref().begin())->start ) + "-" + std::to_string((*ob->fossil_exons.ref().rbegin())->end) +".\n"); return true; } void bam_reader::populate_next_single(const std::string &chrom_name, connected *ob, pre_graph* raw, exon_meta* meta) { // logger::Instance()->info("Populate Single " + chrom_name + " " + 
    // set size, and add in mean
    meta->set_size(ob->fossil_exons.ref().size());
    raw->set_size(ob->fossil_exons.ref().size());

    // hand out consecutive ids and copy the exon coordinates over to meta
    unsigned int i = 0;
    for (greader_list<exon* >::iterator e_it = ob->fossil_exons.ref().begin(); e_it != ob->fossil_exons.ref().end(); ++e_it, ++i) {
        (*e_it)->id = i;
        meta->exons[i] = exon_meta::exon_meta_info();
        meta->exons[i].left = (*e_it)->start;
        meta->exons[i].right = (*e_it)->end;
        meta->exons[i].exon_length = meta->exons[i].right - meta->exons[i].left + 1;
        meta->exons[i].id = i;

        #ifdef ALLOW_DEBUG
        logger::Instance()->debug("Meta Exon" + std::to_string( i ) + " " + std::to_string( meta->exons[i].left ) + " " + std::to_string(meta->exons[i].right) +"\n");
        #endif
    }

    #ifdef ALLOW_DEBUG
    logger::Instance()->debug("Atoms " + std::to_string(ob->atoms.ref().size())+".\n");
    #endif

    // now convert all atoms to pregraph types
    // atoms are sorted!
    unsigned int id = 0;
    for (greader_refsorted_list<raw_atom* >::iterator atom = ob->atoms.ref().begin(); atom != ob->atoms.ref().end(); ++atom) {

        if (!(*atom)->has_coverage && !(*atom)->reference_atom) {
            #ifdef ALLOW_DEBUG
            logger::Instance()->debug("Raw Omitted " + std::to_string( (long) *atom) + " " + (*atom)->to_string() +"\n");
            #endif
            continue;
        }

        (*atom)->id = id;

        #ifdef ALLOW_DEBUG
        logger::Instance()->debug("Atom " + std::to_string( (long) *atom) + " " + (*atom)->to_string() + " " + std::to_string((*atom)->id) + " " + std::to_string((*atom)->has_coverage) + " " + std::to_string((*atom)->reference_atom) +"\n");
        #endif

        // one exon_group bin per kept atom; i is the total number of exons here
        raw->singled_bin_list.push_back(exon_group(i, (*atom)->exons->size()));
        exon_group* new_group = &raw->singled_bin_list.back();

        greader_refsorted_list<exon*>::iterator ae_it = (*atom)->exons.ref().begin();
        new_group->range_start = (*ae_it)->id;
        for (; ae_it != (*atom)->exons.ref().end(); ++ae_it) {
            new_group->set( (*ae_it)->id, true );
        }
        --ae_it;
        new_group->range_end = (*ae_it)->id;
        ++id;
    }

    std::set<unsigned int> guide_starts;
    std::set<unsigned int> guide_ends;

    // second pass: transfer per-atom flags and count series into the created bins
    for (greader_refsorted_list<raw_atom* >::iterator atom = ob->atoms.ref().begin(); atom != ob->atoms.ref().end(); ++atom) {

        if (!(*atom)->has_coverage && !(*atom)->reference_atom) {
            continue;
        }

        #ifdef ALLOW_DEBUG
        logger::Instance()->debug("Next Atom " + std::to_string((*atom)->id) + " " + (*atom)->to_string() + ".\n");
        #endif

        exon_group* lr = &raw->singled_bin_list[(*atom)->id];

        lr->length_filterd = (*atom)->length_filtered;
        lr->drain_evidence = (*atom)->drain_evidence;
        lr->source_evidence = (*atom)->source_evidence;
        lr->reference_atom = (*atom)->reference_atom;
        lr->has_coverage = (*atom)->has_coverage;
        if (lr->reference_atom) {
            if (lr->range_start != 0) {
                guide_starts.insert(lr->range_start);
            }
            if (lr->range_end != i-1) {
                guide_ends.insert(lr->range_end);
            }
            lr->reference_name = (*atom)->reference_name;
            lr->reference_gene = (*atom)->reference_gene;
        }

        for(gmap<int, raw_series_counts>::iterator rsci = (*atom)->raw_series.begin(); rsci != (*atom)->raw_series.end(); ++rsci) {
            int id = rsci->first; // series id (shadows the outer bin counter)
            lr->count_series[id].init((*atom)->exons->size());
            lr->count_series[id].read_count = rsci->second.count;
            lr->count_series[id].frag_count = rsci->second.count - rsci->second.paired_count;
            lr->count_series[id].total_lefts = rsci->second.total_lefts;
            lr->count_series[id].total_rights = rsci->second.total_rights;
            lr->count_series[id].lefts = rsci->second.lefts;
            lr->count_series[id].rights = rsci->second.rights;

            std::map< rpos,rcount >::iterator hsi =
rsci->second.hole_starts->begin(); std::map< rpos,rcount >::iterator hei = rsci->second.hole_ends->begin(); unsigned int index = 0; for (greader_refsorted_list<exon*>::iterator ae_it = (*atom)->exons.ref().begin(); ae_it != (*atom)->exons.ref().end(); ++ae_it, ++index) { while (hsi != rsci->second.hole_starts->end() && hsi->first <= meta->exons[(*ae_it)->id].right && hsi->first >= meta->exons[(*ae_it)->id].left) { lr->count_series[id].hole_starts[index].insert(*hsi); lr->count_series[id].hole_start_counts[index] += hsi->second; ++hsi; } while (hei != rsci->second.hole_ends->end() && hei->first <= meta->exons[(*ae_it)->id].right && hei->first >= meta->exons[(*ae_it)->id].left) { lr->count_series[id].hole_ends[index].insert(*hei); lr->count_series[id].hole_end_counts[index] += hei->second; ++hei; } } } paired_map<raw_atom* , gmap<int, rcount> >::iterator it = (*atom)->paired.begin(); while(it != (*atom)->paired.end() && !it->first->has_coverage && !it->first->reference_atom) { ++it; } if (it != (*atom)->paired.end() ) { graph_list<paired_exon_group>::iterator start; exon_group* rr = &raw->singled_bin_list[it->first->id]; #ifdef ALLOW_DEBUG logger::Instance()->debug("RR1 " + std::to_string( (long) it->first) + " " + std::to_string(it->first->id) + " " + it->first->to_string() + ".\n"); #endif start = raw->paired_bin_list.insert(raw->paired_bin_list.end(), paired_exon_group(lr, rr, it->second)); ++it; for (; it!= (*atom)->paired.end(); ++it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("RR2 " + std::to_string( (long) it->first) + " " + std::to_string(it->first->id) + " " + it->first->to_string() + ".\n"); #endif if (it->first->has_coverage || it->first->reference_atom) { exon_group* rr = &raw->singled_bin_list[it->first->id]; start = raw->paired_bin_list.insert(start, paired_exon_group(lr, rr, it->second)); } } std::sort(start, raw->paired_bin_list.end()); // sorting is done to achieve consistent output } } if (!guide_starts.empty() || !guide_ends.empty()) { for(graph_list<exon_group>::iterator sb_it = raw->singled_bin_list.begin(); sb_it != raw->singled_bin_list.end(); ++sb_it) { if (guide_starts.find(sb_it->range_start) != guide_starts.end()) { sb_it->source_evidence = true; } if (guide_ends.find(sb_it->range_end) != guide_ends.end()) { sb_it->drain_evidence = true; } } } #pragma omp critical(chromosome_map_lock) { chromosome *chrom_fwd, *chrom_rev; chrom_fwd = &chromosome_map_fwd[chrom_name]; raw->average_fragment_length = 2*chrom_fwd->average_read_lenghts + ob->avg_split; #ifdef ALLOW_DEBUG logger::Instance()->debug("Average Fragsize " + std::to_string(chrom_fwd->average_read_lenghts) + " " + std::to_string(ob->avg_split) + "\n"); #endif if (options::Instance()->is_stranded()) { chrom_rev = &chromosome_map_rev[chrom_name]; rpos flen = 2*chrom_rev->average_read_lenghts + ob->avg_split; if (raw->average_fragment_length < flen) { raw->average_fragment_length = flen; } } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Average Fragsize " + std::to_string(raw->average_fragment_length) + "\n"); #endif // we can finally kill all contents of the connected ob->atoms.ref().clear(); ob->fossil_exons.ref().clear(); ob->reads.ref().clear(); } unsigned long bam_reader::return_read_count(const std::string &file_name) { htsFile* file = hts_open(file_name.c_str(), "r"); hts_idx_t* idx = sam_index_load(file, file_name.c_str()); bam_hdr_t *hdr = sam_hdr_read(file); uint64_t mapped_total = 0; uint64_t mapped; uint64_t unmapped; for (unsigned int i=0; i < hdr->n_targets; i++) { hts_idx_get_stat(idx, i, 
&mapped, &unmapped); mapped_total += mapped; } hts_close(file); return mapped_total; } void bam_reader::return_chromosome_names(const std::string &file_name, greader_name_set<std::string> &return_list) { // open reader htsFile* file = hts_open(file_name.c_str(), "r"); // open header bam_hdr_t *hdr = sam_hdr_read(file); for(int i = 0; i < hdr->n_targets; i++) { return_list.insert(hdr->target_name[i]); } hts_close(file); } void bam_reader::read_chromosome(std::vector<std::string> file_names, std::string chrom_name) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Read chromosome " + chrom_name+".\n"); #endif // get or create the chromosome by name // we only use fwd if unstranded chromosome *chrom_fwd, *chrom_rev; #pragma omp critical(chromosome_map_lock) { chrom_fwd = &chromosome_map_fwd[chrom_name]; if (options::Instance()->is_stranded()) { chrom_rev = &chromosome_map_rev[chrom_name]; } } struct bam_file_ { htsFile* file; hts_idx_t* idx; bam_hdr_t *hdr; hts_itr_t *iter; bam1_t *read; int status; unsigned int index; }; std::vector<bam_file_> fileh(file_names.size()); { unsigned int i = 0; for(std::vector<std::string>::iterator it = file_names.begin(); it != file_names.end() ; it++, i++) { // open reader fileh[i].file = hts_open(it->c_str(), "r"); // we need an index, because otherwise we need to read the whole file multiple times // we can use the same name, it will add the bai automatically fileh[i].idx = sam_index_load(fileh[i].file, it->c_str()); // we also need to the header for indexes fileh[i].hdr = sam_hdr_read(fileh[i].file); fileh[i].iter = NULL; // iterator and object to read stuff into fileh[i].iter = sam_itr_querys(fileh[i].idx, fileh[i].hdr, chrom_name.c_str()); fileh[i].read = bam_init1(); fileh[i].status = sam_itr_next(fileh[i].file, fileh[i].iter, fileh[i].read); fileh[i].index = i; } } // we need to move through the list of already there exons r_border_set<rpos>::iterator ex_start_fwd_it = chrom_fwd->fixed_exon_starts.ref().begin(); r_border_set<rpos>::iterator ex_end_fwd_it = chrom_fwd->fixed_exon_ends.ref().begin(); r_border_set<rpos>::iterator ex_start_rev_it, ex_end_rev_it; if (options::Instance()->is_stranded()) { ex_start_rev_it = chrom_rev->fixed_exon_starts.ref().begin(); ex_end_rev_it = chrom_rev->fixed_exon_ends.ref().begin(); } // strands can overlap without rpos left_border_fwd = 0; rpos right_border_fwd = 0; rpos left_border_rev = 0; rpos right_border_rev = 0; bool evidence_plus = false; bool evidence_minus = false; bool discourage_plus = false; bool discourage_minus = false; unsigned int total_count = (file_names.size() + options::Instance()->get_pooling() -1 ) / options::Instance()->get_pooling(); // division ceiling(size / pool) // main loop of this, we always read ALL reads, but compacting is called multiple times std::vector<bam_file_>::iterator next = fileh.end(); while ( true ) { if ( next != fileh.end() ) next->status = sam_itr_next(next->file, next->iter, next->read); next = fileh.end(); for(std::vector<bam_file_>::iterator it = fileh.begin(); it != fileh.end(); it++) { if ( it->status >= 0) { if ( next == fileh.end() || next->read->core.pos > it->read->core.pos ) { next = it; } } } if ( next == fileh.end() ) { break; } bam1_t *read = next->read; unsigned int index = next->index; std::string id_prefix = std::to_string(index); #ifdef ALLOW_DEBUG logger::Instance()->debug("Next Sam Iter " + std::string(bam_get_qname(read)) + "\n"); #endif // if (read->core.flag & BAM_FSECONDARY || read->core.flag & BAM_FUNMAP || read->core.qual < 1) { // this is a 
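        // At this point `read` is the globally leftmost pending alignment
        // across all input files (a simple k-way merge over the per-file
        // iterators above), so reads are consumed in coordinate order.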
        if (read->core.flag & BAM_FSECONDARY || read->core.flag & BAM_FUNMAP || read->core.qual < 1) {
            // only use primary, mapped alignments with mapping quality of at least 1
            #ifdef ALLOW_DEBUG
            logger::Instance()->debug("Skip by Flags " + std::to_string(read->core.flag & BAM_FSECONDARY) + " - " + std::to_string(read->core.flag & BAM_FUNMAP) + " - " + std::to_string(read->core.qual) + "\n");
            #endif
            continue;
        }

        uint32_t* cigar = bam_get_cigar(read);
        bool long_intron = false;
        bool too_small = false;
        bool has_intron = false;
        for(int i = 0; i < read->core.n_cigar; i++) {
            const int op = bam_cigar_op(cigar[i]);
            const int ol = bam_cigar_oplen(cigar[i]);
            // do we have an N?
            if (op == BAM_CREF_SKIP && ol > options::Instance()->get_maximal_intron_size()) {
                logger::Instance()->info("Skip Read With Overly Long Intron " + std::string(bam_get_qname(read)) + "\n");
                long_intron = true;
                break;
            }
            if (op == BAM_CREF_SKIP) {
                has_intron = true;
            }
            if (op == BAM_CMATCH && i != 0 && i != read->core.n_cigar - 1 && ol < 3) {
                too_small = true;
                break;
            }
        }
        if (long_intron || too_small) {
            #ifdef ALLOW_DEBUG
            logger::Instance()->debug("Skip Inconsistent " + std::to_string(long_intron) + " - " + std::to_string(too_small) + "\n");
            #endif
            continue;
        }

        // process stranded or unstranded
        if (options::Instance()->is_stranded()) {

            // we need to get strand information, use XS tag from mappers
            char xs = '.';
            uint8_t* ptr = bam_aux_get(read, "XS");
            if (ptr) {
                char src_strand_char = bam_aux2A(ptr);
                if (src_strand_char == '-') {
                    xs = '-';
                } else if (src_strand_char == '+') {
                    xs = '+';
                }
            }

            if (options::Instance()->get_strand_type() == options::unknown) {

                if (xs == '+') { // we have a + junction, discourage -
                    rpos left = read->core.pos + 1;
                    if (left <= right_border_rev) { // overlapping to -
                        discourage_minus = true;
                    }
                    if (left <= right_border_fwd) { // overlapping to +
                        evidence_plus = true;
                    }
                } else if (xs == '-') { // we have a - junction, discourage +
                    rpos left = read->core.pos + 1;
                    if (left <= right_border_rev) { // overlapping to -
                        evidence_minus = true;
                    }
                    if (left <= right_border_fwd) { // overlapping to +
                        discourage_plus = true;
                    }
                } else if (xs == '.') {
                    if (has_intron) {
                        logger::Instance()->warning("No strand information on spliced read " + std::string(bam_get_qname(read)) + ". Add XS tag, specify library or turn off stranded option. Read was skipped.\n");
                        continue;
                    }
                    // try to resurrect it: assign the strand of the only block the read overlaps
                    rpos left = read->core.pos + 1;
                    if ( left <= right_border_fwd && left > right_border_rev ) {
                        xs = '+';
                    } else if ( left > right_border_fwd && left <= right_border_rev ) {
                        xs = '-';
                    }
                }

                if (xs == '+') {
                    process_read(read, left_border_fwd, right_border_fwd, evidence_plus, discourage_plus, chrom_fwd, id_prefix, ex_start_fwd_it, ex_end_fwd_it, options::Instance()->get_input_to_id()[index], total_count);
                } else if (xs == '-') {
                    process_read(read, left_border_rev, right_border_rev, evidence_minus, discourage_minus, chrom_rev, id_prefix, ex_start_rev_it, ex_end_rev_it, options::Instance()->get_input_to_id()[index], total_count);
                } else {
                    // unresolved strand: add the read to both strands
                    process_read(read, left_border_fwd, right_border_fwd, evidence_plus, discourage_plus, chrom_fwd, id_prefix, ex_start_fwd_it, ex_end_fwd_it, options::Instance()->get_input_to_id()[index], total_count);
                    process_read(read, left_border_rev, right_border_rev, evidence_minus, discourage_minus, chrom_rev, id_prefix, ex_start_rev_it, ex_end_rev_it, options::Instance()->get_input_to_id()[index], total_count);
                }

            } else {

                char strand = '.';
                uint32_t sam_flag = read->core.flag;
                bool antisense_aln = sam_flag & BAM_FREVERSE; // BAM_FREVERSE == 16

                if (((sam_flag & BAM_FPAIRED) && (sam_flag & BAM_FREAD1)) || !(sam_flag & BAM_FPAIRED)) { // first-in-pair or single-end
                    switch(options::Instance()->get_strand_type()) {
                        case options::FF:
                        case options::FR:
                            (antisense_aln) ? strand = '-' : strand = '+';
                            break;
                        case options::RF:
                        case options::RR:
                            (antisense_aln) ? strand = '+' : strand = '-';
                            break;
                    }
                } else if ((sam_flag & BAM_FPAIRED) && (sam_flag & BAM_FREAD2)) { // second-in-pair read
                    switch(options::Instance()->get_strand_type()) {
                        case options::FF:
                        case options::RF:
                            (antisense_aln) ? strand = '-' : strand = '+';
                            break;
                        case options::FR:
                        case options::RR:
                            (antisense_aln) ?
strand = '+' : strand = '-'; break; } } if (strand == '.') { strand = xs; } if (strand == '+') { if (xs != '-') process_read(read, left_border_fwd, right_border_fwd, chrom_fwd, id_prefix, ex_start_fwd_it, ex_end_fwd_it, options::Instance()->get_input_to_id()[index], total_count); } else if (strand == '-'){ if (xs != '+') process_read(read, left_border_rev, right_border_rev, chrom_rev, id_prefix, ex_start_rev_it, ex_end_rev_it, options::Instance()->get_input_to_id()[index], total_count); } } } else { process_read(read, left_border_fwd, right_border_fwd, chrom_fwd, id_prefix, ex_start_fwd_it, ex_end_fwd_it, options::Instance()->get_input_to_id()[index], total_count); } } if (options::Instance()->is_stranded()) { //logger::Instance()->debug("Finish + " + std::to_string(evidence_plus) + " " + std::to_string(discourage_plus) + "\n"); //logger::Instance()->debug("Finish - " + std::to_string(evidence_minus) + " " + std::to_string(discourage_minus) + "\n"); if (evidence_plus || !discourage_plus) finish_block(chrom_fwd, left_border_fwd, right_border_fwd, ex_start_fwd_it, ex_end_fwd_it, total_count); if (evidence_minus || !discourage_minus) finish_block(chrom_rev, left_border_rev, right_border_rev, ex_start_rev_it, ex_end_rev_it, total_count); } else { finish_block(chrom_fwd, left_border_fwd, right_border_fwd, ex_start_fwd_it, ex_end_fwd_it, total_count); } if (options::Instance()->is_stranded()) { reset_reads(chrom_fwd); reset_reads(chrom_rev); } else { reset_reads(chrom_fwd); } // destroy everything again for(std::vector<bam_file_>::iterator it = fileh.begin(); it != fileh.end(); it++) { hts_idx_destroy(it->idx); hts_itr_destroy(it->iter); bam_hdr_destroy(it->hdr); hts_close(it->file); bam_destroy1(it->read); } // logger::Instance()->error("Chr "+chrom_name+"\n"); // // logger::Instance()->info("atoms " + std::to_string(chrom_fwd->atoms.size()) + "\n"); // for (greader_list<raw_atom>::iterator it = chrom_fwd->atoms.begin(); it != chrom_fwd->atoms.end(); ++it) { // logger::Instance()->info("RAF " + std::to_string( (long) &*it ) + " " + it->to_string() + "\n"); // } // // logger::Instance()->info("atoms " + std::to_string(chrom_rev->atoms.size()) + "\n"); // for (greader_list<raw_atom>::iterator it = chrom_rev->atoms.begin(); it != chrom_rev->atoms.end(); ++it) { // logger::Instance()->info("RAR " + std::to_string( (long) &*it ) + " " + it->to_string() + "\n"); // } // logger::Instance()->error("Chr Fwd "+chrom_name+"\n"); // logger::Instance()->error("fossil_exons " + std::to_string(chrom_fwd->fossil_exons.size()) + "\n"); // logger::Instance()->error("fixed_exon_starts " + std::to_string(chrom_fwd->fixed_exon_starts->size()) + "\n"); // logger::Instance()->error("fixed_exon_ends " + std::to_string(chrom_fwd->fixed_exon_ends->size()) + "\n"); // // logger::Instance()->error("chrom_fragments " + std::to_string(chrom_fwd->chrom_fragments.size()) + "\n"); // logger::Instance()->error("atoms " + std::to_string(chrom_fwd->atoms.size()) + "\n"); // // logger::Instance()->error("reads " + std::to_string(chrom_fwd->reads.size()) + "\n"); // long sum_reads = 0; // for (double_deque<read_collection>::iterator it = chrom_fwd->reads.begin(); it != chrom_fwd->reads.end(); ++it ) { // sum_reads += (*it)->size(); // } // logger::Instance()->error("reads containing " + std::to_string(sum_reads) + "\n"); // // logger::Instance()->error("tmps " + std::to_string(chrom_fwd->read_queue.size()) +" "+ std::to_string(chrom_fwd->interval_queue.size())+" "+ std::to_string(chrom_fwd->splice_queue.size())+ "\n"); // // 
logger::Instance()->error("Chr Rev "+chrom_name+"\n"); // logger::Instance()->error("fossil_exons " + std::to_string(chrom_rev->fossil_exons.size()) + "\n"); // logger::Instance()->error("fixed_exon_starts " + std::to_string(chrom_rev->fixed_exon_starts->size()) + "\n"); // // logger::Instance()->error("fixed_exon_ends " + std::to_string(chrom_rev->fixed_exon_ends->size()) + "\n"); // // logger::Instance()->error("chrom_fragments " + std::to_string(chrom_rev->chrom_fragments.size()) + "\n"); // logger::Instance()->error("atoms " + std::to_string(chrom_rev->atoms.size()) + "\n"); // // logger::Instance()->error("reads " + std::to_string(chrom_rev->reads.size()) + "\n"); // sum_reads = 0; // for (double_deque<read_collection>::iterator it = chrom_rev->reads.begin(); it != chrom_rev->reads.end(); ++it ) { // sum_reads += (*it)->size(); // } // logger::Instance()->error("reads containing " + std::to_string(sum_reads) + "\n"); // // logger::Instance()->error("tmps " + std::to_string(chrom_rev->read_queue.size()) +" "+ std::to_string(chrom_rev->interval_queue.size())+" "+ std::to_string(chrom_rev->splice_queue.size())+ "\n"); } void bam_reader::process_read( bam1_t *bread, rpos &left_border, rpos &right_border, chromosome* chrom, const std::string id_prefix, r_border_set<rpos>::iterator &ex_start_it, r_border_set<rpos>::iterator &ex_end_it, int index, unsigned int total_inputs) { bool evidence = true; bool discourage = false; process_read( bread, left_border, right_border, evidence, discourage, chrom, id_prefix, ex_start_it, ex_end_it, index, total_inputs); } void bam_reader::process_read( bam1_t *bread, rpos &left_border, rpos &right_border, bool &evidence, bool &discourage, chromosome* chrom, const std::string id_prefix, r_border_set<rpos>::iterator &ex_start_it, r_border_set<rpos>::iterator &ex_end_it, int index, unsigned int total_inputs) { // logger::Instance()->error("Process: ID " + std::to_string(index) + "\n"); rpos left, right; greader_list<interval> junctions; greader_list<std::pair<rpos, rpos> > splices; rread* prev = NULL; if (!chrom->read_queue.empty()) { prev = &*chrom->read_queue.rbegin(); } rread* new_read = parse_read(bread, chrom, junctions, splices, left, right, id_prefix, ex_start_it, ex_end_it, index); new_read->add_count(index); #ifdef ALLOW_DEBUG logger::Instance()->debug("Add read "); if (new_read->id_set && ! (bread->core.flag & BAM_FSECONDARY)) logger::Instance()->debug(new_read->ids.ref()[index].front()); logger::Instance()->debug( " " + std::to_string(left) + " " + std::to_string(right) + " BORDER " + std::to_string(right_border) + ".\n"); #endif // heuristic early merge down of identical reads! if (prev != NULL && new_read->left_limit == prev->left_limit && new_read->right_limit == prev->right_limit) { // boundaries are the same, test intervals for compacting! 
bool merge = true; greader_list<interval>::reverse_iterator cur_it = junctions.rbegin(); greader_list<interval>::reverse_iterator prev_it = chrom->interval_queue.rbegin(); for (; cur_it != junctions.rend(); ++cur_it, ++prev_it) { if ( cur_it->left != prev_it->left || cur_it->right != prev_it->right ) { merge = false; break; } } if( merge ) { for (greader_list<std::pair<rpos, rpos> >::iterator is = splices.begin(); is != splices.end(); ++is) { chrom->splice_queue[*is][index].first += 1; chrom->splice_queue[*is][index].second = chrom->splice_queue[*is][index].second || new_read->primary; } prev->add_count(index); prev->primary = prev->primary || new_read->primary; if (new_read->id_set) { prev->add_id( new_read->ids.ref()[index].front(), index); } chrom->read_queue.pop_back(); return; } } rpos ejd = options::Instance()->get_exon_join_distance(); // we have a new region, so start new with net read if (left_border == right_border && right_border == 0) { left_border = left; right_border = right; } if (left <= right_border + 1 + ejd && right >= right_border ) { right_border = right; #ifdef ALLOW_DEBUG logger::Instance()->debug("Added read extends border.\n"); #endif } else if (left > right_border + 1 + ejd) { // not overlapping on chromosome, so we need to finish up so far collected data rread last = *new_read; chrom->read_queue.pop_back(); if (evidence || !discourage) { // we have evidence for this strand or it was at least not discouraged finish_block(chrom, left_border, right_border, ex_start_it, ex_end_it, total_inputs); } else { // just clean up and skip! chrom->read_queue.clear(); chrom->interval_queue.clear(); chrom->splice_queue.clear(); } evidence = false; discourage = false; rread* re_add = chrom->addQueuedRead(last); for (greader_list<interval>::iterator it = junctions.begin(); it != junctions.end(); ++it) { it->parent = re_add; } left_border = left; right_border = right; } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("No border manipulation.\n"); #endif } for (greader_list<std::pair<rpos, rpos> >::iterator is = splices.begin(); is != splices.end(); ++is) { chrom->splice_queue[*is][index].first += 1; chrom->splice_queue[*is][index].second = chrom->splice_queue[*is][index].second || new_read->primary; logger::Instance()->debug("Add Splice "+ std::to_string(is->first) + " - " + std::to_string(is->second) +"\n"); } std::copy(junctions.begin(), junctions.end(), std::back_inserter(chrom->interval_queue)); } rread* bam_reader::parse_read( bam1_t *bread, chromosome* chrom, greader_list<interval> &junctions, greader_list<std::pair<rpos, rpos> > &splices, rpos &left, rpos &right, const std::string id_prefix, r_border_set<rpos>::iterator &ex_start_it, r_border_set<rpos>::iterator &ex_end_it, int index) { rread* new_read; ++chrom->read_count; // && bread->core.flag & BAM_FPROPER_PAIR Why only proper pair?? if ( bread->core.flag & BAM_FPAIRED && bread->core.tid == bread->core.mtid && !(bread->core.flag & BAM_FSECONDARY)) { // this is a proper paired read on same ref // add this read to the chromosome std::string id(bam_get_qname(bread)); id.append("_").append(id_prefix); // new read object and add it to chromosome (we cannot have any name twice) new_read = chrom->addQueuedRead(rread(id, index)); if (bread->core.mpos > bread->core.pos) { // count only left pair! 
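            // (each fragment is counted once: for proper pairs only the
            // leftmost mate increments frag_count, the mate is skipped)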
++chrom->frag_count; } } else { new_read = chrom->addQueuedRead(rread()); ++chrom->frag_count; } new_read->primary = !(bread->core.flag & BAM_FSECONDARY); bam1_core_t *rcore = &bread->core; left = rcore->pos + 1; // original is 0 based, we make this 1 based, much nicer rpos start = left; rpos offset = 0; // stepwise averaging chrom->average_read_lenghts += (rcore->l_qseq - chrom->average_read_lenghts) / chrom->read_count; // advance the known exon iterators // we use those to keep track of new split evidence to avoid double definitions ex_start_it = chrom->fixed_exon_starts.ref().lower_bound(ex_start_it, chrom->fixed_exon_starts.ref().end(), left); ex_end_it = chrom->fixed_exon_ends.ref().lower_bound(ex_end_it, chrom->fixed_exon_ends.ref().end(), left); std::deque<rpos> new_ends; std::deque<rpos> new_starts; greader_list<interval> new_junctions; // go over the read and it's split info // M = 0 I = 1 D = 2 N = 3 S = 4 H = 5 P = 6 uint32_t* cigar = bam_get_cigar(bread); for(int i = 0; i < rcore->n_cigar; i++) { const int op = bam_cigar_op(cigar[i]); const int ol = bam_cigar_oplen(cigar[i]); #ifdef ALLOW_DEBUG logger::Instance()->debug("Operator " + std::to_string(op) + " " + std::to_string(ol) + ".\n"); #endif // do we have an N? if (op == BAM_CREF_SKIP) { rpos end = start + offset; new_junctions.push_back(interval(new_read) ); interval* interv = &new_junctions.back(); interv->left = start; interv->right = end-1; #ifdef ALLOW_DEBUG logger::Instance()->debug("Add Interval1 " + std::to_string(start) + " " + std::to_string(end-1)+".\n"); #endif start = end + ol; offset = 0; } else if (op == BAM_CMATCH || op == BAM_CDEL || op == BAM_CEQUAL || op == BAM_CDIFF) { offset = offset + ol; } } // last interval rpos end = start + offset; new_junctions.push_back(interval(new_read) ); interval* interv = &new_junctions.back(); interv->left = start; interv->right = end-1; // THIS is including obviously // filter out splits sites that are too small to count greader_list<interval>::iterator nj = new_junctions.begin(); greader_list<interval>::iterator njn = nj; ++njn; while (njn != new_junctions.end()) { if (njn->left - nj->right < options::Instance()->get_min_intron_length()) { njn->left = nj->left; } else { junctions.push_back(*nj); new_read->add_length(&junctions.back()); splices.push_back( std::make_pair(nj->right+1, njn->left-1)); new_ends.push_back(nj->right); new_starts.push_back(njn->left); } nj = njn; ++njn; } junctions.push_back(*nj); new_read->add_length(&junctions.back()); // this is over filtering bool no_start = false; if (junctions.front().right - junctions.front().left + 1 < options::Instance()->get_min_junction_anchor() && junctions.size() > 1) { no_start = true; splices.pop_front(); } bool no_end = false; if (junctions.back().right - junctions.back().left + 1 < options::Instance()->get_min_junction_anchor() && junctions.size() > 1) { no_end = true; if (!splices.empty()) splices.pop_back(); } for (std::deque<rpos>::iterator it = new_ends.begin(); it != new_ends.end(); ++it) { bool drop = ( it == new_ends.begin() && no_start ) || ( it == new_ends.end()-1 && no_end ); add_known_end(chrom, *it, ex_end_it, !drop, index); } for (std::deque<rpos>::iterator it = new_starts.begin(); it != new_starts.end(); ++it) { bool drop = ( it == new_starts.begin() && no_start ) || ( it == new_starts.end()-1 && no_end ); add_known_start(chrom, *it, ex_start_it, !drop, index); } new_read->set_left_limit(junctions.front().left); new_read->set_right_limit(junctions.back().right); #ifdef ALLOW_DEBUG 
logger::Instance()->debug("Add Interval2 " + std::to_string(start) + " " + std::to_string(end-1)+".\n"); #endif right = end-1; return new_read; } void bam_reader::add_known_start( chromosome* chrom, const rpos pos, r_border_set<rpos>::iterator &ex_start_it, bool evidence, int index) { r_border_set<rpos>::iterator lower = chrom->fixed_exon_starts.ref().lower_bound(ex_start_it, chrom->fixed_exon_starts.ref().end(), pos); unsigned int max_extend = options::Instance()->get_max_pos_extend(); if (lower != chrom->fixed_exon_starts.ref().end()) { if (pos + max_extend >= *lower && pos <= *lower + max_extend) { return; // this already is a known site! } } if (lower != chrom->fixed_exon_starts.ref().begin()) { // only test previous element if it exists! --lower; if (pos + max_extend >= *lower && pos <= *lower + max_extend) { return; // this already is a known site! } } // this were all possible hit // we found something entirely new! chrom->known_starts.push_back(chromosome::raw_position(pos, evidence, index)); } void bam_reader::add_known_end( chromosome* chrom, const rpos pos, r_border_set<rpos>::iterator &ex_end_it, bool evidence, int index) { r_border_set<rpos>::iterator lower = chrom->fixed_exon_ends.ref().lower_bound(ex_end_it, chrom->fixed_exon_ends.ref().end(), pos); unsigned int max_extend = options::Instance()->get_max_pos_extend(); if (lower != chrom->fixed_exon_ends.ref().end()) { if (pos + max_extend >= *lower && pos <= *lower + max_extend) { return; // this already is a known site! } } if (lower != chrom->fixed_exon_ends.ref().begin()) { // only test previous element if it exists! --lower; if (pos + max_extend >= *lower && pos <= *lower + max_extend) { return; // this already is a known site! } } // this were all possible hit // we found something entirely new! 
    chrom->known_ends.push_back(chromosome::raw_position(pos, evidence, index));
}

void bam_reader::finish_block(chromosome* chrom, rpos &left, rpos &right, r_border_set<rpos>::iterator &ex_start_it, r_border_set<rpos>::iterator &ex_end_it, unsigned int total_inputs) {

    #ifdef ALLOW_DEBUG
    logger::Instance()->debug("Finish block " + std::to_string(left) + " - " + std::to_string(right) + ".\n");
    #endif

    // create overlapping regions from new intervals
    greader_list<std::pair<rpos, rpos> > raw;
    create_raw_exons(chrom, raw, right);

    // safety condition for very low read count input
    if (raw.empty()) {
        return;
    }

    // we update the known connected areas
    connected* conn = &*insert_fragment(chrom, left, right);

    // if we have new data, cluster for best support!
    // this filters out unsupported clusters by low coverage
    greader_list<rpos> clustered_starts;
    greader_list<chromosome::raw_position >::iterator end_starts = cluster(chrom->known_starts, clustered_starts, right, total_inputs);
    greader_list<rpos> clustered_ends;
    greader_list<chromosome::raw_position >::iterator end_ends = cluster(chrom->known_ends, clustered_ends, right, total_inputs);

    // reset known lists for next round
    chrom->known_starts.erase(chrom->known_starts.begin(), end_starts);
    chrom->known_ends.erase(chrom->known_ends.begin(), end_ends);

    std::map< std::pair<rpos, rpos>, bool > junction_validation;
    filter_clusters(chrom, clustered_starts, clustered_ends, junction_validation, total_inputs);

    // the starts and ends should reflect the clustered starts and ends!
    solidify_raw_exons_ends(chrom, raw, clustered_starts, clustered_ends);

    // if (!conn->guided && options::Instance()->is_trimming()) trim_exons_1(chrom, raw, clustered_starts, clustered_ends);
    if (!conn->guided && options::Instance()->is_trimming()) trim_exons_2(chrom, raw, clustered_starts, clustered_ends);

    // combine them with others
    update_existing_exons(conn, chrom, raw, left, right);

    // split exons on clustered positions
    split_exons(conn, chrom, clustered_starts, left, right, 1);
    split_exons(conn, chrom, clustered_ends, left, right, 0);

    // now look at new separators
    add_to_fixed_clusters(chrom->fixed_exon_starts, clustered_starts, ex_start_it);
    add_to_fixed_clusters(chrom->fixed_exon_ends, clustered_ends, ex_end_it);

    // now assign new reads
    assign_reads(conn, chrom);

    filter_outer_read_junctions(chrom, junction_validation, total_inputs);

    // reduce down to minimal amount of atoms
    reduce_atoms(conn, chrom);

    chrom->read_queue.clear();
    chrom->interval_queue.clear();
    chrom->splice_queue.clear();

    mark_or_reduce_paired_atoms(conn, chrom , conn->atoms.ref().begin(), conn->atoms.ref().end());

    reduce_reads(conn);

    if (!conn->guided) filter_bins(conn, chrom);
}

// ######### CLUSTERING #########

greader_list<chromosome::raw_position >::iterator bam_reader::cluster( greader_list<chromosome::raw_position > &in, greader_list<rpos> &out, rpos &right, int total_inputs) {

    #ifdef ALLOW_DEBUG
    logger::Instance()->debug("Cluster Group.\n");
    #endif

    // nothing to do here
    if (in.empty()) {
        return in.end();
    }

    // the list must be sorted so groups can be collected in order
    in.sort();

    // get option
    const unsigned int extend = options::Instance()->get_max_pos_extend();

    // init basic values
    unsigned int count = 1;
    greader_list<chromosome::raw_position >::iterator it = in.begin();
    rpos current = it->position;
    bool evidence = it->evidence;
    std::set<int> indices;
    indices.insert(it->index);
    ++it;
    std::vector<std::pair<rpos,unsigned int> > basic_grouping;

    // now search for connected blocks
    for (; it != in.end() && it->position <= right; ++it) {
        if (it->position == current) { // repeated element
            ++count;
            evidence = evidence || it->evidence;
            indices.insert(it->index);
        } else { // different element
            #ifdef ALLOW_DEBUG
            logger::Instance()->debug("Add cluster candidate group: "+ std::to_string(current) + ", " + std::to_string(count)+".\n");
            #endif
            basic_grouping.push_back(std::make_pair(current, count));

            // do we combine into a real cluster?
if (current + extend < it->position ) { // *it > current by sorting // end of cluster found, process values in temp if (evidence && ( indices.size() * 100 / total_inputs >= options::Instance()->get_vote_percentage_low() ) ) { DKMeans(basic_grouping, out, extend); } basic_grouping.clear(); evidence = false; indices.clear(); } current = it->position; count = 1; evidence = evidence || it->evidence; indices.insert(it->index); } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Add cluster candidate group: "+ std::to_string(current) + ", " + std::to_string(count)+".\n"); #endif basic_grouping.push_back(std::make_pair(current, count)); if (evidence && ( indices.size() * 100 / total_inputs >= 50 ) ) { DKMeans(basic_grouping, out, extend); } return it; } void bam_reader::cluster_clean(greader_list<std::pair<rpos, rcount> > &in, greader_list<rpos> &out) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Cluster Group.\n"); #endif // nothing to do here if (in.empty()) { return; } // get option const unsigned int extend = options::Instance()->get_max_pos_extend(); // init basic values greader_list<std::pair<rpos, rcount> >::iterator it = in.begin(); rpos current = it->first; rcount count = it->second; ++it; std::vector<std::pair<rpos,unsigned int> > basic_grouping; // now search for connected blocks for (; it != in.end(); ++it) { if (it->first == current) { // repeated element count += it->second; } else { // different element #ifdef ALLOW_DEBUG logger::Instance()->debug("Add cluster candidate group: "+ std::to_string(current) + ", " + std::to_string(count)+".\n"); #endif basic_grouping.push_back(std::make_pair(current, count)); // do we have combine to real cluster? if (current + extend < it->first ) { // *it > current by sorting // end of cluster found, process values in temp DKMeans(basic_grouping, out, extend); basic_grouping.clear(); } current = it->first; count = it->second; } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Add cluster candidate group: "+ std::to_string(current) + ", " + std::to_string(count)+".\n"); #endif basic_grouping.push_back(std::make_pair(current, count)); DKMeans(basic_grouping, out, extend); } void bam_reader::DKMeans( std::vector<std::pair<rpos,unsigned int> > &in, greader_list<rpos> &out, const unsigned int &extend) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Inner Custer Group.\n"); #endif // filter low coverage! 
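    // DKMeans sketch: a weighted 1-D k-means solved by dynamic programming.
    // costs[k*size + i] holds the minimal weighted squared error of grouping
    // positions 0..i into k+1 clusters whose span never exceeds 2*extend;
    // back[] stores the first index of the last cluster for the traceback.
    // Before the DP runs, candidate groups whose summed support falls below
    // the junction-coverage threshold are dropped entirely.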
unsigned int count = 0; for (std::vector<std::pair<rpos,unsigned int> >::iterator in_it = in.begin(); in_it != in.end(); ++in_it) { count += in_it->second; } if (count < options::Instance()->get_min_junction_coverage()) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Cluster filtered " + std::to_string(in[0].first)+".\n"); #endif return; } greader_list<rpos>::iterator end_it = out.end(); const unsigned int size = in.size(); if (in.size() == 1) { out.push_back(in[0].first); #ifdef ALLOW_DEBUG logger::Instance()->debug("Clustered Position added " + std::to_string(in[0].first)+".\n"); #endif return; } //################ FORWARD ################ // 2D dynamic lookup table arrays double *costs = new double[size*size]; unsigned int *back = new unsigned int[size*size]; // we use unnormalized weighted least square // d = sum w_i (x_i - mu)^2 // temporary mean values double mean_x1; unsigned int n_x1; costs[0] = 0.0; back[0] = 1; // init first row without backvalue k=0 mean_x1 = in[0].first; n_x1 = in[0].second; for(unsigned int i = 1; i < size; ++i) { back[i] = 0; if (in[i].first > in[0].first + 2 * extend ) { // cannot make this extend, add special cancel value costs[i] = -1.0; // all negative will be treated as invalid } else { double new_mean_x1 = mean_x1 + in[i].second / (double) (n_x1 + in[i].second) * (in[i].first - mean_x1); costs[i ] = costs[(i-1)] + in[i].second * (in[i].first - mean_x1)* (in[i].first - new_mean_x1); mean_x1 = new_mean_x1; n_x1 += in[i].second; } } // now update first diagonal as always 0 // for(unsigned int k = 1; k < size; ++k) { // costs[k*size+k] = 0; // } for(unsigned int k = 1; k < size; ++k) { for(unsigned int i = k+1; i < size; ++i) { double d = 0.0; double mean_xj = 0.0; unsigned int n_xj = 0; int min = -1; double min_cost = -1; for(unsigned int j = i; j > k; --j) { if (in[i].first > in[j].first + 2 * extend ) { break; } double new_mean_xj = mean_xj + in[j].second / (double) (n_xj + in[j].second) * (in[j].first - mean_xj); d = d + in[j].second * (in[j].first - mean_xj)* (in[j].first - new_mean_xj); mean_xj = new_mean_xj; n_xj += in[j].second; if (costs[(k-1) * size + j -1] >= 0) { if (min == -1) { min = j; min_cost = d + costs[(k-1) * size + j -1]; } else { if( d + costs[(k-1) * size + j -1] < min_cost) { min = j; min_cost = d + costs[(k-1) * size + j -1]; } } } } if (min < 0) { // no minimum found, invalid state! 
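                // (a negative cost encodes "no feasible partition": some point
                // cannot be reached without exceeding the 2*extend cluster
                // span, so this (k, i) cell is marked invalid)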
costs[ k * size + i] = -1; } else { costs[ k * size + i] = min_cost; back[ k * size + i ] = min; } } } // logger::Instance()->debug("Matrix \n"); // for(unsigned int k = 0; k < size; ++k) { // for(unsigned int i = k; i < size; ++i) { // logger::Instance()->debug(" " + std::to_string(costs[ k * size + i]) + ";" + std::to_string(back[ k * size + i])); // } // logger::Instance()->debug("\n"); // } //################ Backwards ################ greader_list<std::pair<rpos, rcount> > results; // find smallest number of clusters k that contains all points unsigned int end = 0; for (; end < size; ++end ) { if (costs[ end * size + (size-1)] > 0) { break; } } unsigned int range_start; unsigned int range_end = size - 1; for ( int k = end; k>=0; --k) { range_start = back[ k * size + range_end]; double mean = 0.0 ; unsigned int n = 0; for (unsigned int i = range_start; i <= range_end; ++i) { mean = mean + in[i].second / (double) (n + in[i].second) * (in[i].first - mean); n += in[i].second; } if (n < options::Instance()->get_min_junction_coverage()) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Clustered Position below threshold " + std::to_string(round(mean))+".\n"); #endif } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Clustered Position Queue " + std::to_string(round(mean)) +" , " + std::to_string(n)+".\n"); #endif results.push_front(std::make_pair(round(mean), n)); //end_it = out.insert(end_it, round(mean)); } range_end = range_start - 1; } // clean up delete [] costs; delete [] back; cluster_clean(results, out); } void bam_reader::add_to_fixed_clusters(lazy<r_border_set<rpos> > &fixed, greader_list<rpos> &new_clust, r_border_set<rpos>::iterator &pos_mark) { // for deque, linear join to new list is fastest, change if you switch type of r_border_set if (fixed.ref().empty()) { // first round, make this fast // unfortunately we need to copy and not move, as new is read individually // should not cost too much time though, when in doubt profile std::copy(new_clust.begin(), new_clust.end(), std::back_inserter(fixed.ref()) ); pos_mark = fixed.ref().end(); #ifdef ALLOW_DEBUG logger::Instance()->debug("Copy full.\n"); #endif } else { lazy<r_border_set<rpos> > new_list; rpos border; if (pos_mark == fixed.ref().end()) { border = *(--pos_mark); } else { border = *pos_mark; } // move through both lists in parallel and copy/ move to new list r_border_set<rpos>::iterator i1 = fixed.ref().begin(); greader_list<rpos>::iterator i2 = new_clust.begin(); while (true) { if (i1 == fixed.ref().end()) { std::copy(i2, new_clust.end(), std::back_inserter(new_list.ref()) ); break; } if (i2 == new_clust.end()) { _MOVE_RANGE(i1, fixed.ref().end(), std::back_inserter(new_list.ref())); break; } if (*i1 < *i2) { new_list.ref().push_back(_MOVE(*i1)); ++i1; } else { new_list.ref().push_back(_MOVE(*i2)); ++i2; } } fixed = new_list; pos_mark = fixed.ref().lower_bound(fixed.ref().begin(), fixed.ref().end(), border); } } void bam_reader::filter_clusters(chromosome* chrom, greader_list<rpos> &starts, greader_list<rpos> &ends, std::map< std::pair<rpos, rpos>, bool > &junction_validation, unsigned int total_inputs) { // logger::Instance()->debug("Filter Clusters.\n"); if ( ( starts.empty() && chrom->fixed_exon_starts->empty() ) || ( ends.empty() && chrom->fixed_exon_ends->empty()) ) { // nothing to do return; } struct s_elem { s_elem() : total_count(0), sources(0), primary(false) {} s_elem(rcount tc, unsigned int s) : total_count(tc), sources(s) {} void add(rcount tc, unsigned int s, bool p) { total_count += tc; if (s > sources) 
{ sources = s; } primary = primary || p; }; rcount total_count; unsigned int sources; bool primary; }; // position: splices there: total_count, from x sources gmap<rpos, std::map< std::pair<rpos, rpos>, s_elem > > s_map; gmap<rpos, std::map< std::pair<rpos, rpos>, s_elem > > e_map; for(std::map< std::pair<rpos, rpos>, std::map< int, std::pair<unsigned int, bool > > >::iterator sci = chrom->splice_queue.begin(); sci != chrom->splice_queue.end(); ++sci) { rpos sf = sci->first.first - 1; rpos st = sci->first.second + 1; rpos s_target, e_target; if(starts.empty() ) { s_target = 0; } else { greader_list<rpos>::iterator lbs = std::lower_bound(starts.begin(), starts.end(), st); // jump end is exon start if(lbs == starts.end()) { // we hit the end, can only be the last one then --lbs; s_target = *lbs; } else { s_target = *lbs; if (lbs != starts.begin()) { greader_list<rpos>::iterator lbs_p = lbs; --lbs_p; //logger::Instance()->debug("Switch Test "+ std::to_string(*lbs_p) + " - " + std::to_string(*lbs) + " : " + std::to_string(st) + ".\n"); if (st - *lbs_p < *lbs - st) { s_target = *lbs_p; } } } } if ( (st > s_target && st - s_target > options::Instance()->get_max_pos_extend()) || (st < s_target && s_target - st > options::Instance()->get_max_pos_extend()) ) { if (!chrom->fixed_exon_starts->empty()) { // not found look at fixed greader_list<rpos>::iterator lbsf = std::lower_bound(chrom->fixed_exon_starts->begin(), chrom->fixed_exon_starts->end(), st); if(lbsf == chrom->fixed_exon_starts->end()) { // we hit the end, can only be the last one then --lbsf; s_target = *lbsf; } else { s_target = *lbsf; if (lbsf != chrom->fixed_exon_starts->begin()) { greader_list<rpos>::iterator lbs_p = lbsf; --lbs_p; if (st - *lbs_p < *lbsf - st) { s_target = *lbs_p; } } } } if ( (st > s_target && st - s_target > options::Instance()->get_max_pos_extend()) || (st < s_target && s_target - st > options::Instance()->get_max_pos_extend()) ) { junction_validation[std::make_pair(sf + 1 , st - 1)] = false; #ifdef ALLOW_DEBUG logger::Instance()->debug("Junction False Init Start "+ std::to_string(sf) + " - " + std::to_string(st) + ".\n"); #endif continue; } } if(ends.empty() ) { e_target = 0; } else { greader_list<rpos>::iterator lbe = std::lower_bound(ends.begin(), ends.end(), sf); // jump end is exon start if(lbe == ends.end()) { // we hit the end, can only be the last one then --lbe; e_target = *lbe; } else { e_target = *lbe; if (lbe != ends.begin()) { greader_list<rpos>::iterator lbe_p = lbe; --lbe_p; //logger::Instance()->debug("Switch Test "+ std::to_string(*lbe_p) + " - " + std::to_string(*lbe) + " : " + std::to_string(sf) + ".\n"); if (sf - *lbe_p < *lbe - sf) { e_target = *lbe_p; } } } } if ( (sf > e_target && sf - e_target > options::Instance()->get_max_pos_extend()) || (sf < e_target && e_target - sf > options::Instance()->get_max_pos_extend()) ) { if (! 
chrom->fixed_exon_ends->empty() ) { greader_list<rpos>::iterator lbef = std::lower_bound(chrom->fixed_exon_ends->begin(), chrom->fixed_exon_ends->end(), sf); // jump end is exon start if(lbef == chrom->fixed_exon_ends->end()) { // we hit the end, can only be the last one then --lbef; e_target = *lbef; } else { e_target = *lbef; if (lbef != chrom->fixed_exon_ends->begin()) { greader_list<rpos>::iterator lbe_p = lbef; --lbe_p; //logger::Instance()->debug("Switch Test "+ std::to_string(*lbe_p) + " - " + std::to_string(*lbef) + " : " + std::to_string(sf) + ".\n"); if (sf - *lbe_p < *lbef - sf) { e_target = *lbe_p; } } } } if ( (sf > e_target && sf - e_target > options::Instance()->get_max_pos_extend()) || (sf < e_target && e_target - sf > options::Instance()->get_max_pos_extend()) ) { junction_validation[std::make_pair(sf + 1, st - 1)] = false; #ifdef ALLOW_DEBUG logger::Instance()->debug("Junction False Init End "+ std::to_string(sf) + " - " + std::to_string(st) + ".\n"); #endif continue; } } rcount total_count = 0; rcount primary = false; for (std::map< int, std::pair<unsigned int, bool > >::iterator scci = sci->second.begin(); scci != sci->second.end(); scci++) { total_count += scci->second.first; primary = primary || scci->second.second; } unsigned int source_number = sci->second.size(); s_map[s_target][ std::make_pair(e_target + 1, s_target - 1) ].add(total_count, source_number, primary); e_map[e_target][ std::make_pair(e_target + 1, s_target - 1) ].add(total_count, source_number, primary); junction_validation[std::make_pair(e_target + 1, s_target - 1)] = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Junction Init "+ std::to_string(e_target + 1) + " - " + std::to_string(s_target - 1) +" from " + std::to_string(sf) + " - " + std::to_string(st) + ".\n"); #endif } const int max_del_count = 3; const int evidence_min = 10; const int evidence_factor = 8; for (gmap<rpos, std::map< std::pair<rpos, rpos>, s_elem > >::iterator mi = s_map.begin(); mi != s_map.end(); ++mi) { rcount max_count = 0; rcount sum = 0; bool primary = false; for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = mi->second.begin(); si != mi->second.end(); ++si) { s_elem& se = si->second; if (se.total_count > max_count) { max_count = se.total_count; } sum += se.total_count; primary = primary || se.primary; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Junction Start ------------ at " + std::to_string(mi->first) + "\n"); #endif for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = mi->second.begin(); si != mi->second.end(); ++si) { std::pair<rpos, rpos> pos = si->first; s_elem& se = si->second; if ( ( total_inputs > 1 && (se.sources < 2 || se.sources * 100 / total_inputs < options::Instance()->get_vote_percentage_low()) ) || (se.total_count <= max_del_count && max_count > evidence_min && max_count > se.total_count * evidence_factor) || sum < options::Instance()->get_min_junction_coverage() || !primary ){ junction_validation[std::make_pair(pos.first , pos.second)] = false; #ifdef ALLOW_DEBUG logger::Instance()->debug("Invalid Junction Start "+ std::to_string(pos.first) + " - " + std::to_string(pos.second) + " because " + "T" + std::to_string(total_inputs) + ":" + std::to_string(se.sources) + " S" + std::to_string(sum) + " M" + std::to_string(max_del_count) + ":" + std::to_string(se.total_count) + ".\n"); #endif } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Valid Junction Start "+ std::to_string(pos.first) + " - " + std::to_string(pos.second) + " because " + "T" + std::to_string(total_inputs) + 
":" + std::to_string(se.sources) + " S" + std::to_string(sum) + " M" + std::to_string(max_del_count) + ":" + std::to_string(se.total_count) + ".\n"); #endif } } } for (gmap<rpos, std::map< std::pair<rpos, rpos>, s_elem > >::iterator mi = e_map.begin(); mi != e_map.end(); ++mi) { rcount max_count = 0; rcount sum = 0; bool primary = true; for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = mi->second.begin(); si != mi->second.end(); ++si) { s_elem& se = si->second; if (se.total_count > max_count) { max_count = se.total_count; } sum += se.total_count; primary = primary || se.primary; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Junction End ------------ at " + std::to_string(mi->first) + "\n"); #endif for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = mi->second.begin(); si != mi->second.end(); ++si) { std::pair<rpos, rpos> pos = si->first; s_elem& se = si->second; if ( ( total_inputs > 1 && (se.sources < 2 || se.sources * 100 / total_inputs < options::Instance()->get_vote_percentage_low()) ) || (se.total_count <= max_del_count && max_count > evidence_min && max_count > se.total_count * evidence_factor) || sum < options::Instance()->get_min_junction_coverage() || !primary ){ junction_validation[std::make_pair(pos.first , pos.second)] = false; #ifdef ALLOW_DEBUG logger::Instance()->debug("Invalid Junction End "+ std::to_string(pos.first) + " - " + std::to_string(pos.second) + " because " + "T" + std::to_string(total_inputs) + ":" + std::to_string(se.sources) + " S" + std::to_string(sum) + " M" + std::to_string(max_del_count) + ":" + std::to_string(se.total_count) + ".\n"); #endif } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Valid Junction End "+ std::to_string(pos.first) + " - " + std::to_string(pos.second) + " because " + "T" + std::to_string(total_inputs) + ":" + std::to_string(se.sources) + " S" + std::to_string(sum) + " M" + std::to_string(max_del_count) + ":" + std::to_string(se.total_count) + ".\n"); #endif } } } for (greader_list<rpos>::iterator ri = starts.begin(); ri != starts.end(); ) { bool valid = false; for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = s_map[*ri].begin(); si != s_map[*ri].end() ; ++si) { valid = valid || junction_validation[std::make_pair(si->first.first, si->first.second)]; } if (valid) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Valid Start "+ std::to_string(*ri) + ".\n"); #endif ++ri; } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Invalid Start "+ std::to_string(*ri) + ".\n"); #endif ri = starts.erase(ri); } } for (greader_list<rpos>::iterator ri = ends.begin(); ri != ends.end(); ) { bool valid = false; for (std::map< std::pair<rpos, rpos>, s_elem >::iterator si = e_map[*ri].begin(); si != e_map[*ri].end() ; ++si) { valid = valid || junction_validation[std::make_pair(si->first.first, si->first.second)]; } if (valid) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Valid End "+ std::to_string(*ri) + ".\n"); #endif ++ri; } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Invalid End "+ std::to_string(*ri) + ".\n"); #endif ri = ends.erase(ri); } } } // ######### EXONS ######### void bam_reader::create_raw_exons( chromosome* chrom, greader_list<std::pair<rpos, rpos> > &out, rpos &right) { greader_list<std::pair<rpos, rpos> > pre_out; chrom->interval_queue.sort(); // in bam only reads where sorted //std::vector<rpos> internal; std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; //= std::priority_queue<rpos, 
//= std::priority_queue<rpos, std::vector<rpos>, std::greater<rpos> >(internal);
unsigned int count = 0;
bool in = false;
bool primary = false;
rpos start;

for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && (it)->right <= right; ++it) {
    // logger::Instance()->debug("Interval " + std::to_string((it)->left) + " " + std::to_string((it)->right)+".\n");

    count += it->parent->global_count; // increase count for current interval
    end_queue.push( std::make_pair((it)->right, it->parent->global_count) );

    while ((it)->left > end_queue.top().first + 1) {
        count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos
        if (in && count - it->parent->global_count < options::Instance()->get_min_coverage()) {
            // #ifdef ALLOW_DEBUG
            // logger::Instance()->debug("Connected area A " + std::to_string(start) + " " + std::to_string(end_queue.top().first) +".\n");
            // #endif
            if (primary) {
                pre_out.push_back(std::make_pair(start, end_queue.top().first));
            }
            in = false;
            primary = false;
        }
        end_queue.pop();
    }
    primary = primary || it->parent->primary;
    // logger::Instance()->debug("Check Primary " + std::to_string( it->parent->primary) + ".\n");

    // did we find a start?
    if (!in && count >= options::Instance()->get_min_coverage()) {
        in = true;
        start = (it)->left;
    }
}

if (in && count >= options::Instance()->get_min_coverage()) { // we need to find the end still!
    rpos last = end_queue.top().first;
    while ( count >= options::Instance()->get_min_coverage()) { // guaranteed to enter it once!
        count -= end_queue.top().second;
        last = end_queue.top().first;
        end_queue.pop();
    }
    // #ifdef ALLOW_DEBUG
    // logger::Instance()->debug("Connected area B " + std::to_string(start) + " " + std::to_string(chrom->interval_queue.back().right)+".\n");
    // #endif
    if (primary) {
        pre_out.push_back(std::make_pair(start, last));
    }
}

// this removes way too much! There actually are exons smaller than that!
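// Sketch of the join step further below (names s1/e1/s2/e2 are illustrative,
// not from the original): for consecutive raw regions a=[s1,e1] and b=[s2,e2]
// with s2 > e1,
//     s2 - e1 < exon_join_distance  ->  merge, b becomes [s1,e2]
//     otherwise                     ->  emit a and continue from b,
// so small coverage gaps do not break up an otherwise contiguous raw exon.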
// if (options::Instance()->get_min_junction_anchor() != 0) { // for(greader_list<std::pair<rpos, rpos> >::iterator it = out.begin(); it != out.end(); ) { // outs are disconnected by definition // if (it->second - it->first + 1 < options::Instance()->get_min_junction_anchor()) { // it = out.erase(it); // } else { // ++it; // } // } // } greader_list<std::pair<rpos, rpos> >::iterator it = pre_out.begin(); if( it == pre_out.end() ) { out = pre_out; // kinda useless return; } greader_list<std::pair<rpos, rpos> >::iterator next = it; ++next; while( next != pre_out.end() ) { // connect output if (next->first - it->second < options::Instance()->get_exon_join_distance()) { next->first = it->first; } else { out.push_back(*it); } it = next; ++next; } out.push_back(*it); } void bam_reader::trim_exons_3(chromosome* chrom, greader_list<std::pair<rpos, rpos> > &raw, greader_list<rpos> &starts, greader_list<rpos> &ends) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim IN 3.\n"); #endif greader_list<exon> split_raw; greader_list<rpos>::iterator s_it = starts.begin(); greader_list<rpos>::iterator e_it = ends.begin(); for(greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin(); raw_it != raw.end(); ++raw_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("RAW it " + std::to_string(raw_it->first) + " " + std::to_string(raw_it->second) +".\n"); #endif rpos pos = raw_it->first; bool trimmable_start = true; while (true) { if (s_it == starts.end() && e_it == ends.end()) { break; } rpos next; bool trimmable_end; bool next_trimmable_start; if (s_it == starts.end() || (e_it != ends.end() && *e_it + 1 < *s_it ) ) { next = *e_it + 1; trimmable_end = false; next_trimmable_start = true; if (next-1 > raw_it->second) { break; } ++e_it; } else if (e_it == ends.end() || *e_it + 1 > *s_it ) { next = *s_it; trimmable_end = true; next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; } else { next = *s_it; trimmable_end = false; next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; ++e_it; } split_raw.push_back(exon(pos, next-1, trimmable_start, trimmable_end)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim A " + std::to_string(pos) + " " + std::to_string(next-1) + " : " + std::to_string(trimmable_start) + " " + std::to_string(trimmable_end) +".\n"); #endif trimmable_start = next_trimmable_start; pos = next; } if (pos != raw_it->second) { split_raw.push_back(exon(pos, raw_it->second, trimmable_start, true)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim B " + std::to_string(pos) + " " + std::to_string(raw_it->second) + " : " + std::to_string(trimmable_start) + " 1.\n"); #endif } } const unsigned int window_size = 100; // here we have the raw exons bool modified = false; for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != split_raw.end(); ++sr_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test Trim " + std::to_string(sr_it->start) + " " + std::to_string(sr_it->end) +".\n"); #endif if ( sr_it->end - sr_it->start < 2 * window_size) continue; struct rg { rg(rpos l, rpos r, rcount c) : left(l), right(r), count(c) {} rpos left, right; rcount count; }; greader_list<rg > pre_out; std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; rpos ls = sr_it->start; rcount count = 0; for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= sr_it->end; ++it) { if (it->right < sr_it->start) { continue; } 
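// Constant-coverage run extraction (sweep line over the sorted intervals):
// end_queue is a min-heap of the ends of currently open intervals and count
// is the running read depth; every point where the depth changes closes the
// stretch that started at ls and records it in pre_out as an rg run.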
end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { if (end_queue.top().first >= ls) pre_out.push_back(rg(ls, end_queue.top().first, count)); // add this if it is a new change only! ls = end_queue.top().first + 1; // set next interval start to next base count -= end_queue.top().second; // update counts end_queue.pop(); // next in queue } if (ls < (it)->left) { pre_out.push_back(rg(ls, (it)->left - 1, count)); // add this if it is a new change only! ls = (it)->left; } count += it->parent->global_count; // increase count for current interval } if (ls <= sr_it->end) { pre_out.push_back(rg(ls, sr_it->end, count)); } greader_list<rg >::iterator start_chi = pre_out.begin(); greader_list<rg >::iterator middle_chi; greader_list<rg >::iterator end_chi = pre_out.begin(); rpos chi_left_length = 0; rcount chi_left = 0; rpos chi_right_length = 0; rcount chi_right = 0; for ( ; end_chi != pre_out.end() && chi_left_length < window_size; ++end_chi) { // fill first chi, left chi_left_length += end_chi->right - end_chi->left + 1; chi_left += end_chi->count * (end_chi->right - end_chi->left + 1); } if (end_chi == pre_out.end()) continue; middle_chi = end_chi; // first of chi_right, accordingly for ( ; end_chi != pre_out.end() && chi_right_length < window_size; ++end_chi) { // fill second chi, right chi_right_length += end_chi->right - end_chi->left + 1; chi_right += end_chi->count * (end_chi->right - end_chi->left + 1); } float best_start_ratio = 0; rpos best_start = 0; float best_end_ratio = 0; rpos best_end = 0; // now we loop over this bad boy to test all locations! while (true) { // this is easier to read with a break condition // check for current setting float avrg_left = chi_left / float(chi_left_length); float avrg_right = chi_right / float(chi_right_length); //logger::Instance()->debug("Avrg at " + std::to_string(middle_chi->left) + " : " + std::to_string(avrg_left) + " , " + std::to_string(avrg_right) + ".\n"); if (avrg_right > avrg_left && avrg_right - avrg_left > 25) { // possibly trim to source if (avrg_right * 0.1 > avrg_left) { // take this as source trim float opt_cost = avrg_right / avrg_left; if (opt_cost > best_start_ratio) { best_start_ratio = opt_cost; best_start = middle_chi->left; } } } else if (avrg_right < avrg_left && avrg_left - avrg_right > 25) { // possibly trim to drain if (avrg_left * 0.1 > avrg_right) { // take this as drain trim float opt_cost = avrg_left / avrg_right; if (opt_cost > best_end_ratio) { best_end_ratio = opt_cost; best_end = middle_chi->left; } } } // update to next position chi_left_length += middle_chi->right - middle_chi->left + 1; chi_left += middle_chi->count * (middle_chi->right - middle_chi->left + 1); chi_right_length -= middle_chi->right - middle_chi->left + 1; chi_right -= middle_chi->count * (middle_chi->right - middle_chi->left + 1); ++middle_chi; while ( chi_left_length - (start_chi->right - start_chi->left + 1) > window_size) { chi_left_length -= start_chi->right - start_chi->left + 1; chi_left -= start_chi->count * (start_chi->right - start_chi->left + 1); ++start_chi; } for ( ; end_chi != pre_out.end() && chi_right_length < window_size; ++end_chi) { // fill second chi, right chi_right_length += end_chi->right - end_chi->left + 1; chi_right += end_chi->count * (end_chi->right - end_chi->left + 1); } if (chi_right_length < window_size) { break; // get outa herem we can no longer do stuff! 
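// Reaching this point means the right-hand window can no longer be filled to
// window_size, so every candidate boundary has been scored; best_start and
// best_end now hold the strongest coverage up-step (candidate new start) and
// down-step (candidate new end) found within this exon.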
} } //if (!sr_it->fixed_start) best_start = 0; //if (!sr_it->fixed_end) best_end = 0; if(best_start < best_end) { // if no start is found, its 0 if (best_start) { // we have a start and an end! if (sr_it->fixed_start && sr_it->fixed_end) { // we cut this exon to a middle isle of coverage sr_it->start = best_start; sr_it->end = best_end; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed to Island " + std::to_string(best_start) + " , " + std::to_string(best_end) + ".\n"); #endif } } else { // just the end! if (sr_it->fixed_end) { // end is trimmable // cut to end sr_it->end = best_end; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed End to " + std::to_string(best_end) + ".\n"); #endif } else { sr_it = split_raw.insert(sr_it, exon(sr_it->start, best_end)); ++sr_it; sr_it->start = std::min(best_end + 10, sr_it->end); modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed Middle 2 " + std::to_string(best_end) + " , " + std::to_string(best_start) + ".\n"); #endif } } } else if(best_start > best_end) { // if no end is found, its 0 if (best_end) { // we have a start and an end! // we cut out the middle region sr_it = split_raw.insert(sr_it, exon(sr_it->start, best_end)); ++sr_it; sr_it->start = best_start; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed Middle " + std::to_string(best_end) + " , " + std::to_string(best_start) + ".\n"); #endif } else { // just the start! // cut to start if (sr_it->fixed_start) { // start is trimmable sr_it->start = best_start; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed Start to " + std::to_string(best_start) + ".\n"); #endif } else { sr_it = split_raw.insert(sr_it, exon(sr_it->start, best_start)); ++sr_it; sr_it->start = std::min(best_start + 10, sr_it->end); modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed Middle 3 " + std::to_string(best_end) + " , " + std::to_string(best_start) + ".\n"); #endif } } } } if (modified) { raw.clear(); for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != split_raw.end(); ++sr_it) { if (!raw.empty() && raw.back().second + 1 == sr_it->start) { raw.back().second = sr_it->end; } else { raw.push_back(std::make_pair(sr_it->start, sr_it->end)); } } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim OUT.\n"); #endif } void bam_reader::trim_exons_2(chromosome* chrom, greader_list<std::pair<rpos, rpos> > &raw, greader_list<rpos> &starts, greader_list<rpos> &ends) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim IN.\n"); #endif greader_list<exon> split_raw; greader_list<rpos>::iterator s_it = starts.begin(); greader_list<rpos>::iterator e_it = ends.begin(); for(greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin(); raw_it != raw.end(); ++raw_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("RAW it " + std::to_string(raw_it->first) + " " + std::to_string(raw_it->second) +".\n"); #endif rpos pos = raw_it->first; bool trimmable_start = true; while (true) { if (s_it == starts.end() && e_it == ends.end()) { break; } rpos next; bool trimmable_end; bool next_trimmable_start; if (s_it == starts.end() || (e_it != ends.end() && *e_it + 1 < *s_it ) ) { next = *e_it + 1; trimmable_end = false; next_trimmable_start = true; if (next-1 > raw_it->second) { break; } ++e_it; } else if (e_it == ends.end() || *e_it + 1 > *s_it ) { next = *s_it; trimmable_end = true; next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; } else { next = *s_it; trimmable_end = false; 
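// Remaining case *e_it + 1 == *s_it: a junction end directly abuts a junction
// start, so the cut point is fixed on both sides and cannot be trimmed.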
next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; ++e_it; } split_raw.push_back(exon(pos, next-1, trimmable_start, trimmable_end)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim A " + std::to_string(pos) + " " + std::to_string(next-1) + " : " + std::to_string(trimmable_start) + " " + std::to_string(trimmable_end) +".\n"); #endif trimmable_start = next_trimmable_start; pos = next; } if (pos != raw_it->second) { split_raw.push_back(exon(pos, raw_it->second, trimmable_start, true)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim B " + std::to_string(pos) + " " + std::to_string(raw_it->second) + " : " + std::to_string(trimmable_start) + " 1.\n"); #endif } } // here we have the raw exons bool modified = false; for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != split_raw.end(); ++sr_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test Trim " + std::to_string(sr_it->start) + " " + std::to_string(sr_it->end) +".\n"); #endif std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; unsigned int start_count = 0; rcount max = 0; rcount min = std::numeric_limits<rcount>::max(); for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= sr_it->end; ++it) { if (it->right < sr_it->start) { continue; } count += it->parent->global_count; // increase count for current interval if (it->left <= sr_it->start) start_count += count; end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count > max) { max = count; } if (count < min) { min = count; } } if (count > max) { max = count; } if (count < min) { min = count; } while (!end_queue.empty()) { end_queue.pop(); } rcount percentile = max*0.1; if (percentile < 15) { percentile = 15; } if (percentile > 60) { percentile = 60; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Max Min Percentile " + std::to_string(max) + " " + std::to_string(min) + " " + std::to_string(percentile) +".\n"); #endif if (min > percentile || max < 60) { continue; } greader_list<std::pair<rpos, rpos> > pre_out; unsigned int minimal_report = 40; unsigned int cut_length = 100; count = 0; bool in = false; rpos start; for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= sr_it->end; ++it) { if (it->right < sr_it->start) { continue; } count += it->parent->global_count; // increase count for current interval end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos if (in && count - it->parent->global_count < percentile) { in = false; if (!pre_out.empty() && start - pre_out.back().second < minimal_report) { pre_out.back().second = end_queue.top().first; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend Area " + std::to_string(end_queue.top().first) +".\n"); #endif } else { if (end_queue.top().first - start + 1 >= minimal_report) { pre_out.push_back(std::make_pair(start, end_queue.top().first)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Area " + std::to_string(start) + " " + std::to_string(end_queue.top().first) +".\n"); 
#endif } } } end_queue.pop(); } if (pre_out.size() > 2) { break; } // did we find a start? if (!in && count >= percentile) { in = true; start = std::max((it)->left, sr_it->start); } } if (in && count >= percentile) { if (!pre_out.empty() && start - pre_out.back().second < minimal_report) { pre_out.back().second = sr_it->end; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend Area " + std::to_string(sr_it->end) +".\n"); #endif } else { if (sr_it->end - start + 1 >= minimal_report) { pre_out.push_back(std::make_pair(start, sr_it->end)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Area " + std::to_string(start) + " " + std::to_string( sr_it->end) +".\n"); #endif } } } if (pre_out.size() > 2) { continue; } // we missuse the fixed parts for trimming information! if (pre_out.size() == 1) { if (sr_it->start == pre_out.back().first) { // greader_list<exon>::iterator sr_it_p = sr_it; // ++sr_it_p; if (pre_out.back().second - pre_out.back().first + 1 >= cut_length && sr_it->end - pre_out.back().second >= cut_length && sr_it->fixed_end && get_max_region(chrom, pre_out.back().first, pre_out.back().second) > 50 ) { // && (sr_it_p == split_raw.end() || sr_it->end+1 != sr_it_p->start || get_max_region(chrom, sr_it_p->start, sr_it_p->end) > count + 20 )) { // && test_decreasing(chrom, pre_out.back().first, pre_out.back().second)) { // we cut this to the end! sr_it->end = pre_out.back().second; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed End.\n"); #endif } } else if (sr_it->end == pre_out.back().second) { // greader_list<exon>::iterator sr_it_p = sr_it; // if (sr_it != split_raw.begin()) --sr_it_p; if (pre_out.back().second - pre_out.back().first + 1 >= cut_length && pre_out.back().first - sr_it->start >= cut_length && sr_it->fixed_start && get_max_region(chrom, pre_out.back().first, pre_out.back().second) > 50 ){ // && (sr_it == split_raw.begin() || sr_it_p->end+1 != sr_it->start || get_max_region(chrom, sr_it_p->start, sr_it_p->end) > start_count + 20)) { // && test_increasing(chrom, pre_out.back().first, pre_out.back().second)) { // we cut this to the start! 
sr_it->start = pre_out.back().first; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Trimmed Start.\n"); #endif } } } else { // if (sr_it->start == pre_out.front().first && sr_it->end == pre_out.back().second // && pre_out.back().second - pre_out.back().first + 1 >= cut_length // && pre_out.front().second - pre_out.front().first + 1 >= cut_length // && pre_out.back().first - pre_out.front().second +1 >= cut_length // && get_max_region(chrom, pre_out.front().first, pre_out.front().second) > 50 // && get_max_region(chrom, pre_out.back().first, pre_out.back().second) > 50) { // // && test_decreasing(chrom, pre_out.front().first, pre_out.front().second) // // && test_increasing(chrom, pre_out.back().first, pre_out.back().second)) { // // && sr_it->fixed_end && sr_it->fixed_start) { // // sr_it = split_raw.insert(sr_it, exon(sr_it->start, pre_out.front().second)); // ++sr_it; // sr_it->start = pre_out.back().first; // modified = true; // #ifdef ALLOW_DEBUG // logger::Instance()->debug("Double Trimmed.\n"); // #endif // } } } if (modified) { raw.clear(); for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != split_raw.end(); ++sr_it) { if (!raw.empty() && raw.back().second + 1 == sr_it->start) { raw.back().second = sr_it->end; } else { raw.push_back(std::make_pair(sr_it->start, sr_it->end)); } } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim OUT.\n"); #endif } void bam_reader::trim_exons_1(chromosome* chrom, greader_list<std::pair<rpos, rpos> > &raw, greader_list<rpos> &starts, greader_list<rpos> &ends) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim 1 IN.\n"); #endif greader_list<exon> split_raw; greader_list<rpos>::iterator s_it = starts.begin(); greader_list<rpos>::iterator e_it = ends.begin(); for(greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin(); raw_it != raw.end(); ++raw_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("RAW it " + std::to_string(raw_it->first) + " " + std::to_string(raw_it->second) +".\n"); #endif rpos pos = raw_it->first; bool trimmable_start = true; while (true) { if (s_it == starts.end() && e_it == ends.end()) { break; } rpos next; bool trimmable_end; bool next_trimmable_start; if (s_it == starts.end() || (e_it != ends.end() && *e_it + 1 < *s_it ) ) { next = *e_it + 1; trimmable_end = false; next_trimmable_start = true; if (next-1 > raw_it->second) { break; } ++e_it; } else if (e_it == ends.end() || *e_it + 1 > *s_it ) { next = *s_it; trimmable_end = true; next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; } else { next = *s_it; trimmable_end = false; next_trimmable_start = false; if (next-1 > raw_it->second) { break; } ++s_it; ++e_it; } split_raw.push_back(exon(pos, next-1, trimmable_start, trimmable_end)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim A " + std::to_string(pos) + " " + std::to_string(next-1) + " : " + std::to_string(trimmable_start) + " " + std::to_string(trimmable_end) +".\n"); #endif trimmable_start = next_trimmable_start; pos = next; } if (pos != raw_it->second) { split_raw.push_back(exon(pos, raw_it->second, trimmable_start, true)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim B " + std::to_string(pos) + " " + std::to_string(raw_it->second) + " : " + std::to_string(trimmable_start) + " 1.\n"); #endif } } // here we have the raw exons bool modified = false; rcount total_max = 0; std::deque<std::deque<std::pair<rpos, rpos> > > all_regions; std::deque<std::deque<float> > all_aves; for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != 
split_raw.end(); ++sr_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test Trim " + std::to_string(sr_it->start) + " " + std::to_string(sr_it->end) +".\n"); #endif // if (! (!sr_it->fixed_start && !sr_it->fixed_end) ) { // continue; // } all_regions.push_back(std::deque<std::pair<rpos, rpos> >()); std::deque<std::pair<rpos, rpos> >& regions = all_regions.back(); all_aves.push_back(std::deque<float>()); std::deque<float>& aves = all_aves.back(); // find 0 seperated parts std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount bases = 0; rpos start_pos = sr_it->start; for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= sr_it->end; ++it) { if (it->right < sr_it->start) { continue; } end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); rpos last_pos = 0; while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; last_pos = end_queue.top().first; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count > total_max) { total_max = count; } if (count == 0 && last_pos != 0) { bases += (std::min((it)->right, last_pos) - std::max((it)->left, start_pos) + 1 ) * it->parent->global_count; rpos length = last_pos - start_pos + 1; // logger::Instance()->debug("B1 " + std::to_string(bases) + " " + std::to_string(length) + ".\n"); float average = bases/(float)length; regions.push_back(std::make_pair(start_pos, last_pos)); aves.push_back(average); bases = 0; start_pos = (it)->left; } bases += (std::min((it)->right, sr_it->end) - std::max((it)->left, start_pos) + 1 ) * it->parent->global_count; count += it->parent->global_count; // increase count for current interval } rpos length = sr_it->end - start_pos + 1; // logger::Instance()->debug("B2 " + std::to_string(bases) + " " + std::to_string(length) + ".\n"); float average = bases/(float)length; regions.push_back(std::make_pair(start_pos, sr_it->end)); aves.push_back(average); } std::deque<std::deque<std::pair<rpos, rpos> > >::iterator ari = all_regions.begin(); std::deque<std::deque<float> >::iterator aai = all_aves.begin(); for(greader_list<exon>::iterator sr_it = split_raw.begin(); sr_it != split_raw.end(); ++sr_it, ++ari, ++aai) { std::deque<std::pair<rpos, rpos> >& regions = *ari; std::deque<float>& aves = *aai; if (regions.size() == 1) { continue; } std::deque<std::pair<rpos, rpos> >::iterator ri_a = regions.begin(); std::deque<float>::iterator ai_a = aves.begin(); std::deque<std::pair<rpos, rpos> >::iterator ri_b = ri_a; ++ri_b; std::deque<float>::iterator ai_b = ai_a; ++ai_b; for (; ri_b != regions.end(); ++ri_b, ++ri_a, ++ai_b, ++ai_a) { float max = std::max(*ai_a, *ai_b); float min = std::min(*ai_a, *ai_b); #ifdef ALLOW_DEBUG logger::Instance()->debug("Split Test at " + std::to_string(ri_a->second) + " _ " + std::to_string(ri_b->first) + " " + std::to_string(min) + " " + std::to_string(max) + ".\n"); #endif if ( (max * options::Instance()->get_trimming_rate() > min && min < 20) || (min > 25 ) ) { // we do want to seperate those sr_it = split_raw.insert(sr_it, exon(sr_it->start, ri_a->second)); ++sr_it; sr_it->start = ri_b->first; modified = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Split Gap at " + std::to_string(ri_a->second) + " _ " + std::to_string(ri_b->first) + ".\n"); #endif } } } if (modified) { raw.clear(); for(greader_list<exon>::iterator sr_it = 
split_raw.begin(); sr_it != split_raw.end(); ++sr_it) { if (!raw.empty() && raw.back().second + 1 == sr_it->start) { raw.back().second = sr_it->end; } else { raw.push_back(std::make_pair(sr_it->start, sr_it->end)); } } } #ifdef ALLOW_DEBUG logger::Instance()->debug("Trim OUT.\n"); #endif } bool bam_reader::test_increasing(chromosome* chrom, rpos start, rpos end) { std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount max = 0; for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= end; ++it) { if (it->right < start) { continue; } count += it->parent->global_count; // increase count for current interval end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count > max) { max = count; //} else if ( (max - count) * 100 / max > 25 && max - count > 20) { } else if ( max - count > 15) { return false; } } return true; } bool bam_reader::test_decreasing(chromosome* chrom, rpos start, rpos end) { std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount min = std::numeric_limits<rcount>::max(); for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= end; ++it) { if (it->right < start) { continue; } count += it->parent->global_count; // increase count for current interval end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count < min) { min = count; //} else if ( (count - min) * 100 / count > 25 && count - min > 20 ) { } else if ( count - min > 15 ) { return false; } } return true; } rcount bam_reader::get_max_region(chromosome* chrom, rpos start, rpos end) { std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount max = 0; for (greader_list<interval>::iterator it = chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= end; ++it) { if (it->right < start) { continue; } count += it->parent->global_count; // increase count for current interval end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count > max) { max = count; } } if (count > max) { max = count; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Found MAX " + std::to_string(max) + " " + std::to_string(start) + " " + std::to_string(end) +".\n"); #endif return max; } bool bam_reader::get_average_to_first_zero_from_left(chromosome* chrom, rpos start, rpos end, float& average, rpos & length, rpos & new_end, rpos & next_start) { std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount bases = 0; for (greader_list<interval>::iterator it = 
chrom->interval_queue.begin(); it != chrom->interval_queue.end() && it->left <= end; ++it) { if (it->right < start) { continue; } end_queue.push( std::make_pair((it)->right, it->parent->global_count) ); rpos last_pos = 0; while ((it)->left > end_queue.top().first + 1) { count -= end_queue.top().second; last_pos = end_queue.top().first; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count == 0 && last_pos != 0) { bases += (std::min((it)->right, last_pos) - std::max((it)->left, start) + 1 ) * it->parent->global_count; length = last_pos - start + 1; average = bases/(float)length; new_end = last_pos; next_start = (it)->left; return true; } bases += (std::min((it)->right, end) - std::max((it)->left, start) + 1 ) * it->parent->global_count; count += it->parent->global_count; // increase count for current interval } length = end - start + 1; average = bases/(float)length; return false; } bool bam_reader::get_average_to_first_zero_from_right(chromosome* chrom, rpos start, rpos end, float& average, rpos & length, rpos & new_start, rpos & next_end) { std::priority_queue< std::pair<rpos, rcount>, std::vector<std::pair<rpos, rcount>>, std::greater<std::pair<rpos, rcount> > > end_queue; unsigned int count = 0; rcount bases = 0; for (greader_list<interval>::reverse_iterator it = chrom->interval_queue.rbegin(); it != chrom->interval_queue.rend() && it->right >= start; ++it) { if (it->left > end) { continue; } end_queue.push( std::make_pair((it)->left, it->parent->global_count) ); rpos last_pos = 0; while ((it)->right + 1 < end_queue.top().first) { count -= end_queue.top().second; last_pos = end_queue.top().first; // minus count because we have already added counts that are right of end_queue pos end_queue.pop(); } if (count == 0 && last_pos != 0) { bases += (std::min((it)->right, end) - std::max((it)->left, last_pos) + 1 ) * it->parent->global_count; length = last_pos - start + 1; average = bases/(float)length; new_start = last_pos; next_end = (it)->left; return true; } bases += (std::min((it)->right, end) - std::max((it)->left, start) + 1 ) * it->parent->global_count; count += it->parent->global_count; // increase count for current interval } length = end - start + 1; average = bases/(float)length; return false; } void bam_reader::solidify_raw_exons_ends(chromosome* chrom, greader_list<std::pair<rpos, rpos> > &raw, greader_list<rpos> &starts, greader_list<rpos> &ends) { const unsigned int extend = options::Instance()->get_max_pos_extend(); r_border_set<rpos>::iterator fs_it = chrom->fixed_exon_starts->begin(); r_border_set<rpos>::iterator fe_it = chrom->fixed_exon_ends->begin(); greader_list<rpos>::iterator cs_it = starts.begin(); greader_list<rpos>::iterator ce_it = ends.begin(); for(greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin(); raw_it != raw.end(); ) { rpos start = raw_it->first; rpos end = raw_it->second; fs_it = std::lower_bound(fs_it, chrom->fixed_exon_starts.ref().end(), start); fe_it = std::lower_bound(fe_it, chrom->fixed_exon_ends.ref().end(), end); cs_it = std::lower_bound(cs_it, starts.end(), start); ce_it = std::lower_bound(ce_it, ends.end(), end); // we look for updated starts if (fs_it!= chrom->fixed_exon_starts.ref().end() && start + extend >= *fs_it && start <= *fs_it + extend) { // match to fixed raw_it->first = *fs_it; } else if (cs_it!= starts.end() && start + extend >= *cs_it && start <= *cs_it + extend) { // match to cluster raw_it->first = *cs_it; } else if (fs_it!= 
chrom->fixed_exon_starts.ref().begin() && start + extend >= *(fs_it-1) && start <= *(fs_it-1) + extend) { // match to fixed previous
    raw_it->first = *(fs_it-1);
} else if (cs_it!= starts.begin() && start + extend >= *(cs_it-1) && start <= *(cs_it-1) + extend) { // match to cluster previous
    raw_it->first = *(cs_it-1);
}

// we look for updated ends
if (fe_it!= chrom->fixed_exon_ends.ref().end() && end + extend >= *fe_it && end <= *fe_it + extend) { // match to fixed
    raw_it->second = *fe_it;
} else if (ce_it!= ends.end() && end + extend >= *ce_it && end <= *ce_it + extend) { // match to cluster
    raw_it->second = *ce_it;
} else if (fe_it!= chrom->fixed_exon_ends.ref().begin() && end + extend >= *(fe_it-1) && end <= *(fe_it-1) + extend) { // match to fixed previous
    raw_it->second = *(fe_it-1);
} else if (ce_it!= ends.begin() && end + extend >= *(ce_it-1) && end <= *(ce_it-1) + extend) { // match to cluster previous
    raw_it->second = *(ce_it-1);
}

if (raw_it->first > raw_it->second || raw_it->second - raw_it->first < options::Instance()->get_min_raw_exon_size()) {
    raw_it = raw.erase(raw_it);
} else {
    ++raw_it;
}
}
}

void bam_reader::update_existing_exons( connected* connected, chromosome* chrom, greader_list<std::pair<rpos, rpos> > &raw, rpos &left, rpos &right) {

#ifdef ALLOW_DEBUG
logger::Instance()->debug("Update existing. " + std::to_string(connected->fossil_exons.ref().size()) + "\n");
for ( greader_list<exon* >::iterator fix_it = connected->fossil_exons.ref().begin(); fix_it != connected->fossil_exons.ref().end(); ++fix_it) {
    logger::Instance()->debug("Fossil " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + " Fixed " + std::to_string((*fix_it)->fixed_start) + "-" + std::to_string((*fix_it)->fixed_end) + "\n");
}
for ( greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin(); raw_it != raw.end(); ++raw_it) {
    logger::Instance()->debug("Raw " + std::to_string(raw_it->first) + " " + std::to_string(raw_it->second) + "\n");
}
#endif

greader_list<std::pair<rpos, rpos> >::iterator raw_it = raw.begin();
greader_list<exon* >::iterator fix_it = connected->fossil_exons.ref().begin();

#ifdef ALLOW_DEBUG
if (fix_it != connected->fossil_exons.ref().end()) logger::Instance()->debug("Fossil Start " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + "\n");
#endif

// move fix to first in range
while (fix_it != connected->fossil_exons.ref().end() && (*fix_it)->end < left) {
    ++fix_it;
    #ifdef ALLOW_DEBUG
    if (fix_it != connected->fossil_exons.ref().end()) logger::Instance()->debug("Fossil move " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + "\n");
    #endif
}

for ( ; raw_it != raw.end(); ++raw_it) {
    #ifdef ALLOW_DEBUG
    logger::Instance()->debug("Raw it " + std::to_string(raw_it->first) + " " + std::to_string(raw_it->second) + "\n");
    #endif

    // find first possibly intersecting
    while (fix_it != connected->fossil_exons.ref().end() && raw_it->first > (*fix_it)->end) {
        ++fix_it;
    }

    if (fix_it == connected->fossil_exons.ref().end()) {
        chrom->fossil_exons.push_back(exon(raw_it->first, raw_it->second));
        connected->fossil_exons.ref().push_back(&chrom->fossil_exons.back());
        fix_it = connected->fossil_exons.ref().end();
        #ifdef ALLOW_DEBUG
        logger::Instance()->debug("Insert without competing.\n");
        #endif
        continue;
    }

    if (raw_it->second < (*fix_it)->start ) { // this means no overlap!
so just add it // add to new exon to the right and stay at new insert chrom->fossil_exons.push_back(exon(raw_it->first, raw_it->second)); fix_it = connected->fossil_exons.ref().insert(fix_it, &chrom->fossil_exons.back()); ++fix_it; #ifdef ALLOW_DEBUG logger::Instance()->debug("Add new Exon in between existing without touch: " + std::to_string(raw_it->first) + " - "+std::to_string(raw_it->second)+".\n"); #endif continue; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Fossil pre " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + "\n"); #endif if (raw_it->first < (*fix_it)->start && raw_it->second >= (*fix_it)->start ) { // we have an actual overlap top the front // this by the iteration is the first such overlap, therefore no previous to consider // hence we extend by the given length if((*fix_it)->fixed_start) { // fix, so add new exon // add to the left then return to current position chrom->fossil_exons.push_back(exon(raw_it->first, (*fix_it)->start-1)); fix_it = connected->fossil_exons.ref().insert(fix_it, &chrom->fossil_exons.back()); (*fix_it)->fixed_end = true; ++fix_it; #ifdef ALLOW_DEBUG logger::Instance()->debug("Add new Exon before existing: " + std::to_string(raw_it->first) + " - "+std::to_string((*fix_it)->start-1)+".\n"); #endif } else { // just extend existing one (*fix_it)->start = raw_it->first; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend existing Exon to left: " + std::to_string((*fix_it)->start) + " - "+std::to_string((*fix_it)->end)+".\n"); #endif } } // now loop till end and fix possible holes // do we overlap to the end? while (fix_it != connected->fossil_exons.ref().end() && raw_it->second > (*fix_it)->end) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Right \n"); #endif // extend to right, see for next element greader_list<exon* >::iterator next = fix_it; ++next; if (next == connected->fossil_exons.ref().end() || (*next)->start > raw_it->second) { // overlap, but no following exon in the overlap if ((*fix_it)->fixed_end) { // add to new exon to the right and stay at new insert chrom->fossil_exons.push_back(exon((*fix_it)->end+1, raw_it->second)); #ifdef ALLOW_DEBUG logger::Instance()->debug("Add new Exon after existing: " + std::to_string((*fix_it)->end+1) + " - "+std::to_string(raw_it->second)+".\n"); #endif fix_it = connected->fossil_exons.ref().insert(next, &chrom->fossil_exons.back()); (*fix_it)->fixed_start = true; } else { // modify existing (*fix_it)->end = raw_it->second; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend existing Exon to right: " + std::to_string((*fix_it)->start) + " - "+std::to_string((*fix_it)->end)+".\n"); #endif } ++fix_it; } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test Fix " + std::to_string((*fix_it)->fixed_start) + " " + std::to_string((*fix_it)->fixed_end) + " " + std::to_string((*next)->fixed_start) + " " + std::to_string((*next)->fixed_end) + "\n"); #endif // this means we have to close the gap between two exons if ( (*fix_it)->fixed_end && (*next)->fixed_start) { // insert new between two #ifdef ALLOW_DEBUG logger::Instance()->debug("Add new Exon in between existing: " + std::to_string((*fix_it)->end+1) + " - "+std::to_string((*next)->start-1)+".\n"); #endif if ((*fix_it)->end +1 != (*next)->start) { chrom->fossil_exons.push_back(exon((*fix_it)->end+1, (*next)->start-1)); fix_it = connected->fossil_exons.ref().insert(next, &chrom->fossil_exons.back()); (*fix_it)->fixed_start = true; (*fix_it)->fixed_end = true; } ++fix_it; } else if (!(*fix_it)->fixed_end && 
(*next)->fixed_start) { // extend and fix left (*fix_it)->end = (*next)->start-1; (*fix_it)->fixed_end = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend existing Exon to left fixed: " + std::to_string((*fix_it)->start) + " - "+std::to_string((*fix_it)->end)+".\n"); #endif ++fix_it; } else if ( (*fix_it)->fixed_end && !(*next)->fixed_start) { (*next)->start = (*fix_it)->end+1; (*next)->fixed_start = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend existing Exon to right fixed: " + std::to_string((*fix_it)->start) + " - "+std::to_string((*next)->end)+".\n"); #endif ++fix_it; } else { // we need to merge two separate exons... // get everything in next and erase itr (*next)->start = (*fix_it)->start; (*next)->fixed_start = (*fix_it)->fixed_start; // sadly we need to resort... lazy<greader_refsorted_list<raw_atom* > > new_atom_order; for (greader_refsorted_list<raw_atom* >::iterator m = connected->atoms.ref().begin(); m != connected->atoms.ref().end() ; ++m) { greader_refsorted_list<exon*>::iterator old_left = (*m)->exons.ref().find(*fix_it); if (old_left != (*m)->exons.ref().end()) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Switch " + std::to_string( (*fix_it)->start ) + "-" + std::to_string((*fix_it)->end) + " ; " + std::to_string( (*old_left)->start ) + "-" + std::to_string((*old_left)->end) + " ; " + std::to_string( (*next)->start ) + "-" + std::to_string((*next)->end) + ".\n"); #endif (*m)->exons.ref().erase(*old_left); // erase if found (*m)->exons.ref().insert(*next); // insert other instead } // we need to filter out any duplicates here! Old version was just over complicated greader_refsorted_list<raw_atom* >::iterator naoi = new_atom_order->find(*m); if (naoi != new_atom_order->end()) { // we already have this, so change out paired info for (greader_refsorted_list<raw_atom* >::iterator m2 = connected->atoms.ref().begin(); m2 != connected->atoms.ref().end() ; ++m2) { // single partner paired_map<raw_atom*, gmap<int, rcount> >::iterator ri = (*m2)->paired.find(*m); if (ri != (*m2)->paired.end()) { for ( gmap<int, rcount>::iterator rii = ri->second.begin(); rii != ri->second.end(); ++rii) { (*m2)->paired[*naoi][rii->first] += rii->second; } (*m2)->paired.erase(ri); } } for (gmap<int, raw_series_counts>::iterator rsci = (*m)->raw_series.begin(); rsci != (*m)->raw_series.end(); ++rsci ) { (*naoi)->raw_series[rsci->first].add_other_max_min(rsci->second, (*next)->start, (*next)->end); } } else { new_atom_order->insert(*m); } } connected->atoms = new_atom_order; #ifdef ALLOW_DEBUG logger::Instance()->debug("Joined two existing Exons: " + std::to_string((*fix_it)->start) + " - "+std::to_string((*fix_it)->end)+".\n"); #endif // now erase fix_it from real list greader_refsafe_list<exon>::iterator rem = std::find(chrom->fossil_exons.begin(), chrom->fossil_exons.end(), **fix_it); fix_it = connected->fossil_exons.ref().erase(fix_it); chrom->fossil_exons.erase(rem); } } } } } void bam_reader::split_exons( connected* connected, chromosome* chrom, greader_list<rpos> &splits, rpos &left, rpos &right, int correction) { // move linear through both and split exons greader_list<exon* >::iterator e_it = connected->fossil_exons.ref().begin(); greader_list<rpos>::iterator s_it = splits.begin(); while (e_it != connected->fossil_exons.ref().end() && (*e_it)->end < left ) { ++e_it; } unsigned int max_extend = options::Instance()->get_max_pos_extend(); for (; e_it != connected->fossil_exons.ref().end() && (*e_it)->start <= right && s_it != splits.end(); ++e_it ) { while (*s_it <= 
(*e_it)->end && s_it != splits.end()) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test " + std::to_string(*s_it) + " " + std::to_string((*e_it)->start) + ":" + std::to_string((*e_it)->fixed_start) + "-" + std::to_string((*e_it)->end) + ":" + std::to_string((*e_it)->fixed_end) + ".\n"); #endif if ( !(*e_it)->fixed_start && correction == 1 && ((*s_it >= (*e_it)->start && *s_it - (*e_it)->start <= max_extend) || (*s_it < (*e_it)->start && (*e_it)->start - *s_it <= max_extend))) { if (*s_it > (*e_it)->start) { for (greader_refsorted_list<raw_atom* >::iterator a = connected->atoms.ref().begin(); a != connected->atoms.ref().end() ; ++a) { for(gmap<int, raw_series_counts>::iterator rsi = (*a)->raw_series.begin(); rsi != (*a)->raw_series.end(); ++rsi) { for (std::map< rpos,rcount >::iterator li = rsi->second.lefts->begin(); li != rsi->second.lefts->end(); ) { if ( li->first < *s_it && li->first >= (*e_it)->start) { rsi->second.lefts.ref()[*s_it] += li->second; li = rsi->second.lefts->erase(li); } else { ++li; } } for (std::map< rpos,rcount >::iterator ri = rsi->second.rights->begin(); ri != rsi->second.rights->end(); ) { if ( ri->first < *s_it && ri->first >= (*e_it)->start) { rsi->second.rights.ref()[*s_it] += ri->second; ri = rsi->second.rights->erase(ri); } else { ++ri; } } for (std::map< rpos,rcount >::iterator li = rsi->second.hole_starts->begin(); li != rsi->second.hole_starts->end(); ) { if ( li->first < *s_it && li->first >= (*e_it)->start) { rsi->second.hole_starts.ref()[*s_it] += li->second; li = rsi->second.hole_starts->erase(li); } else { ++li; } } for (std::map< rpos,rcount >::iterator ri = rsi->second.hole_ends->begin(); ri != rsi->second.hole_ends->end(); ) { if ( ri->first < *s_it && ri->first >= (*e_it)->start) { rsi->second.hole_ends.ref()[*s_it] += ri->second; ri = rsi->second.hole_ends->erase(ri); } else { ++ri; } } } } } (*e_it)->fixed_start = true; (*e_it)->start = *s_it; #ifdef ALLOW_DEBUG logger::Instance()->debug("Set Fixed Start.\n"); #endif } if ( !(*e_it)->fixed_end && correction == 0 && ((*s_it >= (*e_it)->end && *s_it - (*e_it)->end <= max_extend) || (*s_it < (*e_it)->end && (*e_it)->end - *s_it <= max_extend))) { if (*s_it < (*e_it)->end) { for (greader_refsorted_list<raw_atom* >::iterator a = connected->atoms.ref().begin(); a != connected->atoms.ref().end() ; ++a) { for(gmap<int, raw_series_counts>::iterator rsi = (*a)->raw_series.begin(); rsi != (*a)->raw_series.end(); ++rsi) { for (std::map< rpos,rcount >::iterator li = rsi->second.lefts->begin(); li != rsi->second.lefts->end(); ) { if ( li->first > *s_it && li->first <= (*e_it)->end) { rsi->second.lefts.ref()[*s_it] += li->second; li = rsi->second.lefts->erase(li); } else { ++li; } } for (std::map< rpos,rcount >::iterator ri = rsi->second.rights->begin(); ri != rsi->second.rights->end(); ) { if ( ri->first > *s_it && ri->first <= (*e_it)->end) { rsi->second.rights.ref()[*s_it] += ri->second; ri = rsi->second.rights->erase(ri); } else { ++ri; } } for (std::map< rpos,rcount >::iterator li = rsi->second.hole_starts->begin(); li != rsi->second.hole_starts->end(); ) { if ( li->first > *s_it && li->first <= (*e_it)->end) { rsi->second.hole_starts.ref()[*s_it] += li->second; li = rsi->second.hole_starts->erase(li); } else { ++li; } } for (std::map< rpos,rcount >::iterator ri = rsi->second.hole_ends->begin(); ri != rsi->second.hole_ends->end(); ) { if ( ri->first > *s_it && ri->first <= (*e_it)->end) { rsi->second.hole_ends.ref()[*s_it] += ri->second; ri = rsi->second.hole_ends->erase(ri); } else { ++ri; } } } } } 
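// All per-atom boundary counts (lefts/rights and hole starts/ends) lying
// between the split position *s_it and the old exon end have been collapsed
// onto *s_it above; the exon end itself can now be pinned to the split.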
(*e_it)->fixed_end = true; (*e_it)->end = *s_it; #ifdef ALLOW_DEBUG logger::Instance()->debug("Set Fixed End.\n"); #endif } if (*s_it >= (*e_it)->start && *s_it+1-correction - (*e_it)->start > max_extend && (*e_it)->end - *s_it > max_extend) { // we have an overlap AND need to split // NOTE: if points are outside of exon, they are rejected for splitting as they be construction lie within max_extend // insert new exon to the left chrom->split_exon(*s_it-correction, e_it, connected); #ifdef ALLOW_DEBUG logger::Instance()->debug("Split Exon at " + std::to_string(*s_it) + ".\n"); #endif } else if (*s_it < (*e_it)->start && (*e_it)->start - *s_it > max_extend && ( e_it == connected->fossil_exons.ref().begin() || *s_it - (*(e_it-1))->end > max_extend)) { // in between, remove this one s_it = splits.erase(s_it); continue; } ++s_it; } } --e_it; for ( ; s_it != splits.end(); ) { // TODO: simplify if ( !(*e_it)->fixed_start && ((*s_it >= (*e_it)->start && *s_it - (*e_it)->start <= max_extend) || (*s_it < (*e_it)->start && (*e_it)->start - *s_it <= max_extend))) { (*e_it)->fixed_start = true; } if ( !(*e_it)->fixed_end && ((*s_it >= (*e_it)->end && *s_it - (*e_it)->end <= max_extend) || (*s_it < (*e_it)->end && (*e_it)->end - *s_it <= max_extend))) { (*e_it)->fixed_end = true; } if (*s_it >= (*e_it)->end && *s_it - (*(e_it))->end > max_extend) { s_it = splits.erase(s_it); } else { ++s_it; } } } //############ Fragments ############## greader_list<connected>::iterator bam_reader::insert_fragment(chromosome* chrom, rpos &left, rpos &right) { greader_list<connected>::iterator merge_start, merge_end; bool found = false; #ifdef ALLOW_DEBUG logger::Instance()->debug("Test Frag " + std::to_string(left) + " " + std::to_string(right) + "\n"); for(greader_list<connected>::iterator it = chrom->chrom_fragments.begin();it != chrom->chrom_fragments.end(); ++it){ logger::Instance()->debug("InFRAG " + std::to_string(it->start) + " " + std::to_string(it->end) + "\n"); } #endif greader_list<connected>::iterator it = chrom->chrom_fragments.begin(); #ifdef ALLOW_DEBUG logger::Instance()->debug("Fraga " + std::to_string(it->start) + " " + std::to_string(it->end) + "\n"); #endif for ( ; it != chrom->chrom_fragments.end() && right >= it->start; ++it) { // mark all overlap, connected areas #ifdef ALLOW_DEBUG logger::Instance()->debug("Fragb " + std::to_string(it->start) + " " + std::to_string(it->end) + "\n"); #endif if ( (it->end >= left && it->end <= right) || (it->start >= left && it->start <= right) || (it->start < left && it->end > right )) { if (!found) { found = true; merge_start = it; } merge_end = it; } } greader_list<connected>::iterator ret; if (!found) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Insert new Fragment " + std::to_string(left) + " - "+std::to_string(right)+".\n"); #endif greader_list<connected>::iterator el = chrom->chrom_fragments.insert(it, connected()); el->start = left; el->end = right; lazy< std::deque<read_collection> > new_inner = chrom->reads.add_inner(); el->reads.ref().push_back(new_inner); ret = el; } else if (found && merge_start==merge_end) { // just modify this one, exons are changed later ret = merge_start; if (left < merge_start->start) { merge_start->start = left; } if (right > merge_start->end) { merge_start->end = right; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Extend Region with " + std::to_string(left) + " - "+std::to_string(right) + " to " + std::to_string(merge_start->start) + " - " + std::to_string(merge_start->end) +".\n"); #endif } else { // modify first one to 
merge greader_list<connected>::iterator m = merge_start; // logger::Instance()->debug("======== " + std::to_string(m->atoms.ref().size()) + "\n"); // logger::Instance()->debug("Connected " + std::to_string(m->start) + " " + std::to_string(m->end) + "\n"); // for ( greader_list<exon* >::iterator fix_it = m->fossil_exons.ref().begin(); fix_it != m->fossil_exons.ref().end(); ++fix_it) { // logger::Instance()->debug("Fossil " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + "\n"); // } ++m; greader_list<connected>::iterator end = merge_end; ++end; for (; m != end; ++m) { // logger::Instance()->debug("========" + std::to_string(m->atoms.ref().size()) + "\n"); // logger::Instance()->debug("Connected " + std::to_string(m->start) + " " + std::to_string(m->end) + "\n"); // for ( greader_list<exon* >::iterator fix_it = m->fossil_exons.ref().begin(); fix_it != m->fossil_exons.ref().end(); ++fix_it) { // logger::Instance()->debug("Fossil " + std::to_string((*fix_it)->start) + " " + std::to_string((*fix_it)->end) + "\n"); // } _MOVE_RANGE(m->fossil_exons.ref().begin(), m->fossil_exons.ref().end(), std::inserter(merge_start->fossil_exons.ref(), merge_start->fossil_exons.ref().end())); _MOVE_RANGE(m->atoms.ref().begin(), m->atoms.ref().end(), std::inserter(merge_start->atoms.ref(), merge_start->atoms.ref().end())); _MOVE_RANGE(m->reads.ref().begin(), m->reads.ref().end(), std::back_inserter(merge_start->reads.ref())); merge_start->avg_split = ((merge_start->avg_split * merge_start->intel_count) + (m->avg_split * m->intel_count)) / (merge_start->intel_count + m->intel_count); merge_start->intel_count += m->intel_count; } if (left < merge_start->start) { merge_start->start = left; } if (right > merge_end->end) { merge_start->end = right; } else { merge_start->end = merge_end->end; } // mark_or_reduce_paired_atoms(&*merge_start, chrom, merge_start->atoms.ref().begin(), merge_start->atoms.ref().end()); done later anyway right now #ifdef ALLOW_DEBUG logger::Instance()->debug("Join two Regions with " + std::to_string(left) + " - "+std::to_string(right) + " to " + std::to_string(merge_start->start) + " - " + std::to_string(merge_start->end) + " " + std::to_string(merge_start->atoms.ref().size()) +".\n"); #endif ++merge_start; ret = chrom->chrom_fragments.erase(merge_start, end); --ret; } for(greader_list<connected>::iterator it = chrom->chrom_fragments.begin();it != chrom->chrom_fragments.end(); ++it){ #ifdef ALLOW_DEBUG logger::Instance()->debug("OUTFRAG " + std::to_string(it->start) + " " + std::to_string(it->end) + "\n"); #endif } return ret; } // ######### ATOMS ######### void bam_reader::assign_reads( connected* conn, chromosome* chrom) { // loop over exons (sorted) greader_list<interval >::iterator i_it_start = chrom->interval_queue.begin(); for (greader_list<exon* >::iterator e_it = conn->fossil_exons.ref().begin(); e_it != conn->fossil_exons.ref().end(); ++e_it) { // loop over (still sorted) intervals for (greader_list<interval >::iterator i_it = i_it_start; i_it != chrom->interval_queue.end(); ++i_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("START " + std::to_string((i_it)->left)+ "-" + std::to_string((i_it)->right)+ ";" + std::to_string((*e_it)->start) + " - " + std::to_string((*e_it)->end) + "\n"); // if ((i_it)->parent->id_set) { // logger::Instance()->debug( "ID " + (i_it)->parent->ids.ref()[0][0]+ "\n"); // } #endif // test for starts if ((i_it)->right < (*e_it)->start && i_it == i_it_start) { // cannot overlap anymore, therefore increase start counter to one 
ahead ++i_it_start; #ifdef ALLOW_DEBUG logger::Instance()->debug("Skip Start. \n"); #endif continue; } if ((i_it)->left > (*e_it)->end) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Break. \n"); #endif break; } // if ( ( (i_it)->right >= (*e_it)->end && ( // (i_it)->left <= (*e_it)->start // encased // || ( (i_it)->left <= (*e_it)->end && (i_it)->left >= (*e_it)->start // && (*e_it)->end - (i_it)->left > options::Instance()->get_max_pos_extend()) ) // ) // overlap bigger merge // || ( (i_it)->left < (*e_it)->start && (i_it)->right >= (*e_it)->start && (i_it)->right - (*e_it)->start> options::Instance()->get_max_pos_extend() ) // || ( (i_it)->right <= (*e_it)->end && (i_it)->left >= (*e_it)->start) // ) { if ((i_it)->right >= (*e_it)->start && (i_it)->left <= (*e_it)->end) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Overlap \n"); #endif raw_atom* atom; if ( (i_it)->parent->atom == NULL) { atom = (i_it)->parent->create_atom(); // atom->reads.ref().push_back(read_collection((i_it)->parent->get_left_limit(), (i_it)->parent->right_limit, (i_it)->parent->length)); // if ((i_it)->parent->id_set) { // atom->reads.ref().back().add_id((i_it)->parent->id); // } } else { atom = (i_it)->parent->atom; } // test for uncomplete overlaps due to unsupported moves if ( (i_it)->parent->get_left_limit() != (i_it)->left && (i_it)->left > (*e_it)->start + options::Instance()->get_min_raw_exon_size() || (i_it)->parent->get_right_limit() != (i_it)->right && (i_it)->right + options::Instance()->get_min_raw_exon_size() < (*e_it)->end) { (i_it)->parent->block = true; } atom->exons.ref().insert(*e_it); } } } } void bam_reader::filter_outer_read_junctions(chromosome* chrom, std::map< std::pair<rpos, rpos>, bool > &junction_validation, unsigned int total_inputs) { lazy<r_border_set<rpos> > fs = chrom->fixed_exon_starts; lazy<r_border_set<rpos> > fe = chrom->fixed_exon_ends; for ( greader_list<rread>::iterator r_it = chrom->read_queue.begin(); r_it != chrom->read_queue.end(); ++r_it) { if (r_it->atom == NULL || r_it->atom->exons->size() < 2) { // this read was capped by a filter ! continue; } greader_refsorted_list<exon*>::iterator ei = r_it->atom->exons->begin(); greader_refsorted_list<exon*>::iterator ein = ei; ++ein; for ( ; ein != r_it->atom->exons->end() && !r_it->block; ++ei, ++ein ) { //logger::Instance()->debug("Test SPlice " + std::to_string((*ei)->end+1) + ":" + std::to_string((*ein)->start-1) + "\n" ); if ((*ei)->end+1 == (*ein)->start) { continue; } //std::map< std::pair<rpos, rpos>, bool >::iterator jv = junction_validation.find(std::make_pair( (*ei)->end+1, (*ein)->start-1)); //#ifdef ALLOW_DEBUG //logger::Instance()->debug("JV " + std::to_string((*ei)->end+1) + " - " + std::to_string((*ein)->start-1) + " valid " + std::to_string(jv!=junction_validation.end()) + "\n" ); //#endif if ( ! 
junction_validation[std::make_pair( (*ei)->end+1, (*ein)->start-1)] ) { // not a validated junction, kill whole //logger::Instance()->debug("block\n" ); r_it->block = true; } } if (r_it->block) { continue; } // test first and last atom wheter it is long enough atom rpos begin_start = (*r_it->atom->exons->begin())->start; rpos begin_end = (*r_it->atom->exons->begin())->end; rpos end_start = (*r_it->atom->exons->rbegin())->start; rpos end_end = (*r_it->atom->exons->rbegin())->end; #ifdef ALLOW_DEBUG logger::Instance()->debug("TEST START " + std::to_string(begin_start) + ":" + std::to_string(begin_end) + " to " + std::to_string(r_it->left_limit) + "\n" ); logger::Instance()->debug("TEST END " + std::to_string(end_start) + ":" + std::to_string(end_end) + " to " + std::to_string(r_it->right_limit) + "\n" ); #endif if (r_it->left_limit < begin_start) { // in case other one was missed r_it->left_limit = begin_start; } bool cut_first; if (begin_end - begin_start + 1 > options::Instance()->get_min_raw_exon_size()) { // exon bigger than cutoff cut_first = begin_end - r_it->left_limit + 1 < options::Instance()->get_min_raw_exon_size(); if (!cut_first) { if(!fe->sorted_find(begin_end) && !fs->sorted_find(begin_end+1)) { // unsupported cut_first = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Not in Ends\n"); #endif } } } else { cut_first = begin_start != r_it->left_limit; } if (r_it->right_limit > end_end) { // in case other one was missed r_it->right_limit = end_end; } bool cut_last; if (end_end - end_start + 1 > options::Instance()->get_min_raw_exon_size()) { cut_last = r_it->right_limit - end_start + 1 < options::Instance()->get_min_raw_exon_size(); if (!cut_last) { if(!fs->sorted_find(end_start) && !fe->sorted_find(end_start-1)) { // unsupported cut_last = true; #ifdef ALLOW_DEBUG logger::Instance()->debug("Not in Starts\n"); #endif } } } else { cut_last = end_end != r_it->right_limit; } if (cut_first && cut_last && r_it->atom->exons->size() == 2) { #ifdef ALLOW_DEBUG logger::Instance()->debug("ERASE READ\n"); #endif r_it->block = true; continue; } if (cut_first) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Cut First\n"); #endif r_it->atom->exons->erase(r_it->atom->exons->begin()); // erase and set to new begin r_it->left_limit = (*r_it->atom->exons->begin())->start; } if (cut_last) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Cut Last\n"); #endif r_it->atom->exons->erase(std::prev(r_it->atom->exons->end())); // erase and set to new end r_it->right_limit = (*r_it->atom->exons->rbegin())->end; } } #ifdef ALLOW_DEBUG logger::Instance()->debug("DONE\n"); #endif } void bam_reader::reduce_atoms(connected* conn, chromosome* chrom) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Reduce connected .\n"); #endif // these need to be inserted for ( greader_list<rread>::iterator r_it = chrom->read_queue.begin(); r_it != chrom->read_queue.end(); ++r_it) { if (r_it->atom == NULL || r_it->block) { // this read was capped by a filter ! 
#ifdef ALLOW_DEBUG logger::Instance()->debug("Filtered Read " + std::to_string(r_it->left_limit) + " " + std::to_string(r_it->right_limit) + " Blocked " + std::to_string(r_it->block) + ".\n"); if (r_it->atom != NULL) { logger::Instance()->debug("Blocked Atom " + r_it->atom->to_string() + ".\n"); } #endif continue; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Search Existing Atom " + r_it->atom->to_string() + ".\n"); #endif // into the existing atoms marked by the raw_atom* atom; greader_refsorted_list<raw_atom*>::iterator atom_it = conn->atoms.ref().find( r_it->atom ); if (atom_it == conn->atoms.ref().end()) { // atom does not exist yet, so just add chrom->atoms.push_back(*r_it->atom); atom = &chrom->atoms.back(); conn->atoms.ref().insert(atom); // we can take the atom as is, but read has still not been added #ifdef ALLOW_DEBUG logger::Instance()->debug("New Atom " + (atom)->to_string() + ".\n"); #endif } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Existing Atom " + (*atom_it)->to_string() + ".\n"); #endif // we found the correct one, so merge into it atom = *atom_it; } // try and find read collection // min max used for filtered exon boundaries read_collection* rc_joined = new read_collection(std::max(r_it->left_limit, (*atom->exons->begin())->start), std::min(r_it->right_limit, (*atom->exons->rbegin())->end), r_it->length, atom); greader_refsorted_list<read_collection*>::iterator rc_it = atom->reads.ref().find( rc_joined ); read_collection* rc; if (rc_it == atom->reads.ref().end()) { #ifdef ALLOW_DEBUG logger::Instance()->debug("New Collection.\n"); #endif conn->reads.ref().push_to_end(*rc_joined); rc = &conn->reads.ref().get_end(); atom->reads->insert(rc); } else { #ifdef ALLOW_DEBUG logger::Instance()->debug("Existing Collection .\n"); #endif rc = *rc_it; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Collection: " + std::to_string(rc->left_limit) + " " + std::to_string(rc->right_limit) + ".\n"); #endif // now add info of current read! if (r_it->id_set) { for (gmap<int, greader_list<std::string> >::iterator mi = r_it->ids.ref().begin(); mi != r_it->ids.ref().end(); mi++) { _MOVE_RANGE( mi->second.begin(), mi->second.end(), std::back_inserter(rc->open_ids.ref()[mi->first])); } } for (gmap<int, unsigned int>::iterator c_it = r_it->count.begin(); c_it != r_it->count.end(); c_it++ ) { rc->counts.ref()[c_it->first].count += c_it->second; } rc->length_filtered = rc->length_filtered && rc_joined->length_filtered; delete rc_joined; } #ifdef ALLOW_DEBUG logger::Instance()->debug("Final Graph ++++++++++++++++++++++++++++++++\n"); logger::Instance()->debug("Reduced to " + std::to_string(conn->atoms.ref().size())+" atoms.\n"); #endif } void bam_reader::mark_or_reduce_paired_atoms( connected* conn, chromosome* chrom , const greader_refsorted_list<raw_atom*>::iterator &atom_start, const greader_refsorted_list<raw_atom*>::iterator &atom_end) { boost::unordered_map<std::string, std::tuple<raw_atom*, read_collection*, std::string* > > id_map; // set current indices, makes joining faster! 
unsigned int i=0; for (greader_list<exon* >::iterator e_it = conn->fossil_exons.ref().begin(); e_it != conn->fossil_exons.ref().end(); ++e_it,++i) { (*e_it)->id = i; } for (greader_refsorted_list<raw_atom*>::iterator a_it = atom_start; a_it != atom_end; ++a_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("RAW: " + std::to_string((*(*a_it)->exons->begin())->id) + " " + std::to_string((*(*a_it)->exons->rbegin())->id) + "\n"); #endif for (greader_refsorted_list<read_collection*>::iterator c_it = (*a_it)->reads.ref().begin(); c_it != (*a_it)->reads.ref().end(); ++c_it) { for (gmap<int, greader_list<std::string> >::iterator oi_it = (*c_it)->open_ids.ref().begin(); oi_it != (*c_it)->open_ids.ref().end(); ++oi_it) { int index = oi_it->first; for (greader_list<std::string>::iterator i_it = oi_it->second.begin(); i_it != oi_it->second.end(); ++i_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("ID: " + *i_it + "\n"); #endif boost::unordered_map<std::string, std::tuple<raw_atom*, read_collection*, std::string* > > ::iterator find = id_map.find(*i_it); if ( find == id_map.end()) { id_map[*i_it] = std::make_tuple(&*(*a_it), &*(*c_it), &*i_it); } else { // we found a pair raw_atom* first = std::get<0>(find->second); read_collection* fcol = std::get<1>(find->second); // test if we want to join these two exon* lastexon = *first->exons.ref().rbegin(); exon* firstexon = *(*a_it)->exons.ref().begin(); exon* ll = *first->exons.ref().begin(); exon* rr = *(*a_it)->exons.ref().rbegin(); #ifdef ALLOW_DEBUG logger::Instance()->debug("Test exon join " + std::to_string(ll->id) + "-" + std::to_string(lastexon->id) + ":" + std::to_string(firstexon->id) + "-" + std::to_string(rr->id) + " " + std::to_string(first->exons.ref().size()) + "-" + std::to_string((*a_it)->exons.ref().size()) + "\n" ); logger::Instance()->debug("L " + first->to_string() + "\n" ); logger::Instance()->debug("R " + (*a_it)->to_string() + "\n" ); #endif if ( ( (first->exons.ref().size() ==1 || (*a_it)->exons.ref().size() == 1) && lastexon->id == firstexon->id ) || ll->id == firstexon->id || rr->id == lastexon->id ) { // this happens when left partner is a subset, i.e. only one exon before a split // do nothing but remove after if #ifdef ALLOW_DEBUG logger::Instance()->debug("Subset \n" ); #endif } else if (lastexon->id == firstexon->id || lastexon->id +1 == firstexon->id ) { // join :) if (lastexon->id +1 == firstexon->id) { // there is a gap!, we can only do this if we have a proven split here! one of my major pains bool junction_found = false;; for (greader_refsorted_list<raw_atom*>::iterator ra = atom_start; ra != atom_end; ++ra) { if ( (*(*ra)->exons.ref().begin())->id > lastexon->id ) { break; // we are sorted and this one is bigger! } // we need to find one with firstexon and lastexon in same raw! 
if ( (*ra)->exons.ref().find(firstexon) != (*ra)->exons.ref().end() && (*ra)->exons.ref().find(lastexon) != (*ra)->exons.ref().end() ) { // we found it junction_found = true; break; } } if (!junction_found) { continue; } } #ifdef ALLOW_DEBUG logger::Instance()->debug("1Join \n" ); #endif raw_atom* joined_atom = new raw_atom(); std::copy(first->exons.ref().begin(), first->exons.ref().end(),std::inserter( joined_atom->exons.ref(), joined_atom->exons.ref().end()) ); std::copy((*a_it)->exons.ref().begin(), (*a_it)->exons.ref().end(),std::inserter( joined_atom->exons.ref(), joined_atom->exons.ref().end()) ); // try to find atom in connected raw_atom* atom; greader_refsorted_list<raw_atom*>::iterator atom_it = conn->atoms.ref().find( joined_atom ); if (atom_it == conn->atoms.ref().end()) { // atom does not exist yet, so just add chrom->atoms.push_back(*joined_atom); atom = &chrom->atoms.back(); conn->atoms.ref().insert(atom); // we can take the atom as is } else { // we found the correct one, so merge into it atom = *atom_it; } // try and find read collection read_collection* rc_joined = new read_collection(fcol->left_limit, (*c_it)->right_limit, false, atom); greader_refsorted_list<read_collection*>::iterator rc_it = atom->reads.ref().find( rc_joined ); read_collection* rc; if (rc_it == atom->reads.ref().end()) { conn->reads.ref().push_to_end(*rc_joined); rc = &conn->reads.ref().get_end(); atom->reads->insert(rc); } else { rc = *rc_it; } rc->counts.ref()[index].count+=2; ++rc->counts.ref()[index].paired_count; --fcol->counts.ref()[index].count; --(*c_it)->counts.ref()[index].count; rc->counts.ref()[index].holes->push_back(std::make_pair(fcol->right_limit, (*c_it)->left_limit)); delete joined_atom; delete rc_joined; // ++fcol->paired[&*(*c_it)]; } else if (lastexon->id > firstexon->id && rr->id > lastexon->id) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Overlap \n" ); #endif bool matching = true; greader_refsorted_list<exon*>::iterator new_right_it = (*a_it)->exons.ref().begin(); greader_refsorted_list<exon*>::iterator old_left_it = first->exons->find(*new_right_it); if (old_left_it == first->exons->end()) { // if first one cannot be found, we already lost! matching = false; } // now loop over all remaining in tandem! 
for (; old_left_it != first->exons->end(); ++new_right_it, ++old_left_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Test " + std::to_string((*new_right_it)->id) +"\n" ); #endif if ( (*new_right_it)->id != (*old_left_it)->id ) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Unfound\n" ); #endif matching = false; break; } } if (matching) { raw_atom* joined_atom = new raw_atom(); raw_atom* overlap_atom = new raw_atom(); std::copy(first->exons.ref().begin(), first->exons.ref().end(),std::inserter( joined_atom->exons.ref(), joined_atom->exons.ref().end()) ); greader_refsorted_list<exon*>::iterator r_it = (*a_it)->exons.ref().begin(); for (; r_it !=(*a_it)->exons.ref().end() && (*r_it)->id <= lastexon->id; ++r_it ) { overlap_atom->exons->insert(*r_it); } for (; r_it !=(*a_it)->exons.ref().end(); ++r_it ) { joined_atom->exons->insert(*r_it); } ////// JOINED // try to find JOINED atom in connected raw_atom* atom_joined; greader_refsorted_list<raw_atom*>::iterator atom_it = conn->atoms.ref().find( joined_atom ); if (atom_it == conn->atoms.ref().end()) { // atom does not exist yet, so just add chrom->atoms.push_back(*joined_atom); atom_joined = &chrom->atoms.back(); conn->atoms.ref().insert(atom_joined); // we can take the atom as is } else { // we found the correct one, so merge into it atom_joined = *atom_it; } // try and find JOINED read collection read_collection* rc_joined = new read_collection(fcol->left_limit, (*c_it)->right_limit, false, atom_joined); greader_refsorted_list<read_collection*>::iterator rc_it = atom_joined->reads.ref().find( rc_joined ); read_collection* rcj; if (rc_it == atom_joined->reads.ref().end()) { conn->reads.ref().push_to_end(*rc_joined); rcj = &conn->reads.ref().get_end(); atom_joined->reads->insert(rcj); } else { rcj = *rc_it; } ////// OVERLAP if (options::Instance()->is_create_overlap_merge_read()) { // try to find OVERLAP atom in connected raw_atom* atom_overlap; atom_it = conn->atoms.ref().find( overlap_atom ); if (atom_it == conn->atoms.ref().end()) { // atom does not exist yet, so just add chrom->atoms.push_back(*overlap_atom); atom_overlap = &chrom->atoms.back(); conn->atoms.ref().insert(atom_overlap); // we can take the atom as is } else { // we found the correct one, so merge into it atom_overlap = *atom_it; } // try and find OVERLAP read collection read_collection* rc_overlap = new read_collection((*c_it)->left_limit, fcol->right_limit, false, atom_overlap); rc_it = atom_overlap->reads.ref().find( rc_overlap ); read_collection* rco; if (rc_it == atom_overlap->reads.ref().end()) { conn->reads.ref().push_to_end(*rc_overlap); rco = &conn->reads.ref().get_end(); atom_overlap->reads->insert(rco); } else { rco = *rc_it; } ++rco->counts.ref()[index].count; delete rc_overlap; } rcj->counts.ref()[index].count+=2; ++rcj->counts.ref()[index].paired_count; --fcol->counts.ref()[index].count; --(*c_it)->counts.ref()[index].count; delete overlap_atom; delete joined_atom; delete rc_joined; } // ++fcol->paired[&*(*c_it)]; } else { ++fcol->counts.ref()[index].paired[&*(*c_it)]; } (*c_it)->flag_id(*i_it); fcol->flag_id(*std::get<2>(find->second)); id_map.erase(find); // do insert size statistic // heuristic estimating as if all exons in between are part of the atom if (fcol->right_limit <= (*c_it)->left_limit) { ++conn->intel_count; rpos len = 0; if (lastexon->id == firstexon->id) { len = (*c_it)->left_limit - fcol->right_limit; } else { len += lastexon->end - fcol->right_limit + 1; len += (*c_it)->left_limit - firstexon->start + 1; for (unsigned int li = lastexon->id + 
1; li < firstexon->id; ++li) { len += conn->fossil_exons->at(li)->end - conn->fossil_exons->at(li)->start + 1; ; } } conn->avg_split += ( len - conn->avg_split) / conn->intel_count; } } } } } } for (greader_refsorted_list<raw_atom*>::iterator a_it = atom_start; a_it != atom_end; ++a_it) { for (greader_refsorted_list<read_collection*>::iterator c_it = (*a_it)->reads.ref().begin(); c_it != (*a_it)->reads.ref().end(); ++c_it) { (*c_it)->clean_flagged_ids(); } } } void bam_reader::filter_bins(connected* conn, chromosome* chrom) { #ifdef ALLOW_DEBUG logger::Instance()->debug("------------ Filter Bins\n"); #endif for (greader_refsorted_list<raw_atom*>::iterator a_it = conn->atoms->begin(); a_it != conn->atoms->end(); ++a_it) { // test if this atom reads overlapping far enough to left and right if ((*a_it)->exons->size() < 2 || !(*a_it)->has_coverage) { continue; } // if ((*a_it)->reads.ref().empty()) { // continue; // } // if ((*a_it)->reads->size() == 1 && (*a_it)->exons->size() > 2 ) { // (*a_it)->reads.ref().clear(); // (*a_it)->count = 0; // (*a_it)->paired_count = 0; // continue; // } rpos begin_end = (*(*a_it)->exons->begin())->end; rpos end_start = (*(*a_it)->exons->rbegin())->start; bool cut_start = true; bool cut_end = true; for(gmap<int, raw_series_counts>::iterator rsci = (*a_it)->raw_series.begin(); rsci != (*a_it)->raw_series.end(); ++rsci) { if ( (rsci->second.lefts->size() == 0 && rsci->second.hole_ends->size() == 0) || (rsci->second.rights->size() == 0 && rsci->second.hole_starts->size() == 0) ) { continue; } rpos left, right; if (rsci->second.lefts->size() == 0) { left = rsci->second.hole_ends->begin()->first; } else if (rsci->second.hole_ends->size() == 0) { left = rsci->second.lefts->begin()->first; } else { left = std::min(rsci->second.lefts->begin()->first, rsci->second.hole_ends->begin()->first); } if (rsci->second.rights->size() == 0) { right = rsci->second.hole_starts->rbegin()->first; } else if (rsci->second.hole_starts->size() == 0) { right = rsci->second.rights->rbegin()->first; } else { right = std::max(rsci->second.rights->rbegin()->first, rsci->second.hole_starts->rbegin()->first); } // logger::Instance()->info("RC " + std::to_string(left) + "-" + std::to_string(right) + " " + std::to_string(begin_end) + "-" + std::to_string(end_start) + "\n"); if (begin_end - left + 1 >= options::Instance()->get_min_junction_anchor()) { cut_start = false; } if (right - end_start + 1 >= options::Instance()->get_min_junction_anchor()) { cut_end = false; } } // logger::Instance()->info("----- Atom " + (*a_it)->to_string() + " " + std::to_string(cut_start) + std::to_string(cut_end) + "\n"); raw_atom* cut_atom = new raw_atom(); if (cut_start && cut_end) { if ((*a_it)->exons->size() == 2) { (*a_it)->reads.ref().clear(); (*a_it)->has_coverage = false; delete cut_atom; continue; } std::copy(std::next((*a_it)->exons->begin()), std::prev((*a_it)->exons->end()),std::inserter( cut_atom->exons.ref(), cut_atom->exons.ref().end()) ); } else if (cut_start) { std::copy(std::next((*a_it)->exons->begin()), (*a_it)->exons->end(),std::inserter( cut_atom->exons.ref(), cut_atom->exons.ref().end()) ); } else if (cut_end) { std::copy((*a_it)->exons->begin(), std::prev((*a_it)->exons->end()),std::inserter( cut_atom->exons.ref(), cut_atom->exons.ref().end()) ); } else { delete cut_atom; continue; } // logger::Instance()->info("New " + cut_atom->to_string() + "\n"); raw_atom* atom; greader_refsorted_list<raw_atom*>::iterator atom_it = conn->atoms.ref().find( cut_atom ); if (atom_it == conn->atoms.ref().end()) 
{ // atom does not exist yet, so just add chrom->atoms.push_back(*cut_atom); atom = &chrom->atoms.back(); conn->atoms.ref().insert(atom); // we can take the atom as is } else { // we found the correct one, so merge into it atom = *atom_it; } // transfer over the series for(gmap<int, raw_series_counts>::iterator rsci = (*a_it)->raw_series.begin(); rsci != (*a_it)->raw_series.end(); ++rsci) { atom->raw_series[rsci->first].add_other_max_min(rsci->second, (*atom->exons->begin())->start, (*atom->exons->rbegin())->end); atom->has_coverage = true; } for( paired_map<raw_atom*, gmap<int, rcount> >::iterator pmi = (*a_it)->paired.begin(); pmi != (*a_it)->paired.end(); ++pmi) { for (gmap<int, rcount>::iterator pci = pmi->second.begin(); pci != pmi->second.end(); ++pci) { atom->paired[pmi->first][pci->first] += pci->second; } } (*a_it)->reads.ref().clear(); (*a_it)->has_coverage = false; delete cut_atom; } #ifdef ALLOW_DEBUG logger::Instance()->debug("------------ Filter Bins Out\n"); #endif } void bam_reader::reduce_reads(connected* conn) { for (greader_refsorted_list<raw_atom*>::iterator a_it = conn->atoms->begin(); a_it != conn->atoms->end(); ++a_it) { #ifdef ALLOW_DEBUG logger::Instance()->debug("------------new ATOM\n"); logger::Instance()->debug((*a_it)->to_string() + "\n"); #endif for (greader_refsorted_list<read_collection*>::iterator c_it = (*a_it)->reads.ref().begin(); c_it != (*a_it)->reads.ref().end(); ++c_it) { for (gmap<int, read_collection::raw_count >::iterator co_it = (*c_it)->counts->begin(); co_it != (*c_it)->counts->end(); ++co_it) { int id = co_it->first; // logger::Instance()->debug("Collection " + std::to_string(co_it->second.count) + " " + std::to_string((co_it->second.paired_count) + "\n"); // logger::Instance()->debug("Add collection \n"); // basic info to break down (*a_it)->raw_series[id].paired_count += co_it->second.paired_count; (*a_it)->raw_series[id].count += co_it->second.count; (*a_it)->length_filtered = (*a_it)->length_filtered && (*c_it)->length_filtered; if (co_it->second.count != 0) { // we still add the rest as 0 for filtering, as we still saw them, just moved them (*a_it)->has_coverage = true; } rcount fragcount = co_it->second.count - co_it->second.paired_count; std::map< rpos,rcount >::iterator fl = (*a_it)->raw_series[id].lefts->find((*c_it)->left_limit); if (fl == (*a_it)->raw_series[id].lefts->end()) { (*a_it)->raw_series[id].lefts->insert(std::make_pair((*c_it)->left_limit, fragcount)); } else { fl->second += fragcount; } std::map< rpos,rcount >::iterator fr = (*a_it)->raw_series[id].rights->find((*c_it)->right_limit); if (fr == (*a_it)->raw_series[id].rights->end()) { (*a_it)->raw_series[id].rights->insert(std::make_pair((*c_it)->right_limit, fragcount)); } else { fr->second += fragcount; } // transfer down coverage info if ((*a_it)->exons.ref().size() == 1) { // if we just have one exon, start to end are bases, we also just use the start } else { // more than two exons, so there is a fixed junction guaranteed in the middle // left (*a_it)->raw_series[id].total_rights += fragcount; (*a_it)->raw_series[id].total_lefts += fragcount; } for (std::deque<std::pair<rpos, rpos> >::iterator hole_it = co_it->second.holes->begin(); hole_it != co_it->second.holes->end();++hole_it) { // unfortunately we need to find the actual exons for this std::map< rpos,rcount >::iterator fl = (*a_it)->raw_series[id].hole_starts->find(hole_it->first); if (fl == (*a_it)->raw_series[id].hole_starts->end()) { (*a_it)->raw_series[id].hole_starts->insert(std::make_pair(hole_it->first, 
1)); } else { fl->second += 1; } std::map< rpos,rcount >::iterator fr = (*a_it)->raw_series[id].hole_ends->find(hole_it->second); if (fr == (*a_it)->raw_series[id].hole_ends->end()) { (*a_it)->raw_series[id].hole_ends->insert(std::make_pair(hole_it->second, 1)); } else { fr->second += 1; } } // now transfer actual paired for(paired_map<read_collection*, rcount >::iterator pair_it = co_it->second.paired.begin(); pair_it != co_it->second.paired.end(); ++pair_it) { // logger::Instance()->debug("Add paired \n"); (*a_it)->paired[pair_it->first->parent][id] += pair_it->second; } } } (*a_it)->reads.ref().clear(); } for (double_deque_ref<read_collection>::iterator rc_it = conn->reads->begin(); rc_it != conn->reads->end(); ++rc_it) { rc_it->ref().clear(); } conn->reads->begin()->ref().clear(); } void bam_reader::reset_reads(chromosome* chrom) { #ifdef ALLOW_DEBUG logger::Instance()->debug("Reset reads \n"); #endif // when this is called ALL read collections should be removed down chrom->reads.clear(); for( greader_list<connected>::iterator con = chrom->chrom_fragments.begin(); con!= chrom->chrom_fragments.end(); ++con) { lazy< std::deque<read_collection> > new_inner = chrom->reads.add_inner(); con->reads.ref().push_back(new_inner); } }
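Stepping back from the reader internals above: the core bookkeeping in insert_fragment is a classic sorted-interval merge, where a new fragment [left, right] either becomes a new connected region, extends an existing one, or collapses a run of overlapping regions into the first of them. Below is a minimal sketch of just that merge step, assuming a sorted list of disjoint (start, end) tuples; the real code additionally migrates fossil exons, atoms, and read collections into the surviving region and recomputes the insert-size average.

# Minimal sketch of the overlap-merge step in insert_fragment above.
# Assumes regions is a sorted list of disjoint (start, end) tuples.
def insert_fragment(regions, left, right):
    merged = (left, right)
    out = []
    placed = False
    for start, end in regions:
        if end >= merged[0] and start <= merged[1]:
            # Overlapping region: widen the merged fragment to cover it.
            merged = (min(merged[0], start), max(merged[1], end))
        elif start > merged[1] and not placed:
            # First region strictly to the right: emit the merged fragment.
            out.append(merged)
            placed = True
            out.append((start, end))
        else:
            out.append((start, end))
    if not placed:
        out.append(merged)
    return out

# Merging (5, 12) into [(1, 4), (6, 9), (11, 15), (20, 30)] collapses the
# two overlapped regions: [(1, 4), (5, 15), (20, 30)].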
def scheduled_operation_get(context, id, columns_to_join=None):
    # Avoid a mutable [] default (see the note below); fall back lazily.
    return IMPL.scheduled_operation_get(context, id, columns_to_join or [])
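The shim above originally used a mutable default argument (columns_to_join=[]). A quick illustration of why that idiom is avoided in Python: defaults are evaluated once, at definition time, and the same object is shared by every call that omits the argument.

# A mutable default is created once and reused across calls, so
# in-place mutations leak from one call to the next.
def bad(acc=[]):
    acc.append(1)
    return acc

print(bad())  # [1]
print(bad())  # [1, 1] -- same list object as the first call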
# b2blaze/connector.py
""" Copyright <NAME> 2018 """
import requests
import datetime
from requests.auth import HTTPBasicAuth
from b2blaze.b2_exceptions import B2Exception, B2AuthorizationError, B2InvalidRequestType
import sys
from hashlib import sha1
from b2blaze.utilities import b2_url_encode, decode_error, get_content_length, StreamWithHashProgress
from .api import BASE_URL, API_VERSION, API


class B2Connector(object):
    """ """
    def __init__(self, key_id, application_key):
        """
        :param key_id:
        :param application_key:
        """
        self.key_id = key_id
        self.application_key = application_key
        self.account_id = None
        self.auth_token = None
        self.authorized_at = None
        self.api_url = None
        self.download_url = None
        self.recommended_part_size = None
        self.api_session = None
        #TODO: Part Size
        self._authorize()

    @property
    def authorized(self):
        """
        :return:
        """
        if self.auth_token is None:
            return False
        else:
            if (datetime.datetime.utcnow() - self.authorized_at) > datetime.timedelta(hours=23):
                self._authorize()
            return True

    def _authorize(self):
        """
        :return:
        """
        path = BASE_URL + API.authorize
        result = requests.get(path, auth=HTTPBasicAuth(self.key_id, self.application_key))
        if result.status_code == 200:
            result_json = result.json()
            self.authorized_at = datetime.datetime.utcnow()
            self.account_id = result_json['accountId']
            self.auth_token = result_json['authorizationToken']
            self.api_url = result_json['apiUrl'] + API_VERSION
            self.download_url = result_json['downloadUrl'] + API_VERSION + API.download_file_by_id
            self.recommended_part_size = result_json['recommendedPartSize']
            self.api_session = requests.Session()
            self.api_session.headers.update({
                'Authorization': self.auth_token
            })
        else:
            raise B2Exception.parse(result)

    def make_request(self, path, method='get', headers={}, params={}, account_id_required=False):
        """
        :param path:
        :param method:
        :param headers:
        :param params:
        :param account_id_required:
        :return:
        """
        if self.authorized:
            url = self.api_url + path
            if method == 'get':
                return self.api_session.get(url, headers=headers)
            elif method == 'post':
                if account_id_required:
                    params.update({
                        'accountId': self.account_id
                    })
                headers.update({
                    'Content-Type': 'application/json'
                })
                return self.api_session.post(url, json=params, headers=headers)
            else:
                raise B2InvalidRequestType('Request type must be get or post')
        else:
            raise B2AuthorizationError('Unknown Error')

    def upload_file(self, file_contents, file_name, upload_url, auth_token, direct=False,
                    mime_content_type=None, content_length=None, progress_listener=None):
        """
        :param file_contents:
        :param file_name:
        :param upload_url:
        :param auth_token:
        :param mime_content_type:
        :param content_length:
        :param progress_listener:
        :return:
        """
        if hasattr(file_contents, 'read'):
            if content_length is None:
                content_length = get_content_length(file_contents)
            file_sha = 'hex_digits_at_end'
            data = StreamWithHashProgress(stream=file_contents, progress_listener=progress_listener)
            content_length += data.hash_size()
        else:
            if content_length is None:
                content_length = len(file_contents)
            file_sha = sha1(file_contents).hexdigest()
            data = file_contents
        headers = {
            'Content-Type': mime_content_type or 'b2/x-auto',
            'Content-Length': str(content_length),
            'X-Bz-Content-Sha1': file_sha,
            'X-Bz-File-Name': b2_url_encode(file_name),
            'Authorization': auth_token
        }
        return requests.post(upload_url, headers=headers, data=data)

    def upload_part(self, file_contents, content_length, part_number, upload_url, auth_token,
                    progress_listener=None):
        """
        :param file_contents:
        :param content_length:
        :param part_number:
        :param upload_url:
        :param auth_token:
        :param progress_listener:
        :return:
        """
        file_sha = 'hex_digits_at_end'
        data = StreamWithHashProgress(stream=file_contents, progress_listener=progress_listener)
        content_length += data.hash_size()
        headers = {
            'Content-Length': str(content_length),
            'X-Bz-Content-Sha1': file_sha,
            'X-Bz-Part-Number': str(part_number),
            'Authorization': auth_token
        }
        return requests.post(upload_url, headers=headers, data=data)

    def download_file(self, file_id):
        """
        :param file_id:
        :return:
        """
        url = self.download_url
        params = {
            'fileId': file_id
        }
        headers = {
            'Authorization': self.auth_token
        }
        return requests.get(url, headers=headers, params=params)
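For orientation, here is a hypothetical usage sketch of the connector above. The credentials are placeholders, and '/b2_list_buckets' stands in for whatever endpoint constant the surrounding api module actually defines; treat both as assumptions.

from b2blaze.connector import B2Connector

# Placeholder credentials; real values come from the B2 account console.
conn = B2Connector(key_id='<key id>', application_key='<application key>')

# POST bodies are sent as JSON; account_id_required=True makes
# make_request inject our accountId into the request parameters.
resp = conn.make_request('/b2_list_buckets', method='post',
                         params={}, account_id_required=True)
print(resp.status_code, resp.json())

Note that make_request declares mutable defaults (headers={}, params={}) and mutates params in the account_id_required branch, so passing explicit dicts, as above, is the safer calling convention.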
# Filters lines of index.html: drops code-signature <div> lines and hides
# "Type:" paragraphs. The output path resolves to the same file, so the
# page is rewritten in place.
from pathlib import Path
import re

html_file = Path(__file__).parent.parent / 'index.html'

text = ''
with open(html_file, 'r') as f:
    text = f.readlines()

out = []
write_line = True
in_type = False  # tracked below but never read; effectively dead state
for line in text:
    write_line = True
    if re.search('<section class=\'p2 mb2 clearfix bg-white minishadow\'>', line):
        in_type = False
    if re.search('<h3 class=\'fl m0\' id=\'eqn_', line):
        in_type = True
    if re.search('<h3 class=\'fl m0\' id=\'obj_', line):
        in_type = True
    if re.search('<div class=\'pre p1 fill-light mt0\'>', line):
        write_line = False  # drop the signature block line entirely
    if re.search('^ *Type:', line) and re.search('<p>', out[-1]):
        # Rewrite the tail of the preceding line so the "Type:" paragraph
        # is hidden in the rendered page.
        out[-1] = f'{out[-1][0: -2]} style="display: none;">'
    if write_line:
        out.append(line)

with open(html_file.parent / 'index.html', 'w') as f:
    f.writelines(out)
<reponame>ckski/rgeometry<filename>src/orientation.rs<gh_stars>0 use std::cmp::Ordering; use crate::data::Vector; use crate::PolygonScalar; #[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Copy, Clone)] pub enum Orientation { CounterClockWise, ClockWise, CoLinear, } use Orientation::*; #[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Copy, Clone)] pub enum SoS { CounterClockWise, ClockWise, } // let slope1 = (2^q - 2^r) * (p - q); // let slope2 = (2^q - 2^p) * (r - q); impl Orientation { /// Determine the direction you have to turn if you walk from `p1` /// to `p2` to `p3`. /// /// For fixed-precision types (i8,i16,i32,i64,etc), this function is /// guaranteed to work for any input and never cause any arithmetic overflows. /// /// # Polymorphism /// /// This function works with both [Points](crate::data::Point) and [Vectors](Vector). You should prefer to /// use [Point::orient](crate::data::Point::orient) when possible. /// /// # Examples /// /// ```rust /// # use rgeometry::data::Point; /// # use rgeometry::Orientation; /// let p1 = Point::new([ 0, 0 ]); /// let p2 = Point::new([ 0, 1 ]); // One unit above p1. /// // (0,0) -> (0,1) -> (0,2) == Orientation::CoLinear /// assert!(Orientation::new(&p1, &p2, &Point::new([ 0, 2 ])).is_colinear()); /// // (0,0) -> (0,1) -> (-1,2) == Orientation::CounterClockWise /// assert!(Orientation::new(&p1, &p2, &Point::new([ -1, 2 ])).is_ccw()); /// // (0,0) -> (0,1) -> (1,2) == Orientation::ClockWise /// assert!(Orientation::new(&p1, &p2, &Point::new([ 1, 2 ])).is_cw()); /// ``` /// pub fn new<T>(p1: &[T; 2], p2: &[T; 2], p3: &[T; 2]) -> Orientation where T: PolygonScalar, { // raw_arr_turn(p, q, r) match T::cmp_slope(p1, p2, p3) { Ordering::Less => Orientation::ClockWise, Ordering::Equal => Orientation::CoLinear, Ordering::Greater => Orientation::CounterClockWise, } } /// Locate `p2` in relation to the line determined by the point `p1` and the direction /// vector. /// /// For fixed-precision types (i8,i16,i32,i64,etc), this function is /// guaranteed to work for any input and never cause any arithmetic overflows. /// /// This function is identical to [`Orientation::new`]`(p1, p1+v, p2)` but will never /// cause arithmetic overflows even if `p+v` would overflow. /// /// # Examples /// /// ```rust /// # use rgeometry::data::{Vector,Point}; /// # use rgeometry::Orientation; /// let v = Vector([ 1, 1 ]); // Vector pointing to the top-right corner. 
/// let p1 = Point::new([ 5, 5 ]); /// assert!(Orientation::along_vector(&p1, &v, &Point::new([ 6, 6 ])).is_colinear()); /// assert!(Orientation::along_vector(&p1, &v, &Point::new([ 7, 8 ])).is_ccw()); /// assert!(Orientation::along_vector(&p1, &v, &Point::new([ 8, 7 ])).is_cw()); /// ``` pub fn along_vector<T>(p1: &[T; 2], vector: &Vector<T, 2>, p2: &[T; 2]) -> Orientation where T: PolygonScalar, { match T::cmp_vector_slope(&vector.0, p1, p2) { Ordering::Less => Orientation::ClockWise, Ordering::Equal => Orientation::CoLinear, Ordering::Greater => Orientation::CounterClockWise, } } pub fn along_perp_vector<T>(p1: &[T; 2], vector: &Vector<T, 2>, p2: &[T; 2]) -> Orientation where T: PolygonScalar, { match T::cmp_perp_vector_slope(&vector.0, p1, p2) { Ordering::Less => Orientation::ClockWise, Ordering::Equal => Orientation::CoLinear, Ordering::Greater => Orientation::CounterClockWise, } } pub fn is_colinear(self) -> bool { matches!(self, Orientation::CoLinear) } pub fn is_ccw(self) -> bool { matches!(self, Orientation::CounterClockWise) } pub fn is_cw(self) -> bool { matches!(self, Orientation::ClockWise) } #[must_use] pub fn then(self, other: Orientation) -> Orientation { match self { Orientation::CoLinear => other, _ => self, } } pub fn break_ties(self, a: u32, b: u32, c: u32) -> SoS { match self { CounterClockWise => SoS::CounterClockWise, ClockWise => SoS::ClockWise, CoLinear => SoS::new(a, b, c), } } pub fn sos(self, other: SoS) -> SoS { match self { CounterClockWise => SoS::CounterClockWise, ClockWise => SoS::ClockWise, CoLinear => other, } } // pub fn around_origin<T>(q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Ord + Mul<Output = T> + Clone + Extended, // { // raw_arr_turn_origin(q, r) // } #[must_use] pub fn reverse(self) -> Orientation { match self { Orientation::CounterClockWise => Orientation::ClockWise, Orientation::ClockWise => Orientation::CounterClockWise, Orientation::CoLinear => Orientation::CoLinear, } } pub fn ccw_cmp_around_with<T>( vector: &Vector<T, 2>, p1: &[T; 2], p2: &[T; 2], p3: &[T; 2], ) -> Ordering where T: PolygonScalar, { let aq = Orientation::along_vector(p1, vector, p2); let ar = Orientation::along_vector(p1, vector, p3); // let on_zero = |d: &[T; 2]| { // !((d[0] < p[0] && z[0].is_positive()) // || (d[1] < p[1] && z[1].is_positive()) // || (d[0] > p[0] && z[0].is_negative()) // || (d[1] > p[1] && z[1].is_negative())) // }; let on_zero = |d: &[T; 2]| match Orientation::along_perp_vector(p1, vector, d) { CounterClockWise => false, ClockWise => true, CoLinear => true, }; let cmp = || match Orientation::new(p1, p2, p3) { CounterClockWise => Ordering::Less, ClockWise => Ordering::Greater, CoLinear => Ordering::Equal, }; match (aq, ar) { // Easy cases: Q and R are on either side of the line p->z: (CounterClockWise, ClockWise) => Ordering::Less, (ClockWise, CounterClockWise) => Ordering::Greater, // A CoLinear point may be in front of p->z (0 degree angle) or behind // it (180 degree angle). If the other point is clockwise, it must have an // angle greater than 180 degrees and must therefore be greater than the // colinear point. (CoLinear, ClockWise) => Ordering::Less, (ClockWise, CoLinear) => Ordering::Greater, // if Q and R are on the same side of P->Z then the most clockwise point // will have the smallest angle. (CounterClockWise, CounterClockWise) => cmp(), (ClockWise, ClockWise) => cmp(), // CoLinear points have an angle of either 0 degrees or 180 degrees. on_zero // can distinguish these two cases: // on_zero(p) => 0 degrees. 
// !on_zero(p) => 180 degrees. (CounterClockWise, CoLinear) => { if on_zero(p3) { Ordering::Greater // angle(r) = 0 & 0 < angle(q) < 180. Thus: Q > R } else { Ordering::Less // angle(r) = 180 & 0 < angle(q) < 180. Thus: Q < R } } (CoLinear, CounterClockWise) => { if on_zero(p2) { Ordering::Less } else { Ordering::Greater } } (CoLinear, CoLinear) => match (on_zero(p2), on_zero(p3)) { (true, true) => Ordering::Equal, (false, false) => Ordering::Equal, (true, false) => Ordering::Less, (false, true) => Ordering::Greater, }, } } } // How does the line from (0,0) to q to r turn? // pub fn raw_arr_turn_origin<T>(q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Ord + Mul<Output = T> + Clone, // // for<'a> &'a T: Mul<Output = T>, // { // let [ux, uy] = q.clone(); // let [vx, vy] = r.clone(); // match (ux * vy).cmp(&(uy * vx)) { // Ordering::Less => ClockWise, // Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // pub fn raw_arr_turn<T>(p: &[T; 2], q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Clone + Mul<T, Output = T> + Sub<Output = T> + Ord, // // for<'a> &'a T: Sub<Output = T>, // { // let [ux, uy] = raw_arr_sub(q, p); // let [vx, vy] = raw_arr_sub(r, p); // match (ux * vy).cmp(&(uy * vx)) { // Ordering::Less => ClockWise, // Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // pub fn raw_arr_turn_2<T>(p: &[T; 2], q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Clone + Mul<T, Output = T> + Sub<Output = T> + Ord, // // for<'a> &'a T: Sub<Output = T>, // { // let slope1 = (q[1].clone() - p[1].clone()) * (r[0].clone() - q[0].clone()); // let slope2 = (r[1].clone() - q[1].clone()) * (q[0].clone() - p[0].clone()); // match slope1.cmp(&slope2) { // Ordering::Less => ClockWise, // Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // pub fn extended_orientation_2<T>(p: &[T; 2], q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Extended, // { // // let slope1 = (q[1].clone() - p[1].clone()) * (r[0].clone() - q[0].clone()); // // let slope2 = (r[1].clone() - q[1].clone()) * (q[0].clone() - p[0].clone()); // let ux = q[0].clone().extend_signed() - p[0].clone().extend_signed(); // let uy = q[1].clone().extend_signed() - p[1].clone().extend_signed(); // let vx = r[0].clone().extend_signed() - p[0].clone().extend_signed(); // let vy = r[1].clone().extend_signed() - p[1].clone().extend_signed(); // let ux_vy_signum = ux.signum() * vy.signum(); // let uy_vx_signum = uy.signum() * vx.signum(); // match ux_vy_signum.cmp(&uy_vx_signum).then_with(|| { // (ux.do_unsigned_abs() * vy.do_unsigned_abs()) // .cmp(&(uy.do_unsigned_abs() * vx.do_unsigned_abs())) // }) { // Ordering::Less => ClockWise, // Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // #[inline(never)] // pub fn extended_orientation_3<T>(p: &[T; 2], q: &[T; 2], r: &[T; 2]) -> Orientation // where // T: Extended, // { // // let slope1 = (q[1].clone() - p[1].clone()) * (r[0].clone() - q[0].clone()); // // let slope2 = (r[1].clone() - q[1].clone()) * (q[0].clone() - p[0].clone()); // let (ux, ux_neg) = q[0].clone().diff(p[0].clone()); // let (vy, vy_neg) = r[1].clone().diff(p[1].clone()); // let ux_vy_signum = ux_neg.bitxor(vy_neg); // let (uy, uy_neg) = q[1].clone().diff(p[1].clone()); // let (vx, vx_neg) = r[0].clone().diff(p[0].clone()); // let uy_vx_signum = uy_neg.bitxor(vx_neg); // match uy_vx_signum // .cmp(&ux_vy_signum) // .then_with(|| (ux * vy).cmp(&(uy * vx))) // { // Ordering::Less => ClockWise, 
// Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // pub fn extended_orientation_i64(p: &[i64; 2], q: &[i64; 2], r: &[i64; 2]) -> Ordering { // crate::Extended::cmp_slope(p, q, r) // } // pub fn turn_i64(p: &[i64; 2], q: &[i64; 2], r: &[i64; 2]) -> Orientation { // raw_arr_turn_2(p, q, r) // } // pub fn turn_bigint(p: &[i64; 2], q: &[i64; 2], r: &[i64; 2]) -> Orientation { // use num_bigint::*; // let slope1: BigInt = // (BigInt::from(q[1]) - BigInt::from(p[1])) * (BigInt::from(r[0]) - BigInt::from(q[0])); // let slope2: BigInt = // (BigInt::from(r[1]) - BigInt::from(q[1])) * (BigInt::from(q[0]) - BigInt::from(p[0])); // match slope1.cmp(&slope2) { // Ordering::Less => ClockWise, // Ordering::Greater => CounterClockWise, // Ordering::Equal => CoLinear, // } // } // pub fn turn_i64_fast_or_slow(p: &[i64; 2], q: &[i64; 2], r: &[i64; 2]) -> Orientation { // turn_t_fast_path(p, q, r).unwrap_or_else(|| extended_orientation_3(p, q, r)) // } // pub fn turn_t_fast_path<T>(p: &[T; 2], q: &[T; 2], r: &[T; 2]) -> Option<Orientation> // where // T: Extended, // { // // let slope1 = (q[1].clone().checked_sub(&p[1].clone())?) // // .checked_mul(&r[0].clone().checked_sub(&q[0].clone())?)? // // .extend_signed(); // // let slope2 = (r[1].clone().checked_sub(&q[1].clone())?) // // .checked_mul(&q[0].clone().checked_sub(&p[0].clone())?)? // // .extend_signed(); // let slope1 = (q[1].clone().checked_sub(&p[1].clone())?.extend_signed()) // .checked_mul(&r[0].clone().checked_sub(&q[0].clone())?.extend_signed()); // let slope2 = (r[1].clone().checked_sub(&q[1].clone())?.extend_signed()) // .checked_mul(&q[0].clone().checked_sub(&p[0].clone())?.extend_signed()); // match slope1.cmp(&slope2) { // Ordering::Less => Some(ClockWise), // Ordering::Greater => Some(CounterClockWise), // Ordering::Equal => Some(CoLinear), // } // } // Sort 'p' and 'q' counterclockwise around (0,0) along the 'z' axis. // pub fn ccw_cmp_around_origin_with<T>(z: &[T; 2], p: &[T; 2], q: &[T; 2]) -> Ordering // where // T: Clone + Ord + Mul<Output = T> + Neg<Output = T>, // { // let [zx, zy] = z; // let b: &[T; 2] = &[zy.clone().neg(), zx.clone()]; // let ap = raw_arr_turn_origin(z, p); // let aq = raw_arr_turn_origin(z, q); // let on_zero = |d: &[T; 2]| match raw_arr_turn_origin(b, d) { // CounterClockWise => false, // ClockWise => true, // CoLinear => true, // }; // let cmp = match raw_arr_turn_origin(p, q) { // CounterClockWise => Ordering::Less, // ClockWise => Ordering::Greater, // CoLinear => Ordering::Equal, // }; // match (ap, aq) { // (CounterClockWise, CounterClockWise) => cmp, // (CounterClockWise, ClockWise) => Ordering::Less, // (CounterClockWise, CoLinear) => { // if on_zero(q) { // Ordering::Greater // } else { // Ordering::Less // } // } // (ClockWise, CounterClockWise) => Ordering::Greater, // (ClockWise, ClockWise) => cmp, // (ClockWise, CoLinear) => Ordering::Less, // (CoLinear, CounterClockWise) => { // if on_zero(p) { // Ordering::Less // } else { // Ordering::Greater // } // } // (CoLinear, ClockWise) => Ordering::Less, // (CoLinear, CoLinear) => match (on_zero(p), on_zero(q)) { // (true, true) => Ordering::Equal, // (false, false) => Ordering::Equal, // (true, false) => Ordering::Less, // (false, true) => Ordering::Greater, // }, // } // } // https://arxiv.org/abs/math/9410209 // Simulation of Simplicity. // Break ties (ie colinear orientations) in an arbitrary but consistent way. 
impl SoS { // p: Point::new([a, 2^a]) // q: Point::new([b, 2^b]) // r: Point::new([c, 2^c]) // new(a,b,c) == Orientation::new(p, q, r) pub fn new(a: u32, b: u32, c: u32) -> SoS { assert_ne!(a, b); assert_ne!(b, c); assert_ne!(c, a); // Combinations: // a<b a<c c<b // b a c => CW _ X _ // c b a => CW _ _ X // a c b => CW X X X // b c a => CCW _ _ _ // c a b => CCW X _ X // a b c => CCW X X _ let ab = a < b; let ac = a < c; let cb = c < b; if ab ^ ac ^ cb { SoS::ClockWise } else { SoS::CounterClockWise } // if a < b { // if a < c && c < b { // SoS::ClockWise // a c b // } else { // SoS::CounterClockWise // a b c, c a b // } // } else if b < c && c < a { // SoS::CounterClockWise // b c a // } else { // SoS::ClockWise // b a c, c b a // } } pub fn orient(self) -> Orientation { match self { SoS::CounterClockWise => Orientation::CounterClockWise, SoS::ClockWise => Orientation::ClockWise, } } #[must_use] pub fn reverse(self) -> SoS { match self { SoS::CounterClockWise => SoS::ClockWise, SoS::ClockWise => SoS::CounterClockWise, } } } #[cfg(test)] mod tests { use super::*; use crate::data::Point; use num::BigInt; use proptest::prelude::*; use test_strategy::proptest; #[test] fn orientation_limit_1() { PolygonScalar::cmp_slope( &[i8::MAX, i8::MAX], &[i8::MIN, i8::MIN], &[i8::MIN, i8::MIN], ); } #[test] fn cmp_slope_1() { assert_eq!( PolygonScalar::cmp_slope(&[0i8, 0], &[1, 1], &[2, 2],), Ordering::Equal ); } #[test] fn cmp_slope_2() { assert_eq!( Orientation::new(&[0i8, 0], &[0, 1], &[2, 2],), Orientation::ClockWise ); } #[test] fn orientation_limit_2() { let options = &[i8::MIN, i8::MAX, 0, -10, 10]; for [a, b, c, d, e, f] in crate::utils::permutations([options; 6]) { PolygonScalar::cmp_slope(&[a, b], &[c, d], &[e, f]); } } #[test] fn cmp_around_1() { use num_bigint::*; let pt1 = [BigInt::from(0), BigInt::from(0)]; let pt2 = [BigInt::from(-1), BigInt::from(1)]; // let pt2 = [BigInt::from(-717193444810564826_i64), BigInt::from(1)]; let vector = Vector([BigInt::from(1), BigInt::from(0)]); assert_eq!( Orientation::ccw_cmp_around_with(&vector, &pt1, &pt2, &pt1), Ordering::Greater ); } #[test] fn sos_unit1() { assert_eq!(SoS::new(0, 1, 2), SoS::CounterClockWise) } #[test] #[should_panic] fn sos_unit2() { SoS::new(0, 0, 1); } #[test] fn sos_unit3() { assert_eq!(SoS::new(99, 0, 1), SoS::CounterClockWise); } #[proptest] fn sos_eq_prop(a: u8, b: u8, c: u8) { if a != b && b != c && c != a { let (a, b, c) = (a as u32, b as u32, c as u32); let one = &BigInt::from(1); let big_a = BigInt::from(a); let big_b = BigInt::from(b); let big_c = BigInt::from(c); let p = Point::new([big_a, one << a]); let q = Point::new([big_b, one << b]); let r = Point::new([big_c, one << c]); prop_assert_eq!(SoS::new(a, b, c).orient(), Orientation::new(&p, &q, &r)); } } #[proptest] fn sos_rev_prop(a: u32, b: u32, c: u32) { if a != b && b != c && c != a { prop_assert_eq!(SoS::new(a, b, c), SoS::new(c, b, a).reverse()); prop_assert_eq!(SoS::new(a, b, c), SoS::new(a, c, b).reverse()); prop_assert_eq!(SoS::new(a, b, c), SoS::new(b, a, c).reverse()); prop_assert_eq!(SoS::new(a, b, c), SoS::new(b, c, a)); } } }
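As a cross-check on the predicate above: Orientation::new(p1, p2, p3) classifies the turn by the sign of the 2D cross product of (p2 - p1) and (p3 - p1); cmp_slope exists so that fixed-width integer types never overflow while computing it. A conceptual Python sketch of the same classification follows (Python's arbitrary-precision integers make the overflow question moot here).

# Conceptual sketch of Orientation::new: the turn direction of
# p1 -> p2 -> p3 is the sign of the cross product of (p2 - p1)
# and (p3 - p1).
def orientation(p1, p2, p3):
    cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
             - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    if cross > 0:
        return "CounterClockWise"
    if cross < 0:
        return "ClockWise"
    return "CoLinear"

assert orientation((0, 0), (0, 1), (0, 2)) == "CoLinear"
assert orientation((0, 0), (0, 1), (-1, 2)) == "CounterClockWise"
assert orientation((0, 0), (0, 1), (1, 2)) == "ClockWise"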
// PackTo packs a zip file or an unpacked directory into a CRX3 file.
func (e Extension) PackTo(dst string, pk *rsa.PrivateKey) error {
	if e.isEmpty() {
		return ErrPathNotFound
	}
	return Pack(e.String(), dst, pk)
}
use bytes::Buf; use shared::{Deserializable, DeserializationError, Serializable}; /// A shorthand way of referring to a type of [Message](crate::Message). A `Command` is a single byte, while a [Message](crate::Message) is about 90 bytes. #[derive(Debug, Clone, Copy, PartialEq)] pub enum Command { Version, Verack, GetBlocks, GetData, Block, GetHeaders, Headers, Inv, MemPool, MerkleBlock, CmpctBlock, GetBlockTxn, BlockTxn, SendCmpct, NotFound, Tx, Addr, Alert, FeeFilter, FilterAdd, FilterClear, FilterLoad, GetAddr, Ping, Pong, Reject, SendHeaders, } impl Command { pub fn bytes(&self) -> &[u8; 12] { match self { Command::Version => b"version\0\0\0\0\0", Command::Verack => b"verack\0\0\0\0\0\0", Command::GetBlocks => b"getblocks\0\0\0", Command::GetData => b"getdata\0\0\0\0\0", Command::Block => b"block\0\0\0\0\0\0\0", Command::GetHeaders => b"getheaders\0\0", Command::BlockTxn => b"blocktxn\0\0\0\0", Command::CmpctBlock => b"cmpctblock\0\0", Command::Headers => b"headers\0\0\0\0\0", Command::Inv => b"inv\0\0\0\0\0\0\0\0\0", Command::MemPool => b"mempool\0\0\0\0\0", Command::MerkleBlock => b"merkleblock\0", Command::SendCmpct => b"sendcmpct\0\0\0", Command::GetBlockTxn => b"getblocktxn\0", Command::NotFound => b"notfound\0\0\0\0", Command::Tx => b"tx\0\0\0\0\0\0\0\0\0\0", Command::Addr => b"addr\0\0\0\0\0\0\0\0", Command::Alert => b"alert\0\0\0\0\0\0\0", Command::FeeFilter => b"feefilter\0\0\0", Command::FilterAdd => b"filteradd\0\0\0", Command::FilterClear => b"filterclear\0", Command::FilterLoad => b"filterload\0\0", Command::GetAddr => b"getaddr\0\0\0\0\0", Command::Ping => b"ping\0\0\0\0\0\0\0\0", Command::Pong => b"pong\0\0\0\0\0\0\0\0", Command::Reject => b"reject\0\0\0\0\0\0", Command::SendHeaders => b"sendheaders\0", } } } impl Serializable for Command { fn serialize<W>(&self, target: &mut W) -> Result<(), std::io::Error> where W: std::io::Write, { target.write_all(self.bytes()) } } impl Deserializable for Command { fn deserialize<B: Buf>(mut reader: B) -> Result<Command, DeserializationError> { if reader.remaining() < 12 { return Err(DeserializationError::Parse(String::from( "Not enough data left in reader to deserialize Command", ))); } // Note: this is a zero-copy op if the underlying is bytes/bytesmut let buf = reader.copy_to_bytes(12); let command = match &buf[..12] { b"version\0\0\0\0\0" => Command::Version, b"verack\0\0\0\0\0\0" => Command::Verack, b"getblocks\0\0\0" => Command::GetBlocks, b"getdata\0\0\0\0\0" => Command::GetData, b"block\0\0\0\0\0\0\0" => Command::Block, b"getheaders\0\0" => Command::GetHeaders, b"blocktxn\0\0\0\0" => Command::BlockTxn, b"cmpctblock\0\0" => Command::CmpctBlock, b"headers\0\0\0\0\0" => Command::Headers, b"inv\0\0\0\0\0\0\0\0\0" => Command::Inv, b"mempool\0\0\0\0\0" => Command::MemPool, b"merkleblock\0" => Command::MerkleBlock, b"sendcmpct\0\0\0" => Command::SendCmpct, b"getblocktxn\0" => Command::GetBlockTxn, b"notfound\0\0\0\0" => Command::NotFound, b"tx\0\0\0\0\0\0\0\0\0\0" => Command::Tx, b"addr\0\0\0\0\0\0\0\0" => Command::Addr, b"alert\0\0\0\0\0\0\0" => Command::Alert, b"feefilter\0\0\0" => Command::FeeFilter, b"filteradd\0\0\0" => Command::FilterAdd, b"filterclear\0" => Command::FilterClear, b"filterload\0\0" => Command::FilterLoad, b"getaddr\0\0\0\0\0" => Command::GetAddr, b"ping\0\0\0\0\0\0\0\0" => Command::Ping, b"pong\0\0\0\0\0\0\0\0" => Command::Pong, b"reject\0\0\0\0\0\0" => Command::Reject, b"sendheaders\0" => Command::SendHeaders, _ => return Err(DeserializationError::parse(&buf, "Command")), }; Ok(command) } }
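The byte literals above follow the Bitcoin P2P header convention: the ASCII command name right-padded with NUL bytes to exactly 12 bytes (the doc comment's "single byte" appears to be a slip; the serialized field is 12 bytes). A small Python sketch of the same encode/decode rule, with names that are illustrative rather than part of the Rust crate:

# Sketch of the 12-byte command field the Rust Command type implements:
# the ASCII name, right-padded with NULs to exactly 12 bytes.
def encode_command(name: str) -> bytes:
    raw = name.encode("ascii")
    if len(raw) > 12:
        raise ValueError("command name too long")
    return raw.ljust(12, b"\x00")

def decode_command(buf: bytes) -> str:
    if len(buf) != 12:
        raise ValueError("command field must be 12 bytes")
    return buf.rstrip(b"\x00").decode("ascii")

assert encode_command("version") == b"version\x00\x00\x00\x00\x00"
assert decode_command(b"verack\x00\x00\x00\x00\x00\x00") == "verack"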
Sheree Zielke does not want vengeance against the man who strangled her daughter to death.

"I don't hate him, I hate what he did," Zielke said in an interview with CBC Radio's Edmonton AM.

"I feel more a sense, I don't know if I can call it this, but maybe compassion," she said. "Because of the life he's carved out for himself.

"I can't imagine the horror of spending one's life in prison, away from one's family, knowing that they're completely separated from their children, their loved ones and their friends."

Christopher Nagel was charged with first-degree murder on May 4, 2014, one day after his wife, Rienna Nagel, was found dead in the Spruce Grove home where they lived with their five children.

Nagel pleaded guilty to second-degree murder on Monday, on the first day of his trial, and will be handed an automatic life sentence. A sentencing hearing, scheduled for Thursday morning, will determine when Nagel should be eligible for parole. Zielke plans to read a victim impact statement in the Edmonton courtroom.

The early plea comes as a relief for the family.

"I don't think people are aware that the family of the victim live in their own kind of prison waiting for a trial to take place," said Zielke. "They're looking for completion, for it to stop so they can carry on. For us, there is a feeling of dread and I don't understand that.

"We're glad we're finally moving. We know that, hopefully by the end of Friday, it's done, it's over and we can put this behind us."

'You stay silent'

While the RCMP have never publicly released many details about the murder, investigators said after the body was discovered that they believed her death to be the result of domestic violence.

Rienna Nagel was 21 when she married her long-time sweetheart, Christopher. Fifteen years into their marriage, he was charged with her murder.

Although Zielke said she had concerns about the 15-year relationship, she kept quiet.

"When you have adult children and they make decisions about romance and marriage, a parent stands back. And even if you look at the [relationship] and think, 'that doesn't look so good,' you can't do anything.

"And then you just watch their life go by."

The couple had been sweethearts for years before marriage — Zielke had even set them up on their first date — and she didn't want to alienate her daughter.

"While you have misgivings, you stay silent, there is nothing you can do," said Zielke. "So when something like this happens, you don't know how to respond. You don't know how to feel, because you were damned if you did, and you were damned if you didn't."

'I have a God who will take care of justice'

In the years since her daughter's death, Zielke has found solace in an unusual place — inside the walls of the Edmonton Institution for Women.

She felt compelled to start volunteering there, in the weeks after her daughter's death. Since then, she's carved out a position as a parole coach, where she helps women — including convicted murderers — prepare for life on the outside.

"I went in as a volunteer through the chaplain's department and I didn't quite know what it was I was doing there. I was just led there," said Zielke.

"I told the women in the prison the other day, it kind of feels like a bit of a homecoming for me. Like they're somehow filling my daughter's place.

"And whether that is actually true or not, I don't know, but it feels like that."

Zielke credits her undying faith in God for her ability to forgive.
Although she has complete confidence in the courts, she knows an ultimate power will make the final judgement against her former son-in-law. "People look at me, people who hate him for what he did, and I can't," said Zielke.
#include <bits/stdc++.h>
#define ll long long
#define fi(a, b) for (int i = a; i <= b; i++)
#define fd(a, b) for (int i = a; i >= b; i--)
using namespace std;

int main() {
    ll n, m, sz, i, j, k, ans = 0, a[5003] = {0}, b[5003] = {0}, cnta = 0, cntb = 0;
    string s;
    cin >> s;
    sz = s.size();
    // Prefix counts: a[i] / b[i] = number of 'a' / 'b' among the first i characters.
    for (i = 0; i < sz; i++) {
        if (s[i] == 'a') {
            cnta++;
            a[i + 1] = cnta;
            b[i + 1] = cntb;
        } else {
            cntb++;
            b[i + 1] = cntb;
            a[i + 1] = cnta;
        }
    }
    // Longest subsequence of the form a...a b...b a...a: 'a's taken up to
    // position i, 'b's in positions i..j, 'a's after j (each character is
    // counted once, since it is either 'a' or 'b').
    for (i = 0; i < sz; i++) {
        ll mx = 0, first = 0, mid, last;
        first = a[i + 1];
        for (j = sz - 1; j >= i; j--) {
            mid = b[j + 1] - b[i];
            last = a[sz] - a[j + 1];
            mx = max(mx, mid + last);
        }
        mx = mx + first;
        ans = max(ans, mx);
    }
    cout << ans << endl;
    return 0;
}
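The double loop above is O(n^2) in the string length. For reference, here is a sketch of the same a...a b...b a...a maximization in a single pass, using a three-state DP; this is an alternative formulation, not code from the original source.

# Alternative O(n) formulation: dp[k] is the best subsequence length
# ending in state k, where 0 = leading 'a' run, 1 = middle 'b' run,
# 2 = trailing 'a' run.
def longest_aba(s: str) -> int:
    dp = [0, 0, 0]
    for ch in s:
        if ch == 'a':
            dp[2] = max(dp) + 1   # extend (or start) the trailing 'a' run
            dp[0] += 1            # extend the leading 'a' run
        else:
            dp[1] = max(dp[0], dp[1]) + 1  # extend (or start) the 'b' run
    return max(dp)

assert longest_aba("abba") == 4
assert longest_aba("baaba") == 4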
If you’ve ever had a near death experience (NDE) or tried astral projection, you may have seen the silver cord. The silver cord is often referred to as the “life thread” because it supplies energy to the physical body. If the silver cord is severed, the physical body can no longer be sustained and dies.

I had a conversation last week with a Christian about the silver cord. She told me she was raised by her Nanny, who experienced an NDE in the hospital. Her Nanny stated that she was floating near the ceiling of the hospital room watching her body on the operating table. After floating around she grabbed the silvery-looking cord still attached to her and used it to pull herself back into her physical body.

“I know it sounds like New Age misinformation,” my friend assured me, “but my Nanny was the most sincere person you could ever know.”

I told my friend not to worry. It wasn’t misinformation at all; the silver cord is known about in many religious circles, and it’s even in the Bible. A shocked look spread across her face, so I showed her the scripture: “Or ever the silver cord be loosed…Then shall the dust return to the earth as it was: and the spirit shall return unto God who gave it” (Ecclesiastes 12:6-7).

The Bible clearly supports the teachings of many Eastern religions: if the silver cord is severed, your consciousness can no longer be filtered through the physical vessel. Many Christian commentaries, such as the one written by the evangelist John Wesley, suggest that this silver cord mentioned in Ecclesiastes is referring to the spinal column. This is a poor interpretation. The Hebrew for “loosed” indicates that which is completely removed from the person, not just a breaking of the spinal column.

The appearance and location of the silver cord

People usually describe the silver cord as a wispy, etheric-looking filament about one inch in diameter. It’s silvery-grayish in color, and seems to have infinite elasticity, stretching on as far as the astral (emotional) body travels. It is connected to the heart chakra in each of the subtle bodies. A lot of people report seeing the silver cord attached to the head, but perhaps they are confusing this with the consciousness thread that is attached to the crown chakra.

The function of the silver cord

The sole purpose of the silver cord, or life thread, is to provide the subtle and physical bodies with vital energy. Think of it as a sort of umbilical cord. Just like a baby has an umbilical cord that receives and transfers physical nutrients from the mother, the silver cord serves as a sort of energetic umbilical cord to receive and transfer Prana. Prana is a Sanskrit term meaning “life.” Without Prana, or spirit, the physical body could not operate as it does. This is why, once the silver cord is severed, the physical body has no choice but to die.

I like to think of the physical body as a sensitive electromagnetic vehicle which filters and grounds spiritual energy and consciousness. By filter I mean that higher (subtle) energy is stepped down (by chakras) through each subtle body until it manifests in the physical body. Denser matter is the most restrictive, so energy and consciousness are more limited in the physical body than in our subtle bodies. Although the physical body is more restrictive, it provides a highly varied experience for consciousness, so more restriction isn’t necessarily a bad thing.
What happens when the silver cord is severed: true physical death

According to esoteric literature, true physical death occurs when the silver cord is severed. So what happens next? While different people have different experiences, we can discuss these experiences in two broad ways.

After the death of the physical body, consciousness resumes in our more subtle emotional body. Depending on the development of one's consciousness, the emotional world can be a pleasant experience or an unpleasant one. If a person is holding on to a lot of emotional negativity and desire, the experience on the emotional plane is not going to be as pleasant, because there is no physical body to dampen or restrict these emotions. In other words, strong emotional desires seem amplified without the restriction of the physical body. Imagine having an uncontrollable desire that can't be quenched! This is the "hellish" experience that religion really speaks of. Hell is not the eternal abode of the dead, as many Christians believe; it is simply an impermanent experience in the cycle between incarnations.

Someone whose consciousness operates with higher emotions will have a more pleasant experience on the emotional plane. But this state isn't permanent either. The emotional body will eventually die too; like the physical body, the astral body is subject to the law of impermanence. The death of the emotional body is known as the second death in religious literature.

Beyond this second death, individual consciousness may or may not "sleep" (that is, lose awareness). Again, it depends on the level of mental consciousness the person has developed through their many incarnations. In Corinthians the Apostle Paul states: "Behold, I shew you a mystery; We shall not all sleep…" (1 Cor. 15:51). By "sleep" Paul means that not everyone remains conscious after physical death. If the mental body is developed enough, consciousness is regained after the second death and the individual experiences the mental plane. It is believed that this is a more blissful experience because thoughts instantly manifest; it is possible that this will be a "heavenly" experience.

It is silly for the church to teach a cut-and-dried version of one heaven and one hell. Even the Apostle Paul said he was taken up to the "third heaven." Doesn't it make more sense that there can be different degrees of both heaven and hell? Ultimately, the experiences are more subjective than objective, and largely dependent upon the development of one's consciousness. The silver cord, then, isn't just connected to the etheric and physical bodies; it is connected to all the subtle bodies and serves to transfer spirit to all of them.

Closing thoughts

I would like to close with a thought on the verses from Ecclesiastes quoted at the beginning of this post. They tell us that the body returns to dust, but the spirit goes back to God. The spirit going back to God speaks of the life-force of the highest self returning to its source, where it rests before another incarnation. This represents the sum total of all that we are, including the energy that made up the temporary physical, emotional, and mental bodies we used to develop consciousness through our former experiences.

Personally, I believe it would do the world a lot of good to learn these esoteric truths rather than the watered-down version of spiritual things we are usually taught in church. Not everything taught in church is bad; it is just incomplete.
To get the bigger picture, the esoteric interpretation is needed. I believe that a vast majority of Christians are ready for these teachings. They are certainly a better alternative to the typical salvation messages being preached today. Those messages usually do not raise the consciousness of churchgoers; in fact, they often lower it, because many seekers believe that in the watered-down message they have already received the prize, and so they give up seeking higher truth. We should never give up on our development. It should be an eternal endeavor.
import os
import numpy as np
import pandas as pd

# SIMS_DIR, normDM and gauss_sinc_smoothing are assumed to be defined
# elsewhere in this module, as in the original code.

def import_density_field(self, z_name, resolution):
    """Load a Bolshoi density grid from CSV, project it along z, and normalise."""
    if resolution not in (256, 512):
        raise ValueError("Only resolution 256 or 512 is supported.")
    res_str = str(resolution)

    if z_name == '0':
        # The z = 0 snapshot uses a different file name and column prefix.
        file_path = os.path.join(SIMS_DIR, 'dens' + res_str + '-z-0.0.csv.gz')
        prefix = 'Bolshoi__Dens' + res_str + '_z0__'
        cols = [prefix + c for c in ('ix', 'iy', 'iz', 'dens')]
        dens = pd.read_csv(file_path)[cols].sort_values(cols[:3])
        grid = np.reshape(dens[prefix + 'dens'].values,
                          (resolution, resolution, resolution))
        # Project along the z-axis and normalise.
        return normDM((grid + 1).sum(2), 0, resolution, self.Lbox)

    file_path = os.path.join(SIMS_DIR,
                             'dens' + res_str + '-z-{}.csv.gz'.format(z_name))
    prefix = 'Bolshoi__Dens' + res_str + '__'
    cols = [prefix + c for c in ('ix', 'iy', 'iz', 'dens')]
    dens = pd.read_csv(file_path)[cols].sort_values(cols[:3])
    grid = np.reshape(dens[prefix + 'dens'].values,
                      (resolution, resolution, resolution))
    den = normDM((grid + 1).sum(2), 0, resolution, self.Lbox)

    # Upsample by 4x in each direction, smooth, then block-average back down.
    # (The original hard-coded 256 here and asserted resolution == 256;
    # using `resolution` generalises the step to both supported grids.)
    upsampled = np.repeat(np.repeat(den, 4, axis=0), 4, axis=1)
    smoothed = gauss_sinc_smoothing(upsampled, 4, 4, 1, self.halofieldresolution)
    return smoothed.reshape([resolution, 4, resolution, 4]).mean(3).mean(1)
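A hypothetical call site for the method above. The class name, box size, and field resolution are illustrative assumptions, not taken from the source, and the output shape assumes normDM preserves the shape of the projected grid:

# All names and values below are assumptions for illustration only.
sim = BolshoiFields(Lbox=250.0, halofieldresolution=1024)  # hypothetical wrapper class
dens_map = sim.import_density_field(z_name='0', resolution=256)
print(dens_map.shape)  # expected: (256, 256) projected density map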
def add_permission(self, elements):
    # element_resolver (assumed imported elsewhere in this module) converts
    # element names into resolved references before they are stored.
    elements = element_resolver(elements)
    self.data['granted_element'].extend(elements)
    self.update()  # persist the modified 'granted_element' list
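A minimal usage sketch, assuming this method lives on an access-list-like object whose data dictionary holds a 'granted_element' list; the object and element names below are hypothetical:

acl = AccessControlList('example_acl')              # hypothetical container object
acl.add_permission(['host-web-01', 'network-dmz'])  # names resolved by element_resolver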