package basics;
import java.util.*;
public class AddNumber {
static Scanner sc=new Scanner(System.in);
public static void main(String[] args) {
// Read two integers from standard input and print their sum.
System.out.println("Enter the 1st number: ");
int a=sc.nextInt();
System.out.println("Enter the 2nd number: ");
int b=sc.nextInt();
System.out.println("Sum of 2 numbers: "+(a+b));
}
}
|
import java.util.LinkedList;
import java.util.concurrent.Executor;

/**
* An {@link Executor} that executes tasks one at a time in serial order.
*/
public final class SerialExecutor implements Executor {
private final LinkedList<Runnable> mTasks = new LinkedList<>();
private Runnable mActive;
public synchronized void execute(@NonNull final Runnable r) {
mTasks.offer(() -> {
try {
r.run();
} finally {
scheduleNext();
}
});
if (mActive == null) {
scheduleNext();
}
}
private synchronized void scheduleNext() {
if ((mActive = mTasks.poll()) != null) {
Executors.THREAD_POOL_EXECUTOR.execute(mActive);
}
}
public synchronized boolean remove(@Nullable Runnable r) {
if (r == null) {
if (mTasks.size() > 0) {
mTasks.clear();
return true;
}
} else {
return mTasks.remove(r);
}
return false;
}
public synchronized boolean isIdle() {
return mActive == null;
}
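// Usage sketch (hypothetical caller; Executors.THREAD_POOL_EXECUTOR above is
// assumed to be a backing thread pool defined elsewhere in this codebase):
//   SerialExecutor serial = new SerialExecutor();
//   serial.execute(() -> writeToDisk());   // runs first
//   serial.execute(() -> notifyDone());    // runs only after writeToDisk() completes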
} |
# src/01wstep/05listy.py
# lists
print('----------')
l = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
print(l)
print(l[0])
print(l[-1])
print(l[1:6])
print(l[0:7])
print(l[0:8])
print(l[:5])
print(l[3:])
print(l[:])
print(l[7:2:-1])
print(l[7:2])
# print(l[7])  # would raise IndexError: list index out of range
print('------ inserting --------')
l.append('z')
print(l)
l.insert(2,'qq')
print(l)
l.extend(['x','y','z'])
print(l)
l.append(['x','y','z'])
print(l)
print(l[-1])
print(l.index('z'))
print(('z' in l), ('w' in l))
print('--------- removing ---------')
print(l.pop())
print(l)
l.remove('qq')
print(l)
print('-------- operations ----------')
l = l + ['Ala']
l += ['As']
print(l)
l = l * 2
print(l)
print(type(l))
print(dir(list))
# ----- what will the output be? -------
a = []
b = [1, a]
b[1].append(1)
print(a, b)
x = []
y = [1, x]
x.append(1)
print(x, y)
x = []
x.append(x)
print(x)
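# Expected output of the three puzzles above:
# b[1].append(1) mutates a through the shared reference -> prints: [1] [1, [1]]
# x.append(1) is likewise visible through y             -> prints: [1] [1, [1]]
# x.append(x) makes the list contain itself; Python prints the cycle as [[...]]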
|
The election of Donald J. Trump to the presidency was aided and abetted by a number of evangelical leaders who vouched for his faith with their cohorts. Even as Trump stood at the front of a Detroit church looking deeply uncomfortable, and as he cited the book of “Two Corinthians” instead of “Second Corinthians,” evangelical leaders insisted that he was a “baby Christian” who was just learning. He would mature, Focus on the Family’s James Dobson proclaimed, and his faith was genuine and real.
Closer to the election, Trump mocked his Methodist opponent and dismissed apparent sexual assault as “locker room talk.” But evangelicals continued to march in almost complete lockstep with the idea that Trump was a new man, a new creation in Christ. Though a few leaders sounded the warning bells, they were quickly drowned out by the choruses of four in five evangelicals who cast their votes for Trump.
Now, with the announced “spiritual biography” of Trump helmed by journalist David Brody, the evangelical whitewashing is nearly complete. Evangelicals crave narratives of redemption—of men finding God and forsaking all other paths—and a spiritual biography of Trump is sure to give them exactly what they need.
In an interview with the Toronto Star, Brody, the chief political correspondent for Christian Broadcasting Network, explains that he thinks Trump and evangelicals have a lot in common. While Trump isn’t going to win Christian of the year honors, Brody said, he has a faith that is “deeply rooted.” According to Brody, Trump still sees the world through an evangelical lens.
And this thinly supported proclamation in the form of The Faith of Donald J. Trump: A Spiritual Biography, to be released next January, is really all U.S. evangelicals need to throw themselves behind a man who once declared himself “very pro-choice” and said Planned Parenthood does “very good work”—proclamations that are anathema to evangelical America. Brody’s standing as a journalist in the community, and his book’s research and support of Trump’s spiritual life, is sure to make it a definitive statement on the topic of Trump’s faithfulness.
It follows, then, that if the president is on the side of the evangelicals, then that means God really does endorse their political ideas. Because of the Apostle Paul’s statements in Romans 13:1 (“Let every person be subject to the governing authorities. For there is no authority except from God, and those that exist have been instituted by God”), evangelicals believe that God has sovereign control over the path of the nation, and that God himself placed Trump in the presidency. Trump’s claims to Christianity—and Brody’s spiritual biography giving weight to those claims—only further evangelicals’ certainty that they have a righteous claim to the halls of power.
As the subject and co-author of The Art of the Deal, Trump is nothing if not attuned to how to say what people want to hear. Despite his lack of understanding about basic biblical precepts (he’s frequently said that he doesn’t like to ask forgiveness, a central tenet of evangelicalism), Trump has managed to convince evangelicals that he has a newfound faith. It’s no coincidence that Trump’s conversion to Christianity is pinpointed to approximately a decade ago, around the time Barack Obama was running for president for the first time—as Trump has clearly been preparing for this presidential run since that period.
Evangelicals are deeply invested in narratives of redemption. The idea that Christ makes all things new permeates their daily discourse, and is the central tenet of their worldview. One of the most common events in evangelical church services and retreats is the telling of the story of God’s grace in a person’s life. These testimonies have a plotted narrative arc: A person was a bad sinner, and then they read or encountered the Bible and God in some way, and now, while they are not perfect, they are a Jesus follower and nothing can deny that fact.
It is this narrative that evangelicals crave, as it confirms their image of God as victorious over tragedy, as the warrior doing battle to save precious human souls. If a person who used to be a drug dealer and a murderer, for example, can be saved by the grace of God, then God is powerful indeed.
And what better narrative to rehabilitate the image of a philanderer and abuser than to develop his story of conversion and faith in God for print? Here’s the catch: This narrative doesn’t necessarily have to be true to be believable—as long as it doesn’t stretch credibility too far and has enough grand gestures (which Brody hints will be covered in the book), evangelicals stand ready to support their broken, sinful leader as a great man of faith.
Even as Trump continues to golf instead of going to church, and makes statements opposed to the tenets of Christianity, he has supposedly claimed conversion and forgiveness in evangelicalism, and that washes away any and all sins. He’s not perfect, but evangelicals will support him as long as they can continue to have access to the halls of power. God has given them a great gift in Donald J. Trump, and if they have to do a little fudging of his spirituality to prop him up, then so be it.
To the outsider looking in, this whitewashing of Trump’s spiritual life is patently a ploy for political favor with an extremely powerful voting bloc. But to the evangelical concerned about the black and white fundamentals of who is and who isn’t with them, turning Trump into one of them, however flimsily, makes perfect sense. If Trump is a born-again Christian who just struggles at times, then they can ultimately assume that he has the best interests of the evangelical church at heart—interests that are distinctly white and male. They are concerned with ensuring people with uteruses do not have access to abortion, that other people’s sex lives are kept in check, and that transgender people are kept on the fringes where evangelicalism wants them.
And with evangelical darling Vice President Mike Pence endorsing and backing up the president’s alleged faith, evangelicals can sleep soundly at night knowing their agenda is protected by the grace of God himself. After all, the story doesn’t have to be true. It just has to be believed. |
With Peter King on vacation until July 25, this week’s Monday Morning QB guest columnist is Lions guard Geoff Schwartz. The 2016 season will be Schwartz’s ninth in the NFL. He was drafted in the seventh round in 2008 by the Panthers, and he also has spent time with the Vikings, Chiefs and Giants. He’s written for The MMQB in the past and is active on social media. Follow Geoff here.
I turn 30 years old today. Happy birthday to me. (My son turns 2 as well, so I’d rather you send b-day wishes his way instead.) As I enter my fourth decade by being gifted this forum by Peter King, it seems like a good time to reflect on my professional football journey.
• MMQB GUEST COLUMN: Jake Plummer advocates for cannabinoid use and why NFL should embrace it
Who would have thought I’d be here? Not me. My career has been a roller coaster. I was drafted in the seventh round. I’ve been on a practice squad. I’ve been cut. I’ve signed a big contract, then been released from that deal. I’ve been a backup. I’ve been a starter. I’ve been injured multiple times. I came into the league under the old collective bargaining agreement and now play under the new CBA. And I have a brother who’s also in the league.
Geoff Schwartz signed a one-year deal with the Lions in March after being released by the Giants. Paul Sancya/AP
I’m not a sentimental guy. I don’t like looking in the rearview. I’ve gotten this far in the NFL by always looking forward and challenging myself to be better. However, in early February, looking forward kind of came to a screeching halt.
I got THAT call. You know, the one that almost every player gets. We are releasing you today. Man, was I upset. Angry even. But reality set in quickly. Professional sports is a business. So with the Giants in my rearview, I started planning my next move.
Except my next move didn’t come for almost 60 days. Those were two tough months. As opposed to being a coveted free agent as I was a couple of years ago, this time the phone wasn’t ringing off the hook. I didn’t know if I was going to play again. I was forced into reflecting on my career. How did I survive and thrive in the NFL?
The Injury-Prone Journeyman
My high school coach enjoyed clichés. The one that has stuck with me the most is: Don’t lie to yourself. Basically, know who you are. As hard as it is for me to say, I’m a journeyman who’s been prone to injury. Yet here I am, on my fifth team entering my ninth season, having gone through six surgeries, a severely dislocated toe and badly injured groin. How does a late seventh-round pick make it this far with an injury list like mine? I beat the odds. Why? I'll lay it out in a bit. But first I want to take a few shots at the label “injury-prone journeyman.”
Challenging the toughness of a football player is the worst insult we can receive. This game is tough, physically and mentally. The tag “injury-prone” has a connotation of softness, even if it’s not meant that way when said or written. I hate this label. Injuries are not created equal. There’s no way to prepare your body for guys falling on your legs. Or your big toe dislocating while trying to anchor a bull rush. S--- happens. I feel awful about not being reliable. I pride myself on showing up prepared to work every day. If I could have done something different to prevent these injuries, you can bet I would have. But there was nothing to be done. (Click for picture of dislocated toe. Editor’s note: It’s not pleasant.)
It’s easier to accept the term “journeyman.” If you look at my career path, five teams in nine seasons, that’s the definition of a journeyman. However, when I think of a journeyman, I picture the end-of-a-bench basketball player, or long man in the bullpen. Journeymen are good chemistry guys and just average in talent. They’re role players, nothing special and certainly not starters. Well, that hasn’t been me.
The Giants signed Schwartz to a four-year, $16.8 million deal prior to the 2014 season, but injuries limited him to just 13 games the next two years. Rob Leiter/Getty Images
I have been an above-average player. I started 19 games in a row when I played for Carolina, never missing a snap in 2010. But then the injuries happened, and I had two hip surgeries, one core surgery and one awful season as a Viking in 2012. I went to Kansas City in 2013 and was not promised anything but a chance to compete, and I won a starting spot in the middle of a playoff push. That is rare. We were 9-1 after I became a regular starter. I played well. I signed with the Giants in 2014. And then the injuries happened again.
• LAST MANNING STANDING: Q&A with Eli on new Giants coach, narrowing playoff window and what might be next for Peyton
Two seasons, a pair of broken ankles and a dislocated toe later and I’m now a Lion. What's ironic is now I am the “grizzled veteran” in Detroit. I’m embracing the role. We have a young, extremely talented offensive line group. I’m the guy who’s been around the block more than any other lineman. It feels weird thinking that I’m the vet because I’m only turning 30 today. I’m not old in life but certainly so in the world of football.
How to Survive in Football
In a previous article on The MMQB, Emily Kaplan wrote about the toll it takes on the family when a player switches teams so frequently. Now I’d like to take you through the football side of switching teams so many times, and what I’ve learned that’s allowed me to survive.
First and foremost, and this is a cliché, you have to respect the process. The process isn’t easy. It takes time and is supposed to be tough. But once you’ve respected the process, you truly become a pro. That means doing whatever is needed to be prepared for Sundays. Work out longer? Do it. Spend more time watching film? Do it. Eat properly in the offseason? Do it. Give up drinking? Do it. What it takes is different for everyone, but it starts with respecting the game and the process.
I believe the talent gap at the bottom of a roster isn’t much. So if you know what you’re doing, and adjust on the fly, you can live the NFL dream.
Once you’ve learned how to be a pro, you’re in a much better position to adapt to new environments. Each team has a different scheme. You need to learn the new scheme and then apply what you do best within each scheme. Knowing what you do best and applying it correctly keeps you in the league.
I’ve played for six offensive line coaches in the NFL. You might think OL play is simple enough that the coaches are basically teaching the same things. Nope. Only two of my OL coaches taught the same technique. Ironically, these also have been my favorite OL systems.
Everything linemen do in a system is for a purpose and has a reason. I can get down with that. So I’ve had to adapt to various ways to pass block. Some OL coaches teach strong inside hand, some want vertical sets, some want a jump set at 45 degrees. I’ve been taught two-hand punch, independent hand usage and outside hand punch. I’ve been taught three different ways to stop a bull rush and different aiming points on zone plays. How difficult could it be to pull right? Well, if you’re pulling on power, some schemes take the guard inside (but always outside of the double team) and ask him to “swab out” anything in the hole. Other schemes, if the guard sees it’s congested inside, then he adjusts and pulls around the blocks. It’s all madness. So you have to adapt and obey. You find out what the OL coach demands. You follow that.
Part of being a pro is learning not only your job, but the jobs of people around you and on the defense. I’m playing right guard, but what is the tight end doing here? The fullback? The slot wide receiver? Where are the safeties? Are the backers deep or close, bowed or bossed? And on and on. This makes learning a new offense more fluid. Offenses are unique, with different principles for the same plays.
Schwartz with Panthers in 2011 Scott Cunningham/Getty Images
Let’s take Outside Zone as an example. Outside Zone is a choreographed run blocking scheme. The entire O-line initially moves in the same direction as the play is called. The objective of a true Outside Zone is getting the running back outside the widest defender and around the edge. However, in today’s NFL, a true outside zone play is the least common form of a zone running play. The name is almost a misnomer.
Outside Zone is used to describe a zone play, but rarely does the running back get outside; defensive ends are too good and quick. Most often, an Outside Zone turns into a Mid Zone. And a Mid Zone starts out like Outside Zone, but the running back often cuts up, mimicking Inside Zone. Confused? Well, you’ll enjoy what’s next.
When I started writing this column, I talked over Outside Zone principles with my brother, Mitch Schwartz of the Chiefs. He offered to write up Mid Zone disguised as Outside Zone. Here is what Mitch wrote, unedited on purpose to show complexity of the language.
Traditional Outside Zone starts with the running back chasing the inside leg of the TE. That’s the landmark he's given to stay on course. His read is the DE, back inside to the DT. If the DE plays with the OL, which they’re typically coached to do in the scheme, then he looks inside to see how the DT is playing. If the DT is also running with the OL, and maintaining his gap integrity, then the running back cuts even behind him. This is where you commonly hear “one cut.” The RB has been on his angle, running towards the inside leg of the TE, but once both the DE and DT have committed to running laterally with the OL, he cuts the ball behind them. A common way of saying this is that he “cuts back,” but coaches in this scheme don’t like that term. They like to say “cut up” because the best running backs in this scheme get vertical when they cut up, they don’t start running backwards from the flow of the offense, otherwise the defenders from the backside will catch up to them. The nature of this scheme is that the outside defender will play outside, defending his “contain” responsibility, and so the RB will never truly get outside of the defense. Some RBs get to know that, and will cheat by cutting up too soon, but this ruins the flow and integrity of the play. The best RBs “press the line,” which means they stay on that angle for as long as they can, which then helps bring defenders towards them, brings defenders to the offensive line (we work as a unit, and their ability to stay on track for as long as possible is crucial to our success on this type of play), and will end up defining the read much better for them.
These zone run plays are my favorite, in part because they work. Back in 2009 when I was a Panther, DeAngelo Williams and Jonathan Stewart etched their names in NFL history, becoming the first teammates to each scamper for more than 1,100 yards in a season. I was one of the “hogs” blocking for them. How satisfying, especially when they presented the entire starting line with a token of their appreciation. Every time I put on that fancy watch I’m reminded of that amazing season.
So as you can see, even a seemingly simple running play has many, many nuances. You’ve got to be able to figure all this out, and then play fast. This is what the best can do. I believe the talent gap at the bottom of a roster, and between the 5th through the 8th linemen, isn’t much. So if you know what you’re doing, and adjust on the fly, you can live the NFL dream.
* * *
Some NFL observers believe blocking and tackling have been negatively impacted due to less repetitions at training camps. David Eulitt/Kansas City Star/TNS via Getty Images
Ten Things I Think I Think
1. I think I’m tired of people attacking the NFL game. Yes, it’s different than it was years ago, but it’s not dying and it’s not a worse product. Just look at how much money the NFL is making. If you want to find a factor that changed the game the most, look at two-a-day practices going away. Two-a-days is a phrase that makes all footballers cringe. The current CBA, under the premise of player safety, did away with two practices a day. But that decision dramatically changed the game. Blocking and tackling will never be the same without two-a-days. Those are skills that need constant repetition. I used to blame collegiate spread offenses for the general decline of O-line play, but the loss of reps is mostly to blame.
• DR. Z WEEK: The MMQB celebrates the life and career of groundbreaking football writer Paul Zimmerman
2. I think I like when passionate players take a stand. We all have our opinions. Take Eugene Monroe for instance. I won’t use this space to agree or disagree with his view on marijuana usage, but if he and others feel strongly about removing the NFL’s ban, they should be able to share their thoughts—without repercussion.
3. I think I love the report about fewer NFL players being arrested. Did you know that arrests are way down for players in the league? There has been an effort from the league office down to the team level to stress zero-tolerance policies for off-the-field incidents. While I doubt guys are consciously thinking, “Hey, I better not screw up—I’ll get released,” all of the awareness towards behaving properly must be working. I also sense that tougher penalties at the college and high school levels are contributing to the decline in arrests.
4. I think the game should be played on grass. I love grass. The smell, the uniform stains, the way it feels under my cleats. Grass is easier on my feet and legs and is certainly softer on the body when hitting the ground. I know it’s not practical in a domed stadium—though Arizona gets away with it—but it's a wish of this lineman.
5. I think I’m falling out of love with social media. I use all forms because I like interacting with fans, and I get my news and entertainment from it. However, it’s gotten to a point where social media causes fake-outrage mobs and everyone wants to be on the right side of a story. There’s no nuance anymore, and heaven forbid you have a reasoned opinion that runs counter to what the masses believe. You’re labeled all sorts of nonsense. It’s okay to have civil disagreements and reasoned opinions. My teammates and I discuss politics and often disagree. That’s perfect; it leads to discussion. And guess what? We are still friends and work together just fine.
6. I think I love seeing all these NBA players making so much money. And it’s all guaranteed. While NFL players have parts of our contracts guaranteed, it’s typically no more than 40% of the overall value. I hold out hope in the next round of CBA negotiations that we can work towards fully guaranteed salaries, or maybe some sort of team financial penalty for cutting a player before the end of his contract. While anything along these lines would be a big win for us, I’m not holding my breath that it will happen—unless we use the only leverage we have and sit out games to get our point across.
• HOW ANDREW LUCK GOT PAID: Andrew Brandt takes inside look at QB’s new contract
7. I think if you love OL talk, check out my podcast. It’s called “Block ’Em Up,” and my co-host Duke Manyweather and I discuss everything offensive line. It can get nerdy at times, but we have great guests join us. It’s fun and educational. You can find us on iTunes or wherever you enjoy listening to pods. We are there. Please subscribe.
8. I think it’s time for another shameless plug. My brother and I have a book coming out in September called “Eat My Schwartz: Our Story of NFL Football, Food, Family and Faith.” It covers all the bases, including our story as the first Jewish brothers in the NFL since 1923. You can preorder it here.
9. I think that, if he wants, I’ll allow my son to play this game I love so much. Football has taught me many life lessons. Why not let him benefit from the same experiences? As a whole, the NFL and all 32 teams are doing a much better job creating a safer environment, especially when it comes to concussions. The big question for me is when to have him begin playing. I started in high school. If he plays, that’s when he’ll begin as well. I’ve seen pee-wee and youth games. Kids can’t control their bodies well enough to practice good technique, especially tackling. Also, I’ve watched “those” parents. Way too intense for me. Sports should always be fun!
10. I think I’m worried my San Francisco Giants are too good right now. They have the best record in the majors but lead the league in blown saves and have tons of injuries. When they have won their championships, it’s been by scraping into the playoffs. However, I have no worries once they get in because of their front-end rotation, their defense and their timely hitting. I can’t wait for October.
• Question or comment? Email us at [email protected]. |
package test
import (
"context"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
k8sMeta "k8s.io/apimachinery/pkg/apis/meta/v1"
)
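// Note: setupServiceWithObjects is a test helper assumed to be defined elsewhere
// in this package; a typical implementation seeds the resource-quota service with
// a fake clientset (e.g. fake.NewSimpleClientset from k8s.io/client-go/kubernetes/fake).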
func TestResourceQuota_Query(t *testing.T) {
const namespace = "default"
t.Run("Should return resourceQuotas when there are some", func(t *testing.T) {
res1 := createMockResourceQuota("resource1", namespace)
res2 := createMockResourceQuota("resource2", namespace)
service := setupServiceWithObjects(t, res1, res2)
result, _ := service.ResourceQuotasQuery(context.Background(), namespace)
assert.Equal(t, 2, len(result))
})
t.Run("Should return empty array if limitRange is not found", func(t *testing.T) {
service := setupServiceWithObjects(t)
result, _ := service.ResourceQuotasQuery(context.Background(), namespace)
assert.Equal(t, 0, len(result))
})
}
func TestResourceQuota_JsonField(t *testing.T) {
const namespace = "default"
t.Run("JsonField returns no error", func(t *testing.T) {
res1 := createMockResourceQuota("res1", namespace)
service := setupServiceWithObjects(t, res1)
_, err := service.ResourceQuotaJSONfield(context.Background(), res1)
require.NoError(t, err)
})
}
func createMockResourceQuota(name, namespace string) *v1.ResourceQuota {
return &v1.ResourceQuota{
TypeMeta: k8sMeta.TypeMeta{
Kind: "ResourceQuota",
APIVersion: v1.SchemeGroupVersion.String(),
},
ObjectMeta: k8sMeta.ObjectMeta{
Name: name,
Namespace: namespace,
},
}
}
|
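# Reads four integers "a b c d" from one line of input and prints the larger of
# the two products a*b and c*d, e.g. input "3 5 2 7" -> 15.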
N = list(map(int, input().split()))
s1 = N[0] * N[1]
s2 = N[2] * N[3]
print(max(s1, s2))
|
Removing the Hazard of Fedwire Daylight Overdrafts
Free Federal Reserve daylight overdrafts misallocate resources. One reason is the moral hazard of fully insuring a paying bank’s access to whatever volume of daylight overdraft credit it needs. This paper contrasts the effects of three recent proposals for pricing daylight overdrafts and demonstrates that reducing moral hazard depends on how, rather than on how much, pricing affects daylight overdrafts. If payment practices and modes of bank financing were unresponsive to pricing, it would suggest that the moral hazard of Federal Reserve daylight overdrafts has been an insidious force behind the rapid growth of interbank lending and securities-market trading in recent decades.
The commodity-demand elasticity with respect to the real interest rate, b_x, was set to unity because, of all (equally arbitrary) values, unity is the most straightforward choice. (Econometric evidence currently available does not provide direct knowledge of this elasticity.) The relative sizes of the disturbances give considerable scope to demand-side influences on output and employment, and allow for a relatively unstable money-demand function.
package main
import (
"bufio"
. "fmt"
"io"
)
// github.com/EndlessCheng/codeforces-go
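// CF1621B "Integers Shop": segments [l, r] with cost c arrive one at a time; after each
// arrival, print the minimum cost to own every integer in the widest reachable range
// [min l, max r], using either the cheapest single spanning segment or the cheapest
// segment reaching min l plus the cheapest segment reaching max r.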
func CF1621B(_r io.Reader, _w io.Writer) {
in := bufio.NewReader(_r)
out := bufio.NewWriter(_w)
defer out.Flush()
var T, n, l, r, c int
for Fscan(in, &T); T > 0; T-- {
ll, rr, lc, rc := int(2e9), 0, 0, 0
sl, sr, sc := int(2e9), 0, 0
for Fscan(in, &n); n > 0; n-- {
Fscan(in, &l, &r, &c)
if l < sl || l == sl && r > sr || l == sl && r == sr && c < sc {
sl, sr, sc = l, r, c
}
if l < ll || l == ll && c < lc {
ll, lc = l, c
}
if r > rr || r == rr && c < rc {
rr, rc = r, c
}
res := lc + rc
if sl == ll && sr == rr && sc < res {
res = sc
}
Fprintln(out, res)
}
}
}
//func main() { CF1621B(os.Stdin, os.Stdout) }
|
/**
* @jest-environment node
*/
let mockResponse: string;
// tslint:disable promise-function-async
jest.mock('node-fetch', () => () =>
Promise.resolve({
buffer: () => Promise.resolve(Buffer.from(mockResponse))
})
);
// tslint:enable promise-function-async
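// Note: jest.mock factories are hoisted above the import below, so the fetch mock
// is installed before loadCommonJsModule is loaded; mockResponse is only read when
// the mocked fetch resolves, which lets each test assign it just before the call.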
import {loadCommonJsModule} from '..';
describe('loadCommonJsModule (on Node.js)', () => {
it('when a module is fetched successfully', async () => {
const url = 'http://example.com/test.js';
mockResponse = `
var semver = require('semver');
module.exports = {
default: {test: semver.coerce('1').version}
};
`;
const loadedModule = await loadCommonJsModule(url);
const expectedModule = {default: {test: '1.0.0'}};
expect(loadedModule).toEqual(expectedModule);
});
});
|
#include "mymanager.h"
myManager::myManager(QObject *parent)
: QObject(parent), m_passComplexity(QString("Default"))
{}
/* ============================================================================================================ */
/* SECTION I : Credentials */
/* ============================================================================================================ */
QString myManager::getPassword() const
{
return password;
}
void myManager::setPassword(const QString &value)
{
password = value;
}
QString myManager::getUsername() const
{
return username;
}
void myManager::setUsername(const QString &value)
{
username = value;
}
QString myManager::getNew_user_shell() const
{
return new_user_shell;
}
void myManager::setNew_user_shell(const QString &value)
{
new_user_shell = value;
}
QString myManager::getNew_user_id() const
{
return new_user_id;
}
void myManager::setNew_user_id(const QString &value)
{
new_user_id = value;
}
QString myManager::getNew_user_group() const
{
return new_user_group;
}
void myManager::setNew_user_group(const QString &value)
{
new_user_group = value;
}
QString myManager::getNew_user_realname() const
{
return new_user_realname;
}
void myManager::setNew_user_realname(const QString &value)
{
new_user_realname = value;
}
QString myManager::getNew_username() const
{
return new_username;
}
void myManager::setNew_username(const QString &value)
{
new_username = value;
}
QString myManager::getNew_user_encr_password() const
{
return new_user_encr_password;
}
void myManager::setNew_user_encr_password(const QString &value)
{
new_user_encr_password = value;
}
// Compare the entered username with the current user reported by the USER environment variable
bool myManager::compare_usernames()
{
QString current_user = getenv("USER");
// qDebug() << "Current user: " << current_user << " Entered user: " << getUsername() << endl;
if(getUsername()!=current_user)
{
return false;
}
return true;
}
void myManager::clearCredentials()
{
// clean username and password fields up
setUsername("");
setPassword("");
}
// Check whether a user exists on the system
bool myManager::user_exists()
{
QProcess proc;
proc.start("id " + getNew_username());
proc.waitForFinished(-1);
if(proc.exitCode()!=0)
{
return false;
}
return true;
}
// Add a new user in the system
//useradd command example :: useradd -c "Real User Name" -m -u <UID> -g <GROUP> -d /home/$username -s <shell> $username
bool myManager::is_username_valid()
{
// The new user's name must not start with a digit.
// A username in Linux should also be a lower-case name.
if(getNew_username() == getenv("USER") || getNew_username().at(0).isDigit() || getNew_username().at(0).isUpper())
{
return false;
}
QVector<QString> invalid_characters = {"`", "~", "@", "!", "#", "$", "%", "^", "&", "*", "(", ")", "-", "+", "<", ">", ",", ".", "=", "_", "/", ";", ":", "?"};
QVector<QString>::iterator start = invalid_characters.begin();
if(getNew_username()=="root")
{
return false;
}
while(start != invalid_characters.end())
{
if(getNew_username().at(0) == *start)
{
return false;
}
start++;
}
return true;
}
bool myManager::is_username_valid(QString userName)
{
// The new user's name must not start with a digit.
// A username in Linux should also be a lower-case name.
qDebug() << "Checking for new user's username validation ...\n";
if(userName == getenv("USER") || userName.at(0).isDigit() || userName.at(0).isUpper())
{
return false;
}
QVector<QString> invalid_characters = {"`", "~", "@", "!", "#", "$", "%", "^", "&", "*", "(", ")", "-", "+", "<", ">", ",", ".", "=", "_", "/", ";", ":", "?"};
QVector<QString>::iterator start = invalid_characters.begin();
if(userName=="root")
{
return false;
}
while(start != invalid_characters.end())
{
if(userName.at(0) == *start)
{
return false;
}
start++;
}
return true;
}
void myManager::create_enc_password()
{
// openssl passwd <plain_password> --> creates the encrypted password hash
// to be used with the useradd -p <password_hash> option !
QProcess openssl;
openssl.start("openssl passwd " + getNew_user_encr_password());
openssl.waitForFinished();
QString hold(openssl.readAllStandardOutput());
hold.remove("\n");
setNew_user_encr_password(hold);
if(openssl.exitCode()!=0)
{
new_user_encr_password = "";
}
}
// Ownership & Permissions for the home directory of the new user
bool myManager::set_chown()
{
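// Pattern used by the privileged operations below: the stored password is piped
// into `sudo -S` (which reads the password from stdin) by chaining two QProcess
// objects via setStandardOutputProcess().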
QProcess passwd, chown;
passwd.setStandardOutputProcess(&chown);
passwd.start("echo " + getPassword());
chown.start("sudo -S chown " + getNew_username() + " /home/" + getNew_username());
chown.waitForFinished(-1);
passwd.waitForFinished(-1);
if(chown.exitCode()!=0)
{
return false;
}
return true;
}
bool myManager::set_chmod()
{
QProcess pass, chmod;
pass.setStandardOutputProcess(&chmod);
pass.start("echo " + getPassword());
chmod.start("sudo -S chmod -R u=rwx,g=rw,o=--- /home/" + getNew_username());
chmod.waitForFinished(-1);
pass.waitForFinished(-1);
if(chmod.exitCode()!=0)
{
return false;
}
return true;
}
// Create the new user using a QProcess
bool myManager::adduser()
{
QString options;
QProcess pass, add;
pass.setStandardOutputProcess(&add);
pass.start("echo " + getPassword());
if(!getNew_user_realname().isEmpty())
{
options += " -c \"" + getNew_user_realname() + "\"";
}
if(!getNew_user_group().isEmpty())
{
options += " -g " + getNew_user_group();
}
if(!getNew_user_id().isEmpty())
{
options += " -u " + getNew_user_id();
}
options += " -m -d /home/" + getNew_username();
if(!getNew_user_shell().isEmpty())
{
options += " -s /bin/" + getNew_user_shell();
}
// Call create_enc_password() to create the Hash of the entered password
//
create_enc_password();
options += " -p " + new_user_encr_password;
options += " " + getNew_username();
add.start("sudo -S useradd " + options);
add.waitForFinished(-1);
pass.waitForFinished(-1);
if(add.exitCode()!=0)
{
return false;
}
set_chmod();
set_chown();
return true;
}
// REMOVE A USER
bool myManager::deluser()
{
QProcess pass, del;
pass.setStandardOutputProcess(&del);
pass.start("echo " + getPassword());
del.start("sudo -S userdel " + getNew_username());
pass.waitForFinished(-1);
del.waitForFinished(-1);
if(del.exitCode()!=0)
{
return false;
}
return true;
}
// Delete the user's home directory
bool myManager::del_user_home()
{
QProcess pass, rm_dir;
pass.setStandardOutputProcess(&rm_dir);
pass.start("echo " + getPassword());
rm_dir.start("sudo -S rm -r /home/" + getNew_username());
pass.waitForFinished(-1);
rm_dir.waitForFinished(-1);
if(rm_dir.exitCode()!=0)
{
return false;
}
return true;
}
/* ============================================================================================================ */
/* SECTION II : System & Networking Section */
/* ============================================================================================================ */
QString myManager::cat_users()
{
QProcess users_proc;
QString real_users;
users_proc.start("bash", QStringList() << "-c" << "cut -d: -f1,3 /etc/passwd | egrep ':[0-9]{4}$' | cut -d: -f1");
if(!users_proc.waitForFinished(3000) || users_proc.exitCode()!=0)
{
// upon error return error
return "ERROR";
}
real_users = users_proc.readAllStandardOutput();
return real_users;
}
QString myManager::cat_groups()
{
QProcess groups_proc;
QString real_groups;
groups_proc.start("bash", QStringList() << "-c" << "cut -d: -f1,3 /etc/group | egrep ':[0-9]{4}$' | cut -d: -f1");
if(!groups_proc.waitForFinished(3000) || groups_proc.exitCode()!=0)
{
return "ERROR";
}
real_groups = groups_proc.readAllStandardOutput();
return real_groups;
}
QString myManager::cat_shells()
{
QProcess shells_proc;
QString shells;
shells_proc.start("cat /etc/shells");
if(!shells_proc.waitForFinished(-1) || shells_proc.exitCode()!=0)
{
return "ERROR";
}
shells = shells_proc.readAllStandardOutput();
return shells;
}
QString myManager::ifconfig()
{
QProcess ifconf;
ifconf.start("ifconfig");
ifconf.waitForFinished(-1);
QString hold(ifconf.readAllStandardOutput());
if(ifconf.exitCode()!=0)
{
return "ERROR";
}
return hold;
}
QString myManager::netstat()
{
QProcess net;
// r -> display routing table argument
net.start("netstat -r");
net.waitForFinished(-1);
QString hold(net.readAllStandardOutput());
if(net.exitCode()!=0)
{
return "ERROR";
}
return hold;
}
void myManager::setTable(const QString table)
{
firewallTable = table;
}
QString myManager::getTable() const
{
return firewallTable;
}
QString myManager::ip4tables()
{
QProcess pass, ip_proc;
pass.setStandardOutputProcess(&ip_proc);
pass.start("echo " + getPassword());
ip_proc.start("sudo -S iptables -t " + getTable() + " -nL --line-numbers");
ip_proc.waitForFinished(6000);
pass.waitForFinished(6000);
QString hold(ip_proc.readAllStandardOutput());
if(ip_proc.exitCode()!=0)
{
return "ERROR";
}
return hold;
}
QString myManager::ip6tables()
{
QProcess pass, ip_proc;
pass.setStandardOutputProcess(&ip_proc);
pass.start("echo " + getPassword());
ip_proc.start("sudo -S ip6tables -t " + getTable() + " -nL --line-numbers");
ip_proc.waitForFinished(6000);
pass.waitForFinished(6000);
QString hold(ip_proc.readAllStandardOutput());
if(ip_proc.exitCode()!=0)
{
return "ERROR";
}
return hold;
}
/* ============================================================================================================ */
/* SECTION III : Group Management */
/* ============================================================================================================ */
QString myManager::getGid() const
{
return gid;
}
void myManager::setGid(const QString &value)
{
gid = value;
}
QString myManager::getNew_groupname() const
{
return new_groupname;
}
void myManager::setNew_groupname(const QString &value)
{
new_groupname = value;
}
QString myManager::getGroupname() const
{
return groupname;
}
void myManager::setGroupname(const QString &value)
{
groupname = value;
}
bool myManager::group_exists()
{
QProcess proc;
proc.start("getent group " + getGroupname());
proc.waitForFinished(-1);
if(proc.exitCode()!=0)
{
return false;
}
return true;
}
bool myManager::groupadd()
{
QProcess pass, add;
pass.setStandardOutputProcess(&add);
pass.start("echo " + getPassword());
if(gid.isEmpty())
{
add.start("sudo -S groupadd " + getGroupname());
}
else {
add.start("sudo -S groupadd -g " + getGid() + " " + getGroupname());
}
pass.waitForFinished(-1);
add.waitForFinished(-1);
if(add.exitCode()!=0)
{
return false;
}
return true;
}
bool myManager::groupmod()
{
QProcess pass, mod;
pass.setStandardOutputProcess(&mod);
pass.start("echo " + getPassword());
mod.start("sudo -S groupmod -n " + getNew_groupname() + " " + getGroupname());
pass.waitForFinished(-1);
mod.waitForFinished(-1);
if(mod.exitCode()!=0)
{
return false;
}
return true;
}
bool myManager::groupdel()
{
QProcess pass, del;
pass.setStandardOutputProcess(&del);
pass.start("echo " + getPassword());
del.start("sudo -S groupdel " + getGroupname());
pass.waitForFinished(-1);
del.waitForFinished(-1);
if(del.exitCode()!=0)
{
return false;
}
return true;
}
/* Q_PROPERTY functionality to notify QML code regarding the entered password's
* complexity for the new user that is about to be created
*/
QString myManager::passComplexity() const
{
return m_passComplexity==QString("Default") ? QString("None") : m_passComplexity;
}
void myManager::setPassComplexity(QString passComplexity)
{
int passLength = passComplexity.length();
// Real-time illustration of password strength, based on the character classes
// present in the password typed by the operator:
// uppercase/lowercase letters, digits and special characters.
//
QRegularExpression re_digit("[0-9]");
QRegularExpression re_Upper("[A-Z]");
QRegularExpression re_SpecialChar("[!@#$%^&*,._+=/]");
bool hasDigit = false;
bool hasUpper = false;
bool hasSpecialChar = false;
// The password is assumed to contain lowercase letters, so classesFound starts at 1.
// For every other character class found inside the password, ++classesFound.
int classesFound = 1;
if(re_digit.match(passComplexity).hasMatch()) {
//qDebug() << "Found DIGIT inside password ...\n";
hasDigit = true;
}
if( re_Upper.match(passComplexity).hasMatch() ) {
//qDebug() << "Found UPPERCASE letter inside password ...\n";
hasUpper = true;
}
if(re_SpecialChar.match(passComplexity).hasMatch()) {
//qDebug() << "Found SPECIAL character inside password ...\n";
hasSpecialChar = true;
}
classesFound += (hasDigit==true ? 1 : 0) + (hasUpper==true ? 1 : 0) + (hasSpecialChar==true ? 1 : 0);
//qDebug() << "Classes found inside password are: " << classesFound << endl;
if(classesFound==1 && passLength) {
m_passComplexity = QString("Weak");
} else if(classesFound==2) {
m_passComplexity = QString("Medium");
} else if(classesFound==3) {
m_passComplexity = QString("Strong");
} else if(passLength==0) {
m_passComplexity = QString("Default");
} else {
// If the password has one lowercase, one uppercase, one digit and one special
// character, and its length > 10, it is of course a Very Strong one ... otherwise, it is just Strong
//
m_passComplexity = (passLength>10) ? QString("Very Strong") : QString("Strong");
}
//m_passComplexity = passComplexity;
emit passComplexityChanged(m_passComplexity);
}
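// Example classifications produced by setPassComplexity() above:
//   "secret"        -> 1 class               -> "Weak"
//   "Secret1"       -> 3 classes             -> "Strong"
//   "Secret1!"      -> 4 classes, length 8   -> "Strong"
//   "Secret1!Pass"  -> 4 classes, length 12  -> "Very Strong"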
|
package util
import (
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/timsolov/boilr/pkg/template"
"github.com/timsolov/boilr/pkg/util/osutil"
"github.com/timsolov/boilr/pkg/util/validate"
)
var (
// ErrUnexpectedArgs indicates that the given number of arguments exceed the expected number of arguments.
ErrUnexpectedArgs = errors.New("unexpected arguments")
// ErrNotEnoughArgs indicates that the given number of arguments does not match the expected number of arguments.
ErrNotEnoughArgs = errors.New("not enough arguments")
)
const (
// InvalidArg error message format string for filling in the details of an invalid arg.
InvalidArg = "invalid argument for %q: %q, should be a valid %v"
)
// ValidateArgCount validates the number of arguments and returns the corresponding error if there are any.
func ValidateArgCount(expectedArgNo, argNo int) error {
switch {
case expectedArgNo < argNo:
return ErrUnexpectedArgs
case expectedArgNo > argNo:
return ErrNotEnoughArgs
case expectedArgNo == argNo:
}
return nil
}
// ValidateVarArgs validates variadic arguments with the given validate.Argument function.
func ValidateVarArgs(args []string, v validate.Argument) error {
if len(args) == 0 {
return ErrNotEnoughArgs
}
for _, arg := range args {
if ok := v.Validate(arg); !ok {
return fmt.Errorf(InvalidArg, v.Name, arg, v.Validate.TypeName())
}
}
return nil
}
// ValidateArgs validates arguments with the given validate.Argument functions.
// Two arguments must contain the same number of elements.
func ValidateArgs(args []string, validations []validate.Argument) error {
if err := ValidateArgCount(len(validations), len(args)); err != nil {
return err
}
for i, arg := range validations {
if ok := arg.Validate(args[i]); !ok {
return fmt.Errorf(InvalidArg, arg.Name, args[i], arg.Validate.TypeName())
}
}
return nil
}
func testTemplate(path string) error {
tmpDir, err := ioutil.TempDir("", "boilr-validation-test")
if err != nil {
return err
}
defer os.RemoveAll(tmpDir)
tmpl, err := template.Get(path)
if err != nil {
return err
}
tmpl.UseDefaultValues()
return tmpl.Execute(tmpDir, "")
}
// ValidateTemplate validates the template structure given the template path.
func ValidateTemplate(tmplPath string) (bool, error) {
if exists, err := osutil.DirExists(tmplPath); !exists {
if err != nil {
return false, err
}
return false, fmt.Errorf("template directory not found")
}
if exists, err := osutil.DirExists(filepath.Join(tmplPath, "template")); !exists {
if err != nil {
return false, err
}
return false, fmt.Errorf("template should contain %q directory", "template")
}
if err := testTemplate(tmplPath); err != nil {
return false, err
}
return true, nil
}
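// Usage sketch (hypothetical caller; the argument name and validator shown are
// assumptions, not part of this package):
//
//	if err := ValidateArgs(args, []validate.Argument{
//		{Name: "template-path", Validate: someUnixPathValidator},
//	}); err != nil {
//		return err
//	}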
|
package array;
import java.util.HashSet;
import java.util.Set;
/**
* 840. Magic Squares In Grid
* <p>
* A 3 x 3 magic square is a 3 x 3 grid filled with distinct numbers from 1 to 9 such that each row, column, and both diagonals all have the same sum.
* Given a grid of integers, how many 3 x 3 "magic square" subgrids are there? (Each subgrid is contiguous).
* <p>
* Example 1:
* Input: [[4,3,8,4],
* [9,5,1,9],
* [2,7,6,2]]
* Output: 1
* <p>
* Explanation:
* The following subgrid is a 3 x 3 magic square:
* 438
* 951
* 276
* while this one is not:
* 384
* 519
* 762
* In total, there is only one magic square inside the given grid.
* <p>
* Note:
* 1 <= grid.length <= 10
* 1 <= grid[0].length <= 10
* 0 <= grid[i][j] <= 15
* <p>
* Created by zjm on 2019/5/22.
*/
public class MagicSquaresInGrid {
public int numMagicSquaresInside(int[][] grid) {
if (grid.length < 3 || grid[0].length < 3) {
return 0;
}
int res = 0;
for (int i = 2; i < grid.length; i++) {
for (int j = 2; j < grid[0].length; j++) {
if (isMagic(grid, i, j)) res++;
}
}
return res;
}
public boolean isMagic(int[][] arr, int r, int c) {
Set<Integer> set = new HashSet<>();
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
if (set.contains(arr[r - i][c - j]) || arr[r - i][c - j] > 9 || arr[r - i][c - j] < 1) return false;
set.add(arr[r - i][c - j]);
}
}
int m = arr[r - 2][c - 2] + arr[r - 1][c - 1] + arr[r][c];
if (arr[r - 2][c - 2] + arr[r - 2][c - 1] + arr[r - 2][c] != m) {
return false;
} else if (arr[r - 1][c - 2] + arr[r - 1][c - 1] + arr[r - 1][c] != m) {
return false;
} else if (arr[r][c - 2] + arr[r][c - 1] + arr[r][c] != m) {
return false;
} else if (arr[r - 2][c] + arr[r - 1][c] + arr[r][c] != m) {
return false;
} else if (arr[r - 2][c - 2] + arr[r - 1][c - 2] + arr[r][c - 2] != m) {
return false;
} else if (arr[r - 2][c - 1] + arr[r - 1][c - 1] + arr[r][c - 1] != m) {
return false;
} else if (arr[r][c - 2] + arr[r - 1][c - 1] + arr[r - 2][c] != m) {
return false;
}
return true;
}
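// Minimal demonstration using the example grid from the Javadoc above; the
// grid contains exactly one 3 x 3 magic square, so this prints 1.
public static void main(String[] args) {
int[][] grid = {
{4, 3, 8, 4},
{9, 5, 1, 9},
{2, 7, 6, 2}
};
System.out.println(new MagicSquaresInGrid().numMagicSquaresInside(grid));
}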
}
|
A Gift of Life: An Islamic Perspective on Organ Donation and Transplantation
Organ Donation and Transplantation is an issue that has widespread ramifications. In addition to the medical/technical aspects, there are legal, moral, ethical, economic, logistical and humanitarian aspects. Each of these aspects may have some peculiarity related to the donor and recipient. This paper deals primarily with the moral, ethical and humanitarian aspects of the issue. From the viewpoint of Islam, organ transplantation is an acceptable therapeutic option provided the following criteria are fulfilled: 1. There is no other equally effective therapeutic solution available that is simpler, safer and/or more cost effective. 2. The organ donation does not result in any harm to the donor. 3. The organ donation is done with the free will and full approval of the donor, or, in the case of an unconscious donor or an organ donation taken from a cadaver, the approval of the next of kin or legal guardian. 4. In the case of the donation of a single organ upon which the life of the donor depends, e.g., the heart or liver, the organ may not be removed from the donor until the donor’s brain stem death is ascertained. 5. The donated organ is a gift and is not sold. 6. If the transaction results in material or monetary gain to the donor or to the donor’s family, the gain must not be in the form of price, but the donor or his/her family may accept a gift as a token of appreciation, since the donated organ is considered a gift to the recipient. 7. The transplantation of active reproductive organs is categorically forbidden. 8. The basic rule governing the entire transaction is that organ transplantation is considered a humanitarian act of mercy accomplished with the free will and approval of all parties involved, under no pressure, coercion or injustice.
Review
Our life is a gift that we humans must appreciate and share with others. Organ donation is sharing that much needed gift. The Islamic Holy Book, the Quran, encourages Muslims to save life by stating, "Anyone who saves one life, it is as if he has saved all mankind". In spite of such clear instructions, Muslims in general, educated or not, are sometimes reluctant to sign a donor card, while themselves willing to receive an organ for transplantation when needed. Their reluctance is not based on the notion, held by some, that one has to present oneself to God on the Day of Judgment with all organs intact: "They ask who will revive the dead and rotten bones after death; tell them, 'He who gave them life in the first place will give life to them'". Their reluctance is based on the definition of death (brain stem death versus cardiorespiratory arrest), on harvesting organs for transplantation, and on the cost of the procedure. Currently, according to US statistics, 123,175 people are on the wait list to receive an organ transplant, of which 101,170 are waiting for a kidney. 16,896 transplants took place in 2013, of which 11,163 were kidney transplants. 3,000 new applicants are added each year. 4,453 Americans died in 2013 while waiting to receive a renal transplant (12 per day). With the improvement of surgical techniques, the advances in the technology of organ preservation, and the availability of better and safer drugs for prevention of tissue rejection, organ transplantation is now being done in ever-increasing numbers for a growing number of organs with a rapidly improving success rate. As the demand increases for organs to be transplanted, the logistics of the entire issue become more complex. This paper deals primarily with the moral, ethical and humanitarian aspects of the issue. Although the technical and economic aspects of organ transplantation are not directly addressed in this paper, the author is by no means insensitive to the magnitude of the impact of these two aspects on the entire picture of global health care.
In Islam, it is a religious duty for the sick person to seek treatment: "O worshipers of God, seek treatment". It becomes a collective duty to cooperate with each other and achieve this goal of treatment and/or healing: "Cooperate towards righteousness and God-consciousness…" However, while cooperating toward the goal of treatment and/or healing, the believers have to be mindful of certain Islamic rules: 1. While cooperating to do a good deed, we must not be involved in bad deeds or aggression: "…Do not cooperate toward bad deeds and aggression". 2. We must not inflict harm on anyone, and must not allow harm to be inflicted on ourselves: "Do not cause harm and do not get hurt". 3. Whenever we have a choice between two options leading to the same goal, we should always opt for the simpler and/or easier option: "Whenever the messenger of God was given a choice between two matters, he always elected the simpler and/or easier one". 4. We must not go to excesses and must not be wasteful: "Eat and drink, and do not waste; indeed, He (God) does not love the wasteful ones." "The wasteful ones are companions of the devils".
Applying the above listed rules means that, before transplantation is considered, any equally effective therapeutic modality that is simpler, safer and more cost effective must first be considered and given priority. In other words, if transplantation is considered indicated and justified (after satisfying all of the above listed rules), then transplantation must be considered a "fulfillment of Islamic duty".
The statement that I mentioned earlier, "any equally effective therapeutic modality that is simpler, safer and more cost effective", needs more explanation and elaboration. Modern western medicine has failed in offering a curative treatment for chronic illnesses. The evidence and proof of this failure is the chronicity of the illness. By definition, a chronic illness is an illness that has persisted for months or years. The fact that an illness is considered "chronic", and especially if it is considered "end stage chronic illness", proves that modern western medicine has failed to offer a curative treatment for such an illness. "End stage chronic illness" often causes progressive failure of the affected organ, to the extent that it seriously affects the function or even endangers the life of the sick person. Replacement of the affected organ is considered, either in the form of transplantation, i.e. replacement with a living organ, or replacement with a mechanical artificial organ. Replacement with a mechanical artificial organ could be either temporary (an artificial kidney for temporary intermittent hemodialysis, temporary pacemakers or temporary cardiac assist devices) or permanent (permanent pacemakers, a total mechanical heart, or total mechanical joints). Whether it is transplantation or total mechanical replacement, the situation remains the same: a curative treatment for the affected organ is not available.
To my knowledge, there are now several patients who were thought to need heart transplantation for cardiomyopathy or a combination of cardiomyopathy and atherosclerotic heart disease; or liver transplantation for hepatic cirrhosis; or who had progressive renal failure and were projected in a year or less to require hemodialysis and/or kidney transplantation. I know these patients to be in a stable and progressively improving condition for one or several years, and not requiring the transplantation once thought to be needed. It is an example of what could be an "equally effective therapeutic modality that is simpler, safer and more cost effective." Even conventional modern western approaches may offer alternative options to transplantation, for example combinations of partial temporary mechanical support with either pharmacologic or natural conservative management, and so on. However, if none of these options is available for a given patient, then and only then should transplantation be considered a "fulfillment of Islamic duty".
Having overcome this hurdle, there are several other hurdles to overcome. The first one is that organ donation must not result in any harm to the donor. For a successful outcome of transplantation, the transplanted organ has to be living, or at least viable. This could be the case of a double organ, i.e. a kidney, from a living donor who is a good surgical candidate for the nephrectomy, meaning having a healthy second kidney and being physiologically fit for the surgery. This situation is very limited (a donor who is a member of the immediate family of the recipient), and will seriously limit the availability of organs to be transplanted. If we could have viable organs from dead donors, this would greatly improve the availability of single organs (like heart and liver) as well as double organs. The problem we face here is that for an organ to be viable, it has to have active circulation and oxygenation. But if we wait until the donor is dead according to the traditional criteria for death, i.e., absence of pulse and respiration, the organs are no longer viable. This dilemma was overcome by accepting brain stem death as an adequate criterion for death. This means that a person may be considered dead if his or her brain stem death is ascertained, even while the heart is still beating and respiration is artificially maintained. Brain stem death as the criterion for the donor's death has been accepted by the great majority of Muslim scholars who deal with the issue of transplantation, whether they are medical experts or Muslim jurists. This was the consensus of the participants of several specialized workshops and seminars that were held in various parts of the Muslim world, such as Kuwait, Saudi Arabia, Egypt, Jordan and others. These specialized workshops and seminars included a large number of experts from various schools of thought: over twenty Muslim jurists and an equal number of Muslim medical experts. Until two weeks ago, I thought that this issue was finalized and settled among the experts, Muslim and non-Muslim, over ten years ago. However, less than two weeks ago, I realized that I was not quite correct in my assumption. I realized that there are still some people who are thought to be experts, or who give the impression of being experts, who feel that brain stem death is not a valid criterion for death, who feel that transplant surgeons kill donors by removing their vital organs while they (the donors) are still alive, and that organ transplantation must be banned and prohibited altogether. The opposition created by these individuals is greatly magnified by the mass media. The physicians behind the movement against organ transplantation use several medical references from European and American literature to support their views. Upon careful review of their listed references, it becomes obvious that these references are of two types: 1. sporadic reports of isolated cases where brain stem death was not adequately verified and not definitely ascertained, and it was later discovered that it was not truly brain stem death; or 2. sporadic reports of isolated cases where some pathologies of the nervous system were misdiagnosed and initially mistaken for brain death, and where it was later discovered that they were not truly cases of brain death. In other words, the few cases reported were sporadic cases of human error, either in the form of misdiagnosis or inadequate verification due to improper testing.
This still does not change the fact that if and when brain stem death is ascertained and confirmed, it is an adequate proof of death. As to the references from the mass media, these proved to be reports of criminal cases where people-patients, children, or others-were killed, abused or kidnapped in order to obtain some of their organs for commercial purposes. After all, these are crimes, and crimes are wrong. These cases, however, do not prove or disprove that brain stem death is or is not a criterion for death. This means that all these sensational reports in the mass media are totally irrelevant. So much for the argument against brain stem death.
Another reason for the prohibition of the transfer and transplantation of human organs listed in the book written by the above-mentioned professor of anesthesiology from Cairo is the statement that taking any organ from a living donor, or even from a dead donor, is absolutely prohibited because the body of the donor belongs to God and not to the donor, living or dead; therefore the donor has no right to give his/her body or any part thereof, whether by selling, gift-giving, or in any other way. This statement is partially true but does not follow logically and does not make sense. It is true that everything belongs to God-I mean everything: the donor's blood, the donor's milk (in the case of a nursing mother), everything! Still, the donor has the right to give his knowledge, his time, his money, his blood, or her milk (in the case of a nursing mother), without being accused of transgressing against the ownership of God. As a matter of fact, it is considered a virtue and a good deed to give of what you have for a good cause. So much for the ownership argument.
In addition to the irritating argument mentioned above, the promoter of the movement against organ transplantation appears to be unaware of some of the rules of the fundamentals of jurisprudence. The worst that can be said about organ transplantation is that it could be unlawful-in his opinion-or it could be lawful, as in the opinion of the majority of experts. For him to state that the transfer and transplantation of organs is "absolutely prohibited" indicates his unawareness of the rule of the fundamentals of jurisprudence which says that "prohibiting the lawful is worse than allowing the unlawful": for example, to say that drinking water is haram is worse than saying that drinking wine is halal.
Let us move on to some other, easier issues: organ donation must be done with the free will and full approval of the donor, or-in the case of an unconscious donor, or an organ taken from a cadaver-with the approval of the next of kin or legal guardian. A human being, donor or recipient, is a free individual. Free will and freedom of choice are God-given rights. Accordingly, organ donation cannot be imposed on someone without his or her full approval, or-if he/she is not in a position to make such a decision-without the approval of the next of kin or legal guardian.
Another issue is that the donated organ must not be sold. This requirement is based on the sanctity of the human body and its ownership by God the Almighty. That is why the majority of experts feel that human organs should not be used for commercial gain except under pressing circumstances. However, this issue is controversial, and some experts allow it although the majority do not. Discussion of this issue will not be given here due to the limitation of time.
An important issue that must be briefly mentioned is that the transplantation of active reproductive organs is categorically forbidden in Islam. That is because such transplantation would lead to the violation of basic Islamic rules governing marriage, reproduction and inheritance.
Lastly, the question of Muslims donating organs to non-Muslims, and vice versa, needs to be addressed. I feel that this should be allowed if we consider the whole issue of organ donation as an act of human mercy, keeping in mind that the main-if not the only-purpose of the mission of Prophet Muhammad, peace be upon him, is to be a mercy for mankind: "We have not sent you but as a mercy for the worlds."
To summarize, we can say that from the viewpoint of Islam, organ donation and transplantation is an acceptable therapeutic option provided the following criteria are fulfilled:
1. There is no other equally effective therapeutic modality available that is simpler, safer and/or more cost-effective.
2. The organ donation does not result in any harm to the donor.
3. The organ donation is done with the free will and full approval of the donor, or-in the case of an unconscious donor or an organ taken from a cadaver-with the approval of the next of kin or legal guardian.
4. In the case of donation of a single organ upon which the life of the donor depends, e.g., the heart or liver, the organ may not be removed from the donor until the donor's brain stem death is ascertained.
5. The donated organ is not sold.
6. If the transaction results in material or monetary gain to the donor or to the donor's family, the gain must not be in the form of a price, but the donor or his/her family may accept a gift as a token of appreciation, since the donated organ is considered a gift to the recipient.
7. The transplantation of active reproductive organs is categorically forbidden.
8. The basic rule governing the entire transaction is that organ transplantation is considered a humanitarian act of mercy, accomplished with the free will and approval of all parties involved, under no pressure, coercion, or injustice.
Additional Notes
By the word "transaction" I mean some voluntary compensation given to the donor as a gift in appreciation of his/her charity.
The definition of death in Islam has now evolved to accept "brain stem death," while the old imams, with little knowledge of medicine, still use the old definition of respiratory death.
Contemporary educated Muslim physicians and scholars have accepted the donation and transplantation of live organs, or parts thereof, as a gift to save the life and enhance the quality of life of the recipient, as long as there is no medical harm to the donor. Included in this category are the kidney, liver and bone marrow.
Though some conservative imams may object to receiving a heart or blood transfusion from an unbeliever, I, as a Muslim physician, do not. The blood and life of all humans are pure and sacred to me.
Organ retrieval from a brain-dead person is allowed with the permission of the family. The issue is the desecration of the dead body, not the use of usable organs after death, as there is urgency to return the body to the earth. Muslims do not hold the belief that they must present their body intact to God in the life hereafter.
def contains_ratio(container: BoundingBox, contained: BoundingBox) -> float:
    # Fraction of `contained`'s area that lies inside `container`, in [0, 1].
    if contained.area == 0:
        return 0.0
    return intersection(container, contained).area / contained.area
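The helper above assumes a BoundingBox type exposing an area property and an intersection function, neither of which is shown. A minimal sketch of those assumed definitions, plus a usage example:

from dataclasses import dataclass

@dataclass(frozen=True)
class BoundingBox:
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def area(self) -> float:
        # Degenerate boxes (x2 < x1 or y2 < y1) get area 0.
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

def intersection(a: BoundingBox, b: BoundingBox) -> BoundingBox:
    # Overlapping region; degenerate (zero-area) when the boxes are disjoint.
    return BoundingBox(max(a.x1, b.x1), max(a.y1, b.y1),
                       min(a.x2, b.x2), min(a.y2, b.y2))

# contains_ratio(BoundingBox(0, 0, 10, 10), BoundingBox(5, 5, 15, 15)) -> 0.25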
// initZookeeperLatencyMinMetric builds new zookeeper.latency.min metric.
func (mb *MetricsBuilder) initZookeeperLatencyMinMetric() {
metric := mb.metrics.ZookeeperLatencyMin
metric.SetName("zookeeper.latency.min")
metric.SetDescription("Minimum time in milliseconds for requests to be processed.")
	metric.SetUnit("ms") // the description is in milliseconds; the dimensionless "1" looked like a metadata bug
metric.SetDataType(pdata.MetricDataTypeGauge)
}
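A usage sketch for the builder above (a minimal example, assuming the collector's pdata gauge API; exact method names vary across pdata versions):

mb.initZookeeperLatencyMinMetric()
dp := mb.metrics.ZookeeperLatencyMin.Gauge().DataPoints().AppendEmpty()
dp.SetIntVal(latencyMin) // latencyMin: value scraped from the server
dp.SetTimestamp(pdata.NewTimestampFromTime(time.Now()))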
package get.set;
import java.util.ArrayList;
public class HomeCategoryGetSet {
public String categoryId;
public String categoryName;
public String coursecount;
public String getCoursecount() {
return coursecount;
}
public void setCoursecount(String coursecount) {
this.coursecount = coursecount;
}
public ArrayList<CategoryGetSetClass> menList = new ArrayList<CategoryGetSetClass>();
public String getCategoryId() {
return categoryId;
}
public void setCategoryId(String categoryId) {
this.categoryId = categoryId;
}
public String getCategoryName() {
return categoryName;
}
public void setCategoryName(String categoryName) {
this.categoryName = categoryName;
}
public ArrayList<CategoryGetSetClass> getMenList() {
return menList;
}
public void setMenList(ArrayList<CategoryGetSetClass> menList) {
this.menList = menList;
}
}
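A brief usage sketch of the bean above (the values are hypothetical, and CategoryGetSetClass is assumed to be defined elsewhere):

HomeCategoryGetSet home = new HomeCategoryGetSet();
home.setCategoryId("12");
home.setCategoryName("Science");
home.setCoursecount("8");
home.getMenList().add(new CategoryGetSetClass());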
|
#include <stdio.h>
#include <string.h>

/*
 * For each test case: read x and k, then k queries that mark positions in
 * hash[] as occupied (type 2 marks one position, type 1 marks two). Then
 * scan positions 1..x-1: every free position is counted in both min and
 * max and marked; if the following position is also free, it is counted
 * in max only. Prints min and max for each test case.
 */
int main()
{
    int hash[4010], i, x, k, num1, num2, a, min, max; /* assumes x <= 4010 */
    while (scanf("%d%d", &x, &k) != EOF)
    {
        min = max = 0;
        memset(hash, 0, sizeof(hash));
        for (i = 0; i < k; i++)
        {
            scanf("%d", &a);
            if (a == 2)
            {
                scanf("%d", &num2);
                hash[num2]++;               /* one position occupied */
            }
            if (a == 1)
            {
                scanf("%d%d", &num2, &num1);
                hash[num2]++;               /* two positions occupied */
                hash[num1]++;
            }
        }
        for (i = 1; i < x; i++)
        {
            if (hash[i] == 0)
            {
                min++; max++; hash[i]++;    /* a free slot counts for both */
                if (i + 1 < x && hash[i + 1] == 0)
                {
                    max++; hash[i + 1]++;   /* greedy extension counts for max only */
                }
            }
        }
        printf("%d %d\n", min, max);
    }
    return 0;
}
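For instance, assuming a single test case on stdin with x = 5 and two type-2 queries marking positions 2 and 4, the free positions are 1 and 3, so both counts come out as 2:

Input:
5 2
2 2
2 4

Output:
2 2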
|
Identification of free radicals in irradiated MA-AMPS copolymer
Methacrylamide-2-acrylamido-2-methylpropanesulphonic acid (MA-AMPS) (80:20) is subjected to n-irradiation and the generated free radicals are identified by the electron spin resonance (ESR) technique. The ESR spectrum observed for MA-AMPS (80:20) has shown a complex line shape, indicating the presence of more than one free radical. Computer simulations are employed to unravel the radicals responsible for the ESR spectrum. Radical identification has been done with the magnetic parameters employed during computer simulation. The observed ESR spectrum of the copolymer is simulated to be a superposition of component quartet, quintet and singlet spectra. The component spectra are assigned to ∼CH2-•C(CONH2)-CH2∼ or ∼CH2-•CH-CH2∼, and •CH3 radicals, respectively. The possibility of formation of such radicals in the sample material has been discussed.
// This software is released under the MIT License.
// https://opensource.org/licenses/MIT
export { massTransfer } from './transactions/mass-transfer'
export { reissue } from './transactions/reissue'
export { burn } from './transactions/burn'
export { exchange } from './transactions/exchange'
export { lease } from './transactions/lease'
export { cancelLease } from './transactions/cancel-lease'
export { data } from './transactions/data'
export { issue } from './transactions/issue'
export { transfer } from './transactions/transfer'
export { alias } from './transactions/alias'
export { setScript } from './transactions/set-script'
export { setAssetScript } from './transactions/set-asset-script'
export { sponsorship } from './transactions/sponsorship'
export { order } from './requests/order'
export { cancelOrder } from './requests/cancel-order'
export { customData, serializeCustomData } from './requests/custom-data'
export { auth } from './requests/auth'
export { wavesAuth } from './requests/wavesAuth'
export { invokeScript } from './transactions/invoke-script'
export { updateAssetInfo } from './transactions/update-asset-info'
export { signTx, verify, serialize, submitOrder, cancelSubmittedOrder, verifyAuthData, verifyCustomData, verifyWavesAuthData } from './general'
export { waitForTx, broadcast } from './nodeInteraction'
export { makeTx, makeTxBytes } from './make-tx'
export { invokeExpression } from './transactions/invoke-expression'
// Export interfaces
export {
TTx,
TTxParams,
IAliasParams,
IIssueParams,
IReissueParams,
IBurnParams,
ILeaseParams,
ICancelLeaseParams,
ITransferParams,
IMassTransferParams,
ISetScriptParams,
ISponsorshipParams,
IDataParams,
ISetAssetScriptParams,
IInvokeScriptParams,
IUpdateAssetInfoParams,
IOrderParams,
ICancelOrder,
ICancelOrderParams,
WithId,
WithSender,
WithProofs,
WithTxType,
} from './transactions'
export { INodeRequestOptions, IStateChangeResponse } from './nodeInteraction'
export {
TSeedTypes, TOption
} from './types'
// internal libraries access
import * as crypto from '@waves/ts-lib-crypto'
import * as marshall from '@waves/marshall'
// import * as nodeApiJs from '@waves/node-api-js'
const libs = {
crypto,
marshall,
// nodeApiJs
}
import * as seedUtils from './seedUtils'
import * as nodeInteraction from './nodeInteraction'
import * as validators from './validators'
import * as protoSerialize from './proto-serialize'
export {
libs,
seedUtils,
nodeInteraction,
validators,
  protoSerialize
}
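A quick usage sketch of the exported helpers (assuming the package is consumed as '@waves/waves-transactions'; the seed phrase, recipient address and node URL below are placeholders):

import { transfer, broadcast } from '@waves/waves-transactions'

const signedTx = transfer({ recipient: '3PExampleAddress', amount: 100000000 }, 'example seed phrase')
broadcast(signedTx, 'https://nodes.wavesnodes.com').then(console.log)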
|
// Copyright 2020 by <NAME> <<EMAIL>>
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
// the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! IO functionality for use of this program with the CdE Datenbank export and import file formats.
use crate::{Assignment, Course, Participant};
use std::collections::HashMap;
use chrono::{SecondsFormat, Utc};
use serde_json::json;
use std::cmp::max;
use log::info;
const MINIMUM_EXPORT_VERSION: (u64, u64) = (7, 0);
const MAXIMUM_EXPORT_VERSION: (u64, u64) = (15, std::u64::MAX);
pub struct ImportAmbienceData {
event_id: u64,
track_id: u64,
}
/// Read course and participant data from an JSON event export of the CdE Datenbank
///
/// This function takes a Reader (e.g. an open filehandle), reads its contents and interprets them
/// as a partial event export from the CdE Datenbank 2.
///
/// If the event data comprises multiple course tracks and no track id is selected via the `track`
/// parameter, this function fails with "Event has more than one course track". Otherwise, the only
/// existing track is selected automatically.
///
/// Only registrations with status *Participant* in the relevant part (the part of the selected
/// course track) are considered. Existing course assignments and cancelled course segments are
/// ignored—they will be overridden by importing the result file into the CdE Datenbank.
///
/// Minimum and maximum participant numbers of courses are interpreted as total number of attendees,
/// **including** course instructors.
/// If no maximum size is given for a course, we assume num_max = 25 (incl. instructors).
/// If no minimum size is given for a course, we assume num_min = 0 (excl. instructors).
///
/// # Arguments
///
/// * reader: The Reader (e.g. open file) to read the json data from
/// * track: The CdEDB id of the event's course track, if the user specified one on the command
/// line. If None and the event has only one course track, it is selected automatically.
/// * ignore_inactive_courses: If true, courses with an inactive segment in the relevant track are
/// not added to the results.
/// * ignore_assigned: If true, participants who are assigned to a valid course are not added to the
/// results. If `ignore_inactive_courses` is true, participants assigned to a cancelled course are
/// not ignored.
///
/// # Errors
///
/// Fails with a string error message to be displayed to the user, if
/// * the file has invalid JSON syntax (the string representation of the serde_json error is returned)
/// * the file is not a 'partial' CdEDB export
/// * the file has no version within the supported version range (MINIMUM_/MAXIMUM_EXPORT_VERSION)
/// * any expected entry in the json fields is missing
/// * the event has no course tracks
/// * the event has more than one course track, but no `track` is given.
///
pub fn read<R: std::io::Read>(
reader: R,
track: Option<u64>,
ignore_inactive_courses: bool,
ignore_assigned: bool,
) -> Result<(Vec<Participant>, Vec<Course>, ImportAmbienceData), String> {
let data: serde_json::Value = serde_json::from_reader(reader).map_err(|err| err.to_string())?;
// Check export type and version
let export_kind = data
.get("kind")
.and_then(|v| v.as_str())
.ok_or("No 'kind' field found in data. Is this a correct CdEdb export file?")?;
if export_kind != "partial" {
return Err("The given JSON file is no 'Partial Export' of the CdE Datenbank".to_owned());
}
let export_version =
if let Some(version_tag) = data.get("EVENT_SCHEMA_VERSION") {
version_tag.as_array()
.ok_or("'EVENT_SCHEMA_VERSION' is not an array!")
.and_then(|v|
if v.len() == 2
{Ok(v)}
else {Err("'EVENT_SCHEMA_VERSION' does not have 2 entries.")})
.and_then(|v| v.iter().map(
|x| x.as_u64()
.ok_or("Entry of 'EVENT_SCHEMA_VERSION' is not an u64 value."))
.collect::<Result<Vec<u64>, &str>>())
.and_then(|v| Ok((v[0], v[1])))
} else if let Some(version_tag) = data.get("CDEDB_EXPORT_EVENT_VERSION") {
// Support for old export schema version field
version_tag.as_u64()
.ok_or("'CDEDB_EXPORT_EVENT_VERSION' is not an u64 value")
.and_then(|v| Ok((v, 0)))
} else {
Err("No 'EVENT_SCHEMA_VERSION' field found in data. Is this a correct CdEdb \
export file?")
}?;
if export_version < MINIMUM_EXPORT_VERSION || export_version > MAXIMUM_EXPORT_VERSION {
return Err(format!(
"The given CdE Datenbank Export is not within the supported version range \
[{}.{},{}.{}]",
MINIMUM_EXPORT_VERSION.0, MINIMUM_EXPORT_VERSION.1, MAXIMUM_EXPORT_VERSION.0,
MAXIMUM_EXPORT_VERSION.1));
}
// Find part and track ids
let parts_data = data
.get("event")
.and_then(|v| v.as_object())
.ok_or("No 'event' object found in data.")?
.get("parts")
.and_then(|v| v.as_object())
.ok_or("No 'parts' object found in event.")?;
let (part_id, track_id) = find_track(parts_data, track)?;
// Parse courses
let mut courses = Vec::new();
let mut skipped_course_ids = Vec::new(); // Used to ignore KeyErrors for those later
let courses_data = data
.get("courses")
.and_then(|v| v.as_object())
.ok_or("No 'courses' object found in data.".to_owned())?;
let mut i = 0;
for (course_id, course_data) in courses_data.iter() {
let course_id: usize = course_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?;
let course_segments_data = course_data
.get("segments")
.and_then(|v| v.as_object())
.ok_or(format!(
"No 'segments' object found for course {}",
course_id
))?;
// Skip courses without segment in the relevant track
if !course_segments_data.contains_key(&format!("{}", track_id)) {
continue;
}
// Skip already cancelled courses (if wanted). Only add their id to `skipped_course_ids`
if ignore_inactive_courses
&& !(course_segments_data
.get(&format!("{}", track_id))
.and_then(|v| v.as_bool())
.ok_or(format!("Segment of course {} is not a boolean.", course_id))?)
{
skipped_course_ids.push(course_id);
continue;
}
let course_name = format!(
"{}. {}",
course_data
.get("nr")
.and_then(|v| v.as_str())
.ok_or(format!("No 'nr' found for course {}", course_id))?,
course_data
.get("shortname")
.and_then(|v| v.as_str())
.ok_or(format!("No 'shortname' found for course {}", course_id))?
);
courses.push(crate::Course {
index: i,
dbid: course_id as usize,
name: course_name,
num_max: course_data
.get("max_size")
.and_then(|v| v.as_u64())
.unwrap_or(25) as usize,
num_min: course_data
.get("min_size")
.and_then(|v| v.as_u64())
.unwrap_or(0) as usize,
instructors: Vec::new(),
room_offset: 0,
fixed_course: false,
});
i += 1;
}
// Store, how many instructors attendees are already set for each course (only relevant if
// ignore_assigned == true). The vector holds a tuple
// (num_hidden_instructors, num_hidden_attendees) for each course in the same order as the
// `courses` vector.
let mut invisible_course_participants = vec![(0usize, 0usize); courses.len()];
let mut courses_by_id: HashMap<u64, &mut crate::Course> =
courses.iter_mut().map(|r| (r.dbid as u64, r)).collect();
// Parse Registrations
let mut registrations = Vec::new();
let registrations_data = data
.get("registrations")
.and_then(|v| v.as_object())
.ok_or("No 'registrations' object found in data.".to_owned())?;
let mut i = 0;
for (reg_id, reg_data) in registrations_data {
let reg_id: u64 = reg_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?;
// Check registration status to skip irrelevant registrations
let rp_data = reg_data
.get("parts")
.and_then(|v| v.as_object())
.ok_or(format!("No 'parts' found in registration {}", reg_id))?
.get(&format!("{}", part_id))
.and_then(|v| v.as_object());
if let None = rp_data {
continue;
}
let rp_data = rp_data.unwrap();
if rp_data
.get("status")
.and_then(|v| v.as_i64())
.ok_or(format!(
"Missing 'status' in registration_part record of reg {}",
reg_id
))?
!= 2
{
continue;
}
// Parse persona attributes
let persona_data = reg_data
.get("persona")
.and_then(|v| v.as_object())
.ok_or(format!("Missing 'persona' in registration {}", reg_id))?;
let reg_name = format!(
"{} {}",
persona_data
.get("given_names")
.and_then(|v| v.as_str())
.ok_or(format!("No 'given_name' found for registration {}", reg_id))?,
persona_data
.get("family_name")
.and_then(|v| v.as_str())
.ok_or(format!(
"No 'family_name' found for registration {}",
reg_id
))?
);
// Get registration track data
let rt_data = reg_data
.get("tracks")
.and_then(|v| v.as_object())
.ok_or(format!("No 'tracks' found in registration {}", reg_id))?
.get(&format!("{}", track_id))
.and_then(|v| v.as_object())
.ok_or(format!(
"Registration track data not present for registration {}",
reg_id
))?;
// Skip already assigned participants (if wanted)
if ignore_assigned {
// Check if course_id is an integer and get this integer
if let Some(course_id) = rt_data.get("course_id").and_then(|v| v.as_u64()) {
// Add participant to the invisible_course_participants of this course ...
if let Some(course) = courses_by_id.get(&course_id) {
match rt_data.get("course_instructor").and_then(|v| v.as_u64()) {
// In case, they are (invisible) instructor of the course ...
Some(c) if c == course_id => {
invisible_course_participants[course.index].0 += 1;
},
// In case, they are (invisible) attendee of the course ...
_ => {
invisible_course_participants[course.index].1 += 1;
}
}
}
continue;
}
}
// Parse course choices
let choices_data = rt_data
.get("choices")
.and_then(|v| v.as_array())
.ok_or(format!(
"No 'choices' found in registration {}'s track data",
reg_id
))?;
let mut choices = Vec::<usize>::new();
for v in choices_data {
let course_id = v.as_u64().ok_or("Course choice is no integer.")?;
if let Some(course) = courses_by_id.get(&course_id) {
choices.push(course.index);
} else if !skipped_course_ids.contains(&(course_id as usize)) {
return Err(format!(
"Course choice {} of registration {} does not exist.", course_id, reg_id));
}
}
// Filter out registrations without choices
if choices.len() == 0 {
if choices_data.len() > 0 {
info!("Ignoring participant {} (id {}), who only chose cancelled courses.",
reg_name, reg_id);
}
continue;
}
// Add course instructors to courses
if let Some(instructed_course) = rt_data
.get("course_instructor")
.and_then(|v| v.as_u64()) {
if let Some(course) = courses_by_id.get_mut(&instructed_course) {
course.instructors.push(i);
} else if !skipped_course_ids.contains(&(instructed_course as usize)) {
return Err(format!(
"Instructed course {} of registration {} does not exist.",
instructed_course,
reg_id));
}
}
registrations.push(crate::Participant {
index: i,
dbid: reg_id as usize,
name: reg_name,
choices,
});
i += 1;
}
// Subtract invisible course attendees from course participant bounds
// Prevent courses with invisible course participants from being cancelled and add
// invisible course participants to room_offset
    for course in courses.iter_mut() {
        let invisible_course_attendees = invisible_course_participants[course.index].1;
        let total_invisible_course_participants =
            invisible_course_participants[course.index].0 + invisible_course_attendees;
course.num_min = if invisible_course_attendees > course.num_min
{
0
} else {
course.num_min - invisible_course_attendees
};
course.num_max = if invisible_course_attendees > course.num_max
{
0
} else {
course.num_max - invisible_course_attendees
};
course.fixed_course = total_invisible_course_participants != 0;
course.room_offset += total_invisible_course_participants;
}
Ok((
registrations,
courses,
ImportAmbienceData {
event_id: data
.get("id")
.and_then(|v| v.as_u64())
.ok_or("No event 'id' found in data")?,
track_id,
},
))
}
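// Usage sketch (a minimal example; assumes `export.json` is a partial export
// containing course track 3, and unwraps errors for brevity):
//   let file = std::fs::File::open("export.json").unwrap();
//   let (participants, courses, ambience) =
//       read(file, Some(3), false, false).unwrap();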
/// Write the calculated course assignment as a CdE Datenbank partial import JSON string to a Writer
/// (e.g. an output file).
pub fn write<W: std::io::Write>(
writer: W,
assignment: &Assignment,
participants: &Vec<Participant>,
courses: &Vec<Course>,
ambience_data: ImportAmbienceData,
) -> Result<(), String> {
// Calculate course sizes
let mut course_size = vec![0usize; courses.len()];
for (_p, c) in assignment.iter().enumerate() {
course_size[*c] += 1;
}
let registrations_json = assignment
.iter()
.enumerate()
.map(|(pid, cid)| {
(
format!("{}", participants[pid].dbid),
json!({
"tracks": {
format!("{}", ambience_data.track_id): {
"course_id": courses[*cid].dbid
}
}}),
)
})
.collect::<serde_json::Map<String, serde_json::Value>>();
let courses_json = course_size
.iter()
.enumerate()
.map(|(cid, size)| {
(
format!("{}", courses[cid].dbid),
json!({
"segments": {
format!("{}", ambience_data.track_id): *size > 0
}}),
)
})
.collect::<serde_json::Map<String, serde_json::Value>>();
let data = json!({
"EVENT_SCHEMA_VERSION": [15, 4],
"kind": "partial",
"id": ambience_data.event_id,
"timestamp": Utc::now().to_rfc3339_opts(SecondsFormat::Millis, false),
"courses": courses_json,
"registrations": registrations_json
});
serde_json::to_writer(writer, &data).map_err(|e| format!("{}", e))?;
Ok(())
}
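// Usage sketch (assumes `assignment[i]` holds the chosen course index for
// participant `i`, matching the structures produced by `read` above):
//   let out = std::fs::File::create("assignment.json").unwrap();
//   write(out, &assignment, &participants, &courses, ambience).unwrap();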
/// Helper function to find the specified course track or the single course track, if the event has
/// only one.
///
/// # Arguments
/// * parts_data: The JSON 'parts' object from the 'event' part of the export file
/// * track: The course track selected by the user (if any)
///
/// # Returns
/// part_id and track_id of the chosen course track or a user readable error string
fn find_track(
parts_data: &serde_json::Map<String, serde_json::Value>,
track: Option<u64>,
) -> Result<(u64, u64), String> {
match track {
// If a specific course track id is given, search for that id
Some(t) => {
for (part_id, part) in parts_data {
let tracks_data = part
.get("tracks")
.and_then(|v| v.as_object())
.ok_or("Missing 'tracks' in event part.")?;
for (track_id, _track) in tracks_data {
if track_id
.parse::<u64>()
.map_err(|e: std::num::ParseIntError| e.to_string())?
== t
{
return Ok((
part_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?,
track_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?,
));
}
}
}
Err(format!("Could not find course track with id {}.", t))
}
// Otherwise, check if there is only a single course track
None => {
let mut result: Option<(u64, u64)> = None;
for (part_id, part) in parts_data {
let tracks_data = part
.get("tracks")
.and_then(|v| v.as_object())
.ok_or("Missing 'tracks' in event part.")?;
for (track_id, _track) in tracks_data {
if let Some(_) = result {
return Err(format!(
"Event has more than one course track. Please select one of the \
tracks:\n{}",
track_summary(parts_data)?
));
}
result = Some((
part_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?,
track_id
.parse()
.map_err(|e: std::num::ParseIntError| e.to_string())?,
));
}
}
result.ok_or("Event has no course track.".to_owned())
}
}
}
/// Helper function to generate a summary of the event's tracks and their IDs.
///
/// # Arguments
/// * parts_data: The JSON 'parts' object from the 'event' part of the export file
///
/// # Returns
/// A String containing a listing of the track ids and names to be printed to the command line
///
/// # Errors
/// Returns an error String, when
///
fn track_summary(
parts_data: &serde_json::Map<String, serde_json::Value>,
) -> Result<String, String> {
let mut tracks = Vec::new();
let mut max_id_len = 0;
for (_part_id, part) in parts_data {
let tracks_data = part
.get("tracks")
.and_then(|v| v.as_object())
.ok_or("Missing 'tracks' in event part.")?;
for (track_id, track) in tracks_data {
max_id_len = max(max_id_len, track_id.len());
tracks.push((
track_id,
track
.get("title")
.and_then(|v| v.as_str())
.ok_or("Missing 'title' in event track.")?,
track
.get("sortkey")
.and_then(|v| v.as_i64())
.ok_or("Missing 'sortkey' in event track.")?,
));
}
}
tracks.sort_by_key(|e| e.2);
let result = tracks
.iter()
.map(|(id, title, _)| format!("{:>1$} : {2}", id, max_id_len, title))
.collect::<Vec<_>>()
.join("\n");
return Ok(result);
}
#[cfg(test)]
mod tests {
use crate::{Assignment, Course, Participant};
#[test]
fn parse_testaka_sitzung() {
let data = include_bytes!("test_ressources/TestAka_partial_export_event.json");
let (participants, courses, import_ambience) =
super::read(&data[..], Some(3), false, false).unwrap();
super::super::assert_data_consitency(&participants, &courses);
// Check courses
// Course "γ. Kurz" is cancelled in this track, thus it should not exist in the parsed data
assert_eq!(courses.len(), 4);
assert!(find_course_by_id(&courses, 3).is_none());
assert_eq!(find_course_by_id(&courses, 5).unwrap().name, "ε. Backup");
assert_eq!(find_course_by_id(&courses, 5).unwrap().instructors.len(), 0);
assert_eq!(find_course_by_id(&courses, 1).unwrap().num_min, 2);
assert_eq!(find_course_by_id(&courses, 1).unwrap().num_max, 10);
assert_eq!(find_course_by_id(&courses, 1).unwrap().instructors.len(), 1);
assert_eq!(
find_course_by_id(&courses, 1).unwrap().instructors[0],
find_participant_by_id(&participants, 2).unwrap().index
);
for c in courses.iter() {
assert_eq!(
c.room_offset, 0,
"room_offset of course {} (dbid {}) is not 0",
c.index, c.dbid
);
assert_eq!(
c.fixed_course, false,
"course {} (dbid {}) is fixed",
c.index, c.dbid
);
}
// Check participants
assert_eq!(participants.len(), 5);
assert_eq!(
find_participant_by_id(&participants, 2).unwrap().name,
"<NAME>"
);
assert_eq!(
find_participant_by_id(&participants, 2).unwrap().choices,
vec![
find_course_by_id(&courses, 4).unwrap().index,
find_course_by_id(&courses, 2).unwrap().index
]
);
// Check import_ambience
assert_eq!(import_ambience.event_id, 1);
assert_eq!(import_ambience.track_id, 3);
}
#[test]
fn parse_testaka_other_tracks() {
let data = include_bytes!("test_ressources/TestAka_partial_export_event.json");
// Check that only participants are parsed (no not_applied, applied, waitlist, guest,
// cancelled or rejected registration parts)
// Morgenkreis
let (participants, courses, _import_ambience) =
super::read(&data[..], Some(1), false, false).unwrap();
super::super::assert_data_consitency(&participants, &courses);
assert_eq!(courses.len(), 4);
assert_eq!(participants.len(), 2);
assert!(find_participant_by_id(&participants, 3).is_some());
// Kaffee
let (participants, courses, _import_ambience) =
super::read(&data[..], Some(2), false, false).unwrap();
super::super::assert_data_consitency(&participants, &courses);
assert_eq!(courses.len(), 4);
assert_eq!(participants.len(), 2);
assert!(find_participant_by_id(&participants, 3).is_some());
}
#[test]
fn test_no_track_error() {
let data = include_bytes!("test_ressources/TestAka_partial_export_event.json");
// Check that only participants are parsed (no not_applied, applied, waitlist, guest,
// cancelled or rejected registration parts)
// Morgenkreis
let result = super::read(&data[..], None, false, false);
assert!(result.is_err());
assert!(result.err().unwrap().find("Kaffeekränzchen").is_some());
}
#[test]
fn test_ignore_assigned() {
let data = include_bytes!("test_ressources/TestAka_partial_export_event.json");
let (participants, courses, _import_ambience) =
super::read(&data[..], Some(3), false, true).unwrap();
super::super::assert_data_consitency(&participants, &courses);
assert_eq!(courses.len(), 4);
assert_eq!(find_course_by_id(&courses, 1).unwrap().fixed_course, true);
assert_eq!(find_course_by_id(&courses, 1).unwrap().room_offset, 3);
assert_eq!(find_course_by_id(&courses, 4).unwrap().fixed_course, false);
assert_eq!(find_course_by_id(&courses, 4).unwrap().room_offset, 0);
assert_eq!(participants.len(), 2);
assert!(find_participant_by_id(&participants, 2).is_none());
assert!(find_participant_by_id(&participants, 4).is_none());
}
#[test]
fn test_ignore_cancelled() {
let data = include_bytes!("test_ressources/TestAka_partial_export_event.json");
let (participants, courses, _import_ambience) =
super::read(&data[..], Some(3), true, false).unwrap();
super::super::assert_data_consitency(&participants, &courses);
assert_eq!(courses.len(), 3);
assert!(find_course_by_id(&courses, 3).is_none());
assert!(find_course_by_id(&courses, 5).is_none());
assert_eq!(participants.len(), 5);
}
// TODO test parsing single track event
#[test]
fn test_write_result() {
let courses = vec![
Course {
index: 0,
dbid: 1,
name: String::from("<NAME>"),
num_max: 10 - 1,
num_min: 3 - 1,
instructors: vec![2],
room_offset: 0,
fixed_course: false,
},
Course {
index: 1,
dbid: 2,
name: String::from("<NAME>"),
num_max: 20,
num_min: 10,
instructors: vec![],
room_offset: 0,
fixed_course: false,
},
Course {
index: 2,
dbid: 4,
name: String::from("<NAME>"),
num_max: 25,
num_min: 0,
instructors: vec![2],
room_offset: 0,
fixed_course: false,
},
Course {
index: 3,
dbid: 5,
name: String::from("<NAME>"),
num_max: 25,
num_min: 0,
instructors: vec![2],
room_offset: 0,
fixed_course: false,
},
];
let participants = vec![
Participant {
index: 0,
dbid: 1,
name: String::from("<NAME>"),
choices: vec![0, 2],
},
Participant {
index: 1,
dbid: 2,
name: String::from("<NAME>"),
choices: vec![2, 1],
},
Participant {
index: 2,
dbid: 3,
name: String::from("<NAME>"),
choices: vec![1, 2],
},
Participant {
index: 3,
dbid: 4,
name: String::from("<NAME>"),
choices: vec![0, 1],
},
];
super::super::assert_data_consitency(&participants, &courses);
let ambience_data = super::ImportAmbienceData {
event_id: 1,
track_id: 3,
};
let assignment: Assignment = vec![0, 0, 2, 0];
let mut buffer = Vec::<u8>::new();
let result = super::write(
&mut buffer,
&assignment,
&participants,
&courses,
ambience_data,
);
assert!(result.is_ok());
// Parse buffer as JSON file
let data: serde_json::Value = serde_json::from_reader(&buffer[..]).unwrap();
// Check course segments (cancelled courses)
let courses_data = data["courses"].as_object().unwrap();
assert_eq!(courses_data.len(), 4);
check_output_course(courses_data, "1", "3", true);
check_output_course(courses_data, "2", "3", false);
let registrations_data = data["registrations"].as_object().unwrap();
assert_eq!(registrations_data.len(), 4);
check_output_registration(registrations_data, "1", "3", 1);
check_output_registration(registrations_data, "3", "3", 4);
}
fn find_course_by_id(courses: &Vec<Course>, dbid: usize) -> Option<&Course> {
courses.iter().filter(|c| c.dbid == dbid).next()
}
fn find_participant_by_id(
participants: &Vec<Participant>,
dbid: usize,
) -> Option<&Participant> {
participants.iter().filter(|c| c.dbid == dbid).next()
}
/// Helper function for test_write_result() to check a course entry in the resulting json data
fn check_output_course(
courses_data: &serde_json::Map<String, serde_json::Value>,
course_id: &str,
track_id: &str,
active: bool,
) {
let course_data = courses_data
.get(course_id)
.and_then(|v| v.as_object())
.unwrap_or_else(|| panic!("Course id {} not found or not an object", course_id));
assert_eq!(
course_data.len(),
1,
"Course id {} has more than one data entry",
course_id
);
let course_segments = course_data
.get("segments")
.and_then(|v| v.as_object())
.unwrap_or_else(|| panic!("Course id {} has no 'segments' entry", course_id));
assert_eq!(
course_segments.len(),
1,
"Course id {} has more than one segment defined",
course_id
);
assert_eq!(
course_segments
.get(track_id)
.and_then(|v| v.as_bool())
.unwrap_or_else(|| panic!(
"Course id {} has no segment id {} or it is not bool",
course_id, track_id
)),
active,
"Course id {} segment has wrong active state",
course_id
);
}
/// Helper function for test_write_result() to check a registration entry in the resulting json
/// data
fn check_output_registration(
registrations_data: &serde_json::Map<String, serde_json::Value>,
reg_id: &str,
track_id: &str,
course_id: u64,
) {
let reg_data = registrations_data
.get(reg_id)
.and_then(|v| v.as_object())
.unwrap_or_else(|| panic!("Registration id {} not found or not an object", reg_id));
assert_eq!(
reg_data.len(),
1,
"Registration id {} has more than one data entry",
reg_id
);
let reg_tracks = reg_data
.get("tracks")
.and_then(|v| v.as_object())
.unwrap_or_else(|| panic!("Course id {} has no 'tracks' entry", reg_id));
assert_eq!(
reg_tracks.len(),
1,
"Registration id {} has more than one track defined",
reg_id
);
let reg_track = reg_tracks
.get(track_id)
.and_then(|v| v.as_object())
.unwrap_or_else(|| {
panic!(
"Registration id {} has no track id {} or it is not an object",
reg_id, track_id
)
});
assert_eq!(
reg_track
.get("course_id")
.and_then(|v| v.as_u64())
.unwrap_or_else(|| panic!(
"Registration id {} has no 'course_id' entry or it is not an uint",
reg_id
)),
course_id,
"Registration id {} has a wrong course assignment",
reg_id
);
}
}
|
package dd2480.group4.storage;
import java.io.IOException;
import java.nio.file.Path;
/**
* The interface for handling repositories.
*/
public interface RepositoryHandler {
/**
* Creates a temporary directory and returns the path to it.
*
* @return the path to the directory
* @throws IOException if it fails to create the repository
*/
Path createDirectory() throws IOException;
/**
* Clones a repository to the given path.
*
* @param path the location where the repositories is cloned to.
* @param repo the http-address to the repo to be cloned.
*/
void cloneGit(Path path, String repo) throws IOException, InterruptedException;
/**
* Clones a repository to the given path.
*
* @param path the location where the repositories is cloned to.
* @param repo the http-address to the repo to be cloned.
* @param hashId the commit hash to checkout
*/
void cloneGit(Path path, String repo, String hashId) throws IOException, InterruptedException;
/**
* Deletes the directory at the given path.
*
* @param path the path to the directory which is to be deleted.
*/
void deleteDirectory(Path path) throws IOException;
}
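A usage sketch of the interface (the implementation class, repository URL and commit hash are hypothetical placeholders):

RepositoryHandler handler = new GitRepositoryHandler(); // hypothetical implementation
Path dir = handler.createDirectory();
handler.cloneGit(dir, "https://github.com/example/repo.git", "abc1234");
// ... run checks on the cloned working tree ...
handler.deleteDirectory(dir);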
|
import java.util.HashMap;
// Note: LazyHashMap (used in the constructor below) is assumed to come from the Azure SDK's core utilities.

/**
 * Parameters supplied to the Create Hosted Service operation.
 */
public class HostedServiceCreateParameters {
private String affinityGroup;
/**
* Optional. The name of an existing affinity group associated with this
* subscription. Required if Location is not specified. This name is a GUID
* and can be retrieved by examining the name element of the response body
* returned by the List Affinity Groups operation. Specify either Location
* or AffinityGroup, but not both. To list available affinity groups, use
* the List Affinity Groups operation.
* @return The AffinityGroup value.
*/
public String getAffinityGroup() {
return this.affinityGroup;
}
/**
* Optional. The name of an existing affinity group associated with this
* subscription. Required if Location is not specified. This name is a GUID
* and can be retrieved by examining the name element of the response body
* returned by the List Affinity Groups operation. Specify either Location
* or AffinityGroup, but not both. To list available affinity groups, use
* the List Affinity Groups operation.
* @param affinityGroupValue The AffinityGroup value.
*/
public void setAffinityGroup(final String affinityGroupValue) {
this.affinityGroup = affinityGroupValue;
}
private String description;
/**
* Optional. A description for the cloud service. The description can be up
* to 1024 characters in length.
* @return The Description value.
*/
public String getDescription() {
return this.description;
}
/**
* Optional. A description for the cloud service. The description can be up
* to 1024 characters in length.
* @param descriptionValue The Description value.
*/
public void setDescription(final String descriptionValue) {
this.description = descriptionValue;
}
private HashMap<String, String> extendedProperties;
/**
* Optional. Represents the name of an extended cloud service property. Each
* extended property must have a defined name and a value. You can have a
* maximum of 50 extended property name and value pairs. The maximum length
* of the name element is 64 characters, only alphanumeric characters and
* underscores are valid in the name, and it must start with a letter.
* Attempting to use other characters, starting with a non-letter
* character, or entering a name that is identical to that of another
* extended property owned by the same service will result in a status code
* 400 (Bad Request) error. Each extended property value has a maximum
* length of 255 characters.
* @return The ExtendedProperties value.
*/
public HashMap<String, String> getExtendedProperties() {
return this.extendedProperties;
}
/**
* Optional. Represents the name of an extended cloud service property. Each
* extended property must have a defined name and a value. You can have a
* maximum of 50 extended property name and value pairs. The maximum length
* of the name element is 64 characters, only alphanumeric characters and
* underscores are valid in the name, and it must start with a letter.
* Attempting to use other characters, starting with a non-letter
* character, or entering a name that is identical to that of another
* extended property owned by the same service will result in a status code
* 400 (Bad Request) error. Each extended property value has a maximum
* length of 255 characters.
* @param extendedPropertiesValue The ExtendedProperties value.
*/
public void setExtendedProperties(final HashMap<String, String> extendedPropertiesValue) {
this.extendedProperties = extendedPropertiesValue;
}
private String label;
/**
* Required. A name for the cloud service. The name can be up to 100
* characters in length. The name can be used to identify the storage
* account for your tracking purposes.
* @return The Label value.
*/
public String getLabel() {
if (this.label == null) {
return this.getServiceName();
} else {
return this.label;
}
}
/**
* Required. A name for the cloud service. The name can be up to 100
* characters in length. The name can be used to identify the storage
* account for your tracking purposes.
* @param labelValue The Label value.
*/
public void setLabel(final String labelValue) {
this.label = labelValue;
}
private String location;
/**
* Optional. The location where the cloud service will be created. Required
* if AffinityGroup is not specified. Specify either Location or
* AffinityGroup, but not both. To list available locations, use the List
* Locations operation.
* @return The Location value.
*/
public String getLocation() {
return this.location;
}
/**
* Optional. The location where the cloud service will be created. Required
* if AffinityGroup is not specified. Specify either Location or
* AffinityGroup, but not both. To list available locations, use the List
* Locations operation.
* @param locationValue The Location value.
*/
public void setLocation(final String locationValue) {
this.location = locationValue;
}
private String reverseDnsFqdn;
/**
* Optional. Dns address to which the cloud service's IP address resolves
* when queried using a reverse Dns query.
* @return The ReverseDnsFqdn value.
*/
public String getReverseDnsFqdn() {
return this.reverseDnsFqdn;
}
/**
* Optional. Dns address to which the cloud service's IP address resolves
* when queried using a reverse Dns query.
* @param reverseDnsFqdnValue The ReverseDnsFqdn value.
*/
public void setReverseDnsFqdn(final String reverseDnsFqdnValue) {
this.reverseDnsFqdn = reverseDnsFqdnValue;
}
private String serviceName;
/**
* Required. A name for the cloud service that is unique within Azure. This
* name is the DNS prefix name and can be used to access the service.
* @return The ServiceName value.
*/
public String getServiceName() {
return this.serviceName;
}
/**
* Required. A name for the cloud service that is unique within Azure. This
* name is the DNS prefix name and can be used to access the service.
* @param serviceNameValue The ServiceName value.
*/
public void setServiceName(final String serviceNameValue) {
this.serviceName = serviceNameValue;
}
/**
* Initializes a new instance of the HostedServiceCreateParameters class.
*
*/
public HostedServiceCreateParameters() {
this.setExtendedProperties(new LazyHashMap<String, String>());
}
/**
* Initializes a new instance of the HostedServiceCreateParameters class
* with required arguments.
*
* @param serviceName The service name.
* @param label The label.
*/
public HostedServiceCreateParameters(String serviceName, String label) {
this();
if (serviceName == null) {
throw new NullPointerException("serviceName");
}
if (label == null) {
throw new NullPointerException("label");
}
this.setServiceName(serviceName);
this.setLabel(label);
}
}
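A usage sketch (hypothetical values; note that exactly one of Location or AffinityGroup should be set):

HostedServiceCreateParameters params =
    new HostedServiceCreateParameters("myservice", "My Service");
params.setLocation("West US");
params.getExtendedProperties().put("team", "platform");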
// CheckResponse checks response headers and a copied stream of the body.
func CheckResponse(r *http.Response, teedBody io.Reader) error {
if r.StatusCode != http.StatusOK {
return fmt.Errorf("%v %v: %s",
r.Request.Method, r.Request.URL, r.Status,
)
}
var er ErrorResponse
err := json.NewDecoder(teedBody).Decode(&er)
if err == nil && er.Code != "" {
er.Response = r
return &er
}
return nil
}
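A usage sketch of the tee pattern the function expects (the buffered copy only contains what has already been read from the body, so consume the payload before calling CheckResponse; the URL is a placeholder, and imports of net/http, bytes and io are assumed):

resp, err := http.Get("https://api.example.com/v1/items")
if err != nil {
    // handle transport error
}
defer resp.Body.Close()

var buf bytes.Buffer
body := io.TeeReader(resp.Body, &buf)
// ... decode the expected payload from body ...
if err := CheckResponse(resp, &buf); err != nil {
    // HTTP-level or API-level error
}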
import DatabaseError, { DatabaseErrorSubclassOptions } from '../database-error';
export enum RelationshipType {
parent = 'parent',
child = 'child',
}
interface ForeignKeyConstraintErrorOptions {
table?: string;
fields?: { [field: string]: string };
value?: unknown;
index?: string;
reltype?: RelationshipType;
}
/**
* Thrown when a foreign key constraint is violated in the database
*/
class ForeignKeyConstraintError extends DatabaseError {
table: string | undefined;
fields: { [field: string]: string } | undefined;
value: unknown;
index: string | undefined;
reltype: RelationshipType | undefined;
constructor(
options: ForeignKeyConstraintErrorOptions & DatabaseErrorSubclassOptions
) {
options = options || {};
options.parent = options.parent || { sql: '', name: '', message: '' };
super(options.parent, { stack: options.stack });
this.name = 'SequelizeForeignKeyConstraintError';
this.message =
options.message || options.parent.message || 'Database Error';
this.fields = options.fields;
this.table = options.table;
this.value = options.value;
this.index = options.index;
this.reltype = options.reltype;
}
}
export default ForeignKeyConstraintError;
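A usage sketch for catching this error class (Sequelize-style usage assumed; the Order model and its foreign key are hypothetical):

try {
  await Order.create({ customerId: 999 }); // no such customer -> FK violation
} catch (err) {
  if (err instanceof ForeignKeyConstraintError) {
    console.error(`FK violation on table ${err.table}`, err.fields);
  } else {
    throw err;
  }
}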
|
/**
 * Restore the contents of a {@link SnapshottableMemory} from a snapshot
 * file. It is up to the memory to clear its contents beforehand or to
 * handle conflicts otherwise. The memory must not be modified while
 * its contents are being restored. This method does nothing if the
 * snapshot file does not exist or is empty.
 *
 * @param memory a non-null SnapshottableMemory
 */
public void restoreFromSnapshot(SnapshottableMemory memory) {
Validate.notNull(this.snapshotFile, "snapshotFile not set");
if (!this.snapshotFile.exists()) {
return;
}
try {
this.basicRestoreFromSnapshot(memory);
}
catch (Exception exc) {
throw new SnapshotException("failed to restore contents of snapshot file " + this.snapshotFile, exc);
}
}
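A usage sketch (both the memory implementation and the enclosing snapshotter object are hypothetical):

SnapshottableMemory memory = new InMemoryStore(); // hypothetical implementation
snapshotter.restoreFromSnapshot(memory); // no-op if the snapshot file is absent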
Osteoblastic metastasis from breast affecting the condyle misinterpreted as temporomandibular joint disorder.
Sir, Despite the low incidence of metastases in jaw bones compared with the rest of the skeleton, metastases are important because of the poor prognosis they carry. Their presence can indicate a yet unknown lesion, a disseminated cancer, or recurrence of the disease. We report a case of a metastatic adenocarcinoma of the breast to the mandible affecting the condyle, showing a unique osteoblastic radiologic appearance and symptoms similar to those of temporomandibular joint dysfunction.
def remove(self, entity: Entity) -> None:
    # Remove the entity from both the cache and the backing object store,
    # but only if the cached object is this very instance; otherwise a
    # different entity is already registered under the same id.
    cache = self._cache
    obj_store = self._obj_store
    key = entity.id
    if cache[key] is entity:
        del obj_store[key]
        del cache[key]
        Persistent.remove_from(entity)
    else:
        raise DuplicateIdError
import toLower from 'lodash-es/toLower';
toLower('a string');
|
// RUN: %clang_cc1 -fsyntax-only -verify -std=c++11 %s
struct notlit { // expected-note {{not literal because}}
notlit() {}
};
struct notlit2 {
notlit2() {}
};
// valid declarations
constexpr int i1 = 0;
constexpr int f1() { return 0; }
struct s1 {
constexpr static int mi1 = 0;
const static int mi2;
};
constexpr int s1::mi2 = 0;
// invalid declarations
// not a definition of an object
constexpr extern int i2; // expected-error {{constexpr variable declaration must be a definition}}
// not a literal type
constexpr notlit nl1; // expected-error {{constexpr variable cannot have non-literal type 'const notlit'}}
// function parameters
void f2(constexpr int i) {} // expected-error {{function parameter cannot be constexpr}}
// non-static member
struct s2 {
constexpr int mi1; // expected-error {{non-static data member cannot be constexpr; did you intend to make it const?}}
static constexpr int mi2; // expected-error {{requires an initializer}}
mutable constexpr int mi3 = 3; // expected-error-re {{non-static data member cannot be constexpr$}} expected-error {{'mutable' and 'const' cannot be mixed}}
};
// typedef
typedef constexpr int CI; // expected-error {{typedef cannot be constexpr}}
// tag
constexpr class C1 {}; // expected-error {{class cannot be marked constexpr}}
constexpr struct S1 {}; // expected-error {{struct cannot be marked constexpr}}
constexpr union U1 {}; // expected-error {{union cannot be marked constexpr}}
constexpr enum E1 {}; // expected-error {{enum cannot be marked constexpr}}
template <typename T> constexpr class TC1 {}; // expected-error {{class cannot be marked constexpr}}
template <typename T> constexpr struct TS1 {}; // expected-error {{struct cannot be marked constexpr}}
template <typename T> constexpr union TU1 {}; // expected-error {{union cannot be marked constexpr}}
class C2 {} constexpr; // expected-error {{class cannot be marked constexpr}}
struct S2 {} constexpr; // expected-error {{struct cannot be marked constexpr}}
union U2 {} constexpr; // expected-error {{union cannot be marked constexpr}}
enum E2 {} constexpr; // expected-error {{enum cannot be marked constexpr}}
constexpr class C3 {} c3 = C3();
constexpr struct S3 {} s3 = S3();
constexpr union U3 {} u3 = {};
constexpr enum E3 { V3 } e3 = V3;
class C4 {} constexpr c4 = C4();
struct S4 {} constexpr s4 = S4();
union U4 {} constexpr u4 = {};
enum E4 { V4 } constexpr e4 = V4;
constexpr int; // expected-error {{constexpr can only be used in variable and function declarations}}
// redeclaration mismatch
constexpr int f3(); // expected-note {{previous declaration is here}}
int f3(); // expected-error {{non-constexpr declaration of 'f3' follows constexpr declaration}}
int f4(); // expected-note {{previous declaration is here}}
constexpr int f4(); // expected-error {{constexpr declaration of 'f4' follows non-constexpr declaration}}
template<typename T> constexpr T f5(T);
template<typename T> constexpr T f5(T); // expected-note {{previous}}
template<typename T> T f5(T); // expected-error {{non-constexpr declaration of 'f5' follows constexpr declaration}}
template<typename T> T f6(T); // expected-note {{here}}
template<typename T> constexpr T f6(T); // expected-error {{constexpr declaration of 'f6' follows non-constexpr declaration}}
// destructor
struct ConstexprDtor {
constexpr ~ConstexprDtor() = default; // expected-error {{destructor cannot be marked constexpr}}
};
// template stuff
template <typename T> constexpr T ft(T t) { return t; }
template <typename T> T gt(T t) { return t; }
struct S {
template<typename T> constexpr T f(); // expected-warning {{C++1y}}
template<typename T> T g() const;
};
// explicit specialization can differ in constepxr
template <> notlit ft(notlit nl) { return nl; }
template <> char ft(char c) { return c; } // expected-note {{previous}}
template <> constexpr char ft(char nl); // expected-error {{constexpr declaration of 'ft<char>' follows non-constexpr declaration}}
template <> constexpr int gt(int nl) { return nl; }
template <> notlit S::f() const { return notlit(); }
template <> constexpr int S::g() { return 0; } // expected-note {{previous}} expected-warning {{C++1y}}
template <> int S::g() const; // expected-error {{non-constexpr declaration of 'g<int>' follows constexpr declaration}}
// specializations can drop the 'constexpr' but not the implied 'const'.
template <> char S::g() { return 0; } // expected-error {{no function template matches}}
template <> double S::g() const { return 0; } // ok
constexpr int i3 = ft(1);
void test() {
// ignore constexpr when instantiating with non-literal
notlit2 nl2;
(void)ft(nl2);
}
// Examples from the standard:
constexpr int square(int x); // expected-note {{declared here}}
constexpr int bufsz = 1024;
constexpr struct pixel { // expected-error {{struct cannot be marked constexpr}}
int x;
int y;
constexpr pixel(int);
};
constexpr pixel::pixel(int a)
: x(square(a)), y(square(a)) // expected-note {{undefined function 'square' cannot be used in a constant expression}}
{ }
constexpr pixel small(2); // expected-error {{must be initialized by a constant expression}} expected-note {{in call to 'pixel(2)'}}
constexpr int square(int x) {
return x * x;
}
constexpr pixel large(4);
int next(constexpr int x) { // expected-error {{function parameter cannot be constexpr}}
return x + 1;
}
extern constexpr int memsz; // expected-error {{constexpr variable declaration must be a definition}}
|
def update_until_STOP(self, nodes, path, pathidx):
    # Walk `path` (a sequence of (node, edge) pairs) and materialize AST
    # nodes into `nodes` until a 'STOP' token is reached. SIBLING_EDGE is
    # assumed to be a module-level constant marking edges between
    # consecutive sibling statements.
    nodeidx = 0
    while path[pathidx][0] != 'STOP':
        node, edge = path[pathidx]
        if nodeidx >= len(nodes):
            # The path extends beyond the existing nodes: append a new
            # (empty) AST node of the appropriate shape.
            astnode = {}
            if node == 'DBranch':
                astnode['node'] = node
                astnode['_cond'] = []
                astnode['_then'] = []
                astnode['_else'] = []
                nodes.append(astnode)
            elif node == 'DExcept':
                astnode['node'] = node
                astnode['_try'] = []
                astnode['_catch'] = []
                nodes.append(astnode)
            elif node == 'DLoop':
                astnode['node'] = node
                astnode['_cond'] = []
                astnode['_body'] = []
                nodes.append(astnode)
            else:
                nodes.append({'node': 'DAPICall', '_call': node})
            nodeidx += 1
            pathidx += 1
            continue
        # An existing node: either step to its sibling or descend into it.
        astnode = nodes[nodeidx]
        if edge == SIBLING_EDGE:
            nodeidx += 1
            pathidx += 1
            continue
        if node == 'DBranch':
            self.update_DBranch(astnode, path, pathidx)
            return -1
        elif node == 'DExcept':
            self.update_DExcept(astnode, path, pathidx)
            return -1
        elif node == 'DLoop':
            self.update_DLoop(astnode, path, pathidx)
            return -1
        else:
            raise ValueError('Invalid node/edge: ' + str((node, edge)))
    return pathidx
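The walker above relies on module-level edge constants and on the update_DBranch/update_DExcept/update_DLoop helpers, none of which are shown; a minimal sketch of the assumed constants (the concrete values are hypothetical, only their identities matter to the walker):

SIBLING_EDGE = 'H'  # hypothetical: edge to the next sibling statement
CHILD_EDGE = 'V'    # hypothetical: edge descending into a branch/loop body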
use near_sdk_sim::{call, to_yocto};
use crate::common::utils::*;
pub mod common;
#[test]
fn owner_scenario_01() {
let (root, owner, pool, token1, _, _) = setup_pool_with_liquidity();
assert_eq!(balance_of(&token1, &pool.account_id()), to_yocto("105"));
call!(
root,
token1.ft_transfer_call(pool.valid_account_id(), to_yocto("10").into(), None, "".to_string()),
deposit = 1
)
.assert_success();
assert_eq!(balance_of(&token1, &pool.account_id()), to_yocto("115"));
println!("Owner Case 0101: only owner can retrieve unmanaged tokens");
let out_come = call!(
root,
pool.retrieve_unmanaged_token(token1.valid_account_id(), to_yocto("10").into()),
deposit = 1
);
assert!(!out_come.is_ok());
assert_eq!(get_error_count(&out_come), 1);
assert!(get_error_status(&out_come).contains("E100: no permission to invoke this"));
// println!("{}", get_error_status(&out_come));
println!("Owner Case 0102: owner retrieve unmanaged token but unregstered");
let out_come = call!(
owner,
pool.retrieve_unmanaged_token(token1.valid_account_id(), to_yocto("10").into()),
deposit = 1
);
assert!(!out_come.is_ok());
assert_eq!(get_error_count(&out_come), 1);
assert!(get_error_status(&out_come).contains("The account owner is not registered"));
assert_eq!(balance_of(&token1, &pool.account_id()), to_yocto("115"));
println!("Owner Case 0103: owner retrieve unmanaged tokens");
call!(
owner,
token1.storage_deposit(None, None),
deposit = to_yocto("1")
)
.assert_success();
let out_come = call!(
owner,
pool.retrieve_unmanaged_token(token1.valid_account_id(), to_yocto("10").into()),
deposit = 1
);
out_come.assert_success();
assert_eq!(get_error_count(&out_come), 0);
assert_eq!(balance_of(&token1, &pool.account_id()), to_yocto("105"));
assert_eq!(balance_of(&token1, &owner.account_id()), to_yocto("10"));
}
/*
* Copyright 1995-2016 The OpenSSL Project Authors. All Rights Reserved.
*
* Licensed under the OpenSSL license (the "License"). You may not use
* this file except in compliance with the License. You can obtain a copy
* in the file LICENSE in the source distribution or at
* https://www.openssl.org/source/license.html
*/
#include <stdio.h>
#include "internal/cryptlib.h"
#include <openssl/buffer.h>
#include <openssl/bn.h>
#include <openssl/objects.h>
#include <openssl/x509.h>
#include <openssl/x509v3.h>
#include "internal/asn1_int.h"
#ifndef OPENSSL_NO_STDIO
int X509_print_fp(FILE *fp, X509 *x)
{
return X509_print_ex_fp(fp, x, XN_FLAG_COMPAT, X509_FLAG_COMPAT);
}
int X509_print_ex_fp(FILE *fp, X509 *x, unsigned long nmflag,
unsigned long cflag)
{
BIO *b;
int ret;
if ((b = BIO_new(BIO_s_file())) == NULL) {
X509err(X509_F_X509_PRINT_EX_FP, ERR_R_BUF_LIB);
return 0;
}
BIO_set_fp(b, fp, BIO_NOCLOSE);
ret = X509_print_ex(b, x, nmflag, cflag);
BIO_free(b);
return ret;
}
#endif
int X509_print(BIO *bp, X509 *x)
{
return X509_print_ex(bp, x, XN_FLAG_COMPAT, X509_FLAG_COMPAT);
}
int X509_print_ex(BIO *bp, X509 *x, unsigned long nmflags,
unsigned long cflag)
{
long l;
int ret = 0, i;
char *m = NULL, mlch = ' ';
int nmindent = 0;
ASN1_INTEGER *bs;
EVP_PKEY *pkey = NULL;
const char *neg;
if ((nmflags & XN_FLAG_SEP_MASK) == XN_FLAG_SEP_MULTILINE) {
mlch = '\n';
nmindent = 12;
}
if (nmflags == X509_FLAG_COMPAT)
nmindent = 16;
if (!(cflag & X509_FLAG_NO_HEADER)) {
if (BIO_write(bp, "Certificate:\n", 13) <= 0)
goto err;
if (BIO_write(bp, " Data:\n", 10) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_VERSION)) {
l = X509_get_version(x);
if (l >= 0 && l <= 2) {
if (BIO_printf(bp, "%8sVersion: %ld (0x%lx)\n", "", l + 1, (unsigned long)l) <= 0)
goto err;
} else {
if (BIO_printf(bp, "%8sVersion: Unknown (%ld)\n", "", l) <= 0)
goto err;
}
}
if (!(cflag & X509_FLAG_NO_SERIAL)) {
if (BIO_write(bp, " Serial Number:", 22) <= 0)
goto err;
bs = X509_get_serialNumber(x);
if (bs->length <= (int)sizeof(long)) {
ERR_set_mark();
l = ASN1_INTEGER_get(bs);
ERR_pop_to_mark();
} else {
l = -1;
}
if (l != -1) {
unsigned long ul;
if (bs->type == V_ASN1_NEG_INTEGER) {
ul = 0 - (unsigned long)l;
neg = "-";
} else {
ul = l;
neg = "";
}
if (BIO_printf(bp, " %s%lu (%s0x%lx)\n", neg, ul, neg, ul) <= 0)
goto err;
} else {
neg = (bs->type == V_ASN1_NEG_INTEGER) ? " (Negative)" : "";
if (BIO_printf(bp, "\n%12s%s", "", neg) <= 0)
goto err;
for (i = 0; i < bs->length; i++) {
if (BIO_printf(bp, "%02x%c", bs->data[i],
((i + 1 == bs->length) ? '\n' : ':')) <= 0)
goto err;
}
}
}
if (!(cflag & X509_FLAG_NO_SIGNAME)) {
const X509_ALGOR *tsig_alg = X509_get0_tbs_sigalg(x);
if (BIO_puts(bp, " ") <= 0)
goto err;
if (X509_signature_print(bp, tsig_alg, NULL) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_ISSUER)) {
if (BIO_printf(bp, " Issuer:%c", mlch) <= 0)
goto err;
if (X509_NAME_print_ex(bp, X509_get_issuer_name(x), nmindent, nmflags)
< 0)
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_VALIDITY)) {
if (BIO_write(bp, " Validity\n", 17) <= 0)
goto err;
if (BIO_write(bp, " Not Before: ", 24) <= 0)
goto err;
if (!ASN1_TIME_print(bp, X509_get0_notBefore(x)))
goto err;
if (BIO_write(bp, "\n Not After : ", 25) <= 0)
goto err;
if (!ASN1_TIME_print(bp, X509_get0_notAfter(x)))
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_SUBJECT)) {
if (BIO_printf(bp, " Subject:%c", mlch) <= 0)
goto err;
if (X509_NAME_print_ex
(bp, X509_get_subject_name(x), nmindent, nmflags) < 0)
goto err;
if (BIO_write(bp, "\n", 1) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_PUBKEY)) {
X509_PUBKEY *xpkey = X509_get_X509_PUBKEY(x);
ASN1_OBJECT *xpoid;
X509_PUBKEY_get0_param(&xpoid, NULL, NULL, NULL, xpkey);
if (BIO_write(bp, " Subject Public Key Info:\n", 33) <= 0)
goto err;
if (BIO_printf(bp, "%12sPublic Key Algorithm: ", "") <= 0)
goto err;
if (i2a_ASN1_OBJECT(bp, xpoid) <= 0)
goto err;
if (BIO_puts(bp, "\n") <= 0)
goto err;
pkey = X509_get0_pubkey(x);
if (pkey == NULL) {
BIO_printf(bp, "%12sUnable to load Public Key\n", "");
ERR_print_errors(bp);
} else {
EVP_PKEY_print_public(bp, pkey, 16, NULL);
}
}
if (!(cflag & X509_FLAG_NO_IDS)) {
const ASN1_BIT_STRING *iuid, *suid;
X509_get0_uids(x, &iuid, &suid);
if (iuid != NULL) {
if (BIO_printf(bp, "%8sIssuer Unique ID: ", "") <= 0)
goto err;
if (!X509_signature_dump(bp, iuid, 12))
goto err;
}
if (suid != NULL) {
if (BIO_printf(bp, "%8sSubject Unique ID: ", "") <= 0)
goto err;
if (!X509_signature_dump(bp, suid, 12))
goto err;
}
}
if (!(cflag & X509_FLAG_NO_EXTENSIONS))
X509V3_extensions_print(bp, "X509v3 extensions",
X509_get0_extensions(x), cflag, 8);
if (!(cflag & X509_FLAG_NO_SIGDUMP)) {
const X509_ALGOR *sig_alg;
const ASN1_BIT_STRING *sig;
X509_get0_signature(&sig, &sig_alg, x);
if (X509_signature_print(bp, sig_alg, sig) <= 0)
goto err;
}
if (!(cflag & X509_FLAG_NO_AUX)) {
if (!X509_aux_print(bp, x, 0))
goto err;
}
ret = 1;
err:
OPENSSL_free(m);
return ret;
}
int X509_ocspid_print(BIO *bp, X509 *x)
{
unsigned char *der = NULL;
unsigned char *dertmp;
int derlen;
int i;
unsigned char SHA1md[SHA_DIGEST_LENGTH];
ASN1_BIT_STRING *keybstr;
X509_NAME *subj;
/*
* display the hash of the subject as it would appear in OCSP requests
*/
if (BIO_printf(bp, " Subject OCSP hash: ") <= 0)
goto err;
subj = X509_get_subject_name(x);
derlen = i2d_X509_NAME(subj, NULL);
if ((der = dertmp = OPENSSL_malloc(derlen)) == NULL)
goto err;
i2d_X509_NAME(subj, &dertmp);
if (!EVP_Digest(der, derlen, SHA1md, NULL, EVP_sha1(), NULL))
goto err;
for (i = 0; i < SHA_DIGEST_LENGTH; i++) {
if (BIO_printf(bp, "%02X", SHA1md[i]) <= 0)
goto err;
}
OPENSSL_free(der);
der = NULL;
/*
* display the hash of the public key as it would appear in OCSP requests
*/
if (BIO_printf(bp, "\n Public key OCSP hash: ") <= 0)
goto err;
keybstr = X509_get0_pubkey_bitstr(x);
if (keybstr == NULL)
goto err;
if (!EVP_Digest(ASN1_STRING_get0_data(keybstr),
ASN1_STRING_length(keybstr), SHA1md, NULL, EVP_sha1(),
NULL))
goto err;
for (i = 0; i < SHA_DIGEST_LENGTH; i++) {
if (BIO_printf(bp, "%02X", SHA1md[i]) <= 0)
goto err;
}
BIO_printf(bp, "\n");
return 1;
err:
OPENSSL_free(der);
return 0;
}
int X509_signature_dump(BIO *bp, const ASN1_STRING *sig, int indent)
{
const unsigned char *s;
int i, n;
n = sig->length;
s = sig->data;
for (i = 0; i < n; i++) {
if ((i % 18) == 0) {
if (BIO_write(bp, "\n", 1) <= 0)
return 0;
if (BIO_indent(bp, indent, indent) <= 0)
return 0;
}
if (BIO_printf(bp, "%02x%s", s[i], ((i + 1) == n) ? "" : ":") <= 0)
return 0;
}
if (BIO_write(bp, "\n", 1) != 1)
return 0;
return 1;
}
int X509_signature_print(BIO *bp, const X509_ALGOR *sigalg,
const ASN1_STRING *sig)
{
int sig_nid;
if (BIO_puts(bp, " Signature Algorithm: ") <= 0)
return 0;
if (i2a_ASN1_OBJECT(bp, sigalg->algorithm) <= 0)
return 0;
sig_nid = OBJ_obj2nid(sigalg->algorithm);
if (sig_nid != NID_undef) {
int pkey_nid, dig_nid;
const EVP_PKEY_ASN1_METHOD *ameth;
if (OBJ_find_sigid_algs(sig_nid, &dig_nid, &pkey_nid)) {
ameth = EVP_PKEY_asn1_find(NULL, pkey_nid);
if (ameth && ameth->sig_print)
return ameth->sig_print(bp, sigalg, sig, 9, 0);
}
}
if (sig)
return X509_signature_dump(bp, sig, 9);
else if (BIO_puts(bp, "\n") <= 0)
return 0;
return 1;
}
int X509_aux_print(BIO *out, X509 *x, int indent)
{
char oidstr[80], first;
STACK_OF(ASN1_OBJECT) *trust, *reject;
const unsigned char *alias, *keyid;
int keyidlen;
int i;
if (X509_trusted(x) == 0)
return 1;
trust = X509_get0_trust_objects(x);
reject = X509_get0_reject_objects(x);
if (trust) {
first = 1;
BIO_printf(out, "%*sTrusted Uses:\n%*s", indent, "", indent + 2, "");
for (i = 0; i < sk_ASN1_OBJECT_num(trust); i++) {
if (!first)
BIO_puts(out, ", ");
else
first = 0;
OBJ_obj2txt(oidstr, sizeof(oidstr),
sk_ASN1_OBJECT_value(trust, i), 0);
BIO_puts(out, oidstr);
}
BIO_puts(out, "\n");
} else
BIO_printf(out, "%*sNo Trusted Uses.\n", indent, "");
if (reject) {
first = 1;
BIO_printf(out, "%*sRejected Uses:\n%*s", indent, "", indent + 2, "");
for (i = 0; i < sk_ASN1_OBJECT_num(reject); i++) {
if (!first)
BIO_puts(out, ", ");
else
first = 0;
OBJ_obj2txt(oidstr, sizeof(oidstr),
sk_ASN1_OBJECT_value(reject, i), 0);
BIO_puts(out, oidstr);
}
BIO_puts(out, "\n");
} else
BIO_printf(out, "%*sNo Rejected Uses.\n", indent, "");
alias = X509_alias_get0(x, NULL);
if (alias)
BIO_printf(out, "%*sAlias: %s\n", indent, "", alias);
keyid = X509_keyid_get0(x, &keyidlen);
if (keyid) {
BIO_printf(out, "%*sKey Id: ", indent, "");
for (i = 0; i < keyidlen; i++)
BIO_printf(out, "%s%02X", i ? ":" : "", keyid[i]);
BIO_write(out, "\n", 1);
}
return 1;
}
|
N,*A=map(int,open(0).read().split());l=[0]*9
for a in A:l[min(8,a//400)]+=1
m=sum([0<l[i]for i in range(8)]);print(max(m,1),m+l[8]) |
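The golfed solution above appears to solve the AtCoder "colour counting" task (problem identification is our assumption): ratings below 3200 fall into eight fixed 400-point colour bands, while players rated 3200 or higher may display as any colour. A readable sketch of the same logic:
import sys


def solve(ratings):
    # Bands 0-7 hold the fixed colours; band 8 holds ratings >= 3200,
    # whose holders may pick any colour they like.
    bands = [0] * 9
    for r in ratings:
        bands[min(8, r // 400)] += 1
    fixed = sum(1 for b in bands[:8] if b > 0)  # distinct fixed colours present
    free = bands[8]                             # players free to choose a colour
    # Even if everyone is >= 3200, at least one colour appears, hence max(..., 1).
    return max(fixed, 1), fixed + free


if __name__ == "__main__":
    n, *ratings = map(int, sys.stdin.read().split())
    print(*solve(ratings))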
/**
* Returns an {@code RDFProcessor} that expands the ruleset based on the supplied TBox and
* applies the resulting ruleset on input statements either as a whole or partitioned based on
* an optional {@code Mapper}.
*
* @param ruleset
* the ruleset to apply
* @param mapper
* the optional mapper for partitioning input statements, possibly null
* @param dropBNodeTypes
* true to drop output {@code rdf:type} statements with a {@link BNode} object
* @param deduplicate
* true to enforce that output statements do not contain duplicates (if false,
* duplicates might be returned if this enables the rule engine to operate faster)
* @param tboxData
* the {@code RDFSource} of TBox data; null to disable TBox expansion
* @param emitTBox
* true to emit TBox data (closed based on rules in the supplied {@code Ruleset})
* @param tboxContext
* the context where to emit closed TBox data; null to emit TBox statements with
* their original contexts (use {@link SESAME#NIL} for emitting TBox data in the
* default context)
* @return the created {@code RDFProcessor}
*/
public static RDFProcessor rules(final Ruleset ruleset, @Nullable final Mapper mapper,
final boolean dropBNodeTypes, final boolean deduplicate,
@Nullable final RDFSource tboxData, final boolean emitTBox,
@Nullable final URI tboxContext) {
return new ProcessorRules(ruleset, mapper, dropBNodeTypes, deduplicate, tboxData,
emitTBox, tboxContext);
} |
# Harshad number check: is N divisible by the sum of its digits?
N = input()
digit_sum = sum(int(c) for c in N)
print('Yes' if int(N) % digit_sum == 0 else 'No')
def for_service(self, service) -> list:
    """Return routes whose target service is `service`, sorted by the
    3scale.net/tenant_id label (routes without the label sort last)."""
routes = [r for r in self if r["spec"]["to"]["name"] == service]
routes = list(sorted(routes,
key=lambda x: float(x["metadata"]["labels"].get("3scale.net/tenant_id", math.inf))))
return routes |
Grand Prix Cincinnati is this weekend, and Standard is the name of the game. Success in Standard tournaments boils down to understanding of a format and familiarity with a deck. If you understand everything going on in a format, and you plan for the possibility of seeing anything and everything across the table, you will probably be successful with any top-tier deck. If you play a deck you are familiar with, you are unlikely to make all of the little stumbles that tend to occur when exploring new territory. Less time spent figuring out the basics leads to more time for higher-level thought.
I’ve been battling with Mono-Black Devotion in Standard since the beginning. It’s been over 6 months now since the deck debuted at Pro Tour Theros. My entire experience in Theros Standard has been through a black lens, and it’s through that lens that I will see the competition this weekend in Cincinnati.
Nowhere is familiarity with a deck and its matchups more important than when sideboarding. Sideboards allow a deck to adjust itself to better combat a particular strategy or type of card. Being limited to just one color greatly restricts access to sideboard cards, but the quality of Standard black cards is so high that Mono-Black Devotion has no issue finding solutions to its problems. Some players opt to splash colors for sideboard cards, but I don’t think they solve any problems that black cards cannot solve. In a theoretical sense, playing shocklands in the maindeck makes the deck strictly worse.
Here is my current list:
I’ve arrived at that list after a lot of games and discussion, and I am very confident in it for any Standard tournament this weekend.
This Mono-Black deck is built to play only the most flexible and broadly effective cards in the maindeck. The sideboard contains some of the most efficient cards available in the format, allowing Mono-Black to shift gears as appropriate: extra removal for creature decks, extra discard for spell decks, and some powerful, targeted hate cards that combat entire strategies. In my experience Mono-Black Devotion holds true to the control standard of getting better after sideboard against the average opponent, because the cards it sideboards in outperform the cards opponents sideboard in.
Sideboarding is very flexible, but here’s a guide on how I usually do it against the major decks:
MIRROR MATCH
In
Out
Desecration Demon is very vulnerable to black removal and will often be destroyed at a huge tempo loss, so it is removed. Hero’s Downfall is the slowest removal spell and is terrible against Pack Rat, so it leaves and Dark Betrayal comes in. Duress is very important because fighting over Underworld Connections and protecting Pack Rat defines the matchup. On the draw I might cut a Gray Merchant of Asphodel or a land and leave in the fourth Devour Flesh.
ESPER CONTROL
In
Out
Against Esper control, Duress is the best card and gives the deck an absurd amount of disruption when paired with Thoughtseize. Erebos, God of the Dead draws cards, stops lifegain from Sphinx's Revelation, and is a high-powered threat. Lifebane Zombie is critical for snagging Blood Baron of Vizkopa, while a couple Devour Flesh stay in for added protection. Hero's Downfall is not exceptional but a couple stay in as insurance against planeswalkers that dodge discard. Gray Merchant of Asphodel is very weak against a deck with so much removal, and it is quite slow, so all of them are removed. Esper does not pressure the life total like other decks, so the lifegain is not needed to keep Underworld Connections active.
GRUUL MONSTERS
In
Out
Doom Blade destroys any of their creatures, often at a tempo gain, while Lifebane Zombie is an evasive threat that can create value. Pack Rat is simply too slow and too small to fight against large Gruul creatures, and it is also vulnerable to Domri Rade and Mizzium Mortars. Bile Blight does not kill anything important and is thus removed. Nightveil Specter is useful as an evasive threat that synergizes with Gray Merchant of Asphodel.
BLUE DEVOTION
In
Out
Doom Blade is incredible, and it kills everything but Thassa, God of the Sea. Lifebane Zombie is important as an aggressive, evasive threat. Desecration Demon can be effective, but I like to cut it because it is so clunky against their blue disruption and army of cheap creatures. I think Underworld Connections is important as a way to bury Mono-Blue with cards, as otherwise Thassa, God of the Sea or Bident of Thassa will take over the game. It is also a key source of devotion for Gray Merchant of Asphodel, one of the most important cards in the matchup. Hero's Downfall is clunky, so one is removed.
BURN
In
Out
Against burn, the plan is to preserve your life total at all costs, so most of the life-loss cards are removed. Duress is the best sideboard card, and it gives Mono-Black Devotion a way to trade off cards with the opponent as it is accustomed to doing in other matchups. Mono-Black must be aggressive, so Lifebane Zombie acts as a threat that can sometimes snag a Boros Reckoner. Doom Blade is simply better than Hero's Downfall. Bile Blight has applications against Chandra's Phoenix and even Assemble the Legion. The best card is actually Devour Flesh, which can be used in combination with Desecration Demon or Gray Merchant of Asphodel to great effect.
If you have sideboard questions or want to learn about other matchups please turn to the comments section.
You Make The Play
Imagine you are in the middle of a match this weekend, with your trusty Mono-Black Devotion deck of course. Top 8 is on the line, and it's the dreaded mirror match. Your opponent gets you in the first game, but not to be discouraged, you sideboard like I suggested and confidently shuffle up for game two. You practice solid fundamentals, and before long the match is tied 1-1. On the draw for the deciding game, your opponent keeps his 7, your mind focuses, and you look down at the following hand:
What do you do?
Share your thoughts in the comments section of the article, because next week, when I share my thoughts on the matter, I’ll award a prize to the most well-explained answer!
-Adam
/**
 * Creates chunk-encoded data for a given block of chunk data.
 * @param chunkData chunk data that needs to be converted to chunk-encoded format.
 * @param isLastByte if true, the trailing CRLF is not appended.
 * @return Chunk-encoded form of the given data.
*/
public static ByteBuffer createChunk(ByteBuffer chunkData, boolean isLastByte) {
int chunkLength = chunkData.remaining();
StringBuilder chunkHeader = new StringBuilder(Integer.toHexString(chunkLength));
chunkHeader.append(CRLF);
try {
byte[] header = chunkHeader.toString().getBytes(StandardCharsets.UTF_8);
byte[] trailer = !isLastByte ? CRLF.getBytes(StandardCharsets.UTF_8)
: "".getBytes(StandardCharsets.UTF_8);
ByteBuffer chunkFormattedBuffer = ByteBuffer.allocate(header.length + chunkLength + trailer.length);
chunkFormattedBuffer.put(header)
.put(chunkData)
.put(trailer);
chunkFormattedBuffer.flip();
return chunkFormattedBuffer;
} catch (Exception e) {
throw SdkClientException.builder()
.message("Unable to create chunked data. " + e.getMessage())
.cause(e)
.build();
}
} |
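For reference, a minimal Python sketch of the same framing the helper above produces: the chunk length in hexadecimal, a CRLF, the payload, and a trailing CRLF unless this is the final byte range (the function name and the assertion checks are ours, not the SDK's):
CRLF = b"\r\n"


def create_chunk(chunk_data: bytes, is_last_byte: bool) -> bytes:
    # hex length + CRLF + payload (+ CRLF unless this is the last chunk)
    header = format(len(chunk_data), "x").encode("ascii") + CRLF
    trailer = b"" if is_last_byte else CRLF
    return header + chunk_data + trailer


assert create_chunk(b"hello", False) == b"5\r\nhello\r\n"
assert create_chunk(b"", True) == b"0\r\n"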
package com.ruoyi.controller.web.controller.demo.controller;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
/**
 * Reports
*
*/
@Controller
@RequestMapping("/demo/report")
public class DemoReportController
{
private String prefix = "demo/report";
/**
 * Baidu ECharts
*/
@GetMapping("/echarts")
public String echarts()
{
return prefix + "/echarts";
}
/**
 * Chart plugin (Peity)
*/
@GetMapping("/peity")
public String peity()
{
return prefix + "/peity";
}
/**
 * Line chart plugin (Sparkline)
*/
@GetMapping("/sparkline")
public String sparkline()
{
return prefix + "/sparkline";
}
/**
 * Combined charts (metrics)
*/
@GetMapping("/metrics")
public String metrics()
{
return prefix + "/metrics";
}
}
|
package influxdb_test
import (
"context"
"testing"
"github.com/aukbit/hippo"
pb "github.com/aukbit/hippo/test/proto"
"github.com/aukbit/rand"
)
// Ensure event can be created.
func TestEventService_Create(t *testing.T) {
c := MustConnectStore()
defer c.Close()
user := pb.User{
Id: rand.String(10),
Name: "test",
Email: "<EMAIL>",
}
// Create new event for user_created topic.
event := hippo.NewEvent("user_created", user.GetId())
// Marshal user proto and assign it to event data
if err := event.MarshalProto(&user); err != nil {
t.Fatal(err)
}
ctx := context.Background()
// Create event in store.
if err := c.EventService().Create(ctx, event); err != nil {
t.Fatal(err)
}
}
func TestEventService_GetLastVersion(t *testing.T) {
c := MustConnectStore()
defer c.Close()
user := pb.User{
Id: rand.String(10),
Name: "test",
Email: "<EMAIL>",
}
// Create new event for user_created topic.
event := hippo.NewEvent("user_created", user.GetId())
// Marshal user proto and assign it to event data
if err := event.MarshalProto(&user); err != nil {
t.Fatal(err)
}
ctx := context.Background()
// Create event in store.
if err := c.EventService().Create(ctx, event); err != nil {
t.Fatal(err)
}
// Get last event version from store.
if n, err := c.EventService().GetLastVersion(ctx, user.GetId()); err != nil {
t.Fatal(err)
} else if n != 0 {
t.Fatalf("unexpected version: %#v != 0", n)
}
}
func TestEventService_ListEvents(t *testing.T) {
c := MustConnectStore()
defer c.Close()
user := pb.User{
Id: rand.String(10),
Name: "test",
Email: "<EMAIL>",
}
// Create new event for user_created topic.
ev1 := hippo.NewEvent("user_created", user.GetId())
// Marshal user proto and assign it to event data
if err := ev1.MarshalProto(&user); err != nil {
t.Fatal(err)
}
ctx := context.Background()
// Create event 1 in store.
if err := c.EventService().Create(ctx, ev1); err != nil {
t.Fatal(err)
}
// Update user details.
user.Name = "my name changed to something else"
// Create new event for user_created topic.
ev2 := hippo.NewEvent("user_updated", user.GetId())
// Marshal user proto and assign it to event data
if err := ev2.MarshalProto(&user); err != nil {
t.Fatal(err)
}
// Increase event aggregate version to avoid concurrency exception
ev2.Version = 1
// Create event 2 in store.
if err := c.EventService().Create(ctx, ev2); err != nil {
t.Fatal(err)
}
// Define query parameters
p := hippo.Params{
ID: user.GetId(),
}
// List events
if events, err := c.EventService().List(ctx, p); err != nil {
t.Fatal(err)
} else if len(events) != 2 {
t.Fatalf("unexpected number of events: %#v != 2", len(events))
}
}
|
import numpy as np

import dbscan  # assumed: compiled extension module exposing run_dbscan (not shown here)


class DbscanWrapper:
"""
Run dbscan without allocating memory for a distance matrix. Distance is
currently hamming distance.
"""
def __init__(self,alphabet="amino",dist_function="simple"):
"""
Initialize the class. This should be called by every subclass to
initialize the internal dictionaries mapping alphabets to fast internal
indexes.
"""
# initialize internal variables
self.alphabet = alphabet
self.dist_function = dist_function
# decide on the alphabet
if self.alphabet == "amino":
self._alphabet_string = "*ABCDEFGHIKLMNPQRSTVWXYZ"
else:
raise ValueError("alphabet not recongized.")
if self.dist_function == "simple":
self._dist_function_internal = 0
elif self.dist_function == "dl":
self._dist_function_internal = 1
else:
err = "dist_function not recognized. should be 'simple' or 'dl' (Damerau-Levenshtein)\n"
raise ValueError(err)
self.alphabet_size = len(list(self._alphabet_string))
enum_list = zip(self._alphabet_string,range(len(self._alphabet_string)))
self._alphabet_dict = dict([(a, i) for a, i in enum_list])
tmp_matrix = np.zeros((self.alphabet_size,self.alphabet_size),dtype=int)
for k1 in self._alphabet_string:
i = self._alphabet_dict[k1]
for k2 in self._alphabet_string:
j = self._alphabet_dict[k2]
if k1 == k2:
tmp_matrix[i,j] = 0
else:
tmp_matrix[i,j] = 1
self.dist_matrix = tmp_matrix
def read_file(self,filename):
"""
Read file with sequences and convert to integer representation.
"""
f = open(filename,'r')
lines = f.readlines()
f.close()
sequences = [l.strip() for l in lines if l.strip() != ""]
self.load_sequences(sequences)
def load_sequences(self,list_of_sequences):
"""
Load in a collection of sequences and convert to internal integer representation.
"""
if len(set([len(s) for s in list_of_sequences])) != 1:
err = "All sequences must be the same length.\n"
raise ValueError(err)
# Sort so results are identical each time
list_of_sequences.sort()
self.num_points = len(list_of_sequences)
self.num_dimensions = len(list_of_sequences[0])
self.all_points = np.ascontiguousarray(np.zeros((self.num_points,self.num_dimensions),dtype=int))
for i, seq in enumerate(list_of_sequences):
self.all_points[i,:] = np.array([self._alphabet_dict[s] for s in seq])
self.sequences = list_of_sequences[:]
self.cluster_assignments = np.ascontiguousarray(np.ones(self.num_points,dtype=int)*-1)
def run(self,epsilon,min_neighbors=None):
"""
Run calculation.
"""
if min_neighbors is None:
min_neighbors = self.num_dimensions + 1
status = dbscan.run_dbscan(self.all_points,
self.dist_matrix,
self.cluster_assignments,
self.num_points,
self.num_dimensions,
self.alphabet_size,
epsilon,
min_neighbors,
self._dist_function_internal)
@property
def results(self):
clusters = {}
for i in range(len(self.cluster_assignments)):
try:
clusters[self.cluster_assignments[i]].append(self.sequences[i])
except KeyError:
clusters[self.cluster_assignments[i]] = [self.sequences[i]]
cluster_ids = np.unique(self.cluster_assignments)
return clusters |
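A hypothetical usage sketch (the sequences and parameters below are invented, and the compiled dbscan extension must be importable for run() to succeed):
w = DbscanWrapper(alphabet="amino", dist_function="simple")
w.load_sequences(["ACDEF", "ACDEY", "ACDFY", "WWWWW"])
w.run(epsilon=2, min_neighbors=2)
for cluster_id, members in w.results.items():
    print(cluster_id, members)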
#include "Engine\ECS/Transform.h"
#define GLM_ENABLE_EXPERIMENTAL
#include <Base/Math/gtx/quaternion.hpp>
namespace NuclearEngine
{
namespace ECS
{
Transform::Transform()
{
mTransformMatrix = Math::Matrix4(1.0f);
mPosition = Math::Vector3(0.0f);
mRotation = Math::Quaternion(0.0f, 0.0f, 0.0f, 1.0f);
			mScale = Math::Vector3(1.0f); // unit scale; a zero scale would collapse the matrix
mWorldPosition = Math::Vector3(0.0f);
mWorldRotation = Math::Quaternion(0.0f, 0.0f, 0.0f, 1.0f);
mWorldScale = Math::Vector3(0.0f);
}
Transform::Transform(Math::Matrix4 Transform)
{
mTransformMatrix = Transform;
}
Transform::Transform(Math::Vector3 position, Math::Quaternion rotation)
{
mTransformMatrix = Math::Matrix4(1.0f);
mPosition = position;
mRotation = rotation;
			mScale = Math::Vector3(1.0f); // unit scale; a zero scale would collapse the matrix
mWorldPosition = Math::Vector3(0.0f);
mWorldRotation = Math::Quaternion(0.0f, 0.0f, 0.0f, 1.0f);
mWorldScale = Math::Vector3(0.0f);
}
Transform::~Transform()
{
}
void Transform::SetPosition(Math::Vector3 position)
{
mPosition = position;
mDirty = true;
}
void Transform::SetRotation(Math::Quaternion rotation)
{
mRotation = rotation;
mDirty = true;
}
void Transform::SetScale(Math::Vector3 scale)
{
mScale = scale;
mDirty = true;
}
void Transform::SetScale(float scale)
{
mScale = Math::Vector3(scale);
mDirty = true;
}
Math::Vector3 Transform::GetLocalPosition()
{
return mPosition;
}
Math::Quaternion Transform::GetLocalRotation()
{
return mRotation;
}
Math::Vector3 Transform::GetLocalScale()
{
return mScale;
}
Math::Matrix4 Transform::GetTransform()
{
return mTransformMatrix;
}
Math::Vector3 Transform::GetWorldPosition()
{
Math::Matrix4 transform = GetTransform();
Math::Vector4 pos = transform * Math::Vector4(mPosition, 1.0f);
return Math::Vector3(pos.x, pos.y, pos.z);
}
Math::Quaternion Transform::GetWorldRotation()
{
return mWorldRotation;
}
Math::Vector3 Transform::GetWorldScale()
{
Math::Matrix4 transform = GetTransform();
Math::Vector3 scale = Math::Vector3(transform[0][0], transform[1][1], transform[2][2]);
if (scale.x < 0.0f) scale.x *= -1.0f;
if (scale.y < 0.0f) scale.y *= -1.0f;
if (scale.z < 0.0f) scale.z *= -1.0f;
return scale;
}
void Transform::SetTransform(Math::Matrix4 _Transform)
{
mTransformMatrix = _Transform;
}
void Transform::Update()
{
if (mDirty)
{
				// Rebuild from identity each time so repeated updates do not
				// compound the previous translation/scale/rotation.
				mTransformMatrix = Math::Matrix4(1.0f);
				mTransformMatrix = Math::translate(mTransformMatrix, mPosition);
mTransformMatrix = Math::scale(mTransformMatrix, mScale);
mTransformMatrix *= Math::toMat4(mRotation);
mDirty = false;
}
}
void Transform::Update(Math::Matrix4 parent)
{
			// Capture dirtiness before Update() clears the flag; otherwise
			// the parent transform would never be applied.
			const bool wasDirty = mDirty;
			Update();
			if (wasDirty)
			{
				mTransformMatrix = parent * mTransformMatrix;
			}
}
}
} |
Epidemiological evidence relating environmental smoke to COPD in lifelong non-smokers: a systematic review
Background: Some evidence suggests environmental tobacco smoke (ETS) might cause chronic obstructive pulmonary disease (COPD). We reviewed available epidemiological data in never smokers. Methods: We identified epidemiological studies providing estimates of relative risk (RR) with 95% confidence interval (CI) for various ETS exposure indices. Confounder-adjusted RRs for COPD were extracted, or derived using standard methods. Meta-analyses were conducted for each exposure index, with tests for heterogeneity and publication bias. For the main index (spouse ever smoked or nearest equivalent), analyses investigated variation in RR by location, publication period, study type, sex, diagnosis, study size, confounder adjustment, never smoker definition, and exposure index definition. Results: Twenty-eight relevant studies were identified; nine European or Middle Eastern, nine Asian, eight American and two from multiple countries. Five were prospective, seven case-control and 16 cross-sectional. The COPD definition involved death or hospitalisation in seven studies, GOLD stage 1+ criteria in twelve, and other definitions in nine. For the main index, random-effects meta-analysis of 33 heterogeneous (p<0.001) estimates gave a RR of 1.20 (95%CI 1.08-1.34). Higher estimates for females (1.59,1.16-2.19, n=11) than males (1.29,0.94-1.76, n=7) or sexes combined (1.10,0.99-1.22, n=15 where sex-specific not available), and lower estimates for studies of 150+ cases (1.08,0.97-1.20, n=13) partly explained the heterogeneity. Estimates were higher for Asian studies (1.34,1.08-1.67, n=10), case-control studies (1.55,1.04-2.32, n=8), and COPD mortality or hospitalisation (1.40,1.12-1.74, n=11). Some increase was seen for severer COPD (1.29,1.10-1.52, n=7). Dose-response evidence was heterogeneous. Evidence for childhood (0.88,0.72-1.07, n=2) and workplace (1.12,0.77-1.64, n=4) exposure was limited, but an increase was seen for overall adulthood exposure (1.20,1.03-1.39, n=17). We discuss study weaknesses that may bias estimation of the association of COPD with ETS. Conclusions: Although the evidence strongly suggests that ETS increases COPD, study weaknesses and absence of well-designed large studies preclude reliable effect estimation. More definitive evidence is required.
Introduction
This systematic review aims to present an up-to-date meta-analysis of available epidemiological evidence relating exposure to environmental tobacco smoke (ETS) from cigarettes to risk of chronic obstructive pulmonary disease (COPD) in lifelong non-smokers ("never smokers"). As described below, this review considers data from 28 longitudinal, case-control or cross-sectional studies.
It is long established that active smoking causes COPD, the U.S. Surgeon General concluding in 1964 29 that "cigarette smoking is the most important of the causes of chronic bronchitis in the United States, and increases the risk of dying from chronic bronchitis". This opinion was echoed in their 2004 report 30, which felt the evidence "sufficient to infer a causal relationship between active smoking and chronic obstructive pulmonary disease morbidity and mortality", a view confirmed by a recent systematic review 31.
Sidestream smoke (released between puffs from the burning cone) contains similar chemicals to mainstream smoke (drawn and inhaled by smokers), but with different relative and absolute quantities of many individual constituents 32. However, sidestream smoke, after mixing with aged exhaled mainstream smoke, is diluted massively by room air before non-smokers inhale it. Smoke constituent levels in tissues of non-smokers are very much lower than in smokers, studies using cotinine typically indicating a relative exposure factor between 0.06% and 0.4%, with studies using particulate matter indicating a lower factor of 0.005% to 0.02%. Though an effect of ETS on COPD risk is plausible, it is difficult to establish this with certainty, as a threshold is a logical possibility. The same difficulty of establishing effects of ETS exposure on other diseases caused by smoking is also present, notably for lung cancer 31,45.
In 2006, a review by the U.S. Surgeon General of the association of COPD with ETS exposure 46 concluded that "the evidence is suggestive but not sufficient to infer a causal relationship between second-hand smoke exposure and risk for COPD", the need for additional research also being highlighted. Although that review cited only nine of the 28 studies considered here 1,3,13, and although various new studies have appeared since then, no other fully comprehensive review of this subject appears to have been undertaken.
This review, which is essentially an update of the 2006 review 46, is an attempt to assess the epidemiological evidence currently available, restricting attention to studies of COPD in which its relationship to one or more ETS exposure indices has been studied in never smokers. This restriction to never smokers is necessary as there is a very strong association of COPD with smoking 46, and it is difficult to reliably detect any ETS effect where a history of smoking is present. This is because the extent of a smoker's overall exposure to smoke constituents is determined largely by his own smoking habits and hardly at all by his much smaller ETS exposure, and also because smoking and ETS exposure are correlated (e.g. since smokers tend to marry smokers). Any errors in assessing smoking history are therefore likely to cause a residual confounding effect much larger than any plausible ETS effect 47.
As the 2006 US Surgeon General's Report 46 notes, "COPD is a non-specific term, defined differently by clinicians, pathologists, and epidemiologists, each using different criteria based on symptoms, physiologic impairment, and pathologic abnormalities". That report goes on to state that "the hallmark of COPD is the slowing of expiratory airflow measured by spirometric testing, with a persistently low FEV1 and a low ratio of FEV1 to FVC despite treatment". International guidelines 48 define COPD as post-bronchodilator FEV1/FVC <0.70, with severity classified by subdividing FEV1 as a percentage of predicted into four groups (≥80, <80, <50 and <30%). The term COPD was little used until the 1980s, and diagnoses commonly used earlier (e.g. chronic bronchitis and emphysema) do not correspond exactly to what is now termed COPD. The studies we selected for review used disease definitions close enough to COPD as now defined to reasonably allow overall assessment. Some studies present additional results using criteria corresponding to severer forms of the disease. While these data are presented here, they are not included in our detailed meta-analyses.
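As an illustration of the criteria just described (this sketch is ours, not part of the review), the GOLD classification can be written as a simple rule:
def gold_stage(fev1_fvc_ratio, fev1_pct_predicted):
    """Return the GOLD stage (1-4), or None if the COPD definition is not met."""
    if fev1_fvc_ratio >= 0.70:   # COPD requires post-bronchodilator FEV1/FVC < 0.70
        return None
    if fev1_pct_predicted >= 80:
        return 1                 # mild
    if fev1_pct_predicted >= 50:
        return 2                 # moderate
    if fev1_pct_predicted >= 30:
        return 3                 # severe
    return 4                     # very severe

print(gold_stage(0.65, 62))      # -> 2 (moderate)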
Materials and methods
This systematic review was conducted according to PRISMA guidelines 49 .
Study inclusion and exclusion criteria
Attention is restricted to epidemiological longitudinal, case-control or cross-sectional studies which provide risk estimates for never (or virtually never) smokers for any of the following indices of ETS exposure: spouse, partner, cohabitant, at home, at work, in adulthood, in childhood.
The term COPD is relatively recent, so we also included studies with outcomes described otherwise. Following the strategy used in our review of smoking and COPD 31, outcomes "could be based on International Classification of Diseases (ICD) codes, on lung function criteria, on a combination of lung function criteria and symptoms, or on combinations of diagnosed conditions … where diagnoses were extracted from medical records or reported in questionnaires". Acceptable combinations of diagnosed conditions had to include both chronic bronchitis and emphysema, but could also additionally include asthma, acute and unqualified bronchitis or bronchiectasis. However, studies were rejected where results were only available for emphysema, for chronic bronchitis, for respiratory symptoms such as cough or phlegm, or for lung function criteria not equating to COPD. Over-broad definitions such as respiratory disease were also not accepted. Acceptable lung function criteria included those of the Global Initiative for Chronic Obstructive Lung Disease (GOLD), the European Respiratory Society, and the British and American Thoracic Societies. Studies which provide near-equivalent definitions of "never smokers" are also accepted; thus never smokers can include occasional smokers or smokers with a minimal lifetime duration of smoking or number smoked. Risk estimates may be based on relative risks (RRs), hazard ratios (HRs), or odds ratios (ORs), and must either be provided directly or be capable of being estimated from the data provided.
Literature searches
A PubMed search identified papers published up to June 2016 using the term "COPD AND (ENVIRONMENTAL TOBACCO SMOKE OR PASSIVE SMOKING OR SECOND-HAND SMOKE EXPOSURE OR INVOLUNTARY SMOKING)", with restriction to humans. After rejecting papers that were clearly irrelevant based on the abstract, copies of the others were obtained for inspection. Other potentially relevant papers were obtained from reference lists in the 2006 Surgeon General report 46, an earlier review we conducted 47 and relevant review papers identified in the search. The complete list of potentially relevant papers was then examined in detail to determine those which described studies satisfying the selection criteria, the rejected papers also including those where an alternative paper provided results from the same study that were more useful (e.g. based on a longer follow-up, a larger number of cases, or using a disease definition closer to COPD as currently defined).
Data recorded
Details were extracted from the relevant publications on the following: study author; year of publication; study location; study design; sexes included; disease definition; number of cases; potential confounding variables considered; and never smoker definition. An effect estimate together with its associated 95% confidence interval (CI) was obtained, where available, for ETS exposure at home, at work, in adulthood, childhood, and from these sources combined. Choice between multiple definitions of COPD followed the rules of Forey et al. 31, except that here we also obtained additional estimates, if available, for severer COPD. We preferred effect estimates where the denominator was with no (or minimal) exposure to the ETS type considered rather than with no exposure to any ETS. Effect estimates and 95% CIs extracted were sex-specific, if possible, and for longitudinal studies were for the longest follow-up available. Estimates adjusted for covariates, where available, were generally preferred to unadjusted estimates, except that results adjusted for symptoms or precursors of COPD were not considered. Where a study provided multiple adjusted estimates, we used that adjusted for most covariates. Dose-response data were also extracted, where available.
Derivation of effect estimates
For a study reporting effect estimates and CIs only by exposure level, that for the overall unexposed/exposed comparison was estimated using the Morris and Gardner method 50 for unadjusted data or the Hamling et al. method 51 for adjusted data. These methods also allowed estimation of the significance of dose-related trends, if not given in the source publication.
Alternative types of effect estimates
As the great majority of effect estimates were ORs derived from case-control or cross-sectional studies, and as the RRs or HRs from longitudinal studies were all based on low incidences, where the OR would be virtually the same, all estimates were treated as if they were ORs. In the rest of this paper, we use OR rather than referring to specific types of effect estimate.
Meta-analyses
A pre-planned set of fixed-effect and random-effects meta-analyses were carried out using standard methods 52. Heterogeneity was quantified by H, the ratio of the heterogeneity chi-squared to its degrees of freedom. The I-squared statistic 53 is related to H by the formula I² = 100(H − 1)/H. Publication bias tests were also conducted using the Egger method 54.
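To make the methodology concrete, here is a minimal sketch of standard inverse-variance meta-analysis together with the heterogeneity quantities just defined (this is textbook code written for illustration, not the authors' software, and the example inputs are hypothetical):
import math


def meta_analysis(ors, cis):
    """ors: odds ratios; cis: matching (lower, upper) 95% confidence intervals."""
    logs = [math.log(o) for o in ors]
    # Standard errors recovered from the CI width on the log scale.
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96) for l, u in cis]
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    q = sum(w * (x - pooled) ** 2 for w, x in zip(weights, logs))
    h = q / (len(ors) - 1) if len(ors) > 1 else 1.0  # heterogeneity chi-squared / df
    i2 = max(0.0, 100.0 * (h - 1.0) / h)             # I-squared, as defined in the text
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled),
            h, i2)


print(meta_analysis([1.2, 1.5, 0.9], [(0.9, 1.6), (1.0, 2.25), (0.6, 1.35)]))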
Our main analyses included OR estimates for the exposure most closely equivalent to "spouse ever smoked" where results were provided or could be estimated. This selection was based on the source of exposure (spouse highest preference, then partner, cohabitant, home or work). Spousal smoking is traditionally used for studying possible ETS effects, it being clearly demonstrated that women married to a smoker have much higher cotinine levels than women married to a non-smoker 55. Apart from the meta-analyses using all available estimates, meta-analyses also investigated variation in the OR according to a list of pre-defined factors, and using the following subsets: continent (North America, Asia, Europe, multicountry); publication period (1976-1990, 1991-2005, 2006-2016); study type (longitudinal, case-control, cross-sectional); sex (males, females, combined); diagnosis (mortality or hospitalisation, GOLD stage 1+, other); method of taking asthma into account (included as part of the COPD definition, adjusted for, asthmatic participants excluded, ignored); number of cases estimate based on (<50, 50-149, 150+ cases); extent of confounder adjustment (unadjusted for age, adjusted for age and at most four other variables, adjusted for age and five or more variables); never smoker definition (never smoked any product; never smoked but product unstated; other, including never cigarette smoker, occasional smoker or very short-term smoker); and definition of exposure index (spouse specifically, other exposure at home, other).
Meta-analyses were also carried out for the main index using the estimates for severer COPD, and also for other indices of exposure with sufficient data (workplace, overall adult -including at least home and work, childhood). Here, data were too limited to study variation in the OR by the subsets described above.
Results of the overall meta-analyses are displayed as forest plots. In each plot, individual estimates are listed in increasing order of the OR. For the main index, estimates are grouped by region. Random-effects estimates are also shown. The estimates are not only shown numerically, but in graphical form on a logarithmic scale, where the OR is shown as a square, the area of which is proportional to its inverse-variance weight. Arrows warn when the CI goes outside the range of the plot.
Study quality and risk of bias
We did not attempt to derive any overall score based on study quality and risk of bias for each individual study, as the relative importance of different sources of bias or poor study quality is difficult or impossible to assess accurately. Instead, we attempted to gain insight into this in two ways. First, as mentioned in the previous section, we carried out meta-analyses showing how the OR varied by some relevant aspects linked to study quality and bias, such as study size, study type, source of diagnosis, method of taking asthma into account, and extent of confounder adjustment. Second, we considered factors affecting quality and bias in the discussion section, including some factors that affected all or virtually all of the studies.
Results
Searches
The PubMed search produced 509 hits. As summarized in Figure 1, seventy-five were considered of potential relevance based on the abstracts, 15 of which proved to meet the inclusion criteria on examination of the papers themselves. Further examination of reference lists in reviews 46,47,56-63 and in papers obtained identified a further 40 papers of potential relevance, 13 of which met the inclusion criteria. Of the 87 papers examined but not accepted, the most common reasons for rejection were no results for never smokers (38 papers), not COPD as defined (26), no control group or no results for unexposed participants (11) and better results for the same cohort given in another paper (9), some studies being rejected for more than one reason.
Supplementary File 1 gives details of the studies rejected and fuller reasons for rejection.
Studies identified
Table 1 gives details of the 28 epidemiological studies that met the inclusion criteria, including author, reference(s), publication year, location, design, sexes included, disease definition, account taken of asthma, and numbers of cases in never smokers. The studies are listed in chronological order of publication and are given consecutive identifying study numbers.
The included studies are mainly of representative populations, except that studies 18 and 26 have a large proportion with respiratory symptoms. Of the 28 studies, one was published in the 1970s, six in the 1980s, one in the 1990s, nine between 2000 and 2009 and 11 more recently.
Nine studies were conducted in Europe or the Middle East, subsequently referred to as "Europe" (two in England, and one each in Greece, Italy, Lebanon, Poland, Sweden, Switzerland and Turkey), while nine took place in Asia (five in China, and one each in Hong Kong, Japan, Korea and Taiwan), seven in North America (six in the USA and one in Canada) and one in South America (Brazil). Two studies presented combined results, one from 16 countries, the other from 14.
Five studies were longitudinal in design, with the length of follow-up varying from 12 to 39 years; one was a cross-sectional study analysed as a nested case-control study; 16 other studies were cross-sectional; and the remaining six were of case-control design.
Most studies were of both sexes, though six studies considered only females.
Two studies considered those with a minimum age of 60, with a further 16 having a minimum age between 35 and 51. Other studies had a lower minimum age.
Definitions of outcome used varied by study. Seven studies required the case to have died or been hospitalised for COPD, while a further 12, mainly relatively recent cross-sectional studies, used COPD as defined by the GOLD stage 1+ criteria. The remaining nine studies used other definitions, as detailed in Table 1. Five studies (17,19,20,26,28) also provided results for severer COPD (generally equivalent to GOLD 2+, see footnotes to Table 1).
Sixteen studies ignored asthma in their outcome definition and analysis, with the remaining 12 studies equally divided into those that included asthma in their outcome definition, excluded asthmatics, or adjusted for asthma status in analysis.
Most studies were small, with ten studies considering less than 100 cases and only one study (15) more than 1000 cases.
Table 2 gives the adjustment variables used and the definitions of never smokers used in the studies.
Fifteen studies were of never smokers, though only three of these made it clear they were never smokers of cigarettes, pipes or cigars. Five studies were of never cigarette smokers (i.e. they may have included some pipe or cigar only smokers), the remaining eight allowing a minimal smoking history, such as smoking less than 1 cigarette a day or less than 100 cigarettes in life.
Main exposure index
The main meta-analyses use an exposure index that relates as closely as possible to ever smoking by the spouse. Table 3 shows the definitions of ETS exposure used for the main index. This was based on smoking by the spouse for five studies, and on smoking by cohabitants for a further 13 (although study 13 only included participants who had lived with a smoker 10 years previously, and study 20 only considered ETS exposure in the home in the two weeks prior to the study). For the remaining studies, the index was based on exposure in the home and at work (studies 4, 12, 17, 18 and 27) or on a combination of exposure from any source (studies 11, 15, 19, 21 and 25).
Although most studies presented results comparing participants who were exposed or unexposed to ETS, some required a minimum level before a subject could be classified as exposed.
In study 19, exposure had to be for at least one hour per week, while study 12 specified living with a smoker who smoked in the home or exposure at work for at least one hour per day. In studies 20 and 28, exposure had to have been in the previous two weeks, while participants in study 25 had to have had regular exposure in the previous year. In study 22 exposure had to be for 15+ minutes per day at least once per week for two or more years, while in study 15 the minimum requirement was 15 minutes or more, three or more times per week. In study 11, participants were only considered to have been exposed if they reported four or more hours of exposure on most days or nights in the previous year. Finally, study 14 required 10 years of exposure. Table 3, supported by Figure 2, also presents the ORs for the main exposure index, while Table 4 presents the results of meta-analyses, and Table 5 the dose-response data.
From Table 3 it can be seen that, of the 33 individual OR estimates given for COPD, 24 are above 1.00, seven of these increases being significant at p<0.05. Eight studies reported an OR below 1.00, but only in study 4 for females was the reduction statistically significant. Study 25 reported an OR of 1.00, while study 24, excluded from the meta-analyses, did not present an OR but reported no significant relationship with duration or type of exposure. In addition, five studies presented a total of seven OR estimates for severer COPD, with five estimates above 1.00 (one significantly so and one marginally significant) and two non-significantly below 1.00. There was also evidence of a dose-response relationship, as shown in Table 5, with six of 11 studies investigating this reporting a statistically significant positive trend. Study 16 reported no trend in relation to the number of smokers in the household, but did report positive dose-response relationships for years of ETS exposure at home and at work. Study 19, which found no relationship with the main COPD outcome, also presented dose-response relationships for severer COPD, again finding no significant increase in risk with increasing exposure.
Other exposure indices
Five studies also presented additional results for other indices of ETS exposure, as shown in Table 6. Four studies (16, 22, 23, 26) looked at exposure at work, all but study 23 also presenting results for combined exposure at home and at work. Study 5 produced a combined index of adulthood exposure at home or work, or during travel or leisure. Three studies (16, 23, 26) considered childhood ETS exposure, study 23 studying exposure from both the mother and the father, and also looking at parental smoking during pregnancy.
The ORs for these other exposure indices are supported by Figure 3 (workplace) and Figure 4 (overall adult), while Table 7 presents the results of meta-analyses. Note that Figure 4, and the meta-analyses for overall adult exposure, consider not only the ORs indicated in Table 6, but also include estimates from Table 3 for those ten studies (4, 11, 12, 15, 17, 18, 19, 21, 25, 27) for which the exposure was at least from home and work.
Of the four ORs included in the meta-analysis of COPD for exposure at work, two were above 1.00, one of borderline statistical significance, and two were below 1.00, the combined estimate being 1.12 (0.77-1.64). Note that in study 26 there was a choice of workplace OR estimates, with the meta-analysis including that for current exposure. Using estimates for previous or ever exposure would not have affected the conclusion that there was no clear relationship of COPD to workplace ETS exposure.
Of the 17 ORs included in the meta-analysis for overall adult exposure, 12 were above 1.00, five significantly so, with one equal to 1.00, and four less than 1.00. The combined estimate of 1.20 (1.03-1.39) was also significantly increased.
There was no clear association of COPD with childhood ETS exposure, with none of the ORs shown in Table 6 being significant. Only two estimates could be included in the meta-analysis, giving an overall estimate of 0.88 (0.72-1.07).
There was no significant evidence of publication bias for workplace or adult exposure, the data being too limited to assess this for childhood exposure. However, there was evidence of heterogeneity (p<0.01) for overall adult ETS exposure.
The limited further dose-response data shown in Table 8 added little to the data already shown in Table 5.
Discussion
We rejected papers for various appropriate reasons. These included the following: failing to give results for an endpoint equivalent to COPD; giving results only for COPD exacerbation or prognosis; not presenting results for never smokers; describing studies without a control group; not presenting results for those unexposed to ETS; and presenting less useful results than reported in another publication.
Twenty-eight epidemiological studies did qualify for inclusion, and from 33 estimates of the risk of COPD associated with ever having a spouse who smoked, or the nearest equivalent ETS exposure index available, random-effects meta-analysis gave a significantly increased OR estimate of 1.20 (1.08-1.34). There was also some evidence of dose-response. While the clear relationship of smoking with COPD 31 makes it plausible that some effect will also be evident for ETS, one must emphasize that exposure is much less than from active smoking, as noted in the Introduction. Also, various limitations of the evidence, discussed below, make it difficult to estimate reliably the true extent of any causal relationship. However, one should also take into account the evidence of a relationship between ETS and wheezing 46,66, a symptom of COPD.
Few cases
Though four studies involved more than 500 cases, with the maximum 1097 in study 15, as many as ten of the 28 studies involved less than 100 cases, the quite small number of cases making it difficult to detect potential effects reliably.
Publication bias
The observation that ORs are only modestly raised for studies with larger numbers of cases but are greater for smaller studies suggests the possibility of publication bias, with authors being more likely to report stronger relationships. However, formal tests for publication bias 54 showed no clear evidence of its existence. One must note, though, that various large longitudinal studies, e.g. 67-70, reported results relating ETS to smoking-related diseases such as lung cancer or heart disease, but did not do so for COPD. Had any relationship been seen, these results might well have been reported.
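As a hedged illustration of how such formal tests work (the exact test used is given in the cited reference 54, not reproduced here), an Egger-type regression for funnel-plot asymmetry can be sketched as follows: the standardized effect is regressed on precision, and an intercept far from zero suggests small-study effects. The sketch assumes numpy and scipy are available; the inputs would be the per-study log-ORs and their standard errors.

import numpy as np
from scipy import stats

def egger_test(log_or, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (y/se) on precision (1/se);
    an intercept far from zero suggests small-study effects.
    """
    z = log_or / se
    prec = 1.0 / se
    X = np.column_stack([np.ones_like(prec), prec])   # intercept + precision
    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    n, k = X.shape
    resid = z - X @ beta
    s2 = resid @ resid / (n - k)                      # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])                  # t-statistic of intercept
    p = 2 * stats.t.sf(abs(t), n - k)
    return beta[0], p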
Misclassification of smoking status
No study validated the lifelong non-smoking status of their participants, although study 18 did verify current active and passive exposure in a random sample of participants by measuring urinary cotinine levels. As some current and past smokers deny smoking when interviewed 71 , and as the smoking habits of spouses or household members are clearly correlated 47 , misclassification of even a few ever smokers as never smokers can cause relevant bias 72 , especially when, as is the case with COPD, the association with smoking is strong 31 .
Weaknesses in longitudinal studies
All the longitudinal studies considered involved follow-up for at least 12 years. Of the five studies, three (studies 3, 7 and 10) assumed spousal smoking was unchanged during follow-up, only studies 4 and 22 collecting information on smoking status at multiple time points.
All these studies only considered COPD deaths which occurred in the original study area.
Inappropriate controls in case-control studies
Although three case-control studies used population controls, the remaining three used control groups unlikely to be representative of the population from which the cases derived. Studies 6 and 14 used visitors to the hospital attended by the cases, and study 13 used as a control a person identified by the informant of a death as a "living person about the same age who was well known to the informant", the informant then being asked about the lifestyle 10 years earlier of both decedent and control.
Weakness of cross-sectional studies
Over half of the studies were of cross-sectional design, a design limited by difficulties in determining whether ETS exposure or disease onset occurred first.
Poor control for potential confounding variables
As noted above, some studies made little or no adjustment for variables likely to differ between smoking and non-smoking households. Though ORs for the main exposure index did not vary significantly by extent of adjustment, it should be noted that adjustment for dietary variables and education explains a substantial part of the association of lung cancer with spousal smoking 45. The same may be the case for COPD.
Variation and appropriateness of diagnostic criteria
Definitions of COPD used were all consistent with the inclusion criteria. However, they still varied somewhat between studies, further adding uncertainty to the meta-analysis results. Even given the inclusion criteria, there are doubts about the appropriateness of the diagnostic criteria used in some studies. In study 8, for example, the definition included asthma as well as chronic bronchitis and emphysema, the diagnosis being reported by the head of the household, and not necessarily made by a doctor.
Misclassification of ETS exposure
While random errors in determining ETS exposure would tend to lead to underestimation of the relationship of COPD with ETS, errors may not be random. Twenty-three of the 28 studies considered were of case-control or cross-sectional design, where recall bias may exist if those with COPD tend to overestimate their ETS exposure compared to those without COPD. Exposure was generally not validated by biochemical markers or air measurements taken at home.
Limited evidence for some sources of ETS
Only 15 studies (4, 5, 11, 12, 15-19, 21-23, 25-27) provided data on ETS exposure from sources other than the home. Five (4,12,17,18,27) presented results only for a combined household and workplace exposure index, with a further five (11,15,19,21,25) only presenting results for total exposure irrespective of location, results used in our analyses as the nearest available equivalent to smoking by the spouse or household member. While there are far fewer available data on the risk of COPD from ETS exposure specifically in the workplace or in childhood than on smoking in the home, the available data show no clear relationship of risk with these less studied exposure indices.
Comparison with other recently published reviews
A review in 2007 73 considered that "ETS exposure may be an important cause of COPD". However, this conclusion was based on only six studies, one examining absolute risk of COPD in relation to changes in tobacco consumption, and one comparing lung function of employees in bars and restaurants before and after a smoking ban. Also, it seemed that at least some of the others considered were not restricted to never smokers.
A review in 2010 60 meta-analysed results from 12 studies and gave an overall estimate of 1.56 (1.40-1.74), somewhat higher than our estimate. Not all of the studies included were of COPD, some being based on chronic bronchitis symptoms. Also, some studies were based on current non-smokers rather than on lifelong never smokers.
In 2013, Bentayeb et al. 61 reviewed evidence on indoor air pollution and respiratory health in those aged over 65 years. After considering 33 papers (only one 16 presenting relevant results on ETS and COPD risk in non-smokers), they reported that the most consistent relationship found was between ETS exposure and COPD risk. However, the findings did not allow causal inference due to heterogeneity of the studies considered, measurement errors in exposure assessment, variable outcome definition, and lack of information on lifetime exposure to air pollution. The authors concluded that more investigations are needed to understand the relationship of indoor air pollution to respiratory health in the elderly.
A review in 2014 74 reached similar conclusions, the authors stating that "second-hand exposure to tobacco smoke has also been shown to be associated with the risk of COPD, although more robust evidence needs to be generated". These conclusions were derived from only eight studies, some concerned with respiratory symptoms rather than COPD. Also, one study did not restrict any analyses to never smokers.
A review in 2015 62 included only five studies in the meta-analyses. The estimated risk of COPD in ETS-exposed participants was higher than we estimated, being 1.66 (1.38-2.00) for both sexes combined, 1.50 (0.96-2.28) for males and 2.17 (1.48-3.18) for females. However, these estimates were based on, respectively, three, one and one estimates, the authors examining three further studies but not including them in their meta-analysis due to low study quality. Moreover, two of the studies they did include were not based on lifelong never smokers, and many other studies that might have been included were not. The authors noted that "the few existing studies on second-hand smoke exposure and COPD differ considerably, although the results indicate a positive association" and that "further research is needed, to provide more adequate primary studies which account for confounding and other biases".
A review in 2016 63 of "the effects of smoking on respiratory health" also considered effects of ETS exposure. However, only three studies were cited, two not satisfying our inclusion criteria. Noting the variability in the results, the authors only pointed to the need for additional studies.
Generally, these reviews point to an association between ETS exposure and risk of COPD without concluding that a causal relationship has clearly been established. The present review, which includes far more studies, confirms the association and provides evidence that is strongly suggestive of a true effect. While this suggestion is not inconsistent with the view of the Global Burden of Disease Study 2017 75 that second-hand smoke is a risk factor for COPD, the limitations of the evidence, discussed above, preclude a more definitive conclusion.
Another relevant publication
In response to a comment from a reviewer (Dr Maio), we updated our searches by a further two years. While this identified an additional 99 publications, only one 76 satisfied our inclusion criteria. That paper reported age- and sex-adjusted hazard ratio estimates by level of ETS exposure, which, when combined, gave an exposed/unexposed estimate of 2.25 (95% CI …). Including this estimate, based on only 33 COPD cases, had little effect on the meta-analysis results shown in Table 4. Thus, the overall random-effects estimate of 1.20 (95% CI 1.08-1.34) for all COPD was changed only to 1.22 (1.09-1.36), while that for Asia was changed only from 1.34 (1.08-1.67) to 1.38 (1.11-1.72).
Conclusion
Taken in conjunction with the strong association of smoking with COPD, the significant relationship seen for the main index of ETS exposure and the evidence of a dose-response relationship are highly suggestive that ETS also increases the risk of COPD. However, the absence of well-designed and fully reported large studies, and the limitations noted above, make it difficult to obtain an accurate estimate of the true magnitude of any possible effect. More definitive studies are required to reach a firmer conclusion.
Data availability
Underlying data
There were no underlying data associated with this article.

As regards the comment about numbers of aspects and subsets, we have left the paper as it is. As readers, our strong preference is to have all the material to be considered in one place, without having to go backwards and forwards between the main paper and supplementary files. Readers can always skip information they are not interested in if they wish to.
We have, however, made a number of changes to the main messages put over. As shown in the red-lining, these appear at the end of the abstract, near the end of the discussion section, and in the conclusions section. We now make it clear that the overall association with the main ETS exposure index, coupled with the dose-response evidence and the evidence on smoking and COPD, provides strong evidence of a possible causal relationship, and note that this is consistent with the Global Burden of Disease 2017 statement that ETS is a risk factor. However, we make it clear that one cannot go further based on the evidence: one cannot definitively conclude that ETS causes COPD, still less obtain a very accurate estimate of its possible effect.
I hope that these alterations are sufficient to remove the reviewer's reservations.
As previously described.
Reply to Yousser Mohammad
Dr Mohammad suggests that I should cite a statement by WHO that ETS is responsible for about 900,000 deaths per year from all causes combined. We would rather not do this for two reasons. First, the paper is specifically about ETS and COPD, so does not need to stray into the relationship between ETS and other causes of death. Secondly, estimation of the effect of ETS on overall mortality is extremely complex and citation of a single estimate is questionable. We have in fact published widely on the evidence relating ETS to other diseases, such as lung cancer (Lee et al., 2016a; Lee, 2002), other cancers (Lee, 2002; Lee and Hamling, 2006; Lee and Hamling, 2016; Lee et al., 2016b), stroke (Lee and Forey, 2006; Lee et al., 2017b), heart disease (Lee et al., 2017a), and asthma (Lee and Forey, 2007), and find little evidence of an effect as large as WHO claims. For lung cancer, for example, we concluded (Lee et al., 2017b) that any causal relationship is not convincingly demonstrated, as most, if not all, of the relationship with ETS can be explained by confounding adjustment and misclassification correction.
Dr Mohammad also suggests that I mention waterpipes. We would prefer not to do this as the paper is about ETS exposure from conventional cigarettes and as we have not formally reviewed the evidence relating to waterpipes. We have made it clearer in the introduction that the paper concerns ETS from cigarettes.
He also suggests that we should discuss differences in the quality of ETS. We assume that he is pointing out that ETS from various products may not have the same composition or effects as from conventional cigarettes. But we are not concerned with ETS from products other than cigarettes.
He suggests that we refer to evidence relating ETS to wheezing and asthma. While the paper he cites does not actually mention COPD, we do now include a statement in the text at the end of paragraph 2 of the discussion.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Author Response 15 May 2018
Peter Lee, P.N. Lee Statistics and Computing Ltd, Sutton, UK

Dear Dr Maio, I thank you for your comments, which I respond to below on behalf of my co-authors. The text of the paper has not yet been altered as the editorial team advise that I wait for the additional referee reports before doing so.
Updating selected papers
You say that "many other scientific evidences were published after June 2016". However, we have updated our searches to cover more recent papers, and only found one paper which satisfied the inclusion criteria. This was a report by Ukawa et al in 2017 (International Journal of Public Health, vol 62, pp 489-494) which presented results for at-home passive smoking exposure from the Japan Collaborative Cohort study, based only on 33 cases. Rather than updating the whole range of analysis results, we intend simply to refer to this additional study, and the effect it had on the overall effect estimate, in a comment at the end of the discussion section. If you think there are other important papers we have missed, please let us know what they are.
Too many aspects and subsets are taken into account
We have published a number of previous reviews of the relationship of passive smoking to other diseases, and this style has never before been criticized. In our view it is important to fully describe how the association of interest varies by the source of exposure and by study characteristics, and also by the definition of disease. One cannot get a good insight without these details.
You say "the smoking Taking into account the publication period before and after the smoking ban ban" but there are many smoking bans, different in type and different in timing. In the US for example different states, and different locations within states, brought in bans at different times. When one also considers the long latent period of COPD, with deaths post-ban perhaps due to exposures pre-ban, and the fact that in some studies some COPD cases occur pre-ban and some post-ban, we did not consider it useful to attempt the required analysis.
Publication bias
We made the point (also made in other passive smoking reviews) that some large cohort studies are known to have published positive relationships relating passive smoking to other diseases when they did not publish results relating passive smoking to COPD. Surely it is quite likely that they did not find a positive relationship for COPD? In my view large cohort studies ought to publish passive smoking results for all diseases with sufficient cases, but often they do not. Our comment is supported by the evidence as to what has and has not been published; though this does not prove that all such studies found no positive association with COPD, the likelihood is there. The argument is similar to the general one for publication bias. We don't generally have evidence that papers showing no association are less likely to be submitted or accepted than papers finding an association, but it is highly plausible.
Negative approach against other reviews
We state the reasons why these other reviews are limited.
Overlooking evidence suggesting an association, such as the dose-response results and the overall meta-analysis results
In paragraph 2 of the discussion we refer to the meta-analysis results and the dose-response results, and then go on to discuss why these results are only suggestive of a causal relationship. The overall association of 1.20 with passive smoking at home, though highly statistically significant, is quite small in magnitude, and it is certainly possible that it may be explicable in terms of bias. In our review of passive smoking and lung cancer (World Journal of Meta-Analysis, 2017, 4, 10-43) we were able to demonstrate quite clearly that a similar sized association could plausibly be explained by a combination of uncontrolled confounding and misclassification of smoking status. Though the data for COPD are not extensive enough to readily allow such adjustments, we would be extremely nervous in saying that the association is more than suggestive of a causal relationship. Nevertheless, we will look again at the wording we have used and try to make our argument clearer.
Conclusions drawn not adequately supported by the results presented
This really relates to the previous point. We believe our conclusions are supported by the results presented.
We would be happy to hear your reactions to our replies.
Yours sincerely
Peter Lee (and co-authors)
Competing Interests: Our competing interests have already been described in the paper itself.
package org.hl7.fhir.dstu3.model.codesystems;
/*
Copyright (c) 2011+, HL7, Inc.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of HL7 nor the names of its contributors may be used to
endorse or promote products derived from this software without specific
prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
*/
// Generated on Sat, Mar 25, 2017 21:03-0400 for FHIR v3.0.0
import org.hl7.fhir.exceptions.FHIRException;
public enum OrganizationType {
/**
* An organization that provides healthcare services.
*/
PROV,
/**
* A department or ward within a hospital (Generally is not applicable to top level organizations)
*/
DEPT,
/**
* An organizational team is usually a grouping of practitioners that perform a specific function within an organization (which could be a top level organization, or a department).
*/
TEAM,
/**
* A political body, often used when including organization records for government bodies such as a Federal Government, State or Local Government.
*/
GOVT,
/**
* A company that provides insurance to its subscribers that may include healthcare related policies.
*/
INS,
/**
* An educational institution that provides education or research facilities.
*/
EDU,
/**
* An organization that is identified as a part of a religious institution.
*/
RELI,
/**
* An organization that is identified as a Pharmaceutical/Clinical Research Sponsor.
*/
CRS,
/**
* An un-incorporated community group.
*/
CG,
/**
* An organization that is a registered business or corporation but not identified by other types.
*/
BUS,
/**
* Other type of organization not already specified.
*/
OTHER,
/**
* added to help the parsers
*/
NULL;
public static OrganizationType fromCode(String codeString) throws FHIRException {
if (codeString == null || "".equals(codeString))
return null;
if ("prov".equals(codeString))
return PROV;
if ("dept".equals(codeString))
return DEPT;
if ("team".equals(codeString))
return TEAM;
if ("govt".equals(codeString))
return GOVT;
if ("ins".equals(codeString))
return INS;
if ("edu".equals(codeString))
return EDU;
if ("reli".equals(codeString))
return RELI;
if ("crs".equals(codeString))
return CRS;
if ("cg".equals(codeString))
return CG;
if ("bus".equals(codeString))
return BUS;
if ("other".equals(codeString))
return OTHER;
throw new FHIRException("Unknown OrganizationType code '"+codeString+"'");
}
public String toCode() {
switch (this) {
case PROV: return "prov";
case DEPT: return "dept";
case TEAM: return "team";
case GOVT: return "govt";
case INS: return "ins";
case EDU: return "edu";
case RELI: return "reli";
case CRS: return "crs";
case CG: return "cg";
case BUS: return "bus";
case OTHER: return "other";
default: return "?";
}
}
public String getSystem() {
return "http://hl7.org/fhir/organization-type";
}
public String getDefinition() {
switch (this) {
case PROV: return "An organization that provides healthcare services.";
case DEPT: return "A department or ward within a hospital (Generally is not applicable to top level organizations)";
case TEAM: return "An organizational team is usually a grouping of practitioners that perform a specific function within an organization (which could be a top level organization, or a department).";
case GOVT: return "A political body, often used when including organization records for government bodies such as a Federal Government, State or Local Government.";
case INS: return "A company that provides insurance to its subscribers that may include healthcare related policies.";
case EDU: return "An educational institution that provides education or research facilities.";
case RELI: return "An organization that is identified as a part of a religious institution.";
case CRS: return "An organization that is identified as a Pharmaceutical/Clinical Research Sponsor.";
case CG: return "An un-incorporated community group.";
case BUS: return "An organization that is a registered business or corporation but not identified by other types.";
case OTHER: return "Other type of organization not already specified.";
default: return "?";
}
}
public String getDisplay() {
switch (this) {
case PROV: return "Healthcare Provider";
case DEPT: return "Hospital Department";
case TEAM: return "Organizational team";
case GOVT: return "Government";
case INS: return "Insurance Company";
case EDU: return "Educational Institute";
case RELI: return "Religious Institution";
case CRS: return "Clinical Research Sponsor";
case CG: return "Community Group";
case BUS: return "Non-Healthcare Business or Corporation";
case OTHER: return "Other";
default: return "?";
}
}
}
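// Usage sketch (illustrative): round-trip a code through the enum.
//
// OrganizationType t = OrganizationType.fromCode("prov");
// System.out.println(t.toCode());      // "prov"
// System.out.println(t.getDisplay());  // "Healthcare Provider"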
|
import numpy as np


def create_gaussian_filter(fsize, sigma):
    """Build a separable 2-D Gaussian kernel of (odd) size fsize.

    Returns the normalized 2-D kernel HG, its separable 1-D factors
    Hgx (row) and Hgy (column), and the pre-normalization sum SUM.
    """
    center = np.ceil(fsize / 2)
    # Distances from the kernel center, e.g. [2, 1, 0, 1, 2] for fsize = 5
    tmp = np.arange(1, center, 1)
    tmp = np.concatenate([tmp[::-1], [0], tmp])
    dist = np.zeros((1, tmp.shape[0]))
    dist[0, :] = tmp
    Hgx = np.exp(-dist**2 / (2 * sigma**2))   # unnormalized 1-D Gaussian (row)
    Hgy = np.transpose(Hgx)                   # column factor
    HG = np.outer(Hgy, Hgx)                   # 2-D kernel via outer product
    SUM = np.sum(HG)
    HG = HG / SUM                             # normalize kernel to sum to 1
    Hgx = Hgx / np.sqrt(SUM)                  # split the normalization across
    Hgy = Hgy / np.sqrt(SUM)                  # the two separable factors
    return HG, Hgx, Hgy, SUM
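
# Example usage (illustrative): a 5x5 kernel with sigma = 1.0.
if __name__ == "__main__":
    HG, Hgx, Hgy, SUM = create_gaussian_filter(5, 1.0)
    assert HG.shape == (5, 5)
    assert np.isclose(HG.sum(), 1.0)            # kernel normalized to sum 1
    assert np.allclose(np.outer(Hgy, Hgx), HG)  # separable outer-product form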
// Describes your import snapshot tasks.
func (c *EC2) DescribeImportSnapshotTasks(input *DescribeImportSnapshotTasksInput) (*DescribeImportSnapshotTasksOutput, error) {
req, out := c.DescribeImportSnapshotTasksRequest(input)
err := req.Send()
return out, err
} |
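// Hypothetical usage sketch (aws-sdk-go v1; the region is illustrative, and the
// assumed imports are aws, aws/session and service/ec2 from github.com/aws/aws-sdk-go):
//
//	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
//	svc := ec2.New(sess)
//	out, err := svc.DescribeImportSnapshotTasks(&ec2.DescribeImportSnapshotTasksInput{})
//	if err != nil {
//		log.Fatal(err)
//	}
//	fmt.Println(out.ImportSnapshotTasks)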
/**
* The base game entity class, it aims for creating movable objects.
*/
public abstract class BaseGameEntity extends AbstractEntity {
public static final int DEFAULT_ENTITY_TYPE = -1;
/**
* Using this vector for temporary calculation, so the change of this vector
* does not affect to current instance.
*/
private final Vector2 position = Vector2.newInstance();
/**
* Using this vector for temporary calculation, so the change of this vector
* does not affect to current instance.
*/
private final Vector2 scale = Vector2.newInstance();
/**
* Every entity has a type associated with it (health, troll, ammo, etc.).
*/
private int type;
/**
* This is a generic flag.
*/
private boolean tag;
// It's position in the environment
private float positionX;
private float positionY;
// It's current scale rate
private float scaleX;
private float scaleY;
/**
* This object's bounding radius.
*/
private float boundingRadius;
protected BaseGameEntity() {
boundingRadius = 0;
positionX = 0;
positionY = 0;
scaleX = 0;
scaleY = 0;
type = DEFAULT_ENTITY_TYPE;
tag = false;
}
protected BaseGameEntity(int type) {
this();
this.type = type;
scaleX = 1;
scaleY = 1;
}
protected BaseGameEntity(int type, float positionX, float positionY, float radius) {
this();
this.type = type;
this.positionX = positionX;
this.positionY = positionY;
// Initialize the scale to 1 (as in the type-only constructor) so that a later
// setScale() call does not divide by the zero scale left by the no-arg constructor.
scaleX = 1;
scaleY = 1;
boundingRadius = radius;
}
public float getPositionX() {
return positionX;
}
public float getPositionY() {
return positionY;
}
public Vector2 getPosition() {
return position.set(positionX, positionY);
}
public void setPosition(Vector2 position) {
setPosition(position.x, position.y);
}
public void setPosition(float x, float y) {
positionX = x;
positionY = y;
}
public float getScaleX() {
return scaleX;
}
public float getScaleY() {
return scaleY;
}
public Vector2 getScale() {
return scale.set(scaleX, scaleY);
}
public void setScale(Vector2 scale) {
setScale(scale.x, scale.y);
}
/**
* Set the new scale value.
*
* @param value the new value
*/
public void setScale(float value) {
boundingRadius *= (value / MathUtility.maxOf(scaleX, scaleY));
scaleX = value;
scaleY = value;
}
/**
* Set the new scale value.
*
* @param x in width
* @param y in height
*/
public void setScale(float x, float y) {
boundingRadius *= MathUtility.maxOf(x, y) / MathUtility.maxOf(scaleX, scaleY);
scaleX = x;
scaleY = y;
}
public float getBoundingRadius() {
return boundingRadius;
}
public void setBoundingRadius(float radius) {
boundingRadius = radius;
}
public boolean isTagged() {
return tag;
}
public void enableTag(boolean enabled) {
tag = enabled;
}
public int getType() {
return type;
}
public void setType(int type) {
this.type = type;
}
} |
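// Usage sketch: BaseGameEntity is abstract, so a minimal concrete subclass is
// needed. "Obstacle" is illustrative and not part of the original code base.
//
// class Obstacle extends BaseGameEntity {
//     Obstacle(int type, float x, float y, float radius) {
//         super(type, x, y, radius);
//     }
// }
//
// BaseGameEntity e = new Obstacle(1, 10f, 20f, 5f);
// e.setScale(2f);                  // bounding radius rescales proportionally
// float r = e.getBoundingRadius(); // 10f, given the unit scale set in the constructor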
/*
* Copyright (C) 2012 United States Government as represented by the Administrator of the
* National Aeronautics and Space Administration.
* All Rights Reserved.
*/
// stdafx.cpp : source file that includes just the standard includes
// WebView.pch will be the pre-compiled header
// stdafx.obj will contain the pre-compiled type information
#include "stdafx.h"
|
// Register the approver-policy Webhook endpoints against the
// controller-manager Manager.
func Register(ctx context.Context, opts Options) error {
log := opts.Log.WithName("webhook")
log.Info("running tls bootstrap process...")
tls, err := tls.New(ctx, tls.Options{
Log: log,
RestConfig: opts.Manager.GetConfig(),
WebhookCertificatesDir: opts.WebhookCertificatesDir,
CASecretNamespace: opts.CASecretNamespace,
ServiceName: opts.ServiceName,
})
if err != nil {
return fmt.Errorf("failed to run webhook tls bootstrap process: %w", err)
}
log.Info("tls bootstrap process complete")
if err := opts.Manager.Add(tls); err != nil {
return fmt.Errorf("failed to add webhook tls manager as a runnable: %w", err)
}
var registeredPlugins []string
for _, approver := range registry.Shared.Approvers() {
if name := approver.Name(); name != "allowed" && name != "constraints" {
registeredPlugins = append(registeredPlugins, name)
}
}
log.Info("registering webhook endpoints")
validator := &validator{
log: log.WithName("validation"),
lister: opts.Manager.GetCache(),
webhooks: opts.Webhooks,
registeredPlugins: registeredPlugins,
}
opts.Manager.GetWebhookServer().Register("/validate", &webhook.Admission{Handler: validator})
opts.Manager.AddReadyzCheck("validator", validator.check)
return nil
} |
Making your own jam seems so… Martha Stewart. Ha. Doesn’t it?! But let me tell you.. it could not be any easier.
Thanks to our friendly superfood, the chia seed, we can make Cranberry Raisin Chia Seed Jam in no time! Chia seeds make it all possible (and fast). These tiny chia gems absorb all the excess liquid (they can hold more than 12 times their weight in water) to make the perfect jelly-like texture for homemade jam.
We talk about chia seeds a lot here on TGF! I try to incorporate them into my daily eats because they are a perfect source of plant protein, omega-3’s, fiber, antioxidants, magnesium and potassium. Chia seeds are tasteless and have so many uses! Chia pudding, chia eggs, overnight oats, you can add them to pretty much any recipe for a nutritional boost. And if you don’t usually like the texture, they are perfect for jam because they are similar to the seeds in berry jam – you won’t even notice a difference!
To make this festive Cranberry Raisin Chia Jam, you won’t need any special/weird/yuck ingredients that are in most sugary, processed store-bought brands. Just cranberries, water, maple syrup (or your fave natural sweetener), vanilla extract and raisins! YUM!
Boil and simmer your cranberries in water.
The cranberries make fun popping noises like popcorn when they first burst open in the simmering pot 🙂 After they have been simmering for 10 minutes, you blend ’em up with the other ingredients and that’s it. The healthiest jam around. I’ve been adding it to oatmeal too, which is phenom. Talk about PJ&J Oats! Of course its yummy on toast, crackers or just eaten plain with a spoon. Yep. Cranberries are in season now too! Perfect timing for Thanksgiving and all things festive.
Cranberry Raisin Chia Jam

Ingredients
1.5 cups fresh cranberries
1.5 cups water
1/3 cup raisins
1 teaspoon vanilla
1 tablespoon maple syrup
4 tablespoons chia seeds

Instructions
Combine cranberries and water in a medium-sized pot. Bring to a boil and then reduce the heat to simmer for 10 minutes. Do not drain the water. Transfer to a blender or food processor and add the raisins, vanilla and maple syrup. Blend until smooth, or blend briefly to keep pieces of cranberry and raisin like I did. Add the chia seeds and pulse a couple of times just to mix them in. Transfer to a glass jar or container. It is ready to eat immediately, or you can wait an hour for it to firm up more. Stays fresh in the fridge for up to 2 weeks. Enjoy!
/**
* A builder object for configuring {@link PlaybookConfig} objects dynamically
*
* @author Greg Marut
*/
public class PlaybookConfigBuilder
{
//holds the instance of the playbooks test configuration
private final PlaybooksTestConfiguration playbooksTestConfiguration;
//holds the app class to configure this playbook config for
private final Class<? extends PlaybooksApp> playbookAppClass;
//holds the list of configured params for this playbooks app
private final List<Param> playbookParamList;
private final List<PlaybookOutputVariable> playbookOutputVariableList;
PlaybookConfigBuilder(final Class<? extends PlaybooksApp> playbookAppClass,
final PlaybooksTestConfiguration playbooksTestConfiguration)
{
this.playbookAppClass = playbookAppClass;
this.playbooksTestConfiguration = playbooksTestConfiguration;
this.playbookParamList = new ArrayList<Param>();
this.playbookOutputVariableList = new ArrayList<PlaybookOutputVariable>();
}
PlaybookConfigBuilder(final PlaybookConfig playbookConfig,
final PlaybooksTestConfiguration playbooksTestConfiguration)
{
this.playbookAppClass = playbookConfig.getPlaybookAppClass();
this.playbooksTestConfiguration = playbooksTestConfiguration;
this.playbookParamList = new ArrayList<Param>(playbookConfig.getPlaybookParams());
this.playbookOutputVariableList = new ArrayList<PlaybookOutputVariable>(playbookConfig.getAllOutputVariables());
}
public PlaybookConfigBuilder addAppParam(final String name, final ParamDataType type)
{
//create the new param object and add it to the list
Param param = new Param();
param.setName(name);
param.setType(type);
playbookParamList.add(param);
return this;
}
public PlaybookConfigBuilder addPlaybookParam(final String name, final ParamDataType type,
final StandardPlaybookType... playbookVariableTypes)
{
return addPlaybookParam(name, type,
Arrays.stream(playbookVariableTypes).map(StandardPlaybookType::toString).toArray(String[]::new));
}
public PlaybookConfigBuilder addPlaybookParam(final String name, final ParamDataType type,
final String... playbookVariableTypes)
{
//make sure the playbooks variable type is not null
if (null == playbookVariableTypes || playbookVariableTypes.length == 0)
{
throw new IllegalArgumentException("playbookVariableTypes cannot be null or empty");
}
//create the new param object and add it to the list
Param param = new Param();
param.setName(name);
param.setType(type);
param.getPlaybookDataType().addAll(Arrays.asList(playbookVariableTypes));
playbookParamList.add(param);
return this;
}
public PlaybookConfigBuilder addOutputVariable(final String name, final StandardPlaybookType playbookVariableType)
{
return addOutputVariable(name, playbookVariableType.toString());
}
public PlaybookConfigBuilder addOutputVariable(final String name, final String playbookVariableType)
{
PlaybookOutputVariable playbookOutputVariable = new PlaybookOutputVariable();
playbookOutputVariable.setName(name);
playbookOutputVariable.setType(playbookVariableType);
playbookOutputVariableList.add(playbookOutputVariable);
return this;
}
/**
* Builds this configuration and registers it with the test configuration
*/
public void build()
{
//create a new playbook config
PlaybookConfig playbookConfig =
new PlaybookConfig(playbookAppClass, playbookParamList, playbookOutputVariableList);
//register this playbook config with the test configuration
playbooksTestConfiguration.registerDynamicPlaybookConfiguration(playbookConfig);
}
} |
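// Usage sketch: the constructors are package-private, so a builder instance is
// presumably obtained via PlaybooksTestConfiguration. The parameter names and the
// enum constants ParamDataType.STRING / StandardPlaybookType.String below are
// illustrative assumptions, not confirmed API.
//
// builder
//     .addAppParam("api_key", ParamDataType.STRING)
//     .addOutputVariable("result", StandardPlaybookType.String)
//     .build(); // registers the config with the test configuration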
I have to admit and apologize, I literally gave my Secret Santa nothing to go off of. Compared to what everyone talked about, I feel ashamed lol
However, Santa came through, and in awesome ways! Santa must have used all sorts of Claus-fu (i.e. reddit-stalking).
Boxes arrive at my door. Yes two, not one. It was a blustery Saturday afternoon, and coming home from work on the coldest day of the year, I hurried them into the house to warm up.
I open the first box, and what do I see? A Dammit Doll head! It's a modern-day take on a stress ball. I literally use a stress ball everyday at work, one of those cheesy foam balls with some insignificant corporate logo on it. This is where the Claus-fu came in to play. I freakin' love this thing!
Box # 2, a little heavier, so I ponder what it could be. As I am pulling it out, I can kind of make out that it's a book, with a 3 inch binding! I am groaning to myself, am I really going to devote so much time to read a novel so long? Until then.... the book emerges. "1,000 Places to See in the United States & Canada Before You Die". HOW FREAKIN' COOL? I LOVE travelling domestically, long road trips are my thing. Skimming through this book showed me some great reviews of places, well thought out opinions, and all the time just making me itch to get out on the open road more and more. It will take me years to visit a portion of the places in this book, but this is a keeper I will DEFINITELY be reading from cover to cover.
Secret Santa, you nailed it. Thank you VERY much for your thoughtfulness and generosity. Merry Christmas! |
package cn.zhuguoqing.operationLog.bean.dto;
import cn.zhuguoqing.operationLog.bean.enums.CustomFunctionType;
import cn.zhuguoqing.operationLog.bean.enums.DiffType;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.springframework.util.CollectionUtils;
import org.springframework.util.StringUtils;
import java.util.*;
import java.util.function.Function;
/**
* @author guoqing.zhu
* <p>description: DiffDTO built via the builder pattern.
*/
@Getter
@Slf4j
public class DiffDTO {
/** example: product.g_prod_spu, |required| */
private String schemaTableName;
/** Schema name */
private String schema;
/** Table name */
private String table;
/** Key, e.g. where {key} = 123; */
private String keyName;
/** Value, e.g. where id = {value} */
private List<String> keyValue;
/** Content appended to the SQL query after WHERE, |optional|, e.g. where id = 1 and {is_deleted = 0}; */
private String appendSQLAfterWhere;
/** Field name used in log descriptions, |optional| */
private String diffName;
/** Information describing what the diff is about, |optional| */
private String informationAboutWhat;
/** Columns to record, |optional|, using the database column names directly */
private List<String> includeRecordClms;
/** Columns not to record, |optional|, using the database column names directly */
private List<String> excludeRecordClms;
/** Custom functions for transforming column names (comments), |optional| */
private Map<String, Function<String, String>> customCommentFunctionMap;
/** Custom functions for transforming column values, |optional| */
private Map<String, Function<String, String>> customValueFunctionMap;
private DiffDTO(Builder builder) {
this.schemaTableName = builder.schemaTableName;
this.schema = builder.schema;
this.table = builder.table;
this.keyName = builder.keyName;
this.keyValue = builder.keyValue;
this.appendSQLAfterWhere = builder.appendSQLAfterWhere;
this.diffName = builder.diffName;
this.includeRecordClms = builder.includeRecordClms;
this.excludeRecordClms = builder.excludeRecordClms;
this.informationAboutWhat = builder.informationAboutWhat;
this.customCommentFunctionMap = builder.customCommentFunctionMap;
this.customValueFunctionMap = builder.customValueFunctionMap;
}
public static class Builder {
private static final String KEY_NAME = "id";
private DiffType diffType;
private String schemaTableName;
private String schema;
private String table;
private String keyName = KEY_NAME;
private List<String> keyValue;
private String appendSQLAfterWhere;
private String diffName;
private List<String> includeRecordClms;
private List<String> excludeRecordClms;
private String informationAboutWhat;
private final Map<String, Function<String, String>> customCommentFunctionMap = new HashMap<>(8);
private final Map<String, Function<String, String>> customValueFunctionMap = new HashMap<>(8);
public DiffDTO build() {
try {
// Validation logic is done here, including required-field checks, dependency checks, constraint checks, etc.
if (Objects.isNull(diffType)) {
diffType = DiffType.SINGLE_UPDATE;
}
if (StringUtils.isEmpty(schema)) {
throw new IllegalArgumentException("schemaTableName cannot be null");
}
if (StringUtils.isEmpty(table)) {
throw new IllegalArgumentException("schemaTableName cannot be null");
}
if (StringUtils.isEmpty(appendSQLAfterWhere)) {
appendSQLAfterWhere = " ";
} else {
if (!appendSQLAfterWhere.trim().startsWith("and")) {
appendSQLAfterWhere = " and " + appendSQLAfterWhere;
}
}
if (diffType.getType().equals(DiffType.LIST_ADD_DELETE.getType())
&& StringUtils.isEmpty(diffName)) {
throw new IllegalArgumentException("当使用list_add_delete的diff时候 diffName不能为空");
}
if (!CollectionUtils.isEmpty(includeRecordClms)
&& !CollectionUtils.isEmpty(excludeRecordClms)) {
throw new IllegalArgumentException("includeRecordClms与excludeRecordClms不能同时存在");
}
} catch (Exception e) {
log.error("--OperationLog--DiffDTO build error", e);
}
return new DiffDTO(this);
}
/**
* DiffType |required|
*
* @param diffType diffType
* @return the builder
*/
public Builder setDiffType(DiffType diffType) {
try {
if (Objects.isNull(diffType)) {
throw new IllegalArgumentException("diffType cannot be null");
}
this.diffType = diffType;
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setDiffType error", e);
}
return this;
}
/**
* example: product.g_prod_spu, |required|
*
* @param schemaTableName schema name.table name
* @return the builder
*/
public Builder setSchemaTableName(String schemaTableName) {
try {
if (StringUtils.isEmpty(schemaTableName)) {
throw new IllegalArgumentException("schemaTableName cannot be null");
}
if (!schemaTableName.contains(".")) {
throw new IllegalArgumentException("schemaTableName的格式需要设置为:schemaA.tableB");
}
String[] split = schemaTableName.split("\\.");
if (split.length != 2) {
throw new IllegalArgumentException("schemaTableName的格式需要设置为:schemaA.tableB");
}
this.schemaTableName = schemaTableName;
schema = split[0];
table = split[1];
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setSchemaTableName error", e);
}
return this;
}
/**
* Key, e.g. where {key} = 123
*
* @param keyName the name of the key column
* @return the builder
*/
public Builder setKeyName(String keyName) {
try {
if (StringUtils.isEmpty(keyName)) {
throw new IllegalArgumentException("keyName cannot be null");
}
this.keyName = keyName;
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setKeyName error", e);
}
return this;
}
/**
* Value, e.g. where id = {value}
*
* @param keyValue the value(s) of the key
* @return the builder
*/
public Builder setKeyValue(String... keyValue) {
try {
if (Objects.isNull(keyValue)) {
throw new IllegalArgumentException("keyValue cannot be null");
}
if (keyValue.length == 0) {
throw new IllegalArgumentException("keyValue size cannot be 0");
}
this.keyValue = new ArrayList<>(Arrays.asList(keyValue));
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setKeyValue error", e);
}
return this;
}
/**
* Content appended to the SQL query after WHERE, |optional|, e.g. where id = 1 and {is_deleted = 0};
*
* @param appendSQLAfterWhere the SQL fragment to append
* @return the builder
*/
public Builder setAppendSQLAfterWhere(String appendSQLAfterWhere) {
try {
if (StringUtils.isEmpty(appendSQLAfterWhere)) {
throw new IllegalArgumentException("appendSQLAfterWhere cannot be null");
}
this.appendSQLAfterWhere = appendSQLAfterWhere;
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setAppendSQLAfterWhere error", e);
}
return this;
}
/**
* Field name used in log descriptions, |optional|
*
* @param diffName the field name used in log descriptions
* @return the builder
*/
public Builder setDiffName(String diffName) {
try {
if (StringUtils.isEmpty(diffName)) {
throw new IllegalArgumentException("diffName cannot be null");
}
this.diffName = diffName;
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setDiffName error", e);
}
return this;
}
/**
* Columns to record, |optional|, using the database column names directly
*
* @param includeRecordClm the columns to record
* @return the builder
*/
public Builder setIncludeRecordClms(String... includeRecordClm) {
try {
if (includeRecordClm == null) {
throw new IllegalArgumentException("includeClms cannot be null");
}
if (includeRecordClm.length == 0) {
throw new IllegalArgumentException("includeClms size cannot be 0");
}
this.includeRecordClms = new ArrayList<>(Arrays.asList(includeRecordClm));
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setIncludeRecordClms error", e);
}
return this;
}
/**
* Columns not to record, |optional|, using the database column names directly
*
* @param excludeRecordClm the columns not to record
* @return the builder
*/
public Builder setExcludeRecordClms(String... excludeRecordClm) {
try {
if (excludeRecordClm == null) {
throw new IllegalArgumentException("excludeClms cannot be null");
}
if (excludeRecordClm.length == 0) {
throw new IllegalArgumentException("excludeClms size cannot be 0");
}
this.excludeRecordClms = new ArrayList<>(Arrays.asList(excludeRecordClm));
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setExcludeRecordClms error", e);
}
return this;
}
/**
* Information describing what the diff is about, |optional|, e.g. the price limit for [spec code: 194276537401347]
*
* @param informationAboutWhat the descriptive information
* @return the builder
*/
public Builder setInformationAboutWhat(String informationAboutWhat) {
try {
if (StringUtils.isEmpty(informationAboutWhat)) {
throw new IllegalArgumentException("informationAboutWhat cannot be null");
}
this.informationAboutWhat = informationAboutWhat;
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- setInformationAboutWhat error", e);
}
return this;
}
/**
* Custom function, |optional|
*
* @param type whether the function transforms the column comment (KEY) or the column value (VALUE)
* @param function the transformation to apply
* @param columnName the column name(s) the function applies to
* @return the builder
*/
public Builder addCustomFunction(
CustomFunctionType type, Function<String, String> function, String... columnName) {
try {
if (Objects.isNull(columnName)) {
throw new IllegalArgumentException("columnName cannot be null");
}
if (Objects.isNull(function)) {
throw new IllegalArgumentException("function cannot be null");
}
if (Objects.isNull(type)) {
type = CustomFunctionType.VALUE;
}
// Compare type codes; equals() against the enum constant itself would always be false
if (type.getType().equals(CustomFunctionType.KEY.getType())) {
for (String s : columnName) {
customCommentFunctionMap.put(s, function);
}
} else {
for (String s : columnName) {
customValueFunctionMap.put(s, function);
}
}
} catch (IllegalArgumentException e) {
log.error("--OperationLog-- addCustomFunction error", e);
}
return this;
}
}
}
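// Usage sketch (illustrative values; relies only on the builder API defined above):
//
// DiffDTO diff = new DiffDTO.Builder()
//     .setDiffType(DiffType.SINGLE_UPDATE)
//     .setSchemaTableName("product.g_prod_spu")
//     .setKeyName("id")
//     .setKeyValue("123")
//     .setAppendSQLAfterWhere("is_deleted = 0")
//     .setIncludeRecordClms("price", "status")
//     .build();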
|
// NewSubscriber creates an amqp subscriber.
func NewSubscriber(conn *amqp.Connection, exchange string, logger *zap.Logger) (*Subscriber, error) {
ch, err := conn.Channel()
if err != nil {
return nil, fmt.Errorf("cannot allocate channel: %v", err)
}
defer ch.Close()
err = declareExchange(ch, exchange)
if err != nil {
return nil, fmt.Errorf("cannot declare exchange: %v", err)
}
return &Subscriber{
conn: conn,
exchange: exchange,
logger: logger,
}, nil
} |
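// Hypothetical usage sketch (assumes the streadway/amqp and go.uber.org/zap
// packages used above; the broker URL and exchange name are illustrative):
//
//	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
//	if err != nil {
//		log.Fatal(err)
//	}
//	defer conn.Close()
//	logger, _ := zap.NewProduction()
//	sub, err := NewSubscriber(conn, "events", logger)
//	if err != nil {
//		log.Fatal(err)
//	}
//	_ = sub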
package com.mf.sample;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import com.mf.library.OnCallBack;
import com.mf.library.UpdateChecker;
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new UpdateChecker(this)
.setRemindDays(1)
.setRemindLabel("Remind me")
.setOnCallBack(new OnCallBack() {
@Override
public boolean Done(boolean success, boolean isUpdateAvailable, String new_version) {
System.out.println("is success=" + success + " is update available=" + isUpdateAvailable + " new version is" + new_version);
// Returning true will show the default library dialog
return true;
}
}).checkUpdate();
}
}
|
Comparison of the effects of early handling and early deprivation on maternal care in the rat.
It has been reported in the rat that postnatal manipulations can induce robust and persistent effects on offspring neurobiology and behavior, mediated in part via effects on maternal care. There have, however, been few studies of the effects of postnatal manipulations on maternal care. Here, we describe and compare the effects on maternal behavior on postnatal days 1-12 of two manipulations, early handling (EH, 15-min isolation per day) and early deprivation (ED, 4-hr isolation per day), relative to our normal postnatal husbandry procedure. Maternal behavior was measured at five time points across the dark phase of the reversed L:D cycle. EH yielded an increase in arched-back nursing across several time points but did not affect any other behavior. ED stimulated a bout of maternal behavior such that licking and arched-back nursing were increased at the time of dam-litter reunion, although not at any other time point. Neither EH nor ED affected weaning weight significantly. Importantly, within-treatment variation was high relative to these between-treatment effects. |
BEIRUT (Reuters) - A group tracking the Syrian war said on Saturday that Islamist insurgents shot dead 56 members of Syrian government forces in a mass execution at an air base captured from the army earlier this month in northwestern Syria.
The Syrian Observatory for Human Rights said the mass killing at Abu al-Duhur air base happened a few days ago, citing sources on the ground. The air base in Idlib province was captured by an alliance of groups including the al Qaeda-linked Nusra Front on Sept. 9.
“We confirmed it yesterday in the evening, via people who witnessed it, and via some pictures that arrived - the execution happened,” Rami Abdulrahman, director of the Observatory, said, speaking by telephone.
The Observatory said the executions were carried out by the Nusra Front, the Turkistan Islamic Party, and other Islamist groups. The Turkistan Islamic Party is one of the groups fighting in northwestern Syria, where it has claimed a role in several major insurgent advances this year.
When the air base fell to insurgents, Syrian state TV said the forces defending it had withdrawn after a two-year siege. It was the last position held by the Syrian military in Idlib province.
The Observatory said a total of 71 members of government forces had been executed at Abu al-Duhur air base since its capture. |
/**
* Converts the table into an index scan relational expression.
*
* @param cluster Cluster that the new relational expression will belong to.
* @param relOptTbl Table to scan.
* @param idxName Name of the index to scan over.
* @param proj Projection expressions, if any.
* @param condition Filter condition to push into the scan, if any.
* @param requiredCols Bitset of the columns required from the scan.
*/
IgniteLogicalIndexScan toRel(
RelOptCluster cluster,
RelOptTable relOptTbl,
String idxName,
List<RexNode> proj,
RexNode condition,
ImmutableBitSet requiredCols
); |
The number of illegal immigrants attempting to enter the US by crossing the Rio Grande has fallen dramatically since President Trump took office.
In March this year, only 4,143 people were detained by Border Patrol Agents at the notorious river crossing as opposed to 15,579 who were caught in January before Trump was in full swing in the White House.
The sharp decline has been chalked down to the president's tough line on immigration.
Border Patrol agents said migrants were no longer willing to risk the dangerous journey and pay steep smuggler fees now that the chances of them being allowed to stay in the US are so slim.
The phenomenon at the Rio Grande crossing is not isolated. Only 12,193 people were arrested across the entire Southwest border in March, 50,000 fewer than in October.
The number of immigrants arrested at the Rio Grande border crossing has fallen dramatically since President Trump took office in January
In Yuma, Arizona, only 336 were arrested trying to cross the border in March whereas 1,155 were stopped in January.
El Paso, another busy crossing in Texas, saw only 976 arrested in March, a decrease of 1,803 since January.
The drop bucks a five-year trend of increases. Since 2012, the total number of immigrants either arrested or turned away from the border has risen.
The end of 2016 brought the highest number of crossings with 56,000 arrested or turned away in September alone.
October, November and December were among the busiest months in the last five years as hundreds of thousands attempted to rush in to the country before Trump took office.
U.S. Customs and Border Protection attributes the decline since then to the president's crackdown.
The entire border has seen dramatic monthly decreases since Trump took office in January, after a spike in attempted entries in October, November and December
The crossing is the busiest along the Mexican border with thousands pouring in every month to seek asylum in Texas. Above, a group of migrants remove their shoelaces after being caught by Border Patrol agents in April last year when traffic was heavier
The 1,885 mile stretch of water runs directly along the southeast Texas border
Thousands have died trying to make the crossing in treacherous parts of the river. Above, one group is led through it by a smuggler on March 14, 2017
On his third day in office, President Trump signed an executive order demanding the construction of a wall along the entire border. Above, a road crew is seen reinforcing the border next to the Rio Grande in Hidalgo last month
'Since the Administration’s implementation of Executive Orders to enforce immigration laws, the drop in apprehensions shows a marked change in trends,' it says.
On January 25, three days after being sworn in, President Trump signed an executive order demanding tougher regulations across the Southwest border.
Border Patrol agents have credited President Trump's tough stance on immigration with the decline
The order includes the construction of a physical wall to run the length of the border which Trump promised to deliver throughout his election campaign.
While he has not implemented new immigration laws, the order enforces rules already made. It places emphasis on removing anyone deemed ineligible, a threat which border officials say has spooked illegal migrants.
'Are you going to risk a 1,000-mile journey and pay $8,000 to be smuggled if you’re not sure you’ll get to stay? I wouldn’t,' Marlene Castro told The Los Angeles Times, describing how the once 'hot' Rio Grande crossing was now quiet.
The Rio Grande crossing is the busiest route for illegal immigrants entering the US from Mexico.
Thousands have died trying to complete the treacherous journey, many of them drowning in the river's troublesome waters.
Deaths in the river increased last year as immigrants tried to cross over new, uncharted sections of it. More than 300 died in just six months in 2015. |
In vitro growth of postembryonic hair.
The growth of postembryonic mouse hair has been studied in tissue culture using rat-tail collagen gel as a substrate. Continued development and growth of follicles from both newborn and 3- to 4-day-old mice is stimulated only when a tryptic digest of early mouse embryos is incorporated in the culture medium. |
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

// A generalized function to write data into the given file name. The 'append'
// parameter tells whether the data should be appended to the file or whether the
// file should be truncated before the data is written.
void writeFile(char *fileName, char *data, int size, int append) {
FILE *file;
char fileNameW[50];
bzero(fileNameW, sizeof(fileNameW));
strcpy(fileNameW, fileName);
if (append == 0) {
file = fopen(fileNameW,"wb");   // truncate, then write
} else {
file = fopen(fileNameW,"ab");   // append to existing contents
}
if (file == NULL) {
printf("Error opening file\n");
exit(1);
}
// fwrite returns the number of items written (a size_t, never negative),
// so compare against the requested size to detect a short write.
size_t written = fwrite(data, sizeof(unsigned char), size, file);
if (written != (size_t)size)
{
printf("Error writing file\n");
exit(1);
}
fclose(file);
}
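/* Example usage (illustrative): truncate-write, then append to the same file. */
/*
char buf[] = "hello";
writeFile("out.bin", buf, (int)(sizeof(buf) - 1), 0);
writeFile("out.bin", buf, (int)(sizeof(buf) - 1), 1);
*/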
RL-numbers: An alternative to fuzzy numbers for the representation of imprecise quantities
In this paper we define imprecise quantities on the basis of a new representation of imprecision introduced by the authors, called the RL-representation (for restriction-level representation). We call the corresponding RL-representation of imprecise quantities RL-numbers. We first define RL-natural numbers on the basis of the notion of cardinality. The usual arithmetic operations of addition, product and division are extended, and RL-integers, RL-rationals and RL-real numbers are defined so that a solution is provided to any kind of equation involving those operations, as with precise numbers. We show that the algebraic properties of precise numbers with respect to the ordinary arithmetic operators are preserved. In addition, and remarkably, we show that the imprecision of the quantities being operated on can be increased, preserved or diminished. Ranking of RL-numbers is introduced by means of the notion of RL-ranking as an extensive RL-representation defined on the set {<, =, >}. In our view, fuzzy numbers correspond to the definition of imprecise intervals corresponding to linguistic concepts like approximately x. We discuss the relationship between RL-numbers and fuzzy numbers, and how they complement each other. Specifically, we propose to use RL-numbers to represent imprecise quantities obtained by measuring properties, and fuzzy numbers (equivalently, RL-intervals) to define concepts and to provide a linguistic approximation of RL-numbers.
Location Sensitivity of Non-structural Component for Channel-type Auxiliary Building Considering Primary-secondary Structure Interaction
To ensure the safe and stable operation of nuclear power plants (NPP), many non-structural components (NSCs) are actively associated with NPP. Generally, the floor response spectrum (FRS) is used to design the NSCs. Nevertheless, it is essential to focus on the mounting position and frequency of NSCs, which are normally ignored during the conventional design of NSCs. This paper evaluates the effect of mounting location for NSCs over the same floor in a channel-type auxiliary building. Modal parameter estimation is used to capture the dynamic properties of the NPP auxiliary building from a shake table test, which leads to the calibration of the finite element model (FEM). The calibration of the FEM was conducted through response surface methodology (RSM), and the calibrated model is verified utilizing modal parameters as well as the frequency response spectrum function. Finally, the location sensitivity was investigated by time history analysis (THA) under artificially generated design-response-spectrum-compatible earthquakes and sine sweeps. The results showed that the right choice of location for NSCs can be an important measure to reduce undesirable responses during earthquakes, reducing horizontal and vertical zero period acceleration (ZPA) responses by up to 30% and 70%, respectively, in channel-type auxiliary buildings.
INTRODUCTION
Earthquake (EQ) is a natural hazard, and loads due to EQ have the greatest influence on nuclear power plant (NPP) structures. Therefore, the safety of structural and non-structural components (NSCs) in NPP against EQ is a critical concern. In particular, the safety concern for NPP structures has significantly increased since the Fukushima Daiichi nuclear accident in Japan (2011) and the Gyeongju (2016) and Pohang (2017) EQs in South Korea. The auxiliary building (AB) is one of the main parts of NPP systems. The AB is generally placed adjacent to the reactor containment structure and supports most of the auxiliary and safety-related systems and components. The configuration of the structural components and NSCs in NPP has been reported by Kwag et al., as shown in Figure 1. NSCs have proven susceptible to earthquakes throughout the last few decades; some damage to NSCs due to EQ events, captured by Jiang, is depicted in Figure 2. The AB contains many substantial NSCs, i.e., pumps, heat exchangers, feedwater tanks, the main control room, emergency diesel generators, fuel storage tanks, radioactive waste systems, chemical and volume control systems, etc. In the context of assuring safety and operating the NPP, the seismic analysis, design, assessment, and evaluation of such NSCs is a most challenging issue. Besides, the distribution of these NSCs plays a vital role in minimizing the seismic responses without any addition or structural modification. Previous studies focus mainly on the vertical distribution of NSCs. Hur et al. investigated the seismic performance of non-structural components located in various locations throughout the AB and found that the probability of acceleration of NSCs on the first floor is greater than that of NSCs on the second floor. Mondal and Jain recommend that, for the design of NSCs and their attachments, an amplification of lateral force that increases with the vertical position of the NSCs should be considered. If the NSC is located on a lower building floor and has a natural period equal to or greater than the building's second or third natural period, the responses of the NSCs are amplified. Merz and Ibanez reported that floor response spectra (FRS) may be considered only for rough estimates of NSCs, and that estimating the mounting-point response is desirable. According to Pardalopoulos and Pantazopoulou, the responses of NSCs are mainly controlled by the absolute spectral acceleration developed at the mounting point on the supporting building. However, there is no considerable investigation in previous studies of the response behavior of NSCs attached at different locations on the same floor.
This type of distribution can be very effective in reducing the responses of NSCs, especially for asymmetric buildings, which is the main motivation of this study. This study evaluates the location sensitivity of NSCs on the same floor under earthquake excitation, considering primary-secondary structure interaction. To fulfill this objective, numerical investigations were conducted using a three-dimensional finite element model (FEM) of a channel-type AB developed with the SAP2000 software. The building was designed, and the shake table test program organized, by the Korea Atomic Energy Research Institute (KAERI). Among the various modal parameter estimation (MPE) techniques, the least-squares complex exponential (LSCE) method was utilized for MPE using the shake table test results. LSCE approximates the correlation function using a sum of exponentially decaying harmonic functions. After evaluating the modal parameters, the FEM was updated based on the test results through a statistical tool, i.e., response surface methodology (RSM). Many researchers have employed RSM for FEM optimization due to its simplicity and effectiveness.
AUXILIARY BUILDING
As demonstrated in Figure 3(a), this study was conducted using a channel-type, three-storied AB provided by KAERI. The overall dimension of the main part of the test specimen is 3650mm × 2575mm × 4570mm. The thicknesses of the slabs, walls, and base assembly are 140mm, 150mm, and 400mm, respectively. The detailed dimensions of the test specimen are depicted in Figure 3(b).
1. Shake Table Test
The Earthquake Disaster Prevention Center at Pusan National University conducted this experimental program with its shaking table facility. The program was organized by KAERI for joint research on the Round Robin Analysis to evaluate the dynamic characteristics and to verify the numerical model of the AB in NPP. To capture the linear response characteristics, natural frequencies, and vibration modes, the model was initially excited by a low-intensity random vibration (peak acceleration of 0.05g) in the X and Y-directions separately.
The sensors, i.e., the accelerometers, were installed in different arrays to record the responses under excitation in the X and Y directions. Figure 4(a) and Figure 4(b) show the accelerometer locations for the X- and Y-directional responses, respectively. The shake table test was also directed for the Gyeongju earthquake with a loading sequence of 0.28g, 0.28g, 0.50g, 0.75g, and 1.00g, which was not considered in this study. The random vibration response was utilized for MPE and to validate the linear FEM of the AB. The recorded acceleration responses for the X- and Y-directions are represented in the corresponding figures. Here, the sensors denoted "Acc. base", "Acc. 6", "Acc. 4" and "Acc. 1" correspond to the base, 1st floor, 2nd floor, and 3rd floor (roof) responses for each case, and were used for MPE.
In this study, the LSCE method was used for MPE. Figure 5(a) and Figure 5(b) illustrate the stabilization diagrams for the X- and Y-direction input-output responses of the shake table test for the probable model orders and frequency ranges up to 30 and 100Hz, respectively. The dot markers specify unstable poles, the plus-shaped markers show poles stable in both frequency and damping, and the circular markers represent poles stable only in frequency. Furthermore, a solid blue line depicts the averaged response to help distinguish between physical and non-physical poles. The modal frequencies of the predominant modes, i.e., mode 1 (X-direction) and mode 2 (Y-direction), are 16.05 Hz and 23.02 Hz (Figure 6). The damping ratio for fundamental modes varies from
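To make the LSCE idea concrete, the following is a minimal numpy sketch of the underlying Prony-type fit: the correlation (or free-decay) samples are modeled as a sum of exponentially decaying harmonics, a linear-prediction system is solved in the least-squares sense, and the polynomial roots yield frequencies and damping ratios. This is an illustrative toy, not KAERI's actual test pipeline; the sampling rate and the synthetic 16 Hz / 3% mode are assumptions.

import numpy as np

def lsce_poles(h, dt, order):
    """Least-squares complex exponential (Prony-type) fit: model the sampled
    correlation/free-decay function h as a sum of exponentially decaying
    harmonics and recover natural frequencies [Hz] and damping ratios."""
    p = 2 * order                        # one conjugate pole pair per mode
    n = len(h) - p
    # Linear prediction: h[j+p] = -(b0*h[j] + ... + b_{p-1}*h[j+p-1])
    A = np.column_stack([h[i:i + n] for i in range(p)])
    b, *_ = np.linalg.lstsq(A, -h[p:p + n], rcond=None)
    # Roots of z^p + b_{p-1} z^(p-1) + ... + b0 are the discrete-time poles.
    z = np.roots(np.concatenate(([1.0], b[::-1])))
    s = np.log(z) / dt                   # continuous-time poles
    return np.abs(s) / (2 * np.pi), -np.real(s) / np.abs(s)

# Synthetic sanity check: one 16 Hz mode with 3% damping, sampled at 200 Hz.
dt, f, zeta = 1.0 / 200, 16.0, 0.03
wn = 2.0 * np.pi * f
t = np.arange(400) * dt
h = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
print(lsce_poles(h, dt, order=1))        # ~ (16.0, 16.0), (0.03, 0.03)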
2. Numerical Modeling and Updating
For the dynamic evaluation of horizontally distributed NSCs, i.e., secondary structures, on the KAERI channel-type AB, a three-dimensional linear (elastic) FEM developed using the commercially available structural analysis and design software SAP2000 is presented in this study. SAP2000 allows the nonlinear behavior of materials to be modeled using link/support elements, plastic hinges, or multilayer shell elements. During the shake table test evaluation, the building was excited under the Gyeongju earthquake (2016) with a loading sequence of 0.28g, 0.50g, 0.75g, and 1.00g. When the excitation level was up to 1.00g, there was no remarkable damage present in the structure. Also, the maximum floor acceleration, i.e., zero period acceleration (ZPA), responses in the incremental dynamic analysis (IDA), as shown in Figure 7, indicate that the building model behaves approximately linearly up to the 1g excitation level of peak table acceleration (PTA). Therefore, in this case, linear analysis was performed.
The slabs and walls were modeled as 4-noded shell elements, and the base assembly was considered as 8-noded solid elements. The maximum mesh size is assumed as 300mm. Figure 8(a) shows the full FEM with the mesh view. As the shear wall elements were assumed elastic, an effective stiffness was considered to account for inelastic behavior: based on ACI, it was applied by reducing the moment of inertia (I) of the wall to 0.70I (the wall being in the uncracked condition). The NSCs were modeled by the linear spring available in SAP2000, rigidly connected to the mounting position as depicted in Figure 8(b). Three translational degrees of freedom (Ux, Uy, and Uz) were activated at the top of the NSCs. The second floor was considered for the placement of the NSCs in this case study. The governing equation of motion for the linearly modeled structure can be expressed as Equation (1):

M\ddot{u}(t) + C\dot{u}(t) + Ku(t) = -M\iota\ddot{u}_g(t)    (1)

where \ddot{u}, \dot{u}, and u represent the acceleration, velocity, and displacement vectors of the system at any instant of time (t), \iota is the influence vector, and \ddot{u}_g denotes the ground motion excitation acceleration. The compiled mass (M), damping (C), and stiffness (K) matrices of the coupled primary-secondary system are assembled from the corresponding sub-matrices: the mass matrices of the primary and secondary structures are denoted by M_p and M_NSC, the damping matrices by C_p and C_NSC, and, finally, the stiffness matrices of the primary and secondary structures are symbolized by K_p and K_NSC, respectively.
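For readers who want to trace Equation (1) through a time-history analysis, the following is a compact sketch of the standard average-acceleration Newmark-beta scheme for a linear system. It is generic textbook integration, not a reproduction of SAP2000's solver; the floor mass, floor stiffness, damping model, and base-motion record in the toy example are illustrative assumptions.

import numpy as np

def newmark_linear(M, C, K, ag, dt, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark-beta integration of
    M*u'' + C*u' + K*u = -M*r*ag(t) for a linear system (r = influence vector)."""
    ndof = M.shape[0]
    r = np.ones(ndof)
    u = np.zeros(ndof)
    v = np.zeros(ndof)
    a = np.linalg.solve(M, -M @ r * ag[0] - C @ v - K @ u)
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    hist = [u.copy()]
    for g in ag[1:]:
        p = (-M @ r * g
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, p)
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1 - gamma / beta) * v + dt * (1 - gamma / (2 * beta)) * a)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        u, v, a = u_new, v_new, a_new
        hist.append(u.copy())
    return np.array(hist)                 # displacement history per DOF

# Toy 2-DOF primary-secondary chain (floor + 15 Hz NSC) under a 2 Hz base sine.
m = np.diag([5000.0, 200.0])              # masses [kg] (floor value assumed)
k1 = 5.0e7                                # floor stiffness [N/m] (assumed)
k2 = 200.0 * (2 * np.pi * 15.0)**2        # NSC spring from k = m*(2*pi*f)^2
K2 = np.array([[k1 + k2, -k2], [-k2, k2]])
C2 = 1e-4 * K2                            # crude stiffness-proportional damping
ag = 0.22 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0.0, 5.0, 0.005))
disp = newmark_linear(m, C2, K2, ag, dt=0.005)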
For the case study, the height and mass of the NSCs are assumed as 1m and 200kg. The global damping matrix (C) of the coupled system was constructed by assuming the same damping ratio (3.4%) for the primary and secondary structures. The stiffness of the NSCs was calculated from the assumed mass and target natural frequency as k_NSC = m_NSC (2π f_NSC)^2. The frequency range of the NSCs was assumed as 5 to 50Hz, and the evaluation was directed with a frequency increment of 5Hz.
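As a quick numeric companion to the SDOF relation above, this sketch evaluates k_NSC = m_NSC(2πf)² over the 5 to 50 Hz grid used in the study; only the mass and the frequency grid come from the text, the rest is illustrative.

import math

M_NSC = 200.0                 # assumed NSC mass [kg], as in the case study
FREQS_HZ = range(5, 55, 5)    # NSC frequency grid: 5 to 50 Hz in 5 Hz steps

def nsc_stiffness(mass_kg, freq_hz):
    """SDOF spring stiffness k = m * (2*pi*f)^2 for a target natural frequency."""
    omega = 2.0 * math.pi * freq_hz   # circular frequency [rad/s]
    return mass_kg * omega ** 2       # [N/m]

for f in FREQS_HZ:
    print(f"f = {f:2d} Hz -> k = {nsc_stiffness(M_NSC, f):.3e} N/m")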
Before going to the evaluation stage, the FEM was calibrated using RSM, based on updating the concrete material properties. RSM is a collection of statistical techniques that may be used to model, analyze, optimize, and construct an empirical model. It appears highly promising in terms of reducing the time and cost of model design and analysis.
Based on statistical and mathematical analysis, RSM investigates the approximate relationship between the input design variables and the outputs in the form of a linear or polynomial equation. According to Rastbood et al., a higher-order polynomial must be used if the system has curvature, and in most cases the second order is adequate to handle engineering problems. Therefore, a second-order polynomial equation is considered for the RSM, as shown in Equation (3), to get the response y:

y = \beta_0 + \sum_{i=1}^{n} \beta_i x_i + \sum_{i=1}^{n} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon    (3)
where the intercept, linear, quadratic, and interaction coefficients are represented by \beta_0, \beta_i, \beta_{ii}, and \beta_{ij}, respectively; n denotes the number of input variables; and \varepsilon is the offset or residual related to the experiments.
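As a sketch of how the full second-order model of Equation (3) can be fitted to the CCD design points by ordinary least squares; the paper itself uses Minitab, so the numpy layout and column ordering below are purely illustrative:

import numpy as np

def quadratic_design_matrix(X):
    """Design matrix of the full second-order model in 3 factors (E, rho, nu):
    intercept, linear, pure-quadratic, and two-factor interaction columns."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),         # beta_0
        x1, x2, x3,              # beta_1..beta_3   (linear)
        x1**2, x2**2, x3**2,     # beta_11..beta_33 (quadratic)
        x1*x2, x1*x3, x2*x3,     # beta_12, beta_13, beta_23 (interactions)
    ])

def fit_response_surface(X, y):
    """Least-squares estimate of the 10 coefficients of Equation (3)/(4).
    X: (20, 3) CCD design points; y: (20,) observed response (e.g. F1 in Hz)."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta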
The central composite design (CCD) was used to estimate the number of RSM experiments for optimization of the multi-objective input variables. The total number of sample runs required for a complete circumscribed CCD is computed by N = 2^k + 2k + n_c, where k is the number of factors, i.e., input variables, and 2^k, 2k, and n_c represent the number of cubic (factorial), axial, and center points, respectively. With k = 3 factors and n_c = 6 center points, N = 8 + 6 + 6 = 20, matching the design used here. Each factor is studied at 5 levels. The lower and upper limits of the factors were chosen based on normal concrete material properties: the range for Poisson's ratio (ν) was 0.15 to 0.25, the range for density (ρ) was 2200 to 2600 kg/m³, and Young's modulus (E) was assumed to be 10 to 25 GPa. The coded and actual values of the cubic, axial, and central points of the 3 factors are presented in Table 1.
CCD created a total of 20 design points for E, ρ, and ν. Each set of design points and the corresponding responses from the FEM are listed in Table 2. The polynomial relationships between the input variables (E, ρ, and ν) and the responses (F1 and F2) following Equation (3) can be presented by Equation (4). The coefficients of Equation (4), obtained with RSM through the Minitab tool, are shown in Table 3.
F_1, F_2 = \beta_0 + \beta_1 E + \beta_2 \rho + \beta_3 \nu + \beta_{11} E^2 + \beta_{22} \rho^2 + \beta_{33} \nu^2 + \beta_{12} E\rho + \beta_{13} E\nu + \beta_{23} \rho\nu    (4)

The analysis of variance (ANOVA), established by Ronald Fisher in 1918, is an effective method for assessing model fitness. To clarify the model fitness to the data, the probability values (P-values) are compared to their significance level: model terms with P-values less than 0.05 are considered significant. Table 4 indicates that for both responses (F1 and F2), two linear terms, one quadratic term, and one interaction term (E, ρ, E², and E·ρ) are significant. The model F-values of 1409.60 and 1409.09 for F1 and F2, respectively, imply that the models are significant. The goodness of fit, i.e., R², is 99.92%, and the Predicted R² of 99.40% is in reasonable agreement with the Adjusted R² of 99.85% for both models (Table 5). Therefore, the models represented by Equation (4) can be used for F1 and F2 prediction.
To make it easier to grasp, the surface plot function was used to display a three-dimensional perspective of the response as the parameters were changed. Figures 10 and 11 show the response plots (surface and contour) obtained from Equation (4) for the output variables F1 and F2, respectively. They show that the changing patterns of the responses F1 and F2 with respect to the factors E and ρ are approximately similar.
To get the optimized values of E, ρ, and ν, the target values for F1 and F2 were set to 16.05 and 23.02 Hz. The optimized values of E, ρ, and ν were 15.75 GPa, 2400 kg/m³, and 0.20, respectively (Figure 13). Figure 13 demonstrates that the values of F1 and F2 are matched
about 94 and 98%, respectively, with the target values, and the composite desirability is matched at about 96%. Figure 12(a) depicts the comparison between the actual responses of F1 and F2 from the FEM and the predicted responses using RSM (Equation (4)). The results from both models lie near the diagonal (dotted line), showing a good correlation between the predicted and actual values. Figure 12(b) shows that the maximum error between the fitted values from RSM and the FEM simulation is 2.75%, which also supports the use of the predicted model for further study.
3. Model Validation
The FEM was validated through the modal parameters and the response function under random seismic excitations. MPE is the first stage in detecting structural deterioration and performing structural health monitoring (SHM) or assessing dynamic characteristics. The natural frequencies of the AB were obtained through modal analysis, and the results were compared with the shake table test results to validate the studied FEM. The most fundamental frequencies (mode 1 and mode 2) are listed in Table 6 along with the error compared with the test results. The mode shapes (first 6 modes) and their natural frequencies, along with the modal participation mass ratio (MPMR) from the FEM, are described in Figure . Table 6 shows that the maximum error is 2.4%, which indicates good agreement between the FEM results of this study and the shake table test. Based on the LSCE method, the magnitudes of the averaged response functions were plotted against frequency as shown in Figure 15, which also indicates similar dynamic behavior between the actual model and the FEM. Therefore, the presented model was used for the NSC location sensitivity evaluation. Figure 15. Average response function from shake table and FEM results (a) X-direction and (b) Y-direction
1. Input Ground Motions (GMs)
To evaluate the response behavior of NSCs, two types of input motion were used: 1) artificially generated GMs (AGMs) matching the reference design response spectrum (DRS), and 2) a sine sweep with an exciting frequency range of 5 to 50 Hz. The artificial ground motion was generated as a response-spectrum-compatible accelerogram for the design of NPP, i.e., Regulatory Guide 1.60 (RG 1.60). The GMs were applied in three directions, i.e., horizontal 1, H1 (X-direction); horizontal 2, H2 (Y-direction); and vertical, V (Z-direction). The peak acceleration for the horizontal component was considered based on a 2400-year return period for seismic zone I (Korean peninsula), i.e., 0.22g. The vertical component of GM was defined by scaling the horizontal component by a factor of 2/3, i.e., 0.22g × 2/3 ≈ 0.147g. The generation was done using the Matlab-based computer tool "Quake_M" developed by Kim and Quake, as represented in Figure 16 and Figure 17(a). The root mean square errors of the AGMs are 1.004%, 1.187%, and 0.729% for the H1, H2, and V directions, which indicates the AGMs are well matched with the target spectrum (RG 1.60). The sine sweep was used to confirm the response behavior for all excitation modes (target frequency range) of the NSCs. The amplitude of the sine sweep was the same as the AGMs. Only the first 3s of the sine sweep is presented in Figure 17(b) for clear visualization; the actual sweep was 30s with a frequency range of 5 to 50 Hz.
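For illustration, a linear sine sweep with the stated parameters (5 to 50 Hz over 30 s, amplitude matching the 0.22g AGM peak) can be generated as below; the sampling rate is an assumption, and this is not the study's actual signal generator.

import numpy as np

def linear_sine_sweep(f0=5.0, f1=50.0, duration=30.0, fs=200.0, amp=0.22):
    """Linear chirp from f0 to f1 [Hz] over 'duration' seconds.
    Amplitude (in g) matches the AGM peak; fs is an assumed sampling rate."""
    t = np.arange(0.0, duration, 1.0 / fs)
    phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * duration))
    return t, amp * np.sin(phase)

t, sweep = linear_sine_sweep()   # 5-50 Hz over 30 s, 0.22g amplitude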
2. Location Sensitivity
To evaluate the location sensitivity of NSCs, a total of 6 probable locations, as shown in Figure 8(b), were considered in this study: 1) L1 represents the response of the outside or exposure corners; 2) L2 denotes the middle of the sidewall; 3) L3 indicates the responses for the inside corners; 4) L4, the middle of the exposure side of the building; 5) L5, the middle of the floor; and 6) L6, which replicates the responses of the middle of the back wall of the AB. The study was conducted assuming the NSCs are distributed only on the second floor.
Zero period acceleration (ZPA), i.e., peak acceleration responses, are compared for each direction and each loading. Figure 18 replicates the acceleration responses in the X-direction, whereas Figure 19 shows the corresponding ZPA of NSCs placed in each credible location, in which the responses for L1 and L4, L2 and L5, and L3 and L6 follow similar paths under both AGM and sine sweep excitation. In the case of AGM excitation, the NSCs with frequencies around 15Hz were more vulnerable (in the X-direction) under both excitations for locations L1 and L4. Additionally, the results confirm that the NSCs with higher frequencies, i.e., around 45Hz, were more sensitive at locations L3 and L6 than the others under sine sweep excitation.
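Since ZPA is simply the peak absolute acceleration of a response history, the comparison across mounting locations reduces to a few lines; the helper names below are illustrative, not part of the study's tooling.

import numpy as np

def zpa(acc_history):
    """Zero period acceleration = peak absolute acceleration of a response history."""
    return float(np.max(np.abs(acc_history)))

def reduction_percent(worst, best):
    """Relative ZPA reduction achieved by moving from the worst to the best location."""
    return 100.0 * (worst - best) / worst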
In the Y-directional response, as shown in Figure 20, the AGM excitation indicates that if the NSC frequency is above the 1st modal frequency of the AB, locations L1, L2 and L3 are more sensitive than the others, whereas the sine sweep excitation reveals that all locations pursue approximately a similar track and the sensitive frequency range is widespread (roughly 15Hz to 35Hz) (Figure 21). Figure 22 explores the time history responses for all considered locations in the Z-direction. Electrical cabinets are essential NSCs for the proper functionality of NPP. Like other NSCs, the cabinet is acceleration sensitive, so it can be susceptible to high-frequency input motions. Here, as a case study, a single electrical cabinet was used to check the location sensitivity of the response under AGMs. The properties of the cabinet, i.e., stiffness (2897kN/m) and mass (287kg), were obtained from Salman et al. The cabinet was modeled for both directions, i.e., the X and Y-directions, using the same mass and stiffness values (the Z-direction was considered fully stiff). Figure 24(a) shows the cabinet response spectrum under AGMs and reflects that locations L1 and L4 give 61.8% more peak spectral acceleration than L3 and L6 in the X-direction. Similarly, in the Y-directional responses, L4, L5, and L6 were more sensitive (21.5%) than the other locations (Figure 24(b)). So the distribution of the cabinet or other NSCs over the same floor is very important to obtain the proper in-cabinet response spectrum for selecting the engineering demand parameters (EDP). From Figure 25, it can be concluded that, considering the ZPA as the EDP, location L3 is the best choice for the electrical cabinet, providing safety for the devices in the cabinet by lowering the ZPA responses (around 42% in the X and 15% in the Y-direction).
CONCLUSIONS
The effects of the distribution of NSCs over the same floor in an AB under seismic excitations have been the focus of this study. Most NSCs in NPP are acceleration sensitive, and the floor acceleration can differ across the different mounting positions of NSCs. The flexibility of the floor and the combination of predominant modes with translational and torsional effects (especially in channel-type buildings) can lead to diverse responses at different locations. Therefore, the location sensitivity needs to be assessed before placing the NSCs in NPP to reduce the responses. The KAERI channel-type AB acted here as the reference for developing the FEM to capture the goal through numerical evaluation. The FEM was calibrated using RSM, and the calibrated model was used for seismic analysis under AGM and sine sweep excitation for NSCs with a frequency range of 5 to 50Hz, rigidly mounted at six different locations. Finally, the sensitivity of the response of NSCs was evaluated for the different locations. The key findings and conclusions from the results can be summarized as follows: • In the X-direction, the exposure side corners (L1) and mid positions (L4) are more vulnerable, especially if the frequency of the NSCs (around 15 Hz) is near the first mode of the AB. The inside corners (L3) and the middle of the back wall (L6) show lower responses for AGMs, whereas the sine sweep confirms that above 30 Hz (NSC frequency) L3 and L6 increase the responses remarkably (especially around 45 Hz).
• In the Y-direction, if the NSC frequency is less than 15Hz, the exposure corners (L1), the middle of the sidewall (L2), and the inside corners (L3) are more sensitive. If the frequency is more than 20Hz, the response behavior changes, and in this case the middle of the exposure side (L4), the middle of the floor (L5), and the middle of the back wall (L6) show more sensitivity. However, the sine sweep excitation reveals that all locations pursue approximately a similar track and the sensitive frequency range is widespread (around the 2nd and 3rd modes of the AB).
• In the Z-direction, the riskier zone is the middle of the exposure side (L4), and NSCs with a frequency around 25Hz in this zone are more hazardous than others.
• The right selection of NSC locations can reduce the horizontal (X or Y-direction) ZPA responses by up to 30% and the vertical ZPA responses by up to 70%, which can lead to the economic design of NSCs: no additional measures are needed, just the right choice of mounting positions based on their vibration frequency.
• In the case of the cabinet, the inside corners (L3) can be a good choice for placement, and the middle of the exposure side (L4) would be the worst choice. Placing the cabinet at L3 can reduce the maximum cabinet response spectrum by around 62% in the X-direction and 22% in the Y-direction, as measured under AGM excitation.
def fromAskQueries(self, wikiId: str, askQueries: list, withFetch: bool = True):
    """
    Add the given ask queries for the given wiki and optionally fetch their results.

    Each entry of askQueries must provide "name" and "ask"; "title" and
    "description" are optional.
    """
    for askQuery in askQueries:
        name = askQuery["name"]
        ask = askQuery["ask"]
        title = askQuery.get("title")
        description = askQuery.get("description")
        self.addAskQuery(wikiId, name, ask, title, description)
    if withFetch:
        self.fetchQueryResults()
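A hedged usage sketch of the method above; the manager object, wiki id, and query strings are illustrative assumptions, and only the dict keys ("name", "ask", optional "title"/"description") come from the method itself.

def load_example_queries(manager):
    # 'manager' is any object exposing fromAskQueries as defined above (assumed).
    askQueries = [
        {
            "name": "Events",
            "ask": "[[Concept:Event]]|?Has date",   # illustrative SMW ask query
            "title": "Upcoming events",             # optional
            "description": "All known events",      # optional
        },
        {"name": "Pages", "ask": "[[Modification date::+]]"},
    ]
    manager.fromAskQueries("smwwiki", askQueries, withFetch=False)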
# Round each of the five values up to the next multiple of 10, but allow one
# value (the one with the largest round-up surcharge) to stay exact; print the
# resulting minimum sum.
prices = [int(input()) for _ in range(5)]

total = 0
surcharges = []
for p in prices:
    if p % 10 != 0:
        surcharges.append(10 - p % 10)  # cost added by rounding p up
        p += 10 - p % 10
    total += p

if surcharges:
    print(total - max(surcharges))
else:
    print(total)
|
An Open Letter to New Yoga Teachers
Yesterday was the last day of a yoga teacher training that began in September. Inspired by the group’s great courage and heart, I’ve written them an open letter that I hope will be of help to them and to anyone starting out on the path of teaching:
Dear Friends,
Let me begin by saying that being in your presence all these months has been a privilege beyond my highest expectations. When I reflect on where we—students and teachers—started in our travels together and where we are in this moment, I am both inspired and humbled. Your commitment to diving into principles and practices that took you deep into unexplored—sometimes challenging and even frightening—places shows a courage that is rare. The way you came together, from a handful of individuals each walking your own separate paths, to a cohesive organism bound together by truth and love is something I will never forget. I will remember you when I need to be reminded of the human capacities for kindness, compassion, connectedness and authenticity.
I know that some, if not most of you harbor more than a little trepidation about presenting the teachings. I get it. After 32 years of practice and 27 years of teaching I still sometimes ask, “What am I doing? Who am I to teach yoga?”
Yoga’s depth is endless. There will always be more to learn. We will probably come to know only a small fraction of it. But that is what’s exciting about this path. The opportunities for transformation are endless. You know what you know in this moment, and it is enough. What you know now is, in fact, perfect. In 10 years you will know and prioritize different things in your teaching and in your life and that will be perfect too.
Also in 10 years, you will likely look back at what you are teaching now and shudder at some of it. This will probably happen throughout your teaching life. Principles about which you were once emphatic will fall apart on deeper inspection. Sometimes you will stumble upon this in your own practice, and sometimes your students will show you. While I sincerely hope that what we presented in our training will serve you for years to come, I also hope that you will challenge it and if you discover a different truth, that you follow what’s authentic for you. If that happens please tell me so that I can learn from you!
This path of transformation is like any other long-term relationship. Early on, there’s a honeymoon. Yoga is the be-all-end-all. Like any other relationship, if you stay with it long enough, there will probably be times when it seems flat, when the magic disappears. There may be times when it pushes your buttons—big time. I sincerely hope this for you, because this means that it’s becoming a part of you, rewiring you from the inside. At that point, maybe you’ll double down, go deeper, and find a whole new way of relating to your practice. Or you and yoga may drift apart—maybe for years. Or you may be led to something else that becomes your truth. You may return to yoga or you may not. Either is okay. As Gandhi said, “My commitment is to truth, not to consistency.” Whether you practice for a few years or for the rest of your life, the time you’ve spent committed to yoga has not been wasted. It has transformed you in some way. Whether that transformation leads to deeper yoga practice, or leads you away from yoga practice and toward something else doesn’t matter. It has, in some way, lit your life’s path. Be grateful.
Most likely you will say things while you’re teaching that will surprise you. Sometimes you may regret what you say the moment it leaves your mouth and it will haunt you for months—or years—after. Remember that this is a teaching for you. Right speech is a practice that you can continue to refine over the span of your teaching practice. Practicing right speech will transform your life—another opportunity. Other times you’ll hear yourself say something inspiring or wise that you had no idea was in you. When that happens, you may recognize that as teachers committed to yoga, we are really just vessels that have made ourselves available to the wisdom of yoga. Sometimes that wisdom coalesces in words and concepts to which we unexpectedly give voice. Celebrate those times!
There’s so much more I could say, but as I reflect on what I’ve written so far, I realize that I may already have said too much. None of these things may happen for you, so please don’t layer reflections of my path onto yours. Our teaching paths and life paths are ours alone to travel and tend. Whether that path in a given moment is straight, curvy, uphill, downhill, short, long, rocky, smooth, thorny or strewn with rose petals, I hope we will all remember to take refuge in each others’ courageous, open hearts. That is where the yoga is to be found.
May we be safe, happy, healthy and free for the benefit of all beings. |
package models
// This file is auto-generated.
// Please contact <EMAIL> for any change requests.
// AviCloudStatusDetails avi cloud status details
// swagger:model AviCloudStatusDetails
type AviCloudStatusDetails struct {
// Connection status of the controller cluster to Avi Cloud. Enum options - AVICLOUD_CONNECTIVITY_UNKNOWN, AVICLOUD_DISCONNECTED, AVICLOUD_CONNECTED. Field introduced in 18.2.6.
Connectivity *string `json:"connectivity,omitempty"`
// Status change reason. Field introduced in 18.2.6.
Reason *string `json:"reason,omitempty"`
// Registration status of the controller cluster to Avi Cloud. Enum options - AVICLOUD_REGISTRATION_UNKNOWN, AVICLOUD_REGISTERED, AVICLOUD_DEREGISTERED. Field introduced in 18.2.6.
Registration *string `json:"registration,omitempty"`
}
|
#pragma once
#include "Room.h"
namespace sf
{
class Sprite;
}
class TreasureRoom : public Room
{
private:
std::shared_ptr<sf::Sprite> m_pPodiumSprite;
public:
TreasureRoom(Map *parent, const sf::Vector2i &pos);
~TreasureRoom();
void RenderGame(sf::RenderWindow *window) override;
};
|
/**
 * Handles all clash-related messages. In particular, these events are handled:
 * - CLASH_REQUEST: Another client has sent a clash request to the current client.
 * - CLASH_ACCEPTED: Another client has accepted a clash request sent by the current client.
 * - CLASH_REJECTED: Another client has rejected a clash request sent by the current client.
 * - CLASH_WON: Current client has won a clash.
 * - CLASH_LOST: Current client has lost a clash.
 * - UGLY_VISIT: Current client has received an "ugly" visit.
 *
 * @return A new message with the request result.
 * Possible results for CLASH_REQUEST are:
 * - CLASH_ACCEPTED: The client has accepted the clash request.
 * - CLASH_REJECTED: The client has rejected the clash request.
 * Possible results for CLASH_ACCEPTED are:
 * - START_CLASH: Signal for clash starting.
 * For all other handled message types a null message is returned; an
 * unrecognized message type raises a HandlerException.
*/
@Override
protected Message handleClash() throws HandlerException {
/*
* Calls inserted here for preventing other players to enter this node are not necessary, since this scenery
* is not shared among other players. However calls have been added to prevent state errors in the current
* instance of the scenery (some methods inside Place class do check if there are clashes running).
*/
switch (MessageManager.convertXML("header", message.getMessageContent())) {
case "CLASH_REQUEST":
return selectClashRequest();
case "CLASH_ACCEPTED":
Client.getPosition().getClashManager().signalClashStart();
return manageAcceptedClash();
case "CLASH_REJECTED":
manageRejectedClash();
break;
case "CLASH_WON":
manageWonClash();
Client.getPosition().getClashManager().signalClashEnding();
break;
case "CLASH_LOST":
manageLostClash();
Client.getPosition().getClashManager().signalClashEnding();
break;
case "UGLY_VISIT":
manageUglyVisit();
break;
default:
throw new HandlerException("Invalid message type encountered");
}
return null;
} |
package generate
import (
"testing"
"github.com/containers/podman/v4/pkg/domain/entities"
"github.com/stretchr/testify/assert"
)
func TestSetPodExitPolicy(t *testing.T) {
tests := []struct {
input, expected []string
}{
{
[]string{"podman", "pod", "create"},
[]string{"podman", "pod", "create", "--exit-policy=stop"},
},
{
[]string{"podman", "pod", "create", "--exit-policy=continue"},
[]string{"podman", "pod", "create", "--exit-policy=continue"},
},
{
[]string{"podman", "pod", "create", "--exit-policy", "continue"},
[]string{"podman", "pod", "create", "--exit-policy", "continue"},
},
}
for _, test := range tests {
assert.Equalf(t, test.expected, setPodExitPolicy(test.input), "%v", test.input)
}
}
func TestValidateRestartPolicyPod(t *testing.T) {
type podInfo struct {
restart string
}
tests := []struct {
name string
podInfo podInfo
wantErr bool
}{
{"good-on", podInfo{restart: "no"}, false},
{"good-on-success", podInfo{restart: "on-success"}, false},
{"good-on-failure", podInfo{restart: "on-failure"}, false},
{"good-on-abnormal", podInfo{restart: "on-abnormal"}, false},
{"good-on-watchdog", podInfo{restart: "on-watchdog"}, false},
{"good-on-abort", podInfo{restart: "on-abort"}, false},
{"good-always", podInfo{restart: "always"}, false},
{"fail", podInfo{restart: "foobar"}, true},
{"failblank", podInfo{restart: ""}, true},
}
for _, tt := range tests {
test := tt
t.Run(tt.name, func(t *testing.T) {
if err := validateRestartPolicy(test.podInfo.restart); (err != nil) != test.wantErr {
t.Errorf("ValidateRestartPolicy() error = %v, wantErr %v", err, test.wantErr)
}
})
}
}
func TestCreatePodSystemdUnit(t *testing.T) {
serviceInfo := `# pod-123abc.service
`
headerInfo := `# autogenerated by Podman CI
`
podContent := `
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGood := serviceInfo + headerInfo + podContent
podGoodNoHeaderInfo := serviceInfo + podContent
podGoodWithEmptyPrefix := `# 123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman 123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodCustomWants := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
# User-defined dependencies
Wants=a.service b.service c.target
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodCustomAfter := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
# User-defined dependencies
After=a.service b.service c.target
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodCustomRequires := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
# User-defined dependencies
Requires=a.service b.service c.target
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodCustomDependencies := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
# User-defined dependencies
Wants=a.service b.service c.target
After=a.service b.service c.target
Requires=a.service b.service c.target
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodRestartSec := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
RestartSec=15
TimeoutStopSec=102
ExecStart=/usr/bin/podman start jadda-jadda-infra
ExecStop=/usr/bin/podman stop -t 42 jadda-jadda-infra
ExecStopPost=/usr/bin/podman stop -t 42 jadda-jadda-infra
PIDFile=/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodNamedNew := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/pod-123abc.pid %t/pod-123abc.pod-id
ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile %t/pod-123abc.pid --pod-id-file %t/pod-123abc.pod-id --name foo "bar=arg with space" --replace --exit-policy=stop
ExecStart=/usr/bin/podman pod start --pod-id-file %t/pod-123abc.pod-id
ExecStop=/usr/bin/podman pod stop --ignore --pod-id-file %t/pod-123abc.pod-id -t 10
ExecStopPost=/usr/bin/podman pod rm --ignore -f --pod-id-file %t/pod-123abc.pod-id
PIDFile=%t/pod-123abc.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodNamedNewWithRootArgs := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/pod-123abc.pid %t/pod-123abc.pod-id
ExecStartPre=/usr/bin/podman --events-backend none --runroot /root pod create --infra-conmon-pidfile %t/pod-123abc.pid --pod-id-file %t/pod-123abc.pod-id --name foo "bar=arg with space" --replace --exit-policy=stop
ExecStart=/usr/bin/podman --events-backend none --runroot /root pod start --pod-id-file %t/pod-123abc.pod-id
ExecStop=/usr/bin/podman --events-backend none --runroot /root pod stop --ignore --pod-id-file %t/pod-123abc.pod-id -t 10
ExecStopPost=/usr/bin/podman --events-backend none --runroot /root pod rm --ignore -f --pod-id-file %t/pod-123abc.pod-id
PIDFile=%t/pod-123abc.pid
Type=forking
[Install]
WantedBy=default.target
`
podGoodNamedNewWithReplaceFalse := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/pod-123abc.pid %t/pod-123abc.pod-id
ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile %t/pod-123abc.pid --pod-id-file %t/pod-123abc.pod-id --name foo --replace --exit-policy=stop
ExecStart=/usr/bin/podman pod start --pod-id-file %t/pod-123abc.pod-id
ExecStop=/usr/bin/podman pod stop --ignore --pod-id-file %t/pod-123abc.pod-id -t 10
ExecStopPost=/usr/bin/podman pod rm --ignore -f --pod-id-file %t/pod-123abc.pod-id
PIDFile=%t/pod-123abc.pid
Type=forking
[Install]
WantedBy=default.target
`
podNewLabelWithCurlyBraces := `# pod-123abc.service
# autogenerated by Podman CI
[Unit]
Description=Podman pod-123abc.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/var/run/containers/storage
Requires=container-1.service container-2.service
Before=container-1.service container-2.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/pod-123abc.pid %t/pod-123abc.pod-id
ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile %t/pod-123abc.pid --pod-id-file %t/pod-123abc.pod-id --name foo --label key={{someval}} --exit-policy=continue --replace
ExecStart=/usr/bin/podman pod start --pod-id-file %t/pod-123abc.pod-id
ExecStop=/usr/bin/podman pod stop --ignore --pod-id-file %t/pod-123abc.pod-id -t 10
ExecStopPost=/usr/bin/podman pod rm --ignore -f --pod-id-file %t/pod-123abc.pod-id
PIDFile=%t/pod-123abc.pid
Type=forking
[Install]
WantedBy=default.target
`
tests := []struct {
name string
info podInfo
want string
new bool
noHeader bool
wantErr bool
}{
{"pod",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGood,
false,
false,
false,
},
{"pod",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
Wants: []string{"a.service", "b.service", "c.target"},
CreateCommand: []string{
"podman", "pod", "create", "--name", "foo", "--wants", "a.service",
"--wants", "b.service", "--wants", "c.target", "bar=arg with space"},
},
podGoodCustomWants,
false,
false,
false,
},
{"pod",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
After: []string{"a.service", "b.service", "c.target"},
CreateCommand: []string{
"podman", "pod", "create", "--name", "foo", "--after", "a.service",
"--after", "b.service", "--after", "c.target", "bar=arg with space"},
},
podGoodCustomAfter,
false,
false,
false,
},
{"pod",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
Requires: []string{"a.service", "b.service", "c.target"},
CreateCommand: []string{
"podman", "pod", "create", "--name", "foo", "--requires", "a.service",
"--requires", "b.service", "--requires", "c.target", "bar=arg with space"},
},
podGoodCustomRequires,
false,
false,
false,
},
{"pod",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
Wants: []string{"a.service", "b.service", "c.target"},
After: []string{"a.service", "b.service", "c.target"},
Requires: []string{"a.service", "b.service", "c.target"},
CreateCommand: []string{
"podman", "pod", "create", "--name", "foo", "--wants", "a.service",
"--wants", "b.service", "--wants", "c.target", "--after", "a.service",
"--after", "b.service", "--after", "c.target", "--requires", "a.service",
"--requires", "b.service", "--requires", "c.target", "bar=arg with space"},
},
podGoodCustomDependencies,
false,
false,
false,
},
{"pod restartSec",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "bar=arg with space"},
RestartSec: 15,
},
podGoodRestartSec,
false,
false,
false,
},
{"pod noHeader",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGoodNoHeaderInfo,
false,
true,
false,
},
{"pod with root args",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "--events-backend", "none", "--runroot", "/root", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGood,
false,
false,
false,
},
{"pod --new",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 10,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGoodNamedNew,
true,
false,
false,
},
{"pod --new with root args",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 10,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "--events-backend", "none", "--runroot", "/root", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGoodNamedNewWithRootArgs,
true,
false,
false,
},
{"pod --new with --replace=false",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 10,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "--replace=false"},
},
podGoodNamedNewWithReplaceFalse,
true,
false,
false,
},
{"pod --new with double curly braces",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 10,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "--label", "key={{someval}}", "--exit-policy=continue"},
},
podNewLabelWithCurlyBraces,
true,
false,
false,
},
{"pod --new with ID files",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "pod-123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 10,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--infra-conmon-pidfile", "/tmp/pod-123abc.pid", "--pod-id-file", "/tmp/pod-123abc.pod-id", "--name", "foo", "bar=arg with space"},
},
podGoodNamedNew,
true,
false,
false,
},
{"pod with empty pod-prefix",
podInfo{
Executable: "/usr/bin/podman",
ServiceName: "123abc",
InfraNameOrID: "jadda-jadda-infra",
PIDFile: "/run/containers/storage/overlay-containers/639c53578af4d84b8800b4635fa4e680ee80fd67e0e6a2d4eea48d1e3230f401/userdata/conmon.pid",
StopTimeout: 42,
PodmanVersion: "CI",
GraphRoot: "/var/lib/containers/storage",
RunRoot: "/var/run/containers/storage",
RequiredServices: []string{"container-1", "container-2"},
CreateCommand: []string{"podman", "pod", "create", "--name", "foo", "bar=arg with space"},
},
podGoodWithEmptyPrefix,
false,
false,
false,
},
}
for _, tt := range tests {
test := tt
t.Run(tt.name, func(t *testing.T) {
opts := entities.GenerateSystemdOptions{
New: test.new,
NoHeader: test.noHeader,
}
got, err := executePodTemplate(&test.info, opts)
if (err != nil) != test.wantErr {
t.Errorf("CreatePodSystemdUnit() error = \n%v, wantErr \n%v", err, test.wantErr)
return
}
assert.Equal(t, test.want, got)
})
}
}
|
// const localeDirectory = import.meta.globEager('./compiled-lang/*.json')
import en from './compiled-lang/en.json'
import sk from './compiled-lang/sk.json'
const localeDirectory = {
'./compiled-lang/en.json': {
default: en
},
'./compiled-lang/sk.json': {
default: sk
}
}
const gatherLocales = Object.keys(localeDirectory).reduce(
(gatherLocales, file) => {
const locale = file.substring(
file.lastIndexOf('/') + 1,
file.indexOf('.json')
)
return { ...gatherLocales, [locale.toUpperCase()]: locale }
},
{}
)
export const locales = {
...gatherLocales,
}
export const messages = (locale) => {
  // Match e.g. "en-US" to "./compiled-lang/en.json" via its base language code.
  const path = Object.keys(localeDirectory).find((element) =>
    element.endsWith(locale.split('-')[0] + '.json')
  )
  // Optional chaining guards against locales with no compiled catalog.
  return localeDirectory[path]?.default ?? ''
}
|
#include <iomanip>
#include <ostream>
#include "EventFilter/Phase2TrackerRawToDigi/interface/Phase2TrackerFEDDAQHeader.h"
namespace Phase2Tracker {
std::ostream& operator<<(std::ostream& os, const FEDDAQEventType& value) {
switch (value) {
case DAQ_EVENT_TYPE_PHYSICS:
os << "Physics trigger";
break;
case DAQ_EVENT_TYPE_CALIBRATION:
os << "Calibration trigger";
break;
case DAQ_EVENT_TYPE_TEST:
os << "Test trigger";
break;
case DAQ_EVENT_TYPE_TECHNICAL:
os << "Technical trigger";
break;
case DAQ_EVENT_TYPE_SIMULATED:
os << "Simulated event";
break;
case DAQ_EVENT_TYPE_TRACED:
os << "Traced event";
break;
case DAQ_EVENT_TYPE_ERROR:
os << "Error";
break;
case DAQ_EVENT_TYPE_INVALID:
os << "Unknown";
break;
default:
os << "Unrecognized";
os << " (";
printHexValue(value, os);
os << ")";
break;
}
return os;
}
FEDDAQEventType FEDDAQHeader::eventType() const {
switch (eventTypeNibble()) {
case DAQ_EVENT_TYPE_PHYSICS:
case DAQ_EVENT_TYPE_CALIBRATION:
case DAQ_EVENT_TYPE_TEST:
case DAQ_EVENT_TYPE_TECHNICAL:
case DAQ_EVENT_TYPE_SIMULATED:
case DAQ_EVENT_TYPE_TRACED:
case DAQ_EVENT_TYPE_ERROR:
return FEDDAQEventType(eventTypeNibble());
default:
return DAQ_EVENT_TYPE_INVALID;
}
}
void FEDDAQHeader::setEventType(const FEDDAQEventType evtType) { header_[7] = ((header_[7] & 0xF0) | evtType); }
void FEDDAQHeader::setL1ID(const uint32_t l1ID) {
header_[4] = (l1ID & 0x000000FF);
header_[5] = ((l1ID & 0x0000FF00) >> 8);
header_[6] = ((l1ID & 0x00FF0000) >> 16);
}
void FEDDAQHeader::setBXID(const uint16_t bxID) {
header_[3] = ((bxID & 0x0FF0) >> 4);
header_[2] = ((header_[2] & 0x0F) | ((bxID & 0x000F) << 4));
}
void FEDDAQHeader::setSourceID(const uint16_t sourceID) {
header_[2] = ((header_[2] & 0xF0) | ((sourceID & 0x0F00) >> 8));
header_[1] = (sourceID & 0x00FF);
}
FEDDAQHeader::FEDDAQHeader(const uint32_t l1ID,
const uint16_t bxID,
const uint16_t sourceID,
const FEDDAQEventType evtType) {
//clear everything (FOV,H,x,$ all set to 0)
memset(header_, 0x0, 8);
//set the BoE nibble to indicate this is the last fragment
header_[7] = 0x50;
//set variable fields with values supplied
setEventType(evtType);
setL1ID(l1ID);
setBXID(bxID);
setSourceID(sourceID);
}
} // namespace Phase2Tracker
|
/**
* usb_composite_unregister() - unregister a composite driver
* @driver: the driver to unregister
*
* This function is used to unregister drivers using the composite
* driver framework.
*/
void usb_composite_unregister(struct usb_composite_driver *driver)
{
if (composite != driver)
return;
usb_gadget_unregister_driver(&composite_driver);
} |
// To implement the insert/delete check, use the two-pointer algorithm.
// The first pointer (j) walks the longer string (strB).
// The other pointer (i) walks the shorter string (strA).
// While advancing i and j, if there is more than one mismatch between
// strA[i] and strB[j] --> return false.
// Example with strB = "pale":
//   strA = "ale", "ple", "pae", "pal" --> true (one insertion apart)
//   strA = "lae" --> false
bool checkAddDel(string &strA, string &strB) {
int check = 0;
int j = 0;
for (int i = 0; i < strA.length(); ) {
if (check >= 2)
return false;
else if (strA[i] != strB[j]) {
check++;
j++;
}
else if (strA[i] == strB[j]) {
i++;
j++;
}
}
return true;
} |
/**
* Maps the data from the trial run actions to the database.
*
* @author Walter Weinmann
*
*/
public class TrialRunActionMapper {
private static final String AND_SEQUENCE_NUMBER_ACTION =
" AND SEQUENCE_NUMBER_ACTION = ";
private static final String AND_START_TIME = " AND START_TIME = ";
private static final String AND_TEST_SUITE_ID = " AND TEST_SUITE_ID = ";
private static final Logger LOGGER =
Logger.getLogger(TrialRunActionMapper.class.getPackage().getName());
private static final String PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL =
"Precondition: DatabaseAccessor is missing (null)";
private static final String SINGLEQUOTE_COMMA_SPACE_SINGLEQUOTE = "', '";
private static final String UPDATE_TMD_TRIAL_RUN_ACTION =
"UPDATE TMD_TRIAL_RUN_ACTION ";
private static final String WHERE_DATABASE_INSTANCE_ID =
"' WHERE DATABASE_INSTANCE_ID = ";
private final int databaseInstanceId;
private final DatabaseAccessor dbAccess;
private long sequenceNumberAction;
private final String startTime;
private final int testSuiteId;
private final TrialRunProtocolMapper trialRunProtocol;
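    // Rows in TMD_TRIAL_RUN_ACTION are addressed by the composite key
    // (DATABASE_INSTANCE_ID, TEST_SUITE_ID, START_TIME, SEQUENCE_NUMBER_ACTION);
    // every UPDATE issued below filters on all four of these columns.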
/**
* Constructs a <code>TrialRunActionMapper</code> object.
*
     * @param parTrialRunProtocol The <code>TrialRunProtocolMapper</code> object.
* @param parSQLSyntaxCodeTarget The type of the SQL syntax version of the
* database system.
* @param parDatabaseInstanceId The identification of the
* <code>DatabaseInstance</code> object.
* @param parTestSuiteId The identification of the <code>TestSuite</code>
* object.
* @param parStartTime The current time stamp.
*/
public TrialRunActionMapper(
final TrialRunProtocolMapper parTrialRunProtocol,
final String parSQLSyntaxCodeTarget,
final int parDatabaseInstanceId, final int parTestSuiteId,
final String parStartTime) {
super();
assert parTrialRunProtocol != null : "Precondition: TrialRunProtocol is missing (null)";
assert parSQLSyntaxCodeTarget != null : "Precondition: String SQL syntax code target is missing (null)";
assert parStartTime != null : "Precondition: String start time is missing (null)";
dbAccess =
new DatabaseAccessor(Global.DATABASE_SCHEMA_IDENTIFIER_MASTER,
parSQLSyntaxCodeTarget, true);
databaseInstanceId = parDatabaseInstanceId;
sequenceNumberAction = 0L;
startTime = parStartTime;
testSuiteId = parTestSuiteId;
trialRunProtocol = parTrialRunProtocol;
}
private void checkPreconditionComparison(final String parEquals,
final String parMessage) {
if (parEquals == null) {
throw new IllegalArgumentException(
"Comparison equals is missing (null)");
}
if ("".equals(parEquals)) {
throw new IllegalArgumentException(
"Comparison equals is missing (empty)");
}
if (!("N".equals(parEquals) || "Y".equals(parEquals))) {
throw new IllegalArgumentException("Comparison equals ("
+ parEquals + ") is invalid (only N or Y allowed)");
}
if (parMessage == null) {
throw new IllegalArgumentException(
"Comparison message is missing (null)");
}
if ("N".equals(parEquals) && "".equals(parEquals)) {
throw new IllegalArgumentException(
"Comparison message is missing (empty) - mandatory if not equal");
}
}
private void checkPreconditionEndActionError(final Date parStartTime,
final Date parEndTime) {
if (parStartTime == null) {
throw new IllegalArgumentException(
"Start date and time is missing (null)");
}
if (parEndTime == null) {
throw new IllegalArgumentException(
"End date and time is missing (null)");
}
}
private void checkPreconditionEndActionError(final String parErrorMessage) {
if (parErrorMessage == null) {
throw new IllegalArgumentException(
"Error message is missing (null)");
}
if ("".equals(parErrorMessage)) {
throw new IllegalArgumentException(
"Error message is missing (empty)");
}
}
private void checkPreconditionStartAction(final String parStatement) {
if (parStatement == null) {
throw new IllegalArgumentException("Statement is missing (null)");
}
if ("".equals(parStatement)) {
throw new IllegalArgumentException("Statement is missing (empty)");
}
}
/**
* Close the database connection.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean closeConnection() {
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
return dbAccess.closeConnection();
}
private boolean handleDatabaseError(final String parStatement,
final String parMethod) {
final String lvMsg =
"TrialRunActionMapper " + parMethod
+ ": Table TMD_TRIAL_RUN_ACTION ("
+ sequenceNumberAction
+ ") could not be created, statement="
+ parStatement.replaceAll("'", "''");
trialRunProtocol.createErrorProtocol(lvMsg, false);
LOGGER.log(Level.SEVERE, lvMsg);
return false;
}
/**
* Creates the row in the database table <code>TMD_TRIAL_RUN_ACTION</code>.
*
* @param parColumnsTestSuiteAction The <code>Map</code> object containing
* the database columns.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean initialise(
final Map<String, Object> parColumnsTestSuiteAction) {
if (parColumnsTestSuiteAction == null) {
throw new IllegalArgumentException(
"Map containg the columns of the test suite action is missing (null)");
}
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
("INSERT INTO TMD_TRIAL_RUN_ACTION "
+ "(DATABASE_INSTANCE_ID, TEST_SUITE_ID, START_TIME, "
+ "SEQUENCE_NUMBER_ACTION, APPLIED_PATTERN_ORDER_BY, "
+ "APPLIED_PATTERN_SELECT_STMNT, EXECUTION_FREQUENCY, "
+ "OPERATION_CODE, OPERATION_TYPE, "
+ "PATTERN_SQL_IDIOM_NAME, SQL_SYNTAX_CODE, TABLE_NAME, "
+ "TEST_QUERY_PAIR_DESCRIPTION, "
+ "TEST_SUITE_ACTION_DESCRIPTION, TEST_SUITE_OPERATION_NAME, "
+ "TEST_TABLE_DESCRIPTION, UNAPPLIED_PATTERN_ORDER_BY, "
+ "UNAPPLIED_PATTERN_SELECT_STMNT) VALUES ("
+ databaseInstanceId
+ ", "
+ testSuiteId
+ ", "
+ startTime
+ ", "
+ sequenceNumberAction
+ Global.SEPARATOR_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("APPLIED_PATTERN_ORDER_BY")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("APPLIED_PATTERN_SELECT_STMNT")
+ "', "
+ parColumnsTestSuiteAction.get("EXECUTION_FREQUENCY")
+ Global.SEPARATOR_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction.get("OPERATION_CODE")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction.get("OPERATION_TYPE")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("PATTERN_SQL_IDIOM_NAME")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction.get("SQL_SYNTAX_CODE")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction.get("TABLE_NAME")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("TEST_QUERY_PAIR_DESCRIPTION")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("TEST_SUITE_ACTION_DESCRIPTION")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("TEST_SUITE_OPERATION_NAME")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("TEST_TABLE_DESCRIPTION")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("UNAPPLIED_PATTERN_ORDER_BY")
+ Global.SEPARATOR_SINGLE_QUOTE_COMMA_SPACE_SINGLE_QUOTE
+ parColumnsTestSuiteAction
.get("UNAPPLIED_PATTERN_SELECT_STMNT") + "')")
.replaceAll("'null'", Global.NULL);
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "initialise()");
}
return dbAccess.commit();
}
/**
     * Reset the database columns <code>APPLIED_PATTERN_ORDER_BY</code> and <code>APPLIED_PATTERN_SELECT_STMNT</code>.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean resetAppliedPatternSelectStmnt() {
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET APPLIED_PATTERN_ORDER_BY = null, "
+ "APPLIED_PATTERN_SELECT_STMNT = null "
+ "WHERE DATABASE_INSTANCE_ID = " + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement,
"resetAppliedPatternSelectStmnt()");
}
return dbAccess.commit();
}
/**
     * Reset the database columns <code>UNAPPLIED_PATTERN_ORDER_BY</code> and <code>UNAPPLIED_PATTERN_SELECT_STMNT</code>.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean resetUnappliedPatternSelectStmnt() {
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET UNAPPLIED_PATTERN_ORDER_BY = null, "
+ "UNAPPLIED_PATTERN_SELECT_STMNT = null "
+ "WHERE DATABASE_INSTANCE_ID = " + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement,
"resetUnappliedPatternSelectStmnt()");
}
return dbAccess.commit();
}
/**
     * Updates in the database the columns <code>APPLIED_DURATION</code>,
     * <code>APPLIED_END_TIME</code>, <code>APPLIED_START_TIME</code>, and
     * <code>APPLIED_STATUS</code>.
*
* @param parStartTime The new start date and time.
* @param parEndTime The new end date and time.
* @param parDuration The new duration of the query execution.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setAppliedEndAction(final Date parStartTime,
final Date parEndTime, final long parDuration) {
checkPreconditionEndActionError(parStartTime, parEndTime);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET APPLIED_DURATION = "
+ parDuration
+ ", APPLIED_END_TIME = CAST(TO_TIMESTAMP('"
+ new SimpleDateFormat(
Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_JAVA)
.format(parEndTime)
+ SINGLEQUOTE_COMMA_SPACE_SINGLEQUOTE
+ Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_SQL
+ "') AS TIMESTAMP(9)), APPLIED_START_TIME = CAST(TO_TIMESTAMP('"
+ new SimpleDateFormat(
Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_JAVA)
.format(parStartTime)
+ SINGLEQUOTE_COMMA_SPACE_SINGLEQUOTE
+ Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_SQL
+ "') AS TIMESTAMP(9)), APPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_END_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setAppliedEndAction()");
}
return dbAccess.commit();
}
/**
* Updates in the database the columns <code>APPLIED_ERROR_MESSAGE</code>
* and <code>APPLIED_STATUS</code>.
*
* @param parErrorMessage The new applied error message.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setAppliedEndActionError(final String parErrorMessage) {
checkPreconditionEndActionError(parErrorMessage);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION + "SET APPLIED_ERROR_MESSAGE = '"
+ parErrorMessage + "', APPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_END_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement,
"setAppliedEndActionError()");
}
return dbAccess.commit();
}
/**
* Updates in the database the columns
* <code>APPLIED_PATTERN_SELECT_STMNT</code> and
* <code>APPLIED_STATUS</code>.
*
* @param parStatement The new applied <code>SQL</code> statement.
     * @param parOrderBy The new <code>ORDER BY</code> clause.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setAppliedStartAction(final String parStatement,
final String parOrderBy) {
checkPreconditionStartAction(parStatement);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET APPLIED_PATTERN_ORDER_BY = '" + parOrderBy
+ "', APPLIED_PATTERN_SELECT_STMNT = '" + parStatement
+ "', APPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_START_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setAppliedStartAction()");
}
return dbAccess.commit();
}
/**
* Updates in the database the columns <code>COMPARISON_EQUALS</code> and
* <code>COMPARISON_MESSAGE</code>.
*
* @param parEquals Y if both <code>ResultSet</code>s are equal, or N
* otherwise.
* @param parMessage The new message reasoning the inequality of both
* <code>ResultSet</code>.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setComparison(final String parEquals,
final String parMessage) {
checkPreconditionComparison(parEquals, parMessage);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
String lvMessage;
if ("".equals(parMessage)) {
lvMessage = Global.NULL;
} else {
lvMessage = "'" + parMessage.replaceAll("'", "''") + "'";
}
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION + "SET COMPARISON_EQUALS = '"
+ parEquals + "', COMPARISON_MESSAGE = " + lvMessage
+ " WHERE DATABASE_INSTANCE_ID = " + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setComparison()");
}
return dbAccess.commit();
}
/**
* Sets the current action sequence number.
*
* @param parSequenceNumberAction The current action sequence number.
*/
public final void setSequenceNumberAction(final long parSequenceNumberAction) {
sequenceNumberAction = parSequenceNumberAction;
}
/**
* Updates in the database the column <code>TABLE_NAME</code>.
*
* @param parTableName The new table name.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setTableName(final String parTableName) {
if (parTableName == null) {
throw new IllegalArgumentException("Table name is missing (null)");
}
if ("".equals(parTableName)) {
throw new IllegalArgumentException("Table name is missing (empty)");
}
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION + "SET TABLE_NAME = '"
+ parTableName + WHERE_DATABASE_INSTANCE_ID
+ databaseInstanceId + AND_TEST_SUITE_ID + testSuiteId
+ AND_START_TIME + startTime
+ AND_SEQUENCE_NUMBER_ACTION + sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setTableName()");
}
return dbAccess.commit();
}
/**
     * Updates in the database the columns <code>UNAPPLIED_DURATION</code>,
     * <code>UNAPPLIED_END_TIME</code>, <code>UNAPPLIED_START_TIME</code>, and
     * <code>UNAPPLIED_STATUS</code>.
*
* @param parStartTime The new start date and time.
* @param parEndTime The new end date and time.
* @param parDuration The new duration of the query execution.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setUnappliedEndAction(final Date parStartTime,
final Date parEndTime, final long parDuration) {
checkPreconditionEndActionError(parStartTime, parEndTime);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET UNAPPLIED_DURATION = "
+ parDuration
+ ", UNAPPLIED_END_TIME = CAST(TO_TIMESTAMP('"
+ new SimpleDateFormat(
Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_JAVA)
.format(parEndTime)
+ SINGLEQUOTE_COMMA_SPACE_SINGLEQUOTE
+ Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_SQL
+ "') AS TIMESTAMP(9)), UNAPPLIED_START_TIME = CAST(TO_TIMESTAMP('"
+ new SimpleDateFormat(
Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_JAVA)
.format(parStartTime)
+ SINGLEQUOTE_COMMA_SPACE_SINGLEQUOTE
+ Global.DATE_FORMAT_DD_MM_YYYY_HH_MM_SS_SSS_SQL
+ "') AS TIMESTAMP(9)), UNAPPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_END_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setUnappliedEndAction()");
}
return dbAccess.commit();
}
/**
* Updates in the database the columns <code>UNAPPLIED_ERROR_MESSAGE</code>
* and <code>UNAPPLIED_STATUS</code>.
*
* @param parErrorMessage The new unapplied error message.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setUnappliedEndActionError(final String parErrorMessage) {
checkPreconditionEndActionError(parErrorMessage);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION + "SET UNAPPLIED_ERROR_MESSAGE = '"
+ parErrorMessage + "', UNAPPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_END_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement,
"setUnappliedEndActionError()");
}
return dbAccess.commit();
}
/**
* Updates in the database the columns
* <code>UNAPPLIED_PATTERN_SELECT_STMNT</code> and
* <code>UNAPPLIED_STATUS</code>.
*
* @param parStatement The new unapplied <code>SQL</code> statement.
     * @param parOrderBy The new <code>ORDER BY</code> clause.
*
     * @return <code>true</code> if the operation succeeded and
* <code>false</code> otherwise.
*/
public final boolean setUnappliedStartAction(final String parStatement,
final String parOrderBy) {
checkPreconditionStartAction(parStatement);
assert dbAccess != null : PRECONDITION_DATABASE_ACCESSOR_IS_MISSING_NULL;
final String lvStatement =
UPDATE_TMD_TRIAL_RUN_ACTION
+ "SET UNAPPLIED_PATTERN_ORDER_BY = '" + parOrderBy
+ "', UNAPPLIED_PATTERN_SELECT_STMNT = '"
+ parStatement + "', UNAPPLIED_STATUS = '"
+ Global.TRIAL_RUN_STATUS_START_ACTION
+ WHERE_DATABASE_INSTANCE_ID + databaseInstanceId
+ AND_TEST_SUITE_ID + testSuiteId + AND_START_TIME
+ startTime + AND_SEQUENCE_NUMBER_ACTION
+ sequenceNumberAction;
if (!dbAccess.executeUpdate(lvStatement)) {
return handleDatabaseError(lvStatement, "setUnappliedStartAction()");
}
return dbAccess.commit();
}
}
Grav|Lab Early Access is almost here!
Hey folks,
Grav|Lab will start its preliminary testing on October 21st! The game will feature 17 levels and a built-in level editor. Workshop support will come in an update after the Early Access launch.
http://store.steampowered.com/app/408340/
Grav|Lab is nominated for a Proto Award...
... and you could be invited!
Grav|Lab is an honoree of the 2016 Proto Awards in Los Angeles on October 8th! We have one empty seat at our table and we would like to offer you this spot. To enter the draw for it, please like and retweet this tweet:
https://twitter.com/skyworxx/status/782265509601357824
Please do so by the end of Sunday, October 2nd. The winner will be added as a +1 on the Grav|Lab guest list for the Proto Awards. Accommodation and travel are not included.
Oculus Connect and Steam Dev Days
Last but not least: We will be at Oculus Connect and Steam Dev Days, probably in the hallway demoing from our portable rig on Oculus Touch or HTC Vive. If you see us, please say Hi :).
Cheers,
Grav|Lab Management
package com.evieclient.utils.render;
import io.sentry.Sentry;
import net.minecraft.client.Minecraft;
import net.minecraft.client.renderer.texture.DynamicTexture;
import net.minecraft.util.ResourceLocation;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.InputStream;
public class LoadTexture {
public static ResourceLocation LoadTexture(String path) {
        BufferedImage bufferedImage;
        // try-with-resources ensures the stream is closed even if reading fails;
        // a missing resource yields a null stream, which ImageIO.read rejects
        // with an exception that is caught below
        try (InputStream inputStream = LoadTexture.class.getClassLoader().getResourceAsStream(path)) {
            bufferedImage = ImageIO.read(inputStream);
        } catch (Exception e) {
            Sentry.captureException(e);
            return null;
        }
if (bufferedImage != null) {
return Minecraft.getMinecraft().getRenderManager().renderEngine.getDynamicTextureLocation(path, new DynamicTexture(bufferedImage));
}
return null;
}
}
def sample_data(self, sample_size):
    """Draw sample_size training-example indices, each chosen with probability
    proportional to self.weights (assumed normalized to sum to 1; `random` is
    assumed to be `random.random`, imported at module level)."""
    samples_idx = []
    for _ in range(sample_size):
        rand_w = random()
        cumulative = 0.0
        for j in range(self.num_training_examples):
            cumulative += self.weights[j]
            if cumulative > rand_w:
                samples_idx.append(j)
                break
    return samples_idx
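
# Equivalent vectorized draw (a sketch, assuming `import numpy as np` and that
# self.weights is a probability vector summing to 1):
#     samples_idx = np.random.choice(self.num_training_examples,
#                                    size=sample_size, p=self.weights).tolist()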
#![doc(html_root_url = "https://docs.rs/mio/0.7.11")]
#![deny(
missing_docs,
missing_debug_implementations,
rust_2018_idioms,
unused_imports,
dead_code
)]
#![cfg_attr(docsrs, feature(doc_cfg))]
// Disallow warnings when running tests.
#![cfg_attr(test, deny(warnings))]
// Disallow warnings in examples.
#![doc(test(attr(deny(warnings))))]
//! Mio is a fast, low-level I/O library for Rust focusing on non-blocking APIs
//! and event notification for building high performance I/O apps with as little
//! overhead as possible over the OS abstractions.
//!
//! # Usage
//!
//! Using Mio starts by creating a [`Poll`], which reads events from the OS and
//! puts them into [`Events`]. You can handle I/O events from the OS with it.
//!
//! For more detail, see [`Poll`].
//!
//! [`Poll`]: ../mio/struct.Poll.html
//! [`Events`]: ../mio/event/struct.Events.html
//!
//! ## Examples
//!
//! Examples can be found in the `examples` directory of the source code, or [on
//! GitHub].
//!
//! [on GitHub]: https://github.com/tokio-rs/mio/tree/master/examples
//!
//! ## Guide
//!
//! A getting started guide is available in the [`guide`] module.
//!
//! ## Available features
//!
//! The available features are described in the [`features`] module.
// macros used internally
#[macro_use]
mod macros;
mod interest;
mod poll;
mod sys;
mod token;
mod waker;
pub mod event;
cfg_io_source! {
mod io_source;
}
cfg_net! {
pub mod net;
}
#[doc(no_inline)]
pub use event::Events;
pub use interest::Interest;
pub use poll::{Poll, Registry};
pub use token::Token;
pub use waker::Waker;
#[cfg(all(unix, feature = "os-ext"))]
#[cfg_attr(docsrs, doc(cfg(all(unix, feature = "os-ext"))))]
pub mod unix {
//! Unix only extensions.
pub mod pipe {
//! Unix pipe.
//!
//! See the [`new`] function for documentation.
pub use crate::sys::pipe::{new, Receiver, Sender};
}
pub use crate::sys::SourceFd;
}
#[cfg(all(windows, feature = "os-ext"))]
#[cfg_attr(docsrs, doc(cfg(all(windows, feature = "os-ext"))))]
pub mod windows {
//! Windows only extensions.
pub use crate::sys::named_pipe::NamedPipe;
}
pub mod features {
//! # Mio's optional features.
//!
//! This document describes the available features in Mio.
//!
#![cfg_attr(feature = "os-poll", doc = "## `os-poll` (enabled)")]
#![cfg_attr(not(feature = "os-poll"), doc = "## `os-poll` (disabled)")]
//!
    //! Mio by default provides only a shell implementation that `panic!`s the
    //! moment it is actually run. Running it requires OS support, which is
    //! enabled by activating the `os-poll` feature.
//!
//! This makes `Poll`, `Registry` and `Waker` functional.
//!
#![cfg_attr(feature = "os-ext", doc = "## `os-ext` (enabled)")]
#![cfg_attr(not(feature = "os-ext"), doc = "## `os-ext` (disabled)")]
//!
//! `os-ext` enables additional OS specific facilities. These facilities can
//! be found in the `unix` and `windows` module.
//!
#![cfg_attr(feature = "net", doc = "## Network types (enabled)")]
#![cfg_attr(not(feature = "net"), doc = "## Network types (disabled)")]
//!
//! The `net` feature enables networking primitives in the `net` module.
}
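// For reference, these features are enabled from a dependent crate's
// Cargo.toml (version pinned to match the doc root above):
//
//     [dependencies]
//     mio = { version = "0.7.11", features = ["os-poll", "net"] }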
pub mod guide {
//! # Getting started guide.
//!
//! In this guide we'll do the following:
//!
//! 1. Create a [`Poll`] instance (and learn what it is).
//! 2. Register an [event source].
//! 3. Create an event loop.
//!
//! At the end you'll have a very small (but quick) TCP server that accepts
//! connections and then drops (disconnects) them.
//!
//! ## 1. Creating a `Poll` instance
//!
//! Using Mio starts by creating a [`Poll`] instance, which monitors events
//! from the OS and puts them into [`Events`]. This allows us to execute I/O
//! operations based on what operations are ready.
//!
//! [`Poll`]: ../struct.Poll.html
//! [`Events`]: ../event/struct.Events.html
//!
#![cfg_attr(feature = "os-poll", doc = "```")]
#![cfg_attr(not(feature = "os-poll"), doc = "```ignore")]
//! # use mio::{Poll, Events};
//! # fn main() -> std::io::Result<()> {
//! // `Poll` allows for polling of readiness events.
//! let poll = Poll::new()?;
//! // `Events` is collection of readiness `Event`s and can be filled by
//! // calling `Poll::poll`.
//! let events = Events::with_capacity(128);
//! # drop((poll, events));
//! # Ok(())
//! # }
//! ```
//!
//! For example if we're using a [`TcpListener`], we'll only want to
//! attempt to accept an incoming connection *iff* any connections are
//! queued and ready to be accepted. We don't want to waste our time if no
//! connections are ready.
//!
//! [`TcpListener`]: ../net/struct.TcpListener.html
//!
//! ## 2. Registering event source
//!
//! After we've created a [`Poll`] instance that monitors events from the OS
//! for us, we need to provide it with a source of events. This is done by
    //! registering an [event source]. As the name “event source” suggests, it is
//! a source of events which can be polled using a `Poll` instance. On Unix
//! systems this is usually a file descriptor, or a socket/handle on
//! Windows.
//!
//! In the example below we'll use a [`TcpListener`] for which we'll receive
//! an event (from [`Poll`]) once a connection is ready to be accepted.
//!
//! [event source]: ../event/trait.Source.html
//!
    #![cfg_attr(all(feature = "os-poll", feature = "net"), doc = "```")]
    #![cfg_attr(not(all(feature = "os-poll", feature = "net")), doc = "```ignore")]
//! # use mio::net::TcpListener;
//! # use mio::{Poll, Token, Interest};
//! # fn main() -> std::io::Result<()> {
//! # let poll = Poll::new()?;
//! # let address = "127.0.0.1:0".parse().unwrap();
//! // Create a `TcpListener`, binding it to `address`.
//! let mut listener = TcpListener::bind(address)?;
//!
//! // Next we register it with `Poll` to receive events for it. The `SERVER`
//! // `Token` is used to determine that we received an event for the listener
//! // later on.
//! const SERVER: Token = Token(0);
//! poll.registry().register(&mut listener, SERVER, Interest::READABLE)?;
//! # Ok(())
//! # }
//! ```
//!
//! Multiple event sources can be [registered] (concurrently), so we can
//! monitor multiple sources at a time.
//!
//! [registered]: ../struct.Registry.html#method.register
//!
//! ## 3. Creating the event loop
//!
//! After we've created a [`Poll`] instance and registered one or more
//! [event sources] with it, we can [poll] it for events. Polling for events
//! is simple, we need a container to store the events: [`Events`] and need
//! to do something based on the polled events (this part is up to you, we
//! can't do it all!). If we do this in a loop we've got ourselves an event
//! loop.
//!
//! The example below shows the event loop in action, completing our small
//! TCP server.
//!
//! [poll]: ../struct.Poll.html#method.poll
//! [event sources]: ../event/trait.Source.html
//!
    #![cfg_attr(all(feature = "os-poll", feature = "net"), doc = "```")]
    #![cfg_attr(not(all(feature = "os-poll", feature = "net")), doc = "```ignore")]
//! # use std::io;
//! # use std::time::Duration;
//! # use mio::net::TcpListener;
//! # use mio::{Poll, Token, Interest, Events};
//! # fn main() -> io::Result<()> {
//! # let mut poll = Poll::new()?;
//! # let mut events = Events::with_capacity(128);
//! # let address = "127.0.0.1:0".parse().unwrap();
//! # let mut listener = TcpListener::bind(address)?;
//! # const SERVER: Token = Token(0);
//! # poll.registry().register(&mut listener, SERVER, Interest::READABLE)?;
//! // Start our event loop.
//! loop {
//! // Poll the OS for events, waiting at most 100 milliseconds.
//! poll.poll(&mut events, Some(Duration::from_millis(100)))?;
//!
//! // Process each event.
//! for event in events.iter() {
//! // We can use the token we previously provided to `register` to
//! // determine for which type the event is.
//! match event.token() {
//! SERVER => loop {
//! // One or more connections are ready, so we'll attempt to
//! // accept them (in a loop).
//! match listener.accept() {
//! Ok((connection, address)) => {
//! println!("Got a connection from: {}", address);
//! # drop(connection);
//! },
//! // A "would block error" is returned if the operation
//! // is not ready, so we'll stop trying to accept
//! // connections.
//! Err(ref err) if would_block(err) => break,
//! Err(err) => return Err(err),
//! }
//! }
//! # _ => unreachable!(),
//! }
//! }
//! # return Ok(());
//! }
//!
//! fn would_block(err: &io::Error) -> bool {
//! err.kind() == io::ErrorKind::WouldBlock
//! }
//! # }
//! ```
}
def iterate_threads(self):
    self._log("iterate_threads()")
    thread_entry = THREADENTRY32()
    snapshot = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, self.pid)
    if snapshot == INVALID_HANDLE_VALUE:
        raise pdx("CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, %d)" % self.pid, True)
    thread_entry.dwSize = sizeof(thread_entry)
    if not kernel32.Thread32First(snapshot, byref(thread_entry)):
        # don't leak the snapshot handle when no thread is found
        self.close_handle(snapshot)
        return
    while 1:
        if thread_entry.th32OwnerProcessID == self.pid:
            yield thread_entry
        if not kernel32.Thread32Next(snapshot, byref(thread_entry)):
            break
    self.close_handle(snapshot)
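
# Typical usage (a sketch; `dbg` stands for an attached pydbg-style debugger
# instance exposing this method):
#     for thread_entry in dbg.iterate_threads():
#         print(thread_entry.th32ThreadID)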
Influence of Engagement on the Performance of Company Employees in Cavite, Philippines
Employees are among the most important stakeholders in an organization, so their performance should align with the company’s goals and objectives. In meeting these expectations, however, employees’ engagement with their work must also be considered: keeping engagement high fosters loyalty and maintains a productive work environment. The main objective of this research was therefore to evaluate the level of employee engagement and its effect on performance, specifically among sales associates working in the province of Cavite, Philippines. A total of 153 participants were selected using Slovin’s formula, and the gathered data were analyzed using descriptive, comparative, and causal research designs. In comparing employee engagement and performance across socio-demographic profiles, only sex showed a significant difference. Simple linear regression likewise indicated that the socio-demographic profile in terms of sex is a determinant of employees’ engagement in their work, and that employee engagement is highly significant to the productivity of sales associates in Cavite, Philippines. This implies that employers should value their employees so that they remain highly engaged in their jobs, which in turn yields more profit for the company. Keywords: Quantitative Research, Company Employees, Employee Engagement, Employee Performance, and Sales Associates
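For reference, Slovin's formula gives the sample size as n = N / (1 + N * e^2), where N is the population size and e the margin of error. As an illustrative check (the inputs below are assumptions, not figures from the study): with N = 248 and e = 0.05, n = 248 / (1 + 248 * 0.0025) = 248 / 1.62 ≈ 153, the number of participants reported.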
Culture eats strategy for lunch.
Culture is a balanced blend of human psychology, attitudes, actions, and beliefs that, combined, create either pleasure or pain, serious momentum or miserable stagnation. A strong culture flourishes with a clear set of values and norms that actively guide the way a company operates. Employees are actively and passionately engaged in the business, operating from a sense of confidence and empowerment rather than navigating their days through miserably extensive procedures and mind-numbing bureaucracy. Performance-oriented cultures possess statistically better financial growth, with high employee involvement, strong internal communication, and an acceptance of a healthy level of risk-taking in order to achieve new levels of innovation.
package testresource
import "github.com/giantswarm/microerror"
var testError = microerror.New("just testing")
// IsTestError asserts testError.
func IsTestError(err error) bool {
return microerror.Cause(err) == testError
}
"""This module contains the classes used for constructing and conducting an Experiment (most notably,
:class:`CrossValidationExperiment`). Any class contained herein whose name starts with 'Base' should not be used directly.
:class:`CrossValidationExperiment` is the preferred means of conducting one-off experimentation
Related
-------
:mod:`hyperparameter_hunter.experiment_core`
Defines :class:`ExperimentMeta`, an understanding of which is critical to being able to understand :mod:`experiments`
:mod:`hyperparameter_hunter.metrics`
Defines :class:`ScoringMixIn`, a parent of :class:`experiments.BaseExperiment` that enables scoring and evaluating models
:mod:`hyperparameter_hunter.models`
Used to instantiate the actual learning models, which are a single part of the entire experimentation workflow, albeit the
most significant part
Notes
-----
As mentioned above, the inner workings of :mod:`experiments` will be very confusing without a grasp on what's going on in
:mod:`experiment_core`, and its related modules"""
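# A minimal usage sketch (assumes an active Environment has already been set up
# per hyperparameter_hunter's documentation; the algorithm and parameters below
# are illustrative placeholders, not part of this module):
#
#     from sklearn.ensemble import RandomForestClassifier
#     experiment = CrossValidationExperiment(
#         model_initializer=RandomForestClassifier,
#         model_init_params=dict(n_estimators=100),
#     )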
##################################################
# Import Own Assets
##################################################
from hyperparameter_hunter.algorithm_handlers import identify_algorithm, identify_algorithm_hyperparameters
from hyperparameter_hunter.exception_handler import EnvironmentInactiveError, EnvironmentInvalidError, RepeatedExperimentError
from hyperparameter_hunter.experiment_core import ExperimentMeta
from hyperparameter_hunter.key_handler import HyperparameterKeyMaker
from hyperparameter_hunter.metrics import ScoringMixIn, get_formatted_target_metric
from hyperparameter_hunter.models import model_selector
from hyperparameter_hunter.recorders import RecorderList
from hyperparameter_hunter.settings import G
##################################################
# Import Miscellaneous Assets
##################################################
from abc import abstractmethod
from copy import copy, deepcopy
from inspect import isclass, signature
import numpy as np
import os
import pandas as pd
import random
import shutil
from sys import exc_info
from uuid import uuid4 as uuid
import warnings
##################################################
# Import Learning Assets
##################################################
from sklearn.model_selection import KFold, StratifiedKFold, RepeatedKFold, RepeatedStratifiedKFold
import sklearn.utils as sklearn_utils
pd.set_option('display.expand_frame_repr', False)
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=DeprecationWarning)
warnings.simplefilter(action='ignore', category=sklearn_utils.DataConversionWarning)
np.random.seed(32)
class BaseExperiment(ScoringMixIn):
def __init__(
# TODO: Make `model_init_params` an optional kwarg - If not given, algorithm defaults used
self, model_initializer, model_init_params, model_extra_params=None, feature_selector=None,
preprocessing_pipeline=None, preprocessing_params=None, notes=None, do_raise_repeated=False, auto_start=True,
target_metric=None,
):
"""Base class for :class:`BaseCVExperiment`
Parameters
----------
model_initializer: Class, or functools.partial, or class instance
The algorithm class being used to initialize a model
model_init_params: Dict, or object
The dictionary of arguments given when creating a model instance with `model_initializer` via the `__init__` method
of :class:`models.Model`. Any kwargs that are considered valid by the `__init__` method of `model_initializer` are
valid in `model_init_params`
model_extra_params: Dict, or None, default=None
A dictionary of extra parameters passed to :class:`models.Model`. This is used to provide parameters to models'
non-initialization methods (like `fit`, `predict`, `predict_proba`, etc.), and for neural networks
feature_selector: List of str, callable, list of booleans, default=None
The value provided when splitting apart the input data for all provided DataFrames. `feature_selector` is provided as
the second argument for calls to `pandas.DataFrame.loc` in :meth:`BaseExperiment._initial_preprocessing`. If None,
`feature_selector` is set to all columns in :attr:`train_dataset`, less :attr:`target_column`, and :attr:`id_column`
preprocessing_pipeline: ...
... Experimental...
preprocessing_params: ...
... Experimental...
notes: String, or None, default=None
Additional information about the Experiment that will be saved with the Experiment's description result file. This
serves no purpose other than to facilitate saving Experiment details in a more readable format
do_raise_repeated: Boolean, default=False
If True and this Experiment locates a previous Experiment's results with matching Environment and Hyperparameter Keys,
a RepeatedExperimentError will be raised. Else, a warning will be logged
auto_start: Boolean, default=True
If True, after the Experiment is initialized, it will automatically call :meth:`BaseExperiment.preparation_workflow`,
followed by :meth:`BaseExperiment.experiment_workflow`, effectively completing all essential tasks without requiring
additional method calls
target_metric: Tuple, or str, default=('oof', <first key in :attr:`environment.Environment.metrics_map`>)
A path denoting the metric to be used to compare completed Experiments or to use for certain early stopping
procedures in some model classes. The first value should be one of ['oof', 'holdout', 'in_fold']. The second value
should be the name of a metric being recorded according to the values supplied in
:attr:`environment.Environment.metrics_params`. See the documentation for :func:`metrics.get_formatted_target_metric`
for more info. Any values returned by, or used as the `target_metric` input to this function are acceptable values
for :attr:`BaseExperiment.target_metric`"""
self.model_initializer = model_initializer
self.model_init_params = identify_algorithm_hyperparameters(self.model_initializer) # FLAG: Play nice with Keras
try:
self.model_init_params.update(model_init_params)
except TypeError:
self.model_init_params.update(dict(build_fn=model_init_params))
self.model_extra_params = model_extra_params
self.feature_selector = feature_selector
self.preprocessing_pipeline = preprocessing_pipeline
self.preprocessing_params = preprocessing_params
self.notes = notes
self.do_raise_repeated = do_raise_repeated
self.auto_start = auto_start
self.target_metric = target_metric
#################### Attributes From Active Environment ####################
G.Env.initialize_reporting()
self._validate_environment()
self.train_dataset = G.Env.train_dataset.copy()
try:
self.holdout_dataset = G.Env.holdout_dataset.copy()
except AttributeError:
self.holdout_dataset = G.Env.holdout_dataset
try:
self.test_dataset = G.Env.test_dataset.copy()
except AttributeError:
self.test_dataset = G.Env.test_dataset
self.target_column = G.Env.target_column
self.id_column = G.Env.id_column
self.do_predict_proba = G.Env.do_predict_proba
self.prediction_formatter = G.Env.prediction_formatter
self.metrics_params = G.Env.metrics_params
self.experiment_params = G.Env.cross_experiment_params
self.cross_validation_params = G.Env.cross_validation_params
self.result_paths = G.Env.result_paths
self.cross_experiment_key = G.Env.cross_experiment_key
#################### Instantiate Other Attributes ####################
self.train_input_data = None
self.train_target_data = None
self.holdout_input_data = None
self.holdout_target_data = None
self.test_input_data = None
self.model = None
self.metrics_map = None # Set by :class:`metrics.ScoringMixIn`
self.stat_aggregates = dict()
self.result_description = None
#################### Experiment Identification Attributes ####################
self.experiment_id = None
self.hyperparameter_key = None
self.algorithm_name, self.module_name = identify_algorithm(self.model_initializer)
ScoringMixIn.__init__(self, **self.metrics_params if self.metrics_params else {})
if self.auto_start is True:
self.preparation_workflow()
self.experiment_workflow()
def __repr__(self):
return '{}("{}", cross_experiment_key="{}", hyperparameter_key="{}")'.format(
type(self).__name__, self.experiment_id, self.cross_experiment_key, self.hyperparameter_key
)
def __getattr__(self, attr):
"""If AttributeError thrown, resort to checking :attr:`settings.G.Env` for target attribute"""
try:
return getattr(G.Env, attr)
except AttributeError:
raise AttributeError("Could not find '{}' in 'G.Env', or any of the following locations: {}".format(
attr, [_.__name__ for _ in type(self).__mro__]
)).with_traceback(exc_info()[2]) from None
def experiment_workflow(self):
"""Define the actual experiment process, including execution, result saving, and cleanup"""
if self.hyperparameter_key.exists is True:
_ex = F'{self!r} has already been run'
if self.do_raise_repeated is True:
self._clean_up()
raise RepeatedExperimentError(_ex)
G.warn(_ex)
self._initialize_random_seeds()
self._initial_preprocessing()
self.execute()
recorders = RecorderList(file_blacklist=G.Env.file_blacklist)
recorders.format_result()
G.log(F'Saving results for Experiment: "{self.experiment_id}"')
recorders.save_result()
self._clean_up()
def preparation_workflow(self):
"""Execute all tasks that must take place before the experiment is actually started. Such tasks include (but are not
limited to): Creating experiment IDs and hyperparameter keys, creating script backups, and validating parameters"""
G.debug('Starting preparation_workflow...')
self._generate_experiment_id()
self._create_script_backup()
self._validate_parameters()
self._generate_hyperparameter_key()
self._additional_preparation_steps()
G.debug('Completed preparation_workflow')
@abstractmethod
def _additional_preparation_steps(self):
"""Perform additional preparation tasks prior to initializing random seeds and beginning initial preprocessing"""
raise NotImplementedError()
@abstractmethod
def execute(self):
"""Execute the fitting protocol for the Experiment, comprising the following: instantiation of learners for each run,
preprocessing of data as appropriate, training learners, making predictions, and evaluating and aggregating those
predictions and other stats/metrics for later use"""
raise NotImplementedError()
##################################################
# Data Preprocessing Methods:
##################################################
def _initial_preprocessing(self):
"""Perform preprocessing steps prior to executing fitting protocol (usually cross-validation), consisting of: 1) Split
train/holdout data into respective train/holdout input and target data attributes, 2) Feature selection on input data
sets, 3) Set target datasets to target_column contents, 4) Initialize PreprocessingPipeline to perform core preprocessing,
5) Set datasets to their (modified) counterparts in PreprocessingPipeline, 6) Log whether datasets changed"""
#################### Preprocessing ####################
# preprocessor = PreprocessingPipelineMixIn(
# pipeline=[], preprocessing_params=dict(apply_standard_scale=True), features=self.features,
# target_column=self.target_column, train_input_data=self.train_input_data,
# train_target_data=self.train_target_data, holdout_input_data=self.holdout_input_data,
# holdout_target_data=self.holdout_target_data, test_input_data=self.test_input_data,
# fitting_guide=None, fail_gracefully=False, preprocessing_stage='infer'
# )
#
# # TODO: Switch from below direct calls to preprocessor.execute_pipeline() call
# # TODO: After calling execute_pipeline(), set data attributes to their counterparts in preprocessor class
# preprocessor.data_imputation()
# preprocessor.target_data_transformation()
# preprocessor.data_scaling()
#
# for dataset_name in preprocessor.all_input_sets + preprocessor.all_target_sets:
# old_val, new_val = getattr(self, dataset_name), getattr(preprocessor, dataset_name)
# G.log('Dataset: "{}" {} updated'.format(dataset_name, 'was not' if old_val.equals(new_val) else 'was'))
# setattr(self, dataset_name, new_val)
self.train_input_data = self.train_dataset.copy().loc[:, self.feature_selector]
self.train_target_data = self.train_dataset.copy()[[self.target_column]]
if isinstance(self.holdout_dataset, pd.DataFrame):
self.holdout_input_data = self.holdout_dataset.copy().loc[:, self.feature_selector]
self.holdout_target_data = self.holdout_dataset.copy()[[self.target_column]]
if isinstance(self.test_dataset, pd.DataFrame):
self.test_input_data = self.test_dataset.copy().loc[:, self.feature_selector]
G.log('Initial preprocessing stage complete')
##################################################
# Supporting Methods:
##################################################
def _validate_parameters(self):
"""Ensure provided input parameters are properly formatted"""
#################### target_metric ####################
self.target_metric = get_formatted_target_metric(self.target_metric, self.metrics_map)
#################### feature_selector ####################
if self.feature_selector is None:
restricted_cols = [_ for _ in [self.target_column, self.id_column] if _ is not None]
self.feature_selector = [_ for _ in self.train_dataset.columns.values if _ not in restricted_cols]
G.debug('Experiment parameters have been validated')
def _validate_environment(self):
"""Check that there is a currently active Environment instance that is not already occupied"""
if G.Env is None:
raise EnvironmentInactiveError('')
if G.Env.current_task is None:
G.Env.current_task = self
G.log(F'Validated Environment with key: "{self.cross_experiment_key}"')
else:
raise EnvironmentInvalidError('An experiment is in progress. It must finish before a new one can be started')
@staticmethod
def _clean_up():
"""Clean up after experiment to prepare for next experiment"""
G.Env.current_task = None
##################################################
# Key/ID Methods:
##################################################
def _generate_experiment_id(self):
"""Set :attr:`experiment_id` to a UUID"""
self.experiment_id = str(uuid())
G.log('')
G.log('Initialized new Experiment with ID: {}'.format(self.experiment_id))
def _generate_hyperparameter_key(self):
"""Set :attr:`hyperparameter_key` to a key to describe the experiment's hyperparameters"""
parameters = dict(
model_initializer=self.model_initializer,
model_init_params=self.model_init_params,
model_extra_params=self.model_extra_params,
preprocessing_pipeline=self.preprocessing_pipeline,
preprocessing_params=self.preprocessing_params,
feature_selector=self.feature_selector,
# FLAG: Should probably add :attr:`target_metric` to key - With option to ignore it?
)
self.hyperparameter_key = HyperparameterKeyMaker(parameters, self.cross_experiment_key)
G.log('Generated hyperparameter key: {}'.format(self.hyperparameter_key))
def _create_script_backup(self):
"""Create and save a copy of the script that initialized the Experiment"""
#################### Attempt to Copy Source Script if Allowed ####################
try:
if G.Env.result_paths['script_backup'] is not None:
try:
shutil.copyfile(self.source_script, F'{self.result_paths["script_backup"]}/{self.experiment_id}.py')
except FileNotFoundError:
os.makedirs(self.result_paths["script_backup"], exist_ok=False)
shutil.copyfile(self.source_script, F'{self.result_paths["script_backup"]}/{self.experiment_id}.py')
G.log('Created backup of file: "{}"'.format(self.source_script))
else:
G.log('Skipped creating backup of file: "{}"'.format(self.source_script))
#################### Exception Handling ####################
except AttributeError as _ex:
if G.Env is None:
raise EnvironmentInactiveError(extra='\n{!s}'.format(_ex))
if not hasattr(G.Env, 'result_paths'):
raise EnvironmentInvalidError(extra='G.Env lacks "result_paths" attribute\n{!s}'.format(_ex))
raise
except KeyError as _ex:
if 'script_backup' not in G.Env.result_paths:
raise EnvironmentInvalidError(extra='G.Env.result_paths lacks "script_backup" key\n{!s}'.format(_ex))
raise
##################################################
# Utility Methods:
##################################################
def _initialize_random_seeds(self):
"""Initialize global random seed, and generate set of random seeds for each fold/run if not provided"""
np.random.seed(self.experiment_params['global_random_seed'])
random.seed(self.experiment_params['global_random_seed'])
self._random_seed_initializer()
G.debug('Initialized random seeds for experiment')
def _random_seed_initializer(self):
"""Generate set of random seeds for each repetition/fold/run if not provided"""
if self.experiment_params['random_seeds'] is None:
self.experiment_params['random_seeds'] = np.random.randint(*self.experiment_params['random_seed_bounds'], size=(
self.cross_validation_params.get('n_repeats', 1),
self.cross_validation_params['n_splits'],
self.experiment_params['runs']
)).tolist()
G.debug('BaseExperiment._random_seed_initializer() done')
def _update_model_params(self):
"""Update random state of :attr:`model_init_params` according to :attr:`current_seed`"""
# TODO: Add this to some workflow in Experiment class. For now it is never used, unless the subclass decides to...
# `model_init_params` initialized to all algorithm hyperparameters - Works even if 'random_state' not explicitly given
try:
if 'random_state' in self.model_init_params:
self.model_init_params['random_state'] = self.current_seed
elif 'seed' in self.model_init_params:
self.model_init_params['seed'] = self.current_seed
else:
G.log('Model has no random_state/seed parameter to update')
# FLAG: HIGH PRIORITY BELOW
# TODO: BELOW IS NOT THE CASE IF MODEL IS NN - SETTING THE GLOBAL RANDOM SEED DOES SOMETHING
# TODO: If this is logged, there is no reason to execute multiple-run-averaging, so don't
# TODO: ... Either 1) Set `runs` = 1 (this would mess with the environment key), or...
# TODO: ... 2) Set the results of all subsequent runs to the results of the first run (this could be difficult)
# FLAG: HIGH PRIORITY ABOVE
except Exception as _ex:
G.log('Failed to update model\'s random_state {}'.format(_ex.__repr__()))
class BaseCVExperiment(BaseExperiment):
def __init__(
self, model_initializer, model_init_params, model_extra_params=None, feature_selector=None,
preprocessing_pipeline=None, preprocessing_params=None, notes=None, do_raise_repeated=False, auto_start=True,
target_metric=None,
):
self._rep = 0
self._fold = 0
self._run = 0
self.current_seed = None
self.train_index = None
self.validation_index = None
self.folds = None
self.fold_train_input = None
self.fold_validation_input = None
self.fold_train_target = None
self.fold_validation_target = None
self.repetition_oof_predictions = None
self.repetition_holdout_predictions = None
self.repetition_test_predictions = None
self.fold_holdout_predictions = None
self.fold_test_predictions = None
self.run_validation_predictions = None
self.run_holdout_predictions = None
self.run_test_predictions = None
#################### Initialize Result Placeholders ####################
# self.full_oof_predictions = None # (n_repeats * runs) intermediate columns
# self.full_test_predictions = 0 # (n_splits * n_repeats * runs) intermediate columns
# self.full_holdout_predictions = 0 # (n_splits * n_repeats * runs) intermediate columns
self.final_oof_predictions = None
self.final_test_predictions = 0
self.final_holdout_predictions = 0
BaseExperiment.__init__(
self, model_initializer, model_init_params, model_extra_params=model_extra_params, feature_selector=feature_selector,
preprocessing_pipeline=preprocessing_pipeline, preprocessing_params=preprocessing_params, notes=notes,
do_raise_repeated=do_raise_repeated, auto_start=auto_start, target_metric=target_metric,
)
def _additional_preparation_steps(self):
"""Perform additional preparation tasks prior to initializing random seeds and beginning initial preprocessing"""
self._initialize_folds()
@abstractmethod
def _initialize_folds(self):
raise NotImplementedError()
def execute(self):
self.cross_validation_workflow()
def cross_validation_workflow(self):
"""Execute workflow for cross-validation process, consisting of the following tasks: 1) Create train and validation split
indices for all folds, 2) Iterate through folds, performing cv_fold_workflow for each, 3) Average accumulated predictions
over fold splits, 4) Evaluate final predictions, 5) Format final predictions to prepare for saving"""
self.on_experiment_start()
cv_indices = self.folds.split(self.train_input_data, self.train_target_data.iloc[:, 0])
new_shape = (self.cross_validation_params.get('n_repeats', 1), self.cross_validation_params['n_splits'], 2)
reshaped_indices = np.reshape(np.array(list(cv_indices)), new_shape)
for self._rep, repetition_indices in enumerate(reshaped_indices.tolist()):
self.on_repetition_start()
for self._fold, (self.train_index, self.validation_index) in enumerate(repetition_indices):
self.cv_fold_workflow()
self.on_repetition_end()
self.on_experiment_end()
G.log('')
##################################################
# Fold Workflow Methods:
##################################################
def on_fold_start(self):
"""Override :meth:`on_fold_start` tasks organized by :class:`experiment_core.ExperimentMeta`, consisting of: 1) Log fold
start, 2) Execute original tasks, 3) Split train and validation data"""
super().on_fold_start()
#################### Split Train and Validation Data ####################
self.fold_train_input = self.train_input_data.iloc[self.train_index, :].copy()
self.fold_validation_input = self.train_input_data.iloc[self.validation_index, :].copy()
self.fold_train_target = self.train_target_data.iloc[self.train_index].copy()
self.fold_validation_target = self.train_target_data.iloc[self.validation_index].copy()
def cv_fold_workflow(self):
"""Execute workflow for individual fold, consisting of the following tasks: Execute overridden :meth:`on_fold_start`
tasks, 2) Perform cv_run_workflow for each run, 3) Execute overridden :meth:`on_fold_end` tasks"""
self.on_fold_start()
# TODO: Call self.intra_cv_preprocessing() - Ensure the 4 fold input/target attributes (from on_fold_start) are changed
for self._run in range(self.experiment_params.get('runs', 1)):
self.cv_run_workflow()
self.on_fold_end()
##################################################
# Run Workflow Methods:
##################################################
def on_run_start(self):
"""Override :meth:`on_run_start` tasks organized by :class:`experiment_core.ExperimentMeta`, consisting of: 1) Set random
seed and update model parameters according to current seed, 2) Log run start, 3) Execute original tasks"""
self.current_seed = self.experiment_params['random_seeds'][self._rep][self._fold][self._run]
np.random.seed(self.current_seed)
self._update_model_params()
super().on_run_start()
def cv_run_workflow(self):
"""Execute workflow for individual run, consisting of the following tasks: 1) Execute overridden :meth:`on_run_start`
tasks, 2) Initialize and fit Model, 3) Execute overridden :meth:`on_run_end` tasks"""
self.on_run_start()
self.model = model_selector(self.model_initializer)(
self.model_initializer, self.model_init_params, self.model_extra_params,
train_input=self.fold_train_input, train_target=self.fold_train_target,
validation_input=self.fold_validation_input, validation_target=self.fold_validation_target,
do_predict_proba=self.do_predict_proba, target_metric=self.target_metric, metrics_map=self.metrics_map,
)
self.model.fit()
self.on_run_end()
##################################################
# Core CV Experiment Classes:
##################################################
class CrossValidationExperiment(BaseCVExperiment, metaclass=ExperimentMeta):
def __init__(
self, model_initializer, model_init_params, model_extra_params=None, feature_selector=None,
preprocessing_pipeline=None, preprocessing_params=None, notes=None, do_raise_repeated=False, auto_start=True,
target_metric=None,
):
BaseCVExperiment.__init__(
self, model_initializer, model_init_params, model_extra_params=model_extra_params, feature_selector=feature_selector,
preprocessing_pipeline=preprocessing_pipeline, preprocessing_params=preprocessing_params, notes=notes,
do_raise_repeated=do_raise_repeated, auto_start=auto_start, target_metric=target_metric,
)
def _initialize_folds(self):
"""Initialize :attr:`folds` according to cross_validation_type and :attr:`cross_validation_params`"""
cross_validation_type = self.experiment_params['cross_validation_type'] # Allow failure
if not isclass(cross_validation_type):
raise TypeError(F'Expected a class to perform cross-validation. Received: {type(cross_validation_type)}')
try:
_split_method = getattr(cross_validation_type, 'split')
if not callable(_split_method):
raise TypeError('`cross_validation_type` must implement a callable :meth:`split`')
except AttributeError:
raise AttributeError('`cross_validation_type` must be a class that implements :meth:`split`')
self.folds = cross_validation_type(**self.cross_validation_params)
##################################################
# Other Experiment Classes:
##################################################
class RepeatedCVExperiment(BaseCVExperiment, metaclass=ExperimentMeta):
def __init__(
self, model_initializer, model_init_params, model_extra_params=None, feature_selector=None,
preprocessing_pipeline=None, preprocessing_params=None, notes=None, do_raise_repeated=False, auto_start=True,
target_metric=None,
):
BaseCVExperiment.__init__(
self, model_initializer, model_init_params, model_extra_params=model_extra_params, feature_selector=feature_selector,
preprocessing_pipeline=preprocessing_pipeline, preprocessing_params=preprocessing_params, notes=notes,
do_raise_repeated=do_raise_repeated, auto_start=auto_start, target_metric=target_metric,
)
def _initialize_folds(self):
"""Initialize :attr:`folds` according to cross_validation_type and :attr:`cross_validation_params`"""
cross_validation_type = self.experiment_params.get('cross_validation_type', 'repeatedkfold').lower()
if cross_validation_type in ('stratifiedkfold', 'repeatedstratifiedkfold'):
self.folds = RepeatedStratifiedKFold(**self.cross_validation_params)
else:
self.folds = RepeatedKFold(**self.cross_validation_params)
class StandardCVExperiment(BaseCVExperiment, metaclass=ExperimentMeta):
def __init__(
self, model_initializer, model_init_params, model_extra_params=None, feature_selector=None,
preprocessing_pipeline=None, preprocessing_params=None, notes=None, do_raise_repeated=False, auto_start=True,
target_metric=None,
):
BaseCVExperiment.__init__(
self, model_initializer, model_init_params, model_extra_params=model_extra_params, feature_selector=feature_selector,
preprocessing_pipeline=preprocessing_pipeline, preprocessing_params=preprocessing_params, notes=notes,
do_raise_repeated=do_raise_repeated, auto_start=auto_start, target_metric=target_metric,
)
def _initialize_folds(self):
"""Initialize :attr:`folds` according to cross_validation_type and :attr:`cross_validation_params`"""
cross_validation_type = self.experiment_params.get('cross_validation_type', 'kfold').lower()
if cross_validation_type == 'stratifiedkfold':
self.folds = StratifiedKFold(**self.cross_validation_params)
else:
self.folds = KFold(**self.cross_validation_params)
# class NoValidationExperiment(BaseExperiment):
# pass
if __name__ == '__main__':
pass
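# A minimal usage sketch (added for illustration, not from the original
# source). It assumes an active HyperparameterHunter-style Environment has
# already populated `experiment_params`/`cross_validation_params`, and that
# scikit-learn is installed; the metric tuple below is illustrative.
#
# from sklearn.ensemble import RandomForestClassifier
#
# experiment = CrossValidationExperiment(
#     model_initializer=RandomForestClassifier,
#     model_init_params=dict(n_estimators=100),
#     target_metric=('oof', 'roc_auc_score'),
# )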
|
// IsFile reports whether path exists and is a regular file
// (not a directory).
func IsFile(path string) bool {
	stat, err := os.Stat(path)
	if err != nil { // covers os.IsNotExist and any other Stat failure
		return false
	}
	return stat.Mode().IsRegular()
} |
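// A minimal usage sketch (added for illustration; `cfgPath` is a
// hypothetical variable): skip anything that is not a regular file
// before reading it.
//
//	if IsFile(cfgPath) {
//		data, err := os.ReadFile(cfgPath)
//		_ = data
//		_ = err
//	}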
/**
* Renaming Builder class field with parent class field.
*/
public class BuilderFieldRenameParticipant extends RenameParticipant {
private ICompilationUnit unit;
private String oldName;
private IField field;
private IField builderField;
private IMethod builderConstr;
private IMethod builderSetter;
private IMethod parentConstr;
/**
* @return false if could not participate
*/
@Override
protected boolean initialize(Object element) {
if (element instanceof IField) {
field = (IField) element;
oldName = field.getElementName();
unit = field.getCompilationUnit();
if (field.getParent() instanceof IType) {
IType type = (IType) field.getParent();
try {
for (IMethod m : type.getMethods()) {
if (m.getElementName().equals(type.getElementName()) && m.getParameterTypes().length == 1
&& "QBuilder;".equals(m.getParameterTypes()[0])) {
parentConstr = m;
}
}
IType builderClass = null;
for (IType mt : type.getTypes()) {
if ("Builder".equals(mt.getElementName())) {
builderClass = mt;
}
}
if (builderClass != null) {
for (IField bf : builderClass.getFields()) {
if (bf.getElementName().equals(field.getElementName())) {
builderField = bf;
}
}
for (IMethod bm : builderClass.getMethods()) {
if (bm.getElementName().equals("Builder") && bm.getParameterTypes().length == 1) {
builderConstr = bm;
}
else if (bm.getElementName().equals(field.getElementName()) && bm.getParameterTypes().length == 1) {
builderSetter = bm;
}
}
}
} catch (JavaModelException e) {
ErrorLog.warn("Checking child classes", e);
}
}
}
return builderField != null && builderSetter != null && builderConstr != null && parentConstr != null;
}
@Override
public String getName() {
return "Builder Field Renamer";
}
@Override
public RefactoringStatus checkConditions(IProgressMonitor pm,
CheckConditionsContext context) throws OperationCanceledException {
return new RefactoringStatus();
}
@Override
public Change createChange(IProgressMonitor pm)
throws CoreException, OperationCanceledException {
TextChange change = getTextChange(unit);
if (change == null) {
return null;
}
String newName = getArguments().getNewName();
// assignment in setter method in builder class
{
int start = builderSetter.getSourceRange().getOffset();
int nameStart = builderSetter.getSource().indexOf(oldName);
int index = builderSetter.getSource().indexOf(oldName, nameStart+oldName.length());
change.addEdit(new ReplaceEdit(start+index, oldName.length(), newName));
}
// setter method name in builder class
{
ISourceRange nameRange = builderSetter.getNameRange();
change.addEdit(new ReplaceEdit(nameRange.getOffset(), nameRange.getLength(), newName));
}
// assignment in builder class constructor
{
int start = builderConstr.getSourceRange().getOffset();
int index = builderConstr.getSource().indexOf(oldName);
change.addEdit(new ReplaceEdit(start+index, oldName.length(), newName));
}
// field declaration in builder class
{
ISourceRange nameRange = builderField.getNameRange();
change.addEdit(new ReplaceEdit(nameRange.getOffset(), nameRange.getLength(), newName));
}
// assignment in parent class constructor
{
int start = parentConstr.getSourceRange().getOffset();
int nameStart = parentConstr.getSource().indexOf(oldName);
int index = parentConstr.getSource().indexOf(oldName, nameStart+oldName.length());
change.addEdit(new ReplaceEdit(start+index, oldName.length(), newName));
}
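        // Note: the edits above were applied to the shared TextChange that the
        // rename processor manages (via getTextChange(unit)), so returning null
        // is the expected pattern here; no separate Change object is required.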
return null;
}
} |
import { Component, EventEmitter, Input, OnInit, Output } from '@angular/core';
import { AlertService } from 'app/core/alert/alert.service';
import { TUM_USERNAME_REGEX } from 'app/app.constants';
import { AccountService } from 'app/core/auth/account.service';
import { Course } from 'app/entities/course.model';
import { CourseManagementService } from '../../course/manage/course-management.service';
@Component({
selector: 'jhi-course-registration-selector',
templateUrl: './course-registration-selector.component.html',
})
export class CourseRegistrationSelectorComponent implements OnInit {
@Input() courses: Course[];
@Output() courseRegistered = new EventEmitter();
public coursesToSelect: Course[] = [];
public courseToRegister: Course | undefined;
public isTumStudent = false;
showCourseSelection = false;
addedSuccessful = false;
loading = false;
constructor(private accountService: AccountService, private courseService: CourseManagementService, private jhiAlertService: AlertService) {}
ngOnInit(): void {
this.accountService.identity().then((user) => {
this.isTumStudent = !!user!.login!.match(TUM_USERNAME_REGEX);
});
}
private onError(error: string) {
this.jhiAlertService.error(error, null, undefined);
}
trackCourseById(index: number, item: Course) {
return item.id;
}
loadAndFilterCourses() {
return new Promise((resolve, reject) => {
this.courseService.findAllToRegister().subscribe(
(registerRes) => {
this.coursesToSelect = registerRes.body!.filter((course) => {
return !this.courses.find((el) => el.id === course.id);
});
resolve();
},
(response: string) => reject(response),
);
});
}
startRegistration() {
this.loading = true;
this.loadAndFilterCourses()
.then(() => {
this.loading = false;
this.showCourseSelection = true;
if (this.coursesToSelect.length === 0) {
setTimeout(() => {
this.courseToRegister = undefined;
this.showCourseSelection = false;
}, 3000);
}
})
.catch(() => {
this.loading = false;
this.courseToRegister = undefined;
this.showCourseSelection = false;
});
}
cancelRegistration() {
this.courseToRegister = undefined;
this.showCourseSelection = false;
}
registerForCourse() {
if (this.courseToRegister) {
this.showCourseSelection = false;
this.loading = true;
this.courseService.registerForCourse(this.courseToRegister.id).subscribe(
() => {
this.addedSuccessful = true;
this.loading = false;
setTimeout(() => {
this.courseToRegister = undefined;
this.addedSuccessful = false;
this.coursesToSelect = [];
}, 3000);
this.courseRegistered.emit();
},
(error) => {
console.log(error);
this.loading = false;
this.courseToRegister = undefined;
},
);
}
}
}
|
import React, { useEffect, useState } from 'react';
import { SearchOutlined, DeleteOutlined } from '@ant-design/icons';
import { Input, Pagination, Layout, Card, Tooltip, Spin } from 'antd';
import EmptyComponent from '../empty/empty';
import './list.scss';
const { Content } = Layout;
const { Meta } = Card;
const ListComponent = props => {
const [searchText, setSearchText] = useState('');
const { loading, getData, data, total, history, paginate } = props;
const { push } = history;
useEffect(() => {
setSearchText(paginate.name);
}, []);
const handlerClickSearch = () => {
getData({ ...paginate, name: searchText });
};
const handlerKeyPressSearch = e => {
if (e['keyCode'] === 13 && !loading) {
handlerClickSearch();
}
};
const handlerKeyPress = e => {
if (e['keyCode'] === 13 && !loading) {
getData({ ...paginate, name: searchText });
}
};
const handlerClickClearSearch = () => {
getData({
...paginate,
limit: 10,
offset: 0,
name: '',
});
};
const handlerKeyPressClearSearch = e => {
if (e['keyCode'] === 13 && !loading) {
handlerClickClearSearch();
}
};
const handleChange = e => {
setSearchText(e.currentTarget.value);
};
const handleChangePagination = (offset, limit) => {
offset = offset - 1;
getData({ ...paginate, limit, offset, name: searchText });
};
const handleChangeSize = (offset, limit) => {
offset = 0;
getData({ ...paginate, limit, offset, name: searchText });
};
const renderSearch = () => {
return (
<div className="container-header">
<Input
id="search"
disabled={loading}
className="input-search"
placeholder="Pesquisar..."
value={searchText}
maxLength="255"
onChange={handleChange}
onKeyPress={handlerKeyPress}
/>
<div
className="btn-search"
onClick={handlerClickSearch}
onKeyPress={handlerKeyPressSearch}
tabIndex="0"
aria-label="Pesquisar"
role="button"
>
<SearchOutlined />
</div>
<div
className="btn-search"
onClick={handlerClickClearSearch}
onKeyPress={handlerKeyPressClearSearch}
tabIndex="0"
aria-label="Limpar a pesquisa"
role="button"
>
<DeleteOutlined />
</div>
</div>
);
};
const renderPagination = () => {
return (
<Pagination
total={total}
showTotal={(total, range) => {
return `${range[0]}-${range[1]} of ${total} items`;
}}
pageSize={paginate.limit || 10}
current={paginate.offset + 1 || 1}
defaultCurrent={1}
responsive={true}
showSizeChanger={true}
onChange={handleChangePagination.bind(this)}
onShowSizeChange={handleChangeSize.bind(this)}
/>
);
};
  const handlerKeyPressMore = (item, e) => {
    if (e['keyCode'] === 13 && !loading) {
      handlerClickMore(item);
    }
  };
const handlerClickMore = item => {
push(`/marvel/details/${paginate.type}/${item.id}`);
};
const renderTooltip = title => {
return (
<Tooltip title={title}>
<span>{title}</span>
</Tooltip>
);
};
const renderCards = () => {
return (
<>
{data.map((item, key) => {
return (
<Card
key={key}
hoverable
loading={loading}
style={{ width: 270 }}
extra={
<a
tabIndex="0"
href=""
                    onClick={e => {
                      e.preventDefault(); // stop the empty href from reloading the SPA
                      handlerClickMore(item);
                    }}
onKeyPress={handlerKeyPressMore.bind(this, item)}
>
Mais
</a>
}
title={renderTooltip(
(item.type === 'story' &&
item.originalIssue &&
item.originalIssue.name) ||
item.name ||
item.title ||
item.fullName,
)}
cover={
item.thumbnail && (
<img
alt={item.description}
title={item.name || item.title || item.fullName}
src={`${item.thumbnail.path}.${item.thumbnail.extension}`}
/>
)
}
>
<Meta
title={''}
description={
item.description ||
(item.type === 'story' && item.title) ||
'Not description.'
}
/>
{/* <Meta title={item.name || item.title || item.fullName} description={item.description} /> */}
</Card>
);
})}
</>
);
};
const render = () => {
return (
<div className="container-layout-content">
<Content style={{ padding: '15px 50px' }}>
<div className="container">{renderSearch()}</div>
</Content>
<Content style={{ padding: '5px 50px' }}>
<div className="container">
{loading && (
<div className="site-layout-content">
<div className="container-spin">
<Spin tip="Loading..." size="large" />
</div>
</div>
)}
{!loading && data && data.length > 0 && renderCards()}
{!loading && data && data.length > 0 && renderPagination()}
{!loading && data && data.length === 0 && <EmptyComponent />}
</div>
</Content>
</div>
);
};
return <>{render()}</>;
};
export default ListComponent;
|
/*
* Copyright (c) 2012 Clément Bœsch
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVFORMAT_SUBTITLES_H
#define AVFORMAT_SUBTITLES_H
#include <stdint.h>
#include <stddef.h>
#include "avformat.h"
#include "libavutil/bprint.h"
enum sub_sort {
SUB_SORT_TS_POS = 0, ///< sort by timestamps, then position
SUB_SORT_POS_TS, ///< sort by position, then timestamps
};
enum ff_utf_type {
FF_UTF_8, // or other 8 bit encodings
FF_UTF16LE,
FF_UTF16BE,
};
typedef struct {
int type;
AVIOContext *pb;
unsigned char buf[8];
int buf_pos, buf_len;
AVIOContext buf_pb;
} FFTextReader;
/**
* Initialize the FFTextReader from the given AVIOContext. This function will
* read some bytes from pb, and test for UTF-8 or UTF-16 BOMs. Further accesses
* to FFTextReader will read more data from pb.
* If s is not NULL, the user will be warned if a UTF-16 conversion takes place.
*
* The purpose of FFTextReader is to transparently convert read data to UTF-8
* if the stream had a UTF-16 BOM.
*
* @param s Pointer to provide av_log context
* @param r object which will be initialized
* @param pb stream to read from (referenced as long as FFTextReader is in use)
*/
void ff_text_init_avio(void *s, FFTextReader *r, AVIOContext *pb);
/**
* Similar to ff_text_init_avio(), but sets it up to read from a bounded buffer.
*
* @param r object which will be initialized
* @param buf buffer to read from (referenced as long as FFTextReader is in use)
* @param size size of buf
*/
void ff_text_init_buf(FFTextReader *r, void *buf, size_t size);
/**
* Return the byte position of the next byte returned by ff_text_r8(). For
* UTF-16 source streams, this will return the original position, but it will
* be incorrect if a codepoint was only partially read with ff_text_r8().
*/
int64_t ff_text_pos(FFTextReader *r);
/**
* Return the next byte. The return value is always 0 - 255. Returns 0 on EOF.
* If the source stream is UTF-16, this reads from the stream converted to
* UTF-8. On invalid UTF-16, 0 is returned.
*/
int ff_text_r8(FFTextReader *r);
/**
* Return non-zero if EOF was reached.
*/
int ff_text_eof(FFTextReader *r);
/**
* Like ff_text_r8(), but don't remove the byte from the buffer.
*/
int ff_text_peek_r8(FFTextReader *r);
/**
* Read the given number of bytes (in UTF-8). On error or EOF, \0 bytes are
* written.
*/
void ff_text_read(FFTextReader *r, char *buf, size_t size);
typedef struct {
AVPacket *subs; ///< array of subtitles packets
int nb_subs; ///< number of subtitles packets
int allocated_size; ///< allocated size for subs
int current_sub_idx; ///< current position for the read packet callback
enum sub_sort sort; ///< sort method to use when finalizing subtitles
int keep_duplicates; ///< set to 1 to keep duplicated subtitle events
} FFDemuxSubtitlesQueue;
/**
* Insert a new subtitle event.
*
* @param event the subtitle line, may not be zero terminated
* @param len the length of the event (in strlen() sense, so without '\0')
* @param merge set to 1 if the current event should be concatenated with the
* previous one instead of adding a new entry, 0 otherwise
*/
AVPacket *ff_subtitles_queue_insert(FFDemuxSubtitlesQueue *q,
const uint8_t *event, size_t len, int merge);
/**
* Set missing durations, sort subtitles by PTS (and then byte position), and
* drop duplicated events.
*/
void ff_subtitles_queue_finalize(void *log_ctx, FFDemuxSubtitlesQueue *q);
/**
* Generic read_packet() callback for subtitles demuxers using this queue
* system.
*/
int ff_subtitles_queue_read_packet(FFDemuxSubtitlesQueue *q, AVPacket *pkt);
/**
* Update current_sub_idx to emulate a seek. Except the first parameter, it
* matches AVInputFormat->read_seek2 prototypes.
*/
int ff_subtitles_queue_seek(FFDemuxSubtitlesQueue *q, AVFormatContext *s, int stream_index,
int64_t min_ts, int64_t ts, int64_t max_ts, int flags);
/**
* Remove and destroy all the subtitles packets.
*/
void ff_subtitles_queue_clean(FFDemuxSubtitlesQueue *q);
/**
* SMIL helper to load next chunk ("<...>" or untagged content) in buf.
*
* @param c cached character, to avoid a backward seek
*/
int ff_smil_extract_next_text_chunk(FFTextReader *tr, AVBPrint *buf, char *c);
/**
* SMIL helper to point on the value of an attribute in the given tag.
*
* @param s SMIL tag ("<...>")
* @param attr the attribute to look for
*/
const char *ff_smil_get_attr_ptr(const char *s, const char *attr);
/**
* @brief Same as ff_subtitles_read_text_chunk(), but read from an AVIOContext.
*/
void ff_subtitles_read_chunk(AVIOContext *pb, AVBPrint *buf);
/**
* @brief Read a subtitles chunk from FFTextReader.
*
* A chunk is defined by a multiline "event", ending with a second line break.
* The trailing line breaks are trimmed. CRLF are supported.
* Example: "foo\r\nbar\r\n\r\nnext" will print "foo\r\nbar" into buf, and pb
* will focus on the 'n' of the "next" string.
*
* @param tr I/O context
* @param buf an initialized buf where the chunk is written
*
* @note buf is cleared before writing into it.
*/
void ff_subtitles_read_text_chunk(FFTextReader *tr, AVBPrint *buf);
/**
* Get the number of characters to increment to jump to the next line, or to
* the end of the string.
* The function handles the following line breaks schemes:
* LF, CRLF (MS), or standalone CR (old MacOS).
*/
static av_always_inline int ff_subtitles_next_line(const char *ptr)
{
int n = strcspn(ptr, "\r\n");
ptr += n;
if (*ptr == '\r') {
ptr++;
n++;
}
if (*ptr == '\n')
n++;
return n;
}
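/*
 * Worked example (illustrative note, not from the original header):
 * for ptr = "foo\r\nbar", strcspn() yields 3, then the '\r' and '\n'
 * add one each, so the function returns 5, the offset of 'b' in "bar".
 * For "foo\rbar" (standalone CR, old MacOS) it returns 4.
 */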
/**
* Read a line of text. Discards line ending characters.
* The function handles the following line breaks schemes:
* LF, CRLF (MS), or standalone CR (old MacOS).
*
* Returns the number of bytes written to buf. Always writes a terminating 0,
* similar as with snprintf.
*
* @note returns a negative error code if a \0 byte is found
*/
ptrdiff_t ff_subtitles_read_line(FFTextReader *tr, char *buf, size_t size);
#endif /* AVFORMAT_SUBTITLES_H */
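/*
 * A hedged usage sketch (added for illustration; not part of the original
 * header). It shows the typical demuxer pattern implied by the
 * declarations above: insert events in read_header(), finalize once, then
 * serve packets from the queue. `MyDemuxContext`, `parse_one_event`,
 * `pts` and `duration` are hypothetical placeholders.
 *
 * static int my_read_header(AVFormatContext *s)
 * {
 *     MyDemuxContext *ctx = s->priv_data;
 *     FFTextReader tr;
 *     ff_text_init_avio(s, &tr, s->pb);
 *     while (!ff_text_eof(&tr)) {
 *         uint8_t buf[4096];
 *         size_t len;
 *         int64_t pts, duration;
 *         if (parse_one_event(&tr, buf, &len, &pts, &duration) < 0)
 *             break;
 *         AVPacket *pkt = ff_subtitles_queue_insert(&ctx->q, buf, len, 0);
 *         if (!pkt)
 *             return AVERROR(ENOMEM);
 *         pkt->pts = pts;
 *         pkt->duration = duration;
 *     }
 *     ff_subtitles_queue_finalize(s, &ctx->q);
 *     return 0;
 * }
 */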
|
// newReportXML returns a new Report based on the contents of an
// XML-formatted valgrind report.
func newReportXML(output *outputElement) *Report {
report := &Report{
Pid: output.Pid,
Ppid: output.Ppid,
Tool: output.Tool,
}
for _, node := range output.Errors {
report.Errors = append(report.Errors, newErrorInfoXML(node))
}
if len(output.ErrorCounts) > 0 {
report.ErrorCounts = make(map[string]int)
for _, x := range output.ErrorCounts {
if x.Name != "" {
report.ErrorCounts[x.Name] = x.Count
} else {
report.ErrorCounts[x.Unique] = x.Count
}
}
}
return report
} |
// HW08/libs/WWJ/code/src/gov/nasa/worldwind/formats/tiff/Tiff.java
/*
* Copyright (C) 2012 United States Government as represented by the Administrator of the
* National Aeronautics and Space Administration.
* All Rights Reserved.
*/
package gov.nasa.worldwind.formats.tiff;
/**
* @author <NAME>
* @version $Id: Tiff.java 1171 2013-02-11 21:45:02Z dcollins $
*/
public interface Tiff
{
public static final int Undefined = 0;
public interface Type
{
public static final int BYTE = 1;
public static final int ASCII = 2;
public static final int SHORT = 3;
public static final int LONG = 4;
public static final int RATIONAL = 5;
public static final int SBYTE = 6;
public static final int UNDEFINED = 7;
public static final int SSHORT = 8;
public static final int SLONG = 9;
public static final int SRATIONAL = 10;
public static final int FLOAT = 11;
public static final int DOUBLE = 12;
}
public interface Tag
{
// Baseline Tiff 6.0 tags...
public static final int IMAGE_WIDTH = 256;
public static final int IMAGE_LENGTH = 257;
public static final int BITS_PER_SAMPLE = 258;
public static final int COMPRESSION = 259;
public static final int PHOTO_INTERPRETATION = 262;
public static final int DOCUMENT_NAME = 269;
public static final int IMAGE_DESCRIPTION = 270;
public static final int DEVICE_MAKE = 271; // manufacturer of the scanner or video digitizer
public static final int DEVICE_MODEL = 272; // model name/number of the scanner or video digitizer
public static final int STRIP_OFFSETS = 273;
public static final int ORIENTATION = 274;
public static final int SAMPLES_PER_PIXEL = 277;
public static final int ROWS_PER_STRIP = 278;
public static final int STRIP_BYTE_COUNTS = 279;
public static final int MIN_SAMPLE_VALUE = 280;
public static final int MAX_SAMPLE_VALUE = 281;
public static final int X_RESOLUTION = 282;
public static final int Y_RESOLUTION = 283;
public static final int PLANAR_CONFIGURATION = 284;
public static final int RESOLUTION_UNIT = 296;
public static final int SOFTWARE_VERSION = 305; // Name and release # of the software that created the image
public static final int DATE_TIME = 306; // uses format "YYYY:MM:DD HH:MM:SS"
public static final int ARTIST = 315;
        public static final int COPYRIGHT = 315;            // duplicates ARTIST here; note that the TIFF 6.0 Copyright tag is actually 33432
public static final int TIFF_PREDICTOR = 317;
public static final int COLORMAP = 320;
public static final int TILE_WIDTH = 322;
public static final int TILE_LENGTH = 323;
public static final int TILE_OFFSETS = 324;
public static final int TILE_COUNTS = 325;
// Tiff extensions...
public static final int SAMPLE_FORMAT = 339; // SHORT array of samplesPerPixel size
}
// The orientation of the image with respect to the rows and columns.
public interface Orientation
{
// 1 = The 0th row represents the visual top of the image,
// and the 0th column represents the visual left-hand side.
public static final int Row0_IS_TOP__Col0_IS_LHS = 1;
//2 = The 0th Row represents the visual top of the image,
// and the 0th column represents the visual right-hand side.
public static final int Row0_IS_TOP__Col0_IS_RHS = 2;
//3 = The 0th row represents the visual bottom of the image,
// and the 0th column represents the visual right-hand side.
public static final int Row0_IS_BOTTOM__Col0_IS_RHS = 3;
//4 = The 0th row represents the visual bottom of the image,
// and the 0th column represents the visual left-hand side.
public static final int Row0_IS_BOTTOM__Col0_IS_LHS = 4;
//5 = The 0th row represents the visual left-hand side of the image,
// and the 0th column represents the visual top.
public static final int Row0_IS_LHS__Col0_IS_TOP = 5;
//6 = The 0th row represents the visual right-hand side of the image,
// and the 0th column represents the visual top.
public static final int Row0_IS_RHS__Col0_IS_TOP = 6;
//7 = The 0th row represents the visual right-hand side of the image,
// and the 0th column represents the visual bottom.
public static final int Row0_IS_RHS__Col0_IS_BOTTOM = 7;
public static final int DEFAULT = Row0_IS_TOP__Col0_IS_LHS;
}
public interface BitsPerSample
{
public static final int MONOCHROME_BYTE = 8;
public static final int MONOCHROME_UINT8 = 8;
public static final int MONOCHROME_UINT16 = 16;
public static final int ELEVATIONS_INT16 = 16;
public static final int ELEVATIONS_FLOAT32 = 32;
public static final int RGB = 24;
public static final int YCbCr = 24;
public static final int CMYK = 32;
}
public interface SamplesPerPixel
{
public static final int MONOCHROME = 1;
public static final int RGB = 3;
public static final int RGBA = 4;
public static final int YCbCr = 3;
public static final int CMYK = 4;
}
// The color space of the image data
public interface Photometric
{
public static final int Undefined = -1;
// 0 = WhiteIsZero
// For bilevel and grayscale images: 0 is imaged as white.
// 2**BitsPerSample-1 is imaged as black.
// This is the normal value for Compression=2
public static final int Grayscale_WhiteIsZero = 0;
// 1 = BlackIsZero
// For bilevel and grayscale images: 0 is imaged as black.
// 2**BitsPerSample-1 is imaged as white.
// If this value is specified for Compression=2, the image should display and print reversed.
public static final int Grayscale_BlackIsZero = 1;
// 2 = RGB
// The RGB value of (0,0,0) represents black, (255,255,255) represents white,
// assuming 8-bit components.
// Note! For PlanarConfiguration=1, the components are stored in the indicated order:
// first Red, then Green, then Blue.
// For PlanarConfiguration = 2, the StripOffsets for the component planes are stored
// in the indicated order: first the Red component plane StripOffsets,
// then the Green plane StripOffsets, then the Blue plane StripOffsets.
public static final int Color_RGB = 2;
// 3 = Palette color
// In this model, a color is described with a single component.
// The value of the component is used as an index into the red, green and blue curves in
// the ColorMap field to retrieve an RGB triplet that defines the color.
//
// Note!!
// When PhotometricInterpretation=3 is used, ColorMap must be present and SamplesPerPixel must be 1.
public static final int Color_Palette = 3;
// 4 = Transparency Mask.
// This means that the image is used to define an irregularly shaped region of another
// image in the same TIFF file.
//
// SamplesPerPixel and BitsPerSample must be 1.
//
// PackBits compression is recommended.
// The 1-bits define the interior of the region; the 0-bits define the exterior of the region.
//
// A reader application can use the mask to determine which parts of the image to
// display. Main image pixels that correspond to 1-bits in the transparency mask are
// imaged to the screen or printer, but main image pixels that correspond to 0-bits in
// the mask are not displayed or printed.
// The image mask is typically at a higher resolution than the main image, if the
// main image is grayscale or color so that the edges can be sharp.
public static final int Transparency_Mask = 4;
public static final int CMYK = 5;
public static final int YCbCr = 6;
// There is no default for PhotometricInterpretation, and it is required.
}
public interface Compression
{
public static final int NONE = 1;
public static final int LZW = 5;
public static final int JPEG = 6;
public static final int PACKBITS = 32773;
}
public interface PlanarConfiguration
{
// CHUNKY
// The component values for each pixel are stored contiguously.
// The order of the components within the pixel is specified by PhotometricInterpretation.
// For example, for RGB data, the data is stored as RGBRGBRGB...
public static final int CHUNKY = 1;
// PLANAR
// The components are stored in separate component planes.
// The values in StripOffsets and StripByteCounts are then arranged as
// a 2-dimensional array, with SamplesPerPixel rows and StripsPerImage columns.
// (All of the columns for row 0 are stored first, followed by the columns of row 1, and so on.)
//
// PhotometricInterpretation describes the type of data stored in each component plane.
// For example, RGB data is stored with the Red components in one component plane,
// the Green in another, and the Blue in another.
//
// Note!
// If SamplesPerPixel is 1, PlanarConfiguration is irrelevant, and need not be included.
public static final int PLANAR = 2;
public static final int DEFAULT = CHUNKY;
}
public interface ResolutionUnit
{
public static final int NONE = 1;
public static final int INCH = 2;
public static final int CENTIMETER = 3;
}
public interface SampleFormat
{
public static final int UNSIGNED = 1;
public static final int SIGNED = 2;
public static final int IEEEFLOAT = 3;
public static final int UNDEFINED = 4;
}
}
|
def seek(self, offset, whence=0):
if whence == 1:
raise ValueError('Relative seek is not supported for '
'SeekableUnicodeStreamReader -- consider '
'using char_seek_forward() instead.')
self.stream.seek(offset, whence)
self.linebuffer = None
        self.bytebuffer = b''
self._rewind_numchars = None
self._rewind_checkpoint = self.stream.tell() |
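        # A hedged usage note (added): absolute seeks reset the decode
        # buffers above; for relative forward movement, the class's
        # char_seek_forward() is the supported alternative, e.g.:
        #
        #   reader.seek(0)                # rewind (absolute seek is fine)
        #   reader.char_seek_forward(10)  # instead of reader.seek(10, 1)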
//-----------------------------------------------------------------------------
// S_PrecacheSound
//
// Reserve space for the name of the sound in a global array.
// Load the data for the non-streaming sound. Streaming sounds
// defer loading of data until just before playback.
//-----------------------------------------------------------------------------
CSfxTable *S_PrecacheSound( const char *name )
{
if ( !g_AudioDevice )
return NULL;
if ( !g_AudioDevice->IsActive() )
return NULL;
CSfxTable *sfx = S_FindName( name, NULL );
if ( sfx )
{
SoundError soundError;
S_LoadSound( sfx, NULL, soundError );
}
else
{
Assert( !"S_PrecacheSound: Failed to create sfx" );
}
return sfx;
} |
// reB64Value matches runs that look like base64-encoded values; compiled
// once at package scope so Substitute does not recompile it on every call.
var reB64Value = regexp.MustCompile(`[A-Za-z0-9\+\/\=]{10,}`)

// Substitute takes a whole multi-line []byte and finds appropriate substitutions
func (s *Substitutor) Substitute(input []byte) ([]byte, error) {
	postbase64input := reB64Value.ReplaceAllFunc(input, s.substitutebase64)
	return s.substituteraw(postbase64input)
}
/**
* Removes first occurrences of the constants
* associated with the expression
*
* @param constants the constants (variadic parameters)
* comma separated list
*
* @see Constant
*/
public void removeConstants(Constant... constants) {
for (Constant constant : constants) {
if (constant != null) {
constantsList.remove(constant);
constant.removeRelatedExpression(this);
setExpressionModifiedFlag();
}
}
} |
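    // A brief usage sketch (illustrative; `expression` and the constant
    // definition string are assumptions, not from the original source):
    //
    //   Constant tau = new Constant("tau = 6.28318");
    //   expression.addConstants(tau);
    //   expression.removeConstants(tau); // detaches tau and flags the expression as modified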
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
ll n;
vector<string> m, a, r, c, h;
ll M, A, R, C, H;
string s;
int main() {
cin >> n;
for (int i=0; i<n; i++) {
cin >> s;
switch (s[0]) {
case 'M': m.push_back(s); break;
case 'A': a.push_back(s); break;
case 'R': r.push_back(s); break;
case 'C': c.push_back(s); break;
case 'H': h.push_back(s); break;
default: continue;
}
}
M=m.size(); A=a.size(); R=r.size(); C=c.size(); H=h.size();
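    // Sum over all C(5,3)=10 ways to pick 3 distinct initials from
    // {M,A,R,C,H}; each product counts the teams for that initial triple.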
cout << M*A*R+M*A*C+M*A*H+M*R*C+M*R*H+M*C*H+A*R*C+A*R*H+A*C*H+R*C*H;
} |
Inhibition of Aflatoxin Biosynthesis by Organophosphorus Compounds.
The effect of a range of organophosphorus and various other compounds on production of aflatoxin by Aspergillus flavus was investigated. Five organophosphorus compounds (Chlormephos, Ciodrin, Naled, Phosdrin and Trichlorphon) at concentrations of 20 and 100 μg/ml of culture fluid were found to have activity similar to Dichlorvos, in that they lowered the level of aflatoxin produced and caused formation of several anthraquinone pigments. Two of these pigments have not previously been described: one was named Versicol and a suggested structure is presented, whilst the other compound was shown to be its acetate derivative. A rationale is suggested for the elements of structure that are necessary for an organophosphorus compound to have Dichlorvos-type activity. Two unrelated compounds, ammonium nitrate and tridecanone, were also found to elicit Dichlorvos-type activity. It is likely that tridecanone or its breakdown products competitively inhibit enzymes involved in aflatoxin biosynthesis. It is possible that this inhibition effect explains the lowering of aflatoxin production in lipid-rich commodities infected by A. flavus.
class SuperTestBaseNoArg {
constructor() {}
}
class SuperTestBaseOneArg {
constructor(public x: number) {}
}
// A ctor with a parameter property.
class SuperTestDerivedParamProps extends SuperTestBaseOneArg {
constructor(public y: string) {
super(3);
}
}
// A ctor with an initialized property.
class SuperTestDerivedInitializedProps extends SuperTestBaseOneArg {
y: string = 'foo';
constructor() {
super(3);
}
}
// A ctor with a super() but none of the above two details.
class SuperTestDerivedOrdinary extends SuperTestBaseOneArg {
constructor() {
super(3);
}
}
// A class without a ctor, extending a one-arg ctor parent.
class SuperTestDerivedNoCTorNoArg extends SuperTestBaseNoArg {
}
// A class without a ctor, extending a no-arg ctor parent.
class SuperTestDerivedNoCTorOneArg extends SuperTestBaseOneArg {
// NOTE: if this has any properties, we fail to generate it
// properly because we generate a constructor that doesn't know
// how to properly call the parent class's super().
}
interface SuperTestInterface {
foo: number;
}
// A class implementing an interface.
class SuperTestDerivedInterface implements SuperTestInterface {
foo: number;
}
class SuperTestStaticProp extends SuperTestBaseOneArg {
static foo = 3;
}
|
/**
* Saturate the output signal and interleave.
*
* @param q pointer to the COOKContext
* @param out pointer to the output vector
*/
static void saturate_output_float(COOKContext *q, float *out)
{
q->adsp.vector_clipf(out, q->mono_mdct_output + q->samples_per_channel,
FFALIGN(q->samples_per_channel, 8), -1.0f, 1.0f);
} |