Even before you start on a journey through the history of literature, you know some of the stops you'll make on the way: the Epic of Gilgamesh, the Bible, Homer's Iliad and Odyssey, Greek tragedy, Shakespeare, Joyce. And so it comes as no surprise that Jacke Wilson, creator and host of the History of Literature podcast (from ancient epics to contemporary classics), has so far devoted whole episodes, and often more than one, to each of them. A self-described "amateur scholar," Wilson aims with this show, which he launched last October, to take "a fresh look at some of the most compelling examples of creative genius the world has ever known."
Wilson also addresses questions like "How did literature develop? What forms has it taken? And what can we learn from engaging with these works today?" And yet he asks this rhetorical one in The History of Literature's very first episode: "Is it just me, or is literature dying?" The also self-described "wildly unqualified host" admits that he at first tried to create a straightforward, straight-faced march through literary history, but found the result staid and lifeless. And so he loosened up, allowing in not just more of his personality but more of his doubts about the very literary enterprise in the 21st century.
Given that we get so much of our knowledge, human interaction, and pure wordcraft on the internet today, laments Wilson, what remains for novels, stories, poetry, and drama to provide us? As a History of Literature listener, I personally see things differently. The fact that we now have such abundant outlets from which to receive all those other things may strip literature of some of the relevance it once held by default, but it also lifts from literature a considerable burden. Just as the development of photography freed painting from the obligation to ever more faithfully represent reality, literature can now find forms and subjects better suited to the artistic experience that it, and only it, can deliver.
Jorge Luis Borges counts as only one of the writers who grasped the unexplored potential of literature, and Wilson uses one of the occasional episodes that breaks from the linearity of history to discuss the "Garden of Forking Paths" author's thoughts on the meaning of life. He recorded it (listen above) in response to two deaths: that of "Fifth Beatle" George Martin, and even more so that of his uncle. Other relatable parts of Wilson's life come into play in other conversations about writers both ancient and modern, such as the conversation about the works of Graham Greene and whether he can still get as much out of them as he did during his youthful traveling days. Literature, after all, may have no greater value than that it gets us asking questions — a value The History of Literature demonstrates in every episode.
Related Content:
What Are Literature, Philosophy & History For? Alain de Botton Explains with Monty Python-Style Videos
A Crash Course in English Literature: A New Video Series by Best-Selling Author John Green
Entitled Opinions, the “Life and Literature” Podcast That Refuses to Dumb Things Down
The Dead Authors Podcast: H.G. Wells Comically Revives Literary Greats with His Time Machine
The History of Philosophy Without Any Gaps Podcast, Now at 239 Episodes, Expands into Eastern Philosophy
The Complete History of the World (and Human Creativity) in 100 Objects
78 Free Online History Courses: From Ancient Greece to The Modern World
55 Free Online Literature Courses: From Dante and Milton to Kerouac and Tolkien
Based in Seoul, Colin Marshall writes and broadcasts on cities and culture. He’s at work on a book about Los Angeles, A Los Angeles Primer, the video series The City in Cinema, the crowdfunded journalism project Where Is the City of the Future?, and the Los Angeles Review of Books’ Korea Blog. Follow him on Twitter at @colinmarshall or on Facebook.
// Code generated by go-swagger; DO NOT EDIT.
package pod
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
import (
"net/http"
"github.com/go-openapi/runtime"
"github.com/ez-deploy/ezdeploy/models"
)
// CreatePodTicketCreatedCode is the HTTP code returned for type CreatePodTicketCreated
const CreatePodTicketCreatedCode int = 201
/*CreatePodTicketCreated Create Pod Ticket Success, return pod ticket info.
swagger:response createPodTicketCreated
*/
type CreatePodTicketCreated struct {
/*
In: Body
*/
Payload *models.SSHPodTicket `json:"body,omitempty"`
}
// NewCreatePodTicketCreated creates CreatePodTicketCreated with default headers values
func NewCreatePodTicketCreated() *CreatePodTicketCreated {
return &CreatePodTicketCreated{}
}
// WithPayload adds the payload to the create pod ticket created response
func (o *CreatePodTicketCreated) WithPayload(payload *models.SSHPodTicket) *CreatePodTicketCreated {
o.Payload = payload
return o
}
// SetPayload sets the payload to the create pod ticket created response
func (o *CreatePodTicketCreated) SetPayload(payload *models.SSHPodTicket) {
o.Payload = payload
}
// WriteResponse to the client
func (o *CreatePodTicketCreated) WriteResponse(rw http.ResponseWriter, producer runtime.Producer) {
rw.WriteHeader(201)
if o.Payload != nil {
payload := o.Payload
if err := producer.Produce(rw, payload); err != nil {
panic(err) // let the recovery middleware deal with this
}
}
}
// CreatePodTicketForbiddenCode is the HTTP code returned for type CreatePodTicketForbidden
const CreatePodTicketForbiddenCode int = 403
/*CreatePodTicketForbidden Create Pod Ticket Failed, caller does not have permission
swagger:response createPodTicketForbidden
*/
type CreatePodTicketForbidden struct {
/*
In: Body
*/
Payload *models.Error `json:"body,omitempty"`
}
// NewCreatePodTicketForbidden creates CreatePodTicketForbidden with default headers values
func NewCreatePodTicketForbidden() *CreatePodTicketForbidden {
return &CreatePodTicketForbidden{}
}
// WithPayload adds the payload to the create pod ticket forbidden response
func (o *CreatePodTicketForbidden) WithPayload(payload *models.Error) *CreatePodTicketForbidden {
o.Payload = payload
return o
}
// SetPayload sets the payload to the create pod ticket forbidden response
func (o *CreatePodTicketForbidden) SetPayload(payload *models.Error) {
o.Payload = payload
}
// WriteResponse to the client
func (o *CreatePodTicketForbidden) WriteResponse(rw http.ResponseWriter, producer runtime.Producer) {
rw.WriteHeader(403)
if o.Payload != nil {
payload := o.Payload
if err := producer.Produce(rw, payload); err != nil {
panic(err) // let the recovery middleware deal with this
}
}
}
// CreatePodTicketInternalServerErrorCode is the HTTP code returned for type CreatePodTicketInternalServerError
const CreatePodTicketInternalServerErrorCode int = 500
/*CreatePodTicketInternalServerError Server Error
swagger:response createPodTicketInternalServerError
*/
type CreatePodTicketInternalServerError struct {
/*
In: Body
*/
Payload *models.Error `json:"body,omitempty"`
}
// NewCreatePodTicketInternalServerError creates CreatePodTicketInternalServerError with default headers values
func NewCreatePodTicketInternalServerError() *CreatePodTicketInternalServerError {
return &CreatePodTicketInternalServerError{}
}
// WithPayload adds the payload to the create pod ticket internal server error response
func (o *CreatePodTicketInternalServerError) WithPayload(payload *models.Error) *CreatePodTicketInternalServerError {
o.Payload = payload
return o
}
// SetPayload sets the payload to the create pod ticket internal server error response
func (o *CreatePodTicketInternalServerError) SetPayload(payload *models.Error) {
o.Payload = payload
}
// WriteResponse to the client
func (o *CreatePodTicketInternalServerError) WriteResponse(rw http.ResponseWriter, producer runtime.Producer) {
rw.WriteHeader(500)
if o.Payload != nil {
payload := o.Payload
if err := producer.Produce(rw, payload); err != nil {
panic(err) // let the recovery middleware deal with this
}
}
}
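Each generated responder above satisfies the go-openapi middleware.Responder interface through its WriteResponse method, so a service handler simply returns one of them. The following is a minimal wiring sketch rather than generated code: the handler name, its parameters, and the error-handling policy are assumptions for illustration.

package pod

import (
	"github.com/go-openapi/runtime/middleware"

	"github.com/ez-deploy/ezdeploy/models"
)

// createPodTicketHandler sketches how a handler might pick between the
// generated responders; the authorization flag and ticket lookup are stand-ins.
func createPodTicketHandler(authorized bool, ticket *models.SSHPodTicket) middleware.Responder {
	if !authorized {
		// 403: the caller lacks permission.
		return NewCreatePodTicketForbidden().WithPayload(&models.Error{})
	}
	if ticket == nil {
		// 500: ticket creation failed server-side.
		return NewCreatePodTicketInternalServerError().WithPayload(&models.Error{})
	}
	// 201: success, return the pod ticket info.
	return NewCreatePodTicketCreated().WithPayload(ticket)
}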
// UpdateBytes updates an existing []byte Value with the provided value.
// No action is taken if the key does not exist in the map.
// The caller must ensure the []byte passed in is not modified after the call is made;
// sharing the same data across multiple attributes is forbidden.
func (m Map) UpdateBytes(k string, v []byte) {
if av, existing := m.Get(k); existing {
av.SetBytesVal(v)
}
}
package seedu.booking.logic.commands;
import static java.util.Objects.requireNonNull;
import static seedu.booking.logic.parser.CliSyntax.PREFIX_BOOKING_END;
import static seedu.booking.logic.parser.CliSyntax.PREFIX_BOOKING_START;
import static seedu.booking.logic.parser.CliSyntax.PREFIX_DESCRIPTION;
import static seedu.booking.logic.parser.CliSyntax.PREFIX_EMAIL;
import static seedu.booking.logic.parser.CliSyntax.PREFIX_VENUE;
import seedu.booking.logic.commands.exceptions.CommandException;
import seedu.booking.model.Model;
import seedu.booking.model.booking.Booking;
/**
* Adds a booking to the booking system.
*/
public class AddBookingCommand extends Command {
public static final String COMMAND_WORD = "add_booking";
public static final String MESSAGE_USAGE = COMMAND_WORD + ": Adds a booking to the booking system. "
+ "Parameters: "
+ PREFIX_EMAIL + "BOOKER EMAIL "
+ PREFIX_VENUE + "VENUE NAME "
+ PREFIX_DESCRIPTION + "DESCRIPTION "
+ PREFIX_BOOKING_START + "DATETIME "
+ PREFIX_BOOKING_END + "DATETIME\n"
+ "Example: " + COMMAND_WORD + " "
+ PREFIX_EMAIL + "<EMAIL> "
+ PREFIX_VENUE + "Hall "
+ PREFIX_DESCRIPTION + "For FYP Meeting. "
+ PREFIX_BOOKING_START + "2012-01-31 22:59 "
+ PREFIX_BOOKING_END + "2012-01-31 23:59";
public static final String MESSAGE_SUCCESS = "New booking added: %1$s";
public static final String MESSAGE_DUPLICATE_BOOKING = "This booking already exists in the booking system.";
public static final String MESSAGE_INVALID_TIME =
"Invalid timing: The booking's starting time cannot be later than its ending time. ";
public static final String MESSAGE_INVALID_VENUE = "This venue does not exist in the system.";
public static final String MESSAGE_INVALID_PERSON = "This booker does not exist in the system.";
public static final String MESSAGE_OVERLAPPING_BOOKING = "This time slot has been booked.";
private final Booking toAdd;
/**
* Creates an AddBookingCommand to add the specified {@code Booking}.
*/
public AddBookingCommand(Booking booking) {
requireNonNull(booking);
toAdd = booking;
}
@Override
public CommandResult execute(Model model) throws CommandException {
requireNonNull(model);
if (model.hasBooking(toAdd)) {
throw new CommandException(MESSAGE_DUPLICATE_BOOKING);
}
if (!toAdd.isValidTime()) {
throw new CommandException(MESSAGE_INVALID_TIME);
}
if (!model.hasPersonWithEmail(toAdd.getBookerEmail())) {
throw new CommandException(MESSAGE_INVALID_PERSON);
}
if (!model.hasVenueWithVenueName(toAdd.getVenueName())) {
throw new CommandException(MESSAGE_INVALID_VENUE);
}
if (model.hasOverlappedBooking(toAdd)) {
throw new CommandException(MESSAGE_OVERLAPPING_BOOKING);
}
model.addBooking(toAdd);
return new CommandResult(String.format(MESSAGE_SUCCESS, toAdd));
}
@Override
public boolean equals(Object other) {
return other == this // short circuit if same object
|| (other instanceof AddBookingCommand // instanceof handles nulls
&& toAdd.equals(((AddBookingCommand) other).toAdd));
}
}
package com.dianrong.common.uniauth.cas.model;
import java.io.Serializable;
/**
* Defines a standard format for an HTTP response model.
*
* @author wanglin
*/
public class HttpResponseModel<T extends Serializable> implements Serializable {
private static final long serialVersionUID = 788863139426941179L;
private boolean success;
private Integer code;
private T content;
private String msg;
public boolean isSuccess() {
return success;
}
public HttpResponseModel<T> setSuccess(boolean success) {
this.success = success;
return this;
}
public Integer getCode() {
return code;
}
public HttpResponseModel<T> setCode(Integer code) {
this.code = code;
return this;
}
public T getContent() {
return content;
}
public HttpResponseModel<T> setContent(T content) {
this.content = content;
return this;
}
public String getMsg() {
return msg;
}
public HttpResponseModel<T> setMsg(String msg) {
this.msg = msg;
return this;
}
public static <T extends Serializable> HttpResponseModel<T> buildSuccessResponse() {
return new HttpResponseModel<T>().setSuccess(true);
}
}
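Because every setter returns this, a response can be assembled fluently. A small illustrative usage follows; the payload type, values, and demo class are arbitrary examples rather than code from the surrounding project.

import com.dianrong.common.uniauth.cas.model.HttpResponseModel;

class HttpResponseModelDemo {
  public static void main(String[] args) {
    // Success response carrying a String payload.
    HttpResponseModel<String> ok = HttpResponseModel.<String>buildSuccessResponse()
        .setCode(200)
        .setContent("pong")
        .setMsg("ok");
    // Failure response without content.
    HttpResponseModel<String> fail = new HttpResponseModel<String>()
        .setSuccess(false)
        .setCode(500)
        .setMsg("internal error");
    System.out.println(ok.isSuccess() + " / " + fail.getMsg());
  }
}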
# explorer/config.py (from bboczar/py)
class Configuration:
logging = {
'overall_level': 10, # defines lowest log level, 10: DEBUG, 20: INFO, 30: WARNING, 40: ERROR, 50: CRITICAL
'file_level': 10,
'console_level': 10,
}
configuration = Configuration()
package db // package name assumed; the original snippet omitted it

import (
	"os"

	mgo "gopkg.in/mgo.v2" // assumed import path for the mgo driver
)

// Get returns a database instance. Note that each call dials a new session,
// which the caller is responsible for closing.
func Get() *mgo.Database {
	session, err := mgo.Dial(os.Getenv("MONGODB_URI"))
	if err != nil {
		panic(err)
	}
	return session.DB(os.Getenv("MONGODB_DATABASE"))
}
Bernie Sanders is positive that Donald Trump will not be the next president of the United States.
During an interview for The Hollywood Reporter published Wednesday, the Vermont senator assured Spike Lee that the Republican front-runner wouldn’t make it anywhere near the White House. (RELATED: Spike Lee Narrates New Ad For Bernie Sanders)
“He is an entertainer by and large,” Sanders said. “He did very well on television; he knows the media very, very well. Don’t underestimate him. And God knows who he is really, but we see what he personifies on TV every night.”
“He knows how to manipulate the media very effectively, he knows how to do what he does with people. But let me just reassure you: Donald Trump is not going to become president of the United States. That I can say.”
Sanders said Trump could change the Republican party as we know it. (RELATED: Spike Lee: It’s Easier For A Black Guy To Be President Than A Hollywood Exec)
“There’s no question,” he said. “The establishment Republicans are going nuts. And this could lead to a real dissolution of the Republican Party as we know it.”
He said Trump is doing so well because politicians underestimated how angry and frustrated the American people are.
“So when he says, ‘Look, I’m not them,’ they say, ‘OK, that’s good enough for me.’ You know? ‘That’s all that I need.’ And there is a lot of anger out there and a lot of reasons for the anger.”
But Sanders added that he thinks Trump is using minorities as a scapegoat.
“People are angry, what do you do?” he continued.
“You don’t get to the real issues as to why people are hurting, you scapegoat. You scapegoat blacks, Latinos, gays, anybody, Jews, Muslims, any minority out there, that’s what you do. That is nothing new. That’s what demagogues have always done, and that’s what Trump is doing.” (RELATED: Emily Ratajkowski Might Be Bernie Sanders’s Hottest Supporter Yet)
This guest post comes to us courtesy of Mike C. You can read his previous guest post here.
The idea for this blog post came to me, as many of my best ideas do, while I was thinking about sex in church. Now please don’t get all huffy. I am aware of the impracticalities: limited privacy, no comfortable places to lie down (I should know, I’ve tried sleeping on the couches while my kids are attending seminary), etc.
OK, that is not what I meant. Nor do I mean that I was sitting in church daydreaming about having sex, although I confess that that may have happened a time or two when I got carried away in the spirit 🙂
No, what I mean is that I was thinking about how the manual on Strengthening Marriage does not have a chapter on physical intimacy. Even though we are taught that marriage is of supreme importance and that sexual intimacy is an integral and even sacred part of the marriage relationship, the topic is barely broached in the Church’s manual on marriage.
Why is that, I’ve wondered? I don’t really know, but what I’ve come to suspect is that we don’t have a real message to teach about physical intimacy. We simply don’t know what to say. In the Church, as in much of our society, sex is something we do, not something we talk about.
I should say that we have something to say about sex, but it seems that no one is really satisfied with the messages. Even those giving them seem to sense, at least on some level, that the core is missing. Our messages don’t seem to adequately convey what sex is truly about.
It starts out reasonably enough, with teachings that tend to protect us and help us and those around us be happier. Chastity before marriage and fidelity afterwards. Sex within marriage as a sacrament (although this is never elaborated on much–what sex and sacrament seem to have in common, as far as I can figure out, is that both experiences can be ruined by crying babies).
But then most other messages about physical intimacy seem to devolve into women protecting their virtue (from the men trying to steal it), mostly by being modest (so that men won’t want to steal it), and men not looking at pornography (which we learn is about the worst sin ever, since very few people have the opportunity or motive for murder, leaving pornography in first place for worst sins that we can realistically expect to commit).
Much has been written about how problematic and even destructive these messages can be. They cast women in the role of objects to be acted upon, rather than agents in their lives. They ignore rather than celebrate women’s sexual desire and sexual needs, which can cause much unhappiness for women and create difficulties within marriage relationships. They vilify rather than normalize sexual desire in men, engendering guilt and shame in men and alienation in the women who love them or want to love them. Above all, they don’t seem to work. It is my understanding that pornography use and premarital sex are about as common among Mormons as among other groups, but among Mormons guilt, shame, and feelings of deep inadequacy are piled on, sometimes leading to self-loathing and in extreme cases even suicide.
It seems to me that a new program is in order. Let’s talk about sex, but let’s talk about it in a new way:
1. Let’s never, ever forget, that it was God who gave us our sexual desires. They are inherently good, they are inherently divine. When we are turned on sexually we need to remember that God created this within us.
2. Sexual desires need to be normalized. Having sexual thoughts and feelings for others we find attractive is completely normal. There is nothing wrong with it. It is normal. It is normal that pornography is attractive to men and women.
3. I want to be clear: normalizing does not mean that anything goes. Of course the inappropriate expression of our sexual desires can hurt others and ourselves. However, the inappropriate expression of sexual desire is not unique in its potential for harm–it does not belong in some special class of sin; equally serious harm can come from pride, selfishness, ridicule, unbridled ambition, dishonesty, anger, or unrighteous dominion. We need to put sexual sin in the proper perspective.
4. Of much greater importance is this: sex allows us to relate to another person in a way that can be incredibly intimate (though sadly it often is not, even within marriage). In so doing, sex can open us to great vulnerability.
5. It is through this vulnerability that we have a tremendous opportunity for emotional and spiritual development. But it requires that we think about sex as a way to grow as humans (and not just below the belt, guys).
6. This means that what we need to understand and learn about physical intimacy, is how to relate emotionally to another person. As we pursue physical intimacy there are many questions we must continually ask ourselves.
7. For me these include: Am I listening to my wife’s needs? Am I sensitive to how she is feeling? Am I kind? Am I thoughtful? Am I patient with her and with myself? Am I ashamed of my own body? Am I willing to accept my body and myself as I am (including the hair that grows in the most annoying places)? Am I willing to be seen, naked before her? Am I willing to make my desires known? Am I willing to own my sexual desires? Am I willing to speak up for what I want and at the same time accept her wants as equally valid, even when they are different from mine? Am I willing to have fun with her, to smile, to laugh, to be myself, to not take myself so seriously? Am I willing to let my flaws show, rather than preserve my carefully constructed self-image? Am I willing to learn not to hide behind a mask as we engage physically? Above all, can I learn to be truly authentic with this person I love more than anyone?
These are the messages about sex I wish we would hear and discuss at church. Let’s move past the tired, uptight, repressive rhetoric. Let’s move toward teaching sex as an important way to develop God-like characteristics. Let’s not wait any longer to implement a healthy approach towards sex. Let’s, in the prophetic words of Marvin Gaye, get it on.
//*********************************************
// Sound IDecoder
// Copyright (c) Rylogic Ltd 2007
//*********************************************
// Notes:
// - DirectSound8 is deprecated, prefer XAudio2
#pragma once
#include <fstream>
#include <filesystem>
#include "pr/common/cast.h"
#include "pr/audio/directsound/sound.h"
namespace pr::sound
{
// Interface to a data stream
struct IDataStream
{
virtual ~IDataStream() {}
// Read bytes from the stream and copies them into 'ptr'. Same inputs/outputs as fread
virtual size_t IDataStream_Read(void* ptr, size_t byte_size, size_t size_to_read) = 0;
// Seek to a position in the input stream. Same inputs/outputs as fseek.
// Seek is optional, if seeking is not supported, return -1
virtual int IDataStream_Seek(long offset, int seek_from) = 0;
// Return the byte position of the next byte in the data stream that would be read. Same inputs/outputs as ftell.
virtual long IDataStream_Tell() const = 0;
// Closes the data stream. Same as fclose.
virtual void IDataStream_Close() = 0;
};
// A type that plays a sound and manages filling a dsound buffer from a data stream
struct Player
{
// Notes:
// - A Player lives for the duration of a sound being played. For long running sounds (i.e. infinite loops)
// the application main loop needs to periodically call 'Update' to keep the DX sound buffer filled.
D3DPtr<IDirectSoundBuffer8> m_buf; // The buffer this player is filling
IDataStream* m_src; // The source of data
size_t m_buf_size; // The size of the buffer pointed to by 'm_buf'
size_t m_pos; // The position we're writing to in 'm_buf'
float m_volume; // The playback volume
bool m_src_end; // True after we've read the last byte from the source (implies !m_loop)
bool m_loop; // Loop the sample
Player()
:m_buf()
,m_src()
,m_buf_size()
,m_pos(0)
,m_volume(0.5f)
,m_src_end(false)
,m_loop(false)
{}
~Player()
{
if (m_src)
m_src->IDataStream_Close();
}
// Set this decoder to copy data from 'src' to 'buf'
// 'src' may be nullptr, in which case 'buf' will be filled with zeros
// 'buf' may be nullptr to release the reference held by 'm_buf'
// Looping is handled by the IDataStream. It should wrap internally giving the impression of an infinitely long buffer
void Set(IDataStream* src, D3DPtr<IDirectSoundBuffer8>& buf)
{
if (m_src) m_src->IDataStream_Close();
m_src = src;
m_buf = buf;
m_buf_size = GetBufferSize(buf);
SetVolume(m_volume);
Throw(m_buf->SetCurrentPosition(0));
Update(true);
}
// Returns true while this player is playing
bool IsPlaying() const
{
if (!m_buf)
return false;
DWORD status;
Throw(m_buf->GetStatus(&status));
return (status & DSBSTATUS_PLAYING) != 0;
}
// Set the playback volume
void SetVolume(float vol)
{
m_volume = vol;
if (m_buf)
sound::SetVolume(m_buf, vol);
}
// Start the sample playing
void Play(bool loop, DWORD priority = 0)
{
// The dsound buffer is played as looping because its size is independent of the src data size.
// For non-looped sounds we will call stop during Update() after all data has been read from the stream.
PR_ASSERT(PR_DBG_SND, m_buf, "");
Throw(m_buf->Play(0, priority, DSBPLAY_LOOPING));
m_src_end = false;
m_loop = loop;
}
// Stop the sample playing
// Note: no Rewind() or SetPosition() in the player as that can be
// done in the source stream which knows if it's seekable or not.
void Stop()
{
if (!m_buf) return;
Throw(m_buf->Stop());
}
// Transfers more data from the source stream into the dsound buffer
void Update(bool force = false)
{
// This method should be called when the Sound raises the update event
// Only update if the sound is playing (under normal conditions)
if (!m_buf || !(force || IsPlaying()))
return;
// Get the read/write positions in the dsound buffer and the space that is available for filling
// Note: wpos here is the next byte that can be written, not where we last finished writing to.
DWORD rpos;
pr::Throw(m_buf->GetCurrentPosition(&rpos, 0));
size_t ahead = (m_pos - rpos + m_buf_size) % m_buf_size; // This is how far ahead of the read position our write position is
// If we've reached the end of the source, and 'rpos' has moved past 'm_pos'
// then 'ahead' will be "negative" and we can stop playback
if (m_src_end)
{
if (ahead > m_buf_size/2) Stop();
return;
}
// Only fill the buffer up to half full. This minimises the problems
// with aliasing and allows us to tell when 'rpos' has overtaken 'm_pos'.
size_t fill = pr::Clamp<size_t>((m_buf_size/2) - ahead, 0, m_buf_size/2);
if (fill < m_buf_size/8) return; // wait for a minimum amount to do
// Add more sound data to the writable part of the buffer
Lock lock(m_buf, m_pos, fill);
size_t read = Read(lock.m_ptr0, lock.m_size0) + Read(lock.m_ptr1, lock.m_size1);
m_pos = (m_pos + read) % m_buf_size;
m_src_end = read == 0;
}
private:
// Read 'count' bytes into 'ptr'. If the source stream returns less than
// 'count' bytes the remaining bytes in 'ptr' are filled with zeros.
// Returns the number of bytes read from the source stream.
size_t Read(uint8_t* ptr, size_t count)
{
size_t src_read = 0;
for (size_t read = 0; count != 0; ptr += read, count -= read, src_read += read)
{
read = m_src->IDataStream_Read(ptr, 1, count);
if (read == 0)
{
// If not looping or if no data can be read from the start of the stream then quit
// otherwise, repeatedly seek to the beginning and reread until we've read 'count' bytes
if (!m_loop || m_src->IDataStream_Tell() == 0) break;
else if (m_src->IDataStream_Seek(0, SEEK_SET) == -1) { PR_ASSERT(PR_DBG_SND, false, "Cannot loop as 'm_src' is not seekable"); break; }
}
}
memset(ptr, 0, count); // Fill any remaining space with zeros
return src_read;
}
};
// Some default DataStream implementations
// A local buffer containing the sound file data
struct MemDataStream :IDataStream
{
std::vector<uint8_t> m_data;
size_t m_pos;
bool m_delete_on_close;
MemDataStream(bool delete_on_close = false)
: m_data()
, m_pos(0)
, m_delete_on_close(delete_on_close)
{}
MemDataStream(std::filesystem::path const& filepath, bool delete_on_close = false)
: m_data()
, m_pos(0)
, m_delete_on_close(delete_on_close)
{
// Fill 'm_data' from the file
m_data.resize(s_cast<size_t, true>(std::filesystem::file_size(filepath)));
std::ifstream file(filepath, std::ios::binary);
if (file.read(char_ptr(m_data.data()), m_data.size()).gcount() != static_cast<std::streamsize>(m_data.size()))
throw std::runtime_error(Fmt("Failed to read audio file: '%S'", filepath.c_str()));
}
size_t IDataStream_Read(void* ptr, size_t byte_size, size_t size_to_read)
{
size_t count = size_to_read * byte_size;
if (count > m_data.size() - m_pos) count = m_data.size() - m_pos;
if (count != 0) { memcpy(ptr, &m_data[m_pos], count); m_pos += count; }
return count;
}
int IDataStream_Seek(long offset, int seek_from)
{
switch (seek_from)
{
case SEEK_SET:
m_pos = size_t(offset);
return 0;
case SEEK_CUR:
m_pos += size_t(offset);
return 0;
case SEEK_END:
m_pos = m_data.size() - size_t(offset);
return 0;
default:
return -1;
}
}
long IDataStream_Tell() const
{
return long(m_pos);
}
void IDataStream_Close()
{
if (m_delete_on_close)
delete this;
}
};
}
// ----------------------------------------------------------------------------
// Test that reading, writing, then reading a video produces generally the
// same result as the first time we read it.
TEST_F ( ffmpeg_video_output, round_trip )
{
auto const src_path = data_dir + "/" + short_video_name;
auto const tmp_path =
kwiver::testing::temp_file_name( "test-ffmpeg-output-", ".ts" );
kv::timestamp ts;
ffmpeg::ffmpeg_video_input is;
is.open( src_path );
ffmpeg::ffmpeg_video_output os;
os.open( tmp_path, is.implementation_settings().get() );
_tmp_file_deleter tmp_file_deleter{ tmp_path };
for( is.next_frame( ts ); !is.end_of_video(); is.next_frame( ts ) )
{
auto const image = is.frame_image();
os.add_image( image, ts );
}
os.close();
is.close();
auto const image_epsilon = 6.5;
expect_eq_videos( src_path, tmp_path, image_epsilon );
}
N = int(input())
S = [input() for _ in range(N)]
first_B, last_A, both = 0, 0, 0
ans = 0
for s in S:
ans += s.count('AB')
if s[0] == 'B' and s[-1] == 'A': both += 1
elif s[0] == 'B': first_B += 1
elif s[-1] == 'A': last_A += 1
if both <= 1:
if last_A <= first_B: last_A += both
else: first_B += both
ans += max(min(first_B, last_A), 0)
else:
ans += max(both-1, 0) + (1 if first_B else 0) + (1 if last_A else 0) + max(min(first_B-1, last_A-1), 0)
print(ans)
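For reference, here is a brute-force cross-check of the greedy above, assuming the task is to maximize the number of 'AB' substrings obtainable by concatenating the given strings in some order; it is only practical for small N and is not part of the original submission.

# Brute force: try every ordering and count 'AB' occurrences in the concatenation.
from itertools import permutations

def brute_force(strings):
    return max(''.join(p).count('AB') for p in permutations(strings))

# Usage (small inputs only): compare brute_force(S) against the greedy answer above.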
// MIT © 2017 azu
import { UseCaseLike } from "../UseCaseLike";
import { StoreLike } from "../StoreLike";
import { StoreGroupLike } from "../UILayer/StoreGroupLike";
import { AlminPerfMarkerAbstract, DebugId, MarkType } from "./AlminAbstractPerfMarker";
import { Transaction } from "../DispatcherPayloadMeta";
import { Events } from "../Events";
const canUsePerformanceMeasure: boolean =
typeof performance !== "undefined" &&
typeof performance.mark === "function" &&
typeof performance.clearMarks === "function" &&
typeof performance.measure === "function" &&
typeof performance.clearMeasures === "function";
export type AlminPerfMarkerActions =
| {
type: "beforeStoreGroupReadPhase";
}
| {
type: "afterStoreGroupReadPhase";
}
| {
type: "beforeStoreGroupWritePhase";
}
| {
type: "afterStoreGroupWritePhase";
}
| {
type: "beforeStoreGetState";
}
| {
type: "afterStoreGetState";
}
| {
type: "beforeStoreReceivePayload";
}
| {
type: "afterStoreReceivePayload";
}
| {
type: "willUseCaseExecute";
}
| {
type: "didUseCaseExecute";
}
| {
type: "completeUseCaseExecute";
}
| {
type: "beginTransaction";
}
| {
type: "endTransaction";
};
export class AlminPerfMarker extends Events<AlminPerfMarkerActions> implements AlminPerfMarkerAbstract {
private _isProfiling = false;
beginProfile(): void {
this._isProfiling = true;
}
get isProfiling(): boolean {
return this._isProfiling;
}
endProfile(): void {
this._isProfiling = false;
this.removeAllEventListeners();
}
shouldMark(_debugId: DebugId) {
if (!this._isProfiling) {
return false;
}
return canUsePerformanceMeasure;
}
markBegin = (debugID: DebugId, markType: MarkType) => {
if (!this.shouldMark(debugID)) {
return;
}
const markName = `almin::${debugID}::${markType}`;
performance.mark(markName);
};
markEnd = (debugID: DebugId, markType: MarkType, displayName: string) => {
if (!this.shouldMark(debugID)) {
return;
}
const markName = `almin::${debugID}::${markType}`;
const measureName = `${displayName} [${markType}]`;
performance.measure(measureName, markName);
// clear unneeded marks
performance.clearMarks(markName);
performance.clearMeasures(measureName);
};
beforeStoreGroupReadPhase(debugId: DebugId, _storeGroup: StoreGroupLike): void {
this.markBegin(debugId, "StoreGroup#read");
this.emit({ type: "beforeStoreGroupReadPhase" });
}
afterStoreGroupReadPhase(debugId: DebugId, storeGroup: StoreGroupLike): void {
const displayName = storeGroup.name;
this.markEnd(debugId, "StoreGroup#read", displayName);
this.emit({ type: "afterStoreGroupReadPhase" });
}
beforeStoreGroupWritePhase(debugId: DebugId, _storeGroup: StoreGroupLike): void {
this.markBegin(debugId, "StoreGroup#write");
this.emit({ type: "beforeStoreGroupWritePhase" });
}
afterStoreGroupWritePhase(debugId: DebugId, storeGroup: StoreGroupLike): void {
const displayName = storeGroup.name;
this.markEnd(debugId, "StoreGroup#write", displayName);
this.emit({ type: "afterStoreGroupWritePhase" });
}
beforeStoreGetState(debugId: DebugId, _store: StoreLike): void {
this.markBegin(debugId, "Store#getState");
this.emit({ type: "beforeStoreGetState" });
}
afterStoreGetState(debugId: DebugId, store: StoreLike): void {
const displayName = store.name;
this.markEnd(debugId, "Store#getState", displayName);
this.emit({ type: "afterStoreGetState" });
}
beforeStoreReceivePayload(debugId: DebugId, _store: StoreLike): void {
this.markBegin(debugId, "Store#receivePayload");
this.emit({ type: "beforeStoreReceivePayload" });
}
afterStoreReceivePayload(debugId: DebugId, store: StoreLike): void {
const displayName = store.name;
this.markEnd(debugId, "Store#receivePayload", displayName);
this.emit({ type: "afterStoreReceivePayload" });
}
willUseCaseExecute(debugId: DebugId, _useCase: UseCaseLike): void {
this.markBegin(debugId, "UserCase#execute");
this.emit({ type: "willUseCaseExecute" });
}
didUseCaseExecute(debugId: DebugId, useCase: UseCaseLike): void {
const displayName = useCase.name;
this.markEnd(debugId, "UserCase#execute", displayName);
// did -> complete
this.markBegin(debugId, "UserCase#complete");
this.emit({ type: "didUseCaseExecute" });
}
completeUseCaseExecute(debugId: DebugId, useCase: UseCaseLike): void {
const displayName = useCase.name;
this.markEnd(debugId, "UserCase#complete", displayName);
this.emit({ type: "completeUseCaseExecute" });
}
beginTransaction(debugId: string, _transaction: Transaction): void {
this.markBegin(debugId, "Transaction");
this.emit({ type: "beginTransaction" });
}
endTransaction(debugId: string, transaction: Transaction): void {
const displayName = transaction.name;
this.markEnd(debugId, "Transaction", displayName);
this.emit({ type: "endTransaction" });
}
}
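A minimal usage sketch of the marker, using only the public API shown above; the debug id, mark type, and display name are placeholders, and the wiring of the marker into an Almin Context is not shown here.

// Illustrative profiling session (assumed values).
const marker = new AlminPerfMarker();
marker.beginProfile();
const debugId = "usecase-1";
marker.markBegin(debugId, "UserCase#execute");
// ... run the use case ...
marker.markEnd(debugId, "UserCase#execute", "MyUseCase");
// The measure now shows up in the browser's User Timing / Performance panel.
marker.endProfile();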
#!/usr/bin/env python
# Copyright (c) 2012 The Native Client Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Usage:
sha1check.py <hashfile
where hashfile was generated by "sha1sum.py" (or the "sha1sum" utility)
and has the format:
da39a3ee5e6b4b0d3255bfef95601890afd80709 *filename
sha1check.py will perform sha1 hash on filename (opened in
binary mode) and compare the generated hash value with
the hash value in the input hashfile. If the two hashes
don't match or filename doesn't exist, sha1check.py will
return an error.
"""
from __future__ import print_function
import hashlib
import sys
class Error(Exception):
pass
def VerifyHash(filename, sha1sum):
try:
# open file in binary mode & sha1 hash it
h = hashlib.sha1()
with open(filename, "rb") as f:
h.update(f.read())
filehash = h.hexdigest()
except IOError:
raise Error("unable to open file " + filename)
except:
raise Error("encountered an unexpected error")
# verify the generated hash and embedded hash match
if sha1sum.lower() != filehash.lower():
print("Filename: %s" % filename)
print("Expected hash: %s" % sha1sum)
print("Actual hash: %s" % filehash)
raise Error("sha1 checksum failed on file: " + filename)
def VerifyLine(line, verbose):
# split the hash *filename into a pair
parts = line.split()
if len(parts) != 2:
raise Error("Invalid sha1 line: '%s'" % line)
sha1sum, name = parts
# make sure filename started with '*' (binary mode)
if not name or name[0] != '*':
raise Error("input hash is not from a binary file")
  # remove the leading '*' (binary-mode marker) from the filename
filename = name[1:]
VerifyHash(filename, sha1sum)
if verbose:
print("sha1check.py: %s verified" % filename)
return filename
def VerifyFile(file_input, verbose):
rtn = []
for line in file_input:
rtn.append(VerifyLine(line, verbose))
if not rtn:
raise Error("No file hashes given on input")
return rtn
def main():
try:
VerifyFile(sys.stdin, True)
except Error as e:
sys.stdout.write('sha1check.py: %s\n' % str(e))
return 1
  # all files hashed with success
  return 0
if __name__ == '__main__':
sys.exit(main())
"""
Created on Sep 10, 2019
@author: <NAME>, <NAME>
Integrated into ASPIRE by <NAME> Feb 2021.
"""
import logging
import os
from collections import OrderedDict
import mrcfile
import numpy as np
from numpy import linalg as npla
from pandas import DataFrame
from scipy.optimize import linprog
from scipy.signal.windows import dpss
from aspire.basis.ffb_2d import FFBBasis2D
from aspire.image import Image
from aspire.numeric import fft
from aspire.operators import voltage_to_wavelength
from aspire.storage import StarFile
from aspire.utils import abs2, complex_type, grid_1d, grid_2d
logger = logging.getLogger(__name__)
class CtfEstimator:
"""
CtfEstimator Class ...
"""
def __init__(
self,
pixel_size,
cs,
amplitude_contrast,
voltage,
psd_size,
num_tapers,
dtype=np.float32,
):
"""
Instantiate a CtfEstimator instance.
:param pixel_size: Size of the pixel in \u212b (Angstrom).
:param cs: Spherical aberration in mm.
:param amplitude_contrast: Amplitude contrast.
:param voltage: Voltage of electron microscope.
:param psd_size: Block size (in pixels) for PSD estimation.
:param num_tapers: Number of tapers to apply in PSD estimation.
:returns: CtfEstimator instance.
"""
self.pixel_size = pixel_size
self.cs = cs
self.amplitude_contrast = amplitude_contrast
self.voltage = voltage
self.psd_size = psd_size
self.num_tapers = num_tapers
self.lmbd = voltage_to_wavelength(voltage) / 10.0 # (Angstrom)
self.dtype = np.dtype(dtype)
grid = grid_2d(psd_size, normalized=True, indexing="yx", dtype=self.dtype)
# Note range is -half to half.
self.r_ctf = grid["r"] / 2 * (10 / pixel_size) # units: inverse nm
self.theta = grid["phi"]
self.defocus1 = 0
self.defocus2 = 0
self.angle = 0 # Radians
self.h = 0
def set_df1(self, df):
"""
Sets defocus.
:param df: Defocus value in the direction perpendicular to df2.
"""
self.defocus1 = df
def set_df2(self, df):
"""
Sets defocus.
:param df: Defocus value in the direction perpendicular to df1.
"""
self.defocus2 = df
def set_angle(self, angle):
"""
Sets angle.
:param angle: Angle (in Radians) between df1 and the x-axis.
"""
self.angle = angle
def generate_ctf(self):
"""
Generates internal representation of the Contrast Transfer Function using parameters from this instance.
"""
astigmatism_angle = np.full(
shape=self.theta.shape, fill_value=self.angle, dtype=self.dtype
)
defocus_sum = np.full(
shape=self.theta.shape,
fill_value=self.defocus1 + self.defocus2,
dtype=self.dtype,
)
defocus = defocus_sum + (
(self.defocus1 - self.defocus2)
* np.cos(2 * (self.theta - astigmatism_angle))
)
defocus_factor = np.pi * self.lmbd * self.r_ctf * defocus / 2
amplitude_contrast_term = self.amplitude_contrast / np.sqrt(
1 - self.amplitude_contrast**2
)
chi = (
defocus_factor
- np.pi * self.lmbd**3 * self.cs * 1e6 * self.r_ctf**2 / 2
+ amplitude_contrast_term
)
h = -np.sin(chi)
self.h = h
def micrograph_to_blocks(self, micrograph, block_size):
"""
Preprocess micrograph into blocks using block_size.
:param micrograph: Micrograph as NumPy array. #NOTE looks like F order
:param blocksize: Size of the square blocks to partition micrograph.
:return: NumPy array of blocks extracted from the micrograph.
"""
# verify block_size is even
assert block_size % 2 == 0
size_x = micrograph.shape[1]
size_y = micrograph.shape[0]
step_size = block_size // 2
range_y = size_y // step_size - 1
range_x = size_x // step_size - 1
block_list = [
micrograph[
i * step_size : (i + 2) * step_size, j * step_size : (j + 2) * step_size
]
for j in range(range_y)
for i in range(range_x)
]
blocks = np.asarray(block_list, dtype=micrograph.dtype)
return blocks
def normalize_blocks(self, blocks):
"""
Preprocess CTF of micrograph using block_size.
:param blocks: NumPy array of blocks extracted from the micrograph.
:return: NumPy array of normalized blocks.
"""
# Take block size from blocks
block_size = blocks.shape[1]
assert block_size == blocks.shape[2]
# Create a sum and reshape so it may be broadcast with `block`.
blocks_sum = np.sum(blocks, axis=(-1, -2))[:, np.newaxis, np.newaxis]
blocks -= blocks_sum / (block_size**2)
return blocks
def preprocess_micrograph(self, micrograph, block_size):
"""
Preprocess micrograph into normalized blocks using block_size.
:param micrograph: Micrograph as NumPy array. #NOTE looks like F order
:param blocksize: Size of the square blocks to partition micrograph.
:return: NumPy array of normalized blocks extracted from the micrograph.
"""
return self.normalize_blocks(self.micrograph_to_blocks(micrograph, block_size))
def tapers(self, N, NW, L):
"""
Compute data tapers (which are discrete prolate spheroidal sequences (dpss))
Uses scipy implementation, see:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.windows.dpss.html
:param N: Size of each taper
:param NW: Half Bandwidth
:param L: Number of tapers
:return: NumPy array of data tapers
"""
# Note the original ASPIRE implementation is negated from original scipy...
# but at time of writing subsequent code was agnostic to sign.
return dpss(M=N, NW=NW, Kmax=L, return_ratios=False).T
def estimate_psd(self, blocks, tapers_1d):
"""
Estimate the power spectrum of the micrograph using the multi-taper method
:param blocks: 3-D NumPy array containing windows extracted from the micrograph in the preprocess function.
:param tapers_1d: NumPy array of data tapers.
:return: NumPy array of estimated power spectrum.
"""
num_1d_tapers = tapers_1d.shape[-1]
tapers_1d = tapers_1d.astype(complex_type(self.dtype), copy=False)
blocks_mt = np.zeros(blocks[0, :, :].shape, dtype=self.dtype)
blocks_tapered = np.zeros(blocks[0, :, :].shape, dtype=complex_type(self.dtype))
taper_2d = np.zeros(
(blocks.shape[1], blocks.shape[2]), dtype=complex_type(self.dtype)
)
for ax1 in range(num_1d_tapers):
for ax2 in range(num_1d_tapers):
np.matmul(
tapers_1d[:, ax1, np.newaxis],
tapers_1d[:, ax2, np.newaxis].T,
out=taper_2d,
)
for m in range(blocks.shape[0]):
np.multiply(blocks[m, :, :], taper_2d, out=blocks_tapered)
blocks_mt_post_fft = fft.fftn(blocks_tapered, axes=(-2, -1))
blocks_mt += abs2(blocks_mt_post_fft)
blocks_mt /= blocks.shape[0] ** 2
blocks_mt /= tapers_1d.shape[0] ** 2
amplitude_spectrum = fft.fftshift(
blocks_mt
) # max difference 10^-13, max relative difference 10^-14
return Image(amplitude_spectrum)
def elliptical_average(self, ffbbasis, amplitude_spectrum, circular):
"""
Computes radial/elliptical average of the power spectrum
:param ffbbasis: FFBBasis instance.
:param amplitude_spectrum: Power spectrum.
:param circular: True for radial averaging and False for elliptical averaging.
:return: PSD and noise as 2-tuple of NumPy arrays.
"""
# RCOPT, come back and change the indices for this method
coeffs_s = ffbbasis.evaluate_t(amplitude_spectrum).T
coeffs_n = coeffs_s.copy()
coeffs_s[np.argwhere(ffbbasis._indices["ells"] == 1)] = 0
if circular:
coeffs_s[np.argwhere(ffbbasis._indices["ells"] == 2)] = 0
noise = amplitude_spectrum
else:
coeffs_n[np.argwhere(ffbbasis._indices["ells"] == 0)] = 0
coeffs_n[np.argwhere(ffbbasis._indices["ells"] == 2)] = 0
noise = ffbbasis.evaluate(coeffs_n.T)
psd = ffbbasis.evaluate(coeffs_s.T)
return psd, noise
def background_subtract_1d(
self, amplitude_spectrum, linprog_method="interior-point", n_low_freq_cutoffs=14
):
"""
Estimate and subtract the background from the power spectrum
:param amplitude_spectrum: Estimated power spectrum
        :param linprog_method: Method passed to linear program solver (scipy.optimize.linprog). Defaults to 'interior-point'.
:param n_low_freq_cutoffs: Low frequency cutoffs (loop iterations).
:return: 2-tuple of NumPy arrays (PSD after noise subtraction and estimated noise)
"""
# compute radial average
center = amplitude_spectrum.shape[-1] // 2
if amplitude_spectrum.ndim == 3:
if amplitude_spectrum.shape[0] != 1:
raise ValueError(
f"Invalid dimension 0 for amplitude_spectrum {amplitude_spectrum.shape}"
)
amplitude_spectrum = amplitude_spectrum[0]
elif amplitude_spectrum.ndim > 3:
raise ValueError(
f"Invalid ndimension for amplitude_spectrum {amplitude_spectrum.shape}"
)
amplitude_spectrum = amplitude_spectrum[center, center:]
amplitude_spectrum = amplitude_spectrum[
0 : 3 * amplitude_spectrum.shape[-1] // 4
]
final_signal = np.zeros(
(n_low_freq_cutoffs - 1, amplitude_spectrum.shape[-1]), dtype=self.dtype
)
final_background = np.ones(
(n_low_freq_cutoffs - 1, amplitude_spectrum.shape[-1]), dtype=self.dtype
)
for low_freq_cutoff in range(1, n_low_freq_cutoffs):
signal = amplitude_spectrum[low_freq_cutoff:]
signal = np.ravel(signal)
N = amplitude_spectrum.shape[-1] - low_freq_cutoff
f = np.concatenate((np.ones(N), -1 * np.ones(N)), axis=0)
superposition_condition = np.concatenate(
(-1 * np.eye(N), np.eye(N)), axis=1
)
monotone_condition = np.diag(np.full((N - 1), -1), -1) + np.diag(
np.ones(N), 0
)
monotone_condition = monotone_condition[1:]
convex_condition = (
np.diag(np.full((N - 1), -1), -1)
+ np.diag(np.full(N, 2), 0)
+ np.diag(np.full((N - 1), -1), 1)
)
convex_condition = np.concatenate(
(np.zeros((N, N)), convex_condition), axis=1
)
convex_condition = convex_condition[1 : N - 1]
positivity_condition = np.concatenate(
(np.zeros((N, N)), -1 * np.eye(N)), axis=1
)
A = np.concatenate(
(superposition_condition, convex_condition, positivity_condition),
axis=0,
)
x_bound_lst = [
(signal[i], signal[i], -1 * np.inf, np.inf)
for i in range(signal.shape[0])
]
x_bound = np.asarray(x_bound_lst, A.dtype)
x_bound = np.concatenate((x_bound[:, :2], x_bound[:, 2:]), axis=0)
x = linprog(
f,
A_ub=A,
b_ub=np.zeros(A.shape[0]),
bounds=x_bound,
method=linprog_method,
)
background = x.x[N:]
bs_psd = signal - background
final_signal[low_freq_cutoff - 1, low_freq_cutoff:] = bs_psd.T
# expected difference: 10^-7 (absolute)
final_background[low_freq_cutoff - 1, low_freq_cutoff:] = background.T
return final_signal, final_background
def opt1d(
self,
amplitude_spectrum,
pixel_size,
cs,
lmbd,
w,
N,
min_defocus=500,
max_defocus=10000,
):
"""
Find optimal defocus for the radially symmetric case (where no astigmatism is present)
        :param amplitude_spectrum: Estimated power spectrum.
:param pixel_size: Pixel size in \u212b (Angstrom).
:param cs: Spherical aberration in mm.
:param lmbd: Electron wavelength \u212b (Angstrom).
:param w: Amplitude contrast.
:param N: Number of rows (or columns) in the estimate power spectrum.
:param min_defocus: Start of defocus loop scan.
:param max_defocus: End of defocus loop scan.
:return: 2-tuple of NumPy arrays (Estimated average of defocus and low_freq_cutoff)
"""
center = N // 2
grid = grid_1d(N, normalized=True, dtype=self.dtype)
rb = grid["x"][center:] / 2
r_ctf = rb * (10 / pixel_size) # units: inverse nm
signal = amplitude_spectrum.T
signal = np.maximum(0.0, signal)
signal = np.sqrt(signal)
signal = signal[: 3 * signal.shape[0] // 4]
r_ctf_sq = r_ctf**2
c = np.zeros((max_defocus - min_defocus, signal.shape[1]), dtype=self.dtype)
for f in range(min_defocus, max_defocus):
ctf_im = np.abs(
np.sin(
np.pi * lmbd * f * r_ctf_sq
- 0.5 * np.pi * (lmbd**3) * cs * 1e6 * r_ctf_sq**2
+ w
)
)
ctf_im = ctf_im[: signal.shape[0]]
ctf_im = np.reshape(ctf_im, (ctf_im.shape[0], 1))
ctf_im = np.tile(ctf_im, (1, signal.shape[1]))
for m in range(0, signal.shape[1]):
signal[:, m] = signal[:, m] - np.mean(signal[m + 1 :, m], axis=0)
ctf_im[:, m] = ctf_im[:, m] - np.mean(ctf_im[m + 1 :, m], axis=0)
ctf_im[: m + 1, m] = np.zeros((m + 1))
signal[: m + 1, m] = np.zeros((m + 1))
Sx = np.sqrt(np.sum(ctf_im**2, axis=0))
Sy = np.sqrt(np.sum(signal**2, axis=0))
c[f - min_defocus, :] = np.sum(ctf_im * signal, axis=0) / (Sx * Sy)
avg_defocus, low_freq_cutoff = np.unravel_index(np.argmax(c), c.shape)[:2]
avg_defocus += min_defocus
return avg_defocus, low_freq_cutoff
def background_subtract_2d(self, signal, background_p1, max_col):
"""
Subtract background from estimated power spectrum
:param signal: Estimated power spectrum
:param background_p1: 1-D background estimation
:param max_col: Internal variable, returned as the second parameter from opt1d.
:return: 2-tuple of NumPy arrays (Estimated PSD without noise and estimated noise).
"""
signal = signal.asnumpy()
N = signal.shape[1]
grid = grid_2d(N, normalized=False, indexing="yx", dtype=self.dtype)
radii = np.sqrt((grid["x"] / 2) ** 2 + (grid["y"] / 2) ** 2)
background = np.zeros(signal.shape, dtype=self.dtype)
for r in range(max_col + 2, background_p1.shape[1]):
background[:, (r < radii) & (radii <= r + 1)] = background_p1[max_col, r]
mask = radii <= max_col + 2
background[:, mask] = signal[:, mask]
signal = signal - background
signal = np.maximum(0, signal)
return Image(signal), Image(background)
def pca(self, signal, pixel_size, g_min, g_max):
"""
:param signal: Estimated power spectrum.
:param pixel_size: Pixel size in \u212b (Angstrom).
        :param g_min: Inverse of minimum resolution for PSD.
:param g_max: Inverse of maximum resolution for PSD.
:return: ratio.
"""
# RCOPT
signal = signal.asnumpy()[0].T
N = signal.shape[0]
center = N // 2
grid = grid_2d(N, normalized=True, indexing="yx", dtype=self.dtype)
r_ctf = grid["r"] / 2 * (10 / pixel_size)
grid = grid_2d(N, normalized=False, indexing="yx", dtype=self.dtype)
X = grid["x"]
Y = grid["y"]
signal -= np.min(signal)
rad_sq_min = N * pixel_size / g_min
rad_sq_max = N * pixel_size / g_max
min_limit = r_ctf[center, (center + np.floor(rad_sq_min)).astype(int)]
signal[r_ctf < min_limit] = 0
max_limit = r_ctf[center, (center + np.ceil(rad_sq_max)).astype(int)]
signal = np.where(r_ctf > max_limit, 0, signal)
moment_02 = Y**2 * signal
moment_02 = np.sum(moment_02, axis=(0, 1))
moment_11 = Y * X * signal
moment_11 = np.sum(moment_11, axis=(0, 1))
moment_20 = X**2 * signal
moment_20 = np.sum(moment_20, axis=(0, 1))
moment_mat = np.zeros((2, 2))
moment_mat[0, 0] = moment_20
moment_mat[1, 1] = moment_02
moment_mat[0, 1] = moment_11
moment_mat[1, 0] = moment_11
moment_evals = npla.eigvalsh(moment_mat)
ratio = moment_evals[0] / moment_evals[1]
return ratio
def gd(
self,
signal,
df1,
df2,
angle_ast,
r,
theta,
pixel_size,
g_min,
g_max,
amplitude_contrast,
lmbd,
cs,
):
"""
Runs gradient ascent to optimize defocus parameters
:param signal: Estimated power spectrum
:param df1: Defocus value in the direction perpendicular to df2.
:param df2: Defocus value in the direction perpendicular to df1.
:param angle_ast: Angle between df1 and the x-axis, Radians.
:param r: Magnitude of spatial frequencies.
:param theta: Phase of spatial frequencies.
:param pixel_size: Pixel size in \u212b (Angstrom).
        :param g_min: Inverse of minimum resolution for PSD.
:param g_max: Inverse of maximum resolution for PSD.
:param amplitude_contrast: Amplitude contrast.
:param lmbd: Electron wavelength \u212b (Angstrom).
:param cs: Spherical aberration in mm.
:return: Optimal defocus parameters
"""
# step size
alpha1 = 1e5
alpha2 = 1e4
# initialization
x = df1 + df2
y = (df1 - df2) * np.cos(2 * angle_ast)
z = (df1 - df2) * np.sin(2 * angle_ast)
a = np.pi * lmbd * r**2 / 2
b = np.pi * lmbd**3 * cs * 1e6 * r**4 / 2 - np.full(
shape=r.shape, fill_value=amplitude_contrast, dtype=self.dtype
)
signal = signal.asnumpy()[0].T
N = signal.shape[1]
center = N // 2
rad_sq_min = N * pixel_size / g_min
rad_sq_max = N * pixel_size / g_max
max_val = r[center, int(center - 1 + np.floor(rad_sq_max))]
min_val = r[center, int(center - 1 + np.ceil(rad_sq_min))]
mask = (r <= max_val) & (r > min_val)
a = a[mask]
b = b[mask]
signal = signal[..., mask]
r = r[mask]
theta = theta[mask]
sum_A = np.sum(signal**2)
dx = 1
dy = 1
dz = 1
stop_cond = 1e-20
iter_no = 1
while np.maximum(np.maximum(dx, dy), dz) > stop_cond:
inner_cosine = y * np.cos(2 * theta) + z * np.sin(2 * theta)
psi = a * x + a * inner_cosine - b
outer_sine = np.sin(psi)
outer_cosine = np.cos(psi)
sine_x_term = a
sine_y_term = a * np.cos(2 * theta)
sine_z_term = a * np.sin(2 * theta)
c1 = np.sum(np.abs(outer_sine) * signal)
c2 = np.sqrt(sum_A * np.sum(outer_sine**2))
# gradients of numerator
dx_c1 = np.sum(np.sign(outer_sine) * outer_cosine * a * signal)
dy_c1 = np.sum(
np.sign(outer_sine) * outer_cosine * a * np.cos(2 * theta) * signal
)
dz_c1 = np.sum(
np.sign(outer_sine) * outer_cosine * a * np.sin(2 * theta) * signal
)
derivative_sqrt = 1 / (2 * np.sqrt(sum_A * np.sum(outer_sine**2)))
derivative_sine2 = 2 * outer_sine * outer_cosine
            # gradients of denominator
dx_c2 = derivative_sqrt * sum_A * np.sum(derivative_sine2 * sine_x_term)
dy_c2 = derivative_sqrt * sum_A * np.sum(derivative_sine2 * sine_y_term)
dz_c2 = derivative_sqrt * sum_A * np.sum(derivative_sine2 * sine_z_term)
# gradients
dx = (dx_c1 * c2 - dx_c2 * c1) / c2**2
dy = (dy_c1 * c2 - dy_c2 * c1) / c2**2
dz = (dz_c1 * c2 - dz_c2 * c1) / c2**2
# update
x = x + alpha1 * dx
y = y + alpha2 * dy
z = z + alpha2 * dz
if iter_no < 2:
stop_cond = np.minimum(np.minimum(dx, dy), dz) / 1000
if iter_no > 400:
stop_cond = np.maximum(np.maximum(dx, dy), dz) + 1
iter_no = iter_no + 1
df1 = (x + np.abs(y + z * 1j)) / 2
df2 = (x - np.abs(y + z * 1j)) / 2
angle_ast = np.angle(y + z * 1j) / 2 # Radians
inner_cosine = y * np.cos(2 * theta) + z * np.sin(2 * theta)
outer_sine = np.sin(a * x + a * inner_cosine - b)
outer_cosine = np.cos(a * x + a * inner_cosine - b)
sine_x_term = a
sine_y_term = a * np.cos(2 * theta)
sine_z_term = a * np.sin(2 * theta)
c1 = np.sum(np.abs(outer_sine) * signal)
c2 = np.sqrt(sum_A * np.sum(outer_sine**2))
p = c1 / c2
return df1, df2, angle_ast, p
# Note, This doesn't actually use anything from the class.
# It is used in a solver loop of some sort, so it may not be correct
# to just use what is avail in the obj.
def write_star(self, df1, df2, ang, cs, voltage, pixel_size, amp, name, output_dir):
"""
Writes CTF parameters to starfile.
"""
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
data_block = {}
data_block["_rlnMicrographName"] = name
data_block["_rlnDefocusU"] = df1
data_block["_rlnDefocusV"] = df2
data_block["_rlnDefocusAngle"] = ang
data_block["_rlnSphericalAbberation"] = cs
data_block["_rlnAmplitudeContrast"] = amp
data_block["_rlnVoltage"] = voltage
data_block["_rlnDetectorPixelSize"] = pixel_size
df = DataFrame([data_block])
blocks = OrderedDict()
blocks["root"] = df
star = StarFile(blocks=blocks)
star.write(os.path.join(output_dir, os.path.splitext(name)[0]) + ".star")
def estimate_ctf(
data_folder,
pixel_size,
cs,
amplitude_contrast,
voltage,
num_tapers,
psd_size,
g_min,
g_max,
output_dir,
dtype=np.float32,
):
"""
    Given parameters, estimates the CTF from experimental data
    and returns the CTF as an mrc file.
"""
dtype = np.dtype(dtype)
assert dtype in (np.float32, np.float64)
dir_content = os.scandir(data_folder)
mrc_files = [f.name for f in dir_content if os.path.splitext(f)[1] == ".mrc"]
mrcs_files = [f.name for f in dir_content if os.path.splitext(f)[1] == ".mrcs"]
file_names = mrc_files + mrcs_files
amp = amplitude_contrast
amplitude_contrast = np.arctan(
amplitude_contrast / np.sqrt(1 - amplitude_contrast**2)
)
lmbd = voltage_to_wavelength(voltage) / 10 # (Angstrom)
ctf_object = CtfEstimator(
pixel_size, cs, amplitude_contrast, voltage, psd_size, num_tapers, dtype=dtype
)
# Note for repro debugging, suggest use of doubles,
# closer to original code.
ffbbasis = FFBBasis2D((psd_size, psd_size), 2, dtype=dtype)
results = []
for name in file_names:
with mrcfile.open(
os.path.join(data_folder, name), mode="r", permissive=True
) as mrc:
micrograph = mrc.data
# Try to match dtype used in Basis instance
micrograph = micrograph.astype(dtype, copy=False)
micrograph_blocks = ctf_object.preprocess_micrograph(micrograph, psd_size)
tapers_1d = ctf_object.tapers(psd_size, num_tapers / 2, num_tapers)
signal_observed = ctf_object.estimate_psd(micrograph_blocks, tapers_1d)
amplitude_spectrum, _ = ctf_object.elliptical_average(
ffbbasis, signal_observed, True
        )  # absolute difference: 10^-14. Relative error: 10^-7
# Optionally changing to: linprog_method='simplex',
# will more deterministically repro results in exchange for speed.
signal_1d, background_1d = ctf_object.background_subtract_1d(
amplitude_spectrum, linprog_method="interior-point"
)
avg_defocus, low_freq_skip = ctf_object.opt1d(
signal_1d,
pixel_size,
cs,
lmbd, # (Angstrom)
amplitude_contrast,
signal_observed.shape[-1],
)
low_freq_skip = 12
signal, background_2d = ctf_object.background_subtract_2d(
signal_observed, background_1d, low_freq_skip
)
ratio = ctf_object.pca(signal_observed, pixel_size, g_min, g_max)
signal, additional_background = ctf_object.elliptical_average(
ffbbasis, signal.sqrt(), False
)
background_2d = background_2d + additional_background
initial_df1 = (avg_defocus * 2) / (1 + ratio)
initial_df2 = (avg_defocus * 2) - initial_df1
grid = grid_2d(psd_size, normalized=True, indexing="yx", dtype=dtype)
r_ctf = grid["r"] / 2 * (10 / pixel_size)
theta = grid["phi"]
angle = -5 / 12 * np.pi # Radians (-75 degrees)
cc_array = np.zeros((6, 4))
for a in range(0, 6):
df1, df2, angle_ast, p = ctf_object.gd(
signal,
initial_df1,
initial_df2,
angle + a * np.pi / 6.0, # Radians, + a*30degrees
r_ctf,
theta,
pixel_size,
g_min,
g_max,
amplitude_contrast,
lmbd, # (Angstrom)
cs,
)
cc_array[a, 0] = df1
cc_array[a, 1] = df2
cc_array[a, 2] = angle_ast # Radians
cc_array[a, 3] = p
ml = np.argmax(cc_array[:, 3], -1)
result = (
cc_array[ml, 0],
cc_array[ml, 1],
cc_array[ml, 2], # Radians
cs,
voltage,
pixel_size,
amp,
name,
)
ctf_object.write_star(*result, output_dir)
results.append(result)
ctf_object.set_df1(cc_array[ml, 0])
ctf_object.set_df2(cc_array[ml, 1])
ctf_object.set_angle(cc_array[ml, 2]) # Radians
ctf_object.generate_ctf()
with mrcfile.new(
output_dir + "/" + os.path.splitext(name)[0] + "_noise.mrc", overwrite=True
) as mrc:
mrc.set_data(background_2d[0].astype(np.float32))
mrc.voxel_size = pixel_size
mrc.close()
df = (cc_array[ml, 0] + cc_array[ml, 1]) * np.ones(theta.shape, theta.dtype) + (
cc_array[ml, 0] - cc_array[ml, 1]
) * np.cos(2 * theta - 2 * cc_array[ml, 2] * np.ones(theta.shape, theta.dtype))
ctf_im = -np.sin(
np.pi * lmbd * r_ctf**2 / 2 * (df - lmbd**2 * r_ctf**2 * cs * 1e6)
+ amplitude_contrast
)
ctf_signal = np.zeros(ctf_im.shape, ctf_im.dtype)
ctf_signal[: ctf_im.shape[0] // 2, :] = ctf_im[: ctf_im.shape[0] // 2, :]
ctf_signal[ctf_im.shape[0] // 2 + 1 :, :] = signal[
:, :, ctf_im.shape[0] // 2 + 1
]
with mrcfile.new(
output_dir + "/" + os.path.splitext(name)[0] + ".ctf", overwrite=True
) as mrc:
mrc.set_data(np.float32(ctf_signal))
mrc.voxel_size = pixel_size
mrc.close()
return results
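# Example (illustrative only): a minimal sketch of how estimate_ctf might be
# invoked. The folder path and parameter values below are assumptions chosen
# for demonstration, not values prescribed by this module.
if __name__ == "__main__":
    results = estimate_ctf(
        data_folder="micrographs/",  # hypothetical directory containing .mrc files
        pixel_size=1.0,              # Angstrom per pixel (assumed)
        cs=2.7,                      # spherical aberration in mm (typical value)
        amplitude_contrast=0.07,     # typical amplitude contrast fraction
        voltage=300,                 # acceleration voltage in kV
        num_tapers=2,
        psd_size=512,
        g_min=30.0,                  # resolution limits (assumed to be in Angstrom)
        g_max=5.0,
        output_dir="ctf_output",
    )
    for df1, df2, angle_ast, *_rest in results:
        print(f"defocus: {df1:.1f}/{df2:.1f}, astigmatism angle: {angle_ast:.3f} rad")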
|
Psychometric Properties of the Pediatric Patient‐Reported Outcomes Measurement Information System Item Banks in a Dutch Clinical Sample of Children With Juvenile Idiopathic Arthritis
Objective To assess the psychometric properties of 8 pediatric Patient‐Reported Outcomes Measurement Information System (PROMIS) item banks in a clinical sample of children with juvenile idiopathic arthritis (JIA). Methods A total of 154 Dutch children (mean ± SD age 14.4 ± 3.0 years; range 8–18 years) with JIA completed 8 pediatric version 1.0 PROMIS item banks (anger, anxiety, depressive symptoms, fatigue, pain interference, peer relationships, physical function mobility, physical function upper extremity) twice and the Pediatric Quality of Life Inventory (PedsQL) and the Childhood Health Assessment Questionnaire (C‐HAQ) once. Structural validity of the item banks was assessed by fitting a graded response model (GRM) and inspecting GRM fit (comparative fit index [CFI], Tucker‐Lewis index [TLI], and root mean square error of approximation [RMSEA]) and item fit (S‐X2 statistic). Convergent validity (with PedsQL/C‐HAQ subdomains) and discriminative validity (active/inactive disease) were assessed. Reliability of the item banks, short forms, and computerized adaptive testing (CAT) was expressed as the SE of theta (SE[θ]). Test–retest reliability was assessed using intraclass correlation coefficients (ICCs) and smallest detectable change. Results All item banks had sufficient overall GRM fit (CFI >0.95, TLI >0.95, RMSEA <0.08) and no item misfit (all S‐X2 P > 0.001). High correlations (>0.70) were found between most PROMIS T scores and hypothesized PedsQL/C‐HAQ (sub)domains. Mobility, pain interference, and upper extremity item banks were able to discriminate between patients with active and inactive disease. Regarding reliability, PROMIS item banks outperformed legacy instruments. Post hoc CAT simulations outperformed short forms. Test–retest reliability was strong (ICC >0.70) for all full‐length item banks and short forms, except for the peer relationships item bank. Conclusion The pediatric PROMIS item banks displayed sufficient psychometric properties for Dutch children with JIA. PROMIS item banks are ready for use in clinical research and practice for children with JIA.
INTRODUCTION
In recent years, the focus of health care has been drifting toward the inclusion of health-related quality of life (HRQoL) outcomes for patients in research and daily clinical practice by administering patient-reported outcome measures (PROMs) (1)(2)(3)(4)(5)(6). Previous studies have shown that rheumatology could benefit greatly from the use of patient-reported outcomes, as patients experience a wide array of problems (7) for which there is a disconnect between patient-reported outcomes and outcomes reported by parents or clinicians (8). In clinical practice, there are often multiple PROMs available to measure the same construct/domain that differ in content, length, and scoring methods. These PROMs vary in their psychometric quality and often suffer from ceiling or floor effects when assessing patients who are outside the measurement range of the questionnaire. Most traditional PROMs (also known as legacy instruments) are scored using classical test theory (CTT), where all questions carry the same weight when calculating domain scores. The domain scores of these PROMs are incomparable due to the ordinal scoring methods used in CTT. In item response theory (IRT) modeling, the difficulty and discriminatory power of items can be taken into account when calculating a domain score. Additionally, IRT uses interval-based scores, which allows comparison of scores on the same metric. Therefore, a group of researchers from several US-based academic institutions and the National Institutes of Health initiated the creation of the Patient-Reported Outcomes Measurement Information System (PROMIS) (9,10), a new, universal set of IRT-based PROMs for adults and children that can accurately and quickly assess aspects of physical, mental, and social health of patients (9,11).
The US PROMIS group developed several item banks to assess relevant domains of physical, mental, and social health, such as fatigue, pain interference, or peer relationships (10). An item bank is a collection of a large number of items intended to measure 1 construct over a wide range of functioning, symptoms, or evaluations of well-being. This allows comparisons between different samples using the same PROM. The PROMIS item banks were developed using IRT modeling, which allows us to order items based on their difficulty. Using this information, items can be selected from the full-length item bank to create a short form, which measures a similar range of functioning as the full-length item bank. An online alternative to short forms is computerized adaptive testing (CAT). CAT uses the information of the IRT model (i.e., item difficulty and discrimination) and previous responses (11) to choose which items to administer to a specific patient. If, for example, a patient answers that he or she is never tired, the CAT will not offer an item about being exhausted to this patient, as the item about being exhausted has a higher difficulty. CAT thus provides more tailored items to patients than short forms, which makes the estimates of the construct more reliable (12). As long as items are selected from the same item bank, scores from short forms and CATs can be compared on the same scale.
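To make the adaptive item-selection logic concrete, the following is a minimal Python sketch of maximum-information item selection. It deliberately simplifies the procedure described above: it uses dichotomous 2PL items rather than the graded response model, and it does not update theta with an EAP estimate after each response. All item parameters, the precision target, and the test length are invented for illustration only.

import numpy as np

# Illustrative 2PL item bank: (discrimination a, difficulty b) per item.
# These values are invented; a real CAT would use calibrated GRM parameters.
ITEM_BANK = [(1.8, -1.0), (1.2, 0.0), (2.0, 0.5), (0.9, 1.5), (1.5, -0.5)]

def prob_endorse(theta, a, b):
    # 2PL probability of endorsing the item at level of functioning theta.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at theta: a^2 * P * (1 - P).
    p = prob_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, administered):
    # Pick the not-yet-administered item with maximum information at theta_hat.
    candidates = [
        (i, item_information(theta_hat, a, b))
        for i, (a, b) in enumerate(ITEM_BANK)
        if i not in administered
    ]
    return max(candidates, key=lambda c: c[1])[0]

theta_hat, administered, se = 0.0, set(), float("inf")
while se > 0.32 and len(administered) < 4:
    nxt = select_next_item(theta_hat, administered)
    administered.add(nxt)
    # A full CAT would re-estimate theta_hat from the observed response here;
    # this sketch only tracks the standard error implied by the test information.
    total_info = sum(item_information(theta_hat, *ITEM_BANK[i]) for i in administered)
    se = 1.0 / np.sqrt(total_info)
    print(f"administered item {nxt}, SE(theta) ~ {se:.2f}")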
In 2009, the Dutch-Flemish PROMIS group (www.dutchflemishpromis.nl) was founded, followed by the Dutch-Flemish pediatric PROMIS group in 2011, to translate and implement the PROMIS item banks in The Netherlands and Flanders, Belgium. The pediatric PROMIS group translated 9 full PROMIS item banks into Dutch-Flemish (13).
The goal of this study was to assess the psychometric properties of 8 Dutch-Flemish PROMIS pediatric item banks in a clinical sample of Dutch children with juvenile idiopathic arthritis (JIA). The application of PROMIS is highly anticipated within rheumatology (14,15), and psychometric properties of the pediatric item banks were previously assessed in children with JIA in the US (8), making comparisons possible.
In the current study, the structural validity of the item banks was investigated and construct validity was assessed by comparing the PROMIS instruments to legacy instruments (the Pediatric Quality of Life Inventory [PedsQL] and the Childhood Health Assessment Questionnaire [C-HAQ]) and by comparing scores from patients with active and inactive disease. Furthermore, we assessed the reliability of the individual measurements for full-length item banks, short forms, and CATs. Finally, we assessed the test-retest reliability of the PROMIS item banks.
PATIENTS AND METHODS
Participants. All children diagnosed with JIA, 8-18 years of age, and under treatment in the Emma Children's Hospital Amsterdam University Medical Centers, Onze Lieve Vrouwe Gasthuis West, the Reade center for Rehabilitation and Rheumatology in Amsterdam, or the Leiden University Medical Centre in Leiden, were eligible and asked to participate in the study between June 2015 and January 2017. The study was approved by the medical ethics committees of all the participating centers. An invitation was sent to children and their parents to log in to the study website (www.hetklikt.nu/promis). All participants provided informed consent. Participating children were asked to complete 8 full pediatric PROMIS item banks at the start of the study (T1) and again 10 days later (T2) to assess test-retest reliability. Additionally, participants were asked to complete the PedsQL and C-HAQ at T1. All questionnaires were completed online. A reminder for T1 and T2 was sent out 3 days after the initial invitation. Children unable to understand Dutch or children with limitations/disorders that made them unable to complete (online) questionnaires were excluded from the study. Nonrespondent data were not available.
Patient characteristics. Personal data on age and sex were provided by the children. Medical data on the type of JIA, presence of uveitis, medication use, age at disease onset, disease duration, physician score of disease activity, and the number of joints with arthritis (1 = monoarthritis, 2-4 = oligoarthritis, 5-10 = polyarthritis, >10 = severe polyarthritis) were extracted retrospectively by a pediatric rheumatology expert (MAJvR) from the electronic medical records. The type of JIA was categorized in accordance with the International League of Associations for Rheumatology criteria (16). Disease activity was extracted from medical records by a rheumatologist (MAJvR) using the 100-mm physician visual analog scale (VAS; range 0-100, with 0 indicating no disease activity and higher scores indicating more activity).
Measures. PROMIS item banks.
Eight full-length, Dutch PROMIS, version 1.0, pediatric self-report item banks (anger, anxiety, depressive symptoms, fatigue, pain interference, peer relationships, physical function mobility, and physical function upper extremity) were completed by the children. All item banks utilize a 7-day recall period. A 5-point Likert scale ranging from 1 ("never") to 5 ("almost always") is used for all item banks, except the mobility and upper extremity item banks. For these item banks, the response categories range from 1 ("not able to do") to 5 ("with no trouble"). Total scores are calculated by applying the original US IRT model to the data and estimating the level of functioning of the patient (theta). This level of functioning is transformed into a T score, with a score of 50 representing the mean of the general US population (SD 10). For all item banks, higher scores represent more of the construct (e.g., better mobility or more pain interference). Scores can also be calculated for the standard PROMIS short forms, consisting of 8 items for all domains, except for anger (5 items) and fatigue (10 items), by extracting short-form item responses from the full-length item bank.
PedsQL generic scale 4.0. The PedsQL (23) is a 23-item questionnaire that assesses the self-reported HRQoL of children (ages 8-18 years) across the following 4 domains: physical functioning (8 items); emotional functioning (5 items); social functioning (5 items); and school functioning (5 items). The PedsQL utilizes a 7-day recall period. Items are scored using a 5-point Likert scale ranging from 1 ("never a problem") to 5 ("almost always a problem"). The response options are transformed into values of 0, 25, 50, 75, and 100, respectively. Domain scores (range 0-100, with a higher score representing better functioning) are calculated by summing and averaging the items within each domain. The total PedsQL score (range 0-100) is calculated by averaging all individual item scores. The PedsQL is an often used, validated tool for Dutch children with JIA (7,24).
C-HAQ. The C-HAQ is a 30-item questionnaire that measures self-reported functional ability in children (ages 8-18 years) (25). The C-HAQ is composed of the following 8 categories: dressing and grooming (4 items); arising (2 items); eating (3 items); walking (2 items); hygiene (5 items); reach (4 items); grip (5 items); and activities (5 items). The C-HAQ utilizes a 1-week recall period. Each item on the C-HAQ is scored from 0 ("without any difficulty") to 3 ("unable to do"). The highest scoring item within a category determines the score for that category.
The disability index (range 0-3) averages the category scores. Additionally, the C-HAQ contains two 100-mm VAS to measure pain (0 = no pain, 100 = very severe pain) and well-being (0 = very well, 100 = very poor) over the past week. The C-HAQ is a validated tool for assessing Dutch children with JIA (25,26) and a recommended instrument for assessing daily functioning in rheumatology patients (27).
Statistical analysis. Patient characteristics.
Descriptive analyses were performed to describe sociodemographic and clinical characteristics of the children, using SPSS, version 24.0 (28). All further analyses were performed in R (29).
Structural validity. To assess the structural validity of the PROMIS item banks, a graded response model (GRM) was fitted to each of the item banks. A GRM is an IRT model for items with ordinal response categories and requires several assumptions to be met, such as unidimensionality, local independence, and monotonicity. To assess unidimensionality of each item bank, a confirmatory factor analysis (CFA) was performed using the R-package lavaan, version 0.6-3 (30). An acceptable fit of a unidimensional model is indicated by a comparative fit index (CFI) value and Tucker-Lewis index (TLI) score >0.95, a standardized root mean square residual (SRMR) value <0.10, and a root mean square error of approximation (RMSEA) value <0.08 (31). Scaled indices were reported. Local independence was assessed by looking at the residual correlations in the CFA model. An item pair is considered locally independent when it has a residual correlation <0.20 (32). Finally, monotonicity was assessed using Mokken scaling (33,34). The assumption of monotonicity is met when the item H values of all items are ≥0.30 and the H value of the entire scale is ≥0.50.
Once the assumptions were met, a GRM was fitted to each item bank to estimate item discrimination and threshold parameters using the expectation-maximization algorithm within the R-package mirt, version 1.29 (35). The discrimination parameter (α) represents the ability of an item to distinguish between patients with a different level of functioning (θ). The threshold parameters (β) represent the required level of functioning of a person to choose a higher response category over a lower response category. Previous simulation studies have shown that fitting a GRM requires a large sample size of ~500 respondents in most cases, but that increased unidimensionality and high discriminatory parameters of an item bank reduce the number of respondents required (36,37). As the items in PROMIS item banks were specifically chosen based on their discriminatory power and their contribution to measuring a single construct, we expected that a smaller sample size could be used. Caution is advised when assessing the estimated parameters, however, as other sample characteristics (i.e., skewness of responses) can impact parameter calibration. Model fit of the GRM model was assessed using the same CFI, TLI, SRMR, and RMSEA criteria as for the CFA. Item fit was assessed using the S-X2 statistic (38), which calculates the differences between observed and expected responses under the GRM model. A P value of the S-X2 statistic <0.001 for an item indicates item misfit (32).
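As a concrete illustration of the graded response model used here, the short Python sketch below computes the category probabilities of a single polytomous item from a discrimination parameter and a set of ordered thresholds; the parameter values are invented for demonstration and are not taken from the study's calibration.

import numpy as np

def grm_category_probs(theta, a, thresholds):
    # Graded response model: P(X = k | theta) for one item.
    # a          -- discrimination parameter (alpha)
    # thresholds -- ordered threshold parameters (beta_1 < ... < beta_{m-1})
    # Returns an array of m category probabilities that sums to 1.
    cumulative = [1.0]  # P(X >= 0 | theta) is always 1
    for b in thresholds:
        cumulative.append(1.0 / (1.0 + np.exp(-a * (theta - b))))
    cumulative.append(0.0)  # P(X >= m | theta) is always 0
    cumulative = np.array(cumulative)
    return cumulative[:-1] - cumulative[1:]

# Invented parameters for a 5-category item (e.g., "never" ... "almost always").
probs = grm_category_probs(theta=0.5, a=2.0, thresholds=[-1.5, -0.5, 0.5, 1.5])
print(probs, probs.sum())  # the five probabilities sum to 1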
Construct validity. Construct validity was investigated by assessing convergent and discriminative validity. Convergent validity was assessed by correlating the PROMIS item bank T scores to the PedsQL or C-HAQ using Pearson's correlation coefficient (r). A strong correlation (>0.70 or lower than -0.70) was expected between PROMIS T scores and the sum scores of the PedsQL and C-HAQ scales measuring similar constructs. Correlations with unrelated constructs were expected to be lower (Δr > 0.10).
Discriminative (known-groups) validity was assessed by comparing the T scores of PROMIS item banks between patients with active and inactive disease using an independent sample t-test. Disease activity can be represented by results from the physician VAS and the number of joints with arthritis. However, a combination of these variables would result in an active disease group too small for valid comparison. The correlation between these 2 variables was high (r = 0.75), indicating that a combination of these variables would not impact the results much. Therefore, the physician VAS was used to discriminate active (>0) and inactive (0) disease, as this resulted in large enough groups for valid and reliable comparisons. It was expected that the physical health domains would be most affected by JIA (7,24). Mobility and upper extremity T scores were hypothesized to be significantly lower for patients with active disease. The pain interference T scores were expected to be significantly higher for patients with active disease. For the remaining item banks, no differences in T scores were hypothesized between patients with active and inactive disease. Each PROMIS item bank was considered to have sufficient construct validity if at least 75% of the hypotheses were confirmed.
Reliability. In IRT, the reliability of an item bank can vary across levels of the measured construct. The estimated level of functioning is represented by θ, which is standardized to have a mean of 0 and an SD of 1 in the calibration sample. Each response pattern has a θ estimate and an associated SE of theta estimate (SE[θ]). An SE(θ) of 0.32 corresponds to a reliability of 0.90, since reliability can be approximated as 1 − SE(θ)² when θ is standardized. To compare the reliability of the PROMIS item banks to similar domains on the PedsQL, a GRM was fit to each PedsQL domain to calculate the θ estimates and SE(θ). Thetas and SE(θ)s were estimated for the full-length PROMIS item banks and short forms using the expected a posteriori (EAP) estimator. Post hoc CAT simulations were performed using R-package catR, version 3.16 (39) for each item bank, using the maximum posterior weighted information selection criterion and the EAP estimator (40) to assess whether or not a CAT would outperform short forms. The starting item for each CAT was the item that offered most information at the mean of the population (θ = 0). The maximum number of items for the CAT simulation was set to the number of items in the short form of the same item bank, which ensured that the CAT did not administer more items than the short form. The stopping rule was an SE(θ) <0.32 (41).
Test-retest reliability. Test-retest reliability was assessed for the full-length item banks and the short forms by calculating the intraclass correlation coefficient (ICC; two-way random-effects model for absolute agreement) (42) of the T scores for the patients who completed the PROMIS item banks twice (within 4 weeks). An ICC >0.70 was considered acceptable (42). The smallest detectable change (SDC) was calculated for all full-length item banks as 1.96 × √2 × (SD × √[1 − ICC]). The SDC represents the smallest change in score that falls outside of the measurement error (42).
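To make the measurement-error statistics concrete, the following minimal Python sketch derives the SEM and the SDC from an ICC and a sample SD, assuming the SEM is computed as SD × √(1 − ICC) as in the reconstructed formula above; the input values are invented for illustration and merely produce a value within the range reported in the Results (12.1–18.7).

import math

def smallest_detectable_change(sd, icc):
    # SDC = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC).
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Invented example: an SD of 10 T-score points and an ICC of 0.80 give an SDC of ~12.4.
print(round(smallest_detectable_change(sd=10.0, icc=0.80), 1))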
Patient characteristics.
A total of 154 children with JIA completed all PROMIS pediatric item banks, the PedsQL, and the C-HAQ. A total of 111 children completed the item banks twice, with a time interval ranging from 1 to 14 weeks (mean 2.6 weeks). Patient characteristics are shown in Table 1.
Structural validity. Unidimensionality was sufficient for all item banks except for the anxiety item bank (RMSEA = 0.103) (Table 2). Local independence did not hold for 4 item banks (anxiety, mobility, peer relationships, and upper extremity). As the percentages of locally dependent item pairs were low (1-4%), the GRM analyses were performed without removing items. The assumption of monotonicity was met for all items and item banks. The item parameters and item fit statistics of the fitted GRMs are available in Supplementary 22.25. Two discrimination parameters of the upper extremity item bank had outlying discriminatory values (α > 10). For the peer relationships, mobility, and upper extremity item banks, not all item thresholds could be calculated, as not all response categories were used by the respondents. There were no items with item misfit (S-X2 P < 0.001) in any of the item banks.
Construct validity. The correlations between the PROMIS item banks, the PedsQL, and the C-HAQ are shown in Table 3. For all item banks, at least 1 expected strong correlation (>0.70) with a relevant PedsQL or C-HAQ subdomain was found, except for the peer relationship item bank. For the mobility and upper extremity item banks, additional correlations were found that were nearly the same strength (Δr < 0.10) as the hypothesized strong correlation with the PedsQL physical subscale.
Discriminative validity was assessed by comparing T scores from patients with active disease (n = 35) to those from patients with inactive disease (n = 105). The results are shown in Table 3. Patients with active disease scored significantly lower on the mobility item bank (mean difference -4.62; t(138) = 2.50, P = 0.014) and the upper extremity item bank (mean difference -3.81; t(137) = 2.17, P = 0.032) than patients with inactive disease. For the pain interference item bank, patients with active disease scored significantly higher (mean difference 4.93; t(136) = -2.70, P = 0.008) than patients with no disease activity. For the anger, anxiety, depressive symptoms, fatigue, and pain interference item banks, at least 75% of the hypotheses regarding construct validity were confirmed. The mobility, upper extremity, and peer relationships item banks did not meet the criterion (71%).
Reliability. All PROMIS item banks provided reliable measurements (SE[θ] < 0.32) for the sample mean of 0 and a range of at least 2 SD of theta in the direction of clinical interest (e.g., higher thetas for depressive symptoms, lower thetas for mobility). The only exception was the upper extremity item bank, which did not reach satisfactory reliability for the mean. The reliability of measurements of the full item banks, short forms, post hoc CATs, and their related subdomains from the PedsQL across the range of theta for all item banks is visualized in Figures 1 and 2. The number of reliable measurements, the number of items used, and the average SE(θ) value of the full item banks, short forms, and CATs are shown in Table 4.
Test-retest reliability. Ten patients were removed from the test-retest reliability analyses, as they did not complete the second measurement within 4 weeks of the initial measurement. Most item banks displayed sufficient (ICC >0.70) test-retest reliability. Only the peer relationships item bank displayed moderate test-retest reliability (ICC 0.69). The SDC ranged from 12.1 to 18.7. The SDC and ICC values are shown in Table 4.
DISCUSSION
This is the first study to assess the psychometric properties of the pediatric PROMIS item banks in a Dutch clinical sample. The PROMIS item banks all displayed sufficient validity and reliability for use in clinical practice for children with JIA. All item banks fit the underlying IRT model. The item banks correlated highly with similar (sub)domains from the legacy instruments PedsQL and C-HAQ. The item banks pain interference, mobility, and upper extremity were able to discriminate between active and inactive JIA. Other studies have shown that issues regarding physical health commonly occur in these 3 domains in children with JIA (7,24). All item banks measured their domain-specific levels of functioning accurately across a wide range of functioning, particularly in the clinically most relevant direction from the mean. The PROMIS short forms and CATs provided reliable estimations for the majority of patients. CATs outperformed short forms in terms of test length and number of reliably estimated patients.
The aim of the pediatric Dutch-Flemish PROMIS group is to improve the measurements of patient-reported outcomes in The Netherlands and Belgium by providing researchers and health care professionals access to the generic pediatric PROMIS item banks, short forms, and CATs. The current study supports the application of CATs in clinical samples. The PROMIS item banks outperformed legacy instruments (the PedsQL) by providing more reliable measurements across a broader range of functioning.
A limitation of this study is that our sample was small and contained a large proportion of patients with inactive disease. Due to a combination of relatively good health and a small sample size, the physical function item banks did not have enough variation in responses to provide reliable parameter estimates; particularly, 2 items from the upper extremity item bank had outlying discrimination parameters due to a lack of variety of item responses. Due to the skewed data, a moderate ceiling effect was present for the mobility and upper extremity item banks. This might indicate that there are not enough items with a high difficulty present in these item banks to discriminate between patients with healthier levels of functioning. However, having fewer precise measurements at a healthy level of functioning is less important than having precise measurements in the clinical range. The skewness of the data also has an effect on the informative value of items, and consequently, on the SE(θ). The item banks peer relationships, mobility, and upper extremity displayed lower item thresholds and some locally dependent item pairs, also due to skewness. Similar skewed data were found in a US sample of patients with JIA (8). Despite the small sample size, this study shows strong psychometric properties in this population. The psychometric properties of the PROMIS item banks in this study were similar to the properties reported in the developmental phase of the instruments (17)(18)(19)(20)(21)(22) in terms of IRT model and item fit. For the study of US children with JIA, fit indices were not available. Brandon et al (8) investigated the discriminative validity across different levels of disease activity in children with JIA. Their study found discriminative abilities for the fatigue, mobility, pain interference, and upper extremity item banks. Our findings support these results, except for the fatigue item bank. This is possibly due to different methods of determining disease activity. In this study, a comparison was only made between presence and total absence of disease activity, as there were only limited retrospective data available to assess disease activity, and few patients with disease activity to facilitate group comparisons. The reliability of the measurements of the Dutch JIA sample was generally higher than that found in the US sample (8). This is possibly due to differences in model calibration and parameterization. To our knowledge, no studies have been published that assess the test-retest reliability of the full pediatric item banks. In the current study, test-retest reliability was sufficient for all item banks, except the peer relationships item bank (ICC 0.69). Varni et al (43) assessed the test-retest reliability of the pediatric short forms and found similar results. Additionally, the current study displayed similar test-retest reliability for short forms and full-length item banks.
To enable international comparisons of PROMIS T scores, differential item functioning (DIF) needs to be assessed between The Netherlands and the US. As the US data on JIA children were unobtainable, assessing DIF was not possible in this study. A next step is to calibrate the pediatric item banks in a normative Dutch sample and perform DIF analyses with the US normative sample.
In conclusion, the current study demonstrates sufficient psychometric properties for the pediatric PROMIS item banks in children with JIA and provides evidence for the advantages of using the PROMIS CATs in Dutch clinical populations. |
import { RundownTimingContext } from '../../../../lib/rundown/rundownTiming'
import { WithTiming, withTiming } from './withTiming'
interface IProps {
filter?: (timingDurations: RundownTimingContext) => any
children?: (timingDurations: RundownTimingContext) => JSX.Element | null
}
export const RundownTimingConsumer = withTiming<IProps, {}>((props) => ({
filter: props.filter,
}))(({ timingDurations, children }: WithTiming<IProps>) => {
return children ? children(timingDurations) : null
})
|
/**
* Close the AudioInputStream objects as necessary.
* @throws IOException
*/
private void close() throws IOException {
if (isEncoded)
decodedStream.close();
if (encodedStream != null) {
encodedStream.close();
encodedStream = null;
}
decodedStream = null;
} |
package com.etolmach.spring.jcommander;
/**
* @author etolmach
*/
public interface JCommandWrapper {
/**
* Get name
*
* @return Command name
*/
String getName();
/**
* Get parameter bean
*
* @return Parameter bean
*/
Object getParameterBean();
}
|
/**
* Defines the known packet types for the core Artemis protocol.
* @author rjwut
*/
public final class CorePacketType {
public static final String COMMS_MESSAGE = "commsMessage";
public static final String COMM_TEXT = "commText";
public static final String CONNECTED = "connected";
public static final String OBJECT_BIT_STREAM = "objectBitStream";
public static final String OBJECT_DELETE = "objectDelete";
public static final String PLAIN_TEXT_GREETING = "plainTextGreeting";
public static final String SIMPLE_EVENT = "simpleEvent";
public static final String START_GAME = "startGame";
public static final String VALUE_INT = "valueInt";
/**
* No instantiation allowed.
*/
private CorePacketType() { }
} |
use std::sync::Arc;
use std::time::{Duration, SystemTime};
use bytesize;
use shiplift::{rep::Image, ImageListOptions};
use termion::event::Key;
use tui::{
layout::Rect,
style::{Color, Modifier, Style},
widgets::{Block, Borders, Row, Table, Widget},
Frame,
};
use crate::app::AppCommand;
use crate::docker::DockerExecutor;
use crate::views::{human_duration, View};
use crate::Backend;
pub struct ImagesListView {
images: Vec<Image>,
selected: usize,
}
impl ImagesListView {
pub fn new() -> ImagesListView {
ImagesListView {
images: Vec::new(),
selected: 0,
}
}
}
impl View for ImagesListView {
fn handle_input(&mut self, key: Key, _docker: Arc<DockerExecutor>) -> Option<AppCommand> {
// Use saturating_sub so an empty image list does not underflow the index.
let max_index = self.images.len().saturating_sub(1);
match key {
Key::Down | Key::Char('j') => {
if !self.images.is_empty() {
self.selected = (self.selected + 1).min(max_index);
}
Some(AppCommand::NoOp)
}
Key::Up | Key::Char('k') => {
if !self.images.is_empty() && self.selected > 0 {
self.selected -= 1;
}
Some(AppCommand::NoOp)
}
Key::PageDown | Key::Ctrl('d') => {
if !self.images.is_empty() {
self.selected = (self.selected + 10).min(max_index);
}
Some(AppCommand::NoOp)
}
Key::PageUp | Key::Ctrl('u') => {
if !self.images.is_empty() {
self.selected = if self.selected >= 10 {
self.selected - 10
} else {
0
};
}
Some(AppCommand::NoOp)
}
Key::End | Key::Char('G') => {
if !self.images.is_empty() {
self.selected = max_index;
}
Some(AppCommand::NoOp)
}
Key::Home | Key::Char('g') => {
if !self.images.is_empty() {
self.selected = 0;
}
Some(AppCommand::NoOp)
}
_ => None,
}
}
fn refresh(&mut self, docker: Arc<DockerExecutor>) {
let options = ImageListOptions::builder().all(true).build();
let images = docker.images(&options).unwrap();
self.images = images;
if self.images.is_empty() {
self.selected = 0;
} else if self.selected >= self.images.len() {
self.selected = self.images.len() - 1;
}
}
fn draw(&self, t: &mut Frame<Backend>, rect: Rect) {
let selected_style = Style::default().fg(Color::Yellow).modifier(Modifier::BOLD);
let normal_style = Style::default().fg(Color::White);
let header = ["Image ID", "Parent", "Tag", "Created", "Virtual Size"];
let height = (rect.height as usize).saturating_sub(4); // 2 for border + 2 for header
let offset = if self.selected >= height {
self.selected - height + 1
} else {
0
};
let rows: Vec<_> = self
.images
.iter()
.enumerate()
.map(|(i, c)| {
let creation_timestamp = SystemTime::UNIX_EPOCH + Duration::from_secs(c.created);
let duration = creation_timestamp.elapsed().unwrap();
let mut duration_str = human_duration(&duration);
duration_str.push_str(" ago");
let id = if c.id.starts_with("sha256:") {
(&c.id[7..17]).to_string()
} else {
c.id.clone()
};
let parent = if c.parent_id.starts_with("sha256:") {
(&c.parent_id[7..17]).to_string()
} else {
c.parent_id.clone()
};
let data: Vec<String> = vec![
id,
parent,
c.repo_tags
.as_ref()
.and_then(|tags| tags.first())
.cloned()
.unwrap_or_else(|| "<none>".to_string()),
duration_str,
bytesize::to_string(c.virtual_size, false),
];
if i == self.selected {
Row::StyledData(data.into_iter(), selected_style)
} else {
Row::StyledData(data.into_iter(), normal_style)
}
})
.skip(offset)
.collect();
Table::new(header.iter(), rows.into_iter())
.block(Block::default().borders(Borders::ALL))
.widths(&[10, 10, 45, 15, 20]) // TODO be smarter with sizes here
.render(t, rect);
}
}
|
extern crate neon;
use neon::prelude::*;
use std::collections::HashMap;
pub struct StaticAssetMap {
static_assets: HashMap<String, &'static str>,
}
impl StaticAssetMap {
fn new() -> StaticAssetMap {
let mut result = StaticAssetMap {
static_assets: HashMap::new(),
};
result.static_assets.insert(
"layout.svg".to_owned(),
include_str!("../../static_assets/layout.svg"),
);
result.static_assets.insert("test".to_owned(), "test");
result
}
}
declare_types! {
pub class JsStaticAssetMap for StaticAssetMap {
init(_) {
Ok(StaticAssetMap::new())
}
method get(mut cx) {
let name = cx.argument::<JsString>(0)?.value();
let this = cx.this();
let result = {
let guard = cx.lock();
let asset_map = this.borrow(&guard);
asset_map.static_assets[&name]
};
Ok(cx.string(&result).upcast())
}
}
}
fn hello(mut cx: FunctionContext) -> JsResult<JsString> {
Ok(cx.string("hello node"))
}
register_module!(mut module_context, {
module_context.export_function("hello", hello)?;
module_context.export_class::<JsStaticAssetMap>("StaticAssetMap")
});
|
#include <bits/stdc++.h>
using namespace std;
class ThanhPhan
{
protected:
string text;
int mauText;
int mauNen;
public:
ThanhPhan();
virtual ~ThanhPhan();
virtual void nhap();
virtual int getMauNen();
virtual int getMauText();
};
ThanhPhan::ThanhPhan()
{
text = "";
mauNen = 0;
mauText = 0;
}
ThanhPhan::~ThanhPhan() {}
void ThanhPhan::nhap()
{
cout << "Chon mau bang so duoi day : \n";
cout << "1. Red, 2. Orange, 3.Yellow, 4. Spring Green\n";
cout << "5. Red, 6. Orange, 7.Yellow, 8. Spring Green\n";
cout << "9. Red, 10. Orange, 11.Yellow, 12. Spring Green\n";
cout << "Chon mau nen : ";
cin >> mauNen;
cout << "Chon mau text : ";
cin >> mauText;
cin.ignore();
cout << "Nhap text : ";
getline(cin, text);
}
int ThanhPhan::getMauNen()
{
return mauNen;
}
int ThanhPhan::getMauText()
{
return mauText;
}
class Nhan : public ThanhPhan
{
public:
Nhan(){};
~Nhan(){};
void nhap()
{
cout << "Nhap label: \n";
ThanhPhan::nhap();
}
};
class Nut : public ThanhPhan
{
public:
Nut(){};
~Nut(){};
void nhap()
{
cout << "Nhap button: \n";
cout << "Chon 1: Nhap button image\n Chon 2: Button text\n";
int tl;
cin >> tl;
if (tl == 1)
{
cout << "Nhap button image: \n";
cout << ".....Xog!!";
text = "";
mauNen = 0;
mauText = 0;
}
else
{
cout << "Nhap button text: \n";
ThanhPhan::nhap();
}
}
};
bool isMauBoTuc(int x, int y)
{
return abs(x - y) == 6;
}
bool isMauTuongDong(int x, int y, int z) // x, y, z in increasing order
{
int data[15] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1, 2};
return (data[x + 1] == y && data[x + 2] == z);
}
int main()
{
ThanhPhan *DS[1000];
int n;
cout << "Nhap so luong thanh phan : ";
cin >> n;
for (int i = 0; i < n; i++)
{
cout << "Chon 1: Nhap Label\nChon 2: Nhap button\n";
int tl;
cin >> tl;
if (tl == 1)
DS[i] = new Nhan();
else
DS[i] = new Nut();
DS[i]->nhap();
cout << endl
<< endl;
}
// part b
if (isMauBoTuc(DS[0]->getMauNen(), DS[0]->getMauText()))
cout << "Thanh phan dau tien la mau bo tuc!\n";
else
cout << "Thanh phan dau tien Khong la mau bo tuc!\n";
// part c
int dd[13];
for (int i = 0; i <= 12; i++)
dd[i] = 0;
for (int i = 0; i < n; i++)
dd[DS[i]->getMauNen()]++;
int spt = 0;
int mau[13];
for (int i = 1; i <= 12; i++)
if (dd[i] != 0)
mau[spt++] = i;
cout << "Mau nen cac thanh phan: ";
if (spt == 1)
cout << "Theo nguyen tac: mau don sac\n";
else if (spt == 2 && isMauBoTuc(mau[0], mau[1]))
cout << "Theo nguyen tac: Mau bo tuc\n";
else if (spt == 3 && isMauTuongDong(mau[0], mau[1], mau[2]))
cout << "Theo nguyen tac: Mau tuong dong\n";
else
cout << "Khong theo nguyen tac nao!\n";
return 0;
}
|
/**
* Saves node probabilities to an output file.
* @param outputFile Output file name (with path).
*/
public void saveNodeProbabilities(String outputFile) {
Set<Integer> nodes = new HashSet<Integer>();
for (Node node: this.rna.getRoadNetwork().getNodeIDtoNode())
if (node != null)
nodes.add(node.getID());
String NEW_LINE_SEPARATOR = System.lineSeparator();
ArrayList<String> header = new ArrayList<String>();
header.add("nodeID");
header.add("startProb");
header.add("nodeProb");
FileWriter fileWriter = null;
CSVPrinter csvFilePrinter = null;
CSVFormat csvFileFormat = CSVFormat.DEFAULT.withRecordSeparator(NEW_LINE_SEPARATOR);
try {
fileWriter = new FileWriter(outputFile);
csvFilePrinter = new CSVPrinter(fileWriter, csvFileFormat);
csvFilePrinter.printRecord(header);
for (int nodeID: nodes) {
ArrayList<String> record = new ArrayList<String>();
record.add(Integer.toString(nodeID));
Double startProb = this.getThetaEstimateStart().get(nodeID);
if (startProb == null) startProb = 0.0;
record.add(Double.toString(startProb));
Double endProb = this.getThetaEstimateEnd().get(nodeID);
if (endProb == null) endProb = 0.0;
record.add(Double.toString(endProb));
csvFilePrinter.printRecord(record);
}
} catch (Exception e) {
LOGGER.error(e);
} finally {
try {
if (fileWriter != null) {
fileWriter.flush();
fileWriter.close();
}
if (csvFilePrinter != null) {
csvFilePrinter.close();
}
} catch (IOException e) {
LOGGER.error(e);
}
}
} |
/**
* This is a generic ClassLoader of which many examples abound.
* We do some tricks with resolution from a hashtable of names and bytecodes,
* and InputStreams as well.<p>
* If you want to find a class and throw an exception if it's not in the available cache, use findClass.<br/>
* If you want to load a class by all means available and make it available, use loadClass.<p>
* This is to support the Relatrix JAR load, and remote classloading.
* We can operate in embedded mode, or remote using a client to retrieve bytecode from server.
* @author Jonathan Groff (C) NeoCoreTechs 1999, 2000, 2020
*/
public class HandlerClassLoader extends ClassLoader {
private static boolean DEBUG = false;
private static boolean DEBUGSETREPOSITORY = true;
private static ConcurrentHashMap<String,Class> cache = new ConcurrentHashMap<String,Class>();
private static ConcurrentHashMap<String, byte[]> classNameAndBytecodes = new ConcurrentHashMap<String, byte[]>();
private static boolean useEmbedded = false;
public static String defaultPath = "/etc/"; // bytecode repository path
public static RelatrixClientInterface remoteRepository = null;
private ClassLoader parent = null;
static int size;
public HandlerClassLoader() { }
/**
* Variation when specified on command line as -Djava.system.class.loader=com.neocoretechs.relatrix.server.HandlerClassLoader -DBigSack.properties="../BigSack.properties"
* System environment variable RemoteClassLoader is set as remote node name, port is assumed default of 9999
* @param parent
*/
public HandlerClassLoader(ClassLoader parent) {
super(parent);
this.parent = parent;
if(DEBUG)
System.out.println("DEBUG: c'tor: HandlerClassLoader with parent:"+parent);
try {
String remote = System.getenv("RemoteClassLoader");
if(remote != null) {
String hostName = InetAddress.getLocalHost().getHostName();
connectToRemoteRepository(hostName, remote, 9999);
}
} catch (IllegalAccessException | IOException e) {
e.printStackTrace();
}
}
/**
* Variation when local and remote are assumed to be this node and port is default 9999
* @throws IOException
* @throws IllegalAccessException
*/
public static void connectToRemoteRepository() throws IOException, IllegalAccessException {
useEmbedded = false;
String hostName = InetAddress.getLocalHost().getHostName();
remoteRepository = new RelatrixKVClient(hostName, hostName, 9999);
}
/**
* Variation when remote is located on a different node, port is still assumed the default of 9999
* @param remote
* @throws IOException
* @throws IllegalAccessException
*/
public static void connectToRemoteRepository(String remote) throws IOException, IllegalAccessException {
useEmbedded = false;
String hostName = InetAddress.getLocalHost().getHostName();
remoteRepository = new RelatrixKVClient(hostName, remote, 9999);
}
/**
* Variation when remote is different node, and port has been set to something other than standard default
* @param remote
* @param port
* @throws IOException
* @throws IllegalAccessException
*/
public static void connectToRemoteRepository(String remote, int port) throws IOException, IllegalAccessException {
useEmbedded = false;
String hostName = InetAddress.getLocalHost().getHostName();
remoteRepository = new RelatrixKVClient(hostName, remote, port);
}
/**
* Variation when everything is different, somehow
* @param local
* @param remote
* @param port
* @throws IOException
* @throws IllegalAccessException
*/
public static void connectToRemoteRepository(String local, String remote, int port) throws IOException, IllegalAccessException {
useEmbedded = false;
remoteRepository = new RelatrixKVClient(local, remote, port);
}
/**
* Local repository for embedded mode, no remote server
* @param path Path to tablespace and log parent directories
* @throws IOException
* @throws IllegalAccessException
*/
public static void connectToLocalRepository(String path) throws IOException, IllegalAccessException {
useEmbedded = true;
if(path != null) {
if(!path.endsWith("/"))
path += "/";
defaultPath = path;
}
BigSackAdapter.setTableSpaceDir(defaultPath+"BytecodeRepository/Bytecodes");
}
/**
* Find a class by the given name
*/
public synchronized Class findClass(String name) throws ClassNotFoundException {
Class c;
if ( (c = cache.get(name)) != null) {
if(DEBUG)
System.out.println("DEBUG:"+this+".findClass("+name+") return cache.get");
return c;
}
throw new ClassNotFoundException(name+" not found in HandlerClassLoader.findClass()");
}
/**
* loadClass will attempt to load the named class. If it is not found in the cache,
* the system, or the user class loaders, it will attempt to use the hashtable of
* names and bytecodes set up from defineClasses. defineClass will call this on
* attempting to resolve a class, so we have to be ready with the bytes.
* @param name The name of the class to load
* @param resolve true to call resolveClass()
* @return The resolved Class Object
* @throws ClassNotFoundException If we can't load the class from system, or loaded, or cache
*/
public synchronized Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
if(DEBUG)
System.out.println("DEBUG:"+this+".loadClass("+name+")");
Class c = null;
try {
c = Class.forName(name); // can it be loaded by normal means? and initialized?
return c;
} catch(Exception e) {
if(DEBUG) {
System.out.println("DEBUG:"+this+".loadClass Class.forName("+name+") exception "+e);
e.printStackTrace();
}
}
try {
c = findSystemClass(name);
} catch (Exception e) {
if(DEBUG) {
System.out.println("DEBUG:"+this+".loadClass findSystemClass("+name+") exception "+e);
e.printStackTrace();
}
}
if (c == null) {
c = cache.get(name);
} else {
if(DEBUG)
System.out.println("DEBUG:"+this+".loadClass exit found sys class "+name+" resolve="+resolve);
return c;
}
if (c == null) {
c = findLoadedClass(name);
} else {
if(DEBUG)
System.out.println("DBUG:"+this+".loadClass exit cache hit:"+c+" for "+name+" resolve="+resolve);
return c;
}
// this is our last chance, otherwise noClassDefFoundErr and we're screwed
if (c == null) {
byte[] bytecodes = classNameAndBytecodes.get(name);
if(DEBUG)
System.out.println("DEBUG: "+this+" Attempt to retrieve "+name+" from classNameAndBytecodes");
// grab it from the repository only if it was not already in the local bytecode map
try {
if (bytecodes == null)
bytecodes = getBytesFromRepository(name);
if( bytecodes == null) {
// blued and tattooed
if(DEBUG)
System.out.println(this+".LoadClass bytecode not found in repository for "+name);
throw new ClassNotFoundException("The requested class: "+name+" can not be found on any resource path");
}
} catch(Exception e) {
throw new ClassNotFoundException("The requested class: "+name+" can not be found on any resource path");
}
c = defineClass(name, bytecodes, 0, bytecodes.length);
if(DEBUG)
System.out.println("DEBUG:"+this+" Putting class "+name+" of class "+c+" to cache with "+bytecodes.length+" bytes");
cache.put(name, c);
} else {
if(DEBUG)
System.out.println("DEBUG:"+this+".loadClass exit found loaded "+name+" resolve="+resolve);
return c;
}
//if (resolve)
resolveClass(c);
if(DEBUG)
System.out.println("DEBUG:"+this+".loadClass exit resolved "+name+" resolve="+resolve);
return c;
}
/**
* Define a single class by name and byte array.
* We will attempt to liberate it from the cache first; if it's not
* there, we go defining.
* @param name The class name
* @param data The byte array to get bytecodes from
*/
public Class defineAClass(String name, byte data[]) {
return defineAClass(name, data, 0, data.length);
}
/**
* Define a single class by name and position in byte array.
* We will attempt to liberate it from the cache first; if it's not
* there, we go defining.
* @param name The class name
* @param data The byte array to get bytecodes from
* @param offset The offset to above array
* @param length The length of bytecodes at offset
*/
public synchronized Class defineAClass(String name, byte data[], int offset, int length) {
// System.out.println("HandlerClassLoader.defineAClass enter "+name);
Class c;
// if ( (c = (Class)cache.get(name)) != null) {
// System.out.println("HandlerClassLoader.defineAClass return cache.get "+name);
// return c;
// }
// force an update
classNameAndBytecodes.put(name, data); // fix later for offset
c = defineClass(name, data, offset, length);
cache.put(name, c);
// System.out.println("HandlerClassLoader.defineAClass return cache.put "+name);
return c;
}
public synchronized void defineClasses(String jarFile) throws IOException {
defineClasses(new JarFile(jarFile));
}
public synchronized void defineClasses(JarFile jarFile) throws IOException {
Enumeration<JarEntry> e = jarFile.entries();
byte[] buffer = new byte[4096];
while (e.hasMoreElements()) {
JarEntry entry = e.nextElement();
String entryname = entry.getName();
if (!entry.isDirectory() && entryname.endsWith(".class")) {
String classname = entryname.substring(0, entryname.length() - 6);
if (classname.startsWith("/")) {
classname = classname.substring(1);
}
classname = classname.replace('/', '.');
//entry.getSize();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int num;
try {
InputStream data = jarFile.getInputStream(entry);
while ((num = data.read(buffer)) > 0) {
baos.write(buffer, 0, num);
}
baos.flush();
Class<?> c = defineAClass(classname, baos.toByteArray(), 0, baos.size());
} catch (NoClassDefFoundError | UnsatisfiedLinkError ex) {}
}
}
}
/**
* Define classes from byte array JAR
* @param jarFile The JAR byte array
*/
@Deprecated
public synchronized void defineClasses(byte jarFile[]) {
defineClassStream(new ByteArrayInputStream(jarFile));
}
/**
* Define a set of classes from JAR input stream
* @param in the inputstream with JAR format
*/
@Deprecated
public synchronized void defineClassStream(InputStream in) {
// we can't seem to get size from JAR, so big buf
byte[] bigbuf = new byte[500000];
try {
ZipInputStream zipFile = new ZipInputStream(in);
for (ZipEntry entry = zipFile.getNextEntry(); entry != null; entry =zipFile.getNextEntry()) {
if (!entry.isDirectory() || entry.getName().indexOf("META-INF") == -1 ) {
if( entry.getName().endsWith(".class") ) {
String entryName = entry.getName().replace('/', '.');
// System.out.println(String.valueOf(entry.getSize()));
entryName = entryName.substring(0,entryName.length()-6);
int i = 0;
int itot = 0;
while( i != -1 ) {
i= zipFile.read(bigbuf,itot,bigbuf.length-itot);
if( i != -1 ) itot+=i;
}
System.out.println("JAR Entry "+entryName+" read "+String.valueOf(itot));
// move it right size buffer cause it's staying around
byte bytecode[] = new byte[itot];
System.arraycopy(bigbuf, 0, bytecode, 0, itot);
defineAClass(entryName, bytecode);
}
}
zipFile.closeEntry();
}
//
} catch(Exception e) {
System.out.println("HandlerClassLoader.defineClassStream failed "+e.getMessage());
e.printStackTrace();
}
}
/**
* Retrieve the bytecodes from BigSack repository
* @param name The class name to get
* @return The byte array or null
*/
public static byte[] getBytesFromRepository(String name) throws BytecodeNotFoundInRepositoryException {
byte[] retBytes = null;
ClassNameAndBytes cnab = new ClassNameAndBytes(name, retBytes);
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.getBytesFromRepository Attempting get for "+name);
try {
if(useEmbedded) {
TransactionalTreeMap localRepository = BigSackAdapter.getBigSackTransactionalTreeMap(String.class); // class type of key
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.getBytesFromRepository Attempting get from local repository "+localRepository);
cnab = (ClassNameAndBytes) localRepository.get(name);
} else {
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.getBytesFromRepository Attempting get from remote repository "+remoteRepository);
cnab = (ClassNameAndBytes) remoteRepository.get(name);
}
if( cnab != null ) {
if(cnab.getBytes() == null) {
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.getBytesFromRepository Bytecode payload from remote repository "+remoteRepository+" came back null");
} else {
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.getBytesFromRepository Bytecode payload returned "+cnab.getBytes().length+" bytes from remote repository "+remoteRepository);
return cnab.getBytes();
}
} else {
System.out.println("Failed to return bytecodes from remote repository "+remoteRepository);
throw new BytecodeNotFoundInRepositoryException("Failed to return bytecodes from remote repository "+remoteRepository);
}
} catch(Exception e) {
e.printStackTrace();
}
return null;
}
/**
* Put the bytecodes into the BigSack repository. This function is meant to be
* performed outside of class loading because it happens rarely.
* @param name The class name to put
* @param bytes The associated bytecode array
*/
public static void setBytesInRepository(String name, byte[] bytes) {
if(DEBUG)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository for "+name);
ClassNameAndBytes cnab = new ClassNameAndBytes(name, bytes);
try {
if(useEmbedded) {
TransactionalTreeMap localRepository = BigSackAdapter.getBigSackTransactionalTreeMap(String.class); // class type of key
localRepository.put(name, cnab);
BigSackAdapter.commitTransaction(String.class);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Stored and committed bytecode in local repository for class:"+name);
} else {
if(remoteRepository != null) {
try {
remoteRepository.transactionalStore(name, cnab);
remoteRepository.transactionCommit(String.class);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Stored and committed bytecode in remote repository for class:"+name);
} catch (DuplicateKeyException dce) {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Removing existing bytecode in remote repository prior to replace for class "+name);
remoteRepository.remove(name);
try {
remoteRepository.transactionalStore(name, cnab);
remoteRepository.transactionCommit(String.class);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Replaced and committed bytecode in remote repository for class:"+name);
} catch (DuplicateKeyException e) {}
}
} else {
System.out.println("REMOTE REPOSITORY HAS NOT BEEN DEFINED!, NO ADDITION POSSIBLE!");
}
}
} catch(IOException | ClassNotFoundException | IllegalAccessException e ) {
e.printStackTrace();
if( useEmbedded ) {
try {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Rolling back bytecode in local repository for class:"+name);
BigSackAdapter.rollbackTransaction(String.class); // class type of key
} catch (IOException e1) {
e1.printStackTrace();
}
} else {
try {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Rolling back bytecode in remote repository for class:"+name);
remoteRepository.transactionRollback(String.class);
} catch (IOException e1) {
e1.printStackTrace();
}
}
}
}
/**
* Remove all classes STARTING WITH the given name, use caution.
* @param name The value that the class STARTS WITH, to remove packages at any level desired
*/
public static void removeBytesInRepository(String name) {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.removeBytesInRepository for "+name);
try {
if(useEmbedded) {
ArrayList<String> remo = new ArrayList<String>();
TransactionalTreeMap localRepository = BigSackAdapter.getBigSackTransactionalTreeMap(String.class); // class type of key
Iterator<?> it = localRepository.keySet();
while(it.hasNext()) {
Comparable key = (Comparable) it.next();
if( ((String)key).startsWith(name))
remo.add((String) key);
}
for(String s: remo) {
localRepository.remove(s);
classNameAndBytecodes.remove(s);
cache.remove(s);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.removeBytesInRepository Removed bytecode for class:"+s);
}
BigSackAdapter.commitTransaction(String.class);
} else {
if(remoteRepository != null) {
ArrayList<String> remo = new ArrayList<String>();
RemoteKeySetIterator it = remoteRepository.keySet(String.class);
while(remoteRepository.hasNext(it)) {
Comparable key = (Comparable) remoteRepository.next(it);
if( ((String)key).startsWith(name))
remo.add((String) key);
}
for(String s: remo) {
remoteRepository.remove(s);
classNameAndBytecodes.remove(s);
cache.remove(s);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.removeBytesInRepository Removed bytecode for class:"+s);
}
remoteRepository.transactionCommit(String.class);
} else
System.out.println("REMOTE REPOSITORY HAS NOT BEEN DEFINED!, NO REMOVAL POSSIBLE!");
}
} catch(IOException | ClassNotFoundException | IllegalAccessException e ) {
System.out.println(e);
e.printStackTrace();
if( useEmbedded )
try {
BigSackAdapter.rollbackTransaction(String.class);
} catch (IOException e1) {
e1.printStackTrace();
}
else
try {
remoteRepository.transactionRollback(String.class);
} catch (IOException e1) {
e1.printStackTrace();
}
}
}
/**
* Put the bytecodes from a JAR file into the BigSack repository. This function is
* meant to be performed outside of class loading because it happens rarely.
* @param jarFile The JAR file whose class entries will be stored
* @throws FileNotFoundException
*/
public static void setBytesInRepositoryFromJar(String jarFile) throws IOException, FileNotFoundException {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar for JAR file:"+jarFile);
try (JarFile file = new JarFile(jarFile)) {
file.stream().forEach(entry-> {
if (!entry.isDirectory() && !entry.getName().contains("META-INF") && entry.getName().endsWith(".class")) {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar for JAR file entry:"+entry.getName());
String entryName = entry.getName().replace('/', '.');
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar size:"+String.valueOf(entry.getSize()));
entryName = entryName.substring(0,entryName.length()-6);
byte[] bigbuf = new byte[(int) entry.getSize()];
InputStream istream = null;
try {
istream = file.getInputStream(entry);
} catch (IOException e) {
e.printStackTrace();
}
int i = 0;
int itot = 0;
while( itot < bigbuf.length ) {
try {
i = istream.read(bigbuf,itot,bigbuf.length-itot);
} catch (IOException e) {
e.printStackTrace();
}
if( i != -1 )
itot+=i;
else
break;
}
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar JAR Entry "+entryName+" read "+String.valueOf(itot));
// move it to a right-sized buffer because it's staying around (copy below currently disabled)
//byte bytecode[] = new byte[itot];
//System.arraycopy(bigbuf, 0, bytecode, 0, itot);
setBytesInRepository(entryName, bigbuf);//bytecode);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar Loading bytecode for JAR Entry "+entryName+" read "+bigbuf.length);//bytecode.length);
}
});
//
} catch(Exception e) {
System.out.println("HandlerClassLoader.setBytesInRepository failed "+e.getMessage());
e.printStackTrace();
}
}
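/**
* Usage sketch, not part of the original class: the package prefix and JAR path below
* are hypothetical placeholders. A typical hot redeploy first purges every cached
* class under a package prefix, then registers the replacement bytecode from a
* freshly built JAR in a single call.
*/
public static void exampleRedeployFromJar() throws IOException {
removeBytesInRepository("com.example.handlers.");
setBytesInRepositoryFromJar("/opt/server/deploy/handlers.jar");
}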
/**
* Load the classes in the designated directory path into the repository for defining classes.
* The use case here is a running server on which we wish to define new classes without bouncing it or
* copying class or JAR files onto it. Remember that when loading a class, ALL of its dependencies must be loaded
* at once, or various errors can occur, including NPEs from loadClass when bytecode cannot be located.
* @param packg the package designation for loaded/loading classes
* @param dir The directory to load class files
* @throws IOException If the directory is not valid
*/
public static void setBytesInRepository(String packg, Path dir) throws IOException {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Attempting to load class files for package:"+packg+" from path:"+dir);
try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.{class}")) {
if(stream != null) {
for (Path entry: stream) {
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Found file:"+entry);
String entryName = entry.getFileName().toString().replace('/', '.');
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository size:"+String.valueOf(entry.toFile().length()));
if(packg.length() > 1) // account for default package
entryName = packg+"."+entryName.substring(0,entryName.length()-6);
else
entryName = entryName.substring(0,entryName.length()-6);
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Processing class "+entryName);
byte[] bytes = Files.readAllBytes(entry);
setBytesInRepository(entryName, bytes); // chicken and egg, egg, or chicken
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepository Loading bytecode for File Entry "+entryName+" read "+bytes.length);
}
} else
if(DEBUG || DEBUGSETREPOSITORY)
System.out.println("DEBUG: HandlerClassLoader.setBytesInRepositoryFromJar No class files available from path "+dir);
} catch (DirectoryIteratorException ex) {
// I/O error encountered during the iteration, the cause is an IOException
throw ex.getCause();
}
}
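/**
* Usage sketch, not part of the original class: the package name and directory below
* are hypothetical placeholders. Registers loose .class files from a build output
* directory so a running server can define the new classes without a restart; all
* dependencies should be present in the same directory, per the note above.
*/
public static void exampleLoadFromDirectory() throws IOException {
setBytesInRepository("com.example.handlers",
java.nio.file.Paths.get("/opt/server/classes/com/example/handlers"));
}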
public static void main(String[] args) throws IOException, IllegalAccessException, IllegalArgumentException, ClassNotFoundException {
//Path p = FileSystems.getDefault().getPath("C:/users/jg/workspace/volvex/bin/com/neocoretechs/volvex");
//setBytesInRepository("com.neocoretechs.volvex",p);
size = 0;
connectToRemoteRepository();
remoteRepository.entrySetStream(String.class).of().forEach(e-> {
System.out.printf("Class: %s size:%d%n",((ClassNameAndBytes)((Map.Entry)e).getValue()).getName(),
((ClassNameAndBytes)((Map.Entry)e).getValue()).getBytes().length);
size += ((ClassNameAndBytes)((Map.Entry)e).getValue()).getBytes().length;
});
System.out.printf("Total size=%d%n",size);
remoteRepository.close();
}
} |
package org.refactoringminer.astDiff.actions;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import com.github.gumtreediff.actions.Diff;
import com.github.gumtreediff.actions.model.Action;
import com.github.gumtreediff.tree.Tree;
/**
* @author Pourya Alikhani Fard [email protected]
*/
public abstract class ExtendedAbstractITreeClassifier implements ExtendedTreeClassifier {
protected final Diff diff;
protected final Set<Tree> srcUpdTrees = new HashSet<>();
protected final Set<Tree> dstUpdTrees = new HashSet<>();
protected final Set<Tree> srcMvTrees = new HashSet<>();
protected final Set<Tree> dstMvTrees = new HashSet<>();
protected final Set<Tree> srcDelTrees = new HashSet<>();
protected final Set<Tree> dstAddTrees = new HashSet<>();
protected final Map<Tree,Action> dstMmTrees = new HashMap<>();
protected final Map<Tree, Action> srcMmTrees = new HashMap<>();
protected final Map<Tree, Action> dstMoveInTreeMap = new HashMap<>();
protected final Map<Tree, Action> srcMoveOutTreeMap = new HashMap<>();
public ExtendedAbstractITreeClassifier(ASTDiff diff) {
this.diff = diff;
}
protected abstract void classify();
public Set<Tree> getUpdatedSrcs() {
return srcUpdTrees;
}
public Set<Tree> getUpdatedDsts() {
return dstUpdTrees;
}
public Set<Tree> getMovedSrcs() {
return srcMvTrees;
}
public Set<Tree> getMovedDsts() {
return dstMvTrees;
}
public Set<Tree> getDeletedSrcs() {
return srcDelTrees;
}
public Set<Tree> getInsertedDsts() {
return dstAddTrees;
}
public Map<Tree, Action> getMultiMapSrc() {
return srcMmTrees;
}
public Map<Tree, Action> getMultiMapDst() {
return dstMmTrees;
}
public Map<Tree, Action> getDstMoveInTreeMap() {
return dstMoveInTreeMap;
}
public Map<Tree, Action> getSrcMoveOutTreeMap() {
return srcMoveOutTreeMap;
}
}
|
// file: java/org/apache/tomcat/dbcp/dbcp2/DriverManagerConnectionFactory.java
package org.apache.tomcat.dbcp.dbcp2;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;
/**
* A {@link DriverManager}-based implementation of {@link ConnectionFactory}.
*
* @since 2.0
*/
public class DriverManagerConnectionFactory implements ConnectionFactory {
static {
// Related to DBCP-212
// Driver manager does not sync loading of drivers that use the service
// provider interface. This will cause issues is multi-threaded
// environments. This hack makes sure the drivers are loaded before
// DBCP tries to use them.
DriverManager.getDrivers();
}
private final String connectionUri;
private final String userName;
private final char[] userPassword;
private final Properties properties;
/**
* Constructor for DriverManagerConnectionFactory.
*
* @param connectionUri
* a database url of the form <code> jdbc:<em>subprotocol</em>:<em>subname</em></code>
* @since 2.2
*/
public DriverManagerConnectionFactory(final String connectionUri) {
this.connectionUri = connectionUri;
this.properties = new Properties();
this.userName = null;
this.userPassword = null;
}
/**
* Constructor for DriverManagerConnectionFactory.
*
* @param connectionUri
* a database url of the form <code> jdbc:<em>subprotocol</em>:<em>subname</em></code>
* @param properties
* a list of arbitrary string tag/value pairs as connection arguments; normally at least a "user" and
* "password" property should be included.
*/
public DriverManagerConnectionFactory(final String connectionUri, final Properties properties) {
this.connectionUri = connectionUri;
this.properties = properties;
this.userName = null;
this.userPassword = null;
}
/**
* Constructor for DriverManagerConnectionFactory.
*
* @param connectionUri
* a database url of the form <code>jdbc:<em>subprotocol</em>:<em>subname</em></code>
* @param userName
* the database user
* @param userPassword
* the user's password
*/
public DriverManagerConnectionFactory(final String connectionUri, final String userName,
final char[] userPassword) {
this.connectionUri = connectionUri;
this.userName = userName;
this.userPassword = userPassword != null ? userPassword.clone() : null;
this.properties = null;
}
/**
* Constructor for DriverManagerConnectionFactory.
*
* @param connectionUri
* a database url of the form <code>jdbc:<em>subprotocol</em>:<em>subname</em></code>
* @param userName
* the database user
* @param userPassword
* the user's password
*/
public DriverManagerConnectionFactory(final String connectionUri, final String userName,
final String userPassword) {
this.connectionUri = connectionUri;
this.userName = userName;
this.userPassword = userPassword != null ? userPassword.toCharArray() : null;
this.properties = null;
}
@Override
public Connection createConnection() throws SQLException {
if (null == properties) {
if (userName == null && userPassword == null) {
return DriverManager.getConnection(connectionUri);
}
return DriverManager.getConnection(connectionUri, userName, Utils.toString(userPassword));
}
return DriverManager.getConnection(connectionUri, properties);
}
/**
* @return The connection URI.
* @since 2.6.0
*/
public String getConnectionUri() {
return connectionUri;
}
/**
* @return The Properties.
* @since 2.6.0
*/
public Properties getProperties() {
return properties;
}
/**
* @return The user name.
* @since 2.6.0
*/
public String getUserName() {
return userName;
}
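/**
* Usage sketch, not part of the original class: the JDBC URL and credentials below
* are hypothetical placeholders for a real data source.
*/
static Connection exampleConnection() throws SQLException {
final ConnectionFactory factory =
new DriverManagerConnectionFactory("jdbc:h2:mem:example", "sa", new char[0]);
return factory.createConnection();
}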
}
|
from cursoaulahu.github import buscar_avatar
print(buscar_avatar('viollarr'))
|
import { FormlyFieldConfig } from '@ngx-formly/core';
export const RateMockFields: FormlyFieldConfig[] = [
{
type: 'group',
className: "d-block mb-3 col-6",
templateOptions: {
},
fieldGroup: [
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Basic usage',
description: 'The simplest usage.',
},
fieldGroup: [
{
key: 'rate',
type: 'rate',
className: "d-inline-block mx-2",
templateOptions: {
}
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Show text',
subtitle: 'Add a text description to the rate component.',
},
fieldGroup: [
{
key: 'rate2',
type: 'rate',
className: "d-inline-block w-100",
defaultValue: 3,
templateOptions: {
tooltips: ['terrible', 'bad', 'normal', 'good', 'wonderful'],
},
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Clear',
subtitle: 'Allow or disable clearing the rating.',
},
fieldGroup: [
{
key: 'rate3',
type: 'rate',
className: "d-inline-block w-100",
defaultValue: 3,
wrappers: ['form'],
templateOptions: {
layout: 'inline',
label: 'allowClear: true',
tooltips: ['terrible', 'bad', 'normal', 'good', 'wonderful'],
allowClear: true,
allowHalf: true
},
},
{
key: 'rate3',
type: 'rate',
className: "d-inline-block w-100",
defaultValue: 3,
wrappers: ['form'],
templateOptions: {
label: 'allowClear: false',
layout: 'inline',
tooltips: ['terrible', 'bad', 'normal', 'good', 'wonderful'],
allowClear: false,
allowHalf: true
},
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Date format',
subtitle: 'The simplest usage; a date can be selected or entered in the popup.',
},
fieldGroup: [
{
key: 'checked2',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "Checkbox",
}
},
{
key: 'checked1',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "nzDisabled",
}
}
]
},
]
},
{
type: 'group',
className: "d-block mb-3 col-6",
templateOptions: {
title: 'Basic',
subtitle: 'The simplest usage; a date can be selected or entered in the popup.',
},
fieldGroup: [
{
type: 'code-card',
className: "d-block mb-3 col-12",
fieldGroup: [
{
key: 'checked2',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "Checkbox",
}
},
{
key: 'checked1',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "nzDisabled",
}
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Switch between different pickers',
subtitle: 'The simplest usage; a date can be selected or entered in the popup.',
},
fieldGroup: [
{
key: 'checked2',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "Checkbox",
}
},
{
key: 'checked1',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "nzDisabled",
}
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Date format',
subtitle: 'The simplest usage; a date can be selected or entered in the popup.',
},
fieldGroup: [
{
key: 'checked2',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "Checkbox",
}
},
{
key: 'checked1',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "nzDisabled",
}
}
]
},
{
type: 'code-card',
className: "d-block mb-3 col-12",
templateOptions: {
title: 'Date format',
subtitle: 'The simplest usage; a date can be selected or entered in the popup.',
},
fieldGroup: [
{
key: 'checked2',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "Checkbox",
}
},
{
key: 'checked1',
type: 'checkbox',
className: "d-inline-block mx-2",
templateOptions: {
text: "nzDisabled",
}
}
]
},
]
},
] |
Advances in coastal wetland remote sensing
To plan for wetland protection and sensible coastal development, scientists and managers need to monitor the changes in coastal wetlands as the sea level continues to rise and the coastal population keeps expanding. Advances in remote sensor design and data analysis techniques are providing significant improvements for studying and mapping natural and man-induced changes of coastal wetlands. New techniques include fusion of multi-sensor, multi-resolution and multi-temporal images; object-based and knowledge-based classification algorithms; wetland biomass/health mapping with radar, LiDAR, and imagery; high-resolution satellite data; hyperspectral sensors; and quadcopters with digital cameras. Results of case studies show that analysis of new satellite and aircraft data, combined with a minimum of field observations, allows researchers to effectively determine long-term trends and short-term changes of wetland vegetation and hydrology. The objective of this paper is to review recent developments in wetland remote sensing and to evaluate the performance of the new techniques.
/**
* The main test code used to send requests and retrieve messages from the service via a WebSocket
*
* @param wsMsgSource A source of the WebSocketMessage requests to send
* @param takeN the expected number of messages to receive
* @return a CompletableFuture of a List of messages that have been returned from the service
*/
private CompletableFuture<List<WebSocketMessage>> sendAndReceiveMessages(final Source<WebSocketMessage, NotUsed> wsMsgSource, final int takeN) {
Source<Message, CompletableFuture<Optional<Message>>> nonClosingSource = wsMsgSource.map(this::writeTextMessage).concatMat(Source.maybe(), Keep.right());
Sink<Message, CompletionStage<List<WebSocketMessage>>> listSink = Flow.<Message>create().map(this::readTextMessage).toMat(Sink.seq(), Keep.right());
Pair<Pair<CompletableFuture<Optional<Message>>, CompletionStage<WebSocketUpgradeResponse>>, CompletionStage<List<WebSocketMessage>>> request = nonClosingSource
.viaMat(Http.get(system).webSocketClientFlow(WebSocketRequest.create(String.format("ws://%s:%d/resource/" + token, HOST, PORT))), Keep.both())
.take(takeN)
.toMat(listSink, Keep.both())
.run(materializer);
var connectComplete = request.first().second();
var serverComplete = request.second();
WebSocketUpgradeResponse wsUpgrade = connectComplete.toCompletableFuture().join();
LOGGER.info("WebSocket request got WebSocketUpgrade response: {}", wsUpgrade.response());
return serverComplete.toCompletableFuture();
} |
// repository: yangwenbang/car
package com.car.modules.car.service;
import com.baomidou.mybatisplus.service.IService;
import com.car.common.utils.PageUtils;
import com.car.modules.car.entity.ExerciseTypeEntity;
import java.util.List;
import java.util.Map;
/**
*
*
* @author lzp
* @email <EMAIL>
* @date 2019-02-11 14:51:15
*/
public interface ExerciseTypeService extends IService<ExerciseTypeEntity> {
PageUtils queryPage(Map<String, Object> params);
List<ExerciseTypeEntity> queryAll();
}
|
def icon(self) -> str:
if self.cls in terminals:
for process in reversed(list(self.process().children())):
name = process.name()
if name in icons:
return icons[name]
return icons.get(self.cls, default_icon) |
// Start runs the registry server
func (s *Server) Start() error {
if s.listener != nil {
return nil
}
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
uri := r.RequestURI
s.Debugf("[registry/server] %s", uri)
if uri == "/health" {
fmt.Fprintln(w, "OK")
return
}
if uri == "/v2/" {
jsonstr, err := json.Marshal(&InfoResponse{
Info: "An IPFS-backed Docker registry",
Project: projectURL,
Gateway: s.ipfsGateway,
Handles: []string{
contentTypes["manifestListV2Schema"],
contentTypes["manifestV2Schema"],
},
Problematic: []string{"version 1 registries"},
})
if err != nil {
fmt.Fprintln(w, err)
return
}
w.Header().Set("Docker-Distribution-API-Version", "registry/2.0")
fmt.Fprintln(w, string(jsonstr))
return
}
if len(uri) <= 1 {
fmt.Fprintln(w, "invalid multihash")
return
}
var suffix string
if strings.HasSuffix(uri, "/latest") {
suffix = "-v1"
accepts := r.Header["Accept"]
for _, accept := range accepts {
if accept == contentTypes["manifestV2Schema"] ||
accept == contentTypes["manifestListV2Schema"] {
suffix = "-v2"
break
}
}
}
parts := strings.Split(uri, "/")
if len(parts) <= 2 {
fmt.Fprintln(w, "out of range")
return
}
hash := regutil.IpfsifyHash(parts[2])
rest := strings.Join(parts[3:], "/")
path := hash + "/" + rest
location := s.ipfsURL(path)
if suffix != "" {
location = location + suffix
}
s.Debugf("[registry/server] location %s", location)
req, err := http.NewRequest("GET", location, nil)
if err != nil {
fmt.Fprint(w, err.Error())
return
}
httpClient := http.Client{}
resp, err := httpClient.Do(req)
if err != nil {
fmt.Fprint(w, err.Error())
return
}
// Release the upstream response body once we are done reading it.
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
fmt.Fprint(w, err.Error())
return
}
w.Header().Set("Docker-Distribution-API-Version", "registry/2.0")
w.Header().Set("Content-Type", contentTypes["manifestV2Schema"])
fmt.Fprint(w, string(body))
})
var err error
s.listener, err = net.Listen("tcp", s.host)
if err != nil {
return err
}
s.Debugf("[registry/server] listening on %s", s.listener.Addr())
if s.tlsKeyPath != "" && s.tlsCertPath != "" {
return http.ServeTLS(s.listener, nil, s.tlsCertPath, s.tlsKeyPath)
}
return http.Serve(s.listener, nil)
} |
def network_choices(self) -> Iterator[str]:
for ecosystem_name, ecosystem in self.ecosystems.items():
yield ecosystem_name
for network_name, network in ecosystem.networks.items():
if ecosystem_name == self.default_ecosystem.name:
yield f":{network_name}"
yield f"{ecosystem_name}:{network_name}"
for provider in network.providers:
if (
ecosystem_name == self.default_ecosystem.name
and network_name == ecosystem.default_network
):
yield f"::{provider}"
elif ecosystem_name == self.default_ecosystem.name:
yield f":{network_name}:{provider}"
elif network_name == ecosystem.default_network:
yield f"{ecosystem_name}::{provider}"
yield f"{ecosystem_name}:{network_name}:{provider}" |
import sys
# =============================================================================
#
# Copyright (c) Kitware, Inc.
# All rights reserved.
# See LICENSE.txt for details.
#
# This software is distributed WITHOUT ANY WARRANTY; without even
# the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
# PURPOSE. See the above copyright notice for more information.
#
# =============================================================================
import smtk
import smtk.session.polygon
import smtk.io
import smtk.testing
class TestPolygonCreation(smtk.testing.TestCase):
def setVectorValue(self, item, vals):
item.setNumberOfValues(len(vals))
for i in range(len(vals)):
item.setValue(i, vals[i])
def createModel(self, **args):
"""Create an empty geometric model.
"""
cm = smtk.session.polygon.CreateModel.create()
if cm is None:
return
xAxis = args['x_axis'] if 'x_axis' in args else None
yAxis = args['y_axis'] if 'y_axis' in args else None
normal = args['normal'] if 'normal' in args else (
args['z_axis'] if 'z_axis' in args else None)
origin = args['origin'] if 'origin' in args else None
modelScale = args['model_scale'] if 'model_scale' in args else None
featureSize = args['feature_size'] if 'feature_size' in args else None
if modelScale is not None and featureSize is not None:
print('Specify either model_scale or feature_size but not both')
return
method = -1
if modelScale is not None:
if normal is not None:
print(
'When specifying model_scale, you must specify x and y axes. Normal is ignored.')
method = 2
if featureSize is not None:
if normal is not None:
method = 1
else:
method = 0
cm.parameters().find('construction method').setDiscreteIndex(method)
if origin is not None:
self.setVectorValue(cm.parameters().find('origin'), origin)
if xAxis is not None:
self.setVectorValue(cm.parameters().find('x axis'), xAxis)
if yAxis is not None and cm.parameters().find('y axis') is not None:
self.setVectorValue(cm.parameters().find('y axis'), yAxis)
if normal is not None and cm.parameters().find('z axis') is not None:
self.setVectorValue(cm.parameters().find('z axis'), normal)
if modelScale is not None:
cm.parameters().find('model scale').setValue(modelScale)
if featureSize is not None:
cm.parameters().find('feature size').setValue(featureSize)
self.res = cm.operate()
self.resource = smtk.model.Resource.CastTo(
self.res.find('resource').value(0))
return self.resource.findEntitiesOfType(int(smtk.model.MODEL_ENTITY))[0]
def createVertices(self, pt, model, **kwargs):
"""Create one or more vertices given point coordinates.
Point coordinates should be specified as a list of 3-tuples.
The vertices are inserted into the given model
"""
crv = smtk.session.polygon.CreateVertices.create()
crv.parameters().associate(model.component())
numPts = len(pt)
numCoordsPerPoint = max([len(pt[i]) for i in range(numPts)])
pgi = crv.parameters().find('point dimension')
pgi.setDiscreteIndex(0 if numCoordsPerPoint == 2 else 1)
pgr = crv.parameters().find(
'2d points' if numCoordsPerPoint == 2 else '3d points')
pgr.setNumberOfGroups(numPts)
for ix in range(numPts):
xx = pgr.item(ix, 0)
self.setVectorValue(xx, pt[ix][0:numCoordsPerPoint] +
[0, ] * (numCoordsPerPoint - len(pt[ix])))
self.res = crv.operate()
created = self.res.find('created')
return [smtk.model.EntityRef(created.value(i)) for i in range(created.numberOfValues())]
class CurveType:
ARC = 1
LINE = 6
def createEdge(self, verts, curve_type=CurveType.LINE, **kwargs):
"""Create an edge from a pair of vertices.
"""
import itertools
cre = smtk.session.polygon.CreateEdge.create()
cre.parameters().find('construction method').setValue(0)
if hasattr(verts[0], '__iter__'):
# Verts is actually a list of tuples specifying point coordinates.
# Look for a model to associate with the operator.
if 'model' not in kwargs:
print('Error: No model specified.')
return None
cre.parameters().associate(kwargs['model'].component())
# Pad and flatten point data
numCoordsPerPoint = max([len(verts[i]) for i in range(len(verts))])
tmp = min([len(verts[i]) for i in range(len(verts))])
x = cre.parameters().find('points')
c = cre.parameters().find('coordinates')
if c:
c.setValue(0, numCoordsPerPoint)
if tmp != numCoordsPerPoint:
ptflat = []
for p in verts:
ptflat.append(p + [0, ] * (numCoordsPerPoint - len(p)))
ptflat = list(itertools.chain(*ptflat))
else:
ptflat = list(itertools.chain(*verts))
if x:
x.setNumberOfValues(len(ptflat))
for i in range(len(ptflat)):
x.setValue(i, ptflat[i])
else:
[cre.parameters().associate(x.component()) for x in verts]
t = cre.parameters().find('curve type')
if t:
t.setValue(0, curve_type)
if 'offsets' in kwargs:
o = cre.parameters().find('offsets')
if o:
o.setNumberOfValues(len(kwargs['offsets']))
for i in range(len(kwargs['offsets'])):
o.setValue(i, kwargs['offsets'][i])
if 'midpoint' in kwargs:
x = cre.parameters().find('point')
if x:
x.setNumberOfValues(len(kwargs['midpoint']))
for i in range(len(kwargs['midpoint'])):
x.setValue(i, kwargs['midpoint'][i])
if 'color' in kwargs:
c = cre.parameters().find('color')
if c:
c.setNumberOfValues(len(kwargs['color']))
for i in range(len(kwargs['color'])):
c.setValue(i, kwargs['color'][i])
c.setValue(0, kwargs['color'])
self.res = cre.operate()
entList = self.res.find('created')
numNewEnts = entList.numberOfValues()
eList = [smtk.model.EntityRef(entList.value(i))
for i in range(entList.numberOfValues())]
edgeList = []
for i in range(numNewEnts):
if eList[i].isEdge():
edgeList.append(eList[i])
return edgeList[0] if len(edgeList) == 1 else edgeList
def tweakEdge(self, edge, newPts, **kwargs):
"""Tweak an edge by providing a new set of points along it.
"""
import itertools
twk = smtk.session.polygon.TweakEdge.create()
twk.parameters().associateEntity(edge)
numCoordsPerPoint = max([len(newPts[i]) for i in range(len(newPts))])
tmp = min([len(newPts[i]) for i in range(len(newPts))])
x = twk.parameters().find('points')
c = twk.parameters().find('coordinates')
if c:
c.setValue(0, numCoordsPerPoint)
if tmp != numCoordsPerPoint:
ptflat = []
for p in newPts:
ptflat.append(p + [0, ] * (numCoordsPerPoint - len(p)))
ptflat = list(itertools.chain(*ptflat))
else:
ptflat = list(itertools.chain(*newPts))
if x:
self.setVectorValue(x, ptflat)
if 'promote' in kwargs:
o = twk.parameters().find('promote')
if o:
self.setVectorValue(o, kwargs['promote'])
self.res = twk.operate()
modlist = self.res.find('modified')
result = [smtk.model.EntityRef(modlist.value(i))
for i in range(modlist.numberOfValues())]
crelist = self.res.find('created')
result += [smtk.model.EntityRef(crelist.value(i))
for i in range(crelist.numberOfValues())]
return result
def splitEdge(self, edge, point, **kwargs):
"""Split an edge at a point along the edge.
"""
import itertools
spl = smtk.session.polygon.SplitEdge.create()
spl.parameters().associateEntity(edge)
x = spl.parameters().find('point')
x.setNumberOfValues(len(point))
for i in range(len(point)):
x.setValue(i, point[i])
self.res = spl.operate()
edgeList = self.res.find('created')
numEdges = edgeList.numberOfValues()
return smtk.model.EntityRef(edgeList.value(0)) if numEdges == 1 else [smtk.model.EntityRef(edgeList.value(i)) for i in range(numEdges)]
def createFaces(self, modelOrEdges, **kwargs):
"""Create all possible planar faces from a set of edges.
"""
crf = smtk.session.polygon.CreateFaces.create()
# Associate model or edges to operator:
if hasattr(modelOrEdges, '__iter__'):
[crf.parameters().associateEntity(ent) for ent in modelOrEdges]
else:
crf.parameters().associateEntity(modelOrEdges)
self.res = crf.operate()
faceList = self.res.find('created')
numFaces = faceList.numberOfValues()
return smtk.model.EntityRef(faceList.value(0)) if numFaces == 1 else [smtk.model.EntityRef(faceList.value(i)) for i in range(numFaces)]
def setUp(self):
self.writeJSON = False
def createTestEdges(self, mod):
openEdgeTestVerts = [[4, 3.5], [3, 3.5]]
elist = self.createEdge(openEdgeTestVerts, model=mod)
edges = [smtk.model.Edge(elist)]
self.assertIsNotNone(edges[0], 'Expected a single edge.')
self.assertEqual(len(edges[0].vertices()), 2,
'Expected two vertices bounding edge.')
edges[0].setName('Jinky')
# Test non-periodic edge with self-intersection.
# Test periodic edge with self-intersection.
# edgeTestVerts = [[0,1], [1,2], [0,2], [1,1], [4,0], [4,3], [5,3],
# [3,0], [4,0], [11,10]]
edgeTestVerts = [[0, 1], [1, 2], [0, 2], [0.5, 1.5],
[4, 0], [4, 3], [5, 3], [3, 0], [4, 0], [11, 10]]
edgeTestOffsets = [0, 4, 9, 9, 12] # Only first 3 edges are valid
elist = self.createEdge(
edgeTestVerts, offsets=edgeTestOffsets, model=mod)
edges += [smtk.model.Edge(e) for e in elist]
edges[1].setName('Appendix')
edges[2].setName('Tango')
edges[3].setName('BowTieA')
edges[4].setName('BowTieB')
# Test creation of periodic edge with no model vertices.
periodicEdgeVerts = [[0, 4], [1, 4], [1, 5], [0, 5], [0, 4]]
edge = self.createEdge(periodicEdgeVerts, model=mod)
edges += [smtk.model.Edge(edge)]
edges[5].setName('Square')
print('Created a total of {:1} edges'.format(len(edges)))
return edges
def testTweakEdge(self):
mod = self.createModel()
tinkered = []
edges = self.createTestEdges(mod)
flist = self.createFaces(mod)
print('{:1} faces'.format(len(flist)))
for ff in range(len(flist)):
print('Face {:1} edges {:2}'.format(
ff, ';'.join([x.name() for x in smtk.model.Face(flist[ff]).edges()])))
# Test the easy case: an isolated, non-periodic edge is reshaped:
print('Tweaking {:1} {:2}'.format(
edges[0].name(), str(edges[0].entity())))
mods = self.tweakEdge(edges[0], [[0, 0], [1, 0], [2, 3], [3, 3]])
tinkered += mods
# Test that when an edge is tweaked whose endpoint is connected to a second edge,
# the second edge's point-sequence and tessellation are also updated:
print('Tweaking {:1} {:2}'.format(
edges[1].name(), str(edges[1].entity())))
mods = self.tweakEdge(edges[1], [[0, 1], [1, 1]])
tinkered += mods
print('Tweaking {:1} {:2}'.format(
edges[4].name(), str(edges[4].entity())))
mods = self.tweakEdge(edges[4], [[4, 1.5], [5, 3], [
4.5, 3.25], [4, 3], [4, 1.5]])
tinkered += mods
print('Tweaking {:1} {:2}'.format(
edges[3].name(), str(edges[3].entity())))
mods = self.tweakEdge(edges[3], [[4, 1.5], [3, 0], [
3.5, -0.25], [4, 0], [4, 1.5]])
tinkered += mods
print('Tinkered with ', tinkered)
self.imageComparison(
mod, tinkered, ['baseline', 'smtk', 'polygon', 'tweakEdge-caseA.png'], False)
def imageComparison(self, mod, edges, imagePath, doInteract):
if self.haveVTK() and self.haveVTKExtension():
from vtk import vtkColorSeries
self.startRenderTest()
# mod = smtk.model.Model(mod)
#[mod.addCell(x) for x in self.resource.findEntitiesOfType(smtk.model.CELL_ENTITY, False)]
# Color faces but not edges or vertices
cs = vtkColorSeries()
cs.SetColorScheme(vtkColorSeries.BREWER_QUALITATIVE_SET1)
clist = [cs.GetColor(i) for i in range(cs.GetNumberOfColors())]
edgeColors = [(c.GetRed() / 255., c.GetGreen() / 255.,
c.GetBlue() / 255., 1.0) for c in clist]
ents = self.resource.findEntitiesOfType(
smtk.model.CELL_ENTITY, False)
for ei in range(len(ents)):
ents[ei].setFloatProperty(
'color', edgeColors[ei % len(edgeColors)])
print(ents[ei].name(), ' color ',
edgeColors[ei % len(edgeColors)])
#[v.setFloatProperty('color', [0,0,0,1]) for v in self.resource.findEntitiesOfType(smtk.model.VERTEX, True)]
#[e.setFloatProperty('color', [0,0,0,1]) for e in self.resource.findEntitiesOfType(smtk.model.EDGE, True)]
ms, vs, mp, ac = self.addModelToScene(mod)
ac.GetProperty().SetLineWidth(2)
ac.GetProperty().SetPointSize(6)
self.renderer.SetBackground(1.0, 1.0, 1.0)
cam = self.renderer.GetActiveCamera()
cam.SetFocalPoint(5, 5, 0)
cam.SetPosition(5, 5, 5)
cam.SetViewUp(0, 1, 0)
self.renderer.ResetCamera()
self.renderWindow.Render()
# smtk.testing.INTERACTIVE = doInteract
# Skip the image match if we don't have a baseline.
# This allows the test to succeed even on systems without the test
# data but requires a match on systems with the test data.
# self.assertImageMatchIfFileExists(imagePath, 70)
# self.assertImageMatch(imagePath)
self.interact()
else:
self.assertFalse(
self.haveVTKExtension(),
'Could not import vtk. Python path is {pp}'.format(pp=sys.path))
if __name__ == '__main__':
smtk.testing.process_arguments()
smtk.testing.main()
|
class PrepareHostSession: # pylint: disable=too-many-instance-attributes
"""
Session open and close operation
"""
def __init__(self):
self.auth_id = 0
self.connection_type = apis.kSSS_ConnectionType_Plain
self._enc_key = None
self._mac_key = None
self._dek_key = None
self._tunnel_session_ctx = None
self._host_session_ctx = None
self._connect_ctx = apis.SE_Connect_Ctx_t()
self.session = None
self.subsystem = None
self._host_keystore = None
self.scpkey = None
self._host_session = None
def open_host_session(self, subsystem):
"""
Open Host session
:param subsystem: Host session subsystem. Eg: Mbedtls, Openssl
:return: status
"""
from . import session
session_obj = session.Session(from_pickle=False)
if not session_obj.session_ctx:
session_obj.session_ctx = apis.sss_session_t()
status = apis.sss_session_open(
ctypes.byref(session_obj.session_ctx), subsystem, 0,
apis.kSSS_ConnectionType_Plain, None)
if status == apis.kStatus_SSS_Success:
self.subsystem = subsystem
self.session = session_obj
return status
def setup_counter_part_session(self):
"""
Open Host Crypto Session. Either MbedTLS or Openssl depending on sssapisw library.
:return: Status
"""
from . import keystore
status = self.open_host_session(apis.kType_SSS_mbedTLS)
if status != apis.kStatus_SSS_Success:
# Retry with OpenSSL
status = self.open_host_session(apis.kType_SSS_OpenSSL)
if status != apis.kStatus_SSS_Success:
log.error("Failed to openHost Session")
return status
self._host_keystore = keystore.KeyStore(self.session)
return status
def prepare_host(self, connect_ctx):
# Host Session open
if connect_ctx.auth.authType in [apis.kSE05x_AuthType_UserID,
apis.kSE05x_AuthType_AESKey,
apis.kSE05x_AuthType_ECKey,
apis.kSE05x_AuthType_SCP03]:
if connect_ctx.auth.authType == apis.kSE05x_AuthType_SCP03:
self.connection_type = apis.kSSS_ConnectionType_Encrypted
self._read_scpkey()
self._se05x_prepare_host_platformscp(connect_ctx)
elif connect_ctx.auth.authType == apis.kSE05x_AuthType_UserID:
self.auth_id = authkey.SE050_AUTHID_USER_ID
self.connection_type = apis.kSSS_ConnectionType_Password
obj = apis.sss_object_t()
self._host_crypto_alloc_setkeys(currentframe().f_lineno, obj,
apis.kSSS_CipherType_UserID,
authkey.SE050_AUTHID_USER_ID_VALUE)
connect_ctx.auth.ctx.idobj.pObj = ctypes.pointer(obj)
elif connect_ctx.auth.authType == apis.kSE05x_AuthType_AESKey:
self.auth_id = authkey.SE050_AUTHID_AESKEY
self.connection_type = apis.kSSS_ConnectionType_Encrypted
self._se05x_prepare_host_applet_scp03_keys(connect_ctx)
elif connect_ctx.auth.authType == apis.kSE05x_AuthType_ECKey:
self.auth_id = authkey.SE050_AUTHID_ECKEY
self.connection_type = apis.kSSS_ConnectionType_Encrypted
self._se05x_prepare_host_eckey(connect_ctx)
def _se05x_prepare_host_applet_scp03_keys(self, connect_ctx):
"""
Set keys using host for Applet SCP03 session
:return:
"""
static_ctx = apis.NXSCP03_StaticCtx_t()
dynamic_ctx = apis.NXSCP03_DynCtx_t()
self._host_crypto_alloc_setkeys(currentframe().f_lineno, static_ctx.Enc,
apis.kSSS_CipherType_AES,
authkey.SE050_AUTHID_AESKEY_VALUE)
self._host_crypto_alloc_setkeys(currentframe().f_lineno, static_ctx.Mac,
apis.kSSS_CipherType_AES,
authkey.SE050_AUTHID_AESKEY_VALUE)
self._host_crypto_alloc_setkeys(currentframe().f_lineno, static_ctx.Dek,
apis.kSSS_CipherType_AES,
authkey.SE050_AUTHID_AESKEY_VALUE)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Enc)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Mac)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Rmac)
connect_ctx.auth.ctx.scp03.pStatic_ctx = ctypes.pointer(static_ctx)
connect_ctx.auth.ctx.scp03.pDyn_ctx = ctypes.pointer(dynamic_ctx)
def _alloc_applet_scp03_key_to_se05x_authctx(self, key_id, key_obj):
"""
Perform key object init and key object allocate handle using host session.
:param key_id: Key index
:param key_obj: key object
:return: Status
"""
status = apis.sss_key_object_init(ctypes.byref(
key_obj), ctypes.byref(self._host_keystore.keystore))
if status != apis.kStatus_SSS_Success:
raise Exception("Prepare Host sss_key_object_init %s" % status_to_str(status))
status = apis.sss_key_object_allocate_handle(
ctypes.byref(key_obj), key_id, apis.kSSS_KeyPart_Default,
apis.kSSS_CipherType_AES, 16, apis.kKeyObject_Mode_Persistent)
if status != apis.kStatus_SSS_Success:
raise Exception("Prepare Host sss_key_object_allocate_handle %s" %
status_to_str(status))
def _se05x_prepare_host_platformscp(self, connect_ctx):
"""
Prepare host for Platform SCP session
:return: Status
"""
static_ctx = apis.NXSCP03_StaticCtx_t()
dynamic_ctx = apis.NXSCP03_DynCtx_t()
# This key version is constant for platform scp
static_ctx.keyVerNo = authkey.SE05X_KEY_VERSION_NO
self._host_crypto_alloc_setkeys(currentframe().f_lineno,
static_ctx.Enc,
apis.kSSS_CipherType_AES, self._enc_key)
self._host_crypto_alloc_setkeys(currentframe().f_lineno,
static_ctx.Mac,
apis.kSSS_CipherType_AES, self._mac_key)
self._host_crypto_alloc_setkeys(currentframe().f_lineno,
static_ctx.Dek,
apis.kSSS_CipherType_AES, self._dek_key)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Enc)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Mac)
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Rmac)
connect_ctx.auth.ctx.scp03.pStatic_ctx = ctypes.pointer(static_ctx)
connect_ctx.auth.ctx.scp03.pDyn_ctx = ctypes.pointer(dynamic_ctx)
def _se05x_prepare_host_eckey(self, connect_ctx):
"""
Set keys using host for Fast SCP session
:return: Status
"""
static_ctx = apis.NXECKey03_StaticCtx_t()
dynamic_ctx = apis.NXSCP03_DynCtx_t()
# Init allocate Host ECDSA Key pair
status = self._alloc_eckey_key_to_se05x_authctx(static_ctx.HostEcdsaObj,
currentframe().f_lineno,
apis.kSSS_KeyPart_Pair)
if status != apis.kStatus_SSS_Success:
log.error("_alloc_eckey_key_to_se05x_authctx %s", status_to_str(status))
return status
# Set Host ECDSA Key pair
status = apis.sss_key_store_set_key(
ctypes.byref(self._host_keystore.keystore),
ctypes.byref(static_ctx.HostEcdsaObj),
ctypes.byref((ctypes.c_ubyte * len(authkey.SSS_AUTH_SE05X_KEY_HOST_ECDSA_KEY))
(*authkey.SSS_AUTH_SE05X_KEY_HOST_ECDSA_KEY)),
len(authkey.SSS_AUTH_SE05X_KEY_HOST_ECDSA_KEY),
len(authkey.SSS_AUTH_SE05X_KEY_HOST_ECDSA_KEY) * 8, 0, 0)
if status != apis.kStatus_SSS_Success:
log.error("sss_key_store_set_key %s", status_to_str(status))
return status
# Init allocate Host ECKA Key pair
status = self._alloc_eckey_key_to_se05x_authctx(static_ctx.HostEcKeypair,
currentframe().f_lineno,
apis.kSSS_KeyPart_Pair)
if status != apis.kStatus_SSS_Success:
log.error("_alloc_eckey_key_to_se05x_authctx %s", status_to_str(status))
return status
# Generate Host EC Key pair
status = apis.sss_key_store_generate_key(
ctypes.byref(self._host_keystore.keystore),
ctypes.byref(static_ctx.HostEcKeypair), 256, None)
if status != apis.kStatus_SSS_Success:
log.error("_alloc_eckey_key_to_se05x_authctx %s", status_to_str(status))
return status
# Init allocate SE ECKA Public Key
status = self._alloc_eckey_key_to_se05x_authctx(static_ctx.SeEcPubKey,
currentframe().f_lineno,
apis.kSSS_KeyPart_Public)
if status != apis.kStatus_SSS_Success:
log.error("_alloc_eckey_key_to_se05x_authctx %s", status_to_str(status))
return status
# Init Allocate Master Secret
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
static_ctx.masterSec)
# Init Allocate ENC Session Key
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Enc)
# Init Allocate MAC Session Key
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Mac)
# Init Allocate DEK Session Key
self._alloc_applet_scp03_key_to_se05x_authctx(currentframe().f_lineno,
dynamic_ctx.Rmac)
connect_ctx.auth.ctx.eckey.pStatic_ctx = ctypes.pointer(static_ctx)
connect_ctx.auth.ctx.eckey.pDyn_ctx = ctypes.pointer(dynamic_ctx)
return status
def _alloc_eckey_key_to_se05x_authctx(self, key_obj, key_id, key_type):
"""
Key object initialization and allocate handle for fast SCP session
:param key_obj: Key Object
:param key_id: Key index
:param key_type: key type
:return: Status
"""
status = apis.sss_key_object_init(ctypes.byref(
key_obj), ctypes.byref(self._host_keystore.keystore))
if status != apis.kStatus_SSS_Success:
log.error("sss_key_object_init %s", status_to_str(status))
return status
status = apis.sss_key_object_allocate_handle(ctypes.byref(key_obj), key_id, key_type,
apis.kSSS_CipherType_EC_NIST_P, 256,
apis.kKeyObject_Mode_Persistent)
if status != apis.kStatus_SSS_Success:
log.error("sss_key_object_allocate_handle %s", status_to_str(status))
return status
def _host_crypto_alloc_setkeys(self, key_id, key_obj, cypher_type, key_value):
"""
Key object initialization, allocate handle and Set key using host
:param key_id: Key Index
:param key_obj: Key object
:param cypher_type: Cypher type
:param key_value: Key value
:return: None
"""
status = apis.sss_key_object_init(ctypes.byref(
key_obj), ctypes.byref(self._host_keystore.keystore))
if status != apis.kStatus_SSS_Success:
raise Exception("Prepare Host sss_key_object_init %s" % status_to_str(status))
status = apis.sss_key_object_allocate_handle(
ctypes.byref(key_obj), key_id, apis.kSSS_KeyPart_Default,
cypher_type, len(key_value), apis.kKeyObject_Mode_Persistent)
if status != apis.kStatus_SSS_Success:
raise Exception("sss_key_object_allocate_handle %s" % status_to_str(status))
status = apis.sss_key_store_set_key(
ctypes.byref(self._host_keystore.keystore),
ctypes.byref(key_obj),
ctypes.byref((ctypes.c_ubyte * len(key_value))(*key_value)),
len(key_value), len(key_value) * 8, 0, 0)
if status != apis.kStatus_SSS_Success:
raise Exception("sss_key_store_set_key %s" % status_to_str(status))
def _read_scpkey(self):
enc_key_str = ""
mac_key_str = ""
dek_key_str = ""
if os.path.isfile(self.scpkey):
scp_file = open(self.scpkey, 'r')
scp_data = scp_file.readlines()
scp_file.close()
for line in scp_data:
line = line.replace(" ", "")
line = line.replace("\n", "")
if "ENC" in line:
enc_key_str = line.replace("ENC", "")
elif "MAC" in line:
mac_key_str = line.replace("MAC", "")
elif "DEK" in line:
dek_key_str = line.replace("DEK", "")
self._enc_key = util.transform_key_to_list(enc_key_str)
self._mac_key = util.transform_key_to_list(mac_key_str)
self._dek_key = util.transform_key_to_list(dek_key_str)
else:
raise Exception("Invalid scp key file. !!") |
/*
* When allocating pud or pmd pointers, we allocate a complete page
* of PAGE_SIZE rather than PUD_TABLE_SIZE or PMD_TABLE_SIZE. This
* is to ensure that the page obtained from the memblock allocator
* can be completely used as page table page and can be freed
* correctly when the page table entries are removed.
*/
static int early_map_kernel_page(unsigned long ea, unsigned long pa,
pgprot_t flags,
unsigned int map_page_size,
int nid,
unsigned long region_start, unsigned long region_end)
{
unsigned long pfn = pa >> PAGE_SHIFT;
pgd_t *pgdp;
p4d_t *p4dp;
pud_t *pudp;
pmd_t *pmdp;
pte_t *ptep;
pgdp = pgd_offset_k(ea);
p4dp = p4d_offset(pgdp, ea);
if (p4d_none(*p4dp)) {
pudp = early_alloc_pgtable(PAGE_SIZE, nid,
region_start, region_end);
p4d_populate(&init_mm, p4dp, pudp);
}
pudp = pud_offset(p4dp, ea);
if (map_page_size == PUD_SIZE) {
ptep = (pte_t *)pudp;
goto set_the_pte;
}
if (pud_none(*pudp)) {
pmdp = early_alloc_pgtable(PAGE_SIZE, nid, region_start,
region_end);
pud_populate(&init_mm, pudp, pmdp);
}
pmdp = pmd_offset(pudp, ea);
if (map_page_size == PMD_SIZE) {
ptep = pmdp_ptep(pmdp);
goto set_the_pte;
}
if (!pmd_present(*pmdp)) {
ptep = early_alloc_pgtable(PAGE_SIZE, nid,
region_start, region_end);
pmd_populate_kernel(&init_mm, pmdp, ptep);
}
ptep = pte_offset_kernel(pmdp, ea);
set_the_pte:
set_pte_at(&init_mm, ea, ptep, pfn_pte(pfn, flags));
smp_wmb();
return 0;
} |
#271 The Wicked and The Unexpected
Whilst I am always delighted to be presented with a simple, classic page layout and a nice page of neatly ordered panels and gutters, I am never happier than when looking at a comic that plays with the space of the page in order to better tell its stories. I love trying to work out how the panels, their contents and the space around them have all been structured and manipulated in order to better convey their message.
So Issue 23 of The Wicked and the Divine was always going to be up my street. In this issue of the comic Gillen, McKelvie, Wilson and Cowles mix things up by taking their gods/celebrities and featuring them in an issue of Pantheon Monthly. Illustrated in part by Kevin Wada, the issue is full of lush full page fashion shots, fake adverts for luxury brands and interviews/profiles of key members of the Pantheon. The stories and interviews are penned by journalists from newspapers and magazines, writers such as Leigh Alexander, Dorian Lynskey, Mary HK Choi and Ezekiel Kweku.
I found the process by which they created these interviews fascinating: Kieron Gillen role-played as his characters and was interviewed by the journalist for each piece, who then wrote up the interviews as they normally would. Kevin Wada was responsible for the pictures illustrating each article, while the series’ main artist and colourist, Jamie McKelvie and Matt Wilson, created the adverts that punctuated the interviews. Gillen acts as the magazine’s editor, and his opening letter is a fascinating example of the form, perfectly mimicking the style of a magazine editorial whilst also setting the scene for the issue’s narrative aims.
Visually the style is high-end, much more like the luxurious magazines that cost close to £10 and come out quarterly than a monthly glossy. The style is too careful, the pages too well presented. I would 100% buy this magazine on a half day’s holiday, buy myself a coffee and sit outside a cafe feeling delighted with the world and my treat to myself. And that is part of the genius of this issue: Gillen, McKelvie et al are blurring the lines between fact and fiction, comics and magazines, the Pantheon as celebrities and their less groomed reality. The cover of the issue sits within the broader continuity of the series; they haven’t broken with their distinct sequence for this, but once you open the cover you are immediately aware that this is something outwith the series’ usual chronology.
Question – But Hattie, you’re talking about a magazine. Is it really a comic?
Answer – Yes.
Glad we’ve got that out of the way.
In a series that is, in part, about the way in which we create celebrity and the effect that that has on the individuals on both sides of the celebrity/fan relationship, it is fascinating to see the creators of this series take the time to show the reader how the press works within this world. Since Laura became a god, our experience of the Pantheon and its fandom is relatively one-sided; we are experiencing this story from the point of view of the gods themselves. Early in the series we had the perspective of the press in the form of Cassandra, and then she too joined the Pantheon. So whilst we don’t get a view of the fans as such in these articles, we are afforded a sight of how these celebrities are being curated in the press and presented to their fans, and that is fascinating.
Take for example the issue’s first interview, one between The Morrigan and Leigh Alexander. Wada illustrates it first of all with a full-page picture of The Morrigan in the process of getting ready. Disembodied hands lean in from out of shot, one applying lipstick with a confident nonchalance and the other curling her hair. The use of a lip brush to apply the lipstick in the image pushes the reader to question the creation of the image: is the brush applying lipstick or painting the lips themselves? The image’s artifice is reinforced and so the reader is pushed to consider questions about what is real in the world of the Pantheon and indeed within this comic. By glimpsing The Morrigan in the process of being made up we are reminded of the elements of performance that go into being a member of the Pantheon, and the reader is pushed to question who these people really are. Reality and artifice are called into question and the reader is reminded that this is an interview with a fictional character as well as a break with the expected form of the comic.
The choices made about The Morrigan’s clothes in this image are also of note. She is wearing a dress covered in images of stained glass windows (arguably comics themselves) that have been deconstructed and pasted together out of order. In this comic Gillen and McKelvie are ripping up the rule book as to what constitutes a comic, and so too does this dress take an established art form and deconstruct it. And yet The Morrigan is still bound or restricted by the dress; it forces her to sit upright in a not altogether comfortable position and she holds herself awkwardly, which pushes me to wonder about the way in which the expectations of the comic form have controlled the work of these creators and how they seek to break free of them.
The adverts and interviews work together to give the reader an insight into a different version of The Pantheon. We see their groomed and seemingly perfect external side, rather than the squabbling teenagers and twenty-somethings that are scrambling their way through a series of awful situations. We are pushed to contrast their seemingly perfect exteriors, and how they are perceived by their fans and detractors, with the versions of them that we have read every month for the past 2.5 years. Wada’s lush and beautiful images only serve to further this idea; significantly different in style from the series’ usual images, they visually reinforce the difference between the normal narrative and this version of the characters throughout.
The Wicked and The Divine isn’t a perfect series (I don’t really think any comic can be), but its creators are constantly seeking to push at the boundaries of what a comic might be and can do. They seek to explore the unexpected not only in their content and story but also in their use of the form, and that is particularly refreshing. In a world where mainstream comics seem to struggle to innovate, and often harbour outdated and offensive views, the commitment made by these creators (published by one of the larger comics publishers) to innovate formally and represent the diverse world in which we live is sadly unexpected but always welcome.
/**
*
*/
package com.lambton.surveyapp.controller;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import com.lambton.surveyapp.service.UserService;
import com.lambton.surveyapp.view.models.SurveyResponseVO;
import com.lambton.surveyapp.view.models.SurveyResultVO;
import com.lambton.surveyapp.view.models.UserAnalyticsVO;
import com.lambton.surveyapp.view.models.UserVO;
/**
* @author <NAME>-C0783516
* @Since Jun 19, 2021 11:14:13 AM
*
*/
@RestController
@RequestMapping("/surveyapp")
public class UserController {
@Autowired
private UserService userService;
@GetMapping("/user/profile/get")
public UserVO getUserProfile() {
return userService.getProfile();
}
@PostMapping("/user/profile/update")
public UserVO updateUserProfile(@RequestBody UserVO userVO) {
return userService.updateProfile(userVO);
}
@PostMapping("/user/survey/submit")
public SurveyResultVO submitSurveyResponse(@RequestBody SurveyResponseVO surveyResponseVO) {
return userService.submitSurveyResponse(surveyResponseVO);
}
@GetMapping("/user/anylytics")
public UserAnalyticsVO getUserAnalytics() {
return userService.getUserAnalytics();
}
}
|
/**
* Returns a new {@link HazelcastMQInstance} using the given configuration. If
* the configuration is null, a default configuration will be used.
*
* @param config
* the configuration for the instance
* @return the new HazelcastMQ instance
*/
public static HazelcastMQInstance newHazelcastMQInstance(
HazelcastMQConfig config) {
if (config == null) {
config = new HazelcastMQConfig();
}
return new DefaultHazelcastMQInstance(config);
} |
/*
* vmw_otables_setup - Set up guest backed memory object tables
*
* @dev_priv: Pointer to a device private structure
*
* Takes care of the device guest backed surface
* initialization, by setting up the guest backed memory object tables.
* Returns 0 on success and various error codes on failure. A successful return
* means the object tables can be taken down using the vmw_otables_takedown
* function.
*/
int vmw_otables_setup(struct vmw_private *dev_priv)
{
struct vmw_otable **otables = &dev_priv->otable_batch.otables;
int ret;
if (has_sm4_context(dev_priv)) {
*otables = kmemdup(dx_tables, sizeof(dx_tables), GFP_KERNEL);
if (!(*otables))
return -ENOMEM;
dev_priv->otable_batch.num_otables = ARRAY_SIZE(dx_tables);
} else {
*otables = kmemdup(pre_dx_tables, sizeof(pre_dx_tables),
GFP_KERNEL);
if (!(*otables))
return -ENOMEM;
dev_priv->otable_batch.num_otables = ARRAY_SIZE(pre_dx_tables);
}
ret = vmw_otable_batch_setup(dev_priv, &dev_priv->otable_batch);
if (unlikely(ret != 0))
goto out_setup;
return 0;
out_setup:
kfree(*otables);
return ret;
} |
/**
* Created by gaurig on 4/2/17.
*/
public class PersonAdapter extends RecyclerView.Adapter<PersonAdapter.ViewHolder> {
Context context;
ArrayList<PersonModel> personModels = new ArrayList<>();
@Override
public PersonAdapter.ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
PersonAdapter.ViewHolder viewHolder = null;
LayoutInflater inflater = LayoutInflater.from(context);
View personView = inflater.inflate(R.layout.person_item, parent, false);
viewHolder = new PersonAdapter.ViewHolder(personView);
return viewHolder;
}
@Override
public void onBindViewHolder(PersonAdapter.ViewHolder holder, int position) {
final PersonModel personModel = personModels.get(position);
holder.name.setText(personModel.getName());
holder.screenName.setText(personModel.getScreenName());
holder.bio.setText(personModel.getBio());
String profilePicUrl = personModel.getProfileImageUrl();
Glide.with(context).load(profilePicUrl).error(R.drawable.ic_launcher)
.bitmapTransform(new RoundedCornersTransformation(context, 5, 5))
.placeholder(R.drawable.ic_launcher)
.into(holder.profilePic);
}
@Override
public int getItemCount() {
return personModels.size();
}
public PersonAdapter(Context context, ArrayList<PersonModel> personModels) {
this.context = context;
this.personModels = personModels;
}
public static class ViewHolder extends RecyclerView.ViewHolder {
public TextView bio, name, screenName;
public ImageView profilePic;
public ViewHolder(View itemView) {
super(itemView);
bio = (TextView) itemView.findViewById(R.id.bio);
name = (TextView) itemView.findViewById(R.id.name);
screenName = (TextView) itemView.findViewById(R.id.screenName);
profilePic = (ImageView) itemView.findViewById(R.id.profilePic);
}
}
} |
package main
//
//import "C"
//import (
// "fmt"
// "math/rand"
// "testing"
// "time"
//)
//
//
//var sl = NewSKipListIterator(DEFAULT_MAX_LEVEL)
//
//func init() {
//
// t1 := time.Now()
// for i := 0; i < 10000000; i++ {
// //sl.Add(C.test(), i)
// // fmt.Println(sl.level)
// }
// fmt.Println(time.Since(t1))
//}
//
//func BenchmarkNewSkipList(b *testing.B) {
//
// //fmt.Println(sl.GetGE(3))
// fmt.Println(sl.Del(3))
// fmt.Println(sl.GetK(5))
// fmt.Println(sl.GetV(5))
// for i := 0; i < b.N; i++ {
// for j := 0; j < 100000; j++ {
// sl.GetGE(num[j])
// }
// }
//
// //t2 := time.Now()
// //for i := 0; i < 100000; i++ {
// // sl.GetGE(i)
// //}
// //fmt.Println(time.Since(t2))
//
//}
|
// SetOraclePubkeySet sets oracle's pubkey set
func (b *Builder) SetOraclePubkeySet(
pubset *oracle.PubkeySet, idxs []int) error {
Rs := []*btcec.PublicKey{}
for _, idx := range idxs {
Rs = append(Rs, pubset.CommittedRpoints[idx])
}
err := b.Contract.PrepareOracleCommitments(pubset.Pubkey, Rs)
if err != nil {
return err
}
b.Contract.Oracle.PubkeySet = pubset
return nil
} |
package org.aksw.jena_sparql_api.lookup;
import java.util.Collection;
import java.util.function.Function;
import java.util.stream.Stream;
/**
* Take a stream of input items and transform it to a stream of output items
*
* @author raven
*
* @param <I>
* @param <O>
*/
public interface ItemService<I, O>
extends Function<Collection<I>, Stream<O>>
{
// default List<O> toList(Stream<I> in) {
// List<O> result = apply(in).collect(Collectors.toList());
// return result;
// }
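// Usage sketch (hypothetical helper, not part of the original interface): wrap a
// per-item mapping function as an ItemService that transforms each element of the
// input collection lazily.
static <I, O> ItemService<I, O> fromItemMapper(Function<? super I, ? extends O> mapper) {
return items -> items.stream().map(mapper);
}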
}
|
/**
* A component for fetching artists from Spotify.
*
* @author Dawid Samolyk
*
*/
@Component
@Scope(ConfigurableBeanFactory.SCOPE_SINGLETON)
public class SpotifyArtistWebApi extends AbstractSpotifyWebApi {
@Autowired
private SpotifyArtistConverter artistConverter;
@Autowired
private SpotifyTrackWebApi tracksApi;
@Autowired
private MessagesProvider messagesProvider;
/**
* Provides artists with given names.
*
* @param name
* Artist name.
* @param topTracksLimit
* Limit of top tracks to get.
* @return A collection of artists.
* @throws SpotifyConnectorException
* Thrown when input data is invalid or artist not found or any
* internal error occurs.
*/
public Collection<SpotifyArtist> getArtistsByName(String name, int topTracksLimit)
throws SpotifyConnectorException {
final ArtistSearchRequest request = getApi().searchArtists(name).build();
try {
return fetchArtists(request, topTracksLimit);
} catch (EmptyResponseException | BadRequestException e) {
getLogger().error(e.getLocalizedMessage(), e);
throw new ArtistNotFoundException(messagesProvider.get("spotify.api.artist.notfound.error"));
} catch (IOException | WebApiException e) {
throw new SystemException(e.getLocalizedMessage(), e);
}
}
private Collection<SpotifyArtist> fetchArtists(ArtistSearchRequest request, int topTracksLimit)
throws IOException, WebApiException, ApplicationException {
final List<Artist> result = request.get().getItems();
if (result.isEmpty()) {
throw new ArtistNotFoundException(messagesProvider.get("spotify.api.artist.notfound.error"));
}
return result.stream().map(getArtistMapper(topTracksLimit)).collect(Collectors.toList());
}
private Function<? super Artist, ? extends SpotifyArtist> getArtistMapper(int topTracksLimit) {
return eachArtist -> {
final SpotifyArtist result = artistConverter.convertFrom(eachArtist);
result.setTopTracks(fetchTopTracks(eachArtist, topTracksLimit));
result.setSimilarArtists(fetchRelatedArtists(eachArtist));
return result;
};
}
private Collection<SpotifyTrack> fetchTopTracks(Artist eachArtist, int topTracksLimit) {
try {
return tracksApi.getTopTracksByArtistId(eachArtist, topTracksLimit);
} catch (SpotifyConnectorException e) {
getLogger().error(e.getLocalizedMessage(), e);
return Collections.emptyList();
}
}
private Collection<String> fetchRelatedArtists(Artist artist) {
try {
final List<Artist> result = getApi().getArtistRelatedArtists(artist.getId()).build().get();
return result.stream().map(Artist::getName).collect(Collectors.toList());
} catch (EmptyResponseException | BadRequestException e) {
getLogger().error(
messagesProvider.get("spotify.api.related.artists.notfound.error", e.getLocalizedMessage()), e);
return Collections.emptyList();
} catch (IOException | WebApiException | SystemException e) {
getLogger().error(e.getLocalizedMessage(), e);
return Collections.emptyList();
}
}
} |
# -*- encoding: utf-8 -*-
from flexible_reports.models.behaviors import Labelled, Titled
VAL = "foo"
def test_labelled():
x = Labelled()
x.label = VAL
assert str(x) == VAL
def test_titled():
x = Titled()
x.title = VAL
assert str(x) == VAL
|
For the third time in three years, the dark ambient label Cryo Chamber presents a collaboration paying tribute to an entity from the writings of H.P. Lovecraft. As before, this is a true collaboration, with the work of each artist seamlessly woven into a single, unbroken track that spans 190 minutes across three CDs. Unless you’re extremely familiar with the sounds and techniques of each artist, it’s nigh impossible to distinguish one from another; perhaps the limiting scope of the dark ambient genre is also a factor. The album features a mixture of well-known and new genre projects, and the majority are on Cryo Chamber’s roster. Here is a list of the involved artists:
Aegri Somnia, Alphaxone, Apócrýphos, Atrium Carceri, Council of Nine, Cryobiosis, DarkRad, Dronny Darko, Enmarta, Flowers for Bodysnatchers, God Body Disconnect, Gydja, Kammarheit, Kristoffer Oustad, Metatron Omega, Mystified, Neizvestija, Northumbria, ProtoU, Randal Collier-Ford, Sabled Sun, SiJ, Sjellos, Svartsinn, Wordclock, and Ugasanie.
Conceptually, then, little has changed since the first installment in this series, Cthulhu (2014). It’s also worth noting that this makes six CDs’ worth of content in three years, with Azathoth (2015) being a double-album. This time around, with Nyarlathotep being such a lengthy release, it’s a lofty challenge to maintain momentum. All the familiar dark ambient tropes are here: grand washes, oppressive drones, and thick synthetic atmosphere, all dotted and dashed with a series of sampled noise. The major difference here is the presence of stringed instruments and pipes, no doubt inspired by the origins of Nyarlathotep himself.
In Lovecraft’s writing, Nyarlathotep is the son of Azathoth, and is somewhat unique in Lovecraft’s fictitious pantheon of otherworldly gods in that he manifests as a man; to be precise, a man resembling an Egyptian pharaoh. Sometimes called “the Crawling Chaos,” Nyarlathotep can appear in a host of different forms, many quite monstrous, and aims to drive humanity insane, even plotting to bring about the end of the world. He uses otherworldly instruments to gather followers to him, and this is no doubt reflected in the album via the Eastern-flavored flutes and strings.
While named for such a malignant deity, however, Nyarlathotep itself is quite subdued, if not often sedating. One might expect that a being called “the Crawling Chaos” would inspire howling storms of madness and, well, chaos, but that’s simply not the case as the album follows its lengthy course. The beds of sound bleed into each other slowly, although it seems easier this time to pick out the transition from artist to artist. [Ed. Note: Nyarlathotep actually features a collaboration of artists whose work takes place simultaneously, not consecutively as implied.] Sometimes, it’s even beautiful.
It would seem that Cryo Chamber’s concept has run its course. With each release, the albums have grown in length, but not necessarily in scope. The traditional instruments (sampled or otherwise) contribute much-needed variety, but their presence is spaced much too far apart to maintain a cohesive mood. There are other entities yet unrepresented from Lovecraft’s mythology, but if this series continues, it’s fair to guess that we may be in for more of the same. If you’re a die-hard fan of H.P. Lovecraft or Cryo Chamber’s sound, you won’t be disappointed, but there are other more focused efforts to be heard—Cthulhu and Azathoth, for example.
Track List:
01) Nyarlathotep 1
02) Nyarlathotep 2
03) Nyarlathotep 3
Written by: Edward Rinderle
Label: Cryo Chamber (United States) / CRYO 046 / 3xCD, Digital
Dark Ambient / Drone |
export class ShipFlagCodeModel {
shipFlagCodeId: number;
name: string;
description: string;
country: any;
countryId: number;
}
|
// edn/go-edn-basic-types-4/main.go
package main
import (
"fmt"
"olympos.io/encoding/edn"
)
func main() {
var a interface{} = nil
var b map[string]string = nil
c := "foo bar baz"
aEDN, err := edn.Marshal(a)
fmt.Println(string(aEDN))
fmt.Println(err)
fmt.Println()
bEDN, err := edn.Marshal(b)
fmt.Println(string(bEDN))
fmt.Println(err)
fmt.Println()
cEDN, err := edn.Marshal(c)
fmt.Println(string(cEDN))
fmt.Println(err)
}
|
//sendToPeer sends consensus data to specified peer
func (cbi *ConsensusChainedBftImpl) sendToPeer(consensusData []byte, index uint64) {
peer := cbi.smr.getPeerByIndex(index)
if peer == nil {
cbi.logger.Errorf("[%v] get peer with index %v failed", cbi.selfIndexInEpoch, index)
return
}
msg := &net.NetMsg{
To: peer.id,
Type: net.NetMsg_CONSENSUS_MSG,
Payload: consensusData,
}
go cbi.msgbus.Publish(msgbus.SendConsensusMsg, msg)
} |
from . import graphical, motor, text |
def add_player(self, state: object) -> object:
raise RuntimeError("Cannot add new players to an existing game.") |
/** @file chert_alldocsmodifiedpostlist.cc
* @brief A ChertAllDocsPostList plus pending modifications.
*/
/* Copyright (C) 2008 Lemur Consulting Ltd
* Copyright (C) 2006,2007,2008,2009,2010 <NAME>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <config.h>
#include "chert_alldocsmodifiedpostlist.h"
#include "chert_database.h"
#include "debuglog.h"
#include "str.h"
using namespace std;
ChertAllDocsModifiedPostList::ChertAllDocsModifiedPostList(Xapian::Internal::RefCntPtr<const ChertDatabase> db_,
Xapian::doccount doccount_,
const map<Xapian::docid, Xapian::termcount> & doclens_)
: ChertAllDocsPostList(db_, doccount_),
doclens(doclens_),
doclens_it(doclens.begin())
{
LOGCALL_CTOR(DB, "ChertAllDocsModifiedPostList", db_.get() | doccount_ | doclens_);
}
void
ChertAllDocsModifiedPostList::skip_deletes(Xapian::weight w_min)
{
LOGCALL_VOID(DB, "ChertAllDocsModifiedPostList::skip_deletes", w_min);
while (!ChertAllDocsPostList::at_end()) {
if (doclens_it == doclens.end()) return;
if (doclens_it->first != ChertAllDocsPostList::get_docid()) return;
if (doclens_it->second != static_cast<Xapian::termcount>(-1)) return;
++doclens_it;
ChertAllDocsPostList::next(w_min);
}
while (doclens_it != doclens.end() && doclens_it->second == static_cast<Xapian::termcount>(-1)) {
++doclens_it;
}
}
Xapian::docid
ChertAllDocsModifiedPostList::get_docid() const
{
LOGCALL(DB, Xapian::docid, "ChertAllDocsModifiedPostList::get_docid()", NO_ARGS);
if (doclens_it == doclens.end()) RETURN(ChertAllDocsPostList::get_docid());
if (ChertAllDocsPostList::at_end()) RETURN(doclens_it->first);
RETURN(min(doclens_it->first, ChertAllDocsPostList::get_docid()));
}
Xapian::termcount
ChertAllDocsModifiedPostList::get_doclength() const
{
LOGCALL(DB, Xapian::termcount, "ChertAllDocsModifiedPostList::get_doclength", NO_ARGS);
// Override with value from doclens_it (which cannot be -1, because that
// would have been skipped past).
if (doclens_it != doclens.end() &&
(ChertAllDocsPostList::at_end() ||
doclens_it->first <= ChertAllDocsPostList::get_docid()))
RETURN(doclens_it->second);
RETURN(ChertAllDocsPostList::get_doclength());
}
PostList *
ChertAllDocsModifiedPostList::next(Xapian::weight w_min)
{
LOGCALL(DB, PostList *, "ChertAllDocsModifiedPostList::next", w_min);
if (have_started) {
if (ChertAllDocsPostList::at_end()) {
++doclens_it;
skip_deletes(w_min);
RETURN(NULL);
}
Xapian::docid unmod_did = ChertAllDocsPostList::get_docid();
if (doclens_it != doclens.end() && doclens_it->first <= unmod_did) {
if (doclens_it->first < unmod_did &&
doclens_it->second != static_cast<Xapian::termcount>(-1)) {
++doclens_it;
skip_deletes(w_min);
RETURN(NULL);
}
++doclens_it;
}
}
ChertAllDocsPostList::next(w_min);
skip_deletes(w_min);
RETURN(NULL);
}
PostList *
ChertAllDocsModifiedPostList::skip_to(Xapian::docid desired_did,
Xapian::weight w_min)
{
LOGCALL(DB, PostList *, "ChertAllDocsModifiedPostList::skip_to", desired_did | w_min);
if (!ChertAllDocsPostList::at_end())
ChertAllDocsPostList::skip_to(desired_did, w_min);
/* FIXME: should we use lower_bound() on the map? */
while (doclens_it != doclens.end() && doclens_it->first < desired_did) {
++doclens_it;
}
skip_deletes(w_min);
RETURN(NULL);
}
bool
ChertAllDocsModifiedPostList::at_end() const
{
LOGCALL(DB, bool, "ChertAllDocsModifiedPostList::at_end", NO_ARGS);
RETURN(doclens_it == doclens.end() && ChertAllDocsPostList::at_end());
}
string
ChertAllDocsModifiedPostList::get_description() const
{
string desc = "ChertAllDocsModifiedPostList(did=";
desc += str(get_docid());
desc += ')';
return desc;
}
|
import { state } from 'decorator/state'
import { bridge } from 'native/bridge'
import { native } from 'native/native'
import { Event } from 'event/Event'
import { TapGestureDetector } from 'gesture/TapGestureDetector'
import { View } from 'view/View'
import { Window } from 'view/Window'
import './TextInput.style'
@bridge('dezel.form.TextInput')
/**
* @class TextInput
* @super View
* @since 0.1.0
*/
export class TextInput extends View {
//--------------------------------------------------------------------------
// Properties
//--------------------------------------------------------------------------
/**
* The text input's type.
* @property type
* @since 0.1.0
*/
@native public type!: 'text' | 'number' | 'email' | 'phone' | 'password'
/**
* The text input's value.
* @property value
* @since 0.1.0
*/
@native public value!: string
/**
* The text input's placeholder.
* @property placeholder
* @since 0.1.0
*/
@native public placeholder!: string
/**
* The text input's placeholder color.
* @property placeholderColor
* @since 0.1.0
*/
@native public placeholderColor!: string
/**
* The text input's format.
* @property format
* @since 0.1.0
*/
@native public format!: string
/**
* The text input's locale.
* @property locale
* @since 0.1.0
*/
@native public locale!: string
/**
* Whether the text input auto corrects the value.
* @property autocorrect
* @since 0.1.0
*/
@native public autocorrect!: boolean
/**
* Whether the text input auto capitalizes the value.
* @property autocapitalize
* @since 0.1.0
*/
@native public autocapitalize!: boolean
/**
* The text input's font family.
* @property fontFamily
* @since 0.1.0
*/
@native public fontFamily!: string
/**
* The text input's font weight.
* @property fontWeight
* @since 0.1.0
*/
@native public fontWeight!: string
/**
* The text input's font style.
* @property fontStyle
* @since 0.1.0
*/
@native public fontStyle!: string
/**
* The text input's font size.
* @property fontSize
* @since 0.1.0
*/
@native public fontSize!: number
/**
* The text input's text color.
* @property textColor
* @since 0.1.0
*/
@native public textColor!: string
/**
* The text input's text alignment.
* @property textAlign
* @since 0.1.0
*/
@native public textAlign!: 'top left' | 'top right' | 'top center' | 'left' | 'right' | 'center' | 'bottom left' | 'bottom right' | 'bottom center'
/**
* The text input's text kerning.
* @property textKerning
* @since 0.1.0
*/
@native public textKerning!: number
/**
* The text input's text leading.
* @property textLeading
* @since 0.1.0
*/
@native public textLeading!: number
/**
* The text input's text decoration.
* @property textDecoration
* @since 0.1.0
*/
@native public textDecoration!: string
/**
* The text input's text transform.
* @property textTransform
* @since 0.1.0
*/
@native public textTransform!: 'none' | 'uppercase' | 'lowercase' | 'capitalize'
/**
* The text input's text shadow blur size.
* @property textShadowBlur
* @since 0.1.0
*/
@native public textShadowBlur!: number
/**
* The text input's text shadow color.
* @property textShadowColor
* @since 0.1.0
*/
@native public textShadowColor!: string
/**
* The text input's text shadow top offset.
* @property textShadowOffsetTop
* @since 0.1.0
*/
@native public textShadowOffsetTop!: number
/**
* The text input's text shadow left offset.
* @property textShadowOffsetLeft
* @since 0.1.0
*/
@native public textShadowOffsetLeft!: number
/**
* Whether the text input is disabled.
* @property disabled
* @since 0.1.0
*/
@state public disabled: boolean = false
//--------------------------------------------------------------------------
// Methods
//--------------------------------------------------------------------------
/**
* Moves the focus to this text input.
* @method focus
* @since 0.1.0
*/
public focus() {
native(this).focus()
return this
}
/**
* Removes the focus from this text input.
* @method blur
* @since 0.1.0
*/
public blur() {
native(this).blur()
return this
}
/**
* Select text at range.
* @method selectRange
* @since 0.1.0
*/
public selectRange(): TextInput
public selectRange(index: number | null): TextInput
public selectRange(index: number | null, ending: number | null): TextInput
public selectRange(...args: Array<any>): TextInput {
switch (args.length) {
case 0:
native(this).selectRange()
break
case 1:
native(this).selectRange(args[0], args[0])
break
case 2:
native(this).selectRange(args[0], args[1])
break
}
return this
}
//--------------------------------------------------------------------------
// Events
//--------------------------------------------------------------------------
/**
* @inherited
* @method onEvent
* @since 0.1.0
*/
public onEvent(event: Event) {
switch (event.type) {
case 'change':
this.onChange(event)
break
case 'focus':
this.onFocusDefault(event)
this.onFocus(event)
break
case 'blur':
this.onBlurDefault(event)
this.onBlur(event)
break
}
return super.onEvent(event)
}
/**
* Called when the text input value changes.
* @method onChange
* @since 0.1.0
*/
public onChange(event: Event) {
}
/**
* Called when the text input receives the focus.
* @method onFocus
* @since 0.1.0
*/
public onFocus(event: Event) {
}
/**
* Called when the text input loses the focus.
* @method onBlur
* @since 0.1.0
*/
public onBlur(event: Event) {
}
/**
* @inherited
* @method onMoveToWindow
* @since 0.1.0
*/
public onMoveToWindow(window: Window | null, former: Window | null) {
if (window) {
window.gestures.append(this.gesture)
return
}
if (former) {
former.gestures.remove(this.gesture)
return
}
}
//--------------------------------------------------------------------------
// Private API
//--------------------------------------------------------------------------
/**
* @property gesture
* @since 0.1.0
* @hidden
*/
private gesture: TapGestureDetector = new TapGestureDetector(callback => {
let target = this.window?.findViewAt(callback.x, callback.y)
if (target == this) {
return
}
this.blur()
})
/**
* @method onBlurDefault
* @since 0.1.0
* @hidden
*/
private onBlurDefault(event: Event) {
this.states.remove('focused')
}
/**
* @method onFocusDefault
* @since 0.1.0
* @hidden
*/
private onFocusDefault(event: Event) {
this.states.append('focused')
}
//--------------------------------------------------------------------------
// Native API
//--------------------------------------------------------------------------
/**
* @method nativeOnChange
* @since 0.1.0
* @hidden
*/
private nativeOnChange(value: string) {
this.emit<TextInputChangeEvent>('change', { data: { value } })
}
/**
* @method nativeOnFocus
* @since 0.1.0
* @hidden
*/
private nativeOnFocus() {
this.emit('focus')
}
/**
* @method nativeOnBlur
* @since 0.1.0
* @hidden
*/
private nativeOnBlur() {
this.emit('blur')
}
}
/**
* @type TextInputChangeEvent
* @since 0.1.0
*/
export type TextInputChangeEvent = {
value: string
} |
// file/zstd.go
package file
import (
"github.com/valyala/gozstd"
"io"
)
const (
DefaultZstdCompressionLevel = gozstd.DefaultCompressionLevel
)
type ZstdWrite struct {
w *gozstd.Writer
}
func (z *ZstdWrite) NewWriter(w io.Writer) (err error) {
z.w = gozstd.NewWriterLevel(w, DefaultZstdCompressionLevel)
return nil
}
func (z *ZstdWrite) Write(p []byte) (int, error) {
return z.w.Write(p)
}
func (z *ZstdWrite) WriteString(s string) (int, error) {
return z.w.Write([]byte(s))
}
func (z *ZstdWrite) Reset(w io.Writer) {
// Pass a nil dictionary: an empty CDict is not a valid compression dictionary.
z.w.Reset(w, nil, DefaultZstdCompressionLevel)
}
func (z *ZstdWrite) Flush() error {
return z.w.Flush()
}
func (z *ZstdWrite) Close() error {
return z.w.Close()
}
|
During the long years of racial apartheid it was a no-go zone off Africa’s southern tip, run so secretively that photographs of its most famous prisoner could never emerge.
Now Robben Island, a UN World Heritage site, is the latest and one of the most unusual additions to Google’s Street View. Users can follow in Nelson Mandela’s footsteps all the way into the spartan five sq metre prison cell where he languished for 18 years.
In contrast to the draconian censorship of the white-minority regime, Google was invited to tour with its Trekker backpack-mounted camera taking 360-degree panoramic photos of the island, which has variously served as a leper colony, quarantine station, mental hospital and political prison. It is now a museum and tourist attraction with a small resident population.
A screengrab from the Google Street View tour of Robben Island showing Nelson Mandela’s cell door. Photograph: Guardian
Street View’s images include the house where Robert Sobukwe, the founder of the Pan Africanist Congress, was kept in solitary confinement, the quarry where political prisoners laboured and Mandela’s eyes were damaged by light and dust, and the courtyard where Mandela gardened a small plot and covertly began writing his autobiography, Long Walk to Freedom.
The Google project, which includes a virtual tour guided by former political prisoner Vusumzi Mcongo, comes after recent setbacks at Robben Island including complaints over the quality of the museum and breakdowns of the ferry that brings visitors from Cape Town.
Sibongiseni Mkhize, the chief executive of the Robben Island Museum, said: “The reason Robben Island is now a museum is to educate people about the part of South Africa’s heritage that is embodied in the island’s multi-layered history. Together with Google, we are making this heritage accessible to people all over the world.”
Robben Island prison’s courtyard in the mid-1960s. Photograph: Sipa Press/Rex
The initiative was welcomed by a veteran of the anti-apartheid struggle, Ahmed Kathrada, a former inmate and close friend of the late Mandela. “Not being able to see or interact with children for 20 years was possibly the most difficult thing to endure during my time on the island,” he said. “There’s a poetic justice that children in classrooms all over the world will now be able to visit Robben Island using this technology.”
Google said it would also develop notes for teachers who will be using this interactive tour as an educational tool. Luke McKend, the director of Google South Africa, added: “Robben Island is a symbol of South Africa’s fight for freedom. We’re excited about helping people to learn more about this heritage and to explore the island from any device, anywhere in the world.” |
/**
* The LineBasedGazetteerResource uses an input stream from any URL to retrieve the ID, name values
* used to populate its Gazetteer. The ID, name fields can be {@link
* LineBasedGazetteerResource#PARAM_SEPARATOR separated} by anything that can be recognized by a
 * regular expression (using <code>String.split(regex, 2)</code>).
*
* @author Florian Leitner
*/
public
class LineBasedGazetteerResource extends ExactGazetteerResource {
/**
* The field separator to use. May be a regular expression and defaults to a tab. Input lines are
* split into ID, name pairs.
*/
public static final String PARAM_SEPARATOR = "FieldSeparator";
@ConfigurationParameter(name = PARAM_SEPARATOR, mandatory = false, defaultValue = "\t")
protected String separator;
public static
class Builder extends ExactGazetteerResource.Builder {
public
Builder(String url) {
super(LineBasedGazetteerResource.class, url);
}
/** Any regular expression that can be used to split the Gazetteer input lines in two. */
public
Builder setSeparator(String regex) {
setOptionalParameter(PARAM_SEPARATOR, regex);
return this;
}
}
/**
* Configure a Gazetteer from a line-based data stream.
* <p/>
* In the simplest case, this could be just a flat-file.
*
* @param resourceUri where a line-based data stream can be fetched
*/
public static
Builder configure(String resourceUri) {
return new Builder(resourceUri);
}
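// Hypothetical usage sketch (not part of the original source): configuring the
// resource for a tab-separated "ID<TAB>name" file. The file URL is an assumption
// for illustration, and the final build/binding step is left to the surrounding
// UIMA setup.
//
//   LineBasedGazetteerResource.Builder builder = LineBasedGazetteerResource
//       .configure("file:dictionaries/entries.tsv")
//       .setSeparator("\t");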
@Override
public synchronized
void load(DataResource dataResource) throws ResourceInitializationException {
super.load(dataResource);
}
/**
* Compile the DFA.
*
* @throws ResourceInitializationException
*
*/
@Override
public
void afterResourcesInitialized() {
Pattern pattern = Pattern.compile(separator);
String line = null;
InputStream inStr = null;
try {
inStr = getInputStream();
final BufferedReader reader = new BufferedReader(new InputStreamReader(inStr));
Set<String> knownKeys = new HashSet<String>();
String lastId = null;
while ((line = reader.readLine()) != null) {
line = line.trim();
if (line.length() == 0) continue;
String[] keyVal = pattern.split(line, 2);
if (!keyVal[0].equals(lastId)) {
knownKeys = new HashSet<String>();
lastId = keyVal[0];
}
put(keyVal[0], keyVal[1], knownKeys);
}
} catch (final IOException e) {
throw new RuntimeException(e.getLocalizedMessage() + " while loading " + resourceUri);
} catch (IndexOutOfBoundsException e) {
throw new RuntimeException(
"received an illegal line from " + resourceUri + ": '" + line +
"'"
);
} finally {
if (inStr != null) {
try {
inStr.close();
} catch (final IOException e) {}
}
}
}
} |
/**
* Returns a filtered copy of the given permissions, only including the tags specified.
*/
public static Permissions filterPermissionsByTags(Permissions perms, Iterable<Tag> tags) {
Permissions filtered = new Permissions();
for (Tag tag: tags) {
String tagStr = tag.getValue();
AccessList acl = perms.get(tagStr);
if (acl != null) {
AccessList aclCopy = new AccessList(new ArrayList<>(acl.getIn()),
new ArrayList<>(acl.getNotIn()));
filtered.put(tagStr, aclCopy);
}
}
return filtered;
} |
# coding: utf-8
"""
Virtual model and proxy wrapper for working with a group of models
"""
from __future__ import absolute_import
from collections import Iterable
from functools import reduce
from itertools import islice
import copy
from six.moves import filter
from six.moves import filterfalse
from six.moves import map
from six.moves import zip
import six
from django.db.models import manager
from django.db.models import query
from m3_django_compat import ModelOptions
from m3_django_compat import get_related
def kwargs_only(*keys):
keys = set(keys)
def wrapper(fn):
def inner(self, *args, **kwargs):
wrong_keys = set(kwargs.keys()) - keys
if wrong_keys:
raise TypeError(
'%s is an invalid keyword argument(s)'
' for this function' % (
', '.join('"%s"' % k for k in wrong_keys)))
return fn(self, args, **kwargs)
return inner
return wrapper
def _call_if_need(x):
return x() if callable(x) else x
# ==============================================================================
# VirtualModelManager
# =============================================================================
class ObjectWrapper(object):
"""Обертка над объектами, инвертирующая результат сравнения больше/меньше.
Используется для сортировки данных в виртуальных моделях.
"""
def __init__(self, obj, direction):
self.obj = obj
self.direction = direction
def __eq__(self, other):
if isinstance(other, ObjectWrapper):
other = other.obj
return self.obj == other
def __lt__(self, other):
if isinstance(other, ObjectWrapper):
other = other.obj
is_obj_not_none = self.obj is not None
is_other_not_none = other is not None
if is_obj_not_none and is_other_not_none:
first, second = self.obj, other
else:
first, second = is_obj_not_none, is_other_not_none
return first < second if self.direction == 1 else first > second
class VirtualModelManager(object):
"""
Imitation of a Django QueryManager for VirtualModel.
"""
_operators = {
'contains': lambda val: lambda x: val in x,
'iexact': lambda val: lambda x, y=val.lower(): x.lower() == y,
'icontains': (
lambda val: lambda x, y=val.lower(): y in x.lower() if x
else False
),
'lte': lambda val: lambda x: x <= val,
'gte': lambda val: lambda x: x >= val,
'lt': lambda val: lambda x: x < val,
'gt': lambda val: lambda x: x > val,
'isnull': lambda val: lambda x: (x is None or x.id is None) == val,
'in': lambda vals: lambda x: x in vals,
'overlap': lambda vals: (
lambda searched_vals: set(searched_vals) & set(vals)
),
}
def __init__(self, model_clz=None, procs=None, **kwargs):
if not model_clz:
return
self._clz = model_clz
self._procs = procs or []
self._ids_getter_kwargs = kwargs
def __get__(self, inst, clz):
if inst:
raise TypeError("Manager can not be accessed from model instance!")
return self.__class__(clz)
def all(self):
return self._fork_with(self._procs[:])
def __getitem__(self, arg):
if isinstance(arg, slice):
procs = self._procs[:]
procs.append(
lambda data: islice(data, arg.start, arg.stop, arg.step))
return self._fork_with(procs)
return list(self)[arg]
def __iter__(self):
return reduce(
lambda arg, fn: fn(arg),
self._procs,
map(
self._clz._from_id,
_call_if_need(
self._clz._get_ids(
**self._ids_getter_kwargs))))
def _fork_with(self, procs=None, **kwargs):
kw = self._ids_getter_kwargs.copy()
kw.update(kwargs)
if not procs:
procs = self._procs[:]
return self.__class__(self._clz, procs, **kw)
def configure(self, **kwargs):
return self._fork_with(**kwargs)
@classmethod
def _make_getter(cls, key, val=None, allow_op=False):
def folder(fn, attr):
return lambda obj: fn(getattr(obj, attr))
def default_op(op):
return lambda val: lambda obj: val == getattr(obj, op)
key = key.split('__')
if allow_op:
if len(key) > 1:
op = key.pop()
op = cls._operators.get(op, default_op(op))(val)
else:
op = (lambda val: lambda obj: obj == val)(val)
else:
def op(obj):
return obj
return reduce(folder, reversed(key), op)
@classmethod
def _from_q(cls, q):
fns = [
cls._make_getter(c[0], c[1], allow_op=True)
if isinstance(c, tuple) else
cls._from_q(c)
for c in q.children
]
comb = all if q.connector == q.AND else any
return lambda obj: comb(f(obj) for f in fns)
def _filter(self, combinator, args, kwargs):
procs = self._procs[:]
fns = []
if args:
fns.extend(self._from_q(q) for q in args)
if kwargs:
fns.extend(
self._make_getter(key, val, allow_op=True)
for key, val in six.iteritems(kwargs)
)
if fns:
procs.append(
lambda items: combinator(
lambda obj: all(fn(obj) for fn in fns), items))
return self._fork_with(procs)
def filter(self, *args, **kwargs):
return self._filter(filter, args, kwargs)
def exclude(self, *args, **kwargs):
return self._filter(filterfalse, args, kwargs)
def order_by(self, *args):
procs = self._procs[:]
getters, dirs = [], []
for a in args:
if a.startswith('-'):
d = -1
a = a[1:]
else:
d = 1
getters.append(self._make_getter(a))
dirs.append(d)
if getters:
# factory for sorting procedures
def make_proc(**kwargs):
return lambda data: iter(sorted(data, **kwargs))
def key_fn(obj):
return tuple(
ObjectWrapper(getter(obj), direction)
for getter, direction in zip(getters, dirs)
)
procs.append(make_proc(key=key_fn))
return self._fork_with(procs)
def get(self, *args, **kwargs):
if args and not kwargs:
kwargs['id'] = args[0]
result = list(self.filter(**kwargs))
if not result:
raise self._clz.DoesNotExist()
elif len(result) > 1:
raise self._clz.MultipleObjectsReturned()
return result[0]
def values(self, *args):
return (
dict(list(zip(args, t)))
for t in self.values_list(*args)
)
@kwargs_only('flat')
def values_list(self, args, flat=False):
if flat and len(args) > 1:
raise TypeError(
"'flat' is not valid when values_list is "
"called with more than one field"
)
if flat:
getter = self._make_getter(args[0])
return map(getter, self)
else:
getters = list(map(self._make_getter, args))
return (tuple(g(o) for g in getters) for o in self)
def select_related(self):
return self
def count(self):
return sum(1 for _ in self)
def __len__(self):
return self.count()
# =============================================================================
# VirtualModel
# =============================================================================
class VirtualModel(object):
"""
A virtual model implementing a Django-ORM-compatible API for
working with arbitrary data.
The manager supports an additional lookup:
- overlap: checks whether an iterable field of a virtual model
object intersects with a given iterable object.
Example model:
>>> M = VirtualModel.from_data(
... lambda: (
... {'x': x, 'y': y * 10}
... for x in xrange(5)
... for y in xrange(5)
... ),
... auto_ids=True
... )
The model can now be used like this:
>>> M.objects.count()
25
>>> M.objects.filter(x__gte=2).exclude(y__in=[10, 20, 30]).count()
6
>>> list(M.objects.filter(x=0).order_by("-y").values_list("y", flat=True))
[40, 30, 20, 10, 0]
Example of filtering with ``overlap``:
>>> M = VirtualModel.from_data(
... lambda: (
... {'related_objects_values': [x, y * 10], 'y': y * 10}
... for x in range(3)
... for y in range(3)
... ),
... auto_ids=True
... )
Find all objects whose ``related_objects_values``
intersects with [2, 20].
>>> list(M.objects.filter(
... related_objects_values__overlap=[2, 20]
... ).values('related_objects_values'))
[{'related_objects_values': [0, 20]},
{'related_objects_values': [1, 20]},
{'related_objects_values': [2, 0]},
{'related_objects_values': [2, 10]},
{'related_objects_values': [2, 20]}]
"""
class DoesNotExist(Exception):
pass
class MultipleObjectsReturned(Exception):
pass
@classmethod
def _get_ids(cls):
"""
Returns an iterable, or a callable returning an iterable;
for each element of that iterable
an instance of the class will be created
(each element of the iterator is passed to the constructor)
"""
return NotImplemented
@classmethod
def _from_id(cls, data):
return cls(data)
objects = VirtualModelManager()
@classmethod
def from_data(cls, data, auto_ids=False, class_name="NewVirtualModel"):
"""
Returns a subclass built from the given data
@data - an iterable of dicts
@auto_ids - if True, the id field of model objects
will be generated automatically
@class_name - name of the subclass to create
"""
assert isinstance(class_name, six.string_types)
if auto_ids:
def get_ids(cls):
cnt = 1
for d in _call_if_need(data):
d['id'] = cnt
yield d
cnt += 1
else:
def get_ids(cls):
return data
return type(
str(class_name),
(cls,),
{
'_get_ids': classmethod(get_ids),
'__init__': lambda self, data: self.__dict__.update(data),
'__repr__': lambda self: '%s(%r)' % (
self.__class__.__name__,
self.__dict__)
}
)
# =============================================================================
# model_proxy_metaclass
# =============================================================================
class ModelProxyMeta(type):
"""
Metaclass for ModelProxy
"""
def __new__(cls, name, bases, dic):
model = dic.get('model')
relations = dic.get('relations') or []
if not model:
return type(name, bases, dic)
class LazyMetaData(object):
"""
Descriptor implementing lazy construction of the data
required by the proxy model's meta information
"""
FIELDS, FIELD_DICT = 0, 1
_CACHING_ATTR = '_lazy_metadata'
def __init__(self, attr):
self._attr = attr
def __get__(self, inst, clazz):
# the descriptor must only be used on the class, not on instances
assert inst is None
cache = getattr(clazz, self._CACHING_ATTR, None)
if not cache:
cache = self._collect_metadata()
setattr(clazz, self._CACHING_ATTR, cache)
return cache[self._attr]
def _collect_metadata(self):
# collect the fields of the main model and of the
# specified models related to it
def add_prefix(field, prefix):
field = copy.copy(field)
field.attname = '%s.%s' % (prefix, field.attname)
return field
def submeta(meta, path):
for field in path.split('.'):
field = ModelOptions(meta.model).get_field(field)
meta = get_related(field).parent_model._meta
return meta
meta = model._meta
fields_ = []
fields_dict = {}
for prefix, meta in [(model.__name__.lower(), meta)] + [
(rel, submeta(meta, rel)) for rel in relations]:
for f in meta.fields:
f = add_prefix(f, prefix)
fields_.append(f)
fields_dict[f.attname] = f
return fields_, fields_dict
# Django-like class holding the model's meta information
class BaseMeta(object):
fields = LazyMetaData(LazyMetaData.FIELDS)
field_dict = LazyMetaData(LazyMetaData.FIELD_DICT)
verbose_name = model._meta.verbose_name
verbose_name_plural = model._meta.verbose_name_plural
@classmethod
def get_field(cls, field_name):
return cls.field_dict[field_name]
meta_mixin = dic.pop('Meta', None)
if meta_mixin:
dic['_meta'] = type('_meta', (meta_mixin, BaseMeta), {})
else:
dic['_meta'] = BaseMeta
relations_for_select = [r.replace('.', '__') for r in relations]
# wrapper around the QueryManager
class WrappingManager(object):
def __init__(self, manager, proxy=None):
self._manager = manager
self._proxy_cls = proxy
self._query = None
def _get_query(self):
if not self._query:
try:
self._query = self._manager._clone()
except AttributeError:
self._query = self._manager
return self._query.select_related(*relations_for_select)
def __get__(self, inst, clz):
if inst:
raise TypeError(
"Manager can not be accessed from model instance!")
return self.__class__(self._manager, clz)
def __iter__(self):
# when iterating over the main model's queryset,
# each object is wrapped in a Proxy
for item in self._get_query():
yield self._proxy_cls(item)
def get(self, *args, **kwargs):
return self._proxy_cls(self._get_query().get(*args, **kwargs))
def iterator(self):
return iter(self)
def __getitem__(self, *args):
def wrap(obj):
if isinstance(obj, (list, dict)):
return obj
return self._proxy_cls(obj)
result = self._get_query().__getitem__(*args)
if isinstance(result, Iterable):
return map(wrap, result)
else:
return wrap(result)
def __getattr__(self, attr):
# all attributes that are not overridden
# are taken from the base model's Manager
if attr in self.__dict__:
return self.__dict__[attr]
else:
result = getattr(self._manager, attr)
def wrapped(fn):
def inner(*args, **kwargs):
result = fn(*args, **kwargs)
if isinstance(
result, (manager.Manager, query.QuerySet)):
return self.__class__(result, self._proxy_cls)
return result
return inner
if callable(result):
return wrapped(result)
return result
dic['objects'] = WrappingManager(model.objects)
dic['DoesNotExist'] = model.DoesNotExist
dic['MultipleObjectsReturned'] = model.MultipleObjectsReturned
# create the Proxy class
return super(ModelProxyMeta, cls).__new__(cls, name, bases, dic)
model_proxy_metaclass = ModelProxyMeta
# =============================================================================
# ModelProxy
# =============================================================================
class ModelProxy(six.with_metaclass(model_proxy_metaclass, object)):
"""
A proxy object encapsulating several models
(for the case where one model is the main one and the others act as its fields)
"""
model = None
# list of related models to fetch, in the form
# ['relation', 'relation2.relation3']
relations = None
def __init__(self, obj=None):
self.relations = self.relations or []
if obj is None:
def wrap_save_method(child, parent, attr):
old_save = child.save
def inner(*args, **kwargs):
result = old_save(*args, **kwargs)
setattr(parent, attr, child.id)
return result
return inner
# if no object is given, a new one is created
obj = self.model()
# list of objects created while filling in related objects
created_objects = []
# create instances of the related objects (walking the paths in depth)
for path in self.relations:
sub_obj, sub_model = obj, self.model
for item in path.split('.'):
# get the related model
field = ModelOptions(sub_model).get_field(item)
sub_model = get_related(field).parent_model
# the object may already be filled in when paths in
# relations partially overlap; in that case no new object
# is created and the previously created one is reused
try:
existed = getattr(sub_obj, item, None)
except sub_model.DoesNotExist:
existed = None
if existed and existed in created_objects:
sub_obj = existed
continue
# create an empty object
new_sub_obj = sub_model()
created_objects.append(new_sub_obj)
# wrap save() so that xxx_id is set on the parent model
new_sub_obj.save = wrap_save_method(
new_sub_obj, sub_obj, '%s_id' % item)
# the created object is attached to the dependent one
setattr(sub_obj, item, new_sub_obj)
# move one level up
sub_obj = new_sub_obj
self.id = None
else:
self.id = obj.id
setattr(self, self.model.__name__.lower(), obj)
setattr(self, '_object', obj)
# fill the proxy attributes along the given nested relations (xxx.yyy)
for rel in self.relations:
attr = rel.split('.', 1)[0]
setattr(self, attr, getattr(obj, attr, None))
def save(self):
raise NotImplementedError()
def safe_delete(self):
raise NotImplementedError()
|
def letrec(tree, args, *, gen_sym, **kw):
with dyn.let(gen_sym=gen_sym):
return _destructure_and_apply_let(tree, args, _letrec) |
// src/store/channel.ts
export default class {
public interval: number = 1; // in seconds
private audio?: AudioBuffer;
private source?: AudioBufferSourceNode;
public playOneShot = () => {
this.prepareSource().start();
}
public toggle = () => {
if (this.source) {
this.stop();
} else {
this.start();
}
}
public setAudio = (audio: AudioBuffer) => {
this.audio = audio;
}
private start = () => {
this.source = this.prepareSource();
this.source.loop = true;
this.source.loopEnd = this.interval;
this.source.start();
}
private stop = () => {
if (!this.source) {
throw new Error("audio is not played");
}
this.source.stop();
delete this.source;
}
private prepareSource = (): AudioBufferSourceNode => {
if (!this.audio) {
throw new Error("no audio data is set");
}
const context = new AudioContext();
const source = context.createBufferSource();
source.buffer = this.audio;
source.connect(context.destination);
return source;
}
}
|
from heapq import heappop,heappush
N = int(input())
G = [[] for _ in range(N)]
for _ in range(N - 1):
a,b,c = map(int, input().split())
G[a-1].append({"to":b-1, "cost":c})
G[b-1].append({"to":a-1, "cost":c})
Q,K = map(int, input().split())
K -= 1
dist = [float("inf")] * N
dist[K] = 0
que = [(0, K)]
while que:
d,v = heappop(que)
if dist[v] < d: continue
for e in G[v]:
if dist[e["to"]] > d + e["cost"]:
dist[e["to"]] = d + e["cost"]
heappush(que, (dist[e["to"]], e["to"]))
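# dist now holds the shortest distance from K to every node. Each query below is
# answered as dist[x] + dist[y], i.e. the length of the route x -> K -> y
# (presumably the path required by the original problem).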
for _ in range(Q):
x,y = map(int, input().split())
x -= 1; y -= 1
print(dist[x] + dist[y]) |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 8 10:59:00 2021
@author: philippbst
"""
from pinn.general_pinn import GeneralPINN
import torch
from torch.autograd import grad
from pinn.utils_pinn import check_gradientTracking
class NavierLame2DPINN(GeneralPINN):
def evaluateResiduals(self, X):
u_pred_dict, u_deriv_dict = self._forwardWithGrads(X)
Rx, Ry = self._calculateResiduals(u_pred_dict, u_deriv_dict)
return Rx, Ry
def calculate_displacement_gradient_components(self, X):
_, u_deriv_dict = self._forwardWithGrads(X)
return u_deriv_dict.get('u_x'), u_deriv_dict.get('u_y'), u_deriv_dict.get('v_x'), u_deriv_dict.get('v_y')
def calculate_linear_strains(self, X):
u_x, u_y, v_x, v_y = self.calculate_displacement_gradient_components(X)
# Don't create new tensors; use index notation to keep autograd's gradient tracking intact
eps_xx = 0.5 * (u_x + u_x)
eps_xy = 0.5 * (u_y + v_x)
eps_yy = 0.5 * (v_y + v_y)
return eps_xx, eps_xy, eps_yy
def calculate_cauchy_stresses(self, X):
eps_xx, eps_xy, eps_yy = self.calculate_linear_strains(X)
lambd = self.pdeParam.lambd
my = self.pdeParam.my
# Don't create new tensors; use index notation to keep autograd's gradient tracking intact
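# Hooke's law for isotropic linear elasticity: sigma = lambd * tr(eps) * I + 2 * my * eps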
sigma_xx = lambd * (eps_xx + eps_yy) + 2 * my * eps_xx
sigma_yy = lambd * (eps_xx + eps_yy) + 2 * my * eps_yy
sigma_xy = 2 * my * eps_xy
return sigma_xx, sigma_xy, sigma_yy
def _neumannBCLoss(self, X, Y, N = None):
sigma_xx, sigma_xy, sigma_yy = self.calculate_cauchy_stresses(X)
'''
Highly specific implementation of the Neumann BC loss
for the specific problem tackled here in the frequency domain
'''
numEvals = X.shape[0]
MSE_sig_xy = torch.sum((sigma_xy - Y[:,0])**2) / numEvals
MSE_sig_yy = torch.sum((sigma_yy - Y[:,1])**2) / numEvals
loss_BC_Neu = MSE_sig_yy + MSE_sig_xy
'''
TO DO:
Find a way to generalize the implementation and calculate the Neumann BC loss based on the inputs.
Therefore use sigma_xx, sigma_xy and sigma_yy as input Y to enable calculation of any arbitrary Neumann BC.
'''
return loss_BC_Neu
def _calculateResiduals(self, u_pred_dict, u_deriv_dict):
# unpack PDE parameters and partial derivatives
lambd = self.pdeParam.lambd
my = self.pdeParam.my
omega = self.pdeParam.omega
rho = self.pdeParam.rho
u_pred = u_pred_dict.get('u_pred')
v_pred = u_pred_dict.get('v_pred')
u_xx = u_deriv_dict.get('u_xx')
u_yy = u_deriv_dict.get('u_yy')
u_xy = u_deriv_dict.get('u_xy')
v_xx = u_deriv_dict.get('v_xx')
v_yy = u_deriv_dict.get('v_yy')
v_yx = u_deriv_dict.get('v_yx')
''' Navier Lame in frequency domain'''
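# Residual of rho * omega^2 * u + my * Laplacian(u) + (lambd + my) * grad(div u) = 0
# (time-harmonic Navier-Lame equation, assuming no body forces)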
# Residual in x direction
Rx = (rho*(omega**2)*u_pred + my * (u_xx + u_yy) + (lambd + my) * (u_xx + v_yx))
# Residual in y direction
Ry = (rho*(omega**2)*v_pred + my * (v_xx + v_yy) + (lambd + my) * (u_xy + v_yy))
''' <NAME> for static case'''
# # Residual in x direction
# Rx = my * (u_xx + u_yy) + (lambd + my) * (u_xx + v_yx)
# # Residual in y direction
# Ry = my * (v_xx + v_yy) + (lambd + my) * (u_xy + v_yy)
return Rx, Ry
@check_gradientTracking
def _forwardWithGrads(self, X):
# making a prediction for u for the given points X
U_pred = self.model(X)
u_pred = U_pred[:, 0:1]
v_pred = U_pred[:, 1:2]
# Compute gradients of u,v w.r.t. the inputs x,y
# g_u = grad(u_pred.sum(), X, create_graph=True)[0]
# g_v = grad(v_pred.sum(), X, create_graph=True)[0]
g_u = grad(u_pred, X, grad_outputs=torch.ones(u_pred.shape), create_graph=True)[0]
g_v = grad(v_pred, X, grad_outputs=torch.ones(u_pred.shape), create_graph=True)[0]
# compute second-order derivatives of the outputs w.r.t. the inputs x,y
shape = g_u[:, 0:1].shape
gg_u_x = grad(g_u[:, 0:1], X, grad_outputs=torch.ones(shape), create_graph=True)[0]
gg_u_y = grad(g_u[:, 1:2], X, grad_outputs=torch.ones(shape), create_graph=True)[0]
gg_v_x = grad(g_v[:, 0:1], X, grad_outputs=torch.ones(shape), create_graph=True)[0]
gg_v_y = grad(g_v[:, 1:2], X, grad_outputs=torch.ones(shape), create_graph=True)[0]
# Extract the required derivatives
u_x = g_u[:,0:1]
u_y = g_u[:,1:2]
v_x = g_v[:,0:1]
v_y = g_v[:,1:2]
u_xx = gg_u_x[:, 0:1]
u_xy = gg_u_x[:, 1:2]
u_yy = gg_u_y[:, 1:2]
v_xx = gg_v_x[:, 0:1]
v_yx = gg_v_y[:, 0:1]
v_yy = gg_v_y[:, 1:2]
# write model predictions and derivatives into dictionaries to obtain a uniform output
u_pred_collection = [u_pred, v_pred]
u_pred_names = ['u_pred', 'v_pred']
u_deriv_collection = [u_xx, u_yy, v_xx, v_yy, v_yx, u_xy, u_x, u_y, v_x, v_y]
u_deriv_names = ['u_xx', 'u_yy', 'v_xx', 'v_yy', 'v_yx', 'u_xy','u_x', 'u_y', 'v_x', 'v_y']
u_pred_dict = {}
u_deriv_dict = {}
for i, name in enumerate(u_pred_names):
u_pred_dict[name] = u_pred_collection[i]
for i, name in enumerate(u_deriv_names):
u_deriv_dict[name] = u_deriv_collection[i]
return u_pred_dict, u_deriv_dict
def _physicsInformedLoss(self, X):
u_pred_dict, u_deriv_dict = self._forwardWithGrads(X)
u_pred = u_pred_dict.get('u_pred')
v_pred = u_pred_dict.get('v_pred')
numEvals = u_pred.shape[0]
#calculate the residuals
Rx, Ry = self._calculateResiduals(u_pred_dict, u_deriv_dict)
# MSE of Residual in x-direction
MSE_Rx = torch.sum(Rx**2) / numEvals
# MSE of Residual in y direction
MSE_Ry = torch.sum(Ry**2) / numEvals
# Entire PDE residual
loss_CP = MSE_Rx+MSE_Ry
return loss_CP
def _robinBCLoss(self, X, Y, N = None):
raise ("Robin BC not implemented for 2D Navier Lame")
|
package costhistory
import (
"math"
"testing"
"time"
)
const T0 = 1577836800
var expectPerSecond = []float64{
0.00000000000000000000,
0.50196078431372548323,
0.75294117647058822484,
0.87843137254901959565,
0.94117647058823528106,
0.97254901960784312376,
0.98823529411764710062,
0.99607843137254903354,
1.00000000000000000000,
0.49803921568627451677,
0.24705882352941177516,
0.12156862745098039047,
0.05882352941176470507,
0.02745098039215686236,
0.01176470588235294101,
0.00392156862745098034,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
0.00000000000000000000,
}
func TestCostHistory(t *testing.T) {
t0 := time.Unix(T0, 0).UTC()
tN := func(n uint) time.Time {
dur := time.Duration(n) * time.Second
return t0.Add(dur)
}
var nowCounter uint
nowFn := func() time.Time {
now := tN(nowCounter)
nowCounter++
return now
}
h := CostHistory{
NumBuckets: 8,
BucketInterval: 1 * time.Second,
DecayFactor: 0.5,
NowFn: nowFn,
}
h.Init()
data := h.Data()
checkData(t, 0, data, Data{
Now: t0,
Counter: 0,
PerSecond: expectPerSecond[0],
})
for i := uint(1); i < 9; i++ {
h.UpdateRelative(1)
data := h.Data()
checkData(t, i, data, Data{
Now: tN(i),
Counter: uint64(i),
PerSecond: expectPerSecond[i],
})
}
for i := uint(9); i < 17; i++ {
h.Update()
data := h.Data()
checkData(t, i, data, Data{
Now: tN(i),
Counter: 8,
PerSecond: expectPerSecond[i],
})
}
for i := uint(17); i < 25; i++ {
h.Update()
data := h.Data()
checkData(t, i, data, Data{
Now: tN(i),
Counter: 8,
PerSecond: expectPerSecond[i],
})
}
}
func checkData(t *testing.T, index uint, actual Data, expect Data) {
if !actual.Now.Equal(expect.Now) {
t.Errorf(
"[%d]: Now: expected %q, got %q",
index,
expect.Now.Format(time.RFC3339Nano),
actual.Now.Format(time.RFC3339Nano),
)
}
if actual.Counter != expect.Counter {
t.Errorf(
"[%d]: Counter: expected %d, got %d",
index,
expect.Counter,
actual.Counter,
)
}
if math.Abs(actual.PerSecond-expect.PerSecond) >= 1e-18 {
t.Errorf(
"[%d]: PerSecond: expected %.20f, got %.20f",
index,
expect.PerSecond,
actual.PerSecond,
)
}
}
|
import { Component, AfterViewInit, Input, Output, EventEmitter, ElementRef } from '@angular/core';
import { AppService } from '../../services/services';
import { Page, Exercise, Lesson, Asset } from '../../models/models';
@Component({
selector: 'wat-modals',
templateUrl: 'index.html'
})
export class Modals implements AfterViewInit {
_ = _;
_value: object;
_open: boolean;
_modalIndex: number;
rivers = [
"aldan",
"allegheny",
"amazon",
"amur",
"angara",
"araguaia",
"aras",
"argun",
"arkansas",
"aruwimi",
"athabasca",
"ayeyarwady",
"barcoo",
"belaya",
"beni",
"benue",
"bermejo",
"brahmaputra",
"brazos",
"breg",
"caine",
"canadian",
"cauca",
"chambeshi",
"churchill",
"colorado",
"columbia",
"congo",
"cooper",
"danube",
"darling",
"daugava",
"detroit",
"dnieper",
"dniester",
"dvina",
"elbe",
"essequibo",
"euphrates",
"finlay",
"fly",
"fraser",
"ganges",
"georgina",
"gila",
"godavari",
"grande",
"guapay",
"guaviare",
"hooghly",
"ili",
"indigirka",
"indus",
"iriri",
"irtysh",
"ishim",
"jefferson",
"jubba",
"juruena",
"kagera",
"kama",
"kasai",
"katun",
"khatanga",
"khoper",
"kolyma",
"krishna",
"kuskokwim",
"lena",
"lerma",
"limpopo",
"loire",
"lomami",
"mackenzie",
"madeira",
"magdalena",
"mekong",
"mississippi",
"missouri",
"mtkvari",
"murray",
"murrumbidgee",
"narmada",
"naryn",
"nelson",
"niagara",
"niger",
"nile",
"ob",
"ohio",
"oka",
"okavango",
"olenyok",
"olyokma",
"orange",
"orinoco",
"ottawa",
"padma",
"panj",
"paraguay",
"peace",
"pearl",
"pechora",
"pecos",
"pilcomayo",
"platte",
"rhine",
"rocha",
"salween",
"saskatchewan",
"selenge",
"senegal",
"shebelle",
"shire",
"slave",
"snake",
"sukhona",
"sutlej",
"tagus",
"tarim",
"tennessee",
"tigris",
"tobol",
"tocantins",
"tsangpo",
"ubangi",
"ucayali",
"uele",
"ural",
"uruguay",
"vaal",
"vilyuy",
"vistula",
"vitim",
"vltava",
"volga",
"volta",
"vyatka",
"warburton",
"yamuna",
"yangtze",
"yenisei",
"yukon",
"zambezi",
"zeya"
]
stones = [
"granite",
"anorthosite",
"banktop",
"bearl",
"blaxter",
"brownstone",
"caen",
"catcastle",
"chalk",
"charnockite",
"clipsham",
"clunch",
"comblanchien",
"corallian",
"corncockle",
"cotswold",
"czaple",
"diabase",
"diorite",
"dolomite",
"dunhouse",
"elazig",
"emprador",
"limestone",
"flint",
"marble",
"frosterley",
"gabbro",
"gneiss",
"granodiorite",
"haslingden",
"heavitree",
"jerusalem",
"ketton",
"kielce",
"larvikite",
"locharbriggs",
"marmara",
"monzonite",
"mugla",
"travertine",
"onyx",
"peperino",
"przedborowa",
"quartzite",
"sandstone",
"serpentinite",
"slate",
"steatite",
"stromatolites",
"strzegom",
"strzelin",
"syenite",
"szczytna",
"tezontle",
"travertine",
"tuffeau",
"yorkstone"
]
constructor(private service: AppService, private e: ElementRef) {
}
ngAfterViewInit() {
}
@Input()
set value(data) {
$('#collection-modal').find('form').form('clear');
$('#template-modal').find('form').form('clear');
this._value = data;
if (data.action === 'delete') {
/* for delete confirmation */
$('#collection-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'isExactly',
value: data['value'].name,
prompt: 'Please enter the name.'
}
]
}
}
});
$('#template-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'isExactly',
value: data['value'].name,
prompt: 'Please enter the name.'
}
]
}
}
});
} else if (data.action === 'create') {
// use if pre-assigned
$('#collection-modal').find('input[name="name"]').val(data['value'].name);
$('#collection-modal').find('input[name="cname"]').val(data['value'].path);
$('#collection-modal').find('textarea[name="description"]').val(data['value'].description);
$('#template-modal').find('input[name="name"]').val(data['value'].name);
$('#template-modal').find('textarea[name="content"]').val(data['value'].content);
$('#template-modal').find('textarea[name="description"]').val(data['value'].description);
// overwrite with random names
$('#collection-modal').find('input[name="name"]').val([
_.sample(this.rivers),
_.sample(this.stones),
_.random(9) + '' + _.random(9)
].join('-'));
$('#template-modal').find('input[name="name"]').val([
_.sample(this.rivers),
_.sample(this.stones),
_.random(9) + '' + _.random(9)
].join('-'));
$('#collection-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'empty',
prompt: 'Please enter the name.'
}
]
},
cname: {
identifier: 'cname',
rules: [
{
type: 'regExp',
value: /^(\w+(-\w+)*)?$/,
prompt: 'Please use words with only alphanumeric and _ characters (words can be hyphenated).'
}
]
}
}
});
$('#template-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'empty',
prompt: 'Please enter the name.'
}
]
}
}
});
} else if (data.action === 'edit') {
$('#collection-modal').find('input[name="name"]').val(data['value'].name);
$('#collection-modal').find('input[name="cname"]').val(data['value'].path);
$('#collection-modal').find('textarea[name="description"]').val(data['value'].description);
$('#template-modal').find('input[name="name"]').val(data['value'].name);
$('#template-modal').find('textarea[name="content"]').val(data['value'].content);
$('#template-modal').find('textarea[name="description"]').val(data['value'].description);
$('#collection-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'empty',
prompt: 'Please enter the name.'
}
]
},
cname: {
identifier: 'cname',
rules: [
{
type: 'regExp',
value: /^(\w+(-\w+)*)?$/,
prompt: 'Please use words with only alphanumeric and _ characters (words can be hyphenated).'
}
]
}
}
});
$('#template-modal').find('form').form({
fields: {
name: {
identifier: 'name',
rules: [
{
type: 'empty',
prompt: 'Please enter the name.'
}
]
}
}
});
}
$('#collection-modal').find('input[name="name"]').focus((event) => {
$(event.currentTarget).select();
$(event.currentTarget).mouseup((e) => {
e.preventDefault();
});
}).unbind('focus');
$('#template-modal').find('input[name="name"]').focus((event) => {
$(event.currentTarget).select();
$(event.currentTarget).mouseup((e) => {
e.preventDefault();
});
}).unbind('focus');
}
@Input()
set open(t) {
this._open = t;
let $modal = $('#collection-modal');
if (this._value['type'] === 'template') {
$modal = $('#template-modal');
}
if (t) {
$modal.find('.accordion').accordion('close', 0);
$modal.modal({
onHide: () => {
// workaround
this.service.activeItem.openModal = false;
}
}).modal('show');
} else {
$modal.modal('hide');
}
}
submit = () => {
let $modal = $('#collection-modal');
if (this._value['type'] === 'template') {
$modal = $('#template-modal');
}
if (!$modal.find('form').form('is valid')) {
// form is not valid
return;
}
let apiAction = '';
if (this._value['action'] === 'delete') {
apiAction = 'delete';
} else if (this._value['action'] === 'edit') {
apiAction = 'put';
} else if (this._value['action'] === 'create') {
apiAction = 'post';
}
$.api({
action: apiAction + ' ' + this._value['type'],
on: 'now',
method: apiAction,
urlData: apiAction === 'delete' ? { id: this._value['value'].id } : undefined,
data: JSON.stringify({
"id": this._value['value'].id,
"name": $modal.find('input[name="name"]').val(),
"path": $modal.find('input[name="cname"]').val(), // either
"content": $modal.find('textarea[name="content"]').val(), // not both
"description": $modal.find('textarea[name="description"]').val(),
"lesson": this._value['value'].lesson ? this._value['value'].lesson.id : undefined, // either
"exercise": this._value['value'].exercise ? this._value['value'].exercise.id : undefined // not both
}),
contentType: 'application/json',
onResponse: (response) => {
// make some adjustments to response
this.service.activeItem.openModal = false;
if (this._value['action'] === 'delete') {
switch (this._value['type']) {
case 'lesson':
this.service.removeLesson(this._value['value'].id); break;
case 'exercise':
this.service.removeExercise(this._value['value'].id); break;
case 'page':
this.service.removePage(this._value['value'].id); break;
case 'template':
this.service.removeTemplate(this._value['value'].id); break;
}
} else if (this._value['action'] === 'edit') {
this._value['value'].name = response.name;
this._value['value'].path = response.path; // either
this._value['value'].content = response.content; // not both
this._value['value'].description = response.description;
} else if (this._value['action'] === 'create') {
this._value['value'].id = response.id;
this._value['value'].name = response.name;
this._value['value'].path = response.path; // either
this._value['value'].content = response.content; // not both
this._value['value'].description = response.description;
switch (this._value['type']) {
case 'lesson':
this.service.lessons[response.id] = this._value['value'];
this.service.showLesson(this._value['value'].id);
break;
case 'exercise':
this.service.setExercise(this._value['value']);
this.service.showExercise(this._value['value'].id);
break;
case 'page':
this.service.setPage(this._value['value']);
this.service.showPage(this._value['value'].id);
break;
case 'template':
this.service.setTemplate(this._value['value']);
break;
}
}
}
});
}
cancel = () => {
this.service.activeItem.openModal = false;
}
}
|
def analyse_cache_dir(self, jobhandler=None, batchsize=1, **kwargs):
if jobhandler is None:
jobhandler = SequentialJobHandler()
files = glob.glob(os.path.join(self.cache_dir, '*.phy'))
records = []
outfiles = []
dna = self.collection[0].is_dna()
for infile in files:
id_ = fileIO.strip_extensions(infile)
outfile = self.get_result_file(id_)
if not os.path.exists(outfile):
record = Alignment(infile, 'phylip', True)
records.append(record)
outfiles.append(outfile)
if len(records) == 0:
return []
args, to_delete = self.task_interface.scrape_args(records, outfiles=outfiles, **kwargs)
with fileIO.TempFileList(to_delete):
result = jobhandler(self.task_interface.get_task(), args, 'Cache dir analysis', batchsize)
for (out, res) in zip(outfiles, result):
if not os.path.exists(out) and res:
with open(out, 'w') as outfl:
json.dump(res, outfl)
return result |
#!/usr/bin/env stack
-- stack --install-ghc ghci --resolver lts-16 --package free --package functor-combinators --package vinyl
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TemplateHaskell #-}
import Control.Monad.Free.TH
import Control.Monad.Free
-- PipeF as a plain Functor: Free needs a Functor instance, so the earlier GADT
-- encoding (YieldF :: o -> PipeF i o (); AwaitF :: PipeF i o i) was dropped.
data PipeF i o a = YieldF o a
                 | AwaitF (Maybe i -> a)
  deriving Functor
-- makeFree ''PipeF
type Pipe i o = Free (PipeF i o)
comp :: Pipe b c y -> Pipe a b x -> Pipe a c y
comp = \case
  Pure x            -> \_ -> Pure x
  Free (YieldF o x) -> \u -> Free $ YieldF o (comp x u)
  Free (AwaitF f  ) -> \case
    Pure y            -> comp (f Nothing ) (Pure y)
    Free (YieldF o y) -> comp (f (Just o)) y
    Free (AwaitF g  ) -> Free $ AwaitF (comp (Free (AwaitF f)) . g)
(.|) :: Pipe a b x -> Pipe b c y -> Pipe a c y
f .| g = comp g f
main :: IO ()
main = pure ()
|
Response Predictors in Chronic Migraine: Medication Overuse and Depressive Symptoms Negatively Impact Onabotulinumtoxin-A Treatment
Background: Despite numerous studies that have investigated clinical, radiological, and biochemical response predictors, the clinical profile of patients who might benefit from OnabotulinumtoxinA is still missing. The aim of the present study was to identify potential OnabotulinumtoxinA response predictors among several clinical characteristics and to confirm OnabotulinumtoxinA efficacy and safety in chronic migraine (CM) prevention. Methods: The study was conducted at the Headache Center, Neurology Clinic, Spedali Civili Hospital of Brescia. Eighty-four consecutive CM patients were enrolled, with a mean age of 48 years (SD 9.7) and a mean disease duration of 10.1 years (SD 6.6). The mean reported headache-day frequency was 22.5 (SD 5.9) per month, while the mean number of severe headache days was 15.2 (SD 8.9), with a mean monthly medication intake of 33.2 (SD 5.6). The clinical characteristics analyzed as potential response predictors were: gender, disease duration, migraine characteristics (location, side constancy, unilateral autonomic and neurovegetative symptoms), previous prophylactic treatments, add-on therapies, withdrawal therapies, psychiatric comorbidities (anxiety and depressive symptoms), and medication overuse. Results: A significant reduction from baseline to the 3-, 6-, 9-, and 12-month treatment cycles was found in total headache days, high-intensity headache days, and triptan consumption per month. Depressive symptoms and medication overuse negatively predicted OnabotulinumtoxinA outcome. Conclusions: Our results confirm the efficacy and safety of OnabotulinumtoxinA in CM. Among all clinical variables, depressive comorbidity and medication overuse were the only significant response predictors. These findings provide useful insights for patient selection for OnabotulinumtoxinA treatment as, with the introduction of anti-calcitonin gene-related peptide (CGRP) monoclonal antibodies, clinicians will have to judge carefully and tailor treatment among the many therapeutic options now available. Further research is needed to confirm our findings, in particular regarding their therapeutic implications.
BACKGROUND
Chronic migraine (CM), a headache occurring on ≥15 days/month (with migraine characteristics on ≥8 days/month) for at least 3 months (1), affects approximately 1.4-2.2% of adults worldwide (2), causing greater disability than episodic migraine (EM) and significantly impacting quality of life (3). Compared to those with EM, CM sufferers tend to report lower levels of household income and full-time employment and are more likely to be occupationally disabled (4). Moreover, EM and CM are associated with a significant number of systemic and psychiatric comorbidities such as obesity, irritable bowel syndrome and autoimmune, respiratory, vascular, sleep and affective disorders (4)(5)(6)(7)(8)(9). In particular, major depression, anxiety and post-traumatic stress disorder were found to be more frequent in CM than in EM patients, as well as being significant risk factors for migraine chronification (9).
Such an individual, health-related, and economic burden makes safe and effective treatment mandatory. OnabotulinumtoxinA was the first, and in many countries still the only, treatment specifically and selectively approved for the prophylaxis of CM in adults. Its approval, safety and efficacy were based on the results of the Phase III Research Evaluating Migraine Prophylaxis Therapy (PREEMPT) studies (10, 11), two large, randomized, double-blind, placebo-controlled trials, and the recent Chronic migraine OnabotulinuMtoxinA Prolonged Efficacy open Label (COMPEL) study (12), a multicenter, open-label, long-term prospective study. These studies demonstrated that OnabotulinumtoxinA treatment not only significantly reduced the frequency of headache days but also produced highly significant improvements in multiple headache symptom measures and in patients' self-perceived quality of life. Moreover, patients not responding to the first treatment cycle may well respond within up to two subsequent cycles (13). These results were confirmed by several real-life studies (14)(15)(16). Furthermore, since its approval in Italy in 2013, prophylaxis with OnabotulinumtoxinA has been perceived by headache specialists as generally effective and safe, with a high level of compliance with recent recommendations (17). OnabotulinumtoxinA efficacy has been proven even in the context of CM associated with medication overuse, a frequent complication in chronic migraineurs that carries very high treatment failure rates (18,19).
Several studies have been conducted over the past few years searching for imaging, molecular and clinical response predictors. A recent MRI study conducted by Hubbar et al. (20) revealed that OnabotulinumtoxinA responders showed significant cortical thickening in the right primary somatosensory cortex, anterior insula, left superior temporal gyrus, and pars opercularis compared to non-responders, whereas Bumb et al. (21) did not find any significant difference in terms of white matter lesions between responders and non-responders. Increased interictal plasma levels of calcitonin gene-related peptide (CGRP) (22,23), vasoactive intestinal peptide (VIP) (22) and pentraxin 3 (PTX3) (23), all markers of trigeminal and parasympathetic activation, have been associated with better responses to OnabotulinumtoxinA, whereas decreased interictal salivary CGRP levels (24) were found in response to OnabotulinumtoxinA. Clinical variables, being more easily obtainable, have been the most thoroughly investigated potential response predictors. Imploding pain (25), strictly unilateral pain (26,27), unilateral autonomic symptoms (eyelid edema, tearing, nasal congestion, etc.) (27), short disease duration (26,28), pericranial muscle tenderness (29,30), ocular-type headache (31), and younger age (32) have all been associated with better clinical responses, though such findings have not been consistently replicated. Despite the number of studies that have focused on OnabotulinumtoxinA response predictors, the clinical phenotype of patients who might benefit from OnabotulinumtoxinA is still missing.
Abbreviations: CM, chronic migraine; EM, episodic migraine; MIDAS, Migraine Disability Assessment Score; NSAIDs, non-steroidal anti-inflammatory drugs; SF-36, Short Form-36.
The aim of the present study was to identify potential OnabotulinumtoxinA response predictors among several clinical characteristics and confirm OnabotulinumtoxinA efficacy and safety in CM prevention.
Subjects
The study was conducted at the Headache Center-Neurology Clinic at the Spedali Civili Hospital of Brescia. We prospectively evaluated all adults aged 18-65 years with CM, with or without a diagnosis of medication overuse, between January 2015 and October 2018. Diagnosis was made according to the ICHD III beta criteria (33). All patients enrolled had failed to respond to at least two different classes of prophylactic treatments. Exclusion criteria were the following: hypersensitivity to OnabotulinumtoxinA or to any of its excipients, presence of infection at the proposed injection sites, presence of neuromuscular (e.g., myasthenia gravis or Lambert-Eaton syndrome) and/or peripheral motor neuropathic diseases (e.g., amyotrophic lateral sclerosis or motor neuropathy), and pregnancy. We collected data concerning demographics, migraine characteristics (location, side constancy, presence of unilateral autonomic and neurovegetative symptoms), disease duration, previous prophylaxis and withdrawal therapies (within the preceding 6 months), add-on therapies, headache days per month (distinguishing days with mild pain from days with moderate-intense pain), symptomatic drug overuse (triptans, NSAIDs and combination analgesics), and systemic and psychiatric comorbidities (depression and anxiety). Patients were considered positive for depressive symptoms if they were experiencing, at baseline, a depressed mood together with at least three of the following symptoms: diminished interest or pleasure in their daily activities, significant weight loss, fatigue, feelings of worthlessness, and diminished ability to concentrate (33). Patients were considered positive for anxiety symptoms if they were experiencing, at baseline, excessive anxiety and worry, together with at least three of the following symptoms: restlessness, fatigue, impaired concentration, irritability, increased muscle aches or soreness, and difficulty sleeping (33). At each subsequent evaluation, every 3 months, data on the frequency and intensity of headaches, the use of symptomatic drugs, and the occurrence of side effects were gathered from the headache diary. Migraine-induced disability was assessed using the Migraine Disability Assessment Score Questionnaire (MIDAS) and the Short Form-36 (SF-36) at baseline and at 3, 6, 9, and 12 months after treatment onset.
Methods
Patients were injected with OnabotulinumtoxinA according to the PREEMPT protocol, after obtaining their informed consent for treatment with OnabotulinumtoxinA. We administered 155 units intramuscularly, using a 29-gauge needle, as 0.1 ml (5 U) injections at 31 sites around the head and neck, divided across seven specific areas: corrugator 10 U, procerus 5 U, frontalis 20 U, temporalis 40 U, occipitalis 30 U, cervical paraspinal muscle group 20 U, and trapezius 30 U. At the investigator's discretion, up to 40 additional units could be administered into the temporalis, occipitalis and/or trapezius muscles using a follow-the-pain strategy. The decision on any additional doses and locations was based on the patient's report of the usual location or predominant pain and the clinician's best judgement of the potential benefit of additional doses in the specified muscles. The maximum total dose we actually administered was 175 U (an additional 20 U) at 35 sites, according to the patient's pain and/or side localization.
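For illustration only (this snippet is not part of the original protocol), the fixed-site doses listed above can be checked to add up to the totals reported: 155 U over 31 sites at 5 U each, and 175 U over 35 sites when the extra follow-the-pain units are used.

preempt_doses_u = {
    "corrugator": 10, "procerus": 5, "frontalis": 20, "temporalis": 40,
    "occipitalis": 30, "cervical paraspinal": 20, "trapezius": 30,
}
total_units = sum(preempt_doses_u.values())   # 155 U across the fixed sites
sites = total_units // 5                      # 31 injection sites at 5 U (0.1 ml) each
max_administered = total_units + 20           # 175 U at 35 sites in this study
print(total_units, sites, max_administered)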
Statistical Analysis
Statistical analyses were performed with IBM SPSS Statistics 25.0 software for Windows (SPSS Inc., Chicago, IL, USA).
Regarding response predictors, we only analyzed data from patients who had completed at least three treatment cycles. For continuous variables (disease duration, number of previous prophylactic treatments) a linear regression analysis was conducted, whereas for categorical variables (gender, location, side constancy, presence of unilateral autonomic and neurovegetative symptoms, withdrawal therapy, add-on therapies, medication overuse, depressive and anxiety symptoms) chi-square analyses were conducted.
A one-way repeated-measures ANOVA was conducted to test whether there were statistically significant differences in headache days per month (overall and subdivided into days of high and low intensity), analgesic consumption per month (overall and subdivided by type of analgesic, i.e., NSAIDs, triptans or combination analgesics), and MIDAS and SF-36 scores from baseline to 3, 6, 9, and 12 months of treatment. Statistical significance was set at p < 0.05.
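As a minimal illustrative sketch (not part of the paper; the dataframes `visits` and `baseline` and their column names are hypothetical), the two analysis families described above could be run in Python roughly as follows:

import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.stats.anova import AnovaRM

# One-way repeated-measures ANOVA: headache days per month across the five visits.
# `visits` holds one row per patient per visit: columns 'patient', 'visit', 'headache_days'
rm = AnovaRM(data=visits, depvar='headache_days', subject='patient', within=['visit']).fit()
print(rm.anova_table)  # F value and p-value for the within-subject factor

# Chi-square test for a categorical predictor, e.g. depressive symptoms vs. responder status.
# `baseline` holds one row per patient: columns 'depressive_symptoms', 'responder'
table = pd.crosstab(baseline['depressive_symptoms'], baseline['responder'])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)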
RESULTS
Eighty-four consecutive patients were enrolled (73% females), with a mean age of 48 years (SD 9.7) and a mean disease duration (time since CM diagnosis) of 10.1 years (SD 6.6). The mean patient-estimated headache-day frequency was 22.5 days (SD 5.9) per month, while the mean number of days with severe headache (NRS ≥ 6) was 15.2 (SD 8.9). Fifty-five patients (65.5%) displayed medication overuse. Twenty-five patients (28%) had a history of withdrawal therapy in the 6 months preceding OnabotulinumtoxinA treatment. Nineteen patients (21%) were also receiving a CM preventive drug (topiramate, amitriptyline, venlafaxine, paroxetine, propranolol, pizotifen, valproic acid, or pregabalin). The mean monthly medication intake was 33.2 (SD 5.6), and the use of NSAIDs and triptans was preponderant (only nine patients were on combination analgesics). See Table 1 for full baseline demographics and clinical characteristics. Side effects were reported by sixteen patients (19%) and consisted of neck pain (six patients), transient eyelid ptosis (four patients) and shoulder tenderness (two patients). No serious adverse events were reported. Forty-four patients underwent three cycles of OnabotulinumtoxinA treatment, whereas thirty-four completed 12 months of treatment (five cycles). At the end of the third cycle, patients were stratified into three groups according to the reduction in headache days per month: >50% (responders), 30-50% (partial responders) and <30% (non-responders). See Figure 1 for patients' responder rates at 3, 6, 9, and 12 months of treatment.
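A small hypothetical helper (for illustration only; not part of the study code) makes the stratification rule above explicit:

def classify_response(baseline_days, month3_days):
    # Percent reduction in monthly headache days after the third treatment cycle
    reduction = (baseline_days - month3_days) / baseline_days * 100
    if reduction > 50:
        return "responder"
    elif reduction >= 30:
        return "partial responder"
    return "non-responder"

print(classify_response(22.5, 10.0))  # 55.6% reduction -> "responder"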
When comparing baseline characteristics between responders and partial- and non-responders, we found a significantly lower frequency of depressive symptoms and medication overuse in the responders' group (respectively p = 0.002 and p = 0.05; see Tables 2, 3). When comparing clinical and demographic characteristics between patients with or without depressive symptoms, there was a significant difference in terms of analgesic consumption, in agreement with the findings reported above (see Table 4). When conducting the same analysis comparing patients with or without medication overuse, lower MIDAS scores (104.9 ± 66.9 vs. 67.3 ± 51.4, p = 0.01), number of headache days (23.6 ± 5.9 vs. 20.4 ± 5.3, p = 0.01) and high-intensity headache days per month (16.8 ± 9.4 vs. 12.2 ± 7.7, p = 0.01) at baseline were found in patients who did not present with medication overuse (see Table 5). Consistent with the diagnosis, patients without medication overuse also exhibited lower triptan (6.9 ± 6.07 vs. 15.9 ± 18.3, p = 0.01) and overall analgesic (17.6 ± 10.8 vs. 41.1 ± 37.04, p = 0.002) consumption compared to patients with medication overuse. A statistically significant reduction from baseline to the 3-, 6-, 9-, and 12-month treatment cycles in terms of total headache days (22.8 ± 5.8 at baseline vs. 16.3 ± 7.8 at 3 months vs. 15.9 ± 8.1 at 6
DISCUSSION
Our study confirms OnabotulinumtoxinA safety and efficacy in CM prophylaxis. Following three treatment cycles, 47 and 20% of patients were classified as responders and partial responders, respectively. A significant improvement in frequency, severity, and triptan consumption occurred from the first treatment cycle and was maintained throughout the study. These findings, in particular the response percentages and the selective reduction of high-intensity headache days and triptan consumption, are in line with results from previous, larger studies (10)(11)(12).
Although numerous studies have shown OnabotulinumtoxinA to be equally effective in the presence of medication overuse (18,19) and depressive comorbidities (34,35), in our cohort these conditions were negative predictors of treatment response. In fact, both medication overuse and depressive symptoms were less frequent in responders compared to partial and non-responders. This does not imply that OnabotulinumtoxinA was not effective in treating CM in these subgroups of patients (both medication overuse and depressive symptoms were highly represented in our cohort), but they were indeed associated with a poorer outcome. In the present study, the subgroup without medication overuse displayed significantly lower baseline MIDAS scores, number of headache days, and high-intensity headache days compared to those who did present medication overuse. On the other hand, when comparing patients with or without depressive comorbidity, a lower analgesic consumption was found in the latter.
Given the high comorbidity between CM and affective disorders and their association with negative treatment outcomes, we tested whether the presence of anxiety and depressive symptoms could predict OnabotulinumtoxinA efficacy. Indeed, we found a lower frequency of depressive symptoms in the responders subgroup. Moreover, patients without depressive symptoms not only had a higher probability of responding to OnabotulinumtoxinA, but also presented at baseline with a lower analgesic intake in the absence of a lower headache frequency compared to the other group.
Mood and anxiety disorders are two to ten times more frequent in migraineurs than in the general population, especially in chronic migraineurs (5). Based on current evidence, the relationship between migraine and psychiatric comorbidities seems bidirectional, with each condition increasing the incidence of the other (36). The recognition of these comorbidities is essential for several reasons, i.e., diagnostic vigilance, treatment options, and implications regarding outcomes and adherence. Several pathophysiological mechanisms have been proposed to explain this comorbidity. These include genetic factors, monoamine dysfunction, and ovarian hormone and hypothalamic-pituitary-adrenal axis dysregulation (36). Serotoninergic deficits are implicated in the pathogenesis of both migraine and depression. Reduced serotonin levels have been documented during migraine attacks, and serotonin depletion has been found to enhance cortical spreading depression-induced trigeminal nociception by increasing the cortical excitability and the sensitivity of the trigeminal nociceptive system (37). Although less consistently, genetic studies also support a plausible role of the dopaminergic system in migraine and depression, with different polymorphisms in the dopamine transporter and D2-D4 receptors being more frequent in subjects suffering from both migraine and depression compared to the general population (38). Dopaminergic dysfunction might be involved in the activation and sensitization of the trigeminovascular system (38). Twin and family studies demonstrated that around 20% of the variance in migraine and depression is due to shared genetics (39). Moreover, it has been found that migraineurs with comorbid depression display smaller total brain volumes than patients suffering from either migraine alone or depression alone (40). Taken together, these findings suggest a significant shared background between migraine and depression, and it has been proposed that migraine with and migraine without comorbid depression might represent two distinct clinical and biological phenotypes (41, 42). Given our results, the "pure migraine" phenotype demonstrated a better response pattern to OnabotulinumtoxinA. Why would that be the case? Depression, along with other factors, is characterized by a series of neurotransmitter events and repeated activation of brain areas that lead to a state of central sensitization. Chronic migraine is equally characterized by central sensitization, with migraine chronification being significantly influenced by medication overuse and depression. Thus, it seems plausible that patients presenting all these conditions might exhibit a well-consolidated and more severe state of central sensitization, making treatment challenging, independent of disease duration or headache frequency.
To our knowledge, three previous studies have assessed OnabotulinumtoxinA in CM with comorbid depression (32,34,43). Boudreau et al. (34) found a significant improvement in both the number of headache days and depressive symptoms following 24 weeks of treatment, and a recent study by Blumenfeld et al. (35) found a significant improvement in depressive symptoms in the overall cohort and an even better outcome in responders compared to non-responders. However, these studies were not designed to analyze response predictors; thus, we do not know whether responders exhibited a higher or lower level of depressive symptomatology compared to non-responders. Disco et al. (32), in line with our results, also found depression and anxiety disorders to be associated with a trend toward lower responsiveness at the limit of statistical significance.
In terms of treatment, our results suggest that, in the everyday setting, patients displaying significant depressive traits might obtain a more beneficial outcome from other oral therapeutics (e.g., amitriptyline or topiramate), alone or in association with OnabotulinumtoxinA, and that, in the presence of medication overuse, withdrawal treatment preceding OnabotulinumtoxinA might have long-term benefits. A more stringent selection of patients presenting a "pure migraine" phenotype could improve the identification of OnabotulinumtoxinA responders, allowing a more tailored treatment for CM, a double win for both patients and clinicians.
This study has several potential limitations. The first is the small sample size: although our efficacy results are in agreement with larger, multi-center studies such as PREEMPT and COMPEL, conclusions regarding response predictors need further replication. Secondly, depressive and anxiety symptoms were assessed qualitatively, i.e., no validated scales were used, although they were collected in the form of open questions based on the diagnostic items of the DSM-V (43).
DATA AVAILABILITY
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.
ETHICS STATEMENT
The research protocol was approved by the Ethics Committee of the Brescia Hospital, Brescia, Italy. Written informed consent was obtained from all participants.
AUTHOR CONTRIBUTIONS
FS: study conception and design, acquisition of data, analysis and interpretation of data, and drafting of manuscript. SC: acquisition of data, analysis and interpretation of data, and drafting of manuscript. PL: acquisition of data and critical revision. RR: study conception and design, acquisition of data, drafting of manuscript, and critical revision. AP: drafting of manuscript and critical revision. |
from math import ceil, sqrt

# For each x, decide whether x can be written as a^3 + b^3 with positive integers a, b.
# If x = a^3 + b^3, then m = a + b divides x, x/m = m^2 - 3ab, and m must lie between
# x^(1/3) and (4x)^(1/3); a and b are then the integer roots of z^2 - m*z + ab = 0.
t1 = int(input())
for z in range(t1):
    x = int(input())
    low = ceil(x**(1/3))
    high = int(round((4*x)**(1/3)))
    check = False
    for m in range(low, high + 1):
        if x % m == 0:
            lala = m**2 - x // m          # equals 3*a*b when x = a^3 + b^3 and m = a + b
            if lala % 3 == 0 and lala > 0:
                t = sqrt(m**2 - 4*(lala//3))   # discriminant of z^2 - m*z + a*b
                if t == int(t):
                    check = True
    if check:
        print("YES")
    else:
        print("NO")
|
Resilience in Competitive Athletes With Spinal Cord Injury
Individuals who experience loss of their physical abilities often face the challenges of adapting to a new way of life. Past research has shown that sport participation can assist the physical and psychological adaptation to acquired physical disabilities. The purposes of our study were to examine the following: (a) the resilience process of sport participants with acquired spinal cord injury, and (b) the role of sport participation in the resilience process. We conducted semistructured phenomenological interviews with 12 male quadriplegic wheelchair rugby players. Results show that the development of resilience is a multifactorial process involving pre-existing factors and pre-adversity experiences, disturbance/disturbing emotions, various types and sources of social support, special opportunities and experiences, various behavioral and cognitive coping strategies, motivation to adapt to changes, and learned attributes or gains from the resilience process. We discuss implications for future research and practice. |
/**
 * The root coordinator for the bottom toolbar. It has two subcomponents: the browsing mode
 * bottom toolbar and the tab switcher mode bottom toolbar.
 */
public class BottomToolbarCoordinator {
/** The browsing mode bottom toolbar component */
private final BrowsingModeBottomToolbarCoordinator mBrowsingModeCoordinator;
/** The tab switcher mode bottom toolbar component */
private TabSwitcherBottomToolbarCoordinator mTabSwitcherModeCoordinator;
/** The tab switcher mode bottom toolbar stub that will be inflated when native is ready. */
private final ViewStub mTabSwitcherModeStub;
/** A provider that notifies components when the theme color changes.*/
private final BottomToolbarThemeColorProvider mBottomToolbarThemeColorProvider;
/**
* Build the coordinator that manages the bottom toolbar.
* @param fullscreenManager A {@link ChromeFullscreenManager} to update the bottom controls
* height for the renderer.
* @param stub The bottom toolbar {@link ViewStub} to inflate.
* @param tabProvider The {@link ActivityTabProvider} used for making the IPH.
* @param homeButtonListener The {@link OnClickListener} for the home button.
* @param searchAcceleratorListener The {@link OnClickListener} for the search accelerator.
* @param shareButtonListener The {@link OnClickListener} for the share button.
*/
public BottomToolbarCoordinator(ChromeFullscreenManager fullscreenManager, ViewStub stub,
ActivityTabProvider tabProvider, OnClickListener homeButtonListener,
OnClickListener searchAcceleratorListener, OnClickListener shareButtonListener) {
final View root = stub.inflate();
mBrowsingModeCoordinator = new BrowsingModeBottomToolbarCoordinator(root, fullscreenManager,
tabProvider, homeButtonListener, searchAcceleratorListener, shareButtonListener);
mTabSwitcherModeStub = root.findViewById(R.id.bottom_toolbar_tab_switcher_mode_stub);
mBottomToolbarThemeColorProvider = new BottomToolbarThemeColorProvider();
}
/**
* Initialize the bottom toolbar with the components that had native initialization
* dependencies.
* <p>
* Calling this must occur after the native library has completely loaded.
* @param resourceManager A {@link ResourceManager} for loading textures into the compositor.
* @param layoutManager A {@link LayoutManager} to attach overlays to.
* @param tabSwitcherListener An {@link OnClickListener} that is triggered when the
* tab switcher button is clicked.
* @param newTabClickListener An {@link OnClickListener} that is triggered when the
* new tab button is clicked.
* @param closeTabsClickListener An {@link OnClickListener} that is triggered when the
* close tabs button is clicked.
* @param menuButtonHelper An {@link AppMenuButtonHelper} that is triggered when the
* menu button is clicked.
* @param tabModelSelector A {@link TabModelSelector} that incognito toggle tab layout uses to
switch between normal and incognito tabs.
* @param overviewModeBehavior The overview mode manager.
* @param windowAndroid A {@link WindowAndroid} for watching keyboard visibility events.
* @param tabCountProvider Updates the tab count number in the tab switcher button and in the
* incognito toggle tab layout.
* @param incognitoStateProvider Notifies components when incognito mode is entered or exited.
*/
public void initializeWithNative(ResourceManager resourceManager, LayoutManager layoutManager,
OnClickListener tabSwitcherListener, OnClickListener newTabClickListener,
OnClickListener closeTabsClickListener, AppMenuButtonHelper menuButtonHelper,
TabModelSelector tabModelSelector, OverviewModeBehavior overviewModeBehavior,
WindowAndroid windowAndroid, TabCountProvider tabCountProvider,
IncognitoStateProvider incognitoStateProvider) {
mBottomToolbarThemeColorProvider.setIncognitoStateProvider(incognitoStateProvider);
mBottomToolbarThemeColorProvider.setOverviewModeBehavior(overviewModeBehavior);
mBrowsingModeCoordinator.initializeWithNative(resourceManager, layoutManager,
tabSwitcherListener, menuButtonHelper, overviewModeBehavior, windowAndroid,
tabCountProvider, mBottomToolbarThemeColorProvider, tabModelSelector);
mTabSwitcherModeCoordinator = new TabSwitcherBottomToolbarCoordinator(mTabSwitcherModeStub,
incognitoStateProvider, mBottomToolbarThemeColorProvider, newTabClickListener,
closeTabsClickListener, menuButtonHelper, tabModelSelector, overviewModeBehavior,
tabCountProvider);
}
/**
* Show the update badge over the bottom toolbar's app menu.
*/
public void showAppMenuUpdateBadge() {
mBrowsingModeCoordinator.showAppMenuUpdateBadge();
}
/**
* Remove the update badge.
*/
public void removeAppMenuUpdateBadge() {
mBrowsingModeCoordinator.removeAppMenuUpdateBadge();
}
/**
* @return Whether the update badge is showing.
*/
public boolean isShowingAppMenuUpdateBadge() {
return mBrowsingModeCoordinator.isShowingAppMenuUpdateBadge();
}
/**
* @param layout The {@link ToolbarSwipeLayout} that the bottom toolbar will hook into. This
* allows the bottom toolbar to provide the layout with scene layers with the
* bottom toolbar's texture.
*/
public void setToolbarSwipeLayout(ToolbarSwipeLayout layout) {
mBrowsingModeCoordinator.setToolbarSwipeLayout(layout);
}
/**
* @return The wrapper for the app menu button.
*/
public MenuButton getMenuButtonWrapper() {
if (mBrowsingModeCoordinator.isVisible()) {
return mBrowsingModeCoordinator.getMenuButton();
}
if (mTabSwitcherModeCoordinator != null) {
return mTabSwitcherModeCoordinator.getMenuButton();
}
return null;
}
/**
* Clean up any state when the bottom toolbar is destroyed.
*/
public void destroy() {
mBrowsingModeCoordinator.destroy();
if (mTabSwitcherModeCoordinator != null) {
mTabSwitcherModeCoordinator.destroy();
mTabSwitcherModeCoordinator = null;
}
mBottomToolbarThemeColorProvider.destroy();
}
} |
import Link from "next/link";
interface Props {
href?: string;
msg?: string;
}
export function BackLink({ href, msg }: Props) {
return (
<div style={{ margin: "15px 0", fontWeight: "bold" }}>
<Link href={href ? href : "/"}>
<a>← {msg ? msg : "Back to home"}</a>
</Link>
</div>
);
}
export default BackLink;
|
/**
* Writes a new element with the given value and no attribute.
* If the given value is null, then this method does nothing.
*
* @param localName local name of the tag to write.
* @param value text to write inside the element.
* @throws XMLStreamException if the underlying STAX writer raised an error.
*/
protected final void writeSingleValue(final String localName, final Object value) throws XMLStreamException {
if (value != null) {
writer.writeStartElement(localName);
writer.writeCharacters(value.toString());
writer.writeEndElement();
}
} |
/*
** Determine the current size of a file in bytes
*/
int
unixFileSize(sqlite3_file* id, i64* pSize)
{
unixFile* p = (unixFile*)id;
#if TKEYVFS_TRACE
fprintf(stderr, "Begin unixFileSize ...\n");
if (((unixFile*)id)->zPath) {
fprintf(stderr, "filename: %s\n", ((unixFile*)id)->zPath);
}
#endif
*pSize = p->fileSize;
if (*pSize == 1) {
*pSize = 0;
}
#if TKEYVFS_TRACE
fprintf(stderr, "End unixFileSize ...\n");
#endif
return SQLITE_OK;
} |
time = 2
data = [list(map(int, input().split())) for i in range(time)]  # data[0] holds n and x, data[1] holds L1..Ln
n, x = data[0]
y = [0]
count = 1
for i in range(len(data[1])):
    y.append(y[i] + data[1][i])  # prefix sum of the lengths
    if y[i + 1] > x:
        break
    count += 1
print(count)
// RegisterRequest enqueues a pull request event so it can be processed later
func RegisterRequest(request *http.Request, event *github.PullRequestEvent) {
pre := &PullRequestEvent{
Request: request,
Event: event,
}
queuemutex.Lock()
queue = append(queue, pre)
queuemutex.Unlock()
} |
#include <iostream>
#include <string>

int main()
{
    /*
        Array - Collection of data
        Array's best friend is loops
        2 Properties - Index, size
        number is the name of the array
        size is 10
        it can store up to 10 integers only
    */
    // This is a declaration of an array; you must give it a size (or an initializer)
    int number[10];
    // int array[];  // error: an array declaration needs a size or an initializer

    // Array initialization
    int num[4] = {1, 2, 3, 4};
    std::string colors[] = {"yellow", "red", "blue"};

    // Looping over an array by index
    for (int i = 0; i < 4; ++i)
        std::cout << num[i] << ' ';
    std::cout << '\n';

    // Array assignment
    // You can never assign one array to another after declaration (e.g. num = other; will not compile)
}
|
# -*- coding: utf-8 -*-
# Header ...
import os, logging
class Logger(object):
def __init__(self, logger=None, name="log", log_dir=None):
self.logger = None
self.name = name
self.log_dir = log_dir
self.config_logger(logger)
    def config_logger(self, logger):
        if logger is not None:
            self.logger = logger
        elif self.log_dir is not None:  # Config the default logger
            logger = logging.getLogger(__name__)
            logger.setLevel(level=logging.INFO)
            # Write the log file into the requested directory
            handler = logging.FileHandler(os.path.join(self.log_dir, self.name + ".log"))
            handler.setLevel(logging.INFO)
            formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
            handler.setFormatter(formatter)
            logger.addHandler(handler)
            self.logger = logger
def info(self, message, is_write=True):
if is_write and self.logger is not None: self.logger.info(message)
print("INFO - " + message)
def debug(self, message, is_write=True):
if is_write and self.logger is not None: self.logger.debug(message)
print("DEBUG - " + message)
def warning(self, message, is_write=True):
if is_write and self.logger is not None: self.logger.warning(message)
print("WARNING - " + message)
def critical(self, message, is_write=True):
if is_write and self.logger is not None: self.logger.critical(message)
print("CRITICAL - " + message)
def error(self, message, error_type=Exception, is_write=True):
if is_write and self.logger is not None: self.logger.error(message)
raise error_type(message)
if __name__ == "__main__":
logger = Logger(name="log", log_dir=os.getcwd())
logger.info("Test ...")
|
/**
 * Try to start the build agent and block until it is up and running.
 *
 * @param host the host the agent binds to
 * @param port the port the agent binds to
 * @param bindPath the path the agent binds to
 * @param logFolder optional folder where the agent writes its log files
 */
public static void startServer(String host, int port, String bindPath, Optional<Path> logFolder) {
try {
IoLoggerName[] primaryLoggers = { IoLoggerName.FILE };
Options options = new Options(host, port, bindPath, true, false, 3, 100, "");
Map<String, String> mdcMap = new HashMap<>();
mdcMap.put("ctx", RandomUtils.randString(6));
buildAgent = new BuildAgentServer(logFolder, Optional.empty(), primaryLoggers, options, mdcMap);
log.info("Server started.");
} catch (BuildAgentException e) {
throw new RuntimeException("Cannot start build agent.", e);
}
runningPort.set(buildAgent.getPort());
} |
from django.contrib import admin
from .models import (Calendar, Course, Exam_timetable, Grades, Holiday,
Curriculum_Instructor, Meeting, Student, Student_attendance, Curriculum,
Timetable)
class CurriculumAdmin(admin.ModelAdmin):
model = Curriculum
search_fields = ('course_code',)
class CourseAdmin(admin.ModelAdmin):
model = Course
search_fields = ('course_name',)
class StudentAdmin(admin.ModelAdmin):
model = Student
search_fields = ('id__user__username',)
class Curriculum_InstructorAdmin(admin.ModelAdmin):
model = Curriculum_Instructor
search_fields = ('curriculum_id__course_code',)
admin.site.register(Student,StudentAdmin)
admin.site.register(Course,CourseAdmin)
admin.site.register(Curriculum_Instructor,Curriculum_InstructorAdmin)
admin.site.register(Meeting)
admin.site.register(Exam_timetable)
admin.site.register(Timetable)
admin.site.register(Student_attendance)
admin.site.register(Grades)
admin.site.register(Calendar)
admin.site.register(Holiday)
admin.site.register(Curriculum,CurriculumAdmin)
|
package com.apachat.swipereveallayout.core.interfaces;
import com.apachat.swipereveallayout.core.SwipeLayout;
public interface Swipe {
void onClosed(SwipeLayout view);
void onOpened(SwipeLayout view);
void onSlide(SwipeLayout view, float slideOffset);
} |
PORTLAND, Ore. (Reuters) - Record drought on the U.S. West Coast has exposed the ruins of an Oregon hamlet once submerged under the waters of a man-made reservoir, offering a rare opportunity for an archaeological excavation, a U.S. Bureau of Reclamation official said on Thursday.
The tiny community of Klamath Junction was once home to two gas stations and a cluster of homes and other buildings that date back to the 1920s, but its residents were relocated and the structures inundated as part of a 1960 irrigation project that extended a reservoir known as Emigrant Lake.
“We want to determine if there’s historic significance at the site,” including whether to add the site to the National Register of Historic Places, said Douglas DeFlitch, an Oregon field office manager for the bureau. “One man’s treasure is another man’s trash.”
California has been suffering under its worst drought on record while swaths of Oregon and Washington state to the north were also seeing abnormally dry conditions, which have brought a busy wildfire season and prompted efforts to limit water usage.
This year’s drought has drained Emigrant Lake, near Ashland in southern Oregon, of about 90 percent of its water, leaving boat ramps dry, turning the lake bed into a mucky plain and revealing building foundations, debris and scattered tools at the site, DeFlitch said.
Oregon law requires sites more than 50 years old to be assessed for historical significance and any health hazards, such as oil leaks, before volunteers can be called in to help clear remains, DeFlitch said.
DeFlitch said he expects to have the results of the archaeological and hazardous materials reviews within the month.
Klamath Junction may have been partially exposed during a previous drought in 1994, but Bureau of Reclamation officials were not able to inspect the site at the time, he said.
package cn.onlysoft.smarthouseweb.server.model;
public class Packet {
private String workFlowId;
private int type;
private String gateId;
private String content;
public String getWorkFlowId() {
return workFlowId;
}
public void setWorkFlowId(String workFlowId) {
this.workFlowId = workFlowId;
}
public String getGateId() {
return gateId;
}
public void setGateId(String gateId) {
this.gateId = gateId;
}
public int getType() {
return type;
}
public void setType(int type) {
this.type = type;
}
public String getContent() {
return content;
}
public void setContent(String content) {
this.content = content;
}
}
|
So I love trying different snacks from other countries, so this exchange seemed right up my alley and it definitely delivered!
I got some pretty interesting things called OMG's which have graham crackers, which I've never tried. I actually opened this first and they were surprisingly good! Quite moreish too, and I was originally aiming to only have one item from the exchange a week, but it seems I don't have many left...
I got some gourmet popcorn with maple syrup (of course), which sounds pretty nice; I'll save that for the next trip to the cinema.
Another snack I got is these pizza flavored goldfish. Sounds quite odd but it actually makes me really want to try it! Definitely sounds quite tasty!
I also got some fruit roll ups which I've heard quite a lot about, but we do kinda get these in the form of Winders, so it'll be interesting to see how they differ.
And last but not least, PEEPS!! My eyes actually lit up when I saw those. I don't know why but I've always wanted to try them as I've seen them in quite a few shows. I realize they are just marshmallows but it'll be pretty cool to finally try them!
So thank you strych9! You really made my day after quite a crappy day :D
#include <onnx.h>
struct operator_pdata_t {
struct onnx_graph_t * else_branch;
struct onnx_graph_t * then_branch;
};
static int If_init(struct onnx_node_t * n)
{
struct operator_pdata_t * pdat;
if((n->ninput == 1) && (n->noutput >= 1))
{
pdat = malloc(sizeof(struct operator_pdata_t));
if(pdat)
{
pdat->else_branch = onnx_graph_alloc(n->ctx, onnx_attribute_read_graph(n, "else_branch", NULL));
pdat->then_branch = onnx_graph_alloc(n->ctx, onnx_attribute_read_graph(n, "then_branch", NULL));
if(!pdat->else_branch || !pdat->then_branch)
{
if(pdat->else_branch)
onnx_graph_free(pdat->else_branch);
if(pdat->then_branch)
onnx_graph_free(pdat->then_branch);
free(pdat);
return 0;
}
n->priv = pdat;
return 1;
}
}
return 0;
}
static int If_exit(struct onnx_node_t * n)
{
struct operator_pdata_t * pdat = (struct operator_pdata_t *)n->priv;
if(pdat)
{
if(pdat->else_branch)
onnx_graph_free(pdat->else_branch);
if(pdat->then_branch)
onnx_graph_free(pdat->then_branch);
free(pdat);
}
return 1;
}
static int If_reshape(struct onnx_node_t * n)
{
struct operator_pdata_t * pdat = (struct operator_pdata_t *)n->priv;
struct onnx_tensor_t * x = n->inputs[0];
uint8_t * px = (uint8_t *)x->datas;
struct onnx_graph_t * g;
struct onnx_node_t * t;
int i;
if(px[0])
g = pdat->then_branch;
else
g = pdat->else_branch;
if(g->nlen > 0)
{
for(i = 0; i < g->nlen; i++)
{
t = &g->nodes[i];
t->reshape(t);
}
if(t)
{
for(i = 0; i < min(t->noutput, n->noutput); i++)
{
struct onnx_tensor_t * a = t->outputs[i];
struct onnx_tensor_t * b = n->outputs[i];
onnx_tensor_reshape_identity(b, a, a->type);
}
}
}
return 1;
}
static void If_operator(struct onnx_node_t * n)
{
struct operator_pdata_t * pdat = (struct operator_pdata_t *)n->priv;
struct onnx_tensor_t * x = n->inputs[0];
uint8_t * px = (uint8_t *)x->datas;
struct onnx_graph_t * g;
struct onnx_node_t * t;
int i;
if(px[0])
g = pdat->then_branch;
else
g = pdat->else_branch;
if(g->nlen > 0)
{
for(i = 0; i < g->nlen; i++)
{
t = &g->nodes[i];
t->operator(t);
}
if(t)
{
for(i = 0; i < min(t->noutput, n->noutput); i++)
{
struct onnx_tensor_t * a = t->outputs[i];
struct onnx_tensor_t * b = n->outputs[i];
if(a->type == ONNX_TENSOR_TYPE_STRING) /* copy by the output tensor's element type, not the bool condition's */
{
char ** pa = (char **)a->datas;
char ** pb = (char **)b->datas;
for(size_t o = 0; o < b->ndata; o++)
{
if(pb[o])
free(pb[o]);
pb[o] = strdup(pa[o]);
}
}
else
{
memcpy(b->datas, a->datas, a->ndata * onnx_tensor_type_sizeof(a->type));
}
}
}
}
}
void resolver_default_op_If(struct onnx_node_t * n)
{
if(n->opset >= 13)
{
n->init = If_init;
n->exit = If_exit;
n->reshape = If_reshape;
n->operator = If_operator;
}
else if(n->opset >= 11)
{
n->init = If_init;
n->exit = If_exit;
n->reshape = If_reshape;
n->operator = If_operator;
}
else if(n->opset >= 1)
{
n->init = If_init;
n->exit = If_exit;
n->reshape = If_reshape;
n->operator = If_operator;
}
}
|
// External
import { combineReducers } from 'redux';
// Types
import {
ADD_NOTES,
SET_USERNAME,
SET_ONLINE,
SET_GAME_TAB_HIGHLIGHT,
UPDATE_CHAT_HIGHLIGHT,
SET_MESSAGE_DELAY,
UPDATE_STYLE,
SET_MISSION_HIGHLIGHT,
// eslint-disable-next-line no-unused-vars
ActionTypes,
} from './actions';
// Reducers
function notes(state = '', action: ActionTypes): string {
switch (action.type) {
case ADD_NOTES:
return action.text;
default:
return state;
}
}
function username(state = '', action: ActionTypes): string {
switch (action.type) {
case SET_USERNAME:
return action.text;
default:
return state;
}
}
function online(state: boolean = false, action: ActionTypes): boolean {
switch (action.type) {
case SET_ONLINE:
return action.value;
default:
return state;
}
}
function highlighted(
  state: boolean[] = [false, false, false, false, false],
  action: ActionTypes
): boolean[] {
  switch (action.type) {
    case SET_GAME_TAB_HIGHLIGHT: {
      // Copy the array so the reducer stays pure and subscribers see a new reference
      const newState = [...state];
      newState[action.index] = action.value;
      return newState;
    }
    default:
      return state;
  }
}
function chatHighlights(
state: { [key: string]: string } = {},
action: ActionTypes
): { [key: string]: string } {
switch (action.type) {
case UPDATE_CHAT_HIGHLIGHT:
// eslint-disable-next-line no-case-declarations
const newState = { ...state };
newState[action.player] = action.color;
return newState;
default:
return state;
}
}
function messageDelay(state: number[] = [0, 0, 0, 0, 0], action: ActionTypes): number[] {
  switch (action.type) {
    case SET_MESSAGE_DELAY:
      // Append the new timestamp and drop the oldest one without mutating the previous state
      return [...state.slice(1), action.timestamp];
    default:
      return state;
  }
}
function missionHighlight(
state: { mission: number; round: number } = { mission: -1, round: -1 },
action: ActionTypes
): { mission: number; round: number } {
switch (action.type) {
case SET_MISSION_HIGHLIGHT:
return {
mission: action.mission,
round: action.round,
};
default:
return state;
}
}
/* Set the same starting value on profile */
function style(
state: any = {
playArea: 1,
playTabs: 2,
playFontSize: 12,
avatarSize: 75,
avatarStyle: true,
themeLight: false,
coloredNames: true,
numberOfMessages: 5,
},
action: ActionTypes
): any {
switch (action.type) {
    case UPDATE_STYLE:
      return action.style;
default:
return state;
}
}
export const rootReducer = combineReducers({
notes,
username,
online,
highlighted,
chatHighlights,
messageDelay,
style,
missionHighlight,
});
export type rootType = ReturnType<typeof rootReducer>;
|
//
// Created by jcfei on 18-9-9.
//
#include "pinv.h"
#include "cpd.h"
namespace TensorLet_decomposition{
template<class datatype>
cp_format<datatype> cp_als(Tensor3D<datatype> &a, int r, int max_iter = 500, double tolerance = 1e-6) {
if( r == 0 ){
printf("CP decomposition rank cannot be zero.");
exit(1);
}
MKL_INT *shape = a.size(); //dimension
MKL_INT n1 = shape[0]; MKL_INT n2 = shape[1]; MKL_INT n3 = shape[2];
double norm_a = cblas_dnrm2(n1 * n2 * n3, a.pointer, 1);
double tolerance_times_norm_a = (1 - tolerance) * norm_a;
/*****************************
******* Allocate memory ******
******************************/
// A, B, C are stored col-major; Ct is the same matrix as C but stored row-major (i.e., C^T in col-major layout)
datatype* A = (datatype*)mkl_malloc(n1 * r * sizeof(datatype), 64);
datatype* B = (datatype*)mkl_malloc(n2 * r * sizeof(datatype), 64);
datatype* C = (datatype*)mkl_malloc(n3 * r * sizeof(datatype), 64);
datatype* Ct = (datatype*)mkl_malloc(n3 * r * sizeof(datatype), 64);
datatype* lamda = (datatype*)mkl_malloc(r * sizeof(datatype), 64);
if( A == NULL || B == NULL || C == NULL || Ct == NULL || lamda == NULL ){
printf("Cannot allocate enough memory for A, B, C, and lamda.");
exit(1);
}
datatype* at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //A^t * A
datatype* bt_times_b = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //B^t * B
datatype* ct_times_c = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //C^t * C
datatype* ct_times_c_times_bt_times_b = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // c^t * c * b^t *b
datatype* ct_times_c_times_at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // c^t * c * a^t *a
datatype* bt_times_b_times_at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // b^t * b * a^t *a
if( at_times_a == NULL || bt_times_b == NULL || ct_times_c == NULL
|| ct_times_c_times_at_times_a == NULL || ct_times_c_times_bt_times_b == NULL
|| bt_times_b_times_at_times_a == NULL ){
printf("Cannot allocate enough memory for A * A^t.");
exit(1);
}
/****************************
******* randomization *******
*****************************/
MKL_INT status[3]; // random state
VSLStreamStatePtr stream;
// The seed need to test
srand((unsigned)time(NULL));
MKL_INT SEED = rand();
vslNewStream(&stream,VSL_BRNG_MCG31, SEED);
status[0] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n1 * r, A, 0, 1);
status[1] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n2 * r, B, 0, 1);
status[2] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n3 * r, C, 0, 1);
vslDeleteStream(&stream);
if( status[0] + status[1] +status[2] != 0){
printf("Random initialization failed for A, B, C.");
exit(1);
}
/****************************
*********** Update **********
*****************************/
int turn = 0;
while(turn < max_iter){
// double normA = cblas_dnrm2(n1*r, A, 1);
// double normB = cblas_dnrm2(n2*r, B, 1);
// double normC = cblas_dnrm2(n3*r, C, 1);
// double normCt = cblas_dnrm2(n3*r, Ct, 1);
// cout << normA << " " << normB << " " << normC << " " << normCt << endl;
/************************
******* update A ********
************************/
datatype* c_kr_b = (datatype*)mkl_malloc(n2 * n3 * r * sizeof(datatype), 64); // kr(c,b)
datatype* x1_times_c_kr_b = (datatype*)mkl_malloc(n1 * r * sizeof(datatype), 64); // X(1) * kr(c,b)
if( c_kr_b == NULL || x1_times_c_kr_b == NULL ){
printf("Cannot allocate enough memory for kr product.");
exit(1);
}
// c_kr_b tested: right, c col-major b col-major
for(MKL_INT i = 0; i < r; ++i){
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
n2, n3, 1, 1, B + i * n2, n2, C + i * n3, n3,
0, c_kr_b + i * n2 * n3, n2);
}
// tested: right
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
n1, r, n2*n3, 1, a.pointer, n1, c_kr_b, n2 * n3,
0, x1_times_c_kr_b, n1); // X(1) * kr(c,b)
// tested: right
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
r, r, n3, 1, C, n3, C, n3,
0, ct_times_c, r); // c^t * c
// tested: right
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
r, r, n2, 1, B, n2, B, n2,
0, bt_times_b, r); // b^t * b
// these Gram matrices are symmetric; cblas_dsyrk could be used instead of dgemm:
// cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
// r, n3, 1, C, n3,
// 0, ct_times_c, r);
// cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
// r, n2, 1, B, n2,
// 0, bt_times_b, r);
// tested right
vdMul(r * r, ct_times_c, bt_times_b, ct_times_c_times_bt_times_b);
datatype* inverse_a = (datatype*)malloc(r*r*sizeof(datatype));
pinv(ct_times_c_times_bt_times_b,inverse_a,r);
//update A
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
n1, r, r, 1, x1_times_c_kr_b, n1, inverse_a, r,
0, A, n1);
// //normalize A
// for(MKL_INT i = 0; i < r; ++i){
// double norm = cblas_dnrm2(n1, A + i * n1, 1);
// if (norm > 1){
// cblas_dscal(n1, 1.0 / norm, A + i * n1, 1);
// }
// }
MKL_free(c_kr_b);
MKL_free(x1_times_c_kr_b);
/************************
******* update B ********
************************/
datatype* c_kr_a = (datatype*)mkl_malloc(n3 * n1 * r * sizeof(datatype), 64);
datatype* x2_times_c_kr_a = (datatype*)mkl_malloc(n2 * r * sizeof(datatype), 64);
if( c_kr_a == NULL || x2_times_c_kr_a == NULL ){
printf("Cannot allocate enough memory for kr product.");
exit(1);
}
for(MKL_INT i = 0; i < r; ++i){
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
n1, n3, 1, 1, A + i * n1, n1, C + i * n3, n3,
0, c_kr_a + i * n1 * n3, n1); // kr(c,a)
}
for(MKL_INT i = 0; i < n3; ++i){
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
n2, r, n1, 1, a.pointer + i * n1 * n2, n1, c_kr_a + i * n2, n1,
1, x2_times_c_kr_a, n2); // X(2) * kr(c,a), rank update
}
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
r, r, n1, 1, A, n1, A, n1,
0, at_times_a, r); // a^t * a
// cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
// r, n1, 1, A, n1,
// 0, at_times_a, r);
vdMul(r * r, ct_times_c, at_times_a, ct_times_c_times_at_times_a);
datatype* inverse_b = (datatype*)malloc(r*r*sizeof(datatype));
pinv(ct_times_c_times_at_times_a,inverse_b,r);
// update B
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
n2, r, r, 1, x2_times_c_kr_a, n2, inverse_b, r,
1, B, n2);
// normA = cblas_dnrm2(n1*r, inverse_b, 1);
// cout << "inverse_a " << normA << endl;
//normalize B
// for(MKL_INT i = 0; i < r; ++i){
// double norm = cblas_dnrm2(n2, B + i * n2, 1);
// if (norm > 1){
// cblas_dscal(n2, 1.0 / norm, B + i * n2, 1);
// }
// }
MKL_free( c_kr_a );
MKL_free( x2_times_c_kr_a );
/************************
******* update C ********
************************/
datatype* b_kr_a = ( datatype* )mkl_malloc( n1 * n2 * r * sizeof( datatype ), 64 );
datatype* x3_times_b_kr_a = ( datatype* )mkl_malloc( n3 * r * sizeof(datatype), 64 );
if( b_kr_a == NULL || x3_times_b_kr_a == NULL ){
printf("Cannot allocate enough memory for kr product.");
exit(1);
}
for( MKL_INT i = 0; i < r; ++i ){
cblas_dgemm( CblasColMajor, CblasNoTrans, CblasTrans,
n1, n2, 1, 1, A + i * n1, n1, B + i * n2, n2,
0, b_kr_a + i * n1 * n2, n1 ); // kr(b,a)
// cblas_dgemm(CblasColMajor,CblasNoTrans,CblasTrans,n2,n1,1,1,B+i*n2,n2,A+i*n1,n1,0,b_kr_a+i*n1*n2,n2); // kr(a,b)
}
cblas_dgemm( CblasRowMajor, CblasNoTrans, CblasTrans,
n3, r, n1 * n2, 1, a.pointer, n1 * n2, b_kr_a, n1 * n2,
0, x3_times_b_kr_a, r ); // X(3) * kr(b,a) CblasRowMajor
// cblas_dsyrk( CblasColMajor, CblasUpper, CblasTrans,
// r, n2, 1, B, n2,
// 0, bt_times_b, r);
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
r, r, n2, 1, B, n2, B, n2,
0, bt_times_b, r); // b^t * b
vdMul( r * r, bt_times_b, at_times_a, bt_times_b_times_at_times_a );
datatype* inverse_c = (datatype*)malloc(r*r*sizeof(datatype));
pinv(bt_times_b_times_at_times_a,inverse_c,r);
// update Ct
cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
n3, r, r, 1, x3_times_b_kr_a, r, inverse_c, r,
1, Ct, r);
for(MKL_INT i = 0; i < r; ++i){
cblas_dcopy(n3, Ct + i, r, C + i * n3, 1); // transpose: copy row-major Ct back into col-major C
}
// lamda[0] = cblas_dnrm2(n3, C, 1);
// for(MKL_INT i = 1; i < r; ++i){
// lamda[i] = cblas_dnrm2(n3, C + i * n3, 1);
// if (lamda[i] > lamda[0] * 1e-5){
// cblas_dscal(n3, 1/lamda[i], C + i * n3, 1); //normalize
// }
// }
//normalize C
// for(MKL_INT i = 0; i < r; ++i){
// double norm = cblas_dnrm2(n3, C + i * n3, 1);
// if (norm > 1e-1){
// cblas_dscal(n3, 1 / norm, C + i * n3, 1); //normalize
// }
// }
datatype* a_con = (datatype*)mkl_malloc(n1 * n2 * n3 * sizeof(datatype), 64);
cblas_dgemm( CblasColMajor, CblasNoTrans, CblasNoTrans,
n1 * n2, n3, r, 1, b_kr_a, n1 * n2, Ct, r,
0, a_con, n1 * n2 ); // X(3)^t = kr(b,a) * C^t, CblasColMajor
cblas_daxpy(n1 * n2 * n3, -1.0, a.pointer, 1, a_con, 1);
double norm_s = cblas_dnrm2(n1 * n2 * n3, a_con, 1);
double epision = norm_s / norm_a;
// cout << "norm error: " << epision << endl;
if( epision < tolerance){
break;
}
/* clean up */
MKL_free(b_kr_a);
MKL_free( x3_times_b_kr_a );
MKL_free(a_con);
turn++;
}
MKL_free( Ct );
MKL_free( at_times_a );
MKL_free( bt_times_b );
MKL_free( ct_times_c );
MKL_free( ct_times_c_times_bt_times_b );
MKL_free( ct_times_c_times_at_times_a );
MKL_free( bt_times_b_times_at_times_a );
cp_format<datatype> result;
result.cp_A = A;
result.cp_B = B;
result.cp_C = C;
result.cp_lamda = lamda;
return result;
}
template<class datatype>
cp_format<datatype> cp_als( Tensor3D<datatype> &a, int r, double tolerance = 1e-6) {
int max_iter = 1;
MKL_INT *shape = a.size(); //dimension
MKL_INT n1 = shape[0]; MKL_INT n2 =shape[1]; MKL_INT n3 = shape[2];
//random A,B,C
datatype* A = (datatype*)mkl_malloc(n1 * r * sizeof(datatype), 64);
datatype* B = (datatype*)mkl_malloc(n2 * r * sizeof(datatype), 64);
datatype* C = (datatype*)mkl_malloc(n3 * r * sizeof(datatype), 64);
if( A == NULL || B == NULL || C == NULL ){
printf("Cannot allocate enough memory.");
exit(1);
}
/******randomization *****/
MKL_INT status[3]; // random state
VSLStreamStatePtr stream;
// The seed need to test
srand((unsigned)time(NULL));
MKL_INT SEED = rand();
vslNewStream(&stream,VSL_BRNG_MCG31, SEED);
status[0] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n1 * r, A, 0, 1);
// srand((unsigned)time(NULL));
// SEED = rand();
status[1] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n2 * r, B, 0, 1);
// srand((unsigned)time(NULL));
// SEED = rand();
status[2] = vdRngUniform(VSL_RNG_METHOD_UNIFORM_STD, stream, n3 * r, C, 0, 1);
vslDeleteStream(&stream);
if( status[0] + status[1] +status[2] != 0){
printf("Random initialization failed.");
exit(1);
}
// for(int i = 0; i < 10; ++i){
// cout << A[i] << " ";
// }
// cout << endl;
// for(int i = 0; i < 10; ++i){
// cout << B[i] << " ";
// }
// cout << endl;
// for(int i = 0; i < 10; ++i){
// cout << C[i] << " ";
// }
// cout << endl;
/***********************
*******update A********
***********************/
datatype* c_kr_b = (datatype*)mkl_malloc(n2 * n3 * r * sizeof(datatype), 64); // kr(c,b)
datatype* x1_times_c_kr_b = (datatype*)mkl_malloc(n1 * r * sizeof(datatype), 64); // X(1) * kr(c,b)
if( c_kr_b == NULL || x1_times_c_kr_b == NULL ){
printf("Cannot allocate enough memory.");
exit(1);
}
// c_kr_b tested: right
for(MKL_INT i = 0; i < r; ++i){
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
n2, n3, 1, 1, B + i * n2, n2, C + i * n3, n3,
0, c_kr_b + i * n2 * n3, n2);
}
// for(int i = 0; i < 6; ++i){
// cout << C[i] << " ";
// }
// cout << endl;
// for(int i = 0; i < 6; ++i){
// cout << B[i] << " ";
// }
// cout << endl;
// for(int i = 0; i < 18; ++i){
// cout << c_kr_b[i] << " ";
// }
// cout << endl;
//
// for(int i = 0; i < 27; ++i){
// cout << a.pointer[i] << " ";
// }
// cout << endl;
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
n1, r, n2*n3, 1, a.pointer, n1, c_kr_b, n2 * n3,
0, x1_times_c_kr_b, n1); // X(1) * kr(c,b)
// for(int i = 0; i < 6; ++i){
// cout << x1_times_c_kr_b[i] << " ";
// }
datatype* at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //A^t * A
datatype* bt_times_b = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //B^t * B
datatype* ct_times_c = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); //C^t * C
datatype* ct_times_c_times_bt_times_b = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // c^t * c * b^t *b
datatype* ct_times_c_times_at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // c^t * c * a^t *a
datatype* bt_times_b_times_at_times_a = (datatype*)mkl_malloc(r * r * sizeof(datatype), 64); // b^t * b * a^t *a
if( at_times_a == NULL || bt_times_b == NULL || ct_times_c == NULL
|| ct_times_c_times_at_times_a == NULL || ct_times_c_times_bt_times_b == NULL
|| bt_times_b_times_at_times_a == NULL ){
printf("Cannot allocate enough memory.");
exit(1);
}
cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
r, n3, 1, C, n3,
0, ct_times_c, r); // C^t * C
cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
r, n2, 1, B, n2,
0, bt_times_b, r); // B^t * B
vdMul(r * r, ct_times_c, bt_times_b, ct_times_c_times_bt_times_b);
// pseudo-inverse via symmetric indefinite factorization (dsytrf) followed by inversion (dsytri)
int info = -1;
MKL_INT* ivpv=(MKL_INT*)mkl_malloc(r * sizeof(MKL_INT), 64);
datatype* work=(datatype*)mkl_malloc(r * sizeof(datatype),64);
MKL_INT order = r;
dsytrf("U", &order, ct_times_c_times_bt_times_b, &r, ivpv, work, &r, &info);
dsytri("U", &order, ct_times_c_times_bt_times_b, &r, ivpv, work, &info);
// cblas_dsymm(CblasColMajor, CblasRight, CblasUpper, n1, r, 1, cal_a, r, c_times_ct_times_b_times_bt,n1,0,A,n1);
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
n1, r, r, 1, x1_times_c_kr_b, n1, ct_times_c_times_bt_times_b, r,
0, A, n1);
MKL_free(c_kr_b);
MKL_free(x1_times_c_kr_b);
/************************
********update B*********
************************/
datatype* c_kr_a = (datatype*)mkl_malloc(n3 * n1 * r * sizeof(datatype), 64);
datatype* x2_times_c_kr_a = (datatype*)mkl_malloc(n2 * r * sizeof(datatype), 64);
if( c_kr_a == NULL || x2_times_c_kr_a == NULL ){
printf("Cannot allocate enough memory.");
exit(1);
}
for(MKL_INT i = 0; i < r; ++i){
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
n1, n3, 1, 1, A + i * n1, n1, C + i * n3, n3,
0, c_kr_a + i * n3 * n1, n1); // kr(c,a)
}
for(MKL_INT i = 0; i < n3; ++i){
cblas_dgemm(CblasColMajor, CblasTrans, CblasNoTrans,
n2, r, n1, 1, a.pointer + i * n1 * n2, n1, c_kr_a + i * n2, n1,
1, x2_times_c_kr_a, n2); // X(2) * kr(c,a) rank update
}
cblas_dsyrk(CblasColMajor, CblasUpper, CblasTrans,
r, n1, 1, A, n1,
0, at_times_a, r); // A^t * A
vdMul(r * r, ct_times_c, at_times_a, ct_times_c_times_at_times_a);
dsytrf( "U", &r, ct_times_c_times_at_times_a, &r, ivpv, work, &r, &info );
dsytri( "U", &r, ct_times_c_times_at_times_a, &r, ivpv, work, &info );
cblas_dsymm( CblasColMajor, CblasRight, CblasUpper,
n2, r, 1, ct_times_c_times_at_times_a, r, x2_times_c_kr_a, n2,
0, B, n2 ); // the symmetric matrix is the 'a' argument of cblas_dsymm
MKL_free( c_kr_a );
MKL_free( x2_times_c_kr_a );
/************************
******* update C ********
************************/
datatype* b_kr_a = ( datatype* )mkl_malloc( n1 * n2 * r * sizeof( datatype ), 64 );
datatype* x3_times_b_kr_a = ( datatype* )mkl_malloc( n3 * r * sizeof(datatype), 64 );
for( MKL_INT i = 0; i < r; ++i ){
cblas_dgemm( CblasColMajor, CblasNoTrans, CblasTrans,
n1, n2, 1, 1, A + i * n1, n1, B + i * n2, n2,
0, b_kr_a + i * n1 * n2, n1 ); // kr(b,a)
// cblas_dgemm(CblasColMajor,CblasNoTrans,CblasTrans,n2,n1,1,1,B+i*n2,n2,A+i*n1,n1,0,b_kr_a+i*n1*n2,n2); // kr(a,b)
}
cblas_dgemm( CblasRowMajor, CblasNoTrans, CblasTrans,
n3, r, n1 * n2, 1, a.pointer, n1 * n2, b_kr_a, n1 * n2,
0, x3_times_b_kr_a, r ); // X(3) * kr(b,a) CblasRowMajor
cblas_dsyrk( CblasColMajor, CblasUpper, CblasTrans,
r, n2, 1, B, n2,
0, bt_times_b, r); // B^t * B
vdMul( r * r, bt_times_b, at_times_a, bt_times_b_times_at_times_a );
dsytrf( "U", &r, bt_times_b_times_at_times_a, &r, ivpv, work, &r, &info );
dsytri( "U", &r, bt_times_b_times_at_times_a, &r, ivpv, work, &info );
cblas_dsymm( CblasRowMajor, CblasRight, CblasUpper,
n3, r, 1, bt_times_b_times_at_times_a, r, x3_times_b_kr_a, r,
0, C, r ); // the symmetric matrix is the 'a' argument of cblas_dsymm
MKL_free(b_kr_a);
MKL_free( x3_times_b_kr_a );
MKL_free( ivpv );
MKL_free( work );
MKL_free( at_times_a );
MKL_free( bt_times_b );
MKL_free( ct_times_c );
MKL_free( ct_times_c_times_bt_times_b );
MKL_free( ct_times_c_times_at_times_a );
MKL_free( bt_times_b_times_at_times_a );
cp_format<datatype> result;
result.cp_A = A;
result.cp_B = B;
result.cp_C = C;
return result;
}
}
class Memory:
'''
To do: in theory, a lock could be deleted in between when its existence is checked
and when it is acquired. At some point this needs to be fixed.
'''
#if necessary for efficiency, these keys should be numbers or enum constants.
STATES = "__world states"
STATE = "__current state"
GOAL_GRAPH = "__goals"
CURRENT_GOALS = "__current goals"
PLANS = "__plans"
ACTIONS = "__actions"
FEEDBACK = "__feedback"
#ROS constants used by rosrun classes and related modules in act and perceive.
ROS_OBJS_DETECTED = "__detected object queue"
ROS_OBJS_STATE = "__state"
STATE_HISTORY = "__history"
CALIBRATION_MATRIX = "__camera calibration status"
CALIBRATION_Z = "__Z"
STACK_Z = "__SZ"
UNSTACK_Z = "__UZ"
STACK_3Z = "__3SZ"
UNSTACK_3Z = "__3SZ"
RAISING_POINT = "__RP"
PUTTING_POINT = "__PP"
ROS_WORDS_HEARD = "__words heard queue"
ROS_FEEDBACK = "__ros feedback"
Objects = "__objects"
DELIVERED = "__deliveredscore"
#MetaCognitive
TRACE_SEGMENT = "__trace segment"
META_ANOMALIES = "__meta anomalies"
META_GOALS = "__meta goals"
META_CURR_GOAL = "__meta current goal"
META_PLAN = "__meta plan" #TODO allow more than one plan
# data recording
PLANNING_COUNT = "__PlanningCount" # number of times planning was performed (including replanning)
GOALS_ACTIONS_ACHIEVED = "__GoalsActionsAchieved" # number of goals achieved
GOALS_ACHIEVED = "__GoalsAchieved" # number of goals achieved
ACTIONS_EXECUTED = "__ActionsExecuted" # number of actions executed
MIDCA_CYCLES = "__MIDCA Cycles"
CURR_PLAN = "__CurrPlan"
# GDA
DISCREPANCY = "__discrepancy"
EXPLANATION = "__explanation"
EXPLANATION_VAL = "__explanation_val"
#Goal Transformation
CL_TREE = "__class heirarchy tree"
OB_TREE = "__object heirarchy tree"
#Construction
TIME_CONSTRUCTION = "__Construction Time"
ACTUAL_TIME_CONSTRUCTION = "__Actual Time"
EXPECTED_TIME_CONSTRUCTION = "__Expected Time"
SELECTED_BUILDING_LIST = "__Buildings Selected"
COMPLETE_BUILDING_LIST = "__Buildings"
EXECUTED_BUILDING_LIST = "__Executed Buildings"
REJECTED_GOALS = "__rejected goals"
ACTUAL_SCORES ="__Expected Scores"
P = "__ list of scores"
t = "__list of times"
# Restaurant
MONEY = "__money limit"
SELECTED_ORDERS = "__selected orders"
COMPLETED_ORDERS = "__completed orders"
EXPECTED_SCORE = "__expected score"
ACTUAL_SCORE = "__actual score"
EXPECTED_COST= "__expected cost"
ACTUAL_COST= "__actual cost"
def __init__(self, args = {}):
self.knowledge = {}
self.update(args)
self.logger = None
self.mainLock = threading.Lock() #to synchronize lock creation
self.locks = {} #lock for each key
self.logEachAccess = True
#MetaCognitive Variables
self.metaEnabled = False
self.myMidca = None # pointer to MIDCA object
self.trace = False
#Handles structs with custom update methods, dict update by dict or tuple, list append, and simple assignment.
def _update(self, structname, val):
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
with self.locks[structname]:
if not structname in self.knowledge:
self.knowledge[structname] = val
elif self.knowledge[structname].__class__.__name__ == "dict":
if val.__class__.__name__ == "dict":
self.knowledge[structname].update(val) #update dict with dict
elif len(val) == 2:
self.knowledge[structname][val[0]] = val[1] #update dict with tuple
elif hasattr(self.knowledge[structname], "update"):
self.knowledge[structname].update(val) #generic update
else:
self.knowledge[structname] = val #assignment
self.logAccess(structname)
def add(self, structname, val):
'''
Used to create lists of items. If nothing is stored under structname, will create
a one-item list containing val. If there is a list, will append val. If some item
is stored with no append method, will create a two-item list with the previously
stored item and val.
'''
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
with self.locks[structname]:
if not structname in self.knowledge:
self.knowledge[structname] = [val]
elif hasattr(self.knowledge[structname], "append"):
self.knowledge[structname].append(val)
else:
self.knowledge[structname] = [self.knowledge[structname], val]
self.logAccess(structname)
def set(self, structname, val):
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
with self.locks[structname]:
self.knowledge[structname] = val
self.logAccess(structname)
def update(self, args):
for structname, val in list(args.items()):
self._update(structname, val)
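#Applies val via update() to every item stored under structname when the stored struct is iterable,
#or to the stored struct itself if it has an update method; stored string values are skipped.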
def update_all(self, structname, val):
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
with self.locks[structname]:
if structname in self.knowledge and (not isinstance(self.knowledge[structname], str)):
struct = self.knowledge[structname]
if hasattr(struct, "__getitem__") or hasattr(struct, "__iter__"):
for item in struct:
if hasattr(item, "update"):
item.update(val)
elif hasattr(struct, "update"):
struct.update(val)
self.logAccess(structname)
def remove(self, structname):
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
with self.locks[structname]:
self.logAccess(structname)
if structname in self.knowledge:
del self.knowledge[structname]
del self.locks[structname]
def clear(self):
self.knowledge.clear()
self.locks.clear()
def get(self, structname):
with self.mainLock:
if structname not in self.locks:
return None #if there is knowledge stored there must be a lock.
with self.locks[structname]:
self.logAccess(structname)
if structname in self.knowledge:
return self.knowledge[structname]
return None
def get_and_clear(self, structname):
with self.mainLock:
if structname not in self.locks:
return None #if there is knowledge stored there must be a lock.
with self.locks[structname]:
self.logAccess(structname)
val = self.knowledge[structname]
del self.knowledge[structname]
del self.locks[structname]
return val
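#Returns the value stored under structname while leaving its per-key lock held; the caller must
#release it later with unlock(). If nothing is stored yet, a lock is created and held and None is returned.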
def get_and_lock(self, structname):
with self.mainLock:
if structname not in self.locks:
self.locks[structname] = threading.Lock()
self.locks[structname].acquire()
return None
self.locks[structname].acquire()
self.logAccess(structname)
return self.knowledge[structname]
def unlock(self, structname):
try:
self.locks[structname].release()
except KeyError:
pass
def enableLogging(self, logger):
self.logger = logger
def logAccess(self, key):
if self.logger and self.logEachAccess:
self.logger.logEvent(MemAccessEvent(key))
def enableMeta(self, phaseManager):
if not self.trace:
raise Exception("Please call mem.enableTrace() before calling enableMeta()")
self.metaEnabled = True
self.myMidca = phaseManager
def enableTrace(self):
if not self.trace:
self.trace = CogTrace()
/**
 * Maps a Jira Issue response object to a TimeEvent object.
 *
 * @param issue the Jira issue to map
 * @return the issue represented as a TimeEvent
 */
private TimeEvent issueAsTimeEvent(Issue issue) {
String previewUrl = String.format(host + BROWSE_ISSUE_URI_FORMAT, issue.getKey());
String projectName = issue.getFields().getProject().getName();
long projectId = projectName.contains("-") ? Long.parseLong(projectName.split("-")[0]) : 0L;
TimeEvent event = new ImportedTimeEvent(
Long.parseLong(issue.getId()), issue.getFields().getSummary(), "currently_assigned_tasks",
previewUrl, new Date(), projectId);
return event;
}
// This function is mine.
// It was added so that external commands can be executed outside the source code when a branching condition is met.
func exe_cmd(cmd string, wg *sync.WaitGroup) {
fmt.Println(cmd)
out, err := exec.Command("sh", "-c", cmd).Output()
if err != nil {
fmt.Printf("command failed with %s\n", err)
}
fmt.Printf("%s", out)
wg.Done()
}
// ValidateMessaging validates messaging provider and credentials used for
// the user registration.
func (cfg *UserRegistryConfig) ValidateMessaging() error {
if cfg.messaging == nil {
return errors.ErrUserRegistryConfigMessagingNil.WithArgs(cfg.Name)
}
if found := cfg.messaging.FindProvider(cfg.EmailProvider); !found {
return errors.ErrUserRegistryConfigMessagingProviderNotFound.WithArgs(cfg.Name)
}
providerType := cfg.messaging.GetProviderType(cfg.EmailProvider)
if providerType == "email" {
providerCreds := cfg.messaging.FindProviderCredentials(cfg.EmailProvider)
if providerCreds == "" {
return errors.ErrUserRegistryConfigMessagingProviderCredentialsNotFound.WithArgs(cfg.Name, cfg.EmailProvider)
}
if providerCreds != "passwordless" {
if cfg.credentials == nil {
return errors.ErrUserRegistryConfigCredentialsNil.WithArgs(cfg.Name)
}
if found := cfg.credentials.FindCredential(providerCreds); !found {
return errors.ErrUserRegistryConfigCredentialsNotFound.WithArgs(cfg.Name, providerCreds)
}
}
}
return nil
}
/** Respond to the mouse click depicted by EVENT. */
public void doClick(MouseEvent event) {
int x = event.getX() - SEPARATOR_SIZE,
y = event.getY() - SEPARATOR_SIZE;
int r = y / SQUARE_SEP + 1;
int c = x / SQUARE_SEP + 1;
_commandOut.printf("%d %d%n", r, c);
}
Performance evaluation of wireless data traffic in mm wave massive MIMO communication
Received Feb 1, 2020 Revised May 21, 2020 Accepted Jun 1, 2020 Due to the evolution of mobile devices and applications in the current decade, a new direction for wireless networks has emerged. The general consensus about the future 5G network is that it should target a thousand-fold system capacity, hundred-fold energy efficiency, lower latency, and seamless connectivity. Massive multiple-input multiple-output (MIMO) and the millimeter wave (mm Wave) have been considered for the ultra-dense cellular network (UDN), because they are viewed as the emergent solution for the next generations of communication. This article focuses on evaluating and discussing the performance of mm Wave massive MIMO for the ultra-dense network, which is one of the major technologies for the 5G wireless network. In addition, the energy efficiencies of two kinds of architectures for wireless backhaul networks were investigated and compared. The simulation results reveal several points that should be considered when deploying small cells in the two UDN architectures with respect to backhaul network capacity and backhaul energy efficiency: changing the frequency bands in the Distribution approach yields an energy efficiency that reaches 600 Mb/s at 15 nodes, while the Conventional approach reaches less than 100 Mb/s at the same number of nodes.
INTRODUCTION
It is expected that in the next decade there will be a research boom in the area of fifth-generation wireless cellular networks. Moreover, many potential transmission technologies are emerging that are capable of handling 1000 times the volume of wireless traffic in future wireless communications. Millimeter-wave massive MIMO is perceived as a promising 5G transmission technique because it offers gigabit-per-second data rates. It has the potential to provide significant enhancements in energy and spectral efficiency, as well as to increase the capacity of mobile networks. Another major technology often featured in the list of 5G enablers is the small cell network; the deployment of these small cell base stations requires only low power, and it is self-organizing and cost efficient. The main aim of using small cells is to improve the energy efficiency and throughput of cellular networks. Small cells have become attractive to mobile operators involved in the development of wireless transmission systems, because the next generation of wireless networks must support higher volumes of data (one thousand times higher mobile data rate volume per area) with lower energy consumption, and this can only be achieved through small cells that let the wireless transmission systems overlay a small geographical area for outdoor/indoor applications. The ultra-dense network (UDN) has been proposed as a major system architecture for achieving an aggressive version of 5G, because the UDN is capable of facilitating green communications, providing seamless coverage, and enabling a Gbps user experience. When the transmission power of 5G base stations is constrained to the same level as 4G base station transmission power, the per-antenna transmission power at a 5G BS must be reduced ten to twenty times compared with the per-antenna transmission power at a 4G BS. mm Wave communication technology is being explored for application in cellular networks, and it is capable of offering over 100 MHz of frequency bandwidth, as presented in Figure 1. Huge path losses can be compensated using highly directional antenna arrays. In UDN, the macro-cell base stations (BSs) are responsible for controlling the allocation of resources, scheduling users, and supporting high-mobility users; these macro-cell BSs are characterized by large coverage. Despite the benefits offered by the two architectures for deploying small cells in UDN, these architectures have several limitations: (1) densification of small cells and connectivity with mm Wave massive MIMO with integrated access and backhaul for two architectures in ultra-dense cellular networks, (2) transmission power in the downlink, (3) the lack of a method for optimizing the throughput of the uplink, downlink, and the overall system in ultra-dense cellular networks, and (4) the energy consumption of the system and how to optimize the dissipated energy and energy efficiency in the system.
The contributions of this article are in three areas: (1) investigation of 5G concepts, features, the deployment of small cells in the ultra-dense network (UDN) in 5G networks, and the design requirements when using mm Wave with massive MIMO to improve link reliability; (2) development and simulation of mm Wave massive MIMO backhauling for two architectures (the Conventional approach and the Distribution approach), taking account of LOS channels between small cells; (3) analysis and discussion of the changes that occur in the energy efficiency, spectral efficiency, and capacity of a backhaul link over the number of small cells.
Ultra-Dense Network for small cells. It is expected that cell-site coverage will become smaller than what is obtainable today (i.e., micro or macro cells) because 5G employs higher RAN frequencies. Increasing the capacity of a single cell site by 1000 times is not achievable. Thus, dense small cell deployment is the only effective way of providing one thousand times extra capacity in a 5G network. Because dense small cells are inherently deployed in a grid, 5G backhaul will be confronted by the following challenges: 1. Frequency reuse will be highly limited by the denser backhaul links caused by the denser small cell grid. This creates the need for better use of the wireless backhaul spectrum and a set of new requirements for cell-site synchronization. As forecast, 5G networks will require more precise synchronization than LTE-A (i.e., from 1.5 μs to approx. 0.5 μs).
Massive MIMO UDN. Multiple-input multiple-output wireless systems have been incorporated into current standards and are now used globally. In recent times, massive MIMO systems, which are equipped with tens or even hundreds of antennas, have emerged as an improved MIMO technique designed to meet the growing traffic demand of 5G wireless communication networks. Massive MIMO (MM) is a multi-user MIMO technology in which K single-antenna user equipments (UEs) are served on the same time-frequency resource by a base station (BS) equipped with a relatively large number M of antennas. Basically, massive MIMO is specifically tailored for use in a cellular network, whereby a set of single-antenna co-channel users is served by a large number Nt of antennas. In this regime, the channel becomes near-deterministic because the BS-UE radio links become nearly orthogonal to each other. This is attributed to the asymptotic disappearance of intra-cell interference, fast fading, and irrelevant noise as the number of antennas M grows large. Favorable propagation is capable of yielding significant EE gains, because it allows the realization of multiple orders of multiplexing and array gains. Millimeter Wave in UDN. mm Wave communication offers high bandwidth, thereby increasing the data rate. The frequency band of the millimeter wave (mm Wave) is 30-300 GHz, corresponding to wavelengths from 10 to 1 mm. As a result of the physical properties possessed by the mm Wave, it is able to effectively solve several problems associated with high-speed broadband wireless access. UDN involves the dense deployment of small cells in hotspots such as shopping malls and office buildings. However, the deployment of these small cells requires a high data rate so that traffic can be offloaded from macro cells, since the large majority of traffic demand comes from these hotspots. Moreover, it is important for the operator to pay attention to the cost of deployment and power efficiency, as they are crucial. The following reasons make the mm Wave more appropriate for backhaul in UDN. High capacity and inexpensive: a potential gigahertz transmission bandwidth can be achieved by means of the large amount of underutilized mm Wave spectrum, including the unlicensed V-band (57-67 GHz) and the lightly licensed E-band (71-76 GHz and 81-86 GHz) (the specific regulation may vary from country to country). Immunity to interference: rain attenuation limits the E-band's transmission-distance comfort zone to up to several kilometers, while oxygen and rain attenuation limit the V-band to about 500-700 m. The high path loss makes the mm Wave a more suitable candidate for UDN, where minimal inter-cell interference and improved frequency reuse are expected. It is important to note that when mm Wave is utilized in UDN, rain attenuation is a minor issue.
SYSTEM MODEL 2.1. Conventional approach
One of the key solutions for the fifth-generation wireless network with regard to massive MIMO (large-scale MIMO) millimeter wave communication technologies is the small cell. The small cell scenario is an inevitable solution for the future 5G wireless network. The usual architecture of a cellular network is a kind of tree-grid architecture in which the base station managers in the core network monitor each macro-cell base station, and a given gateway forwards the backhaul traffic to the core network. In the first backhaul solution, it is assumed that the macrocell base station is situated at the center of the macrocell and that the small cells are distributed homogeneously within the macrocell. In massive MIMO, the main base station simultaneously uses an array with a few hundred base station antennas. Here, the combination of the huge available bandwidth (in the millimeter wave frequency bands) and the high antenna gains that can be achieved with massive MIMO antenna arrays enables the exploitation of spatial-domain DoF for the formation of high-resolution beams, as presented in Figure 2. This in turn enhances spectral efficiency, compactness, reliability, and overall system capacity. All small cell base stations are configured with the same transmission power and coverage. In conventional cellular networks that require the deployment of microcells, a hybrid architecture is provided so as to enable the deployment of hotspots and microcells such as picocells and femtocells.
Distribution approach
Due to the high demand for millimeter wave communication technologies alongside massive multi-input multi-output antennas, there is a need for the densification of small cells in 5G wireless networks. Nevertheless, it is difficult for the broadband internet or the fiber link to forward the backhaul traffic of each small cell base station, given the challenges associated with the location and cost of deployment in urban areas. When this was compared with the central approach, it was observed that there is no main base station through which all backhaul traffic from small cells can be combined. Instead, all backhaul traffic is relayed to certain small cell base stations, as mentioned in Figure 3. It is assumed that all the small cell base stations are distributed homogeneously within a specific spot. In order to enable the reception of the wireless backhaul data traffic from the small cells in the macrocell, the gateway is configured at the macrocell base stations, which often have sufficient space for the installation of massive MIMO millimeter wave antennas. In the distribution architecture of ultra-dense cellular networks, the deployment of multiple gateways allows flexibility in terms of forwarding the backhaul traffic into the core network. Here, gateways are deployed at multiple small cell base stations according to the backhaul traffic requirements and geography scenarios. The adjacent SBSs employ millimeter wave communications to relay the backhaul traffic of a SBS. All backhaul traffic from adjacent SBSs is cooperatively forwarded to a specified SBS, which is connected to the core network by FTTC links, as mentioned.
METHODOLOGY
The simulation evaluates the energy efficiency of both configurations while changing the mm Wave frequency band, in order to determine the effect of changing frequency bands on 5G backhauling. The effect of changing the number of users is also simulated. The performance evaluation of the two proposed systems considers LOS communication between nodes; if the system is considered LOS, all communication between nodes is LOS, which can improve the system capacity. Two performance metrics are used in this paper: the first is the energy efficiency (EE), and the second is the capacity of the wireless backhaul (C). These parameters are described as follows:
Capacity with deployed small cells
Capacity is a measure of how many information bits per time unit can be transferred without error over a given channel: C = B log2(1 + SNR), where B is the channel bandwidth. We focus on the channels between the deployed transmitters and receivers and their respective capacities. Instead of the SNR alone, we add a so-called interference margin; this term describes how much the experienced noise increases due to interference.
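As an illustrative worked example (with assumed numbers, not the values used in this study): for a backhaul link with B = 1 GHz of mm Wave bandwidth and an effective SNR of 15 dB (a linear ratio of about 31.6), the formula gives C = 10^9 · log2(1 + 31.6) ≈ 5.0 Gb/s; applying a 3 dB interference margin lowers the effective SNR to 12 dB and the capacity to about 4.1 Gb/s.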
Energy Efficiency
Except for the backhaul network capacity, the backhaul energy efficiency is another key constraint parameter that restricts the densification of 5G ultra-dense cellular networks. Energy efficiency is defined as the achievable backhaul capacity divided by the consumed power, EE = C / P, where P is the transmit power and C is the capacity obtained from the SNR. Table 1 summarizes the parameters used in this comparative study. There are two operators in the scenario, as mentioned above, named by C and EE, which together form a cognitive network. The number of small cells is 15 (and is also changed to 50), and the 28 GHz and 38 GHz bands are used to compare the effect of the two frequency bands on the performance metrics under study. The channel type is LOS. The carrier frequency used is 38 GHz to satisfy the 5G requirement and the mm Wave concept.
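To make the two metrics concrete, the following minimal Python sketch computes the per-link Shannon capacity and a simple aggregate energy efficiency (total backhaul capacity divided by total consumed power) as the number of small cells grows. The bandwidth, SNR, interference margin, and per-node power figures are illustrative assumptions, not the parameters of Table 1.

import math

def backhaul_metrics(num_cells, bandwidth_hz=1e9, snr_db=15.0,
                     interference_margin_db=3.0, node_power_w=5.0):
    # Effective SNR after subtracting the interference margin (both in dB).
    eff_snr = 10 ** ((snr_db - interference_margin_db) / 10.0)
    # Shannon capacity of a single backhaul link, C = B * log2(1 + SNR).
    link_capacity = bandwidth_hz * math.log2(1.0 + eff_snr)
    # Aggregate capacity and power grow with the number of small cells.
    total_capacity = num_cells * link_capacity        # bit/s
    total_power = num_cells * node_power_w            # W
    energy_efficiency = total_capacity / total_power  # bit/Joule
    return total_capacity, energy_efficiency

for n in (15, 50):
    cap, ee = backhaul_metrics(n)
    print(f"{n} small cells: capacity = {cap / 1e9:.1f} Gb/s, EE = {ee / 1e6:.1f} Mbit/J")

Under these assumptions the aggregate capacity scales linearly with the number of nodes while the per-node energy efficiency stays constant; the architecture-dependent effects discussed in the results come from how the backhaul traffic is aggregated and relayed, which this sketch does not model.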
RESULTS AND ANALYSIS
This section gives the simulation results of the performance evaluation of the two proposed architectures shown in Figures 2 and 3. It consists of two parts that deal with all of the performance metrics mentioned in Section 3, in order to give a full overview of small cell deployment in UDN systems. Figure 4 shows the Conventional approach: the backhaul energy efficiency of ultra-dense networks was analyzed with respect to the number of small cell (radio access node) base stations. In addition, the small cell BS operating power, the linear function of the radio access nodes, and the BS backhaul transmission power were also considered. Firstly, it was observed that the energy efficiency increased with the number of radio access node base stations. Secondly, raising the frequency band while keeping the number of small cells fixed reduces the energy efficiency of the wireless backhaul networks. As the number of small cells in the Distribution approach increases, a significant increase occurs in the energy efficiency of the backhaul networks. It can be observed from this figure that in the Distribution approach the energy efficiency of the three millimeter wave frequency bands is about the same, especially at 2.3 GHz and 38 GHz, and approaches 5000 Mb/s at 50 nodes. It can also be observed that the lowest value of backhaul energy efficiency was about 4200 Mb/s at 73 GHz. The energy efficiency of the 73 GHz band is lower than that of the others because of the high atmospheric attenuation as the frequency band rises: the gases present in the atmosphere absorb the high frequency signals, which also have a short range and as such experience high attenuation, reducing the received power and thereby the energy efficiency. In Figure 6, the performance of the two architectures with different numbers of users (subscribers) is studied in terms of energy efficiency. From the simulation results presented in Figure 6, it can be observed that as the distance increases with a fixed number of network users, the energy efficiency of the wireless backhaul networks decreases. The energy efficiency depends on the number of users: as the number of subscribers decreases over a fixed area, the energy efficiency of the backhaul networks increases. The results in this figure show that an energy efficiency of 3000 Mb/s was achieved when the number of users was 10 subscribers and the distance was only 50 meters; when the number of subscribers increased to 25 at the same distance, the energy efficiency became 500 Mb/s. From Figure 8 and Figure 9, it can be observed that the capacity of the wireless backhaul networks is affected by the number of small cells in the architecture, considering three different spectral efficiencies (15, 20, 30) bit/Hz. As the number of small cells within the Conventional architecture increases, the backhaul capacity also increases. The figure clearly shows that with 50 SBSs the highest value obtained was about 27 Gbps when the spectral efficiency was 30 bit/Hz; also, with the same number of SBSs and 15 bit/Hz, the obtained capacity was 3 Gbps.
This shows that the role of the spectral efficiency in the central architecture is crucial and, as such, cannot be underestimated. On the other hand, Figure 9 shows that for the distribution architecture, the backhaul capacity increases exponentially with the number of small cells. Also, with a fixed number of small cells, the backhaul capacity increases as the spectral efficiency of the small cells increases. It was observed that the highest value reached 3500 Gbps. This reflects the gaps between the 15, 20, and 30 bit/Hz spectral efficiencies.
CONCLUSION
In this article, massive MIMO, millimeter wave communications, and small cell technologies were considered for the realization of gigabit transfer rates in 5G networks. In addition, the performance of the distributed and Conventional architectures used with small cell communication technologies and millimeter wave (mm Wave) massive MIMO antennas in ultra-dense cellular networks was evaluated and discussed. A comparison between the energy efficiency of the wireless backhaul networks of the two architectures was carried out. Basically, these two architectures are widely employed in the development of fifth-generation systems with the aim of increasing the capacity and the energy efficiency of the entire system. Based on the results, the energy efficiency of the Distribution approach is higher than that of the Conventional approach in 5G mobile networks. With high bandwidth, the capacity of the mobile networks is significantly increased.
import codecs
import os
from Crypto.Util.number import long_to_bytes
def asm(code):
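# Assemble a WAT function body into raw WASM bytecode: wrap it in a one-function module,
# compile it with wat2wasm, and strip the module header (the first 0x22 bytes).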
code = code.replace("end", "")
payload = """(module
(func $dupa (result i32)
%s
)
(export "dupa" (func $dupa)))
""" % code
with codecs.open("code.wat", "w") as out_file:
out_file.write(payload)
os.system("~/ctf/hitcon/wabt/bin/wat2wasm code.wat")
with codecs.open("code.wasm", "rb") as in_file:
data = in_file.read()
res = data[0x22:]
os.system("rm code.wat")
os.system("rm code.wasm")
return res
def dis(code):
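# Disassemble raw function-body bytes: prepend a minimal one-function WASM module header
# (with patched code-section sizes), run wasm2wat, and strip the module boilerplate from the output.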
size = long_to_bytes(len(code)+1).encode("hex")
size2 = long_to_bytes(len(code)+3).encode("hex")
prefix = ('0061736d010000000105016000017f03020100070801046475706100000a'+size2+'01'+size+'00').decode(
"hex")
with codecs.open("code.wasm", "wb") as out_file:
out_file.write(prefix + code)
os.system("~/ctf/hitcon/wabt/bin/wasm2wat code.wasm > code.wat")
with codecs.open("code.wat", "r") as in_file:
data = in_file.read()[79:-30] + "\nend"
data = data.replace(" ", "")
return data
def eval(code):
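# Evaluate a WAT function body: compile it with wat2wasm, execute it with wasm-interp,
# and return the exported function's i32 result as a signed 32-bit integer.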
code = code.replace("end", "")
payload = """(module
(func $dupa (result i32)
%s
)
(export "dupa" (func $dupa)))
""" % code
with codecs.open("code.wat", "w") as out_file:
out_file.write(payload)
os.system("~/ctf/hitcon/wabt/bin/wat2wasm code.wat")
os.system("~/ctf/hitcon/wabt/bin/wasm-interp code.wasm --run-all-exports > res.txt")
with codecs.open('res.txt', 'r') as result_file:
res = result_file.read()
os.system("rm code.wasm")
result = int(res[14:])
if result >= 2**31:
return result-2**32
else:
return result
def main():
code = """i32.const 62537
i32.const 17488
i32.mul
i32.const 5345
i32.const 12820
i32.const 7342
i32.mul
i32.sub
i32.const 18
i32.const 40931
i32.sub
i32.const 36779
i32.add
i32.and
i32.const 19653
i32.xor
i32.const 18762
i32.const 61387
i32.sub
i32.const 28802
i32.and
i32.const 10760
i32.and
i32.const 64150
i32.const 31717
i32.add
i32.and
i32.xor
i32.or
i32.const 15746
i32.const 34874
i32.add
i32.const 60927
i32.sub
i32.const 12311
i32.xor
i32.or
i32.const 42983
i32.or
return
end
"""
# s = asm(code)
# print(s.encode("hex"))
# print(dis(s))
data = "41f3800141fe87036a41a52a41ce8d0241fe98036c41eac1006c6a7341d9f801734180f30241becc026b724183f900419294036b41cf8d016a41ddce0041d2ac0341a4e3036c6a41aac40241a8d0016c6a41e585037173720f0b".decode(
"hex")
print(dis(data))
print(eval(dis(data)))
# main()
import {Component, ElementRef, Input, OnDestroy, OnInit, ViewChild} from '@angular/core';
import {Dimension, IRenderable} from '../../../../model/IRenderable';
import {GridMedia} from '../GridMedia';
import {SearchTypes} from '../../../../../../common/entities/AutoCompleteItem';
import {RouterLink} from '@angular/router';
import {Thumbnail, ThumbnailManagerService} from '../../thumbnailManager.service';
import {Config} from '../../../../../../common/config/public/Config';
import {PageHelper} from '../../../../model/page.helper';
import {PhotoDTO, PhotoMetadata} from '../../../../../../common/entities/PhotoDTO';
@Component({
selector: 'app-gallery-grid-photo',
templateUrl: './photo.grid.gallery.component.html',
styleUrls: ['./photo.grid.gallery.component.css'],
providers: [RouterLink]
})
export class GalleryPhotoComponent implements IRenderable, OnInit, OnDestroy {
@Input() gridPhoto: GridMedia;
@ViewChild('img') imageRef: ElementRef;
@ViewChild('info') infoDiv: ElementRef;
@ViewChild('photoContainer') container: ElementRef;
thumbnail: Thumbnail;
keywords: { value: string, type: SearchTypes }[] = null;
infoBar = {
marginTop: 0,
visible: false,
background: 'rgba(0,0,0,0.0)'
};
animationTimer: number = null;
readonly SearchTypes: typeof SearchTypes;
searchEnabled = true;
wasInView: boolean = null;
constructor(private thumbnailService: ThumbnailManagerService) {
this.SearchTypes = SearchTypes;
this.searchEnabled = Config.Client.Search.enabled;
}
get ScrollListener(): boolean {
return !this.thumbnail.Available && !this.thumbnail.Error;
}
get Title(): string {
if (Config.Client.Other.captionFirstNaming === false) {
return this.gridPhoto.media.name;
}
if ((<PhotoDTO>this.gridPhoto.media).metadata.caption) {
if ((<PhotoDTO>this.gridPhoto.media).metadata.caption.length > 20) {
return (<PhotoDTO>this.gridPhoto.media).metadata.caption.substring(0, 17) + '...';
}
return (<PhotoDTO>this.gridPhoto.media).metadata.caption;
}
return this.gridPhoto.media.name;
}
ngOnInit() {
this.thumbnail = this.thumbnailService.getThumbnail(this.gridPhoto);
const metadata = this.gridPhoto.media.metadata as PhotoMetadata;
if ((metadata.keywords && metadata.keywords.length > 0) ||
(metadata.faces && metadata.faces.length > 0)) {
const names: string[] = (metadata.faces || []).map(f => f.name);
this.keywords = names.filter((name, index) => names.indexOf(name) === index)
.map(n => ({value: n, type: SearchTypes.person}))
.concat((metadata.keywords || []).map(k => ({value: k, type: SearchTypes.keyword})));
}
}
ngOnDestroy() {
this.thumbnail.destroy();
if (this.animationTimer != null) {
clearTimeout(this.animationTimer);
}
}
isInView(): boolean {
return PageHelper.ScrollY < this.container.nativeElement.offsetTop + this.container.nativeElement.clientHeight
&& PageHelper.ScrollY + window.innerHeight > this.container.nativeElement.offsetTop;
}
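  // Lazy thumbnail loading: toggle the thumbnail's visibility as the photo container scrolls in and out of the viewport.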
onScroll() {
if (this.thumbnail.Available === true || this.thumbnail.Error === true) {
return;
}
const isInView = this.isInView();
if (this.wasInView !== isInView) {
this.wasInView = isInView;
this.thumbnail.Visible = isInView;
}
}
getPositionText(): string {
if (!this.gridPhoto || !this.gridPhoto.isPhoto()) {
return '';
}
return (<PhotoDTO>this.gridPhoto.media).metadata.positionData.city ||
(<PhotoDTO>this.gridPhoto.media).metadata.positionData.state ||
(<PhotoDTO>this.gridPhoto.media).metadata.positionData.country;
}
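  // Slide the info bar in on hover: darken the background, then measure the info div and move the bar up by its height (retrying once if the div has not rendered yet).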
mouseOver() {
this.infoBar.visible = true;
if (this.animationTimer != null) {
clearTimeout(this.animationTimer);
}
this.animationTimer = window.setTimeout(() => {
this.infoBar.background = 'rgba(0,0,0,0.8)';
if (!this.infoDiv) {
this.animationTimer = window.setTimeout(() => {
if (!this.infoDiv) {
this.infoBar.marginTop = -50;
return;
}
this.infoBar.marginTop = -this.infoDiv.nativeElement.clientHeight;
}, 10);
return;
}
this.infoBar.marginTop = -this.infoDiv.nativeElement.clientHeight;
}, 1);
}
mouseOut() {
if (this.animationTimer != null) {
clearTimeout(this.animationTimer);
}
this.animationTimer = window.setTimeout(() => {
this.infoBar.marginTop = 0;
this.infoBar.background = 'rgba(0,0,0,0.0)';
if (this.animationTimer != null) {
clearTimeout(this.animationTimer);
}
this.animationTimer = window.setTimeout(() => {
this.infoBar.visible = false;
}, 500);
}, 100);
}
/*
onImageLoad() {
this.loading.show = false;
}
*/
public getDimension(): Dimension {
if (!this.imageRef) {
return <Dimension>{
top: 0,
left: 0,
width: 0,
height: 0
};
}
return <Dimension>{
top: this.imageRef.nativeElement.offsetTop,
left: this.imageRef.nativeElement.offsetLeft,
width: this.imageRef.nativeElement.width,
height: this.imageRef.nativeElement.height
};
}
}
|
SNHG3 Functions as miRNA Sponge to Promote Breast Cancer Cells Growth Through the Metabolic Reprogramming
Cancer-associated fibroblasts (CAFs) are an important ingredient in the tumor microenvironment. The dynamic interplay between CAFs and cancer cells plays essential roles during tumor development and progression. However, the mechanisms of intercellular communication between CAFs and cancer cells remain largely unknown. We characterized exosomes secreted from breast cancer patient-derived CAFs by transmission electron microscopy. The expression of SNHG3, miR-330-5p, and PKM (Pyruvate Kinase M1/M2) was examined by real-time QPCR and immunoblot. The function of SNHG3 on the growth and metabolism of tumor cells was assessed by CCK8 and mitochondrial oxygen consumption assays. The binding between SNHG3, miR-330-5p, and PKM was examined by dual luciferase reporter assays. Orthotopic xenograft breast tumor experiments were performed to determine the function of SNHG3 in vivo. We demonstrated that exosomes secreted from CAFs reprogram the metabolic pathways after tumor cells take up the exosomes. CAF-secreted exosomal lncRNA SNHG3 served as a molecular sponge for miR-330-5p in breast cancer cells. Moreover, PKM could be targeted by miR-330-5p and was controlled by SNHG3 in breast cancer cells. Mechanistically, SNHG3 knockdown in CAF-secreted exosomes suppressed glycolysis metabolism and cell proliferation by the increase of miR-330-5p and decrease of PKM expression in tumor cells. SNHG3 functions as a miR-330-5p sponge to positively regulate PKM expression, inhibit mitochondrial oxidative phosphorylation, increase glycolysis carboxylation, and enhance breast tumor cell proliferation. Overall, SNHG3 could play a major role in the development and progression of breast cancer and supports the therapeutic potential of targeting communication between cancer cells and the tumor microenvironment.
Introduction
Tumor microenvironment (TME) consists of various cell types including cancer-associated fibroblasts (CAFs), tumor cells, and inflammatory cells . In cancer stroma, normal fibroblasts have been transformed into CAFs, which are one of the major components of the TME in many solid tumors. More and more research revealed that CAFs in the TME could not only be recruited but also be activated by paracrine factors released from tumor cells, which enable a molecular communication between CAFs and cancer cells, releasing large numbers of cytokines and regulating tumor growth and metastasis . Although CAFs have been associated with tumor cell proliferation, metabolism, angiogenesis, and metastasis , little is known about their specific roles of intercellular communications with cancer cells and the underlying mechanisms in the development and progression of cancers. Therefore, the investigation of underlying mechanisms between tumor and stromal cells like CAFs in the TME is essential in generating new therapeutic methods that can prevent cancer development and progression .
Tumor cells and CAFs form a dynamic interaction network in the tumor microenvironment. Extracellular vesicles secreted from cancer cells (such as exosomes) have been demonstrated to be an important means of intercellular communication. Exosomes can deliver various biomolecules among different cells by moving through the extracellular space. Cancer-secreted exosomes are associated with cancer progression, angiogenesis, and immune exhaustion. Exosome-encapsulated noncoding RNAs (miRNA or lncRNA) in cancer-secreted exosomes can regulate gene expression and signaling pathways in a post-transcriptional manner in niche cells. Previous studies focused heavily on tumor cell-secreted exosomes; however, CAF-derived exosomes and their functions on tumor cells remain largely elusive. Although increasing evidence suggests that CAFs can secrete exosomal noncoding RNA (ncRNA) to promote the growth of tumor cells, the contribution of CAF-secreted exosomal ncRNA to the development and progression of cancer has not been elucidated. Consequently, there is a growing therapeutic interest in investigating the communication mechanisms between CAFs and tumor cells in the TME and in identifying novel targets for cancer treatment.
Long non-coding RNAs (lncRNAs), a group of RNAs, are more than 200 nucleotides in length without a complete open reading frame (ORF). LncRNAs can be transcribed by RNA polymerase, spliced, and modified in the nucleus, similar to mRNA transcription. Current studies have revealed that the malfunction of lncRNA expression results in the progression of different tumors, because lncRNAs can function as oncogenes or tumor suppressor genes. As lncRNAs are involved in many biological functions such as carcinogenesis, researchers have paid increasing attention to the study of lncRNAs. Small nucleolar RNA host gene 3 (SNHG3) was discovered as a new lncRNA located on 1q35.3. Accumulating evidence demonstrates that the expression of SNHG3 is increased in a variety of tumor tissues such as breast cancer, hepatocellular carcinoma, and colorectal cancer, resulting in increased proliferation and metastasis of tumor cells and poor survival of tumor-bearing patients. LncRNAs can be released into the extracellular space by exosomes or in protein complexes or lipid carriers. Previous studies showed that lncRNA could be secreted into the extracellular space and regulate the function of neighboring or distant cells in a paracrine manner. Therefore, extracellular lncRNAs are considered a novel type of messenger and effector in intercellular cross talk. However, the regulatory roles and the detailed mechanism of CAF-secreted exosomal SNHG3 in breast cancer remain poorly understood. We hypothesized that lncRNAs contained within CAF-secreted exosomes can drive the modulation of metabolic activities in breast cancer cells and that exosomal lncRNAs serve as important factors for re-programming the tumor microenvironment.
In our study, we explored both the expression patterns and biological functions of breast cancer derived CAF-secreted exosomal SNHG3 on breast tumor cells, as well as the molecular mechanism of SNHG3 during the development of breast cancer. The results demonstrated a new metabolic regulatory function of CAF-secreted exosomal lncRNA in breast cancers. More importantly, we provided a novel regulation pathway between tumor cells and tumor microenvironment, which may offer novel targets for cancer therapy.
Materials and Methods
Cell Culture and Transfection
MCF-7 and MD-MBA-453 breast cancer cells were purchased from the American Type Culture Collection (ATCC; Maryland, MD, USA). MCF-7 and MD-MBA-453 cells were respectively cultured in ATCC-formulated Eagle's minimum essential medium and ATCC-formulated Leibovitz's L-15 medium plus 10% fetal bovine serum (FBS; Gibco, Thermo) and antibiotics (100 U/ml penicillin and 100 μg/ml streptomycin sulfate) (HyClone, USA) at 37°C in a humidified incubator with 5% CO 2 . Cells at 75% confluence were harvested for each experiment.
Breast cancer patient-derived fibroblast cells were maintained in Iscove's modified Dulbecco's medium supplemented with 15% FBS. CAFs were seeded in a 15 cm dish until they reached 75% confluence. CAF-secreted exosomes were isolated from the culture medium of CAFs after 48 h, and the exosomes were then added into the medium for continuous culture of breast cancer cells.
Exosome Extraction
Cells were cultured with exosome-depleted serum, which was prepared by centrifugation of FBS at 100,000×g at 4°C. Then, we collected the culture medium and centrifuged it after 72 h of incubation. Floating cells were removed from the medium following a centrifugation at 400×g for 5 min at 4°C. Next, cell debris was further removed from the supernatants by centrifugation at 3000×g for 20 min at 4°C. After filtration of the supernatants through a 0.22-μm filter, exosomes in the supernatants were collected by ultracentrifugation at 110,000×g for 4 h at 4°C using an ultracentrifuge (Beckman). After washing with PBS, exosomes were stored at −80°C for further experiments.
Exosome Size Distribution Measurement
Exosomes were assessed for size distribution using a Nanobrook Omni (Brookhaven). Exosomal sizes and distribution were measured after they were resuspended and diluted in PBS by adding 2 ml of exosomes PBS into the Nanobrook Omni system.
TEM
The morphology of exosome samples was assessed by TEM. First, we prepared and diluted exosomes in PBS and place exosome-containing liquid on the copper grids. Then, the copper grids were dried and the excessive liquid was removed. Next, the samples were stained by 2% phosphotungstic acid (PTA) for 5 min at r.t. and fixed with 2% glutaraldehyde for 5 min. After PBS washing for three times, exosomes were imaged by transmission electron microscope (JEM-1230, Japan).
Exosome Labeling and Uptake by MCF-7 and MD-MBA-453 Cells
Exosomes secreted from the breast-derived CAFs were isolated as described above. After washing with PBS, the exosomes were stained with the PKH67 agent (Sigma) according to the manufacturer's instructions after an ultracentrifugation at 120,000×g for 4 h at 4°C. Exosomes without PKH67 staining, or no exosome addition, were used as the negative controls. To investigate the uptake of exosomes by cancer cells, MD-MBA-453 and MCF-7 cells were seeded in a confocal imaging chamber. After a 24-h culture period, the chamber was washed with PBS three times and cells were cultured in medium containing either PKH67-labeled exosomes or blank control. After a further incubation for 48 h, each confocal chamber was washed with PBS three times and cells were fixed with 4% PFA for 8 min. The DNA was stained using DAPI and washed with PBS another two times. Finally, the uptake of exosomes by tumor cells was imaged with a confocal microscope LSM880 (Carl Zeiss, Germany) and the images were further analyzed using Zen software (Carl Zeiss, Germany).
Viability Assay
The CCK8 assay was performed using the CCK8 detection kit (Dojindo) for the assessment of cell viability according to the manufacturer's instructions. Briefly, cells were plated in a 96-well plate (Corning) for each condition. After incubation with the CCK8 assay solution for 2 h, the absorbance was recorded at a wavelength of 450 nm.
OCR and ECAR Measurements
Oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were determined with XF metabolic analyzers (Seahorse, Agilent). First, we plated cells in Seahorse 24-well microplates. After the cells reached 70% confluence, culture medium with the indicated conditions was added to each well. Then, the medium was replaced with 800 μL of assay medium after 12 h of incubation at 37°C with 5% CO2. The OCR was measured after another 1 h of incubation at 37°C without 5% CO2. The measurement of ECAR was similar to the OCR assay. Each OCR or ECAR value was normalized by cellular protein mass.
RNA Extraction and Real-Time QPCR
RNA was isolated from MD-MBA-453 cells by Trizol (Thermo), and cDNA was generated by reverse transcription kit (TaKaRa, China). 36B4 (human) was selected as an internal control. Real time QPCR was processed by the SYBR Green Mix (Yeasen, China). Data was acquired and analyzed by StepOnePlus Real-Time PCR System (Thermo).
Generation of Breast Cancer Mouse Model
Female BALB/c nude mice weighing 20 g were acquired from XinJiang Medical University (XinJiang, China). To generate the breast cancer-bearing mouse model, 5 × 10^6 MD-MBA-453 cells alone or stably expressing anti-miR-330-5p (MD-MBA-453/anti-miR-330-5p or MD-MBA-453/control), or mixed with 1 × 10^6 CAFs expressing shSNHG3 or control (CAF/shSNHG3 or CAF/control), that were mixed at 1:1 with Matrigel (Thermo) were inoculated in the right, fourth mammary fat pads of the nu/nu mice. Upon reaching average tumor volumes of 50 mm^3, the animals were randomly separated into the indicated groups. Intratumoral pH was measured by a pH meter (Thermo). The animal experiment was approved by the Ethics Committee of Xinjiang Medical University (No: XJMU20190012).
Measurements of Lactate and Acetate Levels
The metabolic profiling including the levels of lactate and acetate was examined by kits according to the manufacturers' protocols, including acetate assay kit (Thermo) and the lactate assay kit (Thermo).
Statistics
All results are displayed as mean ± s.e.m. from at least three independent experiments. Significance analyses were performed using Prism software (GraphPad 8). The unpaired Student's t test and one-way analysis of variance (ANOVA) were used to analyze significant differences between groups. *P < 0.05, **P < 0.01, and ***P < 0.001 were considered significant.
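As a minimal illustration of the comparison described above (with made-up numbers, not the study's data), an unpaired two-sample t test between two experimental groups can be run as follows:

from scipy import stats

# Hypothetical replicate measurements (e.g., relative lactate levels) for two groups.
control = [1.00, 0.95, 1.08]
treated = [1.62, 1.71, 1.55]

t_stat, p_value = stats.ttest_ind(control, treated)  # unpaired Student's t test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 would be reported as significant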
The Uptake of CAF-Secreted Exosomes by Breast Tumor Cells
To investigate whether CAFs secrete exosomes, and the uptake of CAF-secreted exosomes by cancer cells, exosomes were extracted from the culture medium of CAFs obtained from a breast cancer-bearing patient. Transmission electron microscopy was used to examine the morphology of the exosomes secreted by CAFs, and the exosomes showed a round shape with the typical cup-shaped structure (Fig. 1a).
Consistent with previous studies, the particle size of CAF-secreted exosomes ranged from 40 to 100 nm (Fig. 1b). Then, we confirmed the expression of the exosome marker CD63 by western blot (Fig. 1c). To examine if CAF-secreted exosomes were taken up by breast cancer cells (MCF-7 and MD-MBA-453), we pre-stained exosomes with PKH green dye and added the exosomes to MCF-7 or MD-MBA-453 cells. Green fluorescent signals were found in MCF-7 or MD-MBA-453 cells by confocal laser scanning microscopy (Fig. 1d, e). The uptake of CAF-secreted exosomes by breast cancer cells was further confirmed by flow cytometry. The results confirmed the uptake of CAF-secreted exosomes by breast tumor cells (Fig. 1f), indicating that exosomes derived from CAFs are taken up into breast cancer cells.
Mitochondrial Function of Breast Tumor Cells Was Inhibited by CAF-Secreted Exosomes
Previous studies showed that CAFs could control tumor cell proliferation; we first verified the effect of CAF-secreted exosomes on tumor cell proliferation. Exosomes secreted into the medium of CAFs from a breast cancer patient were isolated and used to culture breast cancer cells (Fig. 1g). To examine whether CAF-secreted exosomes induce metabolic reprogramming in tumor cells, we cultured MD-MBA-453 cells with CAF-secreted exosomes and detected the oxygen consumption rate (OCR). The results showed that the OCR of both MCF-7 and MD-MBA-453 cells was decreased in the presence of breast CAF-secreted exosomes. To verify whether this reduction of OCR in tumor cells indeed resulted from the internalization of exosomes, we blocked the generation of exosomes with GW4869 (an exosome-release inhibitor) in CAFs. Notably, GW4869 could partially reverse this decrease of OCR in breast cancer cells, indicating that the CAF-secreted exosomes mediated the reduction of OCR in tumor cells (Fig. 1h). We found that the maximal and reserve mitochondrial function of tumor cells was markedly decreased when treated with exosomes, suggesting that mitochondrial respiratory capacity was suppressed by CAF-secreted exosomes (Fig. 1i). Moreover, ECAR (extracellular acidification rate) increased significantly in breast tumor cells when co-cultured with CAF-secreted exosomes (Fig. 1j). Lactate levels also increased significantly in the breast tumor cells in the presence of exosomes (Fig. 1k). Taken together, our results showed that CAF-secreted exosomes decreased mitochondrial function and reprogrammed metabolic pathways of breast tumor cells.
CAF-Secreted Exosomal SNHG3 Promoted Proliferation and Downregulated Mitochondrial Role in Breast Tumor Cells
Noncoding RNAs (e.g., lncRNAs) in exosomes can function as a cross-talk mechanism between stromal and tumor cells. As we had demonstrated that the exosomes secreted from CAFs could promote proliferation and reprogram metabolism of breast tumor cells, we further examined whether the effect is dependent on the CAF-secreted exosomal lncRNA SNHG3. We first examined both the secreted and intracellular expression levels of SNHG3 in CAFs. Breast cancer-derived CAFs secreted significantly more SNHG3 than normal breast MCF10A cells (Fig. 2a). Then, loss-of-function and gain-of-function assays were carried out to investigate the biological function of SNHG3 during the growth of breast cancer cells. CCK8 assays were utilized to identify the alteration of cell proliferation in MDA-MB-453 cells treated with exosomes secreted from CAFs transfected with si-SNHG3 or with the SNHG3 overexpression plasmid. When compared with the si-control group, MDA-MB-231 cells treated with exosomes secreted from CAFs transfected with si-SNHG3 exhibited a significant inhibition of proliferation in the CCK8 assays (Fig. 2b). Consistent with this, MDA-MB-453 cells directly transfected with the SNHG3 overexpression plasmid showed markedly enhanced proliferation compared with the control group (Fig. 2c). Furthermore, treatment with exosomes secreted from CAFs transfected with si-SNHG3 rescued the enhancement of lactate production (Fig. 2d), while overexpression of SNHG3 in MDA-MB-453 cells significantly increased lactate production (Fig. 2e). Then, we examined mitochondrial respiration in breast cancer cells with or without exosomes secreted from CAFs transfected with si-SNHG3. The OCR of MDA-MB-453 cells decreased when treated with CAF-secreted exosomes and could be rescued in the presence of exosomes secreted from CAFs transfected with si-SNHG3 (Fig. 2f). Consistent with the decreased OCR upon treatment with CAF-secreted exosomes, overexpression of SNHG3 in MDA-MB-453 cells significantly suppressed the oxygen consumption rate (Fig. 2g). Furthermore, we examined the glycolysis level in breast tumor cells co-cultured with exosomes secreted from CAFs transfected with si-SNHG3. The increase in ECAR of MDA-MB-453 cells treated with CAF-secreted exosomes could be rescued by the knockdown of SNHG3 in CAFs (Fig. 2h). The increase of ECAR when MDA-MB-453 cells overexpressed SNHG3 (Fig. 2i) further confirmed that the alteration of glycolysis metabolism in breast cancer cells is dependent on CAF-secreted exosomal SNHG3. Collectively, these results indicated that CAF-secreted exosomal SNHG3 could promote proliferation and downregulate mitochondrial function in breast cancer cells.
CAF-Secreted Exosomal SNHG3 Regulates miR-330 Expression in Breast Tumor Cells
Previous studies demonstrated that lncRNAs can serve as molecular sponges or ceRNAs to control the expression and function of miRNAs. To illuminate the specific mechanism by which SNHG3 exerts its oncogenic function in tumor cells, we analyzed the potential targets of SNHG3 using bioinformatics databases (miRBase and starBase). Bioinformatics analysis indicated that SNHG3 contains binding sequences complementary to miR-330-5p (Fig. 3a). To further determine whether SNHG3 regulates miR-330 at the post-transcriptional level, dual luciferase assays were performed. The results revealed that miR-330 overexpression significantly decreased the luciferase signal of wild-type SNHG3 in MDA-MB-453 cells (Fig. 3b). Conversely, suppression of miR-330 significantly enhanced the luciferase signal of wild-type SNHG3 in MDA-MB-453 cells, while no such effect was observed with the SNHG3 mutant (Fig. 3c). Moreover, real-time quantitative PCR showed that miR-330 expression was significantly decreased by overexpression of SNHG3, while no significant alteration was observed with the SNHG3-mutant treatment in MDA-MB-453 cells (Fig. 3d). Exosomes secreted from CAFs transfected with si-SNHG3 markedly increased miR-330 expression in MDA-MB-453 cells (Fig. 3e). Taken together, our results revealed that SNHG3 suppressed the expression of miR-330 by serving as a molecular sponge in vitro.
The Modulation of PKM Expression by SNHG3 in Breast Tumor Cells
Pyruvate kinase (PKM) is a key glycolytic enzyme that converts phosphoenolpyruvate to pyruvate, and the M2 isoform of pyruvate kinase (PKM2) is crucial for cancer cell metabolism, proliferation, and metastasis. As the expression of PKM has been reported to be regulated by various miRNAs at the post-transcriptional level, we investigated the effect of the SNHG3/miR-330 axis on PKM expression and on the metabolic rewiring and proliferation of breast tumor cells. First, we screened the targeting sequences between miR-330-5p and PKM mRNA using bioinformatics databases (miRBase and starBase), which showed that miR-330-5p can bind to the 3′UTR of PKM mRNA (Fig. 4a). Western blotting showed that miR-330 overexpression and SNHG3 knockdown significantly decreased PKM protein expression in breast tumor cells, while silencing of miR-330 and overexpression of SNHG3 markedly increased the PKM level in MDA-MB-453 cells (Fig. 4b, c). In addition, exosomes secreted from CAFs also increased PKM expression, and exosomes secreted from CAFs transfected with si-SNHG3 reversed the accumulation of PKM protein (Fig. 4d). In luciferase reporter assays, miR-330 significantly suppressed the activity of the PKM 3′UTR reporter compared with the miR-control treatment, while co-transfection with SNHG3 markedly rescued the luciferase activity of the PKM 3′UTR reporter plasmid (Fig. 4e). Conversely, co-transfection of MDA-MB-453 cells with anti-miR-330 and the PKM 3′UTR plasmid enhanced the luciferase reporter signal, which was reversed by further transfection with si-SNHG3 (Fig. 4f). Collectively, our data revealed that miR-330 targets PKM, which is positively modulated by SNHG3.
Knockdown of CAF-Secreted Exosomal SNHG3 Inhibited Breast Cancer Cell Proliferation Through Increasing miR-330 and Decreasing PKM Expression
To examine the influence of CAF-secreted exosomal SNHG3 on tumor cell proliferation through modulation of miR-330 and PKM expression, MDA-MB-453 cells were treated with exosomes secreted from CAFs transfected with si-SNHG3 or si-control, concurrently with transfection of anti-miR-330, anti-miR-control, PKM, or empty vector. Immunoblotting showed that inhibition of miR-330 (Fig. 5a) and overexpression of PKM (Fig. 5b) significantly rescued the suppression of PKM caused by knockdown of CAF-secreted exosomal SNHG3 in MDA-MB-453 cells. Furthermore, silencing of CAF-derived exosomal SNHG3 suppressed the expression of metabolism-related proteins, including PFKM in the glycolysis pathway and IDH2 in the Krebs cycle (Fig. 5c), suggesting that CAF-secreted exosomal SNHG3 may reprogram the metabolism of breast cancer cells through multiple molecular pathways. CCK8 assays revealed that silencing of miR-330 (Fig. 5d) and overexpression of PKM (Fig. 5e) significantly rescued the suppression of cancer cell growth induced by knockdown of CAF-secreted exosomal SNHG3 in MDA-MB-453 cells. Together, these results suggest that knockdown of CAF-secreted exosomal SNHG3 inhibits breast cancer cell proliferation by increasing miR-330 and decreasing PKM expression.
SNHG3 Knockdown in CAF-Derived Exosomes Inhibited Breast Cancer by the Upregulation of miR-330 and the Downregulation of PKM
To determine whether CAF-secreted exosomal SNHG3-mediated cancer cell reprogramming promotes tumor growth in vivo, we transplanted CAFs isolated from a breast cancer patient, stably expressing sh-SNHG3 or sh-control, together with MDA-MB-453 cells expressing anti-miR-330 or anti-miR-control, into the mammary fat pad of female athymic nude mice. Co-injection of breast cancer cells expressing anti-miR-330 with control CAFs (sh-control) significantly increased tumor growth compared with co-injection of cells expressing anti-miR-control (Fig. 6a). In addition, tumor growth was significantly inhibited by knockdown of SNHG3 in CAFs compared with the CAFs/sh-control group. Moreover, SNHG3 silencing significantly suppressed the enhanced tumor growth mediated by inhibition of miR-330. Tumors containing CAFs/sh-control exhibited a significant decline in intratumoral pH compared with transplantation of MDA-MB-453 cells alone, and suppression of SNHG3 in CAFs reversed this pH decline (Fig. 6b). The pH decline in the group co-transplanted with CAFs/sh-control and breast tumor cells was consistent with higher levels of metabolites including lactate and acetate (Fig. 6c), as well as increased tumor cell growth detected by Ki67 immunostaining (Fig. 6d, e). These effects were all rescued by inhibition of SNHG3 in CAFs. In summary, our results demonstrated that knockdown of CAF-secreted exosomal SNHG3 inhibited breast cancer glycolysis and growth in vivo by upregulating miR-330 and downregulating PKM.
Discussion
Cancer-associated fibroblasts (CAFs) play an important role in the tumor microenvironment of various types of solid cancers. Previous studies focused mainly on cell-autonomous processes during tumor progression rather than on the communication between different cell types in the tumor microenvironment. Accumulating research has revealed that exosomes can enhance communication between tumor cells and cancer-associated fibroblasts in the tumor niche and have been considered an important cross-talk pattern among different cell types in the TME. In this study, we provided evidence that the expression of SNHG3 was abnormally increased in CAFs derived from breast cancer patients. Knockdown of CAF-secreted exosomal SNHG3 could inhibit the growth of breast tumor cells. Mechanistically, SNHG3 served as a molecular sponge of miR-330 to regulate the expression of PKM in breast tumor cells. These findings provide the first insights into the biological function and molecular regulation of CAF exosomal SNHG3 in breast cancer.
(Fig. 6 legend, continued: b Tumors were isolated and subjected to 1D NMR metabolic analysis (n = 5). c Intratumoral pH in the harvested tumors was measured (n = 5). d, e Representative Ki67 immunostaining and the percentage of Ki67-positive tumor cells (n = 7). f Working model of the metabolic reprogramming of breast cancer cells by CAF-secreted exosomes via the SNHG3-miR-330-PKM mechanism identified herein.)
SNHG3 has been identified relatively recently as a lncRNA; consequently, investigations of its functions in various tumors remain limited. SNHG3 has been reported to be overexpressed in liver cancer, breast cancer, and colorectal cancer, and its overexpression is associated with poor survival and prognosis in tumor-bearing patients. Importantly, SNHG3 was identified as a potential oncogene in breast cancer in previous studies, which focused on the cancer cell-autonomous function of SNHG3. Surprisingly, we found high expression of SNHG3 in exosomes secreted by CAFs isolated from breast cancer patients, which may be important in the progression of breast tumors. However, the functional roles of CAF-secreted SNHG3 in breast tumors remained unknown. Our results indicated that knockdown of CAF-secreted exosomal SNHG3 inhibited cell growth and glycolytic metabolism of breast tumor cells in vitro and in vivo. Accumulating studies suggest that small nucleolar RNA host genes (SNHGs) can function as competing endogenous RNAs (ceRNAs) that regulate cancer cells by sponging miRNAs, such as miR-182-5p, miR-186-5p, and miR-101. Therefore, we asked whether CAF-secreted exosomal SNHG3 could regulate miRNAs in cancer cells. By bioinformatics analysis and experimental validation, we showed that SNHG3 sponges miR-330 to promote the growth and regulate the metabolism of breast cancer cells. Notably, anti-miR-330 could only partially rescue the effect of CAF-secreted exosomal SNHG3 treatment, indicating that exosomal SNHG3 may also be involved in the modulation of other miRNAs. Further investigation, including the identification of novel targets of SNHG3, will be needed to define the comprehensive role of CAF-secreted exosomal SNHG3 in breast cancer.
To determine the mechanism by which the SNHG3/miR-330 axis regulates proliferation and metabolic reprogramming, we performed bioinformatic analysis and predicted that miR-330 targets PKM. Pyruvate kinase muscle isozyme M2 (PKM2), one isoform of PKM, is an important glycolytic enzyme that catalyzes the final step of glycolysis and is essential for cancer metabolism and proliferation. Knockdown of PKM inhibits proliferation and leads to apoptosis in several different types of tumor cells. In addition, PKM is essential for metabolic reprogramming and for the regulation of growth, apoptosis, and metastasis of cancer cells. Our study demonstrated that the SNHG3/miR-330 signaling axis regulates the proliferation and metabolism of breast tumor cells by modulating PKM at the post-transcriptional level, providing potential therapeutic targets for inhibiting PKM in cancer treatment. Although we have shown the role of the CAF-secreted exosomal SNHG3 signaling pathway in breast cancer development in vitro and in vivo, genetic models are still needed to provide direct evidence. Therefore, knockout mouse models with conditional deletion of SNHG3 or its target miR-330 have been generated and are under investigation in our lab to define the exact role of the SNHG3/miR-330 signaling axis in breast cancer progression.
In conclusion, our results provide novel insights into the intercellular cross talk between stromal and cancer cells. We demonstrated that CAF-secreted exosomes can reprogram cancer cell metabolism through their cargo of exosomal non-coding RNA. More importantly, our results support the therapeutic potential of targeting exosome-mediated cross talk between cancer and stromal cells in cancer treatment.
Funding Information This work was supported by The Natural Science Foundation of Xinjiang Uygur Autonomous Region (2019D01C258).
Compliance with Ethical Standard The animal experiment was approved by the Ethics Committee of Xinjiang Medical University (No: XJMU20190012).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
Clustering-Based Interpretation of Deep ReLU Network
Amongst others, the adoption of Rectified Linear Units (ReLUs) is regarded as one of the ingredients of the success of deep learning. ReLU activation has been shown to mitigate the vanishing gradient issue, to encourage sparsity in the learned parameters, and to allow for efficient backpropagation. In this paper, we recognize that the non-linear behavior of the ReLU function gives rise to a natural clustering when the pattern of active neurons is considered. This observation helps to deepen our understanding of the learning mechanism of the network; in fact, we demonstrate that, within each cluster, the network can be fully represented as an affine map. As a consequence, we are able to recover an explanation, in the form of feature importance, for the predictions made by the network on the instances belonging to the cluster. Therefore, the proposed methodology is able to increase the level of interpretability of a fully connected feedforward ReLU neural network, downstream of the fitting phase of the model, without altering the structure of the network. A simulation study and an empirical application to the Titanic dataset show the capability of the method to bridge the gap between the algorithm optimization and the human understandability of black box deep ReLU networks.
Introduction
The "black box" nature of deep neural network models is often a limitation for high-stakes applications such as diagnostics and autonomous driving, since the reliability of the model predictions can be affected by the incompleteness of the optimization problem formalization. Recent works have shown that an adequate level of interpretability can enforce the trustworthiness of a neural network (see ). However, this is generally difficult to achieve without altering the mechanism of deep learning. The topic has become particularly relevant in the last decades, since technological development and data availability have led to the adoption of more and more complex models, widening the gap between performance on train/test data and interpretability.
Much work has been done to explain and interpret the models developed by AI in a human-comprehensible manner. The main reason behind these efforts is that human experience and the capacity for abstraction allow the process of model decisions to be monitored in a sound way, mitigating the risks of data-driven models. On the other hand, recent developments in deep neural networks have brought enormous progress in artificial intelligence across various sectors. In particular, ReLU functions have been shown to mitigate the vanishing gradient issue, encourage sparsity in the learned parameters, and allow for efficient backpropagation.
In this paper, we tackle the explainability problem of deep ReLU networks by characterizing the natural predisposition of the network to partition the input dataset into different clusters. The direct consequence of this clustering is that, within each cluster, the network can be simplified and represented as an affine map. As a result, the deep network provides a notion of cluster-specific importance for each feature, which is easier to interpret.
Bibliographic review
Although the benefits and expressiveness of rectifier networks have been widely investigated, e.g. in , to our knowledge the cluster analysis, and the consequent interpretability of the networks obtained by modelling the pattern of active neurons, has not been discussed in the literature.
On the other hand, many model-specific methodologies can be applied to gain interpretability in neural network models (see and for a review). Feature importance is one of the most widely used strategies to obtain local explainability from an opaque machine learning model. Permutation feature importance methods evaluate the importance of a feature through the variation of a loss metric when the feature's values are permuted on a set of instances in the training or validation set; the approach was introduced for random forests in and for neural networks in . Other methods, such as class model visualization , compute the partial derivative of the score function with respect to the input, and introduce an expert distribution for the input for activation maximization. In , the authors introduce DeepLIFT, which computes discrete gradients with respect to a baseline point by backpropagating the scoring difference through each unit. Integrated gradients accumulate the gradients with respect to the inputs along the path from a given baseline to the instance. Finally, a set of well-known methods called additive feature attribution methods, defined in , rely on the redistribution of the predicted value f(x) over the input features. They are designed to mimic the behaviour of a predictive function f with a surrogate Boolean linear function g.
Let us denote by $W_i$, $i \in \{1,\dots,p\}$, the weight matrices associated with the $p$ layers of a given multilayer network with predictor $\hat f$ (of a $q$-dimensional target variable), and collect the matrices $\hat W_i$ in $\hat W = \{\hat W_1,\dots,\hat W_p\}$ (the hat indicates that the bias term is incorporated in the variable). For any input $u \in \mathbb{R}^d$, the initial Directed Acyclic Graph (DAG) $G$ of the deep network is reduced to $G_u$, which keeps only the units corresponding to active neurons and the corresponding arcs; a neuron is considered active for a particular pattern if its input falls in the right, linear part of the ReLU domain. This DAG is clearly associated with the given set of weights $\hat W$. We can formally state this pruning for the given neural network, characterized by $G$, paired with input $u$ and weights $\hat W$, as follows: since all retained neurons operate in the "linear regime" (affine functions), the output, being a composition of affine functions, is itself an affine function.
More precisely, for an input $u$ in a given cluster, let $Y_i$, $i = 1,\dots,p-1$, be the diagonal $0/1$ matrices encoding the pattern of active neurons at layer $i$, chosen so that, for every input of the cluster, the ReLU at layer $i$ acts linearly: $\mathrm{ReLU}(W_i z + b_i) = Y_i\,(W_i z + b_i)$. Then the network restricted to the cluster is the affine map
$$\hat f(u) = \Omega(p)\,u + b(p), \tag{2}$$
with
$$\Omega(p) = W_p Y_{p-1} W_{p-1} \cdots Y_1 W_1, \qquad b(p) = \sum_{i=1}^{p} \Big(\prod_{j=i+1}^{p} W_j Y_{j-1}\Big) b_i, \tag{3}$$
where, by convention, $W_{p+1} := I$. (We stress the dependence of $\Omega$ and $b$ on the number of layers $p$ since it will be useful in the proof.)
Proof. The proof is given by induction on p.
-Basis: For $p = 1$ we have $\hat f(u) = W_1 u + b_1$ and $\Omega(1) = W_1$, which confirms (2); considering $W_2 := I$, we also have $b(1) = b_1$, in accordance with (3).
-Induction step: By the induction hypothesis, a network with $p-1$ layers restricted to the cluster is defined by the affine transformation $\Omega(p-1)\,u + b(p-1)$. Hence, composing with the $p$-th layer, which acts affinely on the active neurons, $\hat f(u) = W_p Y_{p-1}\big(\Omega(p-1)\,u + b(p-1)\big) + b_p$, so that $\Omega(p) = W_p Y_{p-1}\,\Omega(p-1)$ and $b(p) = W_p Y_{p-1}\,b(p-1) + b_p$, which completes the proof.
Now let $U \subset \mathbb{R}^d$ be the input space. The given deep net yields a partition of $U$ associated with the following equivalence relation: $u \sim v$ if and only if $G_u = G_v$, i.e. the two inputs activate exactly the same set of neurons. We denote by $[u]_\sim = \{v \in U : v \sim u\}$ the equivalence class of $u$, and by $U/\!\sim$ the corresponding quotient set. Hence, $[u]_\sim$ is the equivalence class associated with the representer $u$ which, in turn, corresponds to $G_u$. Notice that, as a consequence, $[u]_\sim$ is fully defined by the set of neurons of the active (pruned) network.
Feature Importance Explanation. The characterization above allows us to assign a matrix $\Omega$ to each cluster of the network. In this way, the matrix represents the network, for the patterns of the specific cluster, as an affine map.
For simplicity, if we consider a problem where the output of the network is scalar, the matrix Ω reduces to a d-dimensional effective vector ω whose components can be interpreted as the feature importance of the cluster's solution.
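To make this construction concrete, the following Python sketch (ours, not code from the paper) computes, for each input to a trained Keras Sequential model made of Dense layers with ReLU hidden activations and a single sigmoid output, the pattern of active neurons and the effective vector ω of the affine map valid on its cluster. The affine map is expressed on the pre-sigmoid logit, and all helper names are hypothetical.

import numpy as np
from collections import defaultdict

def effective_affine_map(weights, x):
    """Affine map (pattern, omega, c) realized by the ReLU net on the activation
    region of x. weights: list of (W, b) pairs in the Keras convention z = a @ W + b;
    the last pair is the output layer, whose sigmoid is ignored (logit scale)."""
    d = x.shape[0]
    A, c, a = np.eye(d), np.zeros(d), x.astype(float)   # running affine map and activation
    pattern = []
    for W, b in weights[:-1]:                            # hidden ReLU layers
        z = a @ W + b
        mask = (z > 0).astype(float)                     # active-neuron pattern of this layer
        pattern.append(z > 0)
        A, c, a = (A @ W) * mask, (c @ W + b) * mask, np.maximum(z, 0.0)
    W, b = weights[-1]                                   # linear part of the output layer
    A, c = A @ W, c @ W + b
    return tuple(np.concatenate(pattern)), A.ravel(), c

def cluster_feature_importance(model, X):
    """Group the rows of X by activation pattern; return, per cluster, the effective
    vector omega (feature importance) and the indices of the member instances."""
    weights = [lay.get_weights() for lay in model.layers if lay.get_weights()]
    clusters = defaultdict(lambda: {"indices": [], "omega": None})
    for i, x in enumerate(X):
        pattern, omega, _ = effective_affine_map(weights, x)
        clusters[pattern]["indices"].append(i)
        clusters[pattern]["omega"] = omega               # identical for every member of the cluster
    return clusters

Since all instances sharing an activation pattern share the same masks, the effective vector is a property of the cluster, not of the single instance.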
Simulation study
In this section, we report the simulation studies carried out on the ReLU network architecture applied to a Boolean artificial dataset, in order to assess the power of the clustering-based interpretation.
We consider a set of 10 Boolean feature variables $v_i$, $i \in \{1,\dots,10\}$. The first 3 features determine the target variable through the following relation:
$$y = (v_1 \wedge v_3) \vee (v_2 \wedge \neg v_3), \tag{5}$$
whereas the other 7 features introduce noise. The rationale of the formula is that the feature $v_3$ splits the dataset into two groups ($v_3$ and $\neg v_3$), each ruled by a different term of the Boolean formula, involving either $v_1$ or $v_2$ respectively. In the following, we investigate whether the network clustering is able to recognize the different terms of the formula.
We simulated 100,000 samples and exploited a two-hidden-layer Multilayer Perceptron (MLP) with 4 and 2 neurons, respectively, with ReLU activation functions and an output neuron with a sigmoid activation function. The cross-entropy loss is minimized via the Adam stochastic optimizer with a step size of 0.01 for 10 epochs and a batch size of 100. An activity regularizer with coefficient 0.02 is added to the empirical loss. At the end of training, the network solves the problem with an accuracy of 100%. The experiments are implemented with Keras in the Python environment on a regular CPU.
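For reproducibility, a minimal Keras sketch of this setup is given below (ours, not the paper's code; the text does not specify the regularizer type, so an L1 activity regularizer with coefficient 0.02 is assumed, and variable names are illustrative).

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=(100_000, 10))               # 10 Boolean features, v1..v10 in columns 0..9
y = (V[:, 0] & V[:, 2]) | (V[:, 1] & (1 - V[:, 2]))      # Eq. (5): (v1 AND v3) OR (v2 AND NOT v3)

model = keras.Sequential([
    layers.Dense(4, activation="relu", input_shape=(10,),
                 activity_regularizer=regularizers.l1(0.02)),
    layers.Dense(2, activation="relu",
                 activity_regularizer=regularizers.l1(0.02)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(V, y, epochs=10, batch_size=100, verbose=0)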
The analysis of the network, considering the possible patterns of active neurons, gives rise to three clusters.
1. A trivial cluster, characterized by all neurons being inactive, which includes all the patterns predicted as 0.
The other two clusters activate both neurons of the last hidden layer but a different neuron of the first hidden layer. In Figure 1 we report the table summarizing the 8 possible assignments of the Boolean function restricted to the first 3 features, as well as the bar plot of the feature importances for the two non-trivial clusters. As explained above, the importance of a feature is given by the corresponding coefficient of the effective vector for that cluster.
2. The second cluster, represented by the blue bars in Fig. 1, includes patterns predicted as 1 and characterized either by $v_1 = v_2 = v_3 = 1$ or by $v_1 = 1$, $v_2 = 0$, $v_3 = 1$. We can argue that this cluster takes charge of the patterns predicted as 1 due to the first term of Eq. (5), i.e. $v_1 \wedge v_3$. Coherently with this reading, the feature importance given by the effective vector is zero for the feature $v_2$, which does not appear in the first term of Eq. (5), while the coefficient of the effective vector is positive for the feature $v_1$.
3. In a similar way, the third cluster, denoted with the orange color, represents the term $v_2 \wedge \neg v_3$ of Eq. (5), since the patterns belonging to it are predicted as 1 and are characterized either by $v_1 = v_2 = 1$, $v_3 = 0$ or by $v_1 = 0$, $v_2 = 1$, $v_3 = 0$. As expected, the feature importance of $v_1$ is zero, whereas for $v_2$ the feature importance is positive.
From the simulation study, we note that the clustering-based interpretation of the ReLU network helps to achieve a more profound understanding of the meaning of the solution. In particular, the methodology disentangles the complexity of the solution into a set of more comprehensible linear solutions, each valid within a specific cluster.
As a further confirmation of the usefulness of the cluster-based interpretation, when the activity regularizer is removed the network keeps solving the task but gives rise to 8 clusters, one for each combination of the first three features. Based on the cluster interpretation, we conclude that, in this case, the network has chosen a less abstract way to solve the problem.
Titanic dataset
In this section, we report the experimental analysis performed on the well-known Titanic dataset 8 . Each sample represents a passenger with specific features, and the binary target variable indicates whether the person survived the Titanic disaster.
Standard data cleaning and feature selection procedures 9 are implemented. In Table 1 we report a brief description of the features. Similarly to the previous experiment, we exploit an MLP with two hidden layers (4 and 2 neurons) with ReLU activation functions and an output neuron with a sigmoid activation function. The cross-entropy loss is minimized via the Adam stochastic optimizer with a step size of 0.01 for 10 epochs and a batch size of 100. An activity regularizer with coefficient 0.02 is added to the empirical loss. The experiments are implemented with Keras in the Python environment. The code is freely available at https://github.com/nicolapicchiotti/relu_nn_clustering.
The accuracy of the network is 77% (see ), and the study of the active-neuron patterns provides a partition of the dataset into three clusters, whose feature importances are shown in Figure 2.
(a) The first cluster includes passengers with mixed features and a percentage of survivors equal to 38%. As expected from the univariate exploratory analyses, gender and class show the strongest relationship with survival rate.
(b) The second cluster includes only males belonging to the third class (4% of the overall dataset): the prediction for these passengers is always 0. This cluster confirms the expectation that male gender and third class are relevant features for not surviving. We observe that the feature importances are quite similar to those of cluster (a), except that the "age" feature assumes slightly more relevance.
(c) Finally, in the third cluster the passengers are females belonging to the first class (16% of the overall dataset), and the predicted value is always 1. The feature importances in this case show that high values of the title, age, and the other features contribute to survival, in addition to being female. This cluster helps us understand the solution provided by the network: for instance, the "age" feature has an opposite behavior with respect to the other two clusters, since here the older the woman, the higher the survival probability.
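As a usage sketch (ours), the per-cluster effective vectors can be paired with the cleaned feature columns and displayed as bar plots analogous to Figure 2; the variable names below (X_titanic_df, model) and the reuse of the helpers defined earlier are assumptions for illustration.

import matplotlib.pyplot as plt

feature_names = list(X_titanic_df.columns)          # cleaned Titanic feature columns (hypothetical)
X_titanic = X_titanic_df.to_numpy(dtype=float)
clusters = cluster_feature_importance(model, X_titanic)   # helper defined in the sketch above
for pattern, info in clusters.items():
    plt.figure()
    plt.bar(feature_names, info["omega"])
    plt.title(f"Cluster with {len(info['indices'])} passengers")
    plt.ylabel("feature importance (effective vector)")
plt.show()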
In this experiment, we have shown that the ReLU network can be disentangled into a set of clusters that can be analyzed individually. The clusters have a practical meaning, helping humans to interpret the prediction mechanism of the network.
Conclusions and Limitations
This paper proposes a methodology for increasing the level of interpretability of a fully connected feedforward neural network with ReLU activation functions, downstream of the fitting phase of the model. It is worth noting that the methodology alters neither the structure nor the performance of the network and can be applied easily after the training of the model. The methodology relies on the clustering that naturally arises from the binary status of the different neurons of the network, in turn related to the non-linearity of the ReLU function. The existence of a feature importance explanation based on an affine map for each cluster is proved. A simulation study and the empirical application to the Titanic dataset show the capability of the method to bridge the gap between the algorithm optimization and human understandability.
A possible limitation of the work is the potentially high number of clusters that the network could generate. Further strategies should be explored in order to enforce a parsimony principle in the dataset partition process, such as a principle of minimum entropy or the orthogonality of the hyperplanes representing the clusters. Finally, it is worth mentioning that, if the input variables are highly correlated or belong to a very high-dimensional space, the weights of the affine map cannot be trivially interpreted as feature importances; see for instance . |
# https://codeforces.com/problemset/problem/546/A
"""
Wants to buy w bananas. The i-th banana costs i*k dollars.
He has n dollars; how much does he need to borrow to buy all w bananas?
"""
k, n, w = map(int, input().split())
# w bananas cost k*w*(w+1)/2. If that's more than he has then he needs to borrow.
# Otherwise he doesn't and so borrows 0
print(max(0, k*w*(w+1)//2 - n))
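# Illustrative check (values chosen for illustration, not necessarily the official sample):
# with k=3, n=17, w=4 the bananas cost 3*(1+2+3+4) = 30, so he borrows max(0, 30 - 17) = 13.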
|