Mobile Biosafety Level-4 Autopsy Facility: An Innovative Solution. Recent threats of bioterrorism, outbreaks of previously unknown infectious diseases such as Severe Acute Respiratory Syndrome (SARS) and the re-emergence of diseases like avian influenza are very real and have caused serious concern not only for the world at large, but also for many authorities. The concern is even greater for the forensic community, which is generally ill-equipped to deal with highly infectious pathogens owing to chronic underfunding and administrative constraints. The cost of building a Biosafety Level 4 (BSL-4) facility is exorbitant, and such a facility is also very expensive to operate and maintain. Given the state of funding for most forensic centers and medical examiner facilities in the world, having a high-containment BSL-4 facility just to carry out autopsy work is highly unlikely. In the course of dealing with the SARS outbreak in Singapore in 2003, the Centre for Forensic Medicine (CFM) of the Health Sciences Authority, together with its strategic partner, Acre Engineering, developed an innovative solution that met the requirements set out for a BSL-4 mobile autopsy facility. This was completed at a fraction of the cost, and in less than half the time, of building such a facility de novo. This paper therefore presents an innovative solution to the need for an autopsy facility equipped to BSL-4 standards that can be mobilized and deployed at short notice to conduct autopsies on highly infectious cases at distant locations. In particular, it addresses the engineering and facilities components of the solution.
<filename>src/sage/topology/simplicial_complex_examples.py
# -*- coding: utf-8 -*-
"""
Examples of simplicial complexes
There are two main types: manifolds and examples related to graph
theory.
For manifolds, there are functions defining the `n`-sphere for any
`n`, the torus, `n`-dimensional real projective space for any `n`, the
complex projective plane, surfaces of arbitrary genus, and some other
manifolds, all as simplicial complexes.
Aside from surfaces, this file also provides functions for
constructing some other simplicial complexes: the simplicial complex
of not-`i`-connected graphs on `n` vertices, the matching complex on `n`
vertices, the chessboard complex for an `n` by `i` chessboard, and
others. These provide examples of large simplicial complexes; for
example, ``simplicial_complexes.NotIConnectedGraphs(7, 2)`` has over a
million simplices.
All of these examples are accessible by typing
``simplicial_complexes.NAME``, where ``NAME`` is the name of the example.
- :func:`BarnetteSphere`
- :func:`BrucknerGrunbaumSphere`
- :func:`ChessboardComplex`
- :func:`ComplexProjectivePlane`
- :func:`DunceHat`
- :func:`FareyMap`
- :func:`K3Surface`
- :func:`KleinBottle`
- :func:`MatchingComplex`
- :func:`MooreSpace`
- :func:`NotIConnectedGraphs`
- :func:`PoincareHomologyThreeSphere`
- :func:`PseudoQuaternionicProjectivePlane`
- :func:`RandomComplex`
- :func:`RandomTwoSphere`
- :func:`RealProjectivePlane`
- :func:`RealProjectiveSpace`
- :func:`RudinBall`
- :func:`ShiftedComplex`
- :func:`Simplex`
- :func:`Sphere`
- :func:`SumComplex`
- :func:`SurfaceOfGenus`
- :func:`Torus`
- :func:`ZieglerBall`
You can also get a list by typing ``simplicial_complexes.`` and hitting the
TAB key.
EXAMPLES::
sage: S = simplicial_complexes.Sphere(2) # the 2-sphere
sage: S.homology()
{0: 0, 1: 0, 2: Z}
sage: simplicial_complexes.SurfaceOfGenus(3)
Triangulation of an orientable surface of genus 3
sage: M4 = simplicial_complexes.MooreSpace(4)
sage: M4.homology()
{0: 0, 1: C4, 2: 0}
sage: simplicial_complexes.MatchingComplex(6).homology()
{0: 0, 1: Z^16, 2: 0}
"""
from .simplicial_complex import SimplicialComplex
from sage.structure.unique_representation import UniqueRepresentation
# Below we define a function Simplex to construct a simplex as a
# simplicial complex. We also need to use actual simplices as
# simplices, hence:
from .simplicial_complex import Simplex as TrueSimplex
from sage.sets.set import Set
from sage.misc.functional import is_even
from sage.combinat.subset import Subsets
import sage.misc.prandom as random
# Miscellaneous utility functions.
# The following two functions can be used to generate the facets for
# the corresponding examples in sage.homology.examples. These take a
# few seconds to run, so the actual examples have the facets
# hard-coded. Thus the following functions are not currently used in
# the Sage library.
def facets_for_RP4():
"""
Return the list of facets for a minimal triangulation of 4-dimensional
real projective space.
We use vertices numbered 1 through 16, define two facets, and define
a certain subgroup `G` of the symmetric group `S_{16}`. Then the set
of all facets is the `G`-orbit of the two given facets.
See the description in Example 3.12 in Datta [Dat2007]_.
EXAMPLES::
sage: from sage.topology.simplicial_complex_examples import facets_for_RP4
sage: A = facets_for_RP4() # long time (1 or 2 seconds)
sage: SimplicialComplex(A) == simplicial_complexes.RealProjectiveSpace(4) # long time
True
"""
# Define the group:
from sage.groups.perm_gps.permgroup import PermutationGroup
g1 = '(2, 7)(4, 10)(5, 6)(11, 12)'
g2 = '(1, 2, 3, 4, 5, 10)(6, 8, 9)(11, 12, 13, 14, 15, 16)'
G = PermutationGroup([g1, g2])
# Define the two simplices:
t1 = (1, 2, 4, 5, 11)
t2 = (1, 2, 4, 11, 13)
# Apply the group elements to the simplices:
facets = []
for g in G:
d = g.dict()
for t in [t1, t2]:
new = tuple([d[j] for j in t])
if new not in facets:
facets.append(new)
return facets
def facets_for_K3():
"""
Return the facets for a minimal triangulation of the K3 surface.
This is a pure simplicial complex of dimension 4 with 16
vertices and 288 facets. The facets are obtained by constructing a
few facets and a permutation group `G`, and then computing the
`G`-orbit of those facets.
See Casella and Kühnel in [CK2001]_ and Spreer and Kühnel [SK2011]_;
the construction here uses the labeling from Spreer and Kühnel.
EXAMPLES::
sage: from sage.topology.simplicial_complex_examples import facets_for_K3
sage: A = facets_for_K3() # long time (a few seconds)
sage: SimplicialComplex(A) == simplicial_complexes.K3Surface() # long time
True
"""
from sage.groups.perm_gps.permgroup import PermutationGroup
G = PermutationGroup([[(1, 3, 8, 4, 9, 16, 15, 2, 14, 12, 6, 7, 13, 5, 10)],
[(1, 11, 16), (2, 10, 14), (3, 12, 13), (4, 9, 15), (5, 7, 8)]])
return ([tuple([g(i) for i in (1, 2, 3, 8, 12)]) for g in G] +
[tuple([g(i) for i in (1, 2, 5, 8, 14)]) for g in G])
def matching(A, B):
r"""
List of maximal matchings between the sets ``A`` and ``B``.
A matching is a set of pairs `(a, b) \in A \times B` where each `a` and
`b` appears in at most one pair. A maximal matching is one which is
maximal with respect to inclusion of subsets of `A \times B`.
INPUT:
- ``A``, ``B`` -- list, tuple, or indeed anything which can be
converted to a set.
EXAMPLES::
sage: from sage.topology.simplicial_complex_examples import matching
sage: matching([1, 2], [3, 4])
[{(1, 3), (2, 4)}, {(1, 4), (2, 3)}]
sage: matching([0, 2], [0])
[{(0, 0)}, {(2, 0)}]
"""
answer = []
if len(A) == 0 or len(B) == 0:
return [set([])]
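    # In a maximal matching between two sets, either every vertex of A
    # or every vertex of B is matched, so enumerate every choice of a
    # pair (v, w) recursively and discard duplicates.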
for v in A:
for w in B:
for M in matching(set(A).difference([v]), set(B).difference([w])):
new = M.union([(v, w)])
if new not in answer:
answer.append(new)
return answer
class UniqueSimplicialComplex(SimplicialComplex, UniqueRepresentation):
"""
This combines :class:`SimplicialComplex` and
:class:`UniqueRepresentation`. It is intended to be used to make
standard examples of simplicial complexes unique. See :trac:`13566`.
INPUT:
- the inputs are the same as for a :class:`SimplicialComplex`,
with one addition and two exceptions. The exceptions are that
``is_mutable`` and ``is_immutable`` are ignored: all instances
of this class are immutable. The addition:
- ``name`` -- string (optional), the string representation for this complex.
EXAMPLES::
sage: from sage.topology.simplicial_complex_examples import UniqueSimplicialComplex
sage: SimplicialComplex([[0, 1]]) is SimplicialComplex([[0, 1]])
False
sage: UniqueSimplicialComplex([[0, 1]]) is UniqueSimplicialComplex([[0, 1]])
True
sage: UniqueSimplicialComplex([[0, 1]])
Simplicial complex with vertex set (0, 1) and facets {(0, 1)}
sage: UniqueSimplicialComplex([[0, 1]], name='The 1-simplex')
The 1-simplex
"""
@staticmethod
def __classcall__(self, maximal_faces=None, name=None, **kwds):
"""
TESTS::
sage: from sage.topology.simplicial_complex_examples import UniqueSimplicialComplex
sage: UniqueSimplicialComplex([[1, 2, 3], [0, 1, 3]]) is UniqueSimplicialComplex([(1, 2, 3), (0, 1, 3)])
True
sage: X = UniqueSimplicialComplex([[1, 2, 3], [0, 1, 3]])
sage: X is UniqueSimplicialComplex(X)
True
Testing ``from_characteristic_function``::
sage: UniqueSimplicialComplex(from_characteristic_function=(lambda x:sum(x)<=4, range(5)))
Simplicial complex with vertex set (0, 1, 2, 3, 4) and facets {(0, 4), (0, 1, 2), (0, 1, 3)}
"""
char_fcn = kwds.get('from_characteristic_function', None)
if char_fcn:
kwds['from_characteristic_function'] = (char_fcn[0], tuple(char_fcn[1]))
if maximal_faces:
# Test to see if maximal_faces is a cell complex or another
# object which can be converted to a simplicial complex:
C = None
if isinstance(maximal_faces, SimplicialComplex):
C = maximal_faces
else:
try:
C = maximal_faces._simplicial_()
except AttributeError:
if not isinstance(maximal_faces, (list, tuple, Simplex)):
# Convert it into a list (in case it is an iterable)
maximal_faces = list(maximal_faces)
if C is not None:
maximal_faces = C.facets()
# Now convert maximal_faces to a tuple of tuples, so that it is hashable.
maximal_faces = tuple(tuple(mf) for mf in maximal_faces)
return super(UniqueSimplicialComplex, self).__classcall__(self, maximal_faces,
name=name,
**kwds)
def __init__(self, maximal_faces=None, name=None, **kwds):
"""
TESTS::
sage: from sage.topology.simplicial_complex_examples import UniqueSimplicialComplex
sage: UniqueSimplicialComplex([[1, 2, 3], [0, 1, 3]], is_mutable=True).is_mutable()
False
"""
if 'is_mutable' in kwds:
del kwds['is_mutable']
if 'is_immutable' in kwds:
del kwds['is_immutable']
self._name = name
SimplicialComplex.__init__(self, maximal_faces=maximal_faces, is_mutable=False, **kwds)
def _repr_(self):
"""
Print representation
If the argument ``name`` was specified when defining the
complex, use that. Otherwise, use the print representation
from the class :class:`SimplicialComplex`.
TESTS::
sage: from sage.topology.simplicial_complex_examples import UniqueSimplicialComplex
sage: UniqueSimplicialComplex([[0, 1]])
Simplicial complex with vertex set (0, 1) and facets {(0, 1)}
sage: UniqueSimplicialComplex([[0, 1]], name='Joe')
Joe
"""
if self._name:
return self._name
return SimplicialComplex._repr_(self)
# Now the functions that produce the actual examples...
def Sphere(n):
"""
A minimal triangulation of the `n`-dimensional sphere.
INPUT:
- ``n`` -- positive integer
EXAMPLES::
sage: simplicial_complexes.Sphere(2)
Minimal triangulation of the 2-sphere
sage: simplicial_complexes.Sphere(5).homology()
{0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: Z}
sage: [simplicial_complexes.Sphere(n).euler_characteristic() for n in range(6)]
[2, 0, 2, 0, 2, 0]
sage: [simplicial_complexes.Sphere(n).f_vector() for n in range(6)]
[[1, 2],
[1, 3, 3],
[1, 4, 6, 4],
[1, 5, 10, 10, 5],
[1, 6, 15, 20, 15, 6],
[1, 7, 21, 35, 35, 21, 7]]
"""
S = TrueSimplex(n+1)
facets = tuple(S.faces())
return UniqueSimplicialComplex(facets,
name='Minimal triangulation of the {}-sphere'.format(n))
def Simplex(n):
"""
An `n`-dimensional simplex, as a simplicial complex.
INPUT:
- ``n`` -- a non-negative integer
OUTPUT: the simplicial complex consisting of the `n`-simplex
on vertices `(0, 1, ..., n)` and all of its faces.
EXAMPLES::
sage: simplicial_complexes.Simplex(3)
The 3-simplex
sage: simplicial_complexes.Simplex(5).euler_characteristic()
1
"""
return UniqueSimplicialComplex([TrueSimplex(n)],
name='The {}-simplex'.format(n))
def Torus():
r"""
A minimal triangulation of the torus.
This is a simplicial complex with 7 vertices, 21 edges and 14
faces. It is the unique triangulation of the torus with 7
vertices, and was found by Möbius in 1861.
This is also the combinatorial structure of the Császár
polyhedron (see :wikipedia:`Császár_polyhedron`).
EXAMPLES::
sage: T = simplicial_complexes.Torus(); T.homology(1)
Z x Z
sage: T.f_vector()
[1, 7, 21, 14]
TESTS::
sage: T.flip_graph().is_isomorphic(graphs.HeawoodGraph())
True
REFERENCES:
- [Lut2002]_
"""
return UniqueSimplicialComplex([[0, 1, 2], [1, 2, 4], [1, 3, 4], [1, 3, 6],
[0, 1, 5], [1, 5, 6], [2, 3, 5], [2, 4, 5],
[2, 3, 6], [0, 2, 6], [0, 3, 4], [0, 3, 5],
[4, 5, 6], [0, 4, 6]],
name='Minimal triangulation of the torus')
def RealProjectivePlane():
"""
A minimal triangulation of the real projective plane.
EXAMPLES::
sage: P = simplicial_complexes.RealProjectivePlane()
sage: Q = simplicial_complexes.ProjectivePlane()
sage: P == Q
True
sage: P.cohomology(1)
0
sage: P.cohomology(2)
C2
sage: P.cohomology(1, base_ring=GF(2))
Vector space of dimension 1 over Finite Field of size 2
sage: P.cohomology(2, base_ring=GF(2))
Vector space of dimension 1 over Finite Field of size 2
"""
return UniqueSimplicialComplex([[0, 1, 2], [0, 2, 3], [0, 1, 5], [0, 4, 5],
[0, 3, 4], [1, 2, 4], [1, 3, 4], [1, 3, 5],
[2, 3, 5], [2, 4, 5]],
name='Minimal triangulation of the real projective plane')
ProjectivePlane = RealProjectivePlane
def KleinBottle():
"""
A minimal triangulation of the Klein bottle, as presented for example
in Davide Cervone's thesis [Cer1994]_.
EXAMPLES::
sage: simplicial_complexes.KleinBottle()
Minimal triangulation of the Klein bottle
"""
return UniqueSimplicialComplex([[2, 3, 7], [1, 2, 3], [1, 3, 5], [1, 5, 7],
[1, 4, 7], [2, 4, 6], [1, 2, 6], [1, 6, 0],
[1, 4, 0], [2, 4, 0], [3, 4, 7], [3, 4, 6],
[3, 5, 6], [5, 6, 0], [2, 5, 0], [2, 5, 7]],
name='Minimal triangulation of the Klein bottle')
def SurfaceOfGenus(g, orientable=True):
"""
A surface of genus `g`.
INPUT:
- ``g`` -- a non-negative integer. The desired genus
- ``orientable`` -- boolean (optional, default ``True``). If
``True``, return an orientable surface, and if ``False``,
return a non-orientable surface.
In the orientable case, return a sphere if `g` is zero, and
otherwise return a `g`-fold connected sum of a torus with itself.
In the non-orientable case, raise an error if `g` is zero. If
`g` is positive, return a `g`-fold connected sum of a
real projective plane with itself.
EXAMPLES::
sage: simplicial_complexes.SurfaceOfGenus(2)
Triangulation of an orientable surface of genus 2
sage: simplicial_complexes.SurfaceOfGenus(1, orientable=False)
Triangulation of a non-orientable surface of genus 1
"""
if g == 0:
if not orientable:
raise ValueError("no non-orientable surface of genus zero")
else:
return Sphere(2)
if orientable:
T = Torus()
else:
T = RealProjectivePlane()
S = T
for i in range(g-1):
S = S.connected_sum(T)
if orientable:
orient_str = 'n orientable'
else:
orient_str = ' non-orientable'
return UniqueSimplicialComplex(S,
name='Triangulation of a{} surface of genus {}'.format(orient_str, g))
def MooreSpace(q):
"""
Triangulation of the mod `q` Moore space.
INPUT:
- ``q`` -- integer, at least 2
This is a simplicial complex with simplices of dimension 0, 1,
and 2, such that its reduced homology is isomorphic to
`\\ZZ/q\\ZZ` in dimension 1, zero otherwise.
If `q=2`, this is the real projective plane. If `q>2`, then
construct it as follows: start with a triangle with vertices
1, 2, 3. We take a `3q`-gon forming a `q`-fold cover of the
triangle, and we form the resulting complex as an
identification space of the `3q`-gon. To triangulate this
identification space, put `q` vertices `A_0`, ..., `A_{q-1}`,
in the interior, each of which is connected to 1, 2, 3 (two
facets each: `[1, 2, A_i]`, `[2, 3, A_i]`). Put `q` more
vertices in the interior: `B_0`, ..., `B_{q-1}`, with facets
`[3, 1, B_i]`, `[3, B_i, A_i]`, `[1, B_i, A_{i+1}]`, `[B_i,
A_i, A_{i+1}]`. Then triangulate the interior polygon with
vertices `A_0`, `A_1`, ..., `A_{q-1}`.
EXAMPLES::
sage: simplicial_complexes.MooreSpace(2)
Minimal triangulation of the real projective plane
sage: simplicial_complexes.MooreSpace(3).homology()[1]
C3
sage: simplicial_complexes.MooreSpace(4).suspension().homology()[2]
C4
sage: simplicial_complexes.MooreSpace(8)
Triangulation of the mod 8 Moore space
"""
if q <= 1:
raise ValueError("the mod q Moore space is only defined if q is at least 2")
if q == 2:
return RealProjectivePlane()
facets = []
for i in range(q):
Ai = "A" + str(i)
Aiplus = "A" + str((i+1) % q)
Bi = "B" + str(i)
facets.append([1, 2, Ai])
facets.append([2, 3, Ai])
facets.append([3, 1, Bi])
facets.append([3, Bi, Ai])
facets.append([1, Bi, Aiplus])
facets.append([Bi, Ai, Aiplus])
for i in range(1, q-1):
Ai = "A" + str(i)
Aiplus = "A" + str((i+1) % q)
facets.append(["A0", Ai, Aiplus])
return UniqueSimplicialComplex(facets,
name='Triangulation of the mod {} Moore space'.format(q))
def ComplexProjectivePlane():
"""
A minimal triangulation of the complex projective plane.
This was constructed by Kühnel and Banchoff [KB1983]_.
EXAMPLES::
sage: C = simplicial_complexes.ComplexProjectivePlane()
sage: C.f_vector()
[1, 9, 36, 84, 90, 36]
sage: C.homology(2)
Z
sage: C.homology(4)
Z
"""
return UniqueSimplicialComplex(
[[1, 2, 4, 5, 6], [2, 3, 5, 6, 4], [3, 1, 6, 4, 5],
[1, 2, 4, 5, 9], [2, 3, 5, 6, 7], [3, 1, 6, 4, 8],
[2, 3, 6, 4, 9], [3, 1, 4, 5, 7], [1, 2, 5, 6, 8],
[3, 1, 5, 6, 9], [1, 2, 6, 4, 7], [2, 3, 4, 5, 8],
[4, 5, 7, 8, 9], [5, 6, 8, 9, 7], [6, 4, 9, 7, 8],
[4, 5, 7, 8, 3], [5, 6, 8, 9, 1], [6, 4, 9, 7, 2],
[5, 6, 9, 7, 3], [6, 4, 7, 8, 1], [4, 5, 8, 9, 2],
[6, 4, 8, 9, 3], [4, 5, 9, 7, 1], [5, 6, 7, 8, 2],
[7, 8, 1, 2, 3], [8, 9, 2, 3, 1], [9, 7, 3, 1, 2],
[7, 8, 1, 2, 6], [8, 9, 2, 3, 4], [9, 7, 3, 1, 5],
[8, 9, 3, 1, 6], [9, 7, 1, 2, 4], [7, 8, 2, 3, 5],
[9, 7, 2, 3, 6], [7, 8, 3, 1, 4], [8, 9, 1, 2, 5]],
name='Minimal triangulation of the complex projective plane')
def PseudoQuaternionicProjectivePlane():
r"""
Return a pure simplicial complex of dimension 8 with 490 facets.
.. WARNING::
This is expected to be a triangulation of the projective plane
`HP^2` over the ring of quaternions, but this has not been
proved yet.
This simplicial complex has the same homology as `HP^2`. Its
automorphism group is isomorphic to the alternating group `A_5`
and acts transitively on vertices.
This is defined here using the description in [BK1992]_. This
article deals with three different triangulations. This procedure
returns the only one which has a transitive group of
automorphisms.
EXAMPLES::
sage: HP2 = simplicial_complexes.PseudoQuaternionicProjectivePlane() ; HP2
Simplicial complex with 15 vertices and 490 facets
sage: HP2.f_vector()
[1, 15, 105, 455, 1365, 3003, 4515, 4230, 2205, 490]
Checking its automorphism group::
sage: HP2.automorphism_group().is_isomorphic(AlternatingGroup(5))
True
"""
from sage.groups.perm_gps.permgroup import PermutationGroup
P = [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10), (11, 12, 13, 14, 15)]
S = [(1, 6, 11), (2, 15, 14), (3, 13, 8), (4, 7, 5), (9, 12, 10)]
start_list = [
(1, 2, 3, 6, 8, 11, 13, 14, 15), # A
(1, 3, 6, 8, 9, 10, 11, 12, 13), # B
(1, 2, 6, 9, 10, 11, 12, 14, 15), # C
(1, 2, 3, 4, 7, 9, 12, 14, 15), # D
(1, 2, 4, 7, 9, 10, 12, 13, 14), # E
(1, 2, 6, 8, 9, 10, 11, 14, 15), # F
(1, 2, 3, 4, 5, 6, 9, 11, 13), # G
(1, 3, 5, 6, 8, 9, 10, 11, 12), # H
(1, 3, 5, 6, 7, 8, 9, 10, 11), # I
(1, 2, 3, 4, 5, 7, 10, 12, 15), # J
(1, 2, 3, 7, 8, 10, 12, 13, 14), # K
(2, 5, 6, 7, 8, 9, 10, 13, 14), # M
(3, 4, 6, 7, 11, 12, 13, 14, 15), # L
(3, 4, 6, 7, 10, 12, 13, 14, 15)] # N
return UniqueSimplicialComplex([[g(index) for index in tuple]
for tuple in start_list
for g in PermutationGroup([P, S])])
def PoincareHomologyThreeSphere():
"""
A triangulation of the Poincaré homology 3-sphere.
This is a manifold whose integral homology is identical to the
ordinary 3-sphere, but it is not simply connected. In particular,
its fundamental group is the binary icosahedral group, which has
order 120. The triangulation given here has 16 vertices and is
due to Björner and Lutz [BL2000]_.
EXAMPLES::
sage: S3 = simplicial_complexes.Sphere(3)
sage: Sigma3 = simplicial_complexes.PoincareHomologyThreeSphere()
sage: S3.homology() == Sigma3.homology()
True
sage: Sigma3.fundamental_group().cardinality() # long time
120
"""
return UniqueSimplicialComplex(
[[1, 2, 4, 9], [1, 2, 4, 15], [1, 2, 6, 14], [1, 2, 6, 15],
[1, 2, 9, 14], [1, 3, 4, 12], [1, 3, 4, 15], [1, 3, 7, 10],
[1, 3, 7, 12], [1, 3, 10, 15], [1, 4, 9, 12], [1, 5, 6, 13],
[1, 5, 6, 14], [1, 5, 8, 11], [1, 5, 8, 13], [1, 5, 11, 14],
[1, 6, 13, 15], [1, 7, 8, 10], [1, 7, 8, 11], [1, 7, 11, 12],
[1, 8, 10, 13], [1, 9, 11, 12], [1, 9, 11, 14], [1, 10, 13, 15],
[2, 3, 5, 10], [2, 3, 5, 11], [2, 3, 7, 10], [2, 3, 7, 13],
[2, 3, 11, 13], [2, 4, 9, 13], [2, 4, 11, 13], [2, 4, 11, 15],
[2, 5, 8, 11], [2, 5, 8, 12], [2, 5, 10, 12], [2, 6, 10, 12],
[2, 6, 10, 14], [2, 6, 12, 15], [2, 7, 9, 13], [2, 7, 9, 14],
[2, 7, 10, 14], [2, 8, 11, 15], [2, 8, 12, 15], [3, 4, 5, 14],
[3, 4, 5, 15], [3, 4, 12, 14], [3, 5, 10, 15], [3, 5, 11, 14],
[3, 7, 12, 13], [3, 11, 13, 14], [3, 12, 13, 14], [4, 5, 6, 7],
[4, 5, 6, 14], [4, 5, 7, 15], [4, 6, 7, 11], [4, 6, 10, 11],
[4, 6, 10, 14], [4, 7, 11, 15], [4, 8, 9, 12], [4, 8, 9, 13],
[4, 8, 10, 13], [4, 8, 10, 14], [4, 8, 12, 14], [4, 10, 11, 13],
[5, 6, 7, 13], [5, 7, 9, 13], [5, 7, 9, 15], [5, 8, 9, 12],
[5, 8, 9, 13], [5, 9, 10, 12], [5, 9, 10, 15], [6, 7, 11, 12],
[6, 7, 12, 13], [6, 10, 11, 12], [6, 12, 13, 15], [7, 8, 10, 14],
[7, 8, 11, 15], [7, 8, 14, 15], [7, 9, 14, 15], [8, 12, 14, 15],
[9, 10, 11, 12], [9, 10, 11, 16], [9, 10, 15, 16], [9, 11, 14, 16],
[9, 14, 15, 16], [10, 11, 13, 16], [10, 13, 15, 16],
[11, 13, 14, 16], [12, 13, 14, 15], [13, 14, 15, 16]],
name='Triangulation of the Poincare homology 3-sphere')
def RealProjectiveSpace(n):
r"""
A triangulation of `\Bold{R}P^n` for any `n \geq 0`.
INPUT:
- ``n`` -- integer, the dimension of the real projective space
to construct
The first few cases are pretty trivial:
- `\Bold{R}P^0` is a point.
- `\Bold{R}P^1` is a circle, triangulated as the boundary of a
single 2-simplex.
- `\Bold{R}P^2` is the real projective plane, here given its
minimal triangulation with 6 vertices, 15 edges, and 10
triangles.
- `\Bold{R}P^3`: any triangulation has at least 11 vertices by
a result of Walkup [Wal1970]_; this function returns a
triangulation with 11 vertices, as given by Lutz [Lut2005]_.
- `\Bold{R}P^4`: any triangulation has at least 16 vertices by
a result of Walkup; this function returns a triangulation
with 16 vertices as given by Lutz; see also Datta [Dat2007]_,
Example 3.12.
- `\Bold{R}P^n`: Lutz has found a triangulation of
`\Bold{R}P^5` with 24 vertices, but it does not seem to have
been published. Kühnel [Kuh1987]_ has described a triangulation of
`\Bold{R}P^n`, in general, with `2^{n+1}-1` vertices; see
also Datta, Example 3.21. This triangulation is presumably
not minimal, but it seems to be the best in the published
literature as of this writing. So this function returns it
when `n > 4`.
ALGORITHM: For `n < 4`, these are constructed explicitly by
listing the facets. For `n = 4`, this is constructed by
specifying 16 vertices, two facets, and a certain subgroup `G`
of the symmetric group `S_{16}`. Then the set of all facets
is the `G`-orbit of the two given facets. This is implemented
here by explicitly listing all of the facets; the facets
can be computed by the function
:func:`~sage.topology.simplicial_complex_examples.facets_for_RP4`, but
running the function takes a few seconds.
For `n > 4`, the construction is as follows: let `S` denote
the simplicial complex structure on the `n`-sphere given by
the first barycentric subdivision of the boundary of an
`(n+1)`-simplex. This has a simplicial antipodal action: if
`V` denotes the vertices in the boundary of the simplex, then
the vertices in its barycentric subdivision `S` correspond to
nonempty proper subsets `U` of `V`, and the antipodal action
sends any subset `U` to its complement. One can show that
modding out by this action results in a triangulation for
`\Bold{R}P^n`. To find the facets in this triangulation, find
the facets in `S`. These are identified in pairs to form
`\Bold{R}P^n`, so choose a representative from each pair: for
each facet in `S`, replace any vertex in `S` containing 0 with
its complement.
Of course these complexes increase in size pretty quickly as
`n` increases.
EXAMPLES::
sage: P3 = simplicial_complexes.RealProjectiveSpace(3)
sage: P3.f_vector()
[1, 11, 51, 80, 40]
sage: P3.homology()
{0: 0, 1: C2, 2: 0, 3: Z}
sage: P4 = simplicial_complexes.RealProjectiveSpace(4)
sage: P4.f_vector()
[1, 16, 120, 330, 375, 150]
sage: P4.homology() # long time
{0: 0, 1: C2, 2: 0, 3: C2, 4: 0}
sage: P5 = simplicial_complexes.RealProjectiveSpace(5) # long time (44s on sage.math, 2012)
sage: P5.f_vector() # long time
[1, 63, 903, 4200, 8400, 7560, 2520]
The following computation can take a long time -- over half an
hour -- with Sage's default computation of homology groups,
but if you have CHomP installed, Sage will use that and the
computation should only take a second or two. (You can
download CHomP from http://chomp.rutgers.edu/, or you can
install it as a Sage package using ``sage -i chomp``). ::
sage: P5.homology() # long time # optional - CHomP
{0: 0, 1: C2, 2: 0, 3: C2, 4: 0, 5: Z}
sage: simplicial_complexes.RealProjectiveSpace(2).dimension()
2
sage: P3.dimension()
3
sage: P4.dimension() # long time
4
sage: P5.dimension() # long time
5
"""
if n == 0:
return Simplex(0)
if n == 1:
return Sphere(1)
if n == 2:
return RealProjectivePlane()
if n == 3:
# Minimal triangulation found by Walkup and given
# explicitly by Lutz
return UniqueSimplicialComplex(
[[1, 2, 3, 7], [1, 4, 7, 9], [2, 3, 4, 8], [2, 5, 8, 10],
[3, 6, 7, 10], [1, 2, 3, 11], [1, 4, 7, 10], [2, 3, 4, 11],
[2, 5, 9, 10], [3, 6, 8, 9], [1, 2, 6, 9], [1, 4, 8, 9],
[2, 3, 7, 8], [2, 6, 9, 10], [3, 6, 9, 10], [1, 2, 6, 11],
[1, 4, 8, 10], [2, 4, 6, 10], [3, 4, 5, 9], [4, 5, 6, 7],
[1, 2, 7, 9], [1, 5, 6, 8], [2, 4, 6, 11], [3, 4, 5, 11],
[4, 5, 6, 11], [1, 3, 5, 10], [1, 5, 6, 11], [2, 4, 8, 10],
[3, 4, 8, 9], [4, 5, 7, 9], [1, 3, 5, 11], [1, 5, 8, 10],
[2, 5, 7, 8], [3, 5, 9, 10], [4, 6, 7, 10], [1, 3, 7, 10],
[1, 6, 8, 9], [2, 5, 7, 9], [3, 6, 7, 8], [5, 6, 7, 8]],
name='Minimal triangulation of RP^3')
if n == 4:
return UniqueSimplicialComplex(
[(1, 3, 8, 12, 13), (2, 7, 8, 13, 16), (4, 8, 9, 12, 14),
(2, 6, 10, 12, 16), (5, 7, 9, 10, 13), (1, 2, 7, 8, 15),
(1, 3, 9, 11, 16), (5, 6, 8, 13, 16), (1, 3, 8, 11, 13),
(3, 4, 10, 13, 15), (4, 6, 9, 12, 15), (2, 4, 6, 11, 13),
(2, 3, 9, 12, 16), (1, 6, 9, 12, 15), (2, 5, 10, 11, 12),
(1, 7, 8, 12, 15), (2, 6, 9, 13, 16), (1, 5, 9, 11, 15),
(4, 9, 10, 13, 14), (2, 7, 8, 15, 16), (2, 3, 9, 12, 14),
(1, 6, 7, 10, 14), (2, 5, 10, 11, 15), (1, 2, 4, 13, 14),
(1, 6, 10, 14, 16), (2, 6, 9, 12, 16), (1, 3, 9, 12, 16),
(4, 5, 7, 11, 16), (5, 9, 10, 11, 15), (3, 5, 8, 12, 14),
(5, 6, 9, 13, 16), (5, 6, 9, 13, 15), (1, 3, 4, 10, 16),
(1, 6, 10, 12, 16), (2, 4, 6, 9, 13), (2, 4, 6, 9, 12),
(1, 2, 4, 11, 13), (7, 9, 10, 13, 14), (1, 7, 8, 12, 13),
(4, 6, 7, 11, 12), (3, 4, 6, 11, 13), (1, 5, 6, 9, 15),
(1, 6, 7, 14, 15), (2, 3, 7, 14, 15), (2, 6, 10, 11, 12),
(5, 7, 9, 10, 11), (1, 2, 4, 5, 14), (3, 5, 10, 13, 15),
(3, 8, 9, 12, 14), (5, 9, 10, 13, 15), (2, 6, 8, 13, 16),
(1, 2, 7, 13, 14), (1, 7, 10, 12, 13), (3, 4, 6, 13, 15),
(4, 9, 10, 13, 15), (2, 3, 10, 12, 16), (1, 2, 5, 14, 15),
(2, 6, 8, 10, 11), (1, 3, 10, 12, 13), (4, 8, 9, 12, 15),
(1, 3, 8, 9, 11), (4, 6, 7, 12, 15), (1, 8, 9, 11, 15),
(4, 5, 8, 14, 16), (1, 2, 8, 11, 13), (3, 6, 8, 11, 13),
(3, 6, 8, 11, 14), (3, 5, 8, 12, 13), (3, 7, 9, 11, 14),
(4, 6, 9, 13, 15), (2, 3, 5, 10, 12), (4, 7, 8, 15, 16),
(1, 2, 7, 14, 15), (3, 7, 9, 11, 16), (3, 6, 7, 14, 15),
(2, 6, 8, 11, 13), (4, 8, 9, 10, 14), (1, 4, 10, 13, 14),
(4, 8, 9, 10, 15), (2, 7, 9, 13, 16), (1, 6, 9, 12, 16),
(2, 3, 7, 9, 14), (4, 8, 10, 15, 16), (1, 5, 9, 11, 16),
(1, 5, 6, 14, 15), (5, 7, 9, 11, 16), (4, 5, 7, 11, 12),
(5, 7, 10, 11, 12), (2, 3, 10, 15, 16), (1, 2, 7, 8, 13),
(1, 6, 7, 10, 12), (1, 3, 10, 12, 16), (7, 9, 10, 11, 14),
(1, 7, 10, 13, 14), (1, 2, 4, 5, 11), (3, 4, 6, 7, 11),
(1, 6, 7, 12, 15), (1, 3, 4, 10, 13), (1, 4, 10, 14, 16),
(2, 4, 6, 11, 12), (5, 6, 8, 14, 16), (3, 5, 6, 8, 13),
(3, 5, 6, 8, 14), (1, 2, 8, 11, 15), (1, 4, 5, 14, 16),
(2, 3, 7, 15, 16), (8, 9, 10, 11, 14), (1, 3, 4, 11, 16),
(6, 8, 10, 14, 16), (8, 9, 10, 11, 15), (1, 3, 4, 11, 13),
(2, 4, 5, 12, 14), (2, 4, 9, 13, 14), (3, 4, 7, 11, 16),
(3, 6, 7, 11, 14), (3, 8, 9, 11, 14), (2, 8, 10, 11, 15),
(1, 3, 8, 9, 12), (4, 5, 7, 8, 16), (4, 5, 8, 12, 14),
(2, 4, 9, 12, 14), (6, 8, 10, 11, 14), (3, 5, 6, 13, 15),
(1, 4, 5, 11, 16), (3, 5, 6, 14, 15), (2, 4, 5, 11, 12),
(4, 5, 7, 8, 12), (1, 8, 9, 12, 15), (5, 7, 8, 13, 16),
(2, 3, 5, 12, 14), (3, 5, 10, 12, 13), (6, 7, 10, 11, 12),
(5, 7, 9, 13, 16), (6, 7, 10, 11, 14), (5, 7, 10, 12, 13),
(1, 2, 5, 11, 15), (1, 5, 6, 9, 16), (5, 7, 8, 12, 13),
(4, 7, 8, 12, 15), (2, 3, 5, 10, 15), (2, 6, 8, 10, 16),
(3, 4, 10, 15, 16), (1, 5, 6, 14, 16), (2, 3, 5, 14, 15),
(2, 3, 7, 9, 16), (2, 7, 9, 13, 14), (3, 4, 6, 7, 15),
(4, 8, 10, 14, 16), (3, 4, 7, 15, 16), (2, 8, 10, 15, 16)],
name='Minimal triangulation of RP^4')
if n >= 5:
# Use the construction given by Datta in Example 3.21.
V = set(range(0, n+2))
S = Sphere(n).barycentric_subdivision()
X = S.facets()
facets = set([])
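        # The antipodal action sends a vertex (a nonempty proper subset
        # of V) to its complement. Choose one representative from each
        # antipodal pair of facets of S: replace every vertex of a
        # facet that contains 0 by its complement, which does not.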
for f in X:
new = []
for v in f:
if 0 in v:
new.append(tuple(V.difference(v)))
else:
new.append(v)
facets.add(tuple(new))
return UniqueSimplicialComplex(list(facets),
name='Triangulation of RP^{}'.format(n))
def K3Surface():
"""
Return a minimal triangulation of the K3 surface.
This is a pure simplicial complex of dimension 4 with 16 vertices
and 288 facets. It was constructed by Casella and Kühnel
in [CK2001]_. The construction here uses the labeling from
Spreer and Kühnel [SK2011]_.
EXAMPLES::
sage: K3=simplicial_complexes.K3Surface() ; K3
Minimal triangulation of the K3 surface
sage: K3.f_vector()
[1, 16, 120, 560, 720, 288]
This simplicial complex is implemented just by listing all 288
facets. The list of facets can be computed by the function
:func:`~sage.topology.simplicial_complex_examples.facets_for_K3`, but
running the function takes a few seconds.
"""
return UniqueSimplicialComplex(
[(2, 10, 13, 15, 16), (2, 8, 11, 15, 16), (2, 5, 7, 8, 10),
(1, 9, 11, 13, 14), (1, 2, 8, 10, 12), (1, 3, 5, 6, 11),
(1, 5, 6, 9, 12), (1, 2, 6, 13, 16), (1, 4, 10, 13, 14),
(1, 9, 10, 14, 15), (2, 4, 7, 8, 12), (3, 4, 6, 10, 12),
(1, 6, 7, 8, 9), (3, 4, 5, 7, 15), (1, 7, 12, 15, 16),
(4, 5, 7, 13, 16), (5, 8, 11, 12, 15), (2, 4, 7, 12, 14),
(1, 4, 5, 14, 16), (2, 5, 6, 10, 11), (1, 6, 8, 12, 14),
(5, 8, 9, 14, 16), (5, 10, 11, 12, 13), (2, 4, 8, 9, 12),
(7, 9, 12, 15, 16), (1, 2, 6, 9, 15), (1, 5, 14, 15, 16),
(2, 3, 4, 5, 9), (6, 8, 10, 11, 15), (1, 5, 8, 10, 12),
(1, 3, 7, 9, 10), (6, 7, 8, 9, 13), (1, 2, 9, 11, 15),
(2, 8, 11, 14, 16), (2, 4, 5, 13, 16), (1, 4, 8, 13, 15),
(4, 7, 8, 10, 11), (2, 3, 9, 11, 14), (2, 3, 4, 9, 13),
(2, 8, 10, 12, 13), (1, 2, 4, 11, 15), (2, 3, 9, 11, 15),
(3, 5, 10, 13, 15), (3, 4, 5, 9, 11), (6, 10, 13, 15, 16),
(8, 10, 11, 15, 16), (6, 7, 11, 13, 15), (1, 5, 7, 15, 16),
(4, 5, 7, 9, 15), (3, 4, 6, 7, 16), (2, 3, 11, 14, 16),
(3, 4, 9, 11, 13), (1, 2, 5, 14, 15), (2, 3, 9, 13, 14),
(1, 2, 5, 13, 16), (2, 3, 7, 8, 12), (2, 9, 11, 12, 14),
(1, 9, 11, 15, 16), (4, 6, 9, 14, 16), (1, 4, 9, 13, 14),
(1, 2, 3, 12, 16), (8, 11, 12, 14, 15), (2, 4, 11, 12, 14),
(1, 4, 10, 12, 13), (1, 2, 6, 7, 13), (1, 3, 6, 10, 11),
(1, 6, 8, 9, 12), (1, 4, 5, 6, 14), (3, 9, 10, 12, 15),
(5, 8, 11, 12, 16), (5, 9, 10, 14, 15), (3, 9, 12, 15, 16),
(3, 6, 8, 14, 15), (2, 4, 9, 10, 16), (5, 8, 9, 13, 15),
(2, 3, 6, 9, 15), (6, 11, 12, 14, 16), (2, 3, 10, 13, 15),
(2, 8, 9, 10, 13), (3, 4, 8, 11, 13), (3, 4, 5, 7, 13),
(5, 7, 8, 10, 14), (4, 12, 13, 14, 15), (6, 7, 10, 14, 16),
(5, 10, 11, 13, 14), (3, 4, 7, 13, 16), (6, 8, 9, 12, 13),
(1, 3, 4, 10, 14), (2, 4, 6, 11, 12), (1, 7, 9, 10, 14),
(4, 6, 8, 13, 14), (4, 9, 10, 11, 16), (3, 7, 8, 10, 16),
(5, 7, 9, 15, 16), (1, 7, 9, 11, 14), (6, 8, 10, 15, 16),
(5, 8, 9, 10, 14), (7, 8, 10, 14, 16), (2, 6, 7, 9, 11),
(7, 9, 10, 13, 15), (3, 6, 7, 10, 12), (2, 4, 6, 10, 11),
(4, 5, 8, 9, 11), (1, 2, 3, 8, 16), (3, 7, 9, 10, 12),
(1, 2, 6, 8, 14), (3, 5, 6, 13, 15), (1, 5, 6, 12, 14),
(2, 5, 7, 14, 15), (1, 5, 10, 11, 12), (3, 7, 8, 10, 11),
(1, 2, 6, 14, 15), (1, 2, 6, 8, 16), (7, 9, 10, 12, 15),
(3, 4, 6, 8, 14), (3, 7, 13, 14, 16), (2, 5, 7, 8, 14),
(6, 7, 9, 10, 14), (2, 3, 7, 12, 14), (4, 10, 12, 13, 14),
(2, 5, 6, 11, 13), (4, 5, 6, 7, 16), (1, 3, 12, 13, 16),
(1, 4, 11, 15, 16), (1, 3, 4, 6, 10), (1, 10, 11, 12, 13),
(6, 9, 11, 12, 14), (1, 4, 7, 8, 15), (5, 8, 9, 10, 13),
(1, 2, 5, 7, 15), (1, 7, 12, 13, 16), (3, 11, 13, 14, 16),
(1, 2, 5, 7, 13), (4, 7, 8, 9, 15), (1, 5, 6, 10, 11),
(6, 7, 10, 13, 15), (3, 4, 7, 14, 15), (7, 11, 13, 14, 16),
(3, 4, 10, 12, 14), (3, 6, 8, 10, 16), (2, 7, 8, 14, 16),
(2, 3, 4, 5, 13), (5, 8, 12, 13, 15), (4, 6, 9, 13, 14),
(2, 4, 5, 6, 12), (1, 3, 7, 8, 9), (8, 11, 12, 14, 16),
(1, 7, 12, 13, 15), (8, 12, 13, 14, 15), (2, 8, 9, 12, 13),
(4, 6, 10, 12, 15), (2, 8, 11, 14, 15), (2, 6, 9, 11, 12),
(8, 9, 10, 11, 16), (2, 3, 6, 13, 15), (2, 3, 12, 15, 16),
(1, 3, 5, 9, 12), (2, 5, 6, 9, 12), (2, 10, 12, 13, 14),
(2, 6, 13, 15, 16), (2, 3, 11, 15, 16), (3, 5, 6, 8, 15),
(2, 4, 5, 9, 12), (5, 6, 8, 11, 15), (6, 8, 12, 13, 14),
(1, 2, 3, 8, 12), (1, 4, 7, 8, 11), (3, 5, 7, 14, 15),
(3, 5, 7, 13, 14), (1, 7, 10, 11, 14), (6, 7, 11, 12, 15),
(3, 4, 6, 7, 12), (1, 2, 4, 7, 11), (6, 9, 10, 14, 16),
(4, 10, 12, 15, 16), (5, 6, 7, 12, 16), (3, 9, 11, 13, 14),
(5, 9, 14, 15, 16), (4, 5, 6, 7, 12), (1, 3, 9, 10, 15),
(4, 7, 8, 9, 12), (5, 9, 10, 13, 15), (1, 3, 8, 13, 16),
(2, 9, 12, 13, 14), (6, 7, 10, 12, 15), (2, 6, 8, 14, 15),
(3, 5, 6, 8, 11), (3, 4, 7, 12, 14), (1, 3, 10, 14, 15),
(7, 11, 12, 13, 16), (3, 11, 12, 13, 16), (3, 4, 5, 8, 15),
(2, 4, 7, 8, 10), (2, 4, 7, 14, 15), (1, 2, 10, 12, 16),
(1, 6, 8, 13, 16), (1, 7, 8, 13, 15), (3, 9, 11, 15, 16),
(4, 6, 10, 11, 15), (2, 4, 11, 14, 15), (1, 3, 8, 9, 12),
(1, 3, 6, 14, 15), (2, 4, 5, 6, 10), (1, 4, 9, 14, 16),
(5, 7, 9, 12, 16), (1, 3, 7, 10, 11), (7, 8, 9, 13, 15),
(3, 5, 10, 14, 15), (1, 4, 10, 12, 16), (3, 4, 5, 8, 11),
(1, 2, 6, 7, 9), (1, 3, 11, 12, 13), (1, 5, 7, 13, 16),
(5, 7, 10, 11, 14), (2, 10, 12, 15, 16), (3, 6, 7, 10, 16),
(1, 2, 5, 8, 10), (4, 10, 11, 15, 16), (5, 8, 10, 12, 13),
(3, 6, 8, 10, 11), (4, 5, 7, 9, 12), (6, 7, 11, 12, 16),
(3, 5, 9, 11, 16), (8, 9, 10, 14, 16), (3, 4, 6, 8, 16),
(1, 10, 11, 13, 14), (2, 9, 10, 13, 16), (1, 2, 5, 8, 14),
(2, 4, 5, 10, 16), (1, 2, 7, 9, 11), (1, 3, 5, 6, 9),
(5, 7, 11, 13, 14), (3, 5, 10, 13, 14), (2, 4, 8, 9, 10),
(4, 11, 12, 14, 15), (2, 3, 7, 14, 16), (3, 4, 8, 13, 16),
(6, 7, 9, 11, 14), (5, 6, 11, 13, 15), (4, 5, 6, 14, 16),
(3, 4, 8, 14, 15), (4, 5, 8, 9, 15), (1, 4, 8, 11, 13),
(5, 6, 12, 14, 16), (2, 3, 10, 12, 14), (1, 2, 5, 10, 16),
(2, 5, 7, 10, 11), (2, 6, 7, 11, 13), (1, 4, 5, 10, 16),
(2, 6, 8, 15, 16), (2, 3, 10, 12, 15), (7, 11, 12, 13, 15),
(1, 3, 8, 11, 13), (4, 8, 9, 10, 11), (1, 9, 14, 15, 16),
(1, 3, 6, 9, 15), (6, 9, 12, 13, 14), (2, 3, 10, 13, 14),
(2, 5, 7, 11, 13), (2, 3, 5, 6, 13), (4, 6, 8, 13, 16),
(6, 7, 9, 10, 13), (5, 8, 12, 14, 16), (4, 6, 9, 13, 16),
(5, 8, 9, 11, 16), (2, 3, 5, 6, 9), (1, 3, 5, 11, 12),
(3, 7, 8, 9, 12), (4, 6, 11, 12, 15), (3, 5, 9, 12, 16),
(5, 11, 12, 13, 15), (1, 3, 4, 6, 14), (3, 5, 11, 12, 16),
(1, 5, 8, 12, 14), (4, 8, 13, 14, 15), (1, 3, 7, 8, 11),
(6, 9, 10, 13, 16), (2, 4, 9, 13, 16), (1, 6, 7, 8, 13),
(1, 4, 12, 13, 15), (2, 4, 7, 10, 11), (1, 4, 9, 11, 13),
(6, 7, 11, 14, 16), (1, 4, 9, 11, 16), (1, 4, 12, 15, 16),
(1, 2, 4, 7, 15), (2, 3, 7, 8, 16), (1, 4, 5, 6, 10)],
name='Minimal triangulation of the K3 surface')
def BarnetteSphere():
r"""
Return Barnette's triangulation of the 3-sphere.
This is a pure simplicial complex of dimension 3 with 8
vertices and 19 facets, which is a non-polytopal triangulation
of the 3-sphere. It was constructed by Barnette in
[Bar1970]_. The construction here uses the labeling from De
Loera, Rambau and Santos [DLRS2010]_. Another reference is chapter
III.4 of Ewald [Ewa1996]_.
EXAMPLES::
sage: BS = simplicial_complexes.BarnetteSphere() ; BS
Barnette's triangulation of the 3-sphere
sage: BS.f_vector()
[1, 8, 27, 38, 19]
TESTS:
Checks that this is indeed the same Barnette Sphere as the one
given on page 87 of [Ewa1996]_.::
sage: BS2 = SimplicialComplex([[1, 2, 3, 4], [3, 4, 5, 6], [1, 2, 5, 6],
....: [1, 2, 4, 7], [1, 3, 4, 7], [3, 4, 6, 7],
....: [3, 5, 6, 7], [1, 2, 5, 7], [2, 5, 6, 7],
....: [2, 4, 6, 7], [1, 2, 3, 8], [2, 3, 4, 8],
....: [3, 4, 5, 8], [4, 5, 6, 8], [1, 2, 6, 8],
....: [1, 5, 6, 8], [1, 3, 5, 8], [2, 4, 6, 8],
....: [1, 3, 5, 7]])
sage: BS.is_isomorphic(BS2)
True
"""
return UniqueSimplicialComplex([
(1, 2, 4, 5), (2, 3, 5, 6), (1, 3, 4, 6), (1, 2, 3, 7), (4, 5, 6, 7), (1, 2, 4, 7),
(2, 4, 5, 7), (2, 3, 5, 7), (3, 5, 6, 7), (3, 1, 6, 7), (1, 6, 4, 7), (1, 2, 3, 8),
(4, 5, 6, 8), (1, 2, 5, 8), (1, 4, 5, 8), (2, 3, 6, 8), (2, 5, 6, 8), (3, 1, 4, 8),
(3, 6, 4, 8)],
name="Barnette's triangulation of the 3-sphere")
def BrucknerGrunbaumSphere():
r"""
Return Bruckner and Grunbaum's triangulation of the 3-sphere.
This is a pure simplicial complex of dimension 3 with 8
vertices and 20 facets, which is a non-polytopal triangulation
of the 3-sphere. It appeared first in [Br1910]_ and was studied in
[GrS1967]_.
It is defined here as the link of any vertex in the unique minimal
triangulation of the complex projective plane, see chapter 4 of
[Kuh1995]_.
EXAMPLES::
sage: BGS = simplicial_complexes.BrucknerGrunbaumSphere() ; BGS
Bruckner and Grunbaum's triangulation of the 3-sphere
sage: BGS.f_vector()
[1, 8, 28, 40, 20]
"""
# X = ComplexProjectivePlane().link([9])
# return UniqueSimplicialComplex(X.facets(),
# name="Bruckner and Grunbaum's triangulation of the 3-sphere")
return UniqueSimplicialComplex(ComplexProjectivePlane().link([9]),
name="Bruckner and Grunbaum's triangulation of the 3-sphere")
###############################################################
# examples from graph theory:
def NotIConnectedGraphs(n, i):
"""
The simplicial complex of all graphs on `n` vertices which are
not `i`-connected.
Fix an integer `n>0` and consider the set of graphs on `n`
vertices. View each graph as its set of edges, so it is a
subset of a set of size `n` choose 2. A graph is
`i`-connected if, for any `j<i`, removing any `j` vertices along
with the edges emanating from them leaves the graph
connected. Now fix `i`: it is clear that if `G`
is not `i`-connected, then the same is true for any graph
obtained from `G` by deleting edges. Thus the set of all
graphs which are not `i`-connected, viewed as a set of subsets
of the `n` choose 2 possible edges, is closed under taking
subsets, and thus forms a simplicial complex. This function
produces that simplicial complex.
INPUT:
- ``n``, ``i`` -- non-negative integers with `i` at most `n`
See Dumas et al. [DHSW2003]_ for information on computing its homology
by computer, and see Babson et al. [BBLSW1999]_ for theory. For
example, Babson et al. show that when `i=2`, the reduced homology of
this complex is nonzero only in dimension `2n-5`, where it is
free abelian of rank `(n-2)!`.
EXAMPLES::
sage: simplicial_complexes.NotIConnectedGraphs(5, 2).f_vector()
[1, 10, 45, 120, 210, 240, 140, 20]
sage: simplicial_complexes.NotIConnectedGraphs(5, 2).homology(5).ngens()
6
"""
G_list = range(1, n+1)
G_vertices = Set(G_list)
E_list = []
for w in G_list:
for v in range(1, w):
E_list.append((v, w))
E = Set(E_list)
facets = []
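    # A maximal not-i-connected graph arises from an (i-1)-set A whose
    # removal disconnects the remaining vertices into nonempty parts B
    # and C: its edges are all possible edges except those joining B to C.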
i_minus_one_sets = list(G_vertices.subsets(size=i-1))
for A in i_minus_one_sets:
G_minus_A = G_vertices.difference(A)
for B in G_minus_A.subsets():
if len(B) > 0 and len(B) < len(G_minus_A):
C = G_minus_A.difference(B)
facet = E
for v in B:
for w in C:
bad_edge = (min(v, w), max(v, w))
facet = facet.difference(Set([bad_edge]))
facets.append(facet)
return UniqueSimplicialComplex(facets, name='Simplicial complex of not {}-connected graphs on {} vertices'.format(i, n))
def MatchingComplex(n):
"""
The matching complex of graphs on `n` vertices.
Fix an integer `n>0` and consider a set `V` of `n` vertices.
A 'partial matching' on `V` is a graph formed by edges so that
each vertex is in at most one edge. If `G` is a partial
matching, then so is any graph obtained by deleting edges from
`G`. Thus the set of all partial matchings on `n` vertices,
viewed as a set of subsets of the `n` choose 2 possible edges,
is closed under taking subsets, and thus forms a simplicial
complex called the 'matching complex'. This function produces
that simplicial complex.
INPUT:
- ``n`` -- positive integer.
See Dumas et al. [DHSW2003]_ for information on computing its homology
by computer, and see Wachs [Wac2003]_ for an expository article about
the theory. For example, the homology of these complexes seems to
have only mod 3 torsion, and this has been proved for the
bottom non-vanishing homology group for the matching complex `M_n`.
EXAMPLES::
sage: M = simplicial_complexes.MatchingComplex(7)
sage: H = M.homology()
sage: H
{0: 0, 1: C3, 2: Z^20}
sage: H[2].ngens()
20
sage: simplicial_complexes.MatchingComplex(8).homology(2) # long time (6s on sage.math, 2012)
Z^132
"""
G_vertices = Set(range(1, n+1))
facets = []
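    # A maximal matching leaves at most one vertex unmatched, so it is
    # determined by a bipartition of (almost all of) the vertices into
    # halves A and B together with a maximal matching between them. The
    # tests involving the vertex 1 fix which half plays the role of A,
    # so each unordered bipartition is processed only once.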
if is_even(n):
half = int(n/2)
half_n_sets = list(G_vertices.subsets(size=half))
else:
half = int((n-1)/2)
half_n_sets = list(G_vertices.subsets(size=half))
for X in half_n_sets:
Xcomp = G_vertices.difference(X)
if is_even(n):
if 1 in X:
A = X
B = Xcomp
else:
A = Xcomp
B = X
for M in matching(A, B):
facet = []
for pair in M:
facet.append(tuple(sorted(pair)))
facets.append(facet)
else:
for w in Xcomp:
if 1 in X or (w == 1 and 2 in X):
A = X
B = Xcomp.difference([w])
else:
B = X
A = Xcomp.difference([w])
for M in matching(A, B):
facet = []
for pair in M:
facet.append(tuple(sorted(pair)))
facets.append(facet)
return UniqueSimplicialComplex(facets, name='Matching complex on {} vertices'.format(n))
def ChessboardComplex(n, i):
r"""
The chessboard complex for an `n \times i` chessboard.
Fix integers `n, i > 0` and consider sets `V` of `n` vertices
and `W` of `i` vertices. A 'partial matching' between `V` and
`W` is a graph formed by edges `(v, w)` with `v \in V` and `w
\in W` so that each vertex is in at most one edge. If `G` is
a partial matching, then so is any graph obtained by deleting
edges from `G`. Thus the set of all partial matchings on `V`
and `W`, viewed as a set of subsets of the `n+i` choose 2
possible edges, is closed under taking subsets, and thus forms
a simplicial complex called the 'chessboard complex'. This
function produces that simplicial complex. (It is called the
chessboard complex because such graphs also correspond to ways
of placing rooks on an `n` by `i` chessboard so that none of
them are attacking each other.)
INPUT:
- ``n, i`` -- positive integers.
See Dumas et al. [DHSW2003]_ for information on computing its homology
by computer, and see Wachs [Wac2003]_ for an expository article about
the theory.
EXAMPLES::
sage: C = simplicial_complexes.ChessboardComplex(5, 5)
sage: C.f_vector()
[1, 25, 200, 600, 600, 120]
sage: simplicial_complexes.ChessboardComplex(3, 3).homology()
{0: 0, 1: Z x Z x Z x Z, 2: 0}
"""
A = range(n)
B = range(i)
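    # Label the squares of the n x i board with consecutive integers;
    # a facet is a maximal rook placement, that is, a maximal matching
    # between the rows A and the columns B.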
E_dict = {}
index = 0
for v in A:
for w in B:
E_dict[(v, w)] = index
index += 1
facets = []
for M in matching(A, B):
facet = []
for pair in M:
facet.append(E_dict[pair])
facets.append(facet)
return UniqueSimplicialComplex(facets, name='Chessboard complex for an {}x{} chessboard'.format(n, i))
def RandomComplex(n, d, p=0.5):
"""
A random ``d``-dimensional simplicial complex on ``n`` vertices.
INPUT:
- ``n`` -- number of vertices
- ``d`` -- dimension of the complex
- ``p`` -- floating point number between 0 and 1
(optional, default 0.5)
A random `d`-dimensional simplicial complex on `n` vertices,
as defined for example by Meshulam and Wallach [MW2009]_, is
constructed as follows: take `n` vertices and include all of
the simplices of dimension strictly less than `d`, and then for each
possible simplex of dimension `d`, include it with probability `p`.
EXAMPLES::
sage: X = simplicial_complexes.RandomComplex(6, 2); X
Random 2-dimensional simplicial complex on 6 vertices
sage: len(list(X.vertices()))
6
If `d` is too large (if `d+1 > n`, so that there are no
`d`-dimensional simplices), then return the simplicial complex
with a single `(n-1)`-dimensional simplex::
sage: simplicial_complexes.RandomComplex(6, 12)
The 5-simplex
"""
if d+1 > n:
return Simplex(n-1)
else:
vertices = range(n)
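        # Include every d-subset (a simplex of dimension d-1), giving
        # the full (d-1)-skeleton, then keep each (d+1)-subset (a
        # simplex of dimension d) with probability p.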
facets = Subsets(vertices, d).list()
maybe = Subsets(vertices, d+1)
facets.extend([f for f in maybe if random.random() <= p])
return UniqueSimplicialComplex(facets,
name='Random {}-dimensional simplicial complex on {} vertices'.format(d, n))
def SumComplex(n, A):
r"""
The sum complexes of Linial, Meshulam, and Rosenthal [LMR2010]_.
If `k+1` is the cardinality of `A`, then this returns a
`k`-dimensional simplicial complex `X_A` with vertices
`\ZZ/(n)`, and facets given by all `k+1`-tuples `(x_0, x_1,
..., x_k)` such that the sum `\sum x_i` is in `A`. See the
paper by Linial, Meshulam, and Rosenthal [LMR2010]_, in which
they prove various results about these complexes; for example,
if `n` is prime, then `X_A` is rationally acyclic, and if in
addition `A` forms an arithmetic progression in `\ZZ/(n)`,
then `X_A` is `\ZZ`-acyclic. Throughout their paper, they
assume that `n` and `k` are relatively prime, but the
construction makes sense in general.
In addition to the results from the cited paper, these
complexes can have large torsion, given the number of
vertices; for example, if `n=10`, and `A=\{0, 1, 2, 3, 6\}`, then
`H_3(X_A)` is cyclic of order 2728, and there is a
4-dimensional complex on 13 vertices with `H_3` having a
cyclic summand of order
.. MATH::
706565607945 = 3 \cdot 5 \cdot 53 \cdot 79 \cdot 131
\cdot 157 \cdot 547.
See the examples.
INPUT:
- ``n`` -- a positive integer
- ``A`` -- a subset of `\ZZ/(n)`
EXAMPLES::
sage: S = simplicial_complexes.SumComplex(10, [0, 1, 2, 3, 6]); S
Sum complex on vertices Z/10Z associated to {0, 1, 2, 3, 6}
sage: S.homology()
{0: 0, 1: 0, 2: 0, 3: C2728, 4: 0}
sage: factor(2728)
2^3 * 11 * 31
sage: S = simplicial_complexes.SumComplex(11, [0, 1, 3]); S
Sum complex on vertices Z/11Z associated to {0, 1, 3}
sage: S.homology(1)
C23
sage: S = simplicial_complexes.SumComplex(11, [0, 1, 2, 3, 4, 7]); S
Sum complex on vertices Z/11Z associated to {0, 1, 2, 3, 4, 7}
sage: S.homology(algorithm='no_chomp') # long time
{0: 0, 1: 0, 2: 0, 3: 0, 4: C645679, 5: 0}
sage: factor(645679)
23 * 67 * 419
sage: S = simplicial_complexes.SumComplex(13, [0, 1, 3]); S
Sum complex on vertices Z/13Z associated to {0, 1, 3}
sage: S.homology(1)
C159
sage: factor(159)
3 * 53
sage: S = simplicial_complexes.SumComplex(13, [0, 1, 2, 5]); S
Sum complex on vertices Z/13Z associated to {0, 1, 2, 5}
sage: S.homology(algorithm='no_chomp') # long time
{0: 0, 1: 0, 2: C146989209, 3: 0}
sage: factor(1648910295)
3^2 * 5 * 53 * 521 * 1327
sage: S = simplicial_complexes.SumComplex(13, [0, 1, 2, 3, 5]); S
Sum complex on vertices Z/13Z associated to {0, 1, 2, 3, 5}
sage: S.homology(algorithm='no_chomp') # long time
{0: 0, 1: 0, 2: 0, 3: C3 x C237 x C706565607945, 4: 0}
sage: factor(706565607945)
3 * 5 * 53 * 79 * 131 * 157 * 547
sage: S = simplicial_complexes.SumComplex(17, [0, 1, 4]); S
Sum complex on vertices Z/17Z associated to {0, 1, 4}
sage: S.homology(1, algorithm='no_chomp')
C140183
sage: factor(140183)
103 * 1361
sage: S = simplicial_complexes.SumComplex(19, [0, 1, 4]); S
Sum complex on vertices Z/19Z associated to {0, 1, 4}
sage: S.homology(1, algorithm='no_chomp')
C5670599
sage: factor(5670599)
11 * 191 * 2699
sage: S = simplicial_complexes.SumComplex(31, [0, 1, 4]); S
Sum complex on vertices Z/31Z associated to {0, 1, 4}
sage: S.homology(1, algorithm='no_chomp') # long time
C5 x C5 x C5 x C5 x C26951480558170926865
sage: factor(26951480558170926865)
5 * 311 * 683 * 1117 * 11657 * 1948909
"""
from sage.rings.finite_rings.integer_mod_ring import Integers
Zn = Integers(n)
A = frozenset([Zn(x) for x in A])
facets = []
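    # With |A| = k + 1, the facets are the (k+1)-subsets of Z/n whose
    # element sum lies in A.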
for f in Set(Zn).subsets(len(A)):
if sum(f) in A:
facets.append(tuple(f))
return UniqueSimplicialComplex(facets, name='Sum complex on vertices Z/{}Z associated to {}'.format(n, Set(A)))
def RandomTwoSphere(n):
r"""
Return a random triangulation of the 2-dimensional sphere with `n`
vertices.
INPUT:
- ``n`` -- an integer
OUTPUT:
A random triangulation of the sphere chosen uniformly among
the *rooted* triangulations on `n` vertices. Because some
triangulations have nontrivial automorphism groups, this may
not be equal to the uniform distribution among unrooted
triangulations.
ALGORITHM:
The algorithm is taken from [PS2006]_, section 2.1.
Starting from a planar tree (represented by its contour as a
sequence of vertices), one first performs local closures, until none
is possible. A local closure amounts to replacing, in the cyclic
contour word, a sequence ``in1, in2, in3, lf, in3`` by
``in1, in3``. After all local closures are done, one has reached
the partial closure, as in [PS2006]_, figure 5 (a).
Then one has to perform complete closure by adding two more
vertices, in order to reach the situation of [PS2006]_, figure 5
(b). For this, it is necessary to find inside the final contour
one of the two subsequences ``lf, in, lf``.
At every step of the algorithm, newly created triangles are added
in a simplicial complex.
This algorithm is implemented in
:meth:`~sage.graphs.generators.random.RandomTriangulation`, which
creates an embedded graph. The triangles of the simplicial
complex are recovered from this embedded graph.
EXAMPLES::
sage: G = simplicial_complexes.RandomTwoSphere(6); G
Simplicial complex with vertex set (0, 1, 2, 3, 4, 5) and 8 facets
sage: G.homology()
{0: 0, 1: 0, 2: Z}
sage: G.is_pure()
True
sage: fg = G.flip_graph(); fg
Graph on 8 vertices
sage: fg.is_planar() and fg.is_regular(3)
True
"""
from sage.graphs.generators.random import RandomTriangulation
graph = RandomTriangulation(n)
graph = graph.relabel(inplace=False)
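    # Read the triangles off the combinatorial embedding: consecutive
    # neighbours v, w in the cyclic rotation around u bound a face, and
    # the condition u < v, u < w records each face exactly once, at its
    # smallest vertex.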
triangles = [(u, v, w) for u, L in graph._embedding.items()
for v, w in zip(L, L[1:] + [L[0]]) if u < v and u < w]
return SimplicialComplex(triangles, maximality_check=False)
def ShiftedComplex(generators):
r"""
Return the smallest shifted simplicial complex containing ``generators``
as faces.
Let `V` be a set of vertices equipped with a total order. The
'componentwise partial ordering' on k-subsets of `V` is defined as
follows: if `A = \{a_1 < \cdots < a_k\}` and `B = \{b_1 < \cdots < b_k\}`,
then `A \leq_C B` iff `a_i \leq b_i` for all `i`. A simplicial complex
`X` on vertex set `[n]` is *shifted* if its faces form an order ideal
under the componentwise partial ordering, i.e., if `B \in X` and
`A \leq_C B` then `A \in X`. Shifted complexes of dimension 1 are also
known as threshold graphs.
.. NOTE::
This method assumes that `V` consists of positive integers
with the natural ordering.
INPUT:
- ``generators`` -- a list of generators of the order ideal, which may
be lists, tuples or simplices
EXAMPLES::
sage: X = simplicial_complexes.ShiftedComplex([ Simplex([1, 6]), (2, 4), [8] ])
sage: sorted(X.facets())
[(1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 3), (2, 4), (7,), (8,)]
sage: X = simplicial_complexes.ShiftedComplex([ [2, 3, 5] ])
sage: sorted(X.facets())
[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (2, 3, 4), (2, 3, 5)]
sage: X = simplicial_complexes.ShiftedComplex([ [1, 3, 5], [2, 6] ])
sage: sorted(X.facets())
[(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 6), (2, 6)]
"""
from sage.combinat.partition import Partitions
Facets = []
for G in generators:
G = list(reversed(sorted(G)))
L = len(G)
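        # Written in decreasing order, the faces F <=_C G of size L are
        # exactly the partitions with L distinct parts (max_slope=-1)
        # bounded entrywise by G (outer=G); the least possible sum is
        # 1 + 2 + ... + L = L(L+1)/2.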
for k in range(L * (L+1) // 2, sum(G) + 1):
for P in Partitions(k, length=L, max_slope=-1, outer=G):
Facets.append(list(reversed(P)))
return SimplicialComplex(Facets)
def RudinBall():
r"""
Return the non-shellable ball constructed by Rudin.
This complex is a non-shellable triangulation of the 3-ball
with 14 vertices and 41 facets, constructed by Rudin in
[Rud1958]_.
EXAMPLES::
sage: R = simplicial_complexes.RudinBall(); R
Rudin ball
sage: R.f_vector()
[1, 14, 66, 94, 41]
sage: R.homology()
{0: 0, 1: 0, 2: 0, 3: 0}
sage: R.is_cohen_macaulay()
True
"""
return UniqueSimplicialComplex(
[[1, 9, 2, 5], [1, 10, 2, 5], [1, 10, 5, 11], [1, 10, 7, 11], [1, 13, 5, 11],
[1, 13, 7, 11], [2, 10, 3, 6], [2, 11, 3, 6], [2, 11, 6, 12], [2, 11, 8, 12],
[2, 14, 6, 12], [2, 14, 8, 12], [3, 11, 4, 7], [3, 12, 4, 7], [3, 12, 5, 9],
[3, 12, 7, 9], [3, 13, 5, 9], [3, 13, 7, 9], [4, 9, 1, 8], [4, 9, 6, 10],
[4, 9, 8, 10], [4, 12, 1, 8], [4, 14, 6, 10], [4, 14, 8, 10], [9, 10, 2, 5],
[9, 10, 2, 6], [9, 10, 5, 11], [9, 10, 11, 12], [9, 13, 5, 11], [10, 11, 3, 6],
[10, 11, 3, 7], [10, 11, 6, 12], [10, 14, 6, 12], [11, 12, 4, 7], [11, 12, 4, 8],
[11, 12, 7, 9], [11, 13, 7, 9], [12, 9, 1, 5], [12, 9, 1, 8], [12, 9, 8, 10],
[12, 14, 8, 10]],
name="<NAME>"
)
def ZieglerBall():
r"""
Return the non-shellable ball constructed by Ziegler.
This complex is a non-shellable triangulation of the 3-ball
with 10 vertices and 21 facets, constructed by Ziegler in
[Zie1998]_; it is the smallest such complex known.
EXAMPLES::
sage: Z = simplicial_complexes.ZieglerBall(); Z
Ziegler ball
sage: Z.f_vector()
[1, 10, 38, 50, 21]
sage: Z.homology()
{0: 0, 1: 0, 2: 0, 3: 0}
sage: Z.is_cohen_macaulay()
True
"""
return UniqueSimplicialComplex(
[[1, 2, 3, 4], [1, 2, 5, 6], [1, 5, 6, 9], [2, 5, 6, 0], [3, 6, 7, 8], [4, 5, 7, 8],
[2, 3, 6, 7], [1, 6, 2, 9], [2, 6, 7, 0], [3, 2, 4, 8], [4, 1, 3, 7], [3, 4, 7, 8],
[1, 2, 4, 9], [2, 7, 3, 0], [3, 2, 6, 8], [4, 1, 5, 7], [4, 1, 8, 5], [1, 4, 8, 9],
[2, 3, 1, 0], [1, 8, 5, 9], [2, 1, 5, 0]],
name="Ziegler ball"
)
def DunceHat():
r"""
Return the minimal triangulation of the dunce hat given by Hachimori
[Hac2016]_.
This is a standard example of a space that is contractible
but not collapsible.
EXAMPLES::
sage: D = simplicial_complexes.DunceHat(); D
Minimal triangulation of the dunce hat
sage: D.f_vector()
[1, 8, 24, 17]
sage: D.homology()
{0: 0, 1: 0, 2: 0}
sage: D.is_cohen_macaulay()
True
"""
return UniqueSimplicialComplex(
[[1, 3, 5], [2, 3, 5], [2, 4, 5], [1, 2, 4], [1, 3, 4], [3, 4, 8],
[1, 2, 8], [1, 7, 8], [1, 2, 7], [2, 3, 7], [3, 6, 7], [1, 3, 6],
[1, 5, 6], [4, 5, 6], [4, 6, 8], [6, 7, 8], [2, 3, 8]],
name="Minimal triangulation of the dunce hat"
)
def FareyMap(p):
r"""
Return a discrete surface associated with `PSL(2, \GF(p))`.
INPUT:
- ``p`` -- a prime number
The vertices are the non-zero pairs `(x,y)` in `\GF(p)^2` modulo
the identification of `(-x, -y)` with `(x,y)`.
The triangles are the images of the base triangle ((1,0),(0,1),(1,1))
under the action of `PSL(2, \GF(p))`.
For `p = 3`, the result is a tetrahedron, for `p = 5` an icosahedron,
and for `p = 7` a triangulation of the Klein quartic of genus `3`.
As a Riemann surface, this is the quotient of the upper half plane
by the principal congruence subgroup `\Gamma(p)`.
EXAMPLES::
sage: S5 = simplicial_complexes.FareyMap(5); S5
Simplicial complex with 12 vertices and 20 facets
sage: S5.automorphism_group().cardinality()
120
sage: S7 = simplicial_complexes.FareyMap(7); S7
Simplicial complex with 24 vertices and 56 facets
sage: S7.f_vector()
[1, 24, 84, 56]
REFERENCES:
- [ISS2019] Ioannis Ivrissimtzis, David Singerman and James Strudwick,
*From Farey Fractions to the Klein Quartic and Beyond*.
:arxiv:`1909.08568`
"""
from sage.combinat.permutation import Permutation
from sage.groups.perm_gps.permgroup import PermutationGroup
from sage.matrix.constructor import matrix
from sage.modules.free_module_element import vector
from sage.rings.finite_rings.finite_field_constructor import GF
from sage.libs.gap.libgap import libgap
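    # Vertices are nonzero pairs (x, y) in GF(p)^2 modulo the
    # identification (x, y) ~ (-x, -y); normalise picks the
    # representative whose first nonzero coordinate lies in
    # {1, ..., (p-1)/2}.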
def normalise(pair):
x, y = pair
if x != 0 and p - x < x:
return ((-x) % p, (-y) % p)
elif x == 0 and p - y < y:
return (0, (-y) % p)
return (x, y)
points = [(x, y) for x in range(p) for y in range(p)
if (x, y) != (0, 0) and
(x != 0 and p - x >= x or (x == 0 and p - y >= y))]
convert = {pt: i + 1 for i, pt in enumerate(points)}
F = GF(p)
S = matrix(F, 2, 2, [0, -1, 1, 0])
T = matrix(F, 2, 2, [1, 1, 0, 1])
perm_S = Permutation([convert[normalise(S * vector(pt))]
for pt in points])
perm_T = Permutation([convert[normalise(T * vector(pt))]
for pt in points])
group = PermutationGroup([perm_S, perm_T])
triangle = [convert[normalise(pt)] for pt in [(1, 0), (0, 1), (1, 1)]]
triangle = libgap.Set(triangle)
triangles = libgap.Orbit(group, triangle, libgap.OnSets).sage()
return SimplicialComplex(triangles)
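
# A quick consistency check for the FareyMap(7) doctest values above
# (illustrative): the Euler characteristic is 24 - 84 + 56 = -4 = 2 - 2g,
# giving genus g = 3, as expected for the Klein quartic.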
|
<reponame>MarvinHe/beanEater<filename>utils.py
import os
import pygame
from pygame.locals import *
data_path = "data"
class Color:
BLACK = (0, 0, 0)
WHITE = (244, 244, 244)
def load_image(name, colorkey=None):
fullname = os.path.join(data_path, name)
    try:
        image = pygame.image.load(fullname)
    except pygame.error:
        print('Cannot load image:', name)
        raise
    image = image.convert()
    if colorkey is not None:
        # A colorkey of -1 means "use the colour of the top-left pixel".
        # Compare with == rather than `is`: identity checks on ints are
        # unreliable and raise a SyntaxWarning on recent Pythons.
        if colorkey == -1:
            colorkey = image.get_at((0, 0))
        image.set_colorkey(colorkey, RLEACCEL)
return image, image.get_rect()
def load_sound(name):
class NoneSound:
def play(self): pass
if not pygame.mixer:
return NoneSound()
fullname = os.path.join(data_path, name)
try:
sound = pygame.mixer.Sound(fullname)
except pygame.error:
print('Cannot load sound:', name)
raise SystemExit
return sound
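# Minimal usage sketch (illustrative; assumes pygame is initialised, a
# display mode is set before load_image's convert() call, and that the
# "data" directory contains the named assets -- the file names here are
# hypothetical):
#
#   pygame.init()
#   screen = pygame.display.set_mode((320, 240))
#   sprite, rect = load_image("bean.png", colorkey=-1)
#   chomp = load_sound("chomp.wav")
#   screen.blit(sprite, rect)
#   chomp.play()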
|
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.27.1
// protoc v3.18.0
// source: api/blog/v1/comment.proto
package v1
import (
_ "github.com/envoyproxy/protoc-gen-validate/validate"
_ "google.golang.org/genproto/googleapis/api/annotations"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type Comment struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id int64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
Content string `protobuf:"bytes,3,opt,name=content,proto3" json:"content,omitempty"`
UpdateAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=update_at,json=updateAt,proto3" json:"update_at,omitempty"`
ArticleId int64 `protobuf:"varint,5,opt,name=article_id,json=articleId,proto3" json:"article_id,omitempty"`
}
func (x *Comment) Reset() {
*x = Comment{}
if protoimpl.UnsafeEnabled {
mi := &file_api_blog_v1_comment_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Comment) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Comment) ProtoMessage() {}
func (x *Comment) ProtoReflect() protoreflect.Message {
mi := &file_api_blog_v1_comment_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Comment.ProtoReflect.Descriptor instead.
func (*Comment) Descriptor() ([]byte, []int) {
return file_api_blog_v1_comment_proto_rawDescGZIP(), []int{0}
}
func (x *Comment) GetId() int64 {
if x != nil {
return x.Id
}
return 0
}
func (x *Comment) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *Comment) GetContent() string {
if x != nil {
return x.Content
}
return ""
}
func (x *Comment) GetUpdateAt() *timestamppb.Timestamp {
if x != nil {
return x.UpdateAt
}
return nil
}
func (x *Comment) GetArticleId() int64 {
if x != nil {
return x.ArticleId
}
return 0
}
type CreateCommentRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // the title of string must be between 5 and 50 character
Content string `protobuf:"bytes,2,opt,name=content,proto3" json:"content,omitempty"`
ArticleId int64 `protobuf:"varint,3,opt,name=article_id,json=articleId,proto3" json:"article_id,omitempty"`
}
func (x *CreateCommentRequest) Reset() {
*x = CreateCommentRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_api_blog_v1_comment_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CreateCommentRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateCommentRequest) ProtoMessage() {}
func (x *CreateCommentRequest) ProtoReflect() protoreflect.Message {
mi := &file_api_blog_v1_comment_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateCommentRequest.ProtoReflect.Descriptor instead.
func (*CreateCommentRequest) Descriptor() ([]byte, []int) {
return file_api_blog_v1_comment_proto_rawDescGZIP(), []int{1}
}
func (x *CreateCommentRequest) GetName() string {
if x != nil {
return x.Name
}
return ""
}
func (x *CreateCommentRequest) GetContent() string {
if x != nil {
return x.Content
}
return ""
}
func (x *CreateCommentRequest) GetArticleId() int64 {
if x != nil {
return x.ArticleId
}
return 0
}
type CreateCommentReply struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Comment *Comment `protobuf:"bytes,1,opt,name=comment,proto3" json:"comment,omitempty"`
}
func (x *CreateCommentReply) Reset() {
*x = CreateCommentReply{}
if protoimpl.UnsafeEnabled {
mi := &file_api_blog_v1_comment_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CreateCommentReply) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateCommentReply) ProtoMessage() {}
func (x *CreateCommentReply) ProtoReflect() protoreflect.Message {
mi := &file_api_blog_v1_comment_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateCommentReply.ProtoReflect.Descriptor instead.
func (*CreateCommentReply) Descriptor() ([]byte, []int) {
return file_api_blog_v1_comment_proto_rawDescGZIP(), []int{2}
}
func (x *CreateCommentReply) GetComment() *Comment {
if x != nil {
return x.Comment
}
return nil
}
type ListCommentReq struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ArticleId int64 `protobuf:"varint,1,opt,name=article_id,json=articleId,proto3" json:"article_id,omitempty"`
}
func (x *ListCommentReq) Reset() {
*x = ListCommentReq{}
if protoimpl.UnsafeEnabled {
mi := &file_api_blog_v1_comment_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ListCommentReq) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListCommentReq) ProtoMessage() {}
func (x *ListCommentReq) ProtoReflect() protoreflect.Message {
mi := &file_api_blog_v1_comment_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListCommentReq.ProtoReflect.Descriptor instead.
func (*ListCommentReq) Descriptor() ([]byte, []int) {
return file_api_blog_v1_comment_proto_rawDescGZIP(), []int{3}
}
func (x *ListCommentReq) GetArticleId() int64 {
if x != nil {
return x.ArticleId
}
return 0
}
type ListCommentReply struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Comments []*Comment `protobuf:"bytes,1,rep,name=comments,proto3" json:"comments,omitempty"`
}
func (x *ListCommentReply) Reset() {
*x = ListCommentReply{}
if protoimpl.UnsafeEnabled {
mi := &file_api_blog_v1_comment_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ListCommentReply) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListCommentReply) ProtoMessage() {}
func (x *ListCommentReply) ProtoReflect() protoreflect.Message {
mi := &file_api_blog_v1_comment_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListCommentReply.ProtoReflect.Descriptor instead.
func (*ListCommentReply) Descriptor() ([]byte, []int) {
return file_api_blog_v1_comment_proto_rawDescGZIP(), []int{4}
}
func (x *ListCommentReply) GetComments() []*Comment {
if x != nil {
return x.Comments
}
return nil
}
var File_api_blog_v1_comment_proto protoreflect.FileDescriptor
var file_api_blog_v1_comment_proto_rawDesc = []byte{
0x0a, 0x19, 0x61, 0x70, 0x69, 0x2f, 0x62, 0x6c, 0x6f, 0x67, 0x2f, 0x76, 0x31, 0x2f, 0x63, 0x6f,
0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0b, 0x61, 0x70, 0x69,
0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x1a, 0x1c, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d,
0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x17, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74,
0x65, 0x2f, 0x76, 0x61, 0x6c, 0x69, 0x64, 0x61, 0x74, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x22, 0x9f, 0x01, 0x0a, 0x07, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x12, 0x0e, 0x0a, 0x02,
0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x02, 0x69, 0x64, 0x12, 0x12, 0x0a, 0x04,
0x6e, 0x61, 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65,
0x12, 0x18, 0x0a, 0x07, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x18, 0x03, 0x20, 0x01, 0x28,
0x09, 0x52, 0x07, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x12, 0x37, 0x0a, 0x09, 0x75, 0x70,
0x64, 0x61, 0x74, 0x65, 0x5f, 0x61, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e,
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x08, 0x75, 0x70, 0x64, 0x61, 0x74,
0x65, 0x41, 0x74, 0x12, 0x1d, 0x0a, 0x0a, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x5f, 0x69,
0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65,
0x49, 0x64, 0x22, 0x7a, 0x0a, 0x14, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x43, 0x6f, 0x6d, 0x6d,
0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x1d, 0x0a, 0x04, 0x6e, 0x61,
0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x42, 0x09, 0xfa, 0x42, 0x06, 0x72, 0x04, 0x10,
0x05, 0x18, 0x32, 0x52, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x24, 0x0a, 0x07, 0x63, 0x6f, 0x6e,
0x74, 0x65, 0x6e, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x0a, 0xfa, 0x42, 0x07, 0x72,
0x05, 0x10, 0x00, 0x18, 0xc8, 0x01, 0x52, 0x07, 0x63, 0x6f, 0x6e, 0x74, 0x65, 0x6e, 0x74, 0x12,
0x1d, 0x0a, 0x0a, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x5f, 0x69, 0x64, 0x18, 0x03, 0x20,
0x01, 0x28, 0x03, 0x52, 0x09, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x49, 0x64, 0x22, 0x44,
0x0a, 0x12, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52,
0x65, 0x70, 0x6c, 0x79, 0x12, 0x2e, 0x0a, 0x07, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x18,
0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67,
0x2e, 0x76, 0x31, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x07, 0x63, 0x6f, 0x6d,
0x6d, 0x65, 0x6e, 0x74, 0x22, 0x2f, 0x0a, 0x0e, 0x4c, 0x69, 0x73, 0x74, 0x43, 0x6f, 0x6d, 0x6d,
0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x12, 0x1d, 0x0a, 0x0a, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c,
0x65, 0x5f, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x09, 0x61, 0x72, 0x74, 0x69,
0x63, 0x6c, 0x65, 0x49, 0x64, 0x22, 0x44, 0x0a, 0x10, 0x4c, 0x69, 0x73, 0x74, 0x43, 0x6f, 0x6d,
0x6d, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x70, 0x6c, 0x79, 0x12, 0x30, 0x0a, 0x08, 0x63, 0x6f, 0x6d,
0x6d, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x61, 0x70,
0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e,
0x74, 0x52, 0x08, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x73, 0x32, 0xf2, 0x01, 0x0a, 0x0e,
0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x6b,
0x0a, 0x0d, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x12,
0x21, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72,
0x65, 0x61, 0x74, 0x65, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x1a, 0x1f, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31,
0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x65,
0x70, 0x6c, 0x79, 0x22, 0x16, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x10, 0x22, 0x0b, 0x2f, 0x76, 0x31,
0x2f, 0x63, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x3a, 0x01, 0x2a, 0x12, 0x73, 0x0a, 0x12, 0x4c,
0x69, 0x73, 0x74, 0x41, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e,
0x74, 0x12, 0x1b, 0x2e, 0x61, 0x70, 0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e,
0x4c, 0x69, 0x73, 0x74, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x71, 0x1a, 0x1d,
0x2e, 0x61, 0x70, 0x69, 0x2e, 0x62, 0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73,
0x74, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x52, 0x65, 0x70, 0x6c, 0x79, 0x22, 0x21, 0x82,
0xd3, 0xe4, 0x93, 0x02, 0x1b, 0x12, 0x19, 0x2f, 0x76, 0x31, 0x2f, 0x63, 0x6f, 0x6d, 0x6d, 0x65,
0x6e, 0x74, 0x73, 0x2f, 0x7b, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x5f, 0x69, 0x64, 0x7d,
0x42, 0x56, 0x0a, 0x12, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x62,
0x6c, 0x6f, 0x67, 0x2e, 0x76, 0x31, 0x42, 0x0e, 0x43, 0x6f, 0x6d, 0x6d, 0x65, 0x6e, 0x74, 0x50,
0x72, 0x6f, 0x74, 0x6f, 0x56, 0x31, 0x50, 0x01, 0x5a, 0x2e, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x64, 0x65, 0x76, 0x68, 0x67, 0x2f, 0x6b, 0x72, 0x61, 0x74, 0x6f,
0x73, 0x2d, 0x65, 0x78, 0x61, 0x6d, 0x70, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x62, 0x6c,
0x6f, 0x67, 0x2f, 0x76, 0x31, 0x3b, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_api_blog_v1_comment_proto_rawDescOnce sync.Once
file_api_blog_v1_comment_proto_rawDescData = file_api_blog_v1_comment_proto_rawDesc
)
func file_api_blog_v1_comment_proto_rawDescGZIP() []byte {
file_api_blog_v1_comment_proto_rawDescOnce.Do(func() {
file_api_blog_v1_comment_proto_rawDescData = protoimpl.X.CompressGZIP(file_api_blog_v1_comment_proto_rawDescData)
})
return file_api_blog_v1_comment_proto_rawDescData
}
var file_api_blog_v1_comment_proto_msgTypes = make([]protoimpl.MessageInfo, 5)
var file_api_blog_v1_comment_proto_goTypes = []interface{}{
(*Comment)(nil), // 0: api.blog.v1.Comment
(*CreateCommentRequest)(nil), // 1: api.blog.v1.CreateCommentRequest
(*CreateCommentReply)(nil), // 2: api.blog.v1.CreateCommentReply
(*ListCommentReq)(nil), // 3: api.blog.v1.ListCommentReq
(*ListCommentReply)(nil), // 4: api.blog.v1.ListCommentReply
(*timestamppb.Timestamp)(nil), // 5: google.protobuf.Timestamp
}
var file_api_blog_v1_comment_proto_depIdxs = []int32{
5, // 0: api.blog.v1.Comment.update_at:type_name -> google.protobuf.Timestamp
0, // 1: api.blog.v1.CreateCommentReply.comment:type_name -> api.blog.v1.Comment
0, // 2: api.blog.v1.ListCommentReply.comments:type_name -> api.blog.v1.Comment
1, // 3: api.blog.v1.CommentService.CreateComment:input_type -> api.blog.v1.CreateCommentRequest
3, // 4: api.blog.v1.CommentService.ListArticleComment:input_type -> api.blog.v1.ListCommentReq
2, // 5: api.blog.v1.CommentService.CreateComment:output_type -> api.blog.v1.CreateCommentReply
4, // 6: api.blog.v1.CommentService.ListArticleComment:output_type -> api.blog.v1.ListCommentReply
5, // [5:7] is the sub-list for method output_type
3, // [3:5] is the sub-list for method input_type
3, // [3:3] is the sub-list for extension type_name
3, // [3:3] is the sub-list for extension extendee
0, // [0:3] is the sub-list for field type_name
}
func init() { file_api_blog_v1_comment_proto_init() }
func file_api_blog_v1_comment_proto_init() {
if File_api_blog_v1_comment_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_api_blog_v1_comment_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Comment); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_api_blog_v1_comment_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CreateCommentRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_api_blog_v1_comment_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CreateCommentReply); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_api_blog_v1_comment_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ListCommentReq); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_api_blog_v1_comment_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ListCommentReply); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_api_blog_v1_comment_proto_rawDesc,
NumEnums: 0,
NumMessages: 5,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_api_blog_v1_comment_proto_goTypes,
DependencyIndexes: file_api_blog_v1_comment_proto_depIdxs,
MessageInfos: file_api_blog_v1_comment_proto_msgTypes,
}.Build()
File_api_blog_v1_comment_proto = out.File
file_api_blog_v1_comment_proto_rawDesc = nil
file_api_blog_v1_comment_proto_goTypes = nil
file_api_blog_v1_comment_proto_depIdxs = nil
}
|
Cystic duplication cyst of ascending colon in an adult Gastrointestinal duplication cysts are rare congenital abnormalities that are usually seen in childhood. Colonic duplication cyst is very rare in adults and is usually asymptomatic. We report a 42-year-old female with a duplication cyst in the proximal ascending colon who presented with recurrent episodes of colicky abdominal pain. The cyst could be well visualized on colonoscopy and the patient underwent successful right hemicolectomy with ileotransverse anastomosis.
|
<gh_stars>0
/**
*
*/
package net.thenumenorean.essence.pl;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.bson.Document;
import net.thenumenorean.essence.MongoDriver;
/**
* @author <NAME>
*
*/
public class RandomPlaylist extends PlaylistGenerator {
public RandomPlaylist(MongoDriver md) {
super(md);
}
@Override
public List<Document> generatePlaylist(List<Document> currentPlaylist, List<Document> requests) {
List<Document> docs = new ArrayList<Document>();
        // Note: shuffles the caller's request list in place.
        Collections.shuffle(requests);
int rank = 0;
for (Document d : requests)
docs.add(createFromRequest(d, rank++));
return docs;
}
}
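// Illustrative use (construction of the MongoDriver and the request
// documents is assumed to happen elsewhere in the application):
//
//   PlaylistGenerator generator = new RandomPlaylist(md);
//   List<Document> playlist = generator.generatePlaylist(current, requests);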
|
import { IsNotEmpty, IsString } from "class-validator";

export class Company {
    @IsString()
    @IsNotEmpty()
    name: string;

    @IsString()
    @IsNotEmpty()
    catchPhrase: string;

    @IsString()
    @IsNotEmpty()
    bs: string;
}
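// Validation sketch using class-validator's `validate` (illustrative
// values):
//
//   import { validate } from "class-validator";
//
//   const company = new Company();
//   company.name = "Acme";
//   company.catchPhrase = "We dig it";
//   company.bs = "synergy";
//   validate(company).then((errors) => {
//     // `errors` is empty when every decorated constraint passes
//     console.log(errors.length === 0 ? "valid" : errors);
//   });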
|
import React, { useEffect, useState } from 'react';
import clsx from 'clsx';
import { useSelector } from 'react-redux';
import TransactionListElement from '../TransactionList/TransactionListElement';
import {
TransferTransactionWithNames,
TransactionStatus,
TransferTransaction,
} from '~/utils/types';
import { isFailed } from '~/utils/transactionHelpers';
import SidedRow from '~/components/SidedRow';
import CopyButton from '~/components/CopyButton';
import { rejectReasonToDisplayText } from '~/utils/node/RejectReasonHelper';
import { transactionsSelector } from '~/features/TransactionSlice';
import styles from './TransactionView.module.scss';
import CloseButton from '~/cross-app-components/CloseButton';
interface Props {
transaction: TransferTransactionWithNames;
onClose?(): void;
}
interface CopiableListElementProps {
title: string;
value: string;
note?: string;
}
/**
 * Displays the title (and the note) and contains a CopyButton that, when
 * pressed, copies the given value into the user's clipboard.
*/
function CopiableListElement({
title,
value,
note,
}: CopiableListElementProps): JSX.Element {
return (
<SidedRow
className={styles.listElement}
left={
<div className={styles.copiableListElementLeftSide}>
<p className={styles.copiableListElementTitle}>{title}</p>
{'\n'}
<p className="body4 m0 mT5">
{value} {note ? `(${note})` : undefined}
</p>
</div>
}
right={<CopyButton value={value} />}
/>
);
}
function displayRejectReason(transaction: TransferTransactionWithNames) {
if (isFailed(transaction)) {
return (
<p className={clsx(styles.errorMessage, 'mT0')}>
Failed:{' '}
{transaction.status === TransactionStatus.Rejected
? 'Transaction was rejected'
: rejectReasonToDisplayText(transaction.rejectReason)}
</p>
);
}
return null;
}
/**
* Detailed view of the given transaction.
*/
function TransactionView({ transaction, onClose }: Props) {
const transactions = useSelector(transactionsSelector);
const [
chosenTransaction,
setChosenTransaction,
] = useState<TransferTransaction>(transaction);
useEffect(() => {
if (chosenTransaction) {
const upToDateChosenTransaction = transactions.find(
(t) => t.transactionHash === chosenTransaction.transactionHash
);
if (upToDateChosenTransaction) {
setChosenTransaction(upToDateChosenTransaction);
}
}
}, [transactions, chosenTransaction, setChosenTransaction]);
return (
<div className={styles.root}>
<h3 className={styles.title}>Transaction details</h3>
<CloseButton className={styles.closeButton} onClick={onClose} />
<TransactionListElement
className={styles.transactionListElement}
style={{}}
transaction={transaction}
showDate
showFullMemo
/>
{displayRejectReason(transaction)}
{!!transaction.fromAddress && (
<CopiableListElement
title="From Address:"
value={`${transaction.fromAddress}`}
note={transaction.fromName}
/>
)}
{transaction.toAddress ? (
<CopiableListElement
title="To Address:"
value={`${transaction.toAddress}`}
note={transaction.toName}
/>
) : null}
<CopiableListElement
title="Transaction Hash"
value={transaction.transactionHash || 'No Transaction.'}
/>
<CopiableListElement
title="Block Hash"
value={transaction.blockHash || 'Awaiting finalization'}
/>
</div>
);
}
export default TransactionView;
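// Example usage (illustrative; the transaction prop and close handler come
// from the surrounding app's state):
//
//   <TransactionView
//       transaction={selectedTransaction}
//       onClose={() => setSelectedTransaction(undefined)}
//   />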
|
The present study aimed to answer the following questions: 1. does the secretion of volume-related hormones in patients with EH before and after treatment with captopril differ from that in normotensive subjects when examined under thermal dehydration conditions; 2. is the electrolyte composition of thermal sweat related to the plasma profile of volume-related hormones; and 3. does treatment with captopril influence sweat electrolytes in EH patients? In 16 patients with EH and in 20 healthy subjects a thermal dehydration test was performed. In patients with EH this test was done twice: before treatment and after 6 weeks of captopril therapy. In all subjects plasma renin activity (PRA), aldosterone (Ald), AVP and ANP were measured before and after thermal dehydration. In sweat samples collected after 15 and 45 minutes of thermal dehydration the concentrations of Na, K and Cl were assessed. In hypertensive patients before captopril treatment significantly higher values of PRA, Ald and ANP were found, while sweat concentrations of Na and Cl were significantly lower than in controls. After captopril treatment sweat electrolyte concentrations showed a tendency to normalize. No significant correlation was found between the plasma hormonal profile and sweat Na, K and Cl concentrations, both in controls and in patients with EH before treatment. A significant positive correlation was noticed only in hypertensive patients post-treatment, between plasma aldosterone and sweat Na and Cl concentrations respectively. The results obtained in this study show that volume-related hormones (Ald, AVP, ANP) do not seem to markedly influence the electrolyte composition of thermal sweat, either in healthy subjects or in hypertensive patients.
|
import lighthouse from 'lighthouse/lighthouse-core';
import { launch, LaunchedChrome } from 'chrome-launcher';
import { JSReport, NetworkAsset } from './types';
import { DataPoint } from '../datadog/datadog-utils';
import { METRIC_SCORE_MAP } from '../config/metric-map';
import _get from 'lodash.get';
export interface LighthouseRunSettings {
url: string;
metricNamespace: string;
customLighthouseConfig?: Record<string, unknown>;
}
export interface LighthouseRunResults {
jsReport: JSReport;
htmlReport: string;
}
export const runLighthouse = async (
chrome: LaunchedChrome,
settings: LighthouseRunSettings
): Promise<LighthouseRunResults> => {
const { url, customLighthouseConfig } = settings;
const flags = { logLevel: 'error', output: 'html', port: chrome.port };
const runnerResult = await lighthouse(url, flags, customLighthouseConfig);
return {
jsReport: runnerResult.lhr,
htmlReport: runnerResult.report
};
};
export const launchChrome = async (): Promise<LaunchedChrome> => {
return launch({
chromeFlags: ['--headless', '--no-sandbox']
});
};
export const scoresToDataPoints = (jsReport: JSReport): DataPoint[] => {
return [
...pickReportScores(jsReport),
{
metricName: 'total_bundle_size',
value: `${getScriptsSize(jsReport)}`
}
];
};
const pickReportScores = (jsReport: JSReport) =>
Object.entries(METRIC_SCORE_MAP).map(([metricName, scorePath]) => ({
metricName,
value: _get(jsReport, scorePath)
}));
const getScriptsSize = (jsReport: JSReport): number =>
(jsReport.audits['network-requests'] as Record<
string,
{ items: NetworkAsset[] }
>).details.items
.filter((asset: NetworkAsset) => asset.resourceType === 'Script')
.reduce(
(totalSize: number, asset: NetworkAsset) =>
totalSize + asset.transferSize,
0
);
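// End-to-end sketch (illustrative; the URL and metric namespace are made
// up, and forwarding to Datadog is left to the caller):
//
//   const main = async () => {
//     const chrome = await launchChrome();
//     try {
//       const { jsReport } = await runLighthouse(chrome, {
//         url: 'https://example.com',
//         metricNamespace: 'myapp.lighthouse',
//       });
//       const dataPoints = scoresToDataPoints(jsReport);
//       // forward dataPoints to Datadog here
//     } finally {
//       await chrome.kill();
//     }
//   };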
|
This paper reports the results of surgical treatment of 39 patients with diseases of the operated stomach, including 26 (66.7%) men and 13 (33.3%) women aged 31-70 (mean 48.2 +/- 13.1) years. Their comprehensive examination revealed restoration of gastric stump tone and peristalsis by the end of the 6th month after surgery. The newly formed invagination canal became functionally active and peristaltic, did not hinder the natural passage of food, and prevented retrograde flow of food and bile from the lower to the upper segments of the gastrointestinal tract.
|
Gov. Rick Scott on Monday signed into law legislation that reduces unemployment benefits in the state when the jobless rate falls to 5 percent or lower.
The new law also makes it more difficult for unemployed residents to qualify for unemployment benefits, denying them for misconduct such as chronic absenteeism or violation of an employer’s rules -- even outside of work.
Republicans in both Houses argued the bill was critical to protecting Florida’s business climate. The measure also supports Scott’s budget plan to reduce unemployment taxes for employers by $630.8 million.
Democratic representatives, and civic groups including Florida New Majority, objected to the legislation, HB 7005, saying benefit cuts would disproportionately affect black, Latino and low-income families.
The new law doesn’t affect Florida residents currently on unemployment benefits. Beginning Jan. 1, 2012, the maximum number of state benefit weeks is reduced from 26 to 23, and the number of available weeks is tied to the unemployment rate on a sliding scale. If the unemployment rate is 5 percent or lower, for example, the number of available weeks is 12; if the rate is 10.5 percent or higher, it is 23.
The law also mandates an initial skills review for unemployed residents to qualify for state benefit. It also provides for registration with a one-stop career center for reemployment services and requires those receiving benefits to contact at least five prospective employers each week to continue receiving benefits.
Before the new law, Florida already had one of the strictest unemployment compensation programs in the country, said the National Employment Law Project, an advocacy group for the unemployed. The 75-year national standard for unemployment benefits has been 26 weeks.
Florida’s unemployment rate is also among the highest in the nation, at 10.6 percent in May.
|
Clinical implication of vascular and dimensional aspects of the cricothyroid space in the Turkish population. OBJECTIVE The cricothyroid space (CS) is one of the thinnest parts of the framework of the larynx. The close relation of the CS to the intralaryngeal subglottic area increases its anatomical importance. The aim of this study was to establish the topographic distribution and number of perforating vessels lying towards the intralaryngeal subglottic region and to determine the calibres of these vessels, thus providing an index for the Turkish population. METHODS In this study, autopsy materials from 5 women and 45 men that had no pathology or previous surgery in the area were examined during the period February to November 2003. All specimens were selected randomly from the criminal laboratory of the Republic of Turkey Ministry of Justice, Istanbul, Turkey. Microdissections were performed with an SMZ 10 stereomicroscope. The superficial vascular structures of the cricothyroid membrane at the CS and their crossing points (foramens) into the intralaryngeal area, together with their numbers, their localizations in relation to the midline (right/left and cranial/caudal), and their diameters, were established. RESULTS In the larynx dissections, which were made in 50 cases, a total of 180 vessels were seen. Seventy-eight vessels were situated on the midline (cranial and caudal), 53 vessels on the right side (cranial and caudal), and 49 vessels on the left side (cranial and caudal). In 20 specimens, arrangements of 2-4 vessels passed through a foramen to the intralaryngeal subglottic area. Among these foramens, 20 consisted of 2 vessels (16 cranial, 4 caudal), 4 consisted of 3 vessels (3 cranial, 1 caudal), and only one consisted of 4 vessels (cranial). CONCLUSION The cricothyroid area is an anatomical compartment enclosed by a connective tissue membrane and connected to the adjacent laryngeal region by vessels. This region is important with regard to surgical procedures, the spread of laryngeal cancer, and traumatic lesions of the larynx. Therefore, the clinical and surgical importance of the vascular anatomy and dimensions of the cricothyroid space should be emphasized in our population.
|
Synthesis and proton conductivity of Nafion with addition of CsHSO4 INTRODUCTION Anhydrous solid electrolyte membranes with high proton conductivity at intermediate temperatures (150-250 °C range) are considered key materials for obtaining high power density output in polymer electrolyte fuel cells (PEFC), such as proton exchange membrane fuel cells (PEMFC), direct alcohol fuel cells (DAFC) and direct methane fuel cells (DMEFC). High proton conductivity (~10^-2 S cm^-1) has been observed in superprotonic conductor solid acids (CSPs) such as CsHSO4 at T > 140 °C. The use of such materials in PEFCs has simplified water management and provided high current densities in the intermediate temperature range. However, CSP thin films are fragile and water soluble, and the fabrication of low-thickness (low ohmic resistance) films is a hard task. On the other hand, polymer electrolytes such as Nafion are flexible and can be obtained in a broad range of thicknesses. However, under anhydrous conditions, these membranes are electrical insulators. In this context, the fabrication of composite membranes based on the addition of CsHSO4 into the Nafion matrix at concentrations above the percolation threshold of the inorganic phase can substantially improve the proton conductivity at intermediate temperatures (T > 140 °C). Herein, we show that with the addition of CsHSO4 into Nafion at high loadings, a pronounced increase of the proton conductivity is obtained.
|
def merge(self, script: str) -> None:
    """Merge the method's input data sources according to a merge action
    script, saving the result as the method's working data."""
    # Reuse an existing working-data path, or derive a new file name from
    # the method's name and uuid.
    if self._method.working_data:
        working_path = self._method.working_data.path
    else:
        working_file = (
            f"working_{'_'.join([m.lower() for m in self._method.name.split()])}_{self._method.uuid.hex}.xlsx"
        )
        working_path = self.directory / working_file
actions = [ActionScriptModel(**{"script": script})]
merge_list = self.mthdprsr.parse_merge(actions[0], self._method.input_data)
df = self._merge_dataframes(merge_list)
df.to_excel(working_path, index=False)
working_data = DataSourceModel(**{"path": str(working_path)})
working_data.columns = self.wrangle.get_dataframe_columns(df)
working_data.preserve = [c for c in working_data.columns if c.type_field == "string"]
working_data.actions = actions
working_data.row_count = len(df)
df = self.wrangle.get_dataframe_from_datasource(working_data)
working_data.checksum = self.core.get_data_checksum(df)
self._method.working_data = working_data
update = VersionModel(**{"description": "Build merged data."})
self._method.version.append(update)
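# Hypothetical usage (the owning class and the merge-script syntax are
# assumptions, not verified against the full codebase):
#
#   builder.merge(script)  # parse the script, merge inputs, save output
#   # afterwards `self._method.working_data` points at the merged .xlsx,
#   # and a new VersionModel entry records the build step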
|
/*
* Returns a deep copy of this ListOfDenotedSpeciesTypeComponentIndexes
*/
ListOfDenotedSpeciesTypeComponentIndexes*
ListOfDenotedSpeciesTypeComponentIndexes::clone () const
{
return new ListOfDenotedSpeciesTypeComponentIndexes(*this);
}
|
#include "../pipe.h"
#include <string.h>
#include <zlib.h>
#include "niu2x/assert.h"
#include "niu2x/global.h"
namespace nx::pipe::filter {
namespace {
void zlib_setup(z_stream* strm, int level);
void zlib_cleanup(z_stream* strm);
void unzlib_setup(z_stream* strm);
void unzlib_cleanup(z_stream* strm);
} // namespace
zlib_t::zlib_t(int level)
{
zlib_ctx_ = NX_ALLOC(z_stream, 1);
NX_ASSERT(zlib_ctx_, "out of memory");
zlib_setup(reinterpret_cast<z_stream*>(zlib_ctx_), level);
}
zlib_t::~zlib_t()
{
zlib_cleanup(reinterpret_cast<z_stream*>(zlib_ctx_));
NX_FREE(zlib_ctx_);
}
unzlib_t::unzlib_t()
{
zlib_ctx_ = NX_ALLOC(z_stream, 1);
NX_ASSERT(zlib_ctx_, "out of memory");
unzlib_setup(reinterpret_cast<z_stream*>(zlib_ctx_));
}
unzlib_t::~unzlib_t()
{
unzlib_cleanup(reinterpret_cast<z_stream*>(zlib_ctx_));
NX_FREE(zlib_ctx_);
}
bool zlib_t::transform(ringbuf& rbuf, ringbuf& wbuf, bool upstream_eof)
{
auto* strm_ = reinterpret_cast<z_stream*>(zlib_ctx_);
auto input = rbuf.continuous_elems();
auto output = wbuf.continuous_slots();
strm_->avail_in = static_cast<uInt>(input.size);
int flush = (upstream_eof && rbuf.empty()) ? Z_FINISH : Z_NO_FLUSH;
strm_->next_in = const_cast<Bytef*>(input.base);
strm_->avail_out = static_cast<uInt>(output.size);
strm_->next_out = output.base;
auto ret = deflate(strm_, flush);
NX_ASSERT(ret == Z_OK || ret == Z_STREAM_END || ret == Z_BUF_ERROR,
"zlib failed");
wbuf.update_size(output.size - strm_->avail_out);
rbuf.update_size(-(input.size - strm_->avail_in));
return ret == Z_STREAM_END;
}
bool unzlib_t::transform(ringbuf& rbuf, ringbuf& wbuf, bool upstream_eof)
{
auto* strm_ = reinterpret_cast<z_stream*>(zlib_ctx_);
auto input = rbuf.continuous_elems();
auto output = wbuf.continuous_slots();
strm_->avail_in = static_cast<uInt>(input.size);
int flush = (upstream_eof && rbuf.empty()) ? Z_FINISH : Z_NO_FLUSH;
strm_->next_in = const_cast<Bytef*>(input.base);
strm_->avail_out = static_cast<uInt>(output.size);
strm_->next_out = output.base;
auto ret = inflate(strm_, flush);
NX_ASSERT(ret == Z_OK || ret == Z_STREAM_END || ret == Z_BUF_ERROR,
"unzlib failed: %d", ret);
wbuf.update_size(output.size - strm_->avail_out);
rbuf.update_size(-(input.size - strm_->avail_in));
return ret == Z_STREAM_END;
}
namespace {
void zlib_setup(z_stream* strm, int level)
{
strm->zalloc = Z_NULL;
strm->zfree = Z_NULL;
strm->opaque = Z_NULL;
auto ret = deflateInit(strm, level);
NX_ASSERT(ret == Z_OK, "zlib compress deflateInit failed");
}
void zlib_cleanup(z_stream* strm) { deflateEnd(strm); }
void unzlib_setup(z_stream* strm)
{
strm->zalloc = Z_NULL;
strm->zfree = Z_NULL;
strm->opaque = Z_NULL;
auto ret = inflateInit(strm);
NX_ASSERT(ret == Z_OK, "zlib compress inflateInit failed");
}
void unzlib_cleanup(z_stream* strm) { inflateEnd(strm); }
} // namespace
} // namespace nx::pipe::filter
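// Usage sketch (illustrative; the ringbuf construction and pipe wiring are
// assumed from the surrounding library, not shown here):
//
//   nx::pipe::filter::zlib_t deflater(Z_DEFAULT_COMPRESSION);
//   nx::pipe::filter::unzlib_t inflater;
//   // Repeatedly call transform(rbuf, wbuf, upstream_eof); each call
//   // consumes what it can from rbuf, fills wbuf, and returns true once
//   // the zlib stream reaches Z_STREAM_END.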
|
"""
Author: <NAME>
"""
import numpy as np
import solver_utils
def solve(grid_in):
"""
This function contains the hand-coded solution for the data in
2dc579da.json of the Abstraction and Reasoning Corpus (ARC)
Transformation Description: The center row and center column of the input
grid are a different colour to the rest of the grid, effectively dividing
the grid into 4 quadrants. One of the 4 quadrants contains an element with
a different colour to every other element. The transformation is to select
this quadrant as the output grid.
Inputs: grid_in - A python list of lists containing the unsolved grid data
Returns: grid_out - A python list of lists containing the solved grid data
"""
# Convert to numpy array
grid_in_np = np.array(grid_in)
    # Find the center index (the grids for this task are square, so a
    # single midpoint works for both rows and columns)
    midpoint = grid_in_np.shape[0] // 2
# Source : https://stackoverflow.com/questions/6252280/find-the-most-frequent-number-in-a-numpy-vector
# [Accessed: 14/11/2019]
(values,counts) = np.unique(grid_in_np, return_counts=True)
ind = np.argmin(counts)
minority_colour = values[ind]
squares = [
grid_in_np[0:midpoint,0:midpoint], # Top-left
grid_in_np[midpoint+1:,0:midpoint], # Bottom-left
grid_in_np[0:midpoint,midpoint+1:], # Top-right
grid_in_np[midpoint+1:,midpoint+1:] # Bottom-right
]
for square in squares:
if minority_colour in square:
grid_out_np = square
break
# Convert back to list of lists
grid_out = grid_out_np.tolist()
return grid_out
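# Illustrative check with a hypothetical 5x5 grid (not taken from the ARC
# data): the centre row/column are colour 1, three quadrants are uniformly
# colour 4, and the top-right quadrant contains the minority colour 3, so
# that quadrant is returned.
#
#   >>> solve([[4, 4, 1, 3, 4],
#   ...        [4, 4, 1, 4, 4],
#   ...        [1, 1, 1, 1, 1],
#   ...        [4, 4, 1, 4, 4],
#   ...        [4, 4, 1, 4, 4]])
#   [[3, 4], [4, 4]]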
if __name__ == '__main__':
# Get the data for the associated JSON file
data = solver_utils.parse_json_file()
# Iterate through training grids and test grids
for data_train in data['train']:
solver_utils.solve_wrapper(data_train['input'], solve)
for data_test in data['test']:
solver_utils.solve_wrapper(data_test['input'], solve)
|
Molecular Epidemiology and Antifungal Resistance of Cryptococcus neoformans From Human Immunodeficiency Virus-Negative and Human Immunodeficiency Virus-Positive Patients in Eastern China Cryptococcosis is an opportunistic and potentially lethal infection caused by Cryptococcus neoformans and Cryptococcus gattii complex, which affects both immunocompromised and immunocompetent people, and it has become a major public health concern worldwide. In this study, we characterized the molecular epidemiology and antifungal susceptibility of 133 C. neoformans isolates from East China Invasive Fungal Infection Group (ECIFIG), 20172020. Isolates were identified to species level by matrix-assisted laser desorption ionization-time of flight mass spectrometry and confirmed by IGS1 sequencing. Whole-genome sequencing (WGS) was conducted on three multidrug-resistant isolates. Among the 133 strains, 61 (45.86%) were isolated from HIV-positive patients and 72 (54.16%) were isolated from HIV-negative patients. In total, C. neoformans var. grubii accounted for 97.74% (130/133), while C. neoformans var. neoformans was rare (2.06%, 3/133). The strains were further classified into nine sequence types (STs) dominated by ST5 (90.23%, 120/133) with low genetic diversity. No association was observed between STs and HIV status. All strains were wild type to voriconazole, while high antifungal minimal inhibitory concentrations (MICs) above the epidemiological cutoff values (ECVs) were observed in C. neoformans strains, and more than half of isolates were non-wild-type to amphotericin B (89.15%, 109/133). Eight isolates were resistant to fluconazole, and eight isolates were non-wild type to 5-fluorocytosine. Furthermore, WGS has verified the novel mutations of FUR1 in 5-fluorocytosine-resistant strains. In one isolate, aneuploidy of chromosome 1 with G484S mutation of ERG11 was observed, inducing high-level resistance (MIC: 32 g/ml) to fluconazole. In general, our data showed that there was no significant difference between HIV-positive and HIV-negative patients on STs, and we elucidate the resistant mechanisms of C. neoformans from different perspectives. It is important for clinical therapy and drug usage in the future. INTRODUCTION Cryptococcosis is one of the most common fungal diseases in the world, with an estimated 223,000 new cases and 181,100 deaths worldwide each year, primarily in southern Africa and Asia (). Cryptococcosis is an opportunistic and invasive fungal infection that not only has high rates of mortality and morbidity in immunocompromised or immunosuppression patients, like acquired immune deficiency syndrome (AIDS), but also infects immunocompetent individuals (;Sloan and Parris, 2014;). There are mainly two species, namely, Cryptococcus neoformans and Cryptococcus gattii, with significant differences in ecology, molecular epidemiology, and antifungal sensitivity (Cogliati, 2013;;). In recent two decades, phylogenetic analysis based on genotypes and phenotypes has revealed two subtypes of C. neoformans and five subtypes of C. gattii. The major molecular types of C. neoformans have most commonly been designated molecular types VNI (AFLP1), VNII (AFLP1A/IB), and VNIII (AFLP3) for C. neoformans var. grubii and molecular types VNIV (AFLP2) for C. neoformans var. neoformans ((Hagen et al.,, 2017). Cryptococcosis is a more frequently observed fungal disease in AIDS patients in Europe, United States, and Africa (;;). The situation, however, is quite different in China. Previous studies showed that C. 
neoformans mainly originated from human immunodeficiency virus (HIV)-negative population without any risk factors reported in other countries (;). The treatment strategies for cryptococcal meningitis recommended by the Infectious Diseases Society of America (IDSA) were amphotericin B plus 5-fluorocytosine for induction therapy and fluconazole used for consolidation therapy (Baddley and Forrest, 2019). However, it is easy to induce drug resistance for treating cryptococcosis due to the long-term and single therapeutic drug use (Bermas and Geddes-McAlister, 2020). According to a recent report by the China Invasive Fungi Surveillance Network, the cryptococcal resistance rate to fluconazole has increased more than threefold (10.5% in 2010 to 34% in 2014) (). In another multicenter study in China, the resistance rate of C. neoformans to fluconazole has dramatically risen, and non-wild-type isolates to 5-fluorocytosine have also been found (). So far, the resistance mechanisms of C. neoformans were understudied. According to previous studies, cryptococcal resistance to fluconazole could be caused by point mutations of ERG11 (G1785C, G1855A, and G1855T) (;;;), overexpression of ERG11, overexpression of AFR1, and aneuploidy formation. Acquisition of aneuploidies in C. neoformans can mediate increased MIC values to fluconazole and further enable cross-adaptation to other antifungal drugs. Mutations of FCY1, FCY2, and FUR1 were the most common 5-fluorocytosine resistance mechanism of cryptococcus (). Recent studies have demonstrated that mutations of UXS1 are also involved with 5-fluorocytosine resistance (;). Indeed, comprehensive genomic characterization of C. neoformans is limited in China. Notably, antifungal susceptibility, particularly to fluconazole and 5-fluorocytosine, has been noted to vary in correlation not only with molecular types but also with HIV status (;;Arsic ). To investigate the molecular epidemiology of local cryptococcal isolates, several molecular typing methods have been developed, for example, PCRfingerprinting, randomly amplified polymorphic DNA (RAPD), PCR-restriction fragment length polymorphism (PCR-RFLP), amplified fragment length polymorphism (AFLP), microsatellite typing, multilocus microsatellite typing (MLMT), multilocus sequence typing (MLST), and whole-genome sequencing (WGS) (;;;). Extensive studies have recommended MLST as the preferred method among these molecular techniques because of its excellent discrimination ability and reproducibility between different laboratories. A normative MLST scheme of the C. neoformans/C. gattii has been established by the International Society of Human and Animal Mycoses (ISHAM) working group (). Seven housekeeping genes (CAP59, GPD1, IGS1, LAC1, PLB1, SOD1, and URA5) were selected for MLST analysis of the C. neoformans/C. gattii 1,2, and WGS exhibited high reproducibility, specificity, and discriminating power. Therefore, in this study, we explore the prevalence and antifungal drug resistance mechanism of C. neoformans in HIVpositive and HIV-negative patients in China by using highprecision MLST and WGS. Clinical Isolates Information Exactly 133 cryptococcal isolates were collected from East China Invasive Fungal Infection Group (ECIFIG) between 2017 and 2020. Sixty-one isolates derived from HIV-infected patients who had HIV antibody screening test and confirmatory tests positive were classified as HIV-positive group, while others were classified as HIV-negative group. 
All isolates were identified to species level by matrix-assisted laser desorption/ionization time-of-flight mass-spectrometry (Zybio, China) and confirmed by IGS1 sequencing. Ethics approval for this study was obtained from the Health Research Ethics Board of Shanghai East Hospital. DNA Extraction DNA extraction of isolates was performed by the method described by Xu et al. with some modifications. Briefly, all the isolates were sub-cultured on SDA at 30 C for 48-72 h. Monoclonal colonies were collected in the sterile Eppendorf (EP) tubes containing 50 mg glass beads (BioSpec, United States), 200 l lysis buffer, 200 l phenol-chloroform, and broken for 10 min, and then centrifuged at high speed for 5 min. Supernatants were transported to new EP tubes. DNAs were extracted by phenol-chloroform alcohol and stored at −20 C. Intergenic Spacer 1 Sequencing and Multilocus Sequence Typing Analysis Identification of Cryptococcus spp. through amplification of the intergenic spacer 1 (IGS1) region was amplified using primers, IGS1F (5 -TAAGCCCTTGTT-3 ) and IGS1R (5 -AAAGATTTATTG-3 ), from ISHAM (see text footnote 2). Polymerase chain reaction (PCR) of the IGS1 gene was performed in a 30 l final volume. The PCR mixture contains 1 l of DNA, 15 l of PCR enzyme mix, and 1 l of each primer. For PCR amplification, the PCR mixture was denatured for 5 min at 94 C followed by 35 cycles of 30 s at 94 C, 30 s at 53 C, and 1 min at 72 C, followed by one final step of 10 min at 72 C. For MLST analysis, PCR was performed on seven housekeeping genes (CAP59, GPD1, IGS1, LAC1, PLB1, SOD1, and URA5) according to the International Fungal Multi Locus Sequence Typing Database (IFMLST) (see text footnote 2). Each PCR system was amplified in a 30 l final volume as described before, the reaction procedure was described in the IFMLST profile, and all the primers were listed in the IFMLST. Then, all PCR products were purified with Gel Extraction Kit 200 (Omega Bio-Tek, United States) according to the manufacturer's instructions and were sequenced by an ABI 3730XL DNA analyzer (Shanghai, China). Sequences were assigned to the IFMLST consensus MLST scheme database to obtain sequence types (STs). Whole-Genome Sequencing Three multidrug resistance isolates with MIC ≥16 g/ml to FCZ and 5-FC were selected for whole-genome sequencing (WGS) in this study. Among them, one isolate was separated from the HIV-positive group (YQJ185), and the other two isolates were separated from the HIV-negative group (YQJ68 and YQJ247). All isolates were sub-cultured on SDA at 35 C for 48 h according to the CLSI M27-A4, and then, DNA was extracted using Zymo Quick-DNA/RNA Viral Kit (D7020), followed by library preparation using Vazyme transposase-based approach (TD502). WGS was performed using Illumina NovaSeq 6000 platform. Bioinformatics Raw reads were quality-controlled and trimmed with Trimmomatic (). SPAdes were applied for short-read assembly (). The YMAP pipeline was used for mapping with reference genome H99 and computing depth to estimate the variation of copy numbers and ploidy across chromosomes (). To determine the MAT type, short-read sequences were aligned to MATa locus (AF542528) and MAT (alpha) locus (AF542529). Statistical Analysis Categorized variables were analyzed by Fisher's exact test by IBM SPSS software (version 26.0). Continuous variables were calculated by Mann-Whitney U test. A p < 0.05 was considered significant. 
Antifungal Susceptibility Test In vitro antifungal susceptibility testing of total isolates was performed against four agents. In brief, the majority exhibited high sensitivity to fluconazole, 5-fluorocytosine, and voriconazole, ranging from 93.98 to 100%. However, 89.15% (109/133) of isolates were non-wild type to amphotericin B. Eight isolates were resistant to fluconazole, and eight isolates were non-wild type against 5-fluorocytosine; compared with the recommended ECVs of fluconazole and 5-fluorocytosine, high MICs of cryptococcal isolates against 5-fluorocytosine (64 g/ml) or fluconazole (32 g/ml) were observed. Interestingly, we found three multidrug isolates (1 isolate from HIV-positive group and 2 isolates from HIV-negative group) ( Table 1 and Supplementary Table 1). For isolates from HIV-positive and HIV-negative groups, the MIC distribution was similar in fluconazole (p = 0.290) but significantly different in 5-fluorocytosine (p < 0.001), with higher MIC values in HIVnegative group (Figure 1). Identification and Correlation Between ST5 and Human Immunodeficiency Virus Status According to MALDI-TOF MS and IGS1 sequencing outcomes, the 133 C. neoformans clinical isolates included 130 C. neoformans var. grubii and 3 C. neoformans var. neoformans. Among the 3 C. neoformans var. neoformans isolates, two isolates were from the HIV-positive group (ST77 and ST93), and the remaining isolate was from the HIV-negative group (ST185). As for MLST analysis, in this study, all isolates were classified into nine STs, and the majority of isolates belonged to ST5, VNI (90.23%, 120/133). For HIV-positive group, ST5 accounted for 86.88% (53/61), and five isolates with other STs included ST43 (1.64%), ST63 (1.64%), ST77 (1.64%), ST93 (1.64%), and ST230 (1.64%). In the other group, there were four STs, containing ST5 (93.05%, 67/72), ST31 (1.39%), ST185 (1.39%), and ST653 (2.78%). In comparison with the HIV-negative group, STs of the HIV-positive group exhibited more diversity. There were four isolates unknown to STs due to failure of sequencing or identifying. In addition, compared with the HIV-positive group, there was no correlation between HIV status and STs (p = 0.256). More details are provided in Supplementary Tables 2, 3. In general, our study revealed that C. neoformans var. grubii (ST5, VNI) was the most representative and predominant species in East China. Whole-Genome Sequencing In this study, we analyzed the mating type and resistant mechanisms from three multidrug-resistant strains by WGS. The detailed information about WGS, including total reads, base quality, depth, and coverage, is shown in Supplementary Table 4. All multidrug-resistant strains belonged to MAT. For an isolate (YQJ185) from the HIV-positive group, aneuploidy occurred in chromosome 1, but not in other chromosomes (Figure 2A). G484S mutation was found in ERG11 gene of YQJ185 with a high-level MIC (32 g/ml) to FCZ, located in the conserved heme-binding domain. Copy number variant (CNV) and ERG11 mutation, however, were not observed in the other two resistant isolates from the HIV-negative group. The non-synonymous mutation was also observed in FUR1 in different positions. For the HIV-positive group, D42Y mutation was found in the FUR1 gene of YQJ185 with MIC (16 g/ml) to 5FC. For the HIVnegative group, P140S mutation was found in the FUR1 gene of YQJ68 with a high MIC (32 g/ml) to 5FC, while YQJ247 has an A-T transition in an intron splice site ( Figure 2B). The ST5 isolates were located in the subclade of VNIa. 
As mentioned above, ST5 is the major genotype in China, but whole-genome sequences were rarely published. Phylogenetic relationships including ST5 isolates from other countries were generated in this study. Two isolates from HIV-negative patients were clustered into the same subclade with CHC-193 isolated from an HIV-negative patient in 1998 in China (Figure 3). DISCUSSION Cryptococcus neoformans is widely distributed in the world, and usually, it infects HIV-positive patients, particularly in South Africa and Asia (). However, the FIGURE 3 | Core SNP phylogenetic tree of global ST5 C. neoformans. H99 reference was used as reference and outgroup; 44,487 SNP sites were used to generate this tree. Red branches represent isolates from China. condition appears to be extremely different in China. Previous studies showed that cryptococcosis was likely to occur in immunocompetent individuals or in individuals with other underlying diseases (;). Indeed, C. neoformans exhibited lower genetic diversity in China than that in South Asia, and ST5 was the predominant genotype (;;). The MLST was one of the most common technologies to analyze the genotypic diversity of C. neoformans. In this study, our results showed that there was lower genetic diversity of C. neoformans, and ST5 is the dominant ST in China, accounting for 90.23% (120/133) in total. The same results were observed in previous Chinese studies. Indeed, our research revealed no significant difference between HIV-positive and HIV-negative patients on STs (p = 0.256). This is consistent with southwest China (). However, the situation is different in South Korea, where there were significant differences between HIV status and genetic types (). In another study from Asia, it was affirmed that most isolates from HIV-negative patients were ST5 (). Furthermore, in this study we identified five new STs in China, namely, ST230, ST43, ST77, ST185, and ST653, and all of the STs haven't been reported yet in East China. Most importantly, ST31 was the most common ST for environmental C. neoformans in China, which mainly originated from pigeon droppings (;), and ST31 was also the main ST of C. neoformans in India (). This suggests that attention is paid to the clinical isolates of C. neoformans but environmental isolates of C. neoformans need to be investigated more deeply and more extensively in the future. Fluconazole and amphotericin B are the most frequent therapeutic drugs in cryptococcosis treatment. High MICs of amphotericin B above ECVs are concerned in this study, while all isolates were sensitive to voriconazole. This is consistent with a 6-year retrospective study from Hunan, China. Interestingly, the MIC distribution of 5-fluorocytosine in the HIV-negative group was higher than that of the HIV-positive group, and there were no significant differences in other drugs. This is opposite to the study in Southeast China and is consistent with the study in Serbia (;Arsic ). Moreover, in a study from southeast China, the results exhibited no significant differences in antifungal susceptibility to fluconazole and 5-fluorocytosine between HIV-positive and HIV-negative patients (). However, the association between STs and antifungal susceptibility was not observed. In this study, three multidrug isolates were found. Therefore, in this study, we investigated the resistance mechanisms through WGS. Aneuploidy of Chromosome 1 of an isolate (YQJ185) from an HIV-infected patient was tested. 
Previous studies proved the correlation between the formation of aneuploidy of chromosome 1 and excessive doses of fluconazole (;). In addition, we also revealed a point mutation of ERG11 (G484S) (;). CNV and ERG11 mutation would accelerate the development of cryptococcal resistance to fluconazole. However, the same resistance mechanisms were not observed in the other two isolates (HIV-negative group) against fluconazole (16 g/ml). What's more, there was no available literature describing the resistance mechanisms of 5-fluorocytosine in China. In this study, we addressed point mutations of FUR1 at different sites and a splice-site mutation with different MICs, revealing unique resistance mechanisms of 5-fluorocytosine in China. Previous studies reported that the genes FCY1, FCY2, and UXS1 were associated with resistance to 5-fluorocytosine (;); however, we did not find this in our study. ST5 or VNIa-5 is an important phylogenetic group in Southeast Asia, characterized by its ability to infect HIV-negative patients (). Although all three genomes in this study were closely related to Vietnam strains, they are assigned to two subclades, indicating the unique evolutionary trajectory of the strain from the HIV-positive group. CONCLUSION Cryptococcus in China exhibited a low extent of genetic diversity, and neither HIV-positive nor HIV-negative status was linked to STs. VNI is the dominant molecular type in C. neoformans and ST5 is the predominant ST. Phylogenetic relationships and resistance mechanisms have evolved among the subclades of ST5 isolates with certain particularity in China. However, there are limitations in this study. First, the geographical representativeness of the epidemiological characteristics and resistance mechanisms in this study is limited, representing only East China. Second, only three clinical isolates underwent WGS in our study, and the correlation among clinical isolates, standard isolates, and environmental isolates should be examined in the future. Finally, the isolates should be inoculated on medium with FCZ, which would contribute to finding a resistance mechanism at the genomic level. WGS can be used to discover more than just evolutionary relationships. Hence, we are taking steps to establish a database of cryptococcal genomes using WGS in East China. DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://ngdc.cncb.ac.cn/bioproject/browse/PRJCA009353, PRJCA009353. AUTHOR CONTRIBUTIONS WW designed the experiments and supervised the data analysis. ZYZ and CZ wrote the manuscript. CZ, ML, RL, and XL performed and interpreted the whole-genome sequencing data. WW, LZ, ZQZ, and ZYZ collected the strains. All authors contributed to the collection and assembly of data, manuscript writing, and final approval of the manuscript. ACKNOWLEDGMENTS We thank the East China Invasive Fungal Infection Group (ECIFIG) members for their helpful collection of clinical strains.
|
Concurrent validity of the short-form Family Impact Scale (FIS-8) in 4-year-old US children Background US data on the validity and reliability of the short-form Family Impact Scale (FIS-8; a scale for measuring the impact of a child's oral condition on his/her family) are lacking. Methods Cross-sectional analysis of data on four-year-old US children taking part in a multi-center cohort study. For child-caregiver dyads recruited at child age 12 months, the impact of the child's oral condition on the family was assessed at age 48 months using the FIS-8, with a subsample of 422 caregivers (from 686 who were approached). Internal consistency reliability was assessed using Cronbach's alpha, with concurrent validity assessed against a global family impact item ("How much are your family's daily lives affected by your child's teeth, lips, jaws or mouth?") and a global oral health item ("How would you describe the health of your child's teeth and mouth?"). Results Cronbach's alpha was 0.83. Although gradients in mean scores across ordinal response categories of the global family impact item were inconsistent, there were marked, consistent gradients across the ordinal categories of the global item on the child's oral health, with scores highest for those rating their child's oral health as "Poor". Conclusions While the findings provide some evidence for the utility of the FIS in a US child sample, the study's replication in samples of preschoolers with greater disease experience would be useful. Background The Family Impact Scale (FIS) is part of the suite of scales, collectively known as the Child Oral Health Quality of Life questionnaire (COHQOL), which was developed for measuring the impact of oral conditions on children and their families. The 14-item FIS was designed for use with proxy informants, usually a child's mother or father, because very young children tend to be unreliable in that respect. A short-form version (the FIS-8, comprising 8 items) was subsequently developed and shown to be valid and responsive. It has three subscales, representing the domains of parental/family activity (4 items), parental emotions (2 items) and family conflict (2 items). Although the validity and reliability of the FIS have been demonstrated in a number of studies, data from the United States (U.S.) are lacking. Only one previous U.S. study has reported using FIS data, but with the families of older children. It may be that the emergence of the 13-item Early Childhood Oral Health Impact Scale (ECOHIS) in North Carolina has meant that US researchers have tended to opt for the measure with US provenance. The ECOHIS arose from work with the original 45-item pool used by the team which had developed the original P-CPQ and FIS scales. The ECOHIS team obtained ratings of those items from health professionals (and associated staff and researchers) experienced in dealing with young children. The 36 items remaining from that process then underwent item reduction with 30 parents of 3-5-year-old children, resulting in the 13-item ECOHIS, which included 4 items on family impact. It was then field-tested with a convenience sample of parents/caregivers of 5-year-olds. The ECOHIS has a family impact component, but the FIS has been shown to be better for health services research work, such as assessing the improvement in family impact which is observed after children with early childhood caries are treated under general anaesthetic.
There is also the issue of the absence from the ECOHIS of an item pertaining to disrupted sleep, an impact which is common among families of children with severe early childhood caries (ECC). Thus, the ECOHIS falls short in respect of some important features. Accurate and valid measurement of the family impact of such conditions might be better achieved, then, by using the FIS-8 (or both measures concurrently). However, there is no published information on the concurrent validity and reliability of the FIS-8 in a US child population. Accordingly, the aim of this study was to examine those aspects of the FIS-8 in a sample of four-year-old US children taking part in a large multi-center cohort study. Methods This prospective longitudinal study was managed and coordinated at the University of Michigan (Ann Arbor, MI). Three well-established primary care medical research networks (the Pediatric Research Network at Indiana University in Indianapolis, IN; the Iowa Research Network at the University of Iowa in Iowa City, IA; and Duke University's Primary Care Practice-Based Research Network in Durham, NC) identified and enrolled 1,326 children (age at baseline visit: 12 ± 3 months old). The study population was stratified by Medicaid status, and the sample was intended to be diverse in relation to racial/ethnic group and urban/rural residence. Child-caregiver dyads were identified for possible recruitment primarily through well-child medical appointments, but other venues and approaches were also used (such as neighborhood centers, daycare centers, and advertising). The site teams additionally facilitated two follow-up in-person visits at child ages 30 ± 3 months (age 2.5 years; 80% follow-up, or N = 1,060) and 48 ± 3 months (age 4 years; 74% follow-up, or N = 982). To enhance retention and check for access to care and preventive services, intermediate contacts (that is, by phone, mail, email, etc.) occurred every four months throughout the study, with a postcard sent every birthday. The impact of the child's oral condition on the family was measured at age 48 months, with data from a subsample of 422 caregivers (of a total of 686 who had been approached). Institutional Review Board approval for the portion of these assessments done at Duke University took longer than anticipated, and so Duke site families are under-represented in the current study. The assessment of family impact using the FIS-8 was included as part of the parental questionnaire. The FIS-8 uses a five-point Likert-like scale, with response options "Never" (scoring 0), "Once or twice", "Sometimes", "Often" and "Every day or almost every day". An overall score (range 0 to 32, with higher scores reflecting greater family impact of the child's oral condition) was computed by summing the scores for all items, after which the three subscale scores were computed. Two global items with ordinal response scales were used for examining the concurrent validity of the FIS-8. The first was the global family impact item "How much are your family's daily lives affected by your child's teeth, lips, jaws or mouth?". This used the response options 'Not at all' (scoring 1), 'Very little', 'Some', 'A lot' and 'Very much'. The second was the global oral health item "How would you describe the health of your child's teeth and mouth?", with response options 'Excellent' (scoring 1), 'Very good', 'Good', 'Fair' and 'Poor'.
Clinical examinations of the children at age 4 were undertaken using International Caries Detection and Assessment System (ICDAS) criteria. Children's teeth were brushed before examination by trained and calibrated examiners who used a mirror and ball-ended probe (for confirmation of ICDAS lesions, where appropriate). For the current paper, only caries lesions at the d3 level (localised enamel breakdown) or higher (definite dentin involvement) are reported. Statistical analyses First, internal consistency reliability was assessed using Cronbach's alpha (for the subscales and overall scale), with values of 0.7 or more considered acceptable. Summated scale scores were then computed. Concurrent validity was then determined by examining the gradients in mean FIS-8 scale and subscale scores across the ordinal categories of the global ratings of overall family impact and child oral health; Spearman correlation coefficients were also calculated, to complement the information from those gradients. The skewed distributions of the scale scores meant that non-parametric statistics were used (Mann-Whitney U or Kruskal-Wallis tests, as appropriate) to determine the statistical significance (with an alpha value of 0.05) of the observed differences. Sociodemographic and other differences in scale scores were examined using Mann-Whitney or Kruskal-Wallis tests, as appropriate for the number of categories of the independent variable. Results Family Impact Scale data were collected at child age 48 months from a subset of study participants (N = 422), comprising 31.8% of the full sample. Of the 422 in the current analysis, 44 (10.4%) were in the Duke University sample, 179 (42.4%) in the Indiana University sample, and 199 (47.2%) in the University of Iowa sample. There were some systematic differences between the 422 in the current analysis and the other participants, whereby White children and tertiary-educated parents were over-represented in the former (Table 1). Similarly, the University of Iowa sample contributed disproportionately to the current study's sample. For the purposes of the subsequent analyses, those in the 'Unknown' category for parental education were combined with the 'Up to high school' category. Summary data on the dental caries experience of the sample are presented in Table 2 by sociodemographic characteristics and responses to the global items. Just over one in five had one or more d3mft, with that proportion being higher among children whose parents had not attended college, those from homes where English was not spoken, and those without an employed adult in the home. Similar patterns, as well as an ethnic difference, were observed with the mean d3mft. Children whose oral health was rated as worse on the global items had higher d3mft scores, on average, and greater proportions of them had had dental caries experience. Data on responses to the individual FIS-8 items are presented in Table 3. Overall, responses were positively skewed, with more than 80% of responses to any given item being 'Never', and very small proportions responding 'Often' or 'Every day'. Factor analysis (using the principal components method) revealed two components, respectively representing 45.7% (eigenvalue 3.7) and 15.0% (eigenvalue 1.2) of the variance in responses. Cronbach's alpha for the overall FIS-8 was 0.83. For the parental emotions subscale, it was 0.43; it was 0.78 for the parental/family activity subscale and 0.64 for the family conflict subscale.
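As background (the formula is not restated in the original), Cronbach's alpha for a k-item scale with item score variances $\sigma_i^2$ and total score variance $\sigma_X^2$ is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),$$

with $k = 8$ here; the reported value of 0.83 for the overall FIS-8 therefore exceeds the 0.7 acceptability threshold used in the analysis.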
Data on the concurrent validity of the FIS-8 and subscales are presented in Table 4. Gradients in mean scores across the ordinal response categories of the global family impact item were inconsistent, with mean scores highest among those responding 'Some' to that item. By contrast, there were marked and consistent gradients in mean scores across the ordinal categories of the global item on the child's oral health, with scores highest for those rating their child's oral health as 'Poor'. Despite those marked gradients, the correlations were weak. Mean FIS-8 and subscale scores are presented by sociodemographic characteristics in Table 5. There were no apparent sex differences. Black children had higher scores, on average. There were consistent gradients by parental education level, whereby mean scores were highest for those with the least education. Mean scores were also higher for homes in which no adult was employed. Discussion This study set out to investigate the concurrent validity and reliability of the Family Impact Scale in a sample of four-year-old US children taking part in a large multicenter cohort study. These were found to be acceptable, as indicated by a Cronbach's alpha over the 0.7 threshold for the overall scale (notwithstanding the relatively low value for one of the subscales, most likely arising from its comprising only two items) and by the gradients in mean scores across two global measures being largely consistent and as expected. An issue with the findings is that the observed impact was generally low. This is reflected in the FIS data, where considerable floor effects were apparent; that is, a relatively high proportion of respondents (59% for the overall scale, and 73%, 69% and 85% for the parental emotions, parental/family activity and family conflict subscales, respectively) had the minimum value possible. By contrast, the typical proportion observed with the minimum value for the FIS in clinical samples of 3- to 6-year-olds awaiting dental rehabilitation under general anaesthetic is of the order of 6% to 10%. Floor effects mean that the scale would be unlikely to reflect any improvement resulting from an intervention designed to improve the child's oral health and reduce the associated impact on the family. It may be that, even by four years of age in this sample, there would not have been sufficient time for marked, symptomatic dental caries experience to develop (at least in most of the children) and affect the oral-health-related quality of life of the child and family. Recently reported summary data on dental caries experience in the children show that only one in five had cavitated carious lesions. The study has some other weaknesses. The factor analysis did not confirm the expected factor structure, but it is important to bear in mind that the original FIS arose from theory rather than empirical analysis, and that the FIS-8 was then produced using item impact analysis methods. Given the relatively low levels of impact observed in this US sample, we were unlikely to be able to confirm the theory-derived structure of the scale, and that turned out to be the case. Further work is needed with US samples with greater disease experience. Moreover, FIS data were available for only a subsample of the larger study sample, and there were some important differences between those with FIS data and those without it.
Most notably, minority children and the Duke sample were under-represented (and the predominantly rural Iowa sample over-represented), and this may have been partly responsible for the overall lack of family impact observed. A strength of the study is its recruitment of families from particular communities through purposive over-sampling. For example, the Indiana sample focused on African American parent-infant dyads, while the Iowa one recruited from rural communities. Another strength is that the parent study is a prospective one, meaning that follow-up data to at least age 9 will eventually be available, enabling the examination of longitudinal changes in family impact. The sociodemographic patterns in FIS-8 scores were informative. That there were no apparent sex differences is largely to be expected, given that the sample was too young for any of the gender-specific differences in chronic disease experience to have yet emerged. The higher mean scores among Black children most likely reflect differences in socio-economic position, given that mean scores were highest for those with the least education and for those from homes in which no adult was employed. These differences supply some indirect support for the scale's validity. The FIS-8 has had its validity demonstrated in other populations and cultures, such as Oman, New Zealand, Libya, England and Saudi Arabia. Those studies have all confirmed the scale's validity, with marked gradients observed in mean scale scores across the ordinal categories of the global "gold standard" item(s) used. Those studies have all involved clinical samples of children undergoing treatment for severe early childhood caries, though, and data from population samples are scarce. This makes the current study's findings important, because they are a rare examination of FIS-8 validity in a nonclinical sample. While the study provides a degree of evidence for the validity and internal consistency reliability of the FIS in a U.S. child sample, its replication in samples of older preschoolers with greater disease experience would be informative and useful. Future work with this cohort will document changes in family impact as the children age. Conclusions 1. The Family Impact Scale appears to be appropriate for use with US children. 2. More widespread use of the scale, both in the US and internationally, would be helpful in furthering global understanding of the impact of children's conditions on their households. Abbreviations FIS-8: Short-form Family Impact Scale; COHQOL: Child Oral Health Quality of Life questionnaire; ECOHIS: Early Childhood Oral Health Impact Scale; ICDAS: International Caries Detection and Assessment System. Author contributions WMT and LAFP conducted and interpreted the analyses, and drafted the paper. SML, MAK, ATH and MF conceived of the parent study, acquired the data, contributed to interpretation of the findings, and critically revised the manuscript. All authors gave final approval and agree to be accountable for all aspects of the work. Funding This study was supported by National Institutes of Health (NIH) grant U01 DE021412-01A1 and NIH CTSA grants UL1-TR00442 (University of Iowa), 2UL1TR000433 (University of Michigan), and TR000006 (Indiana University). Availability of data and materials The datasets used and/or analysed during the current study are available from MF on reasonable request.
|
// repository: xlyric/pv-router-esp32
#ifndef TASK_DIMMER
#define TASK_DIMMER

#include <Arduino.h>
#include "../config/config.h"
#include "../config/enums.h"
#include "../functions/dimmerFunction.h"

extern DisplayValues gDisplayValues;

/**
 * Task: adjust the dimmer according to production.
 *
 * Retrieves the readings (consumption or injection) and varies the dimmer accordingly.
 */
void updateDimmer(void * parameter){
  for (;;){
    gDisplayValues.task = true;

    #if WIFI_ACTIVE == true
      dimmer();
      /*
      // if there is a change to apply
      if (gDisplayValues.change != 0 ) {
        Serial.println(F("updating dimmer/MQTT values"));
        // send the information to the dimmer and to the MQTT server (mosquitto or another broker)
        dimmer_change();
      }*/
    #endif

    gDisplayValues.task = false;

    // Sleep for 5 seconds before running the next analysis
    vTaskDelay(5000 / portTICK_PERIOD_MS);
  }
}

#endif
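// A minimal registration sketch (illustrative, not from this repository):
// the stack size, priority, and core below are assumptions, and the call
// would normally live in setup().
//
//   xTaskCreatePinnedToCore(
//       updateDimmer,    // task function defined above
//       "updateDimmer",  // task name
//       4096,            // stack size in words (assumption)
//       NULL,            // the task ignores its parameter
//       1,               // priority (assumption)
//       NULL,            // no task handle kept
//       1);              // pin to core 1 (assumption)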
|
// include/AD/numeric/bcd.h
//////////////////////////////////////////////////////////////////////////////
// NOTICE:
//
// ADLib, Prop and their related set of tools and documentation are in the
// public domain. The author(s) of this software reserve no copyrights on
// the source code and any code generated using the tools. You are encouraged
// to use ADLib and Prop to develop software, in both academic and commercial
// settings, and are free to incorporate any part of ADLib and Prop into
// your programs.
//
// Although you are under no obligation to do so, we strongly recommend that
// you give away all software developed using our tools.
//
// We also ask that credit be given to us when ADLib and/or Prop are used in
// your programs, and that this notice be preserved intact in all the source
// code.
//
// This software is still under development and we welcome any suggestions
// and help from the users.
//
// <NAME>
// 1994
//////////////////////////////////////////////////////////////////////////////
#ifndef binary_coded_decimal_h
#define binary_coded_decimal_h
#include <iostream>
#include <AD/generic/generic.h>
///////////////////////////////////////////////////////////////////////////
// Class BCD
///////////////////////////////////////////////////////////////////////////
class BCD
{
char * digits;
public:
////////////////////////////////////////////////////////////////////////
// Constructor and destructor
////////////////////////////////////////////////////////////////////////
BCD(); // e.g. BCD n;
BCD(long); // e.g. BCD n = 1234;
BCD(const BCD&); // e.g. BCD n = m;
~BCD();
////////////////////////////////////////////////////////////////////////
// Conversion
////////////////////////////////////////////////////////////////////////
operator const char * () const
{
return digits;
}
////////////////////////////////////////////////////////////////////////
// Assignment
////////////////////////////////////////////////////////////////////////
BCD& operator = (long);
BCD& operator = (const BCD&);
////////////////////////////////////////////////////////////////////////
// Selectors
////////////////////////////////////////////////////////////////////////
char operator [] (int i) const
{
return digits[i+1];
}
////////////////////////////////////////////////////////////////////////
// Arithmetic
////////////////////////////////////////////////////////////////////////
BCD operator - ();
friend BCD operator + (const BCD&, const BCD&);
friend BCD operator - (const BCD&, const BCD&);
friend BCD operator * (const BCD&, const BCD&);
friend BCD operator / (const BCD&, const BCD&);
////////////////////////////////////////////////////////////////////////
// Comparison
////////////////////////////////////////////////////////////////////////
friend Bool operator == (const BCD&, const BCD&);
friend Bool operator != (const BCD&, const BCD&);
friend Bool operator > (const BCD&, const BCD&);
friend Bool operator < (const BCD&, const BCD&);
friend Bool operator >= (const BCD&, const BCD&);
friend Bool operator <= (const BCD&, const BCD&);
////////////////////////////////////////////////////////////////////////
// I/O
////////////////////////////////////////////////////////////////////////
friend std::ostream& operator << (std::ostream&, const BCD&);
friend std::istream& operator >> (std::istream&, BCD&);
};
#endif
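// Usage sketch (illustrative only; assumes the accompanying implementation
// file defines the operators declared above):
//
//   BCD a = 1234;
//   BCD b(5678);
//   BCD c = a + b;                        // arithmetic via the friend operators
//   if (c > a) std::cout << c << "\n";    // printed via operator <<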
|
Utility and limitations of Hepascore and transient elastography to detect advanced hepatic fibrosis in HFE hemochromatosis Aspartate aminotransferase-to-platelet ratio index (APRI) and Fibrosis-4 Index (Fib4) have been validated against liver biopsy for detecting advanced hepatic fibrosis in HFE hemochromatosis. We determined the diagnostic utility for advanced hepatic fibrosis of Hepascore and transient elastography, compared with APRI and Fib4, in 134 newly diagnosed HFE hemochromatosis subjects with serum ferritin levels > 300 µg/L, using area under the receiver operator characteristic curve (AUROC) analysis and APRI (> 0.44) or Fib4 (> 1.1) cut-offs for AHF, or a combination of both. Compared with APRI, Hepascore demonstrated an AUROC for advanced fibrosis of 0.69 (95% CI 0.56-0.83; sensitivity = 69%, specificity = 65%; P = 0.01) at a cut-off of 0.22. Using a combination of APRI and Fib4, the AUROC for Hepascore for advanced fibrosis was 0.70 (95% CI 0.54-0.86, P = 0.02). Hepascore was not diagnostic for detection of advanced fibrosis using the Fib4 cut-off. Elastography was not diagnostic using either APRI or Fib4 cut-offs. Hepascore and elastography detected significantly fewer true positive or true negative cases of advanced fibrosis compared with APRI and Fib4, except in subjects with serum ferritin levels > 1000 µg/L. In comparison with APRI or Fib4, Hepascore or elastography may underdiagnose advanced fibrosis in HFE hemochromatosis, except in individuals with serum ferritin levels > 1000 µg/L. Some noninvasive fibrosis biomarkers have been validated against liver biopsy for detection of AHF in HH. The aspartate aminotransferase (AST)-to-platelet ratio index (APRI, cut-off value > 0.44) and Fibrosis-4 Index (Fib4, cut-off value > 1.1) demonstrate good diagnostic utility for the detection of AHF in HH, with areas under the receiver operator characteristic curve (AUROC) of 0.88 and 0.86, correctly identifying liver biopsy-diagnosed AHF in 85% and 80% of cases, respectively 17. Another commonly used biomarker, Hepascore, is also available for detection of AHF in chronic liver diseases, but has not been validated in HH. Hepascore incorporates some of the markers of fibrogenesis and fibrinolysis to predict AHF 18. It assesses the clinical variables of sex and age and combines these with blood-based markers including bilirubin, gamma-glutamyl transferase (GGT), hyaluronic acid and alpha2-macroglobulin. A cut-off value for Hepascore > 0.50 has been suggested to be predictive of AHF in a range of different liver diseases other than HH 18,19. Transient elastography (TE) is an increasingly common non-invasive method for detecting AHF in a range of liver diseases, but has not been validated in HH. It relies on mechanical or acoustic modalities generating shear waves which are measured by ultrasound and converted to a stiffness estimate. It can be rapidly performed in an outpatient setting and provides immediate results. Limited studies have evaluated the performance of TE in subjects with HH 28,29. Although liver biopsy was not performed in these studies, all subjects with TE > 8.7 kPa had evidence of AHF as determined noninvasively via the Fibrotest, Hepascore, Forns and Fib4 indices 28.
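For reference (the index definitions are not restated in this article), the standard formulas for these two indices are

$$\mathrm{APRI} = \frac{\mathrm{AST}\,(\mathrm{U/L})/\mathrm{AST}_{\mathrm{ULN}}}{\text{platelet count}\,(10^{9}/\mathrm{L})} \times 100, \qquad \mathrm{Fib4} = \frac{\text{age (years)} \times \mathrm{AST}\,(\mathrm{U/L})}{\text{platelet count}\,(10^{9}/\mathrm{L}) \times \sqrt{\mathrm{ALT}\,(\mathrm{U/L})}},$$

where AST_ULN is the assay's upper limit of normal for AST; the cut-offs of 0.44 and 1.1 quoted above apply to these definitions.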
Since APRI and Fib4 have recently been validated as noninvasive biomarkers of advanced hepatic fibrosis in HH 17, the aims of our study were to determine the diagnostic utility of Hepascore and TE in comparison with APRI- and Fib4-determined cut-offs for the detection of probable AHF, and to evaluate their responses to phlebotomy treatment. Patients and methods Study participants. Newly diagnosed HH subjects were prospectively recruited between August 2012 and January 2018 across four different sites in Australia (Austin Health and Eastern Health in Victoria, the Royal Brisbane and Women's Hospital and QIMR Berghofer Institute in Queensland, and Fiona Stanley Hospital in Western Australia) via referrals from medical practitioners, pathology companies which perform HFE genetic testing, and the Australian Red Cross LifeBlood Service. Inclusion criteria were homozygosity for HFE p.Cys282Tyr, age greater than 18 years and a serum ferritin level of more than 300 µg/L. Individuals were excluded if they had a history of significant alcohol intake (defined as > 60 g/day for males and > 40 g/day for females), a body mass index (BMI) of more than 35 kg/m2, were pregnant or had known liver disease from a cause other than HH. Patient demographics and clinical information were recorded. Liver biopsy was not a requirement for entry into the study 7,8. All subjects underwent TE by experienced and accredited operators using the Echosens FibroScan device as per the manufacturer's recommendations. Transducer placement was at the 9/10 or 10/11 rib spaces. A reliable data set was defined as an interquartile range/median value ratio × 100 of less than 30%. We evaluated two noninvasive biomarker models for the detection of AHF in HH (models 1 and 2): Model 1 used either APRI (> 0.44) or Fib4 (> 1.1) alone, while Model 2 used the combination of both. Statistical analysis. Data are presented as the mean ± SE, unless otherwise specified. Comparisons between continuous variables were performed using analysis of variance or t-test (unpaired or paired), whilst categorical analysis was conducted using chi-square analysis. AUROC analysis was performed for evaluation of the diagnostic performance, sensitivity and specificity of Hepascore and TE in comparison with either APRI or Fib4 (Model 1) or the combination of APRI and Fib4 (Model 2) (Prism 9.0, GraphPad Software). Results A total of 150 individuals were recruited to the study. There were incomplete data for 16 individuals, leaving a cohort of 134 people. The population was predominantly male (68%) with a mean age of 44 years. The mean BMI was 26.5 kg/m2. Liver biochemistry was within accepted reference ranges, as were the platelet count and international normalised ratio (INR) (Table 1). The mean serum ferritin level of the cohort was 691 µg/L, with 20 subjects having a serum ferritin level > 1000 µg/L. Overall for Model 2, Hepascore demonstrated some diagnostic utility for detecting AHF, but detected significantly lower numbers of true positive and true negative cases compared with APRI and Fib4 combined. TE was not diagnostic. Both Hepascore and TE correctly identified likely AHF in most subjects with serum ferritin levels > 1000 µg/L. Response of noninvasive fibrosis markers to phlebotomy treatment. For those subjects classified as having nAHF on the basis of APRI > 0.44, there were significant reductions in APRI and serum ferritin levels with treatment (Table 3). There were no treatment-related effects on Fib4, TE or Hepascore.
Serum ferritin levels declined significantly with treatment in those subjects classified as not having nAHF on the basis of APRI ≤ 0.44 in Model 1. However, there were no treatment-related effects on APRI, Fib4, TE or Hepascore. For those subjects classified as having nAHF on the basis of Fib4 > 1.1, there were significant reductions in APRI, Fib4 and serum ferritin levels with treatment (Table 4). There were no treatment-related effects on TE or Hepascore. Serum ferritin levels declined significantly with treatment in those subjects classified as not having nAHF (Table 5). However, there were no treatment-related effects in the two fibrosis groups with regard to APRI, Fib4, Hepascore or TE. Discussion Early diagnosis of AHF in HH is important for guiding clinical management. Since most subjects with HH are now detected prior to the development of clinical sequelae of iron overload, there is a need to define useful noninvasive methods for assessment of AHF to ensure that only those at highest risk of AHF progress to liver biopsy. This type of approach is occurring across a broad spectrum of liver diseases previously dependent on liver biopsy for accurate fibrosis staging 24. What is clearly apparent is the variation in cut-off thresholds and suitability of these methods across different liver diseases 24. Furthermore, liver biopsy is not well suited for routine follow-up of fibrosis following treatment due to its invasive nature and associated risks 8,9. In HH, APRI and Fib4 were recently shown to be accurate for detection of liver biopsy-diagnosed AHF with cut-off levels greater than 0.44 and 1.1, respectively 17. APRI was also useful in monitoring fibrosis regression following treatment. These cut-off values in HH were substantially lower than those reported for other liver disease aetiologies 14,15,24. With this knowledge, we designed the current real-world study of community-dwelling subjects with HH to determine the utility of two other commonly available noninvasive methods of detecting hepatic fibrosis, Hepascore and TE, using APRI- and Fib4-determined cut-offs as surrogates for AHF. We compared Hepascore and TE with either APRI or Fib4 (Model 1) or a combination of APRI and Fib4 (Model 2). Hepascore demonstrated only fair diagnostic utility for the diagnosis of nAHF based on elevated APRI (AUROC 0.69, 95% CI 0.56-0.83, P = 0.01) or the combination of APRI and Fib4 (AUROC 0.70, 95% CI 0.54-0.86, P = 0.02). Nonsignificant diagnostic utility was demonstrated for Hepascore in comparison with Fib4 (AUROC 0.54, 95% CI 0.41-0.65, P = 0.52). Similarly, TE demonstrated no diagnostic utility for AHF in comparison with APRI, Fib4 or the combination of APRI and Fib4. Hepascore (using a cut-off value of 0.22) and TE (using a cut-off value of 4.75 kPa) detected significantly fewer true positive or true negative cases of AHF compared with APRI or Fib4, except in subjects with serum ferritin levels > 1000 µg/L. Previously, we showed substantially lower cut-off values for APRI and Fib4 for detection of AHF in HH compared to other liver diseases 17. We now extend this to demonstrate that Hepascore and TE cut-offs for the detection of AHF in HH are also substantially lower than in other liver diseases. The lower cut-offs probably reflect the lesser degree of hepatic inflammation which occurs in HFE hemochromatosis liver injury compared with other liver diseases 8,31.
As all individuals underwent phlebotomy treatment following recruitment into the study, we were able to evaluate the responses to treatment of APRI, Fib4, Hepascore and TE. All subjects, irrespective of the presence or absence of AHF, demonstrated significant reductions in serum ferritin levels with phlebotomy treatment, as expected. Subjects with AHF were older and had higher ALT and/or AST values than those who did not have AHF, compatible with previous reports of HH subjects with liver biopsy-confirmed AHF 2,5,8,17,32. Furthermore, subjects with nAHF defined on the basis of elevated APRI demonstrated a significant reduction in APRI with phlebotomy treatment, whilst those defined on the basis of elevated Fib4 demonstrated significant reductions in APRI and Fib4 with treatment. There were no statistically significant phlebotomy treatment-related effects on Hepascore and TE in subjects with or without nAHF. Our observations are consistent with previous studies.

Table 2. Characteristics of study subjects in Model 1 (APRI or Fib4) and Model 2 (combined APRI and Fib4 scores). All data are presented as the mean ± SE. ALT: alanine aminotransferase; APRI: aspartate aminotransferase to platelet ratio index; AST: aspartate aminotransferase; Fib4: fibrosis-4; GGT: gamma-glutamyl transferase; nAHF: noninvasive biomarker advanced hepatic fibrosis; No nAHF: no noninvasive biomarker advanced hepatic fibrosis; TE: transient elastography. *P < 0.05, **P < 0.01, ***P < 0.0001 compared with nAHF group (unpaired t test).

The majority of individuals with HH in our study did not have evidence of AHF, and most had serum ferritin levels < 1000 µg/L. Using APRI, Fib4 or the combination of APRI and Fib4, we observed a likelihood of AHF in 13%, 30% or 10% of HH subjects, respectively. Previous population-based studies have reported similar prevalences of between 10 and 25% for AHF at the time of diagnosis, primarily in those with serum ferritin levels > 1000 µg/L 2,5,7,9,32. Interestingly, both Hepascore and TE identified all HH subjects who had serum ferritin levels > 1000 µg/L and elevation of APRI above 0.44. Hepascore and TE identified 6 of 8 HH subjects who had serum ferritin levels > 1000 µg/L and elevation of Fib4 above 1.1. Thus, Hepascore and TE may be more reliable in the subgroup of individuals who have serum ferritin levels above 1000 µg/L. Our study has several strengths and weaknesses. While the relatively small sample size could be considered a potential limitation, the strengths of the study include the prospective enrolment and collection of data from community-dwelling subjects who were able to be followed up after phlebotomy treatment.

Table 3. Comparison of noninvasive biomarkers pre- and post-treatment in the subjects with either noninvasive biomarker-detected advanced hepatic fibrosis (nAHF) using APRI > 0.44 or no evidence of noninvasive biomarker-detected advanced hepatic fibrosis (no nAHF) using APRI ≤ 0.44 in Model 1. Results are shown as mean ± SE. APRI: aspartate aminotransferase to platelet ratio index; Fib4: fibrosis-4; nAHF: noninvasive biomarker advanced hepatic fibrosis; No nAHF: no noninvasive biomarker advanced hepatic fibrosis; TE: transient elastography. *P < 0.05, **P < 0.01, ***P < 0.0001 compared with pre-treatment (paired t test).
Subjects enrolled in our study were not required to undergo liver biopsy for routine clinical care, and thus we were unable to compare our noninvasive biomarkers with liver biopsy-confirmed fibrosis. However, we believe our use of APRI and Fib4 cut-off values, which have recently been validated against liver biopsy-staged fibrosis in HH 17, is a pragmatic alternative given the relative rarity in routine clinical practice of liver biopsy for evaluation of HH. Conclusion The use of Hepascore or TE for detection of AHF in HH may lead to underdiagnosis, except when individuals have serum ferritin levels elevated above 1000 µg/L. Overall, APRI or Fib4 are more clinically useful than Hepascore or TE for detection of AHF.

Table 4. Comparison of noninvasive biomarkers pre- and post-treatment in the subjects with either noninvasive biomarker-detected advanced hepatic fibrosis (nAHF) using Fib4 > 1.1 or no evidence of noninvasive biomarker-detected advanced hepatic fibrosis (no nAHF) using Fib4 ≤ 1.1 in Model 1. Results are shown as mean ± SE. APRI: aspartate aminotransferase to platelet ratio index; Fib4: fibrosis-4; nAHF: noninvasive biomarker advanced hepatic fibrosis; No nAHF: no noninvasive biomarker advanced hepatic fibrosis; TE: transient elastography. *P < 0.05, **P < 0.01, ***P < 0.0001 compared with pre-treatment (paired t test).
|
Underground: My Life with SDS and the WeathermenBy Mark RuddHarper Collins, 2009 Those who have characterized Mark Rudd’s memoir, Underground, as unapologetic must not have read it. The book passionately reflects on the 1960s and 1970s, a time when a new world order seemed not only possible, but likely.Rudd begins this well-written, almost-confessional book with an account of entering college in the fall of 1965. He admits that Columbia University was a dream come true, since it was such a radical departure from his middle- class, suburban upbringing in New Jersey. At Columbia, he was encouraged to read revolutionary theorists, such as Malcolm X, and was deeply affected by David Gilbert, the chair of the university’s Independent Committee on Vietnam, who openly declared his opposition to the war and suggested that antiwar activists adhere to their beliefs instead of behaving like “good Germans.” As a Jew reared in the shadow of the Holocaust, Rudd found Gilbert’s words potent and quickly became immersed in campus activism, soon joining Students for a Democratic Society (SDS).The discovery of Columbia’s connection to the Institute for Defense Analyses, a think tank affiliated with the Pentagon, in early 1967 led SDS members to intensify their anti-war efforts. Combined with pre-existing university plans to raze several buildings in largely Black Harlem for the construction of a gymnasium in Morningside Park, the predominantly progressive student body felt pushed to the brink.Due to an administration crackdown, students decided to occupy five buildings on the Columbia campus in April 1968. African-American students and Harlem residents entered Hamilton Hall and refused to leave. White students took over Low Library and other surrounding buildings and penned demands. Rudd’s excitement over the week-long sitin is palpable, and readers who have ever immersed themselves in organizing will feel the contagion.Rudd writes with vivid fury about the police violence that ended the occupation and rails against a mainstream media that portrayed the protesters as “lunatic, destructive kids.”He is also conscious, albeit in hindsight, of the media’s fixation on him as the archetypical leader — the charismatic white man ostensibly in charge. At the time, however, Rudd savored the attention and admits to rampant womanizing. After being expelled from Columbia in spring of 1968, he became a “traveling salesman for SDS” speaking throughout the country to ramp up opposition to the war. However, as SDS grew, factions emerged which ultimately destroyed the largest student mobilization in U.S. history.While Rudd helped found the most radical portion in SDS, the Weathermen, in 1969, he had early concerns about the group’s dogmatism. “I did not realize at the time that we had unwittingly reproduced conditions that all hermetically sealed cults use: isolation, sleep deprivation, arbitrary acts of loyalty, even sexual initiation as bonding,” he writes.Rudd buried these worries as the Weathermen became the Weather Underground, which ultimately carried out 24 property-destroying bombings across the United States. He writes that he accepted the idea — now recognized as delusional — that “we had begun the war against the pigs” and describes a mood that is difficult to fathom in 2009. 
In retrospect he calls it “a fantasy of revolutionary urban-guerrilla warfare.”This fantasy ground to a halt when a 1970 plan to bomb New Jersey’s Fort Dix went awry, killing three of Rudd’s comrades and destroying the Greenwich Village townhouse the would-be bomb makers were using. Rudd, his girlfriend Sue LeGrand, and other Weatherpeople quickly fled underground. Moving between safe houses sent Rudd into a near-suicidal depression, and his graphic description of severing ties with everything and everybody garners sympathy.Nonetheless, he and LeGrand cobbled together a sub rosa life. Their first child was born in 1974, and they had a second child after he surrendered in 1978. The decision to resurface came after seven-and-a-half years on the lam; Rudd could no longer stand living with constant anxiety.Rudd eventually paid a fine and settled in New Mexico. When his relationship with LeGrand ended, he finished his degree and spent more than two decades teaching mathematics at a community college in Albuquerque, N.M. He has continued his work as a non-violent activist and organizer through his involvement with Native American land rights and antiwar and anti-militarization efforts.Underground’s poignancy is underscored by Rudd’s conclusion: “The Weather Underground didn’t seem to affect anybody at all. We were not part of most people’s universe, even of those who were still working in what remained of the movement.” This sobering and heartfelt statement, bolstered by his across-the-board denunciation of violence, clearly speaks to 21st century activists who are eager for rapid change.
|
// dist/types/Image.d.ts
export default class Image {
src: string;
title: string;
description: string;
constructor(src: string, title: string, description: string);
}
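// Usage sketch (illustrative; assumes the compiled class is this module's
// default export):
//
//   import Image from "./Image";
//   const img = new Image("/img/cat.jpg", "Cat", "A resting cat");
//   console.log(img.title); // "Cat"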
|
When Palm Coast needed extra capacity for its communications network five years ago, the city decided to connect its 21 properties with a broadband, fiber optic network that would speed up communications by immense orders of magnitude.
It really is immense: Your best phone-based DSL connection will get you speeds of 1.5 megabits per second, or capacity for the equivalent of 24 simultaneous voice channels. It’s good, especially for home use. But it has its limitations when it comes to, say, video conferencing. A fiber optic connection provides 2 to 2.5 gigabits per second of capacity, or the equivalent of 32,000 voice channels. Put another way, your best 100Mb/s connection (such as your residential broadband at home) would let you download the entire contents of the Library of Congress in about two weeks. A fiber optic connection would download it all in about a minute.
Palm Coast’s broadband system is in place, connecting 20 of the city’s 21 properties (the hold-out is a water plant).
On Monday, the city is opening up its fiber optic network to private business. Those services exist already. But broadband provided entirely on private fiber is prohibitively expensive. Palm Coast’s network serves essentially as subsidized broadband: It’s in place, the city administration reasoned, and it’s underused. Might as well let the private sector benefit from the capacity and possibly use it to spur business activity. “It shows that Courtney and his crew were really forward thinking,” Palm Coast Mayor Jon Netts said, referring to Courtney Violette, director of Palm Coast’s information technology department.
It’s not quite the first time in Florida that a local government’s broadband network will be made available to private companies: Leesburg owns 140 miles of fiber optic cable, which it began installing in 1987. It uses the network to provide communication and Internet service to the Lake County schools, Leesburg Regional Medical Center and some area businesses.
To be clear: Palm Coast isn’t going into the Internet business. It’s merely granting Internet providers access to its broadband network. The providers, who’ll pay Palm Coast a fee for using the network, in turn will be able to offer broadband service to businesses in geographic areas lined with fiber optics. Those areas include Palm Coast Parkway, Belle Terre Parkway, Town Center, State Road 100 and northern portions of U.S. Route 1.
“So if you’re a business in those areas and we can get to you, then you’re eligible to connect to the network,” Violette says.
Two providers have signed on to offer the service: Lux Communications, which has offices in Palm Coast and Miami, and Palm Coast Internet.
Speed alone isn’t the advantage. For many subscribers, a basic 2 to 5 Mb/s connection will suffice. But for businesses, reliability is key. Because fiber optic networks are underground, they tend to be far more reliable, and enabled with far more redundancies, or back-up systems, than traditional DSL and cable modems. Fiber networks are also less prone to weather-related hazards.
Thanks to Jim for correcting the original version of this story, which had Palm Coast as the first city in the state providing fiber optic service to businesses.
I believe St. Cloud also has had a city-wide network, but I’m not sure it’s this powerful. The city recently debated whether it could afford to keep providing it. Not sure of the outcome of that. As a footnote, I have to mention that a “fiber-cutting” seems like a really oxymoronic type of ceremony for this milestone. Shouldn’t it be analogous to the transcontinental railroad linkage, with the mayor putting the last few inches of fiber into place?
I hear O2 Secure Wireless is coming in too based out of St. Augustine …..will be nice to have someone besides the regular players BLight House, ATT etc…competition is good !
|
Christopher Faulise, 29, 4 Carter Ave., Norwich, 10:50 p.m., third-degree burglary, sixth-degree larceny; court date Sept. 10, Norwich Superior Court.
Stacey Zawacki, 35, 114 Phillips Road, Lisbon, 7:40 p.m., violation of probation, traveling unreasonably fast; court date Sept. 3, Norwich Superior Court.
J.C. Miller, 36, 55 Talcott Ave., Jewett City, 2 p.m., third-degree assault, second-degree breach of peace, first-degree criminal mischief, second-degree unlawful restraint; court date Aug. 27, Norwich Superior Court.
Vaughn Malloy, 28, 242 Nautilus Drive, No. 218, New London, 7:45 p.m., second-degree breach of peace, third-degree assault; court date Sept. 10, Norwich Superior Court.
Joshua Zinewicz, 27, 15 Scotland Road, Norwich, 8:30 a.m., two counts first-degree failure to appear; court date Aug. 27, Norwich Superior Court.
Tony Pitts, 29, Norwich Ave., Taftville, 1 a.m., second-degree failure to appear; court date Sept. 3, Norwich Superior Court.
Matthew Lane, 39, 5 Dupont Lane, Norwich, 9 p.m., violation of probation; court date Sept. 10, Norwich Superior Court.
Darrel Anderson, 39, 23 Second St., Norwich, 10:05 a.m., violating a restraining order; court date Aug. 28, Norwich Superior Court.
Sean Bushy, 36, 7 Pratte Ave., Taftville, 12:50 a.m., second-degree failure to appear; court date Sept. 8, Norwich Superior Court.
Antanacio Cuevas, 28, 314 Central Ave., Apt. 1, Norwich, 12:56 a.m., failure to drive right, driving under the influence of alcohol or drugs; court date Aug. 28, Norwich Superior Court.
Arthur B. Chapman, 40, 100 Spruce St., Norwich, 9:43 p.m. criminal violation of a restraining order; court date Aug. 28, Norwich Superior Court.
Nicholas Minski, 24, 54 S. Chestnut St., Plainfield, 1 p.m., first-degree failure to appear; court date Aug. 28, Norwich Superior Court.
Chad M. Frazier, 30, 9 Plainfield Road, Canterbury, 9 a.m., first-degree failure to appear, two counts second-degree failure to appear; court date not listed.
Ri Xiang Lin, 61, 1685 First Ave., Flushing, N.Y., 12:45 a.m., criminal trespassing; court date Sept. 10, Norwich Superior Court.
Melissa A. Suggs, 42, 19 Market St., Apt. 4, Thompson, 8:13 p.m., disorderly conduct; court date Aug. 28, Danielson Superior Court.
Kyle Doehr, 25, 146 Snake Meadow Road, Moosup, 12:37 a.m., operating under suspension, improper use of high beam lights; court date Sept. 9, Danielson Superior Court.
Ethier Fernandez, 24, 1 Mason St., Worcester, Mass., 12:41 a.m., operating under the influence of alcohol or drugs, possession of a weapon in a motor vehicle; court date Sept. 9, Danielson Superior Court.
Matthew Von Deck, no age listed, 125 Saw Mill Hill Road, Sterling, 2:50 a.m., second-degree sexual assault, risk of injury to a minor; court date Aug. 28, Danielson Superior Court.
Paul Quinn, 39, 23 Whitten St., Worcester, Mass., no time listed, operating under the influence of alcohol or drugs, improper turn; court date Sept. 9, Danielson Superior Court.
Nicholas Bourque, 22, 29 Main St., Plainfield, 7:24 a.m., second-degree failure to appear in court; court date Aug. 28, Danielson Superior Court.
Hermes Garcia, 35, 8 Weeks Lane, Dayville, no time listed, second-degree failure to appear in court; court date Aug. 28, Danielson Superior Court.
Michael Antonio Gonzalez, 18, 311 Main St., Willimantic, 11:31 a.m., second-degree failure to appear in court; court date Sept. 11, Danielson Superior Court.
Barrett Phagan, 29, 58 Roseland Park Road, Woodstock, 1:39 p.m., first-degree robbery, fourth-degree larceny, breach of peace; no court date listed.
Arthur Bernier, no age listed, 688 Five Mile River Road, Putnam, 4:14 p.m., disorderly conduct, strangulation, risk of injury to a minor; court date Aug. 31, Danielson Superior Court.
Jason D. Daddario, 26, 9 Lake St., Moosup, 9:19 p.m., failure to respond to infraction, criminal impersonation, interfering with a police officer; no court date listed.
Alex Biron, 20, 32 E. Main St., Central Village, 10:47 p.m., first-degree failure to appear in court, second-degree failure to appear in court, violation of probation; court date Aug. 28, Danielson Superior Court.
Michael Colbourne, 32, 21 Third St., Plainfield, no time listed, misuse of marker plates, failure to maintain minimum insurance, operating an unregistered motor vehicle; court date Sept. 8, Danielson Superior Court.
Robin Surprenant, 48, 19 Collins Road, Central Village, 12:15 a.m., operating under the influence of alcohol or drugs, failure to obey stop sign; court date Sept. 8, Danielson Superior Court.
|
Weighing the costs of obesity: a brief review of the health care, workplace, and personal costs associated with obesity Despite being a serious public health concern for decades, obesity was only recognized as a chronic disease by the Canadian Medical Association in 2015. Obesity currently affects 25% of the Canadian adult population, with many projections suggesting the prevalence will continue to increase over the next 20 years. The severity of this health issue is amplified by its numerous physical and psychosocial co-morbidities, including but not limited to type 2 diabetes, cardiovascular disease, osteoarthritis, depression, and reduced quality of life. In addition to the myriad of negative health outcomes, there are substantial economic implications associated with the rising prevalence of obesity. The purpose of this paper is to provide a brief overview of the financial impact of obesity on the Canadian health care system and workplace, as well as to highlight the personal economic costs experienced by individuals living with obesity as a result of weight bias and discrimination.
|
Quantitation of cocaine in human hair: the effect of centrifugation of hair digests. Hair pigmentation is a critical factor in the interpretation of the concentration of certain compounds and their metabolites incorporated into hair. Melanin is responsible for the pigmentation. The color and the melanin content of human hair samples differ over a wide range. Once deposited into hair, drug may remain detectable for a period of months to years. However, if drug disposition into hair is influenced by those properties attributed to hair color, then certain persons may test positive more frequently than other persons. Removal of the melanin from hair digests prior to drug analysis may reduce the effect of melanin on the total drug concentration by excluding the drug bound to the pigment. In this study, the effect of melanin removal by centrifugation of hair digests on cocaine concentrations was investigated. Two sets of hair samples from five cocaine users were analyzed for cocaine and metabolites. A solution consisting of 10 mL of 0.5 M Tris buffer (pH 6.4), to which is added 60 mg D,L-dithiothreitol, 200 mg SDS, and 200 U Proteinase K, was used to digest the hair. Two milliliters of this solution was added to 20 mg of hair and incubated overnight at 37°C in a shaking water bath (90 oscillations/min). The samples were removed from the water bath and mixed. One set was centrifuged at 2000 rpm and divided into supernatant and melanin pellet. The other set was not centrifuged. Internal standards were added to all tubes. The samples were further extracted, derivatized, and analyzed by gas chromatography-mass spectrometry. A mean of 8.8% (standard deviation 7.0%) of the total cocaine concentration (supernatant and pellet) was left behind in the pellet. The same experiment was repeated except that the melanin pellet was redigested with 0.1 N HCl. After redigestion of the melanin pellet, the mean cocaine concentration in the pellet was 3.8 ± 4.0% (mean ± SD) of the total cocaine concentration in hair. These data demonstrate that removal of melanin from hair digests by centrifugation does not eliminate hair color bias when interpreting cocaine concentrations.
|
// modules/viewer/src/Viewer/gmStepperAdapter.cpp
#include "gmStepperAdapter.hpp"
namespace gm
{
namespace ViewItem
{
StepperAdapter::StepperAdapter(QObject* navigator, mitk::Stepper* stepper, const char*) : QObject(navigator), m_Stepper(stepper)
{
connect(this, SIGNAL(SendStepper(mitk::Stepper * )), navigator, SLOT(SetStepper(mitk::Stepper * )));
connect(this, SIGNAL(Refetch()), navigator, SLOT(Refetch()));
emit SendStepper(stepper);
m_ItkEventListener = new ItkEventListener(this);
m_ObserverTag = m_Stepper->AddObserver(itk::ModifiedEvent(), m_ItkEventListener);
emit Refetch();
}
StepperAdapter::~StepperAdapter()
{
m_ItkEventListener->Delete();
m_Stepper->RemoveObserver(m_ObserverTag);
}
auto StepperAdapter::SetStepper(mitk::Stepper* stepper) -> void
{
this->SendStepper(stepper);
this->Refetch();
}
}
}
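// Wiring sketch (illustrative; the names below are assumptions): any
// QObject-derived navigator exposing SetStepper(mitk::Stepper*) and
// Refetch() slots can be driven through this adapter.
//
//   auto* adapter = new gm::ViewItem::StepperAdapter(
//       navigatorWidget,             // hypothetical navigator widget
//       sliceStepper.GetPointer(),   // hypothetical mitk::Stepper
//       "slice");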
|
/// In The Name Of Allah
#include<bits/stdc++.h>
#define pb push_back
#define ll long long
#define S second
#define F first
#define OO 1e9
using namespace std;

const int N=1000011;
int n,a,b,z=0;
vector<pair<int,int> >v;   // the input pairs, sorted by first element
vector<int>vec;            // first elements only, kept sorted for binary search
bool vis[N];

// Rebuild v and vec, keeping only the pairs not visited in the last pass.
void update(){
    vector<pair<int,int> >v2;
    vector<int>vec2;
    for(int i=0 ; i<(int)vec.size() ; i++){
        if(!vis[i]){
            vec2.pb(vec[i]);
            v2.pb(v[i]);
        }
    }
    memset(vis,0,sizeof(vis));
    v=v2;
    vec=vec2;
}

// Greedily mark a chain: mark pair i, then jump to the first pair whose
// first element is strictly greater than v[i].second, and repeat.
void dfs(int i){
    if(i>=(int)v.size()) return;
    vis[i]=1;
    int idx=upper_bound(vec.begin(),vec.end(),v[i].S)-vec.begin();
    dfs(idx);
}

int main(){
    cin>>n;
    for(int i=0 ; i<n ; i++){
        cin>>a>>b;
        v.pb({a,b});
        vec.pb(a);
    }
    sort(v.begin(),v.end());
    sort(vec.begin(),vec.end());
    // First pass: remove one greedy chain.
    for(int i=0 ; i<(int)v.size() ; i++){
        if(!vis[i]){
            dfs(i);
            break;
        }
    }
    update();
    // Second pass: remove another greedy chain from what remains.
    for(int i=0 ; i<(int)v.size() ; i++){
        if(!vis[i]){
            dfs(i);
            break;
        }
    }
    update();
    // If any pairs are still left, two chains were not enough.
    if(vec.size()) return cout<<"NO"<<endl,0;
    cout<<"YES"<<endl;
}
|
NADH oxidase activity of rat liver plasma membrane activated by guanine nucleotides. The activity of a hormone- and growth-factor-stimulated NADH oxidase of the rat liver plasma membrane responds to guanine nucleotides, but in a manner that differs from that of the classic trimeric and low-molecular-mass monomeric G-proteins. In the absence of added bivalent ions, both GTP and GDP, as well as guanosine 5'-O-(3-thiotriphosphate) (GTPγS) but not guanosine 5'-O-(2-thiodiphosphate) (GDPβS), stimulate the activity over the range 1 µM to 100 µM. Other di- and tri-nucleotides also stimulate, but only at concentrations of 100 µM or higher. Added bivalent ions are not required either for NADH oxidation or for guanine nucleotide stimulation. Bivalent ions (Mg2+ > Mn2+ ≥ Ca2+) alone stimulate only slightly at low concentrations and then inhibit at high concentrations. The inhibitions are augmented by GDP or GTP but not by GTPγS. Although the activity is the same, or less, in the presence of 0.5 mM MgCl2, GTPγS at 1-100 nM and other nucleotides at 0.1 mM or 1 mM still stimulate in its presence. The NADH oxidase is activated by mastoparan, whereas aluminum fluoride is weakly inhibitory. Cholera and pertussis toxins elicit only marginal responses. Both the Mg2+ and the GDP and GTP inhibitions (but not the GTPγS stimulations) shift to higher concentrations when the membrane preparations are first solubilized with Triton X-100. The results suggest a role for guanine nucleotides in the regulation of plasma membrane NADH oxidase, but with properties that differ from those of either the trimeric or the low-molecular-mass G-proteins thus far described.
|
# Assumed imports (slack-sansio / sir-bot-a-lot conventions); base_response and greeted_attachment are local helpers.
from sirbot import SirBot
from slack import methods
from slack.actions import Action

async def member_greeted(action: Action, app: SirBot):
    # Build a response payload from the incoming action, swap in the
    # "greeted" attachment for the triggering user, and update the message.
    response = base_response(action)
    user_id = action["user"]["id"]
    response["attachments"] = greeted_attachment(user_id)
    await app.plugins["slack"].api.query(methods.CHAT_UPDATE, response)
|
// Assumed imports for a Forge 1.14+ / NightConfig environment (inferred from the API calls below).
import com.electronwill.nightconfig.core.file.CommentedFileConfig;
import com.electronwill.nightconfig.core.file.CommentedFileConfigBuilder;
import com.electronwill.nightconfig.core.io.WritingMode;
import net.minecraftforge.common.ForgeConfigSpec;
import net.minecraftforge.fml.ModLoadingContext;
import net.minecraftforge.fml.config.ModConfig;
import net.minecraftforge.fml.loading.FMLPaths;
import java.io.File;

/**
* Common code shared between config files.
*/
public class ConfigCommon {
/**
* Builds a new config file from the specification for the specified side.
*
* @param fileName The name of the config file.
* @param type The type/side of the config file.
* @param configSpecification The specification for the config file.
* @param autosave Whether changes to the config should be automatically saved to file.
*/
public static void buildConfigFile(String fileName, ModConfig.Type type, ForgeConfigSpec configSpecification, boolean autosave) {
ModLoadingContext.get().registerConfig(type, configSpecification);
loadConfigFile(FMLPaths.CONFIGDIR.get().resolve(fileName).toString(), configSpecification, autosave);
}
/**
* Loads in a config file from the given path.
*
* @param path The path to the config file.
* @param configSpecification The config file specification.
* @param autosave Whether changes to the config should be automatically saved to file.
*/
private static void loadConfigFile(String path, ForgeConfigSpec configSpecification, boolean autosave) {
CommentedFileConfigBuilder commentedFileConfigBuilder = CommentedFileConfig.builder(new File(path));
if (autosave) commentedFileConfigBuilder.sync().autosave().writingMode(WritingMode.REPLACE);
CommentedFileConfig configFile = commentedFileConfigBuilder.build();
configFile.load();
configSpecification.setConfig(configFile);
}
}
|
A work of startling naivety? A timely exposé of the good fighting the bad? Or a bit of both? Former FBI director James Comey’s bestseller continues to sell at record levels, earning him a tidy sum.
As a government employee, Comey was unhappy with his salary — which was why he left and worked for the private sector for nearly a decade before returning to public service in 2013 as FBI director for an envisaged 10-year term. That was not to be. On May 9 2017 President Donald Trump fired him.
His firing may constitute obstruction of justice, with special counsel Robert Mueller recently presenting Trump with at least four dozen questions about his ties with Russia. For now, the book can be viewed as a justification for Comey’s aspiration to ethical leadership.
But is he eyeing a high political office, positioning himself as an ethical rock in the political divide of present-day America? Or is he just stimulating debate?
This is not clear. But he does not hold back. He has no qualms about comparing Trump to a Mafia boss. Or saying that Trump does not understand ethical leadership. Or that the Russians could have a scandalous hold on him, involving prostitutes in a Moscow hotel. He just does not know. Big statements. Heady criticism of a president.
But the book does not convince that Comey has the lofty ideals he expects from Trump. Nowhere is this more evident than in his handling of the Hillary Clinton e-mail saga.
On July 5 2016, halfway through a divisive US election, Comey announced that the FBI would recommend no criminal charges relating to Clinton’s e-mails. On October 28, just over a week before the election, Comey said the FBI was again investigating Clinton as new information had come to light. He retracted this two days before the election.
The damage was done. Clinton, leading the polls, lost the election, causing Comey to feel "slightly nauseous" that his actions might have had an effect on the result. But he would not have done it differently.
Ironically, he makes it clear that the FBI has always followed the dictum to avoid, if possible, any action that could affect an election result. Yet, in his view, his announcement on October 28 was necessary, "because to remain silent would have been an affirmative act of concealment, which would mean the director of the FBI had misled the American people".
Contrast this stance with the FBI investigation of the Russian collusion allegations. Comey admits the FBI was worried about clear evidence of Russian interference in the election to benefit Trump. But there was no announcement to Congress. No statement that the investigation could involve Trump.
What transpired was a lot of hand-wringing and sleepless nights as he grappled with what to do about the Russians as Trump upped the tempo against "crooked Hillary". But with Clinton’s e-mails these qualms were not apparent.
Comey’s actions are therefore sure to raise questions about a subliminal negative bias towards the Clintons. There is plenty of evidence of this in the book, with him questioning former president Bill Clinton’s controversial decision to pardon rogue trader Marc Rich. And he insinuates former US attorney-general Loretta Lynch was not as independent as she should have been towards the Clintons.
Throughout the book certain themes are addressed, such as leadership, patriotism and putting the truth first.
Comey’s fight against the Mafia taught him that they were bullies, he writes. "Evil has an ordinary face. It laughs, it cries, it deflects, it rationalises, it makes great pasta." And, he says: "Surviving a bully requires constant learning and adaptation." Fully three quarters of the book relates to Comey’s formative years, prosecuting Mafia figures and Martha Stewart, among others. But all of this is just a precursor, a road of inexorable progression through the labyrinthine political theatre that is America, towards the day he finally meets Trump.
Though there are interesting reflections on Comey’s interactions with former presidents George W Bush and Barack Obama, Trump only comes to the fore in the last quarter of the book — as the epitome of unethical behaviour.
No president is above the law. But allegations must clearly rest on proof. Innuendo is not enough. Both Clinton and Trump have reason to be aggrieved by Comey’s actions. The unpalatable fact is that, by trying to be scrupulously fair, Comey crossed a line with a presidential candidate, which could have dire consequences for the US and the world.
Perhaps Comey is not a slime ball. But he is also not the paragon of virtue he wishes to convey, his views appearing somewhat self-serving.
Striking a pose that ethical leadership is the cornerstone of any management style is a slippery slope. Especially for an FBI director, whose duty is to combat ordinary crime. Therein probably lies the real message of the book, contrary to what Comey wanted to convey.
|
from .map import setprocessing, PROCESSING_KEY, PROCESSING_BUTTON
from pygame.key import get_pressed as _get_pressed
from .event import keycode as _keycode
from typing import Union
_last_status = {}
def setkey(action_name: str, name: str):
setprocessing(action_name, name, type=PROCESSING_KEY)
def setbutton(action_name: str, name: str):
setprocessing(action_name, name, type=PROCESSING_BUTTON)
def last_keystatus(key: Union[int, str]) -> bool:
if isinstance(key, str):
key = _keycode(key)
if key not in _last_status:
return False
return _last_status[key]
def is_keypressed(key: Union[int, str]) -> bool:
if isinstance(key, str):
key = _keycode(key)
returned_bool = bool(_get_pressed()[key])
_last_status[key] = returned_bool
return returned_bool
def is_keyreleased(key: Union[int, str]) -> bool:
    # Read the previous status first: is_keypressed() overwrites _last_status,
    # so querying it afterwards would always compare the key against itself.
    was_down = last_keystatus(key)
    return not is_keypressed(key) and was_down
def is_just_keypressed(key: Union[int, str]) -> bool:
    # Same ordering caveat as in is_keyreleased().
    was_down = last_keystatus(key)
    return is_keypressed(key) and not was_down
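# A minimal usage sketch (not part of the original module): assumes pygame is
# initialized with a display, and that "space" is a key name understood by
# event.keycode.
if __name__ == "__main__":
    import pygame
    pygame.init()
    pygame.display.set_mode((200, 200))
    running = True
    while running:
        for e in pygame.event.get():
            if e.type == pygame.QUIT:
                running = False
        if is_just_keypressed("space"):
            print("space pressed")
        if is_keyreleased("space"):
            print("space released")
    pygame.quit()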
|
In Latin America, on the other hand, people are getting tired of charismatic leaders, at least for now.
In oil-rich Venezuela, late President Hugo Chávez and his successor Nicolás Maduro have not only curtailed fundamental freedoms, but have turned one of the world’s richest countries into a state on the verge of a humanitarian crisis. Annual inflation has surpassed 500 percent and supermarket shelves are empty.
In Argentina, Bolivia, Ecuador and Nicaragua, other nationalist-populist autocrats have in recent years weakened their countries’ democratic institutions. The net result has been fewer checks and balances, and more corruption.
But Argentines recently elected President Mauricio Macri, a soft-spoken engineer who candidly admits that he can’t solve his country’s problems by himself. In Venezuela, the center-right opposition won the Dec. 6 legislative elections by a landslide, and vows to end the country’s 17-year-old populist cycle. The nationalist-populist cycle seems to be waning across the region.
When I called several well-known Latin American political figures last week to ask them about Trump’s proposal to build a wall on the Mexican border, and to force Mexico to pay for it, most laughed. Several of them said that’s the kind of bravado that used to be typical of Latin American strongmen.
Indeed, there are many similarities between Trump and Latin America’s charismatic leaders.
First, Trump is running a headline-centered campaign, making outrageous statements almost daily to draw media attention, and to put rival politicians on the defensive. When commentators show, usually a day later, that Trump’s statements are half-truths or outright lies — as with his claim that most undocumented Mexican immigrants are criminals or rapists — he accuses the press of misquoting him, and then comes up with the next outrageous statement. Just like most Latin America’s autocrats did when they rose to power.
Second, like most nationalist populists, Trump constantly raises the specter of foreign threats, as when he claims that there is an avalanche of undocumented immigrants despite the fact that all serious studies show the number of undocumented immigrants has declined over the past seven years. Nationalist populists need a foreign enemy so they can present themselves as leaders of a national cause.
Third, like most populists, Trump is an egomaniac. He has no concrete plans, nor a political organization to carry them out. His favorite word is “I” (in his campaign opening speech last year, he used the word “I” 220 times). He wants us to believe that he’s the smartest, and that — as he constantly tells us — his rivals are “stupid,” “idiots,” or sold out to special interests.
My opinion: If Trump were to become president, more than making America great again, he would make America look more like Latin America. Or, rather, he would make America look like much of Latin America was until recently, before several countries realized how irresponsible narcissist demagogues can be.
Columnist Andres Oppenheimer interviews former Colombian President Alvaro Uribe at 8 a.m. on Feb. 29 at Miami Dade College, Wolfson Campus. The event will be in Spanish. Tickets are $25 in advance. Purchase at elnuevoheraldeventos.com.
|
import * as _ from 'underscore';
import {PlasmaState} from '../../PlasmaState';
import {IFilterState} from './FilterBoxReducers';
import {defaultListBoxMatchFilter, MatchFilter} from './FilterBoxUtils';
export interface GetFilterTextProps {
id: string;
}
const getFilterText = (state: PlasmaState, props: GetFilterTextProps): string => {
const filter: IFilterState = _.findWhere(state.filters, {id: props.id});
return (filter && filter.filterText) || '';
};
export interface GetMatchFilterTextProps {
matchFilter?: MatchFilter;
}
const getMatchFilter = (state: PlasmaState, props: GetMatchFilterTextProps): MatchFilter =>
_.isUndefined(props.matchFilter) ? defaultListBoxMatchFilter : props.matchFilter;
export const FilterBoxSelectors = {
getFilterText,
getMatchFilter,
};
|
package gorgonia
import (
"hash"
"hash/fnv"
"github.com/chewxy/gorgonia/tensor/types"
)
/*
This file contains code for Ops that aren't really functions in the sense that they aren't pure.
Since they're not adherents to the Church of Lambda, they are INFIDELS! A fatwa will be issued on them shortly
*/
type stmtOp interface {
Op
isStmt() bool
}
// letOp is not really a function. It's more of a binding statement.
// However, it's implemented as an Op so that it can be counted for register allocation and liveness
type letOp struct{}
func (op letOp) Type() Type { return nil }
func (op letOp) returnsPtr() bool { return true }
func (op letOp) overwriteInput() int { return 0 }
func (op letOp) callsExtern() bool { return false }
func (op letOp) inferShape(Type, ...*Node) (types.Shape, error) { return nil, nil }
func (op letOp) DiffWRT(int) []bool { return nil }
func (op letOp) SymDiff(inputs Nodes, outputNode, gradNode *Node) (Nodes, error) { return nil, nil }
func (op letOp) Do(vals ...Value) (Value, error) { return nil, nil }
func (op letOp) String() string { return "=" }
func (op letOp) WriteHash(h hash.Hash) { h.Write([]byte("let")) }
func (op letOp) Hashcode() uint32 {
h := fnv.New32a()
op.WriteHash(h)
return h.Sum32()
}
func (op letOp) isStmt() bool { return true }
// readOp reads a value off the input. This op ensures that a value is used, and hence isn't codegen'd away
type readOp struct {
into *Value // no, it's not a mistake. It's a pointer to a Value (which is an interface{} type)
}
func (op readOp) Type() Type { return nil }
func (op readOp) returnsPtr() bool { return true }
func (op readOp) overwriteInput() int { return 0 }
func (op readOp) callsExtern() bool { return false }
func (op readOp) inferShape(Type, ...*Node) (types.Shape, error) { return nil, nil }
func (op readOp) DiffWRT(int) []bool { return nil }
func (op readOp) SymDiff(inputs Nodes, outputNode, gradNode *Node) (Nodes, error) { return nil, nil }
func (op readOp) Do(vals ...Value) (Value, error) { return nil, nil }
func (op readOp) String() string { return "print" }
func (op readOp) WriteHash(h hash.Hash) { h.Write([]byte("print")) }
func (op readOp) Hashcode() uint32 {
h := fnv.New32a()
op.WriteHash(h)
return h.Sum32()
}
func (op readOp) isStmt() bool { return true }
|
Impact of first-principles properties of deuterium–tritium on inertial confinement fusion target designs. A comprehensive knowledge of the properties of high-energy-density plasmas is crucial to understanding and designing low-adiabat, inertial confinement fusion (ICF) implosions through hydrodynamic simulations. Warm-dense-matter (WDM) conditions are routinely accessed by low-adiabat ICF implosions, in which strong coupling and electron degeneracy often play an important role in determining the properties of warm dense plasmas. The WDM properties of deuterium–tritium (DT) mixtures and ablator materials, such as the equation of state, thermal conductivity, opacity, and stopping power, were usually estimated by models in hydro-codes used for ICF simulations. In these models, many-body and quantum effects were only approximately taken into account in the WDM regime. Moreover, the self-consistency among these models was often missing. To examine the accuracy of these models, we have systematically calculated the static, transport, and optical properties of warm dense DT plasmas, using first-principles (FP) methods…
|
Longitudinal bending stiffness does not affect running economy in Nike Vaporfly Shoes
Highlights
- We showed that reducing the longitudinal bending stiffness in the forefoot of the Nike Vaporfly 4% has minimal effect on overall running economy.
- Biomechanically, the curved carbon-fiber plate in the Nike Vaporfly 4% has the most influence on the metatarsophalangeal joint, with no significant effect on the ankle or knee joints.
- The forefoot longitudinal bending stiffness alone likely has minimal effect on performance. Instead, improved performance is likely due to an interaction of the foam, shoe geometry, and other effects of the curved carbon-fiber plate not related to bending stiffness.
Introduction
Performance running shoe technology, such as improved midsole energy return and increased longitudinal bending stiffness (LBS), has recently become a polarizing topic.1–7 The Nike Vaporfly 4% (VF) shoe utilizes both these technologies to give athletes on average up to 4% savings in running economy compared to popular high-end marathon racing shoes,8–11 which translates to improved running performance.12–15 While scientists and bloggers debate whether the foam,8,16 geometry,2 or curved carbon-fiber plate1,17 contributes more to these "super shoes", the exact mechanisms resulting in 4% metabolic savings are not yet understood. The use of carbon-fiber plates to improve running economy, while increasingly popular, is not new. In 2006, Roy and Stefanyshyn16 showed small (1%) improvements in running economy with increased LBS. However, since then, reported effects of LBS on running economy have been mixed, with studies finding results ranging from deteriorations,18 to no effect,19–21 to small effect (1%),22 to large improvements (3%–4%)8–11,23 (for a full review, see Ortega et al.24). Importantly, the largest improvements in running economy have been reported in studies assessing VF shoes,8–10 suggesting that the geometry and stiffness of the curved VF plate may provide additional savings compared to flat plates previously tested. It is also important to note that the contributions of the foam to these savings are unknown because no studies have addressed the effects of the curved plate and foam independently. Earlier studies have shown that soft and resilient midsole foam using air pockets or thermoplastic polyurethane foam can improve running economy by 1% compared to conventional ethyl-vinyl acetate foam.25,26 The VF studies used state-of-the-art baseline shoes with either ethyl-vinyl acetate foam with air pockets or thermoplastic polyurethane foam (boost); however, the VF midsole foam (polyether block amide) is softer and more resilient.8 In an original analysis of the VFs, Hoogkamer et al.8 hypothesized that energy return from the foam was a key contributor to the metabolic savings. However, because VFs have not been compared without the confounding influence of the carbon-fiber plate, the metabolic savings from the foam remain unknown. From a biomechanical perspective, increased LBS has been shown to reduce negative work done at the metatarsophalangeal (MTP) joint27–29 and to alter joint mechanics in the ankle16,19,27,28,30,31 and knee.28 Specifically, in a biomechanical analysis of the VF, Hoogkamer et al.27 found that the curved carbon-fiber plate in the VF prototype resulted in lower work rates at the ankle and reduced dorsiflexion and negative work at the MTP joint compared to control shoes. The researchers therefore concluded that the curved plate provided a clever lever and a stiffening effect that likely contributed to the 4% energy savings. An important limitation of that study is that the tested VF prototype shoes differed in geometry (taller stack height), foam properties (more compliant and resilient), and LBS (stiffer and having a carbon-fiber plate) from the control shoes, once again making the contribution of the plate alone difficult to pinpoint. While the effects of LBS are often evaluated with flat carbon-fiber plates,24 curved plates can be expected to be more effective. Farina et al.32 showed that increased plate curvature can reduce net MTP joint work without increasing ankle plantarflexion moments, and recently Nigg et al.1,17 proposed a theory attributing the majority of VF's energetic benefit to the curved plate's "teeter-totter" effect. However, this theory is as yet untested. In the current study, we attempt to determine the isolated effects of the stiffness of the carbon-fiber plate in the VF by cutting the plate and reducing its LBS in the forefoot. Our aim was to determine how LBS independently affects running economy and biomechanics in the VF. We hypothesized that cutting the plates would significantly increase (i.e., worsen) metabolic rate. Based on previous literature reporting 1% savings with flat carbon-fiber plates, we expected that reducing LBS by cutting through the curved carbon-fiber plate of the VF would increase metabolic rate during running by about 2%. To accommodate this increased metabolic rate, we hypothesized that decreasing LBS would decrease ankle dorsiflexion angle and plantarflexion moment and that it would increase MTP dorsiflexion angle, plantarflexion moment, and power.
Participants
A power analysis was performed a priori (G*Power 3.1; Universität Kiel, Kiel, Germany), and it was determined that a sample size of 14 was necessary to achieve an effect size of 0.82. We recruited 17 male participants (aged 24 ± 4 years; weight 67.8 ± 4.3 kg; height 173.3 ± 3.6 cm; mean ± SD) who wore U.S. men's size 9.5 shoes. A total of 13 subjects took part in both the biomechanics and metabolic protocols, while 4 participated in only 1 protocol (2 in each). For the biomechanics protocol, inclusion criteria consisted of running at least 16 km/week. For the metabolic protocol, participants had to additionally be capable of running 5 km in 19 min or an equivalent performance (10 km in 39 min, marathon in 3 h). For both protocols, participants were excluded if they had a lower extremity injury or surgery in the past 12 months or had any existing orthopedic, cardiovascular, or neuromuscular conditions. All participants gave written consent. The study was approved by the University of Massachusetts Amherst Institutional Review Board (1741 and 1789).
Shoe conditions
Participants wore 2 pairs of shoes: an intact Nike Vaporfly (VF intact) and a cut Nike Vaporfly (VF cut). In lieu of having 2 identical shoes with and without a carbon-fiber plate, we made 6 medio-lateral cuts through the carbon-fiber plate in the forefoot of new VFs to reduce the plate's effectiveness in bending (Fig. 1). This method should not have affected the geometry and foam properties of the shoes; however, it is possible that the foam was slightly altered in the forefoot due to cutting. Cuts were made just past the depth of the plate using a table saw with a 1.5-mm blade. We measured the LBS in extension with a 3-point bending test using a standard material testing machine (Instron ElectroPuls 10000; Instron, Norwood, MA, USA). To perform this test, the shoe was placed on 2 support frames 80 mm apart.22 An Instron tip, aligned with the MTP joint, displaced 5 mm while recording force at 200 Hz. We calculated bending stiffness in N·m/rad based on the force applied to the shoe, displacement of the Instron tip, and the distance of the support beams.24 This method was not suitable for measuring the shoe in flexion due to foam deformation. Therefore, we measured the shoe's LBS in flexion using a standard flex tester (Shoe Flexer; Exeter Research Inc., Brentwood, NH, USA), calculating flexion stiffness for the final five of fifty 30-degree flexion cycles.
Experimental set-up and protocol
The study comprised 2 testing protocols: a metabolic protocol and a biomechanics protocol. If subjects completed both protocols on the same day, biomechanics testing was done first.
Fig. 1. Intact Nike Vaporfly 4% (VF intact) and cut Nike Vaporfly 4% (VF cut) shoe conditions. To create the VF cut, 6 medio-lateral cuts were made through the carbon-fiber plate. Note that the black line is not the exact location of the carbon-fiber plate, but all the cuts were made fully through the plate.
Metabolic protocol
Participants wore their own shoes for a warm-up of at least 5 min at the test pace of 14 km/h (6:54 min/mile). During the warm-up, participants wore a mouthpiece attached to an expired-gas analysis system to get accustomed to running with it. After the warm-up, participants completed four 5-min trials at 14 km/h on a level, force-measuring treadmill with a rigid deck (Treadmetrix, Park City, UT, USA). Shoe order was randomly assigned, and participants wore each shoe twice in a mirrored order (e.g., VF intact, VF cut, VF cut, VF intact or VF cut, VF intact, VF intact, VF cut). This method reduces bias due to order of conditions and any possible learning or fatigue effects. We used lightweight shoe covers to blind participants to the shoes they were wearing. During each trial we measured horizontal and vertical ground reaction forces (GRF) at 1200 Hz, as well as submaximal rates of oxygen uptake and carbon-dioxide production using an expired-gas analysis system (True One 2400; Parvo Medics, Salt Lake City, UT, USA). After each trial, participants were given a 5-min break while researchers changed their shoes behind a barrier. We calculated the metabolic rate (i.e., running economy) over the last 2 min of each trial, based on the measured rates of oxygen uptake and carbon-dioxide production using the Peronnet and Massicotte equation.33 Running economy is the energetic cost of running at a specific velocity expressed in W/kg; therefore, lower running economy values will result in an increase in performance.8,12,13 The metabolic rate was averaged between the 2 trials in the same shoe for each participant. In the last 30 s of each trial, we collected GRF from the treadmill. We opted to use GRF from the treadmill, and not over ground, because treadmill running allowed us to take the average of multiple steps. A custom Python script (Python Software Foundation, https://www.python.org/) was used to filter GRF data using a low-pass, second-order Butterworth filter with a cut-off frequency of 20 Hz.34 Contact time was determined using a 25 N vertical GRF threshold to determine toe-offs and touch-downs; these points were then visually inspected to ensure accuracy. We then calculated step frequency, peak vertical GRF, and propelling and braking impulse. Finally, we further visualized these differences in GRF between shoes by plotting the GRF vectors in the sagittal plane relative to the stance phase.
Biomechanics protocol
We placed retro-reflective markers on the participants' right leg on the greater trochanter, medial and lateral epicondyles, and on the medial and lateral malleoli. The right foot was tracked with markers on the first and fifth metatarsal head and base, the first toe, and a cluster of 3 markers on the heel. To track the thigh and the shank segments, rigid bodies with 4 non-co-linear reflective markers were adhered to the lateral aspects of the thigh and shank. Participants ran across a 30 m runway embedded with force plates (AMTI Inc., Watertown, MA, USA) at 14 km/h. During the trials, motion capture data (Oqus 3; Qualisys Inc., Gothenburg, Sweden) and GRF data were continuously collected at 200 Hz and 2000 Hz, respectively. We used timing gates to verify that the participant's speed was 14 km/h (with ±4% variance), and we visually made sure that the participant's right foot landed directly on a force plate. Participants continued to perform runs until we had collected 5 good trials in each shoe condition. To process the data, we first visually analyzed, and then gap filled the motion capture data in Qualisys Track Manager (Qualisys Inc., Gothenburg, Sweden). Next, using a custom Python script, GRF and kinematic data were low-pass filtered using a dual-pass Butterworth filter, with the same effective 14 Hz cut-off,27 to prevent artificial fluctuations in joint moments.35,36 For the knee, ankle, and MTP joints, we calculated joint angles, angular velocities, moments, powers, and work during the stance phase using a 3D inverse dynamics model custom built in Python. We assumed the MTP joint moment was 0 until the center of pressure passed the MTP joint center. Finally, we normalized data to 100% of stance phase and averaged the trials in the same shoe condition within each participant.
Statistics
We used a 2-tailed paired t test to compare metabolic rate, step parameters, and peak biomechanical variables between shoes (R software Version 1.0.44; The R Core Team, Vienna, Austria). Significance was set at α = 0.05, and a Holm-Bonferroni correction was implemented to account for multiple t tests. We also used 1-dimensional statistical parametric mapping in Python to conduct a 2-tailed, paired-sample t test (α = 0.05) for GRF, joint angles, angular velocities, moments, and powers.37 Outputs from statistical parametric mapping are a time series of t values, allowing us to analyze differences across the whole stance phase rather than just peaks or averages.
Results
During analysis, 1 participant was removed from the metabolic protocol (n = 14) and 2 participants were removed from the biomechanics protocol (n = 13) due to data quality issues.
Shoe properties
The VF cut had a bending stiffness of 7.7 N·m/rad, while the VF intact had a bending stiffness of 23.1 N·m/rad in flexion. In extension, the VF cut had a stiffness of 3.1 N·m/rad and the VF intact had a stiffness of 11.1 N·m/rad.
Energetics and step parameters
The average metabolic rate was statistically similar (p = 0.306) in the VF cut (14.17 ± 0.74 W/kg) and the VF intact (14.09 ± 0.80 W/kg), with the average change within participants being 0.55% ± 1.77% (95%CI: −0.40% to 1.44%; Fig. 2). Individual changes ranged from −3.3% to 3.3% between the VF cut and the VF intact, with 10 of 14 participants having worse running economy in the VF cut condition. Contact time was significantly shorter (p < 0.001) in the VF cut (0.211 ± 0.014 s) compared to the VF intact (0.213 ± 0.014 s), with the average change within participants being −1.19% ± 1.10% (95%CI: −1.77% to −0.61%; Table 1). No significant differences were found for step frequency, braking impulse, propelling impulse, or peak vertical GRF (Table 1). Vertical GRF were significantly higher in the VF cut during 55%–96% of stance phase (p < 0.001). Anterior–posterior GRF were significantly lower in the VF cut compared to the VF intact for 30%–68% and significantly higher in the VF cut compared to the VF intact for 75%–95% of stance phase (both p < 0.001; Fig. 3). Fig. 3C shows GRF vectors across the stance phase. Overall, the patterns look similar, but the GRF vector for the VF intact is directed more forward than for the VF cut for the majority of the stance phase. Note that for around 60%–70% of the stance phase, where the vectors appear to be aligned, the VF intact vector is indeed 1% ahead of the VF cut.
Biomechanics
Biomechanical differences were only found in MTP joint mechanics (Fig. 4). MTP joint angles were more dorsiflexed in the VF cut for 0%–12% and 85%–100% of the stance phase (p = 0.013), and peak MTP joint dorsiflexion was significantly higher in the VF cut (p = 0.002; Table 2). This was accompanied by increased MTP joint angular velocity in the VF cut between 11% and 21% and between 77% and 90% of stance phase (p = 0.001). Further, there was significantly more negative MTP joint power in the VF cut compared to the VF intact from 79% to 90% of stance phase (p < 0.001). Negative MTP joint work was significantly higher in the VF cut compared to the VF intact (p = 0.008), and positive MTP joint work was significantly lower in the VF cut compared to the VF intact (p = 0.023; Table 2).
Discussion
This study sought to determine the independent effect of the curved carbon-fiber plate in the VF shoe on running energetics and biomechanics. Our mechanical testing confirmed that the VF cut was dramatically less stiff in flexion (66% less stiff) and extension (72% less stiff) compared to the VF intact. Interestingly, our results show that reducing the LBS did not substantially change running economy, refuting our first hypothesis. Furthermore, we reject our second hypothesis that reduced LBS would decrease ankle dorsiflexion moment and power. Supporting our third hypothesis, MTP joint dorsiflexion angle and power were significantly larger in the VF cut; however, MTP joint moment was not significantly different between conditions. Our findings are in line with previous research finding small differences in running economy between shoes with and without carbon-fiber plates.16,19,22,23 However, most of these studies used flat plates, and we hypothesized that the curved plate in the VF would result in additional savings and explain 2% of the 4% savings reported by Hoogkamer et al.8 and Barnes and Kilding.9 Conversely, there was no detectable difference in running economy between shoe conditions. As such, our findings are in line with the data from the vast majority of studies that evaluated the effects of LBS with flat plates/insoles.18–21 When directly comparing footwear conditions at the group level, without focusing on individual responders or the individual stiffness condition with the lowest metabolic rate, only the study results of Roy and Stefanyshyn16 and Oh and Park22 showed improvements in running economy (0.8% and 1.1%, respectively). Our results therefore dispute suggestions that LBS from the curved carbon-fiber plate alone is responsible for the majority of the metabolic savings and instead suggest that the savings arise from a combination of the foam, shoe geometry, and other effects of the curved carbon-fiber plate not related to bending stiffness. These results challenge the recent suggestion that a curved plate alone can provide metabolic savings as high as 6% by acting as a teeter-totter.1,17 The idea behind this suggested teeter-totter effect is that the curved plate would allow the shoe to pivot in mid-stance, and push-off in a way that the force applied at the front of the shoe would create a reaction force at the heel large enough to substantially improve running economy. In this mechanism, the plate needs to provide bending stiffness in extension to enable the pivoting action. However, our current research shows that reducing the bending stiffness in both flexion and extension does not have a substantial effect on running economy. Better understanding the contributions from the highly compliant and resilient foam, as well as the shoe geometry, would further our understanding of how the plate independently contributes to the energy savings.
Fig. 2. Running economy was similar between Vaporfly shoes with intact (VF intact) and cut (VF cut) carbon-fiber plates. The average metabolic rate is shown in black, and individual responses are shown with grey lines. On average, runners had a 0.55% ± 1.77% (mean ± SD) higher metabolic rate in the VF cut than in the VF intact, but this difference was not statistically significant (p = 0.306).
In general, gross biomechanical measures were similar between shoe conditions; specifically, step frequency, peak vertical GRF, braking impulse, and propelling impulse were not significantly different between conditions. We found a small but significant difference in contact time between the VF cut and VF intact, where contact time in the VF intact was 1% longer than in the VF cut (Table 1). These findings are in line with previous research reporting longer contact times in plated shoes compared to controls. In VF shoes (with an embedded, curved plate), contact time has been found to be 0%–0.6% longer compared to controls.8–10 Similar findings have also been reported in shoes with flat plates.28,30,38 Previous research has shown that across running speeds, metabolic rate is inversely related to contact time,39 in that shorter contact times require faster muscle contractions to produce the force to support body weight. Faster muscle contractions are more energetically costly than slower contractions. Similarly, recent findings from Madden et al.19 and Cigoja et al.40 suggest that stiff shoes may reduce triceps surae muscle contraction velocity and improve overall running economy. While braking and propulsive impulses were similar between shoes, we found significant differences in vertical and anterior–posterior GRF traces (Fig. 3). For the majority of the stance phase, the GRF vector for the VF intact is directed more forward than for the VF cut. When accounting for the differences in contact time, the braking phases are similar, while the propulsive phase is longer for the VF intact than for the VF cut (Supplementary Fig. 1). This longer propulsive phase for the VF intact allows for a lower peak and average propulsive force. Interestingly, the 1% change in contact time and differences in propulsive GRF observed in our study were not enough to significantly affect running economy. Joint mechanical differences were only found at the MTP joint. Importantly, the MTP joint has been shown to be a relevant location for energy loss during running.41,42 Specifically, when the MTP joint dorsiflexes in stance phase, it absorbs mechanical energy. Our results show that cutting the carbon-fiber plate did indeed result in greater MTP joint dorsiflexion and dorsiflexion angular velocity at touch down and take off (Fig. 4). This is in line with previous studies finding decreased MTP joint dorsiflexion with both flat19,20,29 and curved27 plates compared to controls. Interestingly, we did not find differences in MTP joint moment. However, it is important to note that we calculated the external MTP joint moment, which is a combination of the foot and the shoe. Although we cannot quantify it with our current data, it is likely that the shoe contributed more to the moment in the VF intact than in the VF cut, which would result in a larger contribution from structures in the foot to the MTP joint moment in the VF cut. Furthermore, MTP joint negative power and negative work were significantly lower in the intact shoes (VF intact). While decreasing negative work and energy loss at the MTP joint has been discussed as an important feature of a carbon-fiber shoe,16,31 our study shows that it alone likely has a small effect on overall metabolic energy cost. We anticipated that cutting the plate would result in lower ankle dorsiflexion velocity and decreased ankle moment. Theoretically, increased LBS can be expected to affect ankle joint mechanics and energetics in several different ways, many of which are related to the opposite effects of potentially increased moment arms with increased LBS30 on joint moment and angular velocity (for a detailed review, see Ortega et al.24). Indeed, in our study, small differences occurred in the center of pressure location during the final 10% of the stance phase. Specifically, the center of pressure moves farther away in the VF intact, creating a larger moment arm around the ankle (Supplementary Fig. 2). However, we did not find significant differences in ankle angle, angular velocity, moment, and power. It is worth noting that while it was not statistically different, on average, an individual's positive and negative ankle work was 7.0% and 5.5% greater, respectively, in the VF intact.
Fig. 3. Ground reaction forces (GRF) during treadmill running in the Vaporfly shoes with intact (VF intact; red) and cut (VF cut; blue) carbon-fiber plates. The shaded bars represent ±1 standard error. Force traces have been normalized to body weight (BW). (A) Average vertical (Fz) GRF traces. (B) Anterior–posterior (Fy) GRF traces. Grey shaded areas represent where traces are significantly different from each other (p < 0.05) as determined by statistical parametric mapping. (C) GRF vectors during stance phase. Each vector represents 1% of stance phase. Interestingly, around 60%–70% of stance phase, when GRF vectors appear to be the same, the VF cut (blue) is 1% behind the VF intact (red).
These findings differ from those of Hoogkamer et al.,27 who reported decreased negative and positive work, as well as differences in peak ankle dorsiflexion and moment in the VF prototypes compared to control shoes. Importantly, the control shoes differed dramatically from the VF in midsole foam and geometry. It is possible that higher positive and negative ankle work might not be metabolically expensive when they result from increased mechanical energy storage and return in the Achilles tendon. Our results are in line with those of Farina et al.,32 who determined that ankle plantarflexion moment was similar between similar shoes with and without a curved plate. Together, these findings suggest that the differences found in the study by Hoogkamer et al.8 are due to differences between shoes other than the LBS of the curved carbon-fiber plate under the forefoot. We did not find any biomechanical differences at the knee joint between shoe conditions. This is in line with previously reported research.16,19,21,27,43 Ideally, we would have compared identical VF shoes with and without a plate; however, because such shoes are not available, cutting the plate was the next best option. We tried to remove the plate, but this was not possible without irreparable damage to the midsole. Because the plate was still in the shoe, it was likely still interacting with the foam and contributing to medio-lateral bending stiffness. Furthermore, only the forefoot and midfoot sections of the plate were cut. This choice was made because we believe that forefoot and midfoot bending stiffness are most important and because the plate was very close to the insole in the rearfoot. Although we believe that the plate's stiffening effects in the rearfoot likely have little effect, it is possible that the rearfoot part of the plate contributes to the shoe's effectiveness. For example, the plate may spread out the forces under the foot over a larger foam area and/or help stabilize the shoe. Interestingly, the newly released Adidas Adizero Adios Pro (Adidas, Herzogenaurach, Germany) marathon racing shoes have a flat plate in the rearfoot decoupled from a stiff rocker in the forefoot; and while there is no experimental data on how these shoes compare mechanically or energetically to the VF, they perform well in competition. The findings in our study can only apply to the role of the plate in LBS under the forefoot. Future studies should aim to assess identical shoe models with and without an embedded, curved carbon-fiber plate. It is important to note that researchers have been studying this with flat insoles; however, the literature on this subject suggests that curved plates provide a superior advantage.24 We only tested VF shoes on males running at 14 km/h. Previous research has suggested that the effect of LBS on running economy may be speed dependent18 (for a detailed review, see Ortega et al.24). However, because both Hoogkamer et al.8 and Barnes and Kilding9 found that metabolic savings in the VF shoes were consistent across speeds from 14 km/h to 18 km/h, we believe that the speed of 14 km/h used in our study was adequate to test our hypotheses. Our sample size of 14 participants limited our statistical power and therefore may have affected our ability to find significant differences. Lastly, our study sample only included males, some of whom had never run in VFs before. Barnes and Kilding9 found that metabolic savings in the VF shoes were not significantly different between males and females, but differences in sex, body mass, leg length, and shoe size can theoretically affect the relative influence of the plate on running mechanics and energetics, which should be addressed in future research. Although some participants were new to using VFs, we do not believe this influenced our findings because, for example, all participants in the Hoogkamer et al. study8 were unfamiliar with the shoes and still exhibited metabolic savings and biomechanical differences. When mechanically testing our shoes, we used 2 different methods for quantifying flexion and extension stiffness. Because the plate is embedded within the foam, our 3-point bending test of the VF intact in flexion resulted primarily in displacement due to foam deformation rather than longitudinal bending. Therefore, we decided to use an industry-standard flex tester (Shoe Flexer; Exeter Research Inc.) for measuring flexion, and a 3-point bending test for measuring extension. These tests were sufficient for showing that cutting the plate effectively reduced the LBS for flexion and extension; however, care is advised when comparing stiffness values between different testing methods (for a detailed review, see Ortega et al.24). Future work should aim to improve external validity and standardization of footwear LBS assessment so that reported values can be compared across the literature. As carbon-fiber plates become increasingly popular in running shoe innovation, it is important to understand how they affect running economy and joint mechanics and how this can contribute to improved performance. Future studies should continue to address specific features of shoes by systematically assessing one feature at a time in order to further our understanding of how different features alter running economy and running biomechanics.
Conclusion
While multiple studies have assessed the effects of increased LBS and carbon-fiber plates on running economy, our study is the first to directly assess the role of a curved, embedded carbon-fiber plate in 2 identical shoes. We found that reducing LBS, in both flexion and extension, did not significantly alter running economy. Similarly, we found only small biomechanical changes at the MTP joint. Overall, we suggest that the curved carbon-fiber plate alone has minimal impact on the 4% savings in the VF. Instead, the savings likely result from a combination and interaction of the highly compliant and resilient midsole, shoe geometry, and other effects of the curved carbon-fiber plate not related to LBS under the forefoot.
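As a concrete illustration of the GRF processing described in the metabolic protocol, the sketch below estimates per-step contact times from a vertical GRF trace using a 20 Hz low-pass, second-order Butterworth filter and the 25 N stance threshold. This is a minimal reconstruction for illustration, not the authors' script; the zero-phase filtering (filtfilt) and the edge-pairing logic are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def contact_times(fz, fs=1200, cutoff=20.0, threshold=25.0):
    """Estimate contact times (s) from a vertical GRF trace fz (N)."""
    # Low-pass, second-order Butterworth filter; filtfilt makes it zero-phase.
    b, a = butter(2, cutoff / (fs / 2), btype="low")
    fz_filt = filtfilt(b, a, np.asarray(fz, dtype=float))
    # Stance is wherever the filtered vertical force exceeds the threshold.
    on_ground = fz_filt > threshold
    edges = np.diff(on_ground.astype(int))
    touchdowns = np.where(edges == 1)[0] + 1   # rising edges of the stance flag
    toeoffs = np.where(edges == -1)[0] + 1     # falling edges of the stance flag
    if touchdowns.size == 0 or toeoffs.size == 0:
        return np.array([])
    # Pair each touch-down with the next toe-off, then convert to seconds.
    toeoffs = toeoffs[toeoffs > touchdowns[0]]
    n = min(touchdowns.size, toeoffs.size)
    return (toeoffs[:n] - touchdowns[:n]) / fs
Step frequency can then be taken as the number of touch-downs divided by the duration of the analyzed window.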
|
The present invention relates generally to an imaging system, and more particularly to an X-ray mammogram tomosynthesis system.
Conventional X-ray mammography imaging systems utilize an X-ray source mounted on a supporting frame. The frame is manually rotated by the system operator to place the X-ray source into a desired position adjacent to a patient's breast. The X-ray source emits a first shot of X-rays through the patient's breast and an image is captured on a first X-ray-sensitive film positioned on the opposite side of the patient's breast. The frame is then manually rotated into another position by the operator and a second X-ray-sensitive film is exposed by a second shot of X-rays. This procedure can be repeated several times to generate several images on different films. The images on the X-ray-sensitive films may then be evaluated by a physician and/or digitized and evaluated by a computer. However, such a system produces a two-dimensional image of the patient's breast, which provides insufficient information about the presence of tumors and calcification and often leads to false positive readings.
U.S. Pat. No. 5,872,828 discloses a tomosynthesis system for breast imaging. This system produces a three-dimensional image of the breast being imaged. The tomosynthesis system contains an X-ray source which moves in an arc-shaped path over the breast that is being imaged, a stationary digital X-ray detector, and an image processor. The detector is mounted on a stationary portion of a support structure. The X-ray source is mounted on a movable portion of the support structure. The movable portion of the support structure is an arm whose lower end is rotatably attached to the stationary support structure at a pivot point, and whose upper end supports the X-ray source.
However, this tomosynthesis system suffers from several disadvantages. First, the X-ray source is subject to a high amount of vibration because it is mounted to the free, upper end of a rotating arm, while the arm is supported only at the pivot point at its lower end. The vibration of the X-ray source distorts the image. Second, this system requires a high amount of driving power to move the X-ray source. The high driving power is required because torque is applied to the fixed, lower end of the arm, while the heavy X-ray source is mounted to the free, upper end of the arm.
|
A Pathway to Prioritizing and Delivering Healthy and Sustainable Cities Creating healthy and sustainable cities should be a global priority. Some cities prioritize 15-minute cities as a planning approach with co-benefits for health, climate change mitigation, equity, and economic recovery from COVID-19. Yet, as our recent Lancet Global Health series on Urban Design, Transport, and Health showed, many cities have a long way to go to achieve this vision. This policy guideline summarizes the main findings of the series, which assessed health and sustainability indicators for 25 cities in 19 countries. We then outline steps governments can take to strengthen policy frameworks and deliver more healthy, equitable, and sustainable built environments. The Lancet Global Health series provided clear evidence that cities need to transform urban governance to enable integrated planning for health and sustainability and commit to policy implementation. Evidence-informed indicators should be used to benchmark and monitor progress. Cities need policy frameworks that are comprehensive and consistent with evidence, with measurable policy targets to support implementation and accountability. The series provided evidence-informed thresholds for some key urban design and transport features, which can be embedded as policy targets. Policies and interventions must prioritize identifying and reducing inequities in access to health-supportive environments. Governments should also invest in open data and promote citizen-science programmes, to support indicator development and research for public benefit. We provide tools to replicate our indicators and an invitation to join our 1000 Cities Challenge via the Global Observatory of Healthy and Sustainable Cities.
|
#!/usr/bin/env python
"""
Read in the file named 'show_arp.xml'. This file is from 'show arp | display xml' on a
Juniper SRX (modified somewhat).
Use etree.tostring() to print out the XML tree as a string.
Use XPath parsing to find all of the arp entries and to construct the following
dictionary:
{
'10.220.88.1':
{
'intf': 'vlan.0',
'mac_addr': '00:62:ec:29:70:fe'
},
'10.220.88.20':
{
'intf': 'vlan.0',
'mac_addr': 'c8:9c:1d:ea:0e:b6'
}
}
Print this dictionary out to standard output.
"""
from __future__ import unicode_literals, print_function
from lxml import etree
from pprint import pprint
with open('show_arp.xml') as f:
arp = etree.fromstring(f.read())
print()
print("Print XML Tree out as a string:")
print("-" * 20)
print(etree.tostring(arp, pretty_print=True).decode())
xpath_arp = '//arp-table-entry'
arp_entries = arp.xpath(xpath_arp)
mac_xpath = 'mac-address'
ip_xpath = 'ip-address'
intf_xpath = 'interface-name'
arp_dict = {}
for arp_entry in arp_entries:
mac_address = arp_entry.xpath(mac_xpath)[0].text
ip_address = arp_entry.xpath(ip_xpath)[0].text
intf = arp_entry.xpath(intf_xpath)[0].text
arp_dict[ip_address] = {
'mac_addr': mac_address,
'intf': intf
}
print()
print("Print out final data structure: ")
print("-" * 20)
pprint(arp_dict)
print()
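# For reference, a minimal 'show_arp.xml' consistent with the docstring's
# expected output might look like this (the <arp-table-information> root
# element is an assumption; only the //arp-table-entry XPath matters):
#
# <arp-table-information>
#   <arp-table-entry>
#     <mac-address>00:62:ec:29:70:fe</mac-address>
#     <ip-address>10.220.88.1</ip-address>
#     <interface-name>vlan.0</interface-name>
#   </arp-table-entry>
#   <arp-table-entry>
#     <mac-address>c8:9c:1d:ea:0e:b6</mac-address>
#     <ip-address>10.220.88.20</ip-address>
#     <interface-name>vlan.0</interface-name>
#   </arp-table-entry>
# </arp-table-information>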
|
Vincristine and Vinblastine: Is checking bilirubin mandatory in children with Brain Tumors? Vinca alkaloids are commonly used in pediatric patients with brain tumors. Abnormal liver function with vinca alkaloids has been reported in other types of pediatric cancer (leukemia).1,2 The experience in these cancers has influenced the care of patients with brain tumors by requiring the bilirubin result before administration of vinca alkaloids. Vinca alkaloids are common in pediatric brain tumor regimens, especially in two of the most common diagnoses: low-grade glioma and medulloblastoma.3,4 Although patients with brain tumors are more likely to have neurotoxicity with vinca administration,5 they are not historically thought to have liver abnormalities with these medications. We wanted to examine the risk of liver toxicity in pediatric patients with brain tumors receiving vincas in order to reduce the time and cost of weekly bilirubin testing. Therefore, we completed a multisite retrospective chart review of all pediatric neuro-oncology patients receiving vinca alkaloids and their incidence of abnormal liver laboratory values (bilirubin levels). Over the course of 3 years, at three sites, there were a total of 113 patients with brain tumors with 1,752 incidences of vinca administration and concurrent bilirubin levels. Of these, there were 10 (0.6%) abnormal bilirubin levels in 7 of the 113 (6%) patients: 1 on bone marrow transplant, 1 in multisystem organ failure, 2 with abnormalities not requiring dose adjustment, 1 on high-risk medulloblastoma therapy requiring dose omission, and 1 with known sensitivity to all chemotherapy. In the absence of evidence of abnormal bilirubin levels in the vast majority of patients with brain tumors, a vinca alkaloid could be administered weekly without a bilirubin level. Our findings are similar to those of Bouffet et al.,3 who did not report any liver abnormalities in 2,020 doses of vinblastine for low-grade gliomas. We recommend reverting to the historical practice of monthly liver function testing while on vinca alkaloids in this population. However, children with high-risk disease or on high-dose chemotherapy should be monitored more closely. This initiative could improve patient/family experience and clinic efficiency, and decrease cost.
|
import java.awt.BorderLayout;
import java.awt.CardLayout;
import java.awt.Color;
import java.awt.Font;
import java.awt.Graphics;
import java.awt.GridLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Random;
import java.util.Stack;
import javax.imageio.ImageIO;
import javax.swing.Box;
import javax.swing.BoxLayout;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JTextArea;
import javax.swing.Timer;
import javax.swing.JFrame;
import javax.swing.JPanel;
@SuppressWarnings("serial")
public class Main extends JFrame{
HashMap<Integer, Stack<Sprite>> levels;
ArrayList<Sprite> out;
ArrayList<Projectile> bullets;
ArrayList<Boolean> outbool;
boolean rising = false;
JLabel[] numslvl= {new JLabel(new ImageIcon("0.png")), new JLabel(new ImageIcon("1.png")),
new JLabel(new ImageIcon("2.png")), new JLabel(new ImageIcon("3.png")),
new JLabel(new ImageIcon("4.png")), new JLabel(new ImageIcon("5.png")),
new JLabel(new ImageIcon("6.png")), new JLabel(new ImageIcon("7.png")),
new JLabel(new ImageIcon("8.png")), new JLabel(new ImageIcon("9.png"))
};
JLabel[] numspoints= {new JLabel(new ImageIcon("0.png")), new JLabel(new ImageIcon("1.png")),
new JLabel(new ImageIcon("2.png")), new JLabel(new ImageIcon("3.png")),
new JLabel(new ImageIcon("4.png")), new JLabel(new ImageIcon("5.png")),
new JLabel(new ImageIcon("6.png")), new JLabel(new ImageIcon("7.png")),
new JLabel(new ImageIcon("8.png")), new JLabel(new ImageIcon("9.png"))
};
JLabel [] numslives = {new JLabel(new ImageIcon("player.png")), new JLabel(new ImageIcon("player.png")),
new JLabel(new ImageIcon("player.png"))
};
JPanel window;
JPanel main;
Game game;
JPanel startwindow;
JButton start;
JLabel title2;
boolean pauze;
JLabel paused;
ImageIcon restartimg;
ImageIcon titlepic, titlepic2;
JPanel endwindow;
JPanel endgameinfo;
JPanel endlevel, endpoints;
JButton restart;
JLabel pointsimg;
JLabel levelimg;
JPanel livespanel;
JLabel livesimg;
BufferedImage background;
CardLayout cards;
int lvl;
int pause;
Player player;
Random random = new Random();
Timer timer;
JPanel infoGame;
JPanel infoPlayer;
JTextArea lives;
boolean lose = false;
Font font = new Font("DIALOG", Font.BOLD, 16);
public Main() {
player = new Player("player.png", 500, 450, 20, 0, 0);
levels = new HashMap<Integer, Stack<Sprite>>();
out = new ArrayList<Sprite>();
bullets = new ArrayList<Projectile>();
outbool = new ArrayList<Boolean>();
timer = new Timer(20, new TL());
window = new JPanel(new BorderLayout());
main = new JPanel(new BorderLayout());
startwindow = new JPanel(new BorderLayout());
titlepic = new ImageIcon("title.gif");
titlepic2 = new ImageIcon("title.png");
start = new JButton(titlepic);
title2 = new JLabel(titlepic2);
pointsimg = new JLabel(new ImageIcon("points.png"));
levelimg = new JLabel(new ImageIcon("level.png"));
livesimg = new JLabel(new ImageIcon("lives.png"));
endwindow = new JPanel(new BorderLayout());
restartimg = new ImageIcon("restart.gif");
restart = new JButton(restartimg);
try {
	background = ImageIO.read(new File("background2.jpg"));
} catch (IOException e) {
	// Leave background null; Graphics.drawImage silently draws nothing for a null image.
}
for(int i=1; i<20; i++) {
levels.put(i, new Stack<Sprite>());
for (int j=0;j<i*2+5;j++) {
levels.get(i).push(new Enemy("enemy.png", random.nextInt(1000), 10, 15, random.nextInt(10), random.nextInt(10), 5*i, 3*i));
}
}
lvl = 1;
// Move every enemy for this level into play. Popping inside a loop bounded
// by size() would skip half the stack, since size() shrinks with each pop().
while (!levels.get(lvl).isEmpty()) {
	out.add(levels.get(lvl).pop());
}
pause = 0;
game = new Game();
infoGame = new JPanel();
infoPlayer = new JPanel(new GridLayout(1, 3));
paused = new JLabel(new ImageIcon("paused.png"));
endgameinfo = new JPanel();
endgameinfo.setLayout(new BoxLayout(endgameinfo, BoxLayout.PAGE_AXIS));
livespanel = new JPanel();
endlevel = new JPanel();
endpoints = new JPanel();
}
public Main createGUI(){
game.addKeyListener(new KL());
game.setFocusable(true);
start.addActionListener(new startAL());
restart.addActionListener(new endAL());
restart.setOpaque(false);
restart.setContentAreaFilled(false);
restart.setBorderPainted(false);
cards = new CardLayout();
this.setLayout(new BorderLayout());
this.add(window, BorderLayout.CENTER);
infoGame.setBackground(Color.black);
infoGame.add(levelimg);
levelimg.setVisible(true);
for (int i=0; i<numslvl.length;i++) {
infoGame.add(numslvl[i]);
numslvl[i].setVisible(false);
}
infoGame.add(pointsimg);
for (int i=0; i<numspoints.length;i++) {
infoGame.add(numspoints[i]);
numspoints[i].setVisible(false);
}
pointsimg.setVisible(true);
infoPlayer.setBackground(Color.black);
livespanel.add(livesimg);
for (int i=0; i<numslives.length;i++) {
livespanel.add(numslives[i]);
numslives[i].setVisible(true);
}
startwindow.add(start);
start.setLocation(-400, -400);
livespanel.setBackground(Color.black);
infoPlayer.add(livespanel);
JPanel empty = new JPanel();
empty.setBackground(Color.black);
infoPlayer.add(empty);
infoPlayer.add(paused);
paused.setVisible(false);
window.setLayout(cards);
window.add("start", startwindow);
window.add("game", main);
window.add("end", endwindow);
main.add(infoGame, BorderLayout.NORTH);
main.add(game, BorderLayout.CENTER);
main.add(infoPlayer, BorderLayout.SOUTH);
endwindow.add(title2, BorderLayout.NORTH);
endwindow.add(endgameinfo, BorderLayout.CENTER);
endgameinfo.add(Box.createVerticalGlue());
endgameinfo.add(endlevel);
endgameinfo.add(endpoints);
endgameinfo.add(restart);
endgameinfo.add(Box.createVerticalGlue());
cards.show(window, "start");
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
this.setSize(1000,700);
this.setVisible(true);
return this;
}
public class Game extends JPanel {
int count=0;
@Override
public void paintComponent (Graphics g) {
super.paintComponent(g);
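// NOTE: rebuilding the child components here and calling repaint() at the end keeps
// this panel repainting continuously; a Timer-driven repaint alone would be cleaner.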
this.removeAll();
if (count-3500 >=0) {
count = 0;
}
g.drawImage(background, 0, -3500+count, null);
for (int i=0; i<out.size();i++) {
this.add(out.get(i));
out.get(i).setSize(out.get(i).rad*2,out.get(i).rad*2);
out.get(i).setLocation(out.get(i).xpos, out.get(i).ypos);
}
for (int i=0;i<bullets.size();i++) {
if(bullets.get(i).side) {
g.setColor(Color.red);
g.fillRect(bullets.get(i).xpos, bullets.get(i).ypos, bullets.get(i).rad, 2*bullets.get(i).rad);
} else {
g.setColor(Color.yellow);
g.fillRect(bullets.get(i).xpos, bullets.get(i).ypos, bullets.get(i).rad, 2*bullets.get(i).rad);
}
}
if (rising) {
g.setColor(Color.green);
g.drawOval(player.xpos-player.rad/2, player.ypos-player.rad/2, player.rad*3, player.rad*3);
}
this.add(player);
player.setSize(player.rad*2, player.rad*2);
player.setLocation(player.xpos, player.ypos);
if (pauze) {
paused.setVisible(true);
} else {
paused.setVisible(false);
count++;
}
for (int i=0; i<numslvl.length;i++) {
numslvl[i].setVisible(false);
}
for (int i=0; i<numspoints.length;i++) {
numspoints[i].setVisible(false);
}
infoGame.removeAll();
infoGame.add(levelimg);
String number = String.valueOf(lvl);
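// NOTE: digits share one JLabel per glyph, so numbers with repeated digits (e.g., 11) render that digit only once.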
for(int i = 0; i < number.length(); i++) {
int j = Character.digit(number.charAt(i), 10);
infoGame.add(numslvl[j]);
numslvl[j].setVisible(true);
}
infoGame.add(pointsimg);
number = String.valueOf(player.points);
for(int i = 0; i < number.length(); i++) {
int j = Character.digit(number.charAt(i), 10);
infoGame.add(numspoints[j]);
numspoints[j].setVisible(true);
}
for (int i=0;i<numslives.length;i++) {
if (i>=player.lives) {
numslives[i].setVisible(false);
}
}
repaint();
}
}
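// Circle-circle overlap test: true when centers (w, x) and (y, z) lie within r1 + r2 of each other.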
public boolean dist(int w, int x, int y, int z, int r1, int r2) {
return Math.pow(w-y, 2) + Math.pow(x-z, 2) <= Math.pow(r1+r2,2);
}
public void startgame() {
cards.show(window, "game");
this.validate();
this.repaint();
game.requestFocus();
timer.start();
}
public void reset() {
player.lives = 3;
player.points = 0;
player.xvel = 0;
player.yvel = 0;
out.clear();
bullets.clear();
outbool.clear();
levels.clear();
for(int i=1; i<20; i++) {
levels.put(i, new Stack<Sprite>());
for (int j=0;j<i*2+5;j++) {
levels.get(i).push(new Enemy("enemy.png", random.nextInt(900), 10, 15, random.nextInt(10), random.nextInt(10), 5*i, 3*i));
}
}
lvl = 1;
while (!levels.get(lvl).isEmpty()) {
out.add(levels.get(lvl).pop());
}
for (int i=0; i<numslives.length;i++) {
livespanel.add(numslives[i]);
numslives[i].setVisible(true);
}
}
public void endgame() {
timer.stop();
cards.show(window, "end");
endlevel.removeAll();
endlevel.add(levelimg);
String number = String.valueOf(lvl);
for(int i = 0; i < number.length(); i++) {
int j = Character.digit(number.charAt(i), 10);
endlevel.add(numslvl[j]);
numslvl[j].setVisible(true);
}
endpoints.removeAll();
endpoints.add(pointsimg);
number = String.valueOf(player.points);
for(int i = 0; i < number.length(); i++) {
int j = Character.digit(number.charAt(i), 10);
endpoints.add(numspoints[j]);
numspoints[j].setVisible(true);
}
this.validate();
this.repaint();
}
public class startAL implements ActionListener {
public void actionPerformed(ActionEvent e) {
startgame();
}
}
public class endAL implements ActionListener {
public void actionPerformed(ActionEvent e) {
reset();
startgame();
}
}
public class TL implements ActionListener {
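// Timer callback (every 20 ms): moves sprites and bullets, resolves collisions,
// advances the level when all enemies are gone, and ends the game when lives run out.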
int count = 0;
int riscount = 0;
boolean tmp = false;
public void actionPerformed(ActionEvent e) {
for (int i=0;i<out.size();i++) {
out.get(i).move();
if (out.get(i).xpos + out.get(i).xvel >= game.getWidth()-5|| out.get(i).xpos + out.get(i).xvel <= 0) {
out.get(i).xvel = (-1*out.get(i).xvel);
}
if (out.get(i).ypos + out.get(i).yvel >= game.getHeight()-200 || out.get(i).ypos + out.get(i).yvel <= 0) {
out.get(i).yvel = (-1*out.get(i).yvel);
}
if (out.get(i).xpos>game.getWidth()) {
out.get(i).xpos = 990;
out.get(i).ypos = 20;
}
if (!rising) {
if(dist(player.xpos, player.ypos, out.get(i).xpos, out.get(i).ypos, player.rad, out.get(i).rad)){
player.die(500, 450);
rising = true;
riscount = 0;
player.addPoint();
out.remove(i);
i--; // compensate for the removal so the next sprite is not skipped
continue;
}
}
if(count>100) {
bullets.add(((Enemy) out.get(i)).shoot());
outbool.add(true);
tmp = true;
}
}
if (tmp) {count = 0;tmp = false;}
for (int i=0;i<bullets.size();i++) {
bullets.get(i).move();
if(!bullets.get(i).side) {
if (!rising) {
if(dist(bullets.get(i).xpos, bullets.get(i).ypos, player.xpos, player.ypos, bullets.get(i).rad, player.rad)) {
player.die(500, 450);
rising = true;
riscount = 0;
outbool.set(i, false);
}
}
} else {
for (int j=0;j<out.size();j++) {
if(dist(bullets.get(i).xpos, bullets.get(i).ypos, out.get(j).xpos+out.get(j).rad, out.get(j).ypos, bullets.get(i).rad, out.get(j).rad)){
out.remove(j);
player.addPoint();
outbool.set(i, false);
break; // this bullet is spent; stop checking the remaining enemies
}
}
if (bullets.get(i).ypos >= game.getHeight() || bullets.get(i).ypos <= 0) {
outbool.set(i, false);
}
}
}
for (int i=outbool.size()-1; i>=0; i--) { // iterate backwards so removals do not skip entries
if (!outbool.get(i)) {
bullets.remove(i);
outbool.remove(i);
}
}
if(out.size() == 0) {
lvl++;
Stack<Sprite> next = levels.get(lvl);
while (next != null && !next.isEmpty()) {
out.add(next.pop());
}
}
if (player.lives<0) {
endgame();
}
if (rising) {
riscount++;
}
if (riscount>100) {
rising = false;
}
player.move();
count++;
repaint();
}
}
public class KL implements KeyListener {
public void keyPressed(KeyEvent e) {
int keyCode = e.getKeyCode();
switch( keyCode ) {
case KeyEvent.VK_UP:
player.yvel = -10;
break;
case KeyEvent.VK_DOWN:
player.yvel = 10;
break;
case KeyEvent.VK_LEFT:
player.xvel= -10;
break;
case KeyEvent.VK_RIGHT:
player.xvel = 10;
break;
}
}
public void keyReleased(KeyEvent e) {
int keyCode = e.getKeyCode();
switch( keyCode ) {
case KeyEvent.VK_UP:
player.yvel = 0;
break;
case KeyEvent.VK_DOWN:
player.yvel = 0;
break;
case KeyEvent.VK_LEFT:
player.xvel= 0;
break;
case KeyEvent.VK_RIGHT:
player.xvel = 0;
break;
case KeyEvent.VK_P:
if(pause%2==0) {
pauze = true;
timer.stop();
}
if(pause%2==1) {
pauze = false;
timer.start();
}
pause++;
break;
case KeyEvent.VK_SPACE:
bullets.add(player.shoot());
outbool.add(true);
break;
}
}
public void keyTyped(KeyEvent e) {}
}
public static void main(String[] args) {
Main m = new Main();
m.createGUI();
}
}
|
Modern ultrasound technique, as in other specialities, has provided the neurosurgeon with an adjuvant which may be employed intraoperatively and which supplements the neuroradiological methods of investigation. The technique gives the surgeon the possibility of directly visualizing, localizing and characterizing pathological and normal structures intracranially and intraspinally by scanning directly on the surfaces of the dura, brain or medulla. It also provides a good basis for planning and controlling the extent of tumour resection. By means of a stand with cannula guidance, tissue biopsies may be obtained with great certainty, and cysts and abscesses may be punctured and evacuated. Placement of a drain in the ventricular system or in cysts may be carried out precisely and documented under ultrasonic guidance. The technique is employed in approximately one third of the total number of operations in a department of neurosurgery. It is simple to use, reasonably inexpensive, incurs no additional running costs and may, in addition, be employed as an alternative to CT in neonatal diagnosis and, in particular, in the control of hydrocephalus by scanning via the fontanelles.
|
Government-business relationships through partnerships for sustainable development: the Green Network in Denmark Abstract Moving from largely command-and-control measures in the 1970s and 1980s, through cleaner production and self-regulatory initiatives in the 1990s, the emphasis is increasingly on using networks and partnerships between private firms, NGOs, government and civil society as levers for promoting a greening of industry. In terms of public-private partnerships, one of the foremost Danish initiatives is the Green Network in the county of Vejle. This initiative currently involves more than 250 companies and ten public bodies. The network started in 1994 and has grown in size and importance ever since. Fundamentally, it aims at providing new forms of co-operation between public authorities and private companies. The vehicle for this was initially a voluntary environmental statement by companies who wished to become members. With the passing of time, however, the demands and pressures on both companies and public bodies have increased. Hence, the tools and means employed, outside as well as inside the network, have developed accordingly. In this paper, a distinct partnership mode of government-business relationships, a collaborative network built on respect, trust and mutual legitimacy, is discussed and related to the Green Network way of doing things. The conclusion is that through dialogue, reflexivity and the establishment of an enabling environment, public-private partnerships can become useful vehicles in society's move towards sustainability.
|
#pragma once
#ifndef PROCESSEDDB_HPP
#define PROCESSEDDB_HPP
#include <QString>
#include <QVariant>
#include <QMap>
#include <QVector>
namespace qtreports
{
namespace detail
{
/*!
@brief Class used to store the results of executed queries and parameters.
@note All functions of this class that return bool
return true on success, or false if errors occurred
during execution. The description of the last error
can be obtained with the getError() method.
This class defines the storage structure for the results of executed
database queries and parameters.
*/
class ProcessedDB {
public:
ProcessedDB();
~ProcessedDB();
/*!
Adds a parameter to the parameter list.
@param[in] name Parameter name
@param[in] value Parameter value
*/
void setParameter( const QString & name, const QVariant & value );
/*!
Sets the parameter map.
@param[in] map Parameter map.
*/
void setParameters( const QMap< QString, QVariant > & map );
/*!
Returns a parameter from the script execution result.
@param[in] name Parameter name
*/
const QVariant getParameter( const QString & name ) const;
/*!
Returns the parameter map.
*/
const QMap< QString, QVariant > getParameters() const;
/*!
Appends data to the corresponding column.
@param[in] columnName Field name
@param[in] data Field value
*/
void appendColumnData( const QString & columnName, const QVariant & data );
/*!
Appends data to the corresponding column.
@param[in] column Field number.
@param[in] data Field value.
*/
void appendColumnData( int column, const QVariant & data );
/*!
Appends data to the corresponding column.
@param[in] columnName Field name.
@param[in] data Set of field values.
*/
void appendColumnData( const QString & columnName, const QVector< QVariant > & data );
/*!
Appends data to the corresponding column.
@param[in] column Field number.
@param[in] data Set of field values.
*/
void appendColumnData( int column, const QVector< QVariant > & data );
/*!
Adds a column.
@param[in] name Column name.
*/
void addEmptyColumn( const QString & name );
/*!
Adds a column.
@param[in] column Column number
*/
void addEmptyColumn( int column );
/*!
Returns a column of data.
@param[in] name Column name.
*/
const QVector< QVariant > getColumn( const QString & name ) const;
/*!
Returns an entire column.
@param[in] col Column number.
*/
const QVector< QVariant > getColumn( int col ) const;
/*!
Returns the column name.
@param[in] column Column number
*/
QString getColumnName( int column ) const;
/*!
Returns the index of a column by name. Returns -1
if the column does not exist.
@param[in] name Column name.
*/
int getColumnIndex( const QString & name ) const;
/*!
Checks whether a column with the given name exists.
@param[in] name Column name.
*/
bool columnIsExists( const QString & name ) const;
/*!
Checks whether a column with the given number exists.
@param[in] column Column number.
*/
bool columnIsExists( int column ) const;
/*!
Returns the field value.
@param[in] columnName Column name
@param[in] row Row number
*/
const QVariant getFieldData( const QString & columnName, int row ) const;
/*!
Returns the field value.
@param[in] column Column number
@param[in] row Row number
*/
const QVariant getFieldData( int column, int row ) const;
/*!
Returns the number of records in a column.
@param[in] columnName Column name.
*/
int getRowCount( const QString & columnName ) const;
/*!
Returns the number of records in a column.
@param[in] column Column number.
*/
int getRowCount( int column ) const;
/*!
Returns the maximum number of records across the set of columns.
*/
int getMaxRowCount() const;
/*!
Returns the error string.
*/
const QString getError() const;
private:
QMap< QString, QVector< QVariant > > m_columnsSet;
QMap< QString, QVariant > m_parametersMap;
QString m_errorString;
};
}
}
#endif // PROCESSEDDB_HPP
|
Evaluation of depth perception in elderly people with Parkinson's disease This study investigates the difference in depth perception between healthy elderly people and patients with Parkinson's disease (PD). The study manipulated the depth cues (overlapping and perspective), the presence of reference material (with and without), and the size of the vehicles shown (motorcycles, cars and trucks). Subjects were required to judge the difference in the distance of the objects in a picture, in order to analyze the difference in spatial depth perception between the two groups. Twenty-four PD patients and twenty-four control participants were recruited for this study. The results showed that the perspective accuracy of the PD patients was significantly weaker than that of the control group, but there was no significant difference in the accuracy of overlapping judgments. The study also found an interesting result: when the front and rear vehicles differed in size, for example a car in front and a motorcycle behind, the distance judgment worsened in both the PD patients and the control group, but more severely in the former. In daily life, vehicles of different sizes are mixed on public roads; it is not possible to have just a single type of vehicle on the road. Hence, PD patients should pay more attention to approaching vehicles.
|
from typing import Tuple
import numpy as np
import sklearn
import sklearn.utils.multiclass
import sklearn.linear_model
from sklearn.preprocessing import StandardScaler
def diag_indices_with_offset(p, offset):
idxdiag = np.diag_indices(p)
idxdiag_with_offset = list()
idxdiag_with_offset.append(np.array([i + offset for i in idxdiag[0]]))
idxdiag_with_offset.append(np.array([i + offset for i in idxdiag[1]]))
return tuple(idxdiag_with_offset)
def _shrinkage(X: np.ndarray, gamma=None, T=None, S=None, block=False,
N_channels=31, N_times=5, standardize=True) -> Tuple[np.ndarray, float]:
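"""Ledoit-Wolf-style shrinkage: returns C* = (gamma*T + (1 - gamma)*S) / (n - 1) together with the shrinkage intensity gamma."""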
p, n = X.shape
if standardize:
sc = StandardScaler() # standardize features
X = sc.fit_transform(X.T).T
Xn = X - np.repeat(np.mean(X, axis=1, keepdims=True), n, axis=1)
if S is None:
S = np.matmul(Xn, Xn.T)
Xn2 = np.square(Xn)
idxdiag = np.diag_indices(p)
# Target = B
nu = np.mean(S[idxdiag])
if T is None:
if block:
nu = list()
for i in range(N_times):
idxblock = diag_indices_with_offset(N_channels, i*N_channels)
nu.append([np.mean(S[idxblock])] * N_channels)
nu = [sl for l in nu for sl in l]
T = np.diag(np.array(nu))
else:
T = nu * np.eye(p, p)
# <NAME>
V = 1. / (n - 1) * (np.matmul(Xn2, Xn2.T) - np.square(S) / n)
if gamma is None:
gamma = n * np.sum(V) / np.sum(np.square(S - T))
if gamma > 1:
print("warning: forcing gamma to 1")
gamma = 1
elif gamma < 0:
print("warning: forcing gamma to 0")
gamma = 0
Cstar = (gamma * T + (1 - gamma) * S) / (n - 1)
if standardize: # scale back
Cstar = sc.scale_[np.newaxis, :] * Cstar * sc.scale_[:, np.newaxis]
return Cstar, gamma
class ShrinkageLinearDiscriminantAnalysis(
sklearn.base.BaseEstimator,
sklearn.linear_model._base.LinearClassifierMixin):
def __init__(self, priors=None, only_block=False, N_times=5, N_channels=31, pool_cov=True, standardize_shrink=True):
self.only_block = only_block
self.priors = priors
self.N_times = N_times
self.N_channels = N_channels
self.pool_cov = pool_cov
self.standardize_shrink = standardize_shrink
def fit(self, X_train, y):
self.classes_ = sklearn.utils.multiclass.unique_labels(y)
if set(self.classes_) != {0, 1}:
raise ValueError('currently only binary class supported')
assert len(X_train) == len(y)
xTr = X_train.T
n_classes = 2
if self.priors is None:
# here we deviate from the bbci implementation and
# use the sample priors by default
_, y_t = np.unique(y, return_inverse=True) # non-negative ints
priors = np.bincount(y_t) / float(len(y))
# self.priors = np.array([1./n_classes] * n_classes)
else:
priors = self.priors
X, cl_mean = subtract_classwise_means(xTr, y)
if self.pool_cov:
C_cov, C_gamma = _shrinkage(X, N_channels=self.N_channels, N_times=self.N_times,
standardize=self.standardize_shrink)
else:
n_classes = 2
C_cov = np.zeros((xTr.shape[0], xTr.shape[0]))
for cur_class in range(n_classes):
class_idxs = y == cur_class
x_slice = X[:, class_idxs]
C_cov += priors[cur_class] * _shrinkage(x_slice)[0]
if self.only_block:
C_cov_new = np.zeros_like(C_cov)
for i in range(self.N_times):
idx_start = i * self.N_channels
idx_end = idx_start + self.N_channels
C_cov_new[idx_start:idx_end, idx_start:idx_end] = C_cov[idx_start:idx_end, idx_start:idx_end]
C_cov = C_cov_new
C_invcov = np.linalg.pinv(C_cov)
# w = np.matmul(C_invcov, cl_mean)
w = np.linalg.lstsq(C_cov, cl_mean, rcond=None)[0]
b = -0.5 * np.sum(cl_mean * w, axis=0).T + np.log(priors)
if n_classes == 2:
w = w[:, 1] - w[:, 0]
b = b[1] - b[0]
self.coef_ = w.reshape((1, -1))
self.intercept_ = b
return self
def predict_proba(self, X):
"""Estimate probability.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Input data.
Returns
-------
C : array, shape (n_samples, n_classes)
Estimated probabilities.
"""
prob = self.decision_function(X)
prob *= -1
np.exp(prob, prob)
prob += 1
np.reciprocal(prob, prob)
return np.column_stack([1 - prob, prob])
def predict_log_proba(self, X):
"""Estimate log probability.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Input data.
Returns
-------
C : array, shape (n_samples, n_classes)
Estimated log probabilities.
"""
return np.log(self.predict_proba(X))
class TimeDecoupledLda(
ShrinkageLinearDiscriminantAnalysis):
"""shrinkage LdaClasswiseCovs with enhancement technique for ERP classification
Parameters
----------
inverted : bool (default: False)
If you want to estimate and change the diagonal blocks before
inverting the covariance matrix.
plot: bool (default: False)
If you want to plot the original covariance matrix,
the new diagonal box and the new matrix.
"""
def __init__(self, priors=None, N_times=5, N_channels=31, standardize_featurestd=False, preproc=None,
standardize_shrink=True, channel_gamma=None):
self.priors = priors
self.N_times = N_times
self.N_channels = N_channels
self.standardize_featurestd = standardize_featurestd
self.standardize_shrink = standardize_shrink
self.channel_gamma = channel_gamma
self.preproc = preproc # This is needed to obtain time interval standardization factors from vectorizer
def fit(self, X_train, y):
self.classes_ = sklearn.utils.multiclass.unique_labels(y)
if set(self.classes_) != {0, 1}:
raise ValueError('currently only binary class supported')
assert len(X_train) == len(y)
xTr = X_train.T
if self.priors is None:
# here we deviate from the bbci implementation and
# use the sample priors by default
_, y_t = np.unique(y, return_inverse=True) # non-negative ints
priors = np.bincount(y_t) / float(len(y))
else:
priors = self.priors
X, cl_mean = subtract_classwise_means(xTr, y) # outsourced to method
C_cov, C_gamma = _shrinkage(X, N_channels=self.N_channels, N_times=self.N_times,
standardize=self.standardize_shrink)
C_cov = change_diagonal_entries(C_cov, xTr, y, inverted=False, N_times=self.N_times,
N_channels=self.N_channels, standardize=self.standardize_featurestd,
jumping_means_ivals=self.preproc.jumping_mean_ivals,
channel_gamma=self.channel_gamma)
w = np.linalg.lstsq(C_cov, cl_mean, rcond=None)[0]
b = -0.5 * np.sum(cl_mean * w, axis=0).T + np.log(priors)
w = w[:, 1] - w[:, 0]
b = b[1] - b[0]
self.coef_ = w.reshape((1, -1))
self.intercept_ = b
return self
def subtract_classwise_means(xTr, y):
n_classes = 2
n_features = xTr.shape[0]
X = np.zeros((n_features, 0))
cl_mean = np.zeros((n_features, n_classes))
for cur_class in range(n_classes):
class_idxs = y == cur_class
cl_mean[:, cur_class] = np.mean(xTr[:, class_idxs], axis=1)
X = np.concatenate([
X,
xTr[:, class_idxs] - np.dot(cl_mean[:, cur_class].reshape(-1, 1),
np.ones((1, np.sum(class_idxs))))],
axis=1)
return X, cl_mean
def change_diagonal_entries(S, xTr, y, inverted=False, N_times=5, N_channels=31, standardize=False,
jumping_means_ivals=None, channel_gamma=None):
# compute sigma_c
# information about time not relevant
if standardize:
if jumping_means_ivals is not None:
num_samples = ((np.diff(np.array(jumping_means_ivals))+0.001)/0.01).squeeze()
factors = np.sqrt(num_samples / np.min(num_samples))
for ti in range(N_times):
start_i = N_channels * ti
end_i = start_i + N_channels
xTr[start_i:end_i, :] *= factors[ti]
xTr_meanfree, class_means = subtract_classwise_means(xTr, y)
X_long_slim = xTr_meanfree.reshape((N_channels, -1), order='F')
sigma_c_ref, gamma_c = _shrinkage(X_long_slim, N_channels=N_channels, N_times=N_times, standardize=True,
gamma=channel_gamma)
sigma_c = np.linalg.pinv(sigma_c_ref) if inverted else sigma_c_ref
sign_sigma_c, slogdet_sigma_c = np.linalg.slogdet(sigma_c)
logdet_sigma_c = slogdet_sigma_c * sign_sigma_c
# compute scalar to scale sigma_c and change diagonal boxes of cov.matrix
S_new = np.copy(S)
for i in range(N_times):
idx_start = i*N_channels
idx_end = idx_start + N_channels
sigma_block = S[idx_start:idx_end, idx_start:idx_end]
sign_sigma_block, slogdet_sigma_block = np.linalg.slogdet(sigma_block)
logdet_sigma_block = slogdet_sigma_block * sign_sigma_block
scalar_via_determinant = np.exp(logdet_sigma_block - logdet_sigma_c)**(1.0/S[idx_start:idx_end, idx_start:idx_end].shape[1])
# scaling factors below 1 are applied unchanged
S_new[idx_start:idx_end, idx_start:idx_end] = sigma_c * scalar_via_determinant # * scaling_factor
S = S_new
if np.any(np.isnan(S)) or np.any(np.isinf(S)):
raise OverflowError('Diagonal-block covariance matrix is not numeric.')
return S
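# A minimal usage sketch, assuming epochs are flattened to n_samples x (N_channels * N_times)
# feature vectors; the data here are synthetic and purely illustrative.
if __name__ == "__main__":
    X = np.random.randn(200, 155)  # e.g., 31 channels x 5 time windows, flattened
    y = np.array([0, 1] * 100)     # binary labels, as required by fit()
    clf = ShrinkageLinearDiscriminantAnalysis(N_channels=31, N_times=5)
    clf.fit(X, y)
    print(clf.predict_proba(X[:3]))  # class probabilities for three epochs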
|
// nacid/src/com/nacid/web/taglib/nomenclatures/SpecialityTag.java
package com.nacid.web.taglib.nomenclatures;
import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.tagext.SimpleTagSupport;
import com.nacid.bl.impl.Utils;
import com.nacid.data.DataConverter;
import com.nacid.web.WebKeys;
import com.nacid.web.model.nomenclatures.SpecialityWebModel;
public class SpecialityTag extends SimpleTagSupport {
public void doTag() throws JspException, IOException {
SpecialityWebModel webmodel = (SpecialityWebModel) getJspContext().getAttribute( WebKeys.SPECIALITY, PageContext.REQUEST_SCOPE);
if (webmodel != null) {
getJspContext().setAttribute("id", webmodel.getId());
getJspContext().setAttribute("name", webmodel.getName());
getJspContext().setAttribute("dateFrom", webmodel.getDateFrom());
getJspContext().setAttribute("dateTo", webmodel.getDateTo());
} else {
getJspContext().setAttribute("id", "");
getJspContext().setAttribute("name", "");
getJspContext().setAttribute("dateFrom", DataConverter.formatDate(Utils.getToday()));
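// "дд.мм.гггг" is a localized "dd.mm.yyyy" date placeholder shown when no end date is set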
getJspContext().setAttribute("dateTo", "дд.мм.гггг");
}
getJspBody().invoke(null);
}
}
|
#include "Renderer.hpp"
Renderer::Renderer(const Window &window, bool software)
{
int flags = RENDERER_CONTEXT;
if (software)
flags |= SDL_RENDERER_SOFTWARE;
mRenderer = SDL_CreateRenderer(window.mWindow,
RENDERER_INDEX,
flags);
}
Renderer::~Renderer()
{
SDL_DestroyRenderer(mRenderer);
}
void Renderer::clear()
{
clear(0, 0, 0);
}
void Renderer::clear(char red, char green, char blue)
{
// Set the draw color first so SDL_RenderClear fills the whole target with it.
SDL_SetRenderDrawColor(mRenderer, red, green, blue, 255);
SDL_RenderClear(mRenderer);
}
void Renderer::display()
{
SDL_RenderPresent(mRenderer);
}
void Renderer::draw(const Drawable &drawable)
{
drawable.draw(*this);
}
////////////////////////////////////////////////
// Getters.
////////////////////////////////////////////////
Camera Renderer::getCamera() const
{
return mCamera;
}
////////////////////////////////////////////////
// Setters.
////////////////////////////////////////////////
void Renderer::setCamera(const Camera &camera)
{
mCamera = camera;
}
void Renderer::setVsync(bool state)
{
if (state)
SDL_GL_SetSwapInterval(1);
else
SDL_GL_SetSwapInterval(0);
}
|
import * as fs from 'fs';
import * as should from 'should';
import * as Bluebird from 'bluebird';
Bluebird.config({
cancellation: true,
});
global.Promise = Bluebird;
import {Model} from '@process-engine/process_model.contracts';
import {BpmnModelParser} from '../src/model/bpmn_model_parser';
import {ProcessModelFacade} from '../src/runtime';
export class TestFixtureProvider {
private _parser: BpmnModelParser;
public async initialize(): Promise<void> {
this._parser = new BpmnModelParser();
await this._parser.initialize();
}
public async parseProcessModelFromFile(bpmnFilename: string): Promise<Model.Process> {
const bpmnXml: string = fs.readFileSync(bpmnFilename, 'utf8');
const definitions: Model.Definitions = await this._parser.parseXmlToObjectModel(bpmnXml);
return definitions.processes[0];
}
public createProcessModelFacade(processModel: Model.Process): ProcessModelFacade {
return new ProcessModelFacade(processModel);
}
public async assertThatProcessModelHasFlowNodes(processModel: Model.Process, expectedFlowNodeIds: Array<string>): Promise<void> {
for (const flowNodeId of expectedFlowNodeIds) {
const flowNodeFound: boolean = processModel.flowNodes.some((flowNode: Model.Base.FlowNode) => flowNode.id === flowNodeId);
should(flowNodeFound).be.true(`Failed to locate FlowNode '${flowNodeId}' in ProcessModel ${processModel.id}!`);
}
}
}
|
# src/app.py
from flask import Flask, render_template, request, redirect, url_for, make_response
from flask_awscognito import AWSCognitoAuthentication
from flask_cors import CORS
from jwt.algorithms import RSAAlgorithm
from flask_jwt_extended import (
JWTManager,
set_access_cookies,
verify_jwt_in_request_optional,
get_jwt_identity,
)
from src.keys import get_cognito_public_keys
app = Flask(__name__, static_url_path="", static_folder="src/static")
app.config.from_object("src.config")
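# Cognito publishes its signing keys as a JWKS; converting it to an RSA public key
# lets flask-jwt-extended verify Cognito-issued tokens.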
app.config["JWT_PUBLIC_KEY"] = RSAAlgorithm.from_jwk(get_cognito_public_keys())
CORS(app)
aws_auth = AWSCognitoAuthentication(app)
jwt = JWTManager(app)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/login", methods=["GET", "POST"])
def login():
return redirect(aws_auth.get_sign_in_url())
@app.route("/loggedin", methods=["GET"])
def logged_in():
access_token = aws_auth.get_access_token(request.args)
resp = make_response(redirect(url_for("protected")))
set_access_cookies(resp, access_token, max_age=30 * 60)
return resp
@app.route("/secret")
def protected():
verify_jwt_in_request_optional()
if get_jwt_identity():
return render_template("secret.html")
else:
return redirect(aws_auth.get_sign_in_url())
|
// repo: JessenPan/leetcode-java
package org.jessenpan.leetcode.bit;
/**
* @author jessenpan
* tag:bit
*/
public class S201BitwiseAndOfNumbersRange {
public int rangeBitwiseAnd(int m, int n) {
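// Right-shift both endpoints until they agree; the surviving common prefix
// (shifted back) is the AND of every value in [m, n].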
int count = 0;
while (m != n) {
m >>= 1;
n >>= 1;
count++;
}
return m << count;
}
}
|
The Effect of Massage Stimulation on Anthropometric Measures of Preschool-Aged Children at Integrated PAUD Merpati Anggrek Surabaya Massage stimulation is a traditional Indonesian therapy that combines auditory, visual and tactile-kinesthetic stimuli and can be given from early childhood onwards without an upper age limit. Preschoolers not only need education and protection but also stimulation for their growth and development. Massage stimulation for preschoolers, in addition to relaxing the child, reduces stress, increases immunity, stimulates the vagus nerve, and promotes growth by stimulating cell growth. This research aims to determine the effect of massage stimulation on the anthropometric measures of preschoolers. The research method uses a true experimental design with pre- and post-tests and a control group. The sample comprised 25 respondents from the integrated early childhood education program (PAUD) Anggrek Merpati, divided into two groups: 13 children in the massage stimulation group and 12 children in the control group. Data were analyzed using the paired-samples t-test and the independent-samples t-test. The results showed a significant value of p = 0.000 (p < 0.05) in the treatment group, but between the two groups there was no difference in the anthropometric measures of height and head circumference (p > 0.05). In conclusion, massage stimulation had an effect on body weight, height, upper arm circumference and head circumference, although no between-group differences were found for height and head circumference. The researchers suggest further case studies with a larger sample size, so that massage in preschoolers can be used as a reference for growth stimulation in stunting prevention.
|
Semantic MDA for E-Government Service Development This paper presents an approach that applies the principles of Model Driven Architecture (MDA), combined with a semantic model, to the creation of e-government services. As a result, these services become available as semantic web services that can be searched to meet the current goal or desire of a citizen. For this approach to be successful, it is important that all required artifacts are generated automatically from the model and that there is no need for manual coding. This ensures short development cycles and fast, easy adaptation to shifting requirements. Besides this, the quality and capabilities of the services provided are significantly improved.
|
#https://codeforces.com/problemset/problem/617/A
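# An elephant covers 1 to 5 positions per step, so the answer is ceil(n / 5); (r > 0) adds one step for any remainder.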
import sys
n = int(sys.stdin.readline())
k = n//5
r = n%5
print(k + (r > 0))
|
describe('Meta UI Test Suite', () => {
test.todo("meta ui test");
});
|
Hybrid Encryption Technique using Cyclic Bit Shift and RC4 Data security is an important aspect when data is sent over public networks such as the internet. Data can be secured using cryptographic techniques. Among symmetric cryptographic techniques there are permutation and stream techniques, each with its own weaknesses and strengths. This research combines the two into a hybrid technique to obtain stronger encryption. The proposed permutation technique is a cyclic bit shift, while the proposed stream technique is RC4. The combination of the two methods is then evaluated using the avalanche effect (AE), the bit error ratio (BER), the time required for encryption and decryption, and the character error rate (CER). Based on the test results, the proposed method is superior to the previously proposed individual methods.
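The abstract does not give the exact construction, but the layering it describes, a bitwise permutation stage followed by an RC4 stream stage, can be sketched in Python roughly as follows; the per-byte 3-bit left rotation, the key handling, and the function names are illustrative assumptions, not the authors' exact scheme.

def rc4_keystream(key: bytes):
    # RC4 key-scheduling algorithm (KSA) followed by the PRGA keystream generator.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def rotl8(b: int, k: int) -> int:
    # Cyclic left shift of one byte by k bits (valid for 0 < k < 8).
    return ((b << k) | (b >> (8 - k))) & 0xFF

def hybrid_encrypt(plaintext: bytes, key: bytes, shift: int = 3) -> bytes:
    # Permutation stage (cyclic bit shift) followed by the RC4 stream stage.
    ks = rc4_keystream(key)
    return bytes(rotl8(b, shift) ^ next(ks) for b in plaintext)

def hybrid_decrypt(ciphertext: bytes, key: bytes, shift: int = 3) -> bytes:
    # Invert the stages in reverse order: remove the keystream, then rotate back.
    ks = rc4_keystream(key)
    return bytes(rotl8(c ^ next(ks), 8 - shift) for c in ciphertext)

Because both stages are invertible, decryption simply undoes them in reverse order, which is why the decrypt routine XORs with the keystream before rotating each byte back.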
|
Methods for Integrating Extraterrestrial Radiation into Neural Network Models for Day-Ahead PV Generation Forecasting

Variability, intermittency, and limited controllability are inherent characteristics of photovoltaic (PV) generation that result in inaccurate solutions to scheduling problems and instability in the power grid. As the penetration level of PV generation increases, it becomes more important to mitigate these problems by improving forecasting accuracy. One alternative for improving forecasting performance is to include a seasonal component. This study therefore proposes using information on extraterrestrial radiation (ETR), the solar radiation outside of the atmosphere, in neural network models for day-ahead PV generation forecasting. Specifically, five methods for integrating the ETR into the neural network models are presented: division preprocessing, multiplication preprocessing, replacement of existing input, inclusion as additional input, and inclusion as an intermediate target. The methods were tested on two datasets in Australia using four neural network models: the multilayer perceptron and three recurrent neural network (RNN)-based models, namely the vanilla RNN, long short-term memory, and the gated recurrent unit. It was found that, among the integration methods, including the ETR as an intermediate target improved the mean squared error by 4.1% on average, and by up to 12.28% in the RNN-based models. These results verify that integrating the ETR into neural-network-based PV forecasting models can improve forecasting performance.

1. Introduction

Recently, the use of renewable energy sources (RES) for reducing greenhouse gases and the consequent sustainable development has come to be considered inevitable. The integration of RES has been successful to some extent, as various countries have actively implemented policies for wide integration, such as feed-in tariffs and renewable portfolio standards, and the levelized cost of energy (LCOE) of RES has decreased through technological developments. The International Renewable Energy Agency (IRENA) reported that photovoltaic (PV) power generation tripled from 250 TWh in 2015 to 720 TWh in 2019. Additionally, an International Energy Agency (IEA) sustainable development scenario projects that approximately 3268 TWh will be produced by PV generation by 2030. However, such an increase in PV generation causes instability in power systems due to its variability, intermittency, and limited controllability. Specifically, from the perspective of transmission system operators (TSOs), the unfavorable characteristics of RES, including PV generation, can exacerbate the imbalance between power supply and demand and make power system planning and operation more difficult. To address such problems, TSOs need to secure a significant number of flexible resources, which is then followed by an increase in the electricity bills of customers. The threat to power systems is ongoing because of the RES's unlimited access to the grid. For example, electricity consumption is frequently settled at a negative price in Germany, where 32.5% of gross power production is generated by wind and solar.
The demand of the power system minus the generation of the RESs is called the net demand; like ordinary demand, this generation cannot be curtailed because of the RES's unlimited access to the grid. Negative prices arise because a surge in RES supply can sharply reduce the net demand and push the power system into a condition of oversupply, requiring inflexible generators with high ramp-up and ramp-down costs to keep generating even at negative electricity prices. A negative electricity price therefore indicates a severe condition of a power system, because its occurrence means the system cannot remain in balance using flexible resources alone. Furthermore, from the perspective of distribution system operators (DSOs), a high penetration of PV generation results in the need for investment in various alternative resources to achieve stable power system operation by addressing problems such as voltage fluctuation, increased network losses, and feeder overloading in the distribution network. Increasing the forecasting accuracy of PV generation is one of the simplest and most economical solutions to such problems because it can incentivize balance-responsible parties (BRPs) to reduce the penalty caused by the imbalance between scheduled and actual PV generation. Ultra-short-term forecasting is used in techniques such as power smoothing and real-time electricity dispatch. Short-term forecasting is useful for power scheduling tasks, such as unit commitment and economic power dispatch, and can also be used in PV-integrated energy management systems. In contrast, medium-term and long-term forecasting are effectively used in power system planning, where network optimization is performed and investment decisions are made. PV forecasting can be classified, according to its methodology, into physical methods, statistical approaches, and machine learning approaches. In physical methods, the radiant energy reaching the Earth's surface is determined using a physical model of the atmospheric processes affecting solar radiation. In Dolara et al., the PV output was forecasted using an irradiance model considering the transmittance of the atmosphere and an air-glass optical model. In statistical approaches, PV generation is forecasted through the statistical analysis of input variables. For instance, multi-period PV generation was forecasted using autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models in Colak et al., and an autoregressive moving average with exogenous inputs (ARMAX) model was applied to statistical PV forecasting by including weather forecasting data in Li et al. In machine learning approaches, a forecasting model is trained by updating the model parameters based on existing data. Machine learning has wide applications, such as in dynamical systems with memory and in health monitoring. Various machine learning models have been applied to forecasting PV generation, such as support vector machines (SVMs), Bayesian neural networks, and RNN-based models such as long short-term memory (LSTM) and the gated recurrent unit (GRU). There have also been attempts to separate the components of a forecasting target in time series forecasting. In Keles et al., spot prices in the electricity market were forecasted by separating them into deterministic and stochastic signals.
In Zhu et al., Zang et al., and Li et al., time series data were decomposed using wavelet decomposition, and the decomposed signals were then used to train a neural network. Similarly, time series data of PV generation can be regarded as having seasonal and temporal components. The seasonal component results from changes in the altitude of the Sun due to the Earth's orbit and rotation, while various stochastic factors, such as weather conditions, are the source of the temporal component. On the other hand, there have been studies in which PV generation data from similar dates during a year are combined to forecast the output. As generation on similar dates has the seasonal component in common, combining such data leads to more accurate forecasting of that component; thus, a forecasting method using the seasonal component is likely to show good forecasting performance. PV generation data from adjacent days were used in the convolutional neural network (CNN) and the time correlation method. In Li et al., the seasonal component was forecast using the weighted sum of two outputs: one from a CNN with inputs of PV generation on adjacent days, and one from an LSTM with inputs of total PV generation over one day. The forecasting error can also be reduced by indirectly considering seasonal changes through the regularizing effect of an ensemble model. In Wen et al., an ensemble of a back-propagation neural network, a radial basis function neural network, an extreme learning machine, and an Elman neural network was used for forecasting the seasonal component of PV generation, and in Gigoni et al., the combination of a grey-box model, a neural network, k-nearest neighbors, quantile random forest, and support vector regression was the best in terms of forecasting accuracy. Table 1 summarizes prior studies on PV generation forecasting by method.

Table 1. Summary of existing PV generation forecasting methods.

Method | Highlights
Physical | Models the physical state and dynamics of the atmosphere.
Statistical | Generates forecasting data with appropriate statistical assumptions.

In contrast to the approaches used in previous studies, the seasonal component can be determined more accurately by direct geometric modeling, based on the fact that solar radiation changes periodically according to the Earth's rotation and orbit. Thus, in this study, we propose methods to calculate the seasonal component using the angle of incidence and the solar constant to improve forecasting accuracy, and those methods are verified using four different neural network models: the MLP and three RNN-based models, namely the vanilla RNN, LSTM, and GRU. The contributions of this paper are as follows:
- This is the first study that explicitly introduces extraterrestrial radiation, that is, the solar radiation outside the atmosphere, into various neural network models;
- Methods for integrating the extraterrestrial radiation into neural network models for PV generation forecasting are presented;
- To verify the effectiveness of the proposed methods, they are applied to four neural network models (the MLP and three RNN-based models: the vanilla RNN, LSTM, and GRU), and their forecasting performances are examined and compared.

2. Seasonal Changes in Extraterrestrial Radiation

Extraterrestrial radiation (ETR) refers to the power per unit area of sunlight irradiated at the distance between the Sun and the Earth in space.
The ETR is radiation unaffected by the atmosphere and acts as an upper limit on the energy from the Sun reaching the Earth. The ETR has a seasonal characteristic because the Earth's orbit and rotation are periodic. The parameters for modeling the seasonality of the ETR include the change in the distance between the Sun and Earth, the change in the solar declination due to the Earth's orbit, and the circumferential motion of the Sun due to the Earth's rotation. Since the Earth's orbit is not circular but elliptical, the distance between the Sun and the Earth changes along the orbit, which causes corresponding changes in radiation. In addition, since the Earth orbits the Sun with its rotation axis tilted by 23.5 degrees, the declination of the Sun varies with time, which is associated with the amount of sunlight throughout the year. The Earth's rotation also affects the radiation throughout the day; that is, there is a large amount of radiation during the day in contrast to the absence of radiation during the night. In this study, the ETR was geometrically determined as a function of time based on the diurnal motion model. Specifically, the solar constant was used to determine the maximum of the daily ETR, which was then adjusted according to the angle of incidence of the Sun with respect to the PV panel.

2.1. Solar Constant

The solar constant, G_sc, is defined as the power per unit area irradiated from the Sun at the average distance between the Sun and Earth. It can be measured from a satellite to eliminate the atmospheric effect, and its commonly used value is

G_sc = 1367 W/m^2. (1)

The radiation on a surface perpendicular to the sunlight on the n-th day of a year, denoted as G_on, is calculated as:

G_on = G_sc (1 + 0.033 cos(360 n / 365)). (2)

2.2. Angle of Incidence

The ETR parallel to the final radiation incident on the PV panel is obtained by considering the angle of incidence (θ), which is the angle between the radiation and the line orthogonal to the PV panel. The angle of incidence can be derived as a function of time by geometric modeling. The necessary variables include not only the declination, hour angle, latitude, and longitude but also the tilt angle and azimuth angle, which define the geometric configuration of the PV panels.

2.2.1. Declination

As illustrated in Figure 1, the declination (δ) is defined as the angle between the lines connecting the Sun and the Equator in the equatorial coordinate system. It has a positive value in the Northern Hemisphere. The declination on the n-th day of a year can be approximately determined as:

δ = 23.45 sin(360 (284 + n) / 365). (3)

Here and below, the trigonometric arguments are expressed in degrees.
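Putting the pieces of this section together, including the hour-angle and angle-of-incidence formulas given in the subsections that follow, the full calculation can be captured in a minimal Python sketch; the function name, the argument conventions, and the clamping to zero at night are illustrative assumptions rather than the paper's exact implementation.

import numpy as np

def extraterrestrial_radiation(n, solar_time_h, lat_deg, tilt_deg, azimuth_deg, g_sc=1367.0):
    # Illustrative ETR on a tilted plane in W/m^2; n is the day of the year,
    # solar_time_h is solar time in hours, and all angles are in degrees.
    rad = np.radians
    # Eq. (2): Sun-Earth distance correction of the solar constant.
    g_on = g_sc * (1.0 + 0.033 * np.cos(rad(360.0 * n / 365.0)))
    # Eq. (3): Cooper's approximation of the declination.
    delta = 23.45 * np.sin(rad(360.0 * (284.0 + n) / 365.0))
    # Eq. (6): hour angle, 15 degrees per hour from solar noon.
    omega = 15.0 * (solar_time_h - 12.0)
    d, p, b, g, w = (rad(x) for x in (delta, lat_deg, tilt_deg, azimuth_deg, omega))
    # Eq. (7): cosine of the angle of incidence on the tilted plane.
    cos_theta = (np.sin(d) * np.sin(p) * np.cos(b)
                 - np.sin(d) * np.cos(p) * np.sin(b) * np.cos(g)
                 + np.cos(d) * np.cos(p) * np.cos(b) * np.cos(w)
                 + np.cos(d) * np.sin(p) * np.sin(b) * np.cos(g) * np.cos(w)
                 + np.cos(d) * np.sin(b) * np.sin(g) * np.sin(w))
    # Eq. (8), clamped at zero so the ETR vanishes when the Sun is behind the plane.
    return g_on * max(cos_theta, 0.0)

# Example: solar noon on 1 January at an approximate Yulara latitude (illustrative values).
print(extraterrestrial_radiation(n=1, solar_time_h=12.0, lat_deg=-25.2, tilt_deg=25.0, azimuth_deg=0.0))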
The necessary variables include not only the dec hour angle, latitude, and longitude but also the tilt angle and azimuth angle, w the angles defining the geometric configuration of the PV panels. Declination As illustrated in Figure 1, the declination ( ) is defined as the angle betw lines connecting the Sun and the Equator in the equatorial coordinate system positive value in the Northern Hemisphere. The declination on the -th day can be approximately determined as : The Sun's position in the celestial coordinate system can be expressed as a function of time within a day by considering the circumferential motion of the Sun due to the Earth's rotation. However, to accurately integrate the time zone into the geometrical model, the local time needs to be converted into solar time, which is determined based on the Sun, following the rule as follows : where L st and L loc are the longitudes of Local Standard Time Meridian and the longitude at a specific location, respectively, and E is calculated as: Then, the solar time is further converted into the hour angle () according to the following relationship as: The hour angle is conceptually illustrated in Figure 2. pared with the latitude ( ). Hour Angle The Sun's position in the celestial coordinate system can be expressed of time within a day by considering the circumferential motion of the Su Earth's rotation. However, to accurately integrate the time zone into th model, the local time needs to be converted into solar time, which is dete on the Sun, following the rule as follows : where and are the longitudes of Local Standard Time Meridian a tude at a specific location, respectively, and is calculated as: Then, the solar time is further converted into the hour angle () acc following relationship as: The hour angle is conceptually illustrated in Figure 2. The PV panels are installed with a slope to maximize the amount of ated by minimizing the angle of incidence. The parameters defining the geo figuration of a PV panel consist of tilt () and azimuth (), which are show The tilt is the angle between the horizontal plane and the PV panel. The a angle measured from the Meridian to the point where the normal vector of is projected orthogonal to the horizontal plane, and thus it is positive in the and negative in the eastern area. Tilt and Azimuth of PV Panels The PV panels are installed with a slope to maximize the amount of power generated by minimizing the angle of incidence. The parameters defining the geometrical configuration of a PV panel consist of tilt () and azimuth (), which are shown in Figure 3. The tilt is the angle between the horizontal plane and the PV panel. The azimuth is the angle measured from the Meridian to the point where the normal vector of the PV panel is projected orthogonal to the horizontal plane, and thus it is positive in the western area and negative in the eastern area. Calculation of the Extraterrestrial Radiation Once the solar constant and the angle of incidence at a place of interest are calculated according to and, the value of ETR, denoted as G ext, can be simply calculated as: The PV panels are installed with a slope to maximize the amou ated by minimizing the angle of incidence. The parameters defining t figuration of a PV panel consist of tilt () and azimuth (), which are The tilt is the angle between the horizontal plane and the PV panel. 
angle measured from the Meridian to the point where the normal vec is projected orthogonal to the horizontal plane, and thus it is positive and negative in the eastern area. For instance, the process of calculating the ETR was applied to a place in Yulara, Australia. The resulting values of the ETR for a tilted surface on the first day of each month throughout a year are shown in Figure 4. The values of ETR had the same bell shape as a typical clear-sky PV generation over a day. The place is located in the Southern Hemisphere and, thus, the Sun's altitude and the corresponding value of cos are the highest in December and January. Accordingly, the ETR is the greatest in those months. The final radiance on the PV panel had the same bell-shaped pattern, but with more fluctuations depending on weather conditions. Consequently, the ETR can be interpreted as a piece of clean information on radiation by the Sun with noise from the atmosphere removed. Calculation of the Angle of Incidence The angle of incidence () for determining the ETR can be calculated by using the explained parameters in Subsections 2.2.1., 2.2.2., and 2.2.3., and is as follows : = cos (sin sin cos − sin cos sin cos + cos cos cos cos + cos sin sin cos cos + cos sin sin sin ). Calculation of the Extraterrestrial Radiation Once the solar constant and the angle of incidence at a place of interest are calculated according to and, the value of ETR, denoted as, can be simply calculated as: For instance, the process of calculating the ETR was applied to a place in Yulara, Australia. The resulting values of the ETR for a tilted surface on the first day of each month throughout a year are shown in Figure 4. The values of ETR had the same bell shape as a typical clear-sky PV generation over a day. The place is located in the Southern Hemisphere and, thus, the Sun's altitude and the corresponding value of cos are the highest in December and January. Accordingly, the ETR is the greatest in those months. The final radiance on the PV panel had the same bell-shaped pattern, but with more fluctuations depending on weather conditions. Consequently, the ETR can be interpreted as a piece of clean information on radiation by the Sun with noise from the atmosphere removed. Forecasting Models In this section, the traditional persistent model and representative neural network models are briefly described, which are used for comparison purposes in the case study later. Forecasting Models In this section, the traditional persistent model and representative neural network models are briefly described, which are used for comparison purposes in the case study later. Persistence Model In the persistence model, PV outputs of yesterday are used as forecasted PV generation. Although the persistence model is simple, it shows a satisfactory forecasting performance, particularly when the weather conditions do not change significantly. The persistence model was used as a reference in a study comparing the forecasting accuracy. Multilayer Perceptron The multilayer perceptron (MLP) is a model where nodes that imitate a human neural network are stacked in series and parallel. The node receives input data, multiplies them by its weights, applies its activation function to the intermediate value, and finally generates the outputs. The wider and deeper the nodes are, the more complex the functions can be modeled. A learning process is performed by computing the gradient of the loss function and updating the weights of a node using the back-propagation algorithm. 
If the MLP is deep with many layers, a vanishing gradient problem can occur, which means the gradient is no longer passed to the previous layers. Some activation functions, such as the rectified linear unit (ReLU), can mitigate the vanishing gradient problem. The MLP can be regarded as more effective and flexible than traditional regression models because there is no assumption on the form of the target function to be modeled. Recurrent Neural Network Unlike the MLP, the recurrent neural network (RNN) has a memory function that is implemented by sequentially feeding back the outputs or states in the previous times to the current input. Obviously, the outputs in the previous times contain the information in the past. This property makes the RNN suitable for dealing with time series data. The learning process of the RNN is performed by the technique of back-propagation through time, which unfolds the RNN with respect to time and applies the same back-propagation algorithm as the MLP. However, the weights of the RNN have the risk of divergence, particularly when handling a long sequence of data, because the same weight parameters are updated repetitively during training. To address the divergence problem, gate structures are proposed for the RNN, called gated RNN, such as long short-term memory (LSTM) and gated recurrent unit (GRU). The gates in the gated RNN determine which information is retained or discarded in the hidden state. Nodes constituting the gates make these decisions. The nodes applies the sigmoid function as their activation function to their intermediate outputs, which are then multiplied with the hidden state. As the sigmoid function is bounded in, only a portion of the hidden states is preserved by multiplication, meaning the gate considers the portion important. The parameters of the nodes are added to those of the simple RNN, and they are also updated in the learning process so that the gates make effective decisions. It has been empirically verified that gated RNNs have better characteristics in terms of convergence and accuracy for the forecasting task of a long sequence. Recently, gated RNNs have been widely used for time series forecasting. Vanilla RNN Vanilla RNN is a preliminary RNN structure that stores historic information by a hidden state. The hidden state at time t (h t ) is determined by applying the hyperbolic tangent function to the weighted sum of the current input and hidden states in the previous times as follows: where W h,x and W h,h are the weights; b h is the bias; x t is the input at time t; and h t is the output at time t. Figure 5 shows the structure of a cell in the LSTM. The cell has the input, forget, and output gates that determine the output through weight update Equations in -. Long Short-Term Memory where and W g,h are the weights; b i, b f, b o, and b g are the biases; x t and h t are the input and output at time t; i t, f t, and o t are the input, forget, and output gates at time t, respectively; g t is the candidate; () is the sigmoid function. The function of the three gates of the LSTM cell is implemented as the elementwise multiplication, denoted as x around a circle in Figure 6. Specifically, the forget gate eliminates unnecessary information from the cell state (c t−1 ) in the previous time; the input gate extracts important information from the input; the output gate generates the selective output from the sigmoid function values (o t ) for the hidden state (h t−1 ) in the previous time. 
Gated Recurrent Unit
Unlike the LSTM, the cell of the GRU has two gates, that is, a reset gate and an update gate (Figure 6). The update equations associated with the gates are given as:

z_t = σ(W_{z,x} x_t + W_{z,h} h_{t-1} + b_z),
r_t = σ(W_{r,x} x_t + W_{r,h} h_{t-1} + b_r),
h̃_t = tanh(W_{h,x} x_t + W_{h,h} (r_t ⊙ h_{t-1}) + b_h),
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t,

where W_{z,x}, W_{z,h}, W_{r,x}, W_{r,h}, W_{h,x}, and W_{h,h} are the weights; b_z, b_r, and b_h are the biases; x_t and h_t are the input and output at time t; and z_t and r_t are the update and reset gates at time t, respectively.

Forecasting Methods with Extraterrestrial Radiation
The ETR has rarely been considered in forecasting applications using data-driven machine learning models. One of the most useful properties of the ETR in a forecasting problem is that it accurately contains the seasonal effect on the target variable to be forecasted. Thus, the forecasting accuracy is expected to improve if the ETR is effectively integrated into existing forecasting methods. In this section, we propose methods to integrate the ETR into a data-driven forecasting method.

Forecasting Framework of the Base Method
In this subsection, the base day-ahead hourly forecasting framework, which does not consider the ETR, is presented as shown in Figure 7. The procedures are as follows:
1. The imputer imputes day-ahead generation and weather data;
2. The imputed data are arranged in the timeframe of a past interval (15 min) and a present interval (1 h);
3. The sequenced data are split for training and testing;
4. The split data are scaled using the MinMaxScaler;
5. A forecasting model is trained and tested;
6. The forecasted results are saved and the errors are calculated.

The simple imputer fills omitted data using nearby data. The past interval, which is the interval of the model's input, was determined to be 15 min. The present interval, which is the interval of the model's output, was determined to be 1 h, as most day-ahead electricity markets receive bids and offers in hourly units and unit commitment is solved in an hourly manner. The lower bound of the scaling interval of the MinMaxScaler was determined to be 0.1 to prevent the distortion of data resulting from dividing by a small number.
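A schematic of this base pipeline is sketched below in scikit-learn/pandas terms, following the numbered steps above. It reflects our own reading of the framework: raw_measurements is an assumed time-indexed DataFrame, and build_sequences is a hypothetical helper standing in for the sequencing block.

from sklearn.preprocessing import MinMaxScaler

# Step 1: impute omitted values from nearby samples
# (raw_measurements: an assumed time-indexed pandas DataFrame)
df = raw_measurements.interpolate(method="nearest")

# Step 2: arrange 15 min input windows against 1 h output targets
X, y = build_sequences(df, past_step="15min", present_step="1h")  # hypothetical helper

# Step 3: chronological train/test split
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Step 4: scale to [0.1, 1] so that later divisions never involve values near zero
scaler = MinMaxScaler(feature_range=(0.1, 1.0))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Steps 5 and 6: train the forecasting model and compute the errors (sketches below)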
Forecasting Framework of Proposed Methods
In this subsection, the methods combining the ETR with a data-driven neural network model are described. The framework of the proposed methods is presented in Figure 7. Specifically, five integration methods were developed and are presented: division preprocessing, multiplication preprocessing, replacement of existing input, inclusion as additional input, and inclusion as an intermediate target, which are denoted in order as M1, M2, M3, M4, and M5. In the proposed forecasting framework, the ETR is calculated for the given timestamps using the latitude, longitude, tilt angle, and azimuth angle of the panel at the site. Compared with the base method, M1 and M2 perform an additional function in the adjustment/re-adjustment block, which is drawn with a dashed line in Figure 7. Similarly, M3, M4, and M5 perform the function of the input filtering block, which is drawn with a dashed-double dotted line in Figure 7, to modify their input for their method-specific implementation.
Division Preprocessing
The ETR is attenuated by the atmosphere in proportion to its total magnitude. The ratio of the solar radiation reaching the ground to the ETR is called the clearness index in meteorological terms. Even if the atmospheric conditions are the same at 9:00 a.m. and at 12:00 p.m., the absolute attenuated amount is larger at 12:00 p.m. because of the difference in ETR. To accurately reflect the atmospheric conditions in a model, PV generation needs to be adjusted so that the model receives the same signal when the meteorological circumstances are the same. Dividing the radiation by the ETR is one way of transforming the radiation into a purely meteorologically affected signal. Since PV generation is strongly correlated with the radiation (R² = 0.99), dividing the PV generation by the ETR is expected to have the same effect as dividing the radiation. Thus, method M1, which normalizes PV generation using the ETR, is proposed. The framework for M1 is shown in Figure 7, and it is implemented by dividing the PV generation by the ETR in the adjustment block and multiplying by it in the re-adjustment block.

Multiplication Preprocessing
Data-driven neural network models are trained by back-propagating an error. The mean squared error (MSE), which is the squared difference between the predicted output and the target, is generally used for training a model and is also one of the performance indexes for evaluation. Since the MSE is calculated by squaring absolute differences, the outputs around noon, when the ETR is large, significantly affect the total MSE because of their magnitude. For example, if the model predicts poorly around noon and well in the morning, the MSE will be larger than in the opposite situation. If the training objective is to minimize the MSE, which is the most likely situation in real applications, it can be considered desirable to train more accurately in timeslots with a high ETR. Multiplying PV generation by the ETR makes the squared error in timeslots with a high ETR large during training, resulting in a strong back-propagation signal for the input variables in those timeslots to minimize the modified MSE. For this reason, method M2, which multiplies PV generation by the ETR, is proposed. The framework for M2 is also shown in Figure 7, and it is implemented by multiplying the PV generation by the ETR in the adjustment block and dividing by it in the re-adjustment block.
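A minimal sketch of the M1 and M2 adjustment and re-adjustment blocks is given below, assuming NumPy arrays pv_train, etr_train, etr_test and fitted scikit-learn-style models model_m1 and model_m2 from the surrounding pipeline; the epsilon guard against a night-time ETR of zero is our own addition.

import numpy as np

EPS = 1e-6  # guard against the night-time ETR of zero (our own addition)

# M1: divide the target by the ETR before training, multiply back after forecasting
y_train_m1 = pv_train / np.maximum(etr_train, EPS)      # adjustment block
model_m1.fit(X_train, y_train_m1)
pv_pred_m1 = model_m1.predict(X_test) * etr_test        # re-adjustment block

# M2: multiply the target by the ETR before training, divide back after forecasting,
# so that squared errors near noon dominate the back-propagated training signal
y_train_m2 = pv_train * etr_train                       # adjustment block
model_m2.fit(X_train, y_train_m2)
pv_pred_m2 = model_m2.predict(X_test) / np.maximum(etr_test, EPS)  # re-adjustment block

In both cases the model itself is unchanged; only the target it sees during training, and the inverse transform applied to its forecasts, differ.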
Replacement of Existing Input
Most forecasting frameworks use historical PV generation as one of their inputs. In such frameworks, the model receives limited data, such as those for one or two days prior, as input to forecast generation at a specific time, unless long periods of historical data are inserted in direct ways, such as sequence-to-sequence LSTM, or in indirect ways, as prior studies have done. In other words, the model preserves historical data only in the form of model parameters and has limited access to historical generation only as input for forecasting. As meteorological conditions vary from day to day, today's PV generation can be significantly different from day-ahead generation due to this stochasticity. Performance can be expected to improve if the model receives a clean signal with the stochasticity removed. The ETR is the solar radiation unaffected by meteorological conditions and can be deemed a total sum of historical data, since summing all the historic generation erases the stochasticity. Therefore, method M3, which replaces day-ahead generation with the ETR, is proposed. The framework for M3 is also shown in Figure 7, and it is implemented by inserting the ETR and weather data through an input filtering block to a sequencing block.

Inclusion as Additional Input
The ETR is the solar radiation without stochasticity and is an upper bound on PV power generation. Day-ahead PV generation includes information about the day-ahead meteorological conditions, which can be correlated with today's conditions. Specifically, the day-ahead data are especially valuable in situations where similar weather conditions continue for several days. Both the ETR and day-ahead PV generation are therefore meaningful information for forecasting. Accordingly, method M4, which includes both day-ahead generation and the ETR, is proposed. The framework for M4 is also shown in Figure 7, and it is implemented by inserting day-ahead PV generation, the ETR, and weather data through an input filtering block to a sequencing block.

Inclusion as an Intermediate Target
Instead of inserting the raw ETR directly, it would be more effective for the ETR to be scaled by the daily averaged clearness index, taking the weather into account. Therefore, a method that utilizes a separately trained MLP model to predict the clearness index is proposed. This MLP model is named the clearness model, as it predicts the clearness index. The training framework for this model is shown in Figure 8 and is as follows:
1. The clearness model takes weather data as input and outputs a clearness index, which is a one-dimensional scalar;
2. The ETR is multiplied by the output of the clearness model;
3. The MSELoss between the adjusted ETR and the target value is calculated;
4. The MSELoss is backpropagated to train the clearness model.

The framework for M5 is also shown in Figure 7, and it is implemented by inserting day-ahead PV generation, the ETR multiplied by the output of the trained clearness model, and weather data through an input filtering block to a sequencing block.
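The four training steps of the clearness model above can be rendered in PyTorch roughly as follows. The layer sizes, the sigmoid output bound (a clearness index lies between 0 and 1), and the data loader are our own assumptions.

import torch
import torch.nn as nn

# Clearness model: weather features in, a scalar clearness index out
clearness_model = nn.Sequential(
    nn.Linear(n_weather_features, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),  # a clearness index lies in (0, 1)
)
optimizer = torch.optim.Adam(clearness_model.parameters())
loss_fn = nn.MSELoss()

for weather, etr, pv_target in loader:       # step 1: weather data as input
    k = clearness_model(weather)             # predicted clearness index
    adjusted_etr = etr * k                   # step 2: scale the ETR by the index
    loss = loss_fn(adjusted_etr, pv_target)  # step 3: MSELoss against the target
    optimizer.zero_grad()
    loss.backward()                          # step 4: backpropagate to train
    optimizer.step()

Only the clearness model is trained in this loop; the adjusted ETR it produces is then fed to the main forecasting model as an input, as described above.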
Dataset
In the case study, datasets provided by the Desert Knowledge Australia Solar Centre (DKASC) were used. DKASC operates solar power plants in Alice Springs and Yulara in Australia, and their information is listed in Table 2. The datasets provided by DKASC contain both meteorological information and PV power generation data for the two regions. The specific features in the dataset are listed in Table 3. The wind-direction feature was excluded because it increased the forecasting error. The features of global horizontal radiation and diffuse horizontal radiation were not chosen either, because they strongly correlate with PV generation. The pyranometer measurement was also excluded, as it measures radiation only in a specific frequency band. The total period of the data used in the case study was from May 2016 to January 2021. Among these, the data from May 2016 to January 2020 were used for training; the remaining data over the year from February 2020 to January 2021 were used for the test.

Models and Performance Index
The effectiveness of the proposed methods was verified and compared using four representative neural network models: the MLP, the Vanilla RNN, the LSTM, and the GRU. To evaluate the forecasting performance, two indexes, the mean squared error (MSE) and the mean absolute error (MAE), were used. The two indexes are defined as follows:

MSE = (1/n) Σ_{i=1}^{n} (y_i − f(x_i))²,
MAE = (1/n) Σ_{i=1}^{n} |y_i − f(x_i)|,

where n is the number of samples, x_i are the predictors, y_i is the target output, and f is the output forecasted by a model. To determine the hyperparameters of the models, 10-fold cross-validation was conducted by randomly sampling 10 equally sized subsets from the training set and using each of the subsets for validation in each stage of the validation. The resulting hyperparameters of the models are listed in Table 4. The Adam optimizer was used in the training. The number of epochs was set to 150 because the average of the 10-fold validation errors saturated and oscillated within a small range after 60 to 90 epochs of training, and increased after more than 200 epochs, as shown in Figure 9. Figure 9 also shows the average 10-fold training error of the same model, which steadily declined as the number of epochs increased. This means the model had enough capacity to fit the training data, validating the choice of the size of the hidden states. The difference in scale between the validation error and the training error arose because the MinMaxScaler was applied during training. Learning-rate schedulers were not used in this study, because they can make the model converge to a poor local minimum, which leads to a higher MSE. The RNN-based models were configured to be bidirectional, as shown in Figure 10, because it is more effective to consider future data and predicted outputs, especially for day-ahead forecasting. A simple two-layer MLP model was applied to the output of the RNN-based models to reduce the dimension of the hidden states to one.
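As a rough PyTorch rendering of this configuration, the sketch below wires a bidirectional GRU to a two-layer MLP head that reduces the hidden states to a single output; the hidden size and the choice of the GRU variant are illustrative assumptions, not the paper's exact settings.

import torch.nn as nn

class BiRNNForecaster(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        # A simple two-layer MLP reduces the concatenated forward/backward
        # hidden states to a single forecast value
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.rnn(x)          # out: (batch, time, 2 * hidden)
        return self.head(out[:, -1])  # forecast from the final time step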
Results
The proposed methods were applied to the four neural network models described in Section 5.2 on the two datasets. The results are listed in Tables 5 and 6. Each method was trained and tested 20 times for each combination of model and dataset to examine its effect on average. As a result, M5 was the most effective in the majority of the dataset-model combinations. Table 7 shows the improvements M5 achieved relative to the base method in the RNN-based models. M5 reduced the MSE by 4.1% on average and by at most 12.28% in the RNN-based models. These results imply that integrating the ETR into neural network models can improve model performance without additional investment in data collection. The fact that M5 performed consistently better than the other methods across different datasets and RNN-based models means that M5 can be expected to be effective when one forecasts PV generation without verifying which method would be the best.

We analyzed the results of each method, from which important lessons were derived as follows. First, for M1, two points are notable: (1) comparing the performance of M1 with the persistence model on each dataset to consider the relative error, the performance on the BP Solar dataset was worse than that on the Desert Gardens dataset; (2) even though M5 was superior most of the time, the performance of M1 was the best when the MLP was used on the BP Solar dataset. From point (1), it can be derived that the performance of each method varies considerably between different datasets, even though the datasets are not vastly different, as their geographical locations are similar. From point (2), it can be derived that the performance of each method varies across different models. The reason that M1 featured a large error on one of the datasets is inferred as follows. When normalized, the PV generation in timeslots with a small ETR and that in timeslots with a large ETR become closer to each other. Then, gradient signals with similar magnitudes are backpropagated in each timeslot during training, the two timeslots being treated equally even though their ETRs are different. During testing, the outputs of the model are multiplied by the ETR, which amplifies the error. If trained inappropriately, forecasting in the timeslots with a large ETR can be inaccurate compared with those with a small ETR, resulting in a higher MSE than the base method. However, as shown in point (2), M1 can be the best method depending on the forecasting configuration. Therefore, rather than reusing the method that was best on a different dataset, it is important to verify the competitiveness of each method by validation in order to choose the best method for the dataset and model used. M2 performed better than M1 on average. Nevertheless, it is not consistently superior to the base method, implying that adjusting the gradient signal can affect the result adversely. M3, which replaces the historical generation with the ETR, performed poorly in general. In particular, the MSE of M3 was 150% higher than that of the base method when the Desert Gardens dataset and the RNN-based models were used. This result suggests two implications: (1) assuming that the ETR acted as a good baseline for the prediction, the inaccurate forecasting of M3 indicates a lack of sufficient meteorological data to predict the attenuation ratio of the atmosphere; (2) comparing the situations with and without day-ahead generation, the superior performance of the methods that include day-ahead generation implies that day-ahead PV forecasting is effective when day-ahead generation is used as a baseline for the prediction. In other words, the neural network-based models improve on the persistence model by considering meteorological information. The results of M4 and M5 can be analyzed from this perspective.
M4 and M5, which take day-ahead PV generation as one of their inputs, achieved additional improvement by having one more feature than the base method, with day-ahead generation as a baseline. M4, which includes the raw ETR as one of the inputs, saw a decent improvement on the Desert Gardens dataset while performing worse on the BP Solar dataset. It was more desirable to adjust the ETR using meteorological information most of the time in this experiment. Although M5 had a general advantage in our experiments, there are various circumstances in which PV generation is forecasted in a day-ahead manner in real-world applications, and the effectiveness of each method varies depending on models and datasets. Therefore, it should be noted that, for a given model and dataset, the various methods should be evaluated by validation to effectively integrate the ETR into a forecasting model.

Conclusions
This study presents a simple and effective method to improve the forecasting accuracy of PV generation, mitigating the problems caused by the inherent characteristics of PV, such as variability, intermittency, and limited controllability. The study focused on the ETR, which is strongly associated with the seasonal component of PV, as a means to improve forecasting performance. We selected neural network models as the basic forecasting method, composed five methods to integrate the ETR into them, and examined the effect in terms of forecasting performance. The specific integration methods were division preprocessing, multiplication preprocessing, replacement of existing input, inclusion as additional input, and inclusion as an intermediate target. The methods were tested on the MLP, Vanilla RNN, LSTM, and GRU using the two PV datasets. The results show that combining the ETR with existing models can achieve meaningful improvement in forecasting performance, and they present a new approach to considering seasonal changes in PV generation. Among the methods, including the ETR as an intermediate target (i.e., M5) showed relatively better results than the other integration methods. However, the combination of M5 with the Vanilla RNN was the best for one dataset, whereas the combination of M5 with the LSTM was the best for the other. Thus, no single neural network model combined with the ETR showed absolute superiority, as is usual in comparisons between AI methods. This study was limited to a few selected neural network models, even though they are known to deal effectively with time series data. The effectiveness of the proposed integration methods of the ETR can therefore be further examined with other neural network models. It is also necessary to apply the proposed methods to other PV datasets in extended studies. Then, the relative superiority of M5 can be further evaluated, and more elaborate advice for combining the ETR with neural network-based models can be given.
|
import subprocess
#import gdal
import multiprocessing
import pandas as pd
import os
import glob
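# Delete all local rasters for a given tile from cpp_util/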
def del_tiles(tile_id):
tiles = glob.glob('cpp_util/{}*.tif'.format(tile_id))
for tile in tiles:
os.remove(tile)
def upload_final(upload_dir, tile_id):
files = ['disturbance_model_t_CO2_ha', 'shiftingag_model_t_CO2_ha', 'forestry_model_t_CO2_ha', 'wildfire_model_t_CO2_ha', 'deforestation_model_t_CO2_ha', 'urbanization_model_t_CO2_ha', 'node_totals_reclass']
for f in files:
to_upload = "outdata/{0}_{1}.tif".format(tile_id, f)
print "uploading {}".format(to_upload)
destination = '{0}/{1}/'.format(upload_dir, f)
cmd = ['aws', 's3', 'mv', to_upload, destination]
try:
subprocess.check_call(cmd)
except:
print "error uploading"
def mask_loss(tile_id):
dest_folder = 'cpp_util/'
# modify loss tile by erasing where plantations are
idn_plant_shp = '{0}/plant_est_2000_or_earlier.shp'.format(dest_folder)
loss_tile = '{0}/{1}_loss.tif'.format(dest_folder, tile_id)
cmd = ['gdal_rasterize', '-b', '1', '-burn', '0', idn_plant_shp, loss_tile]
print cmd
subprocess.check_call(cmd)
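# Downloads every input needed to process one tile: carbon pools, data-prep and
# FAO ecozone rasters, burned-area data, and the IDN/MYS plantation shapefile;
# it then renames whichever peatland raster was downloaded to a standard name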
def download(file_dict, tile_id, carbon_pool_dir):
carbon_pool_files = file_dict['carbon_pool']
data_prep_file_list = file_dict['data_prep']
fao_ecozone_file_list = file_dict['fao_ecozone']
dest_folder = 'cpp_util/'
for carbon_file in carbon_pool_files:
src = '{0}/{1}/{2}_{1}.tif'.format(carbon_pool_dir, carbon_file, tile_id)
cmd = ['aws', 's3', 'cp', src, dest_folder]
subprocess.check_call(cmd)
for data_prep_file in data_prep_file_list:
file_name = '{0}_res_{1}.tif'.format(tile_id, data_prep_file)
if data_prep_file == 'tsc_model':
file_name = '{0}_{1}.tif'.format(tile_id, data_prep_file)
src = 's3://gfw2-data/climate/carbon_model/other_emissions_inputs/{0}/{1}'.format(data_prep_file, file_name)
cmd = ['aws', 's3', 'cp', src, dest_folder]
subprocess.check_call(cmd)
for ecozone_files in fao_ecozone_file_list:
file_name = '{0}_res_{1}.tif'.format(tile_id, ecozone_files)
src = 's3://gfw2-data/climate/carbon_model/inputs_for_carbon_pools/processed/{0}/{1}'.format(ecozone_files, file_name)
cmd = ['aws', 's3', 'cp', src, dest_folder]
subprocess.check_call(cmd)
burned_area = file_dict['burned_area'][0]
src = 's3://gfw2-data/climate/carbon_model/other_emissions_inputs/burn_year/{0}/{1}_burnyear.tif'.format(burned_area, tile_id)
cmd = ['aws', 's3', 'cp', src, dest_folder]
subprocess.check_call(cmd)
# Download and unzip shapefile of Indonesia and Malaysia plantations if they have not already been downloaded
if os.path.exists('{0}/plant_est_2000_or_earlier.zip'.format(dest_folder)) == False:
src = 's3://gfw-files/sam/carbon_budget/idn_plant_est_2000_or_earlier/plant_est_2000_or_earlier.zip'
cmd = ['aws', 's3', 'cp', src, dest_folder]
subprocess.check_call(cmd)
cmd = ['unzip', '-o', '{0}/plant_est_2000_or_earlier.zip'.format(dest_folder), '-d', dest_folder]
subprocess.check_call(cmd)
# rename whichever peatland file was downloaded
peat_files = ['peatland_drainage_proj', 'cifor_peat_mask', 'hwsd_histosoles']
for peat_file in peat_files:
one_peat = glob.glob("{0}/{1}*{2}*".format(dest_folder, tile_id, peat_file))
if len(one_peat) == 1:
os.rename(one_peat[0], '{0}/{1}_peat.tif'.format(dest_folder, tile_id))
def wgetloss(tile_id):
print "download hansen loss tile"
dest_folder = 'cpp_util/'
hansen_tile = 's3://gfw2-data/forest_change/hansen_2016/{}.tif'.format(tile_id)
local_hansen_tile = '{0}/{1}_loss.tif'.format(dest_folder, tile_id)
cmd = ['aws', 's3', 'cp', hansen_tile, local_hansen_tile]
subprocess.check_call(cmd)
return local_hansen_tile
def tile_list(source):
## For an s3 folder in a bucket using AWSCLI
# Captures the list of the files in the folder
out = subprocess.Popen(['aws', 's3', 'ls', source], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, stderr = out.communicate()
# Writes the output string to a text file for easier interpretation
aboveground_c_tiles = open("aboveground_c_tiles.txt", "w")
aboveground_c_tiles.write(stdout)
aboveground_c_tiles.close()
file_list = []
# Iterates through the text file to get the names of the tiles and appends them to list
with open("aboveground_c_tiles.txt", 'r') as tile:
for line in tile:
num = len(line.strip('\n').split(" "))
tile_name = line.strip('\n').split(" ")[num - 1]
tile_short_name = tile_name.replace('_carbon.tif', '')
file_list.append(tile_short_name)
return file_list
def remove_nodata(tile_id):
print "Removing nodata values in output tiles"
files = ['disturbance_model', 'shiftingag_model', 'forestry_model', 'wildfire_model', 'deforestation_model', 'urbanization_model']
for f in files:
to_process = "outdata/{0}_{1}.tif".format(tile_id, f)
out_reclass = "outdata/{0}_{1}_t_CO2_ha"
cmd = ['gdal_translate', '-a_nodata', 'none', to_process, out_reclass]
subprocess.check_call(cmd)
cmd = ['gdal_translate', '-a_nodata', 'none', "outdata/{0}_node_totals.tif".format(tile_id), "outdata/{0}_node_totals_reclass.tif".format(tile_id)]
subprocess.check_call(cmd)
|
import Vue from 'vue'
import Vuetify from 'vuetify/lib/framework'
import { Resize } from 'vuetify/lib'
import '@mdi/font/css/materialdesignicons.min.css' // Ensure you are using css-loader
Vue.use(Vuetify)
Vue.directive('resize', Resize)
export default new Vuetify({
breakpoint: {
mobileBreakpoint: 'xs'
}
})
|
Research In Motion (RIM), the company behind BlackBerry mobile devices, will not have to pay patent licence fees to a rival email software company after the High Court ruled that the rival's UK patent was invalid.
RIM took the court case to revoke a patent owned by Visto, which makes email software. It also asked the courts to declare that its software and machines did not infringe the patent.
Mr Justice Floyd said in his judgment that RIM's technology did infringe the ground covered by the patent, but that the patent was invalid because it was a computer program and was not inventive enough.
The Patents Act, which is based on the European Patent Convention, says that anything which is solely a computer program cannot be patented.
"Although [the claim] is not novel in itself, it is novel within the new combination [of hardware]," said Mr Justice Floyd. "But this is simply the effect of running the program on the computers. It is providing for data to be delivered from one element to another, so that the data is accessible to a user at another computer."
"That is exactly the sort of thing that computers do when programmed. It does not seem to me that that is enough of a technical effect to render the invention patentable," he said.
Visto's patent was for a "system and method for synchronizing electronic mail across a network", but the court found that the use of the communications protocol HTTP to route emails from a corporate network to a device was obvious, and therefore not worthy of a patent.
Mr Justice Floyd pointed out that the fact that a technology involves a computer program does not automatically exclude it from patentability. "The exclusion only bites if the invention is only a computer program," he said. "The mere fact that an invention involves a computer program in some way does not exclude it from patentability."
In this case, though, he ruled that the technology was simply a computer program.
The patentability of technology which may or may not qualify as software has long been a controversial area in UK law.
A landmark ruling in a case involving inventor Neal Macrossan last year has set down a new set of rules on how courts should decide whether or not technology consists solely of a computer program and therefore cannot be patented.
The UK Intellectual Property Office (UK-IPO) has recently had to change its guidance on the issue, though, after the High Court said that some computer programs could be patented. It demanded the re-examination by the UK-IPO of six patent applications and said that the UK-IPO's guidance on the issue was too sweeping.
"I do not detect anything in the reasoning of the Court of Appeal [in the Macrossan case] which suggests that all computer programs are necessarily excluded," wrote Mr Justice Kitchin in the ruling.
|
"""Add hive_javaheap to export_tables
Revision ID: 5d9908c8cf55
Revises:
Create Date: 2019-06-10 12:00:55.102094
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '5d9908c8cf55'
down_revision = '<KEY>'
branch_labels = None
depends_on = None
def upgrade():
op.add_column('export_tables', sa.Column('hive_javaheap', sa.BigInteger))
def downgrade():
op.drop_column('export_tables', 'hive_javaheap')
|
It’s been a while since Lloyds Banking Group was synonymous with positive surprises. In fact, in the eight years since Lloyds took over the stricken HBOS at the height of the financial crisis, the British bank has trotted out a succession of unwelcome setbacks.
Through the delayed sale of TSB, intermittent dividend payments and £16bn in PPI costs, investors have questioned whether the bad news would ever end. That long disappointing drudge appeared to end at 7am on Thursday, when the bank announced full-year results for 2015.
|
Synthesis of ZSM-5 microspheres made of nanocrystals from iron ore tailings by the solid phase conversion method. ZSM-5 microspheres made of nanocrystals are successfully synthesized from iron ore tailings (IOTs) using a novel and green method; the microspheres have a well-defined microporous and mesoporous structure with a large surface area and high acidic strength. The phase separation between the surfactant and the solid silica phase is successfully avoided owing to the absence of a liquid water phase during the solid-phase conversion. Compared to conventional methods, such as the hydrothermal and steam-assisted conversion methods, this approach enhances the utilization of autoclaves, considerably reduces pollutants, and simplifies the synthetic process, which saves both energy and time. In addition, we studied the crystallization of ZSM-5 microspheres by the solid phase conversion method at 413, 433, and 453 K. The results of the kinetic study suggest that the experimental data are fitted by nonlinear regression following the Kolmogorov-Johnson-Mehl-Avrami nucleation/crystallization model. The activation energies for the induction, transition, and crystallization stages are 70.96, 39.76, and 36.23 kJ·mol⁻¹, respectively. The new method provides a cost-effective and industrially applicable route for the reuse of IOTs to synthesize ZSM-5 microspheres. This synthetic concept could also be expanded to obtain other types of mesoporous zeolites.
|
import express from "express";
import axios from "axios";
import { getGithubUser, writeToCache } from "./util";
import jwt from "jsonwebtoken";
const app = express();
const port = 9000;
const token = jwt.sign({ app: "reposit" }, "<PASSWORD>");
app.use(express.urlencoded({ extended: true }));
app.use(express.json());
app.get("/github-redirect", async (req, res) => {
try {
const {
data: { id, sec },
} = await axios.get("https://reposit-server.herokuapp.com/creds", {
headers: { Authorization: `Bearer ${token}` },
});
    const { data } = await axios.post(
"https://github.com/login/oauth/access_token",
{
client_id: id,
client_secret: sec,
code: req.query["code"],
accept: "json",
},
{
headers: {
Accept: "application/json",
},
}
);
let username = await getGithubUser({ access_token: data["access_token"] });
writeToCache({
Github: {
access_token: data['access_token'],
username
},
});
res.send(`
<!DOCTYPE html>
<html>
<body>
<h1>Authorized</h1>
<h2>You can return to your terminal.</h2>
</body>
</html>
`);
} catch (e) {
res.setHeader("Content-Type", "text/html");
res.status(500)
console.log(e)
res.send(e.message)
}
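  // Stop the local callback server once the OAuth handshake has finished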
server.close();
});
export const server = app.listen(port, () => {
// console.log("listening on 9000");
});
|
"""This module contains functions that calculate the magnetic field
of a circular loop with electric current"""
from scipy.special import ellipk, ellipe
import numpy as np
# import time
mu0 = 4*np.pi*1E-7
# Parameter for the elliptic integrals (note: scipy's ellipk/ellipe expect m = k**2, which is what this returns)
k = lambda r,z,a: np.divide(4*a*r, (a + r)**2 + z**2)
# Axial magnetic flux
Bz = lambda r,z,a,i: np.divide(mu0*i, 2*np.pi*np.sqrt((a + r)**2 + z**2))*(ellipk(k(r,z,a)) + np.divide(a**2 - r**2 - z**2, (a - r)**2 + z**2)*ellipe(k(r,z,a)))
# Radial magnetic flux
Br = lambda r,z,a,i: np.divide(mu0*i, 2*np.pi*z*r*np.sqrt((a + r)**2 + z**2))*(-ellipk(k(r,z,a)) + np.divide(a**2 + r**2 + z**2, (a - r)**2 + z**2)*ellipe(k(r,z,a)))
# Azimuthal vector potential
Ap = lambda r,z,a,i: np.divide(mu0*i*a, np.pi*np.sqrt((r + a)**2 + z**2))*np.divide((2 - k(r,z,a))*ellipk(k(r,z,a)) - 2*ellipe(k(r,z,a)), k(r,z,a))
# Magnetic field of a loop current
MF = lambda r,z,a,i: np.array((Ap(r,z,a,i), Br(r,z,a,i), Bz(r,z,a,i)))
# Magnetic field of a solenoid
def BB(R, Z, ID, OD, DZ, W, I, NR, NZ):
ZZ = np.linspace(-0.5*DZ, 0.5*DZ, NZ)
RR = np.linspace(0.5*ID, 0.5*OD, NR)
PP = np.array([(rr, zz) for rr in RR for zz in ZZ])
    field = np.sum(np.array([MF(R, Z - pp[1], pp[0], I*W/NR/NZ) for pp in PP]), axis=0)
    return field
# Vector potential of a solenoid
def AA(R, Z, ID, OD, DZ, W, I, NR, NZ):
ZZ = np.linspace(-0.5*DZ, 0.5*DZ, NZ)
RR = np.linspace(0.5*ID, 0.5*OD, NR)
PP = np.array([(rr, zz) for rr in RR for zz in ZZ])
AA = np.sum([Ap(R, Z - pp[1], pp[0], I*W/NR/NZ) for pp in PP])
return AA
# t0 = time.time()
# B0 = BB(R = 0.1, # r of the observation point (m)
# Z = 0.0, # z of the observation point (m)
# ID = 0.250, # winding inner diameter (m)
# OD = 0.300, # winding outer diameter (m)
# DZ = 0.230, # winding depth along z-axis (m)
# W = 400, # number of turns
# I = 2.16E3, # current in the coil (A)
# NR = 10, # number of winding layers along r-axis
# NZ = 40) # number of winding layers along z-axis
# t1 = time.time()
# print(B0)
# print(t1 - t0)
|
Introduction of a Mobile Adverse Event Reporting System Is Associated With Participation in Adverse Event Reporting Physicians underutilize adverse event reporting systems. Web-based platforms have increased participation; thus, it was hypothesized that a mobile application would increase adverse event reporting. The authors developed a mobile reporting application for the iOS and Android operating systems and performed a retrospective review of reporting rates by clinicians in the Department of Anesthesia and Critical Care. Monthly reporting rates were calculated for the intervention year and for the 2 prior years. The Wilcoxon rank sum test and the χ2 test were used to evaluate significance. Overall monthly reporting rates for all clinicians were 15.3 ± 7 for the first time period, 17.3 ± 6 for the second time period, and 27.9 ± 7 for the third time period (P = .0035). The majority of reports in the third time period were submitted using the mobile application (193/337, 57%, P = .026). Deployment of a mobile application reduced barriers to adverse event reporting and increased monthly reporting rates for all clinicians.
|
/**
* Erase all backed-up data for the given package from the storage
* destination.
*
* Any application can invoke this method for its own package, but
* only callers who hold the android.permission.BACKUP permission
* may invoke it for arbitrary packages.
*/
public void clearBackupData(java.lang.String packageName) throws android.os.RemoteException
{
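    // Proxy-side AIDL call: marshal the package name, dispatch the transaction
    // to the remote backup service, and re-throw any exception it returned.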
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeString(packageName);
mRemote.transact(Stub.TRANSACTION_clearBackupData, _data, _reply, 0);
_reply.readException();
}
finally {
_reply.recycle();
_data.recycle();
}
}
|
This qualitative study deals with the subjective experiences of families with chronically ill children and with experiences of the External Care Services (EPD) in Berlin, which exclusively conduct home care for children. Learning to cope with chronic illness is a process that often demands professional help extending beyond the time spent in hospital. Essential for the analysis performed here are the systemic positions put forward by von Uexküll and Wesiack (psychosomatic medicine) and Friedemann (nursing science), who stress the connection between health and relationship. The parents' positive experiences reflect that the patient-oriented understanding of health care developed by the EPD takes in the families in many respects. Without the care offered by the EPD, some children would have been torn out of their family environment due to hospitalizations. This would have resulted in damage to the family's health.
|
/**
* Combine this and the given stats instance into a new object that represents
 * the equivalent of separately adding this and the given stat's samples to the returned instance.
* @param stats
* @return never null.
*/
public OnlineStats combine(OnlineStats stats) {
OnlineStats merged;
if (this.count == 0) {
merged = new OnlineStats(stats);
} else if (stats.count == 0) {
merged = new OnlineStats(this);
} else {
merged = new OnlineStats();
merged.count = this.count + stats.count;
merged.max = Math.max(this.max, stats.max);
merged.min = Math.min(this.min, stats.min);
merged.mean = (this.count * this.mean + stats.count * stats.mean) / (this.count + stats.count);
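            // mean2 tracks E[x^2]; its sample-weighted average over both inputs
            // lets the combined variance be recovered as E[x^2] - mean^2 below.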
merged.mean2 = (this.count * this.mean2 + stats.count * stats.mean2) / (this.count + stats.count);
merged.variance = merged.mean2 - Math.pow(merged.mean, 2);
}
return merged;
}
|
Background
When I traveled to Orlando for my friends’ Jack and Jill in January, I was delighted with the chance to capture the memories with my Sony A7II, which I have been using for as long as it’s been out. I brought only my 28mm F2, and while I was able to capture some photos I was happy with, my back and feet were not particularly happy with the weight. I began to long for a compact camera that would be easier for me to carry around on a daily basis. When I go walk my dog or if I just go out for dinner, wearing a camera on a strap or lugging around the system in a bag can be quite cumbersome, and just the act of having to take the camera out is too slow, deterring me from capturing the moments that arise in the blink of an eye. I’d carry my Sony everywhere with me but found that I was rarely using it in my day to day.
Below are some samples I took with the Sony in January, for reference:
So then comes May when I’d have the opportunity to return to Orlando for their actual wedding followed by 4+ days of Disney Parks. After some extensive research I decided to test out the Ricoh GR II and the Fuji X100F. The Fuji was by far the more attractive camera to me. In fact, I had only planned to get the Fuji at first, and ordered the Ricoh only because the Fuji was on backorder for weeks. 2 days before my vacation I managed to pick up the Fuji as well from Amazon Warehouse Deals open boxed for 100 dollars off. Some of my other considerations were the Sony RX100 series and the Fuji X70. I passed on the Sony due to the smaller sensor and my preference for prime lenses. As for the Fuji X70 I thought it would be somewhere between the Ricoh and the X100F in terms of both size and IQ, and it was also missing a built in ND filter.
Fuji X100F
I’ll cut straight to the point and say this: the Fuji X100F is the better camera. The high ISO performance is almost as good as my full frame Sony, and the build quality with the retro design is sublime. It kind of looks like a Leica (not that I’ve ever used one). The screen on the back is the best of my 3 cameras, and the hybrid viewfinder is so much fun to use. This is a camera that really inspires you to shoot.
One of the reasons I was so attracted to Fuji is their renowned film simulations. It is a huge benefit that they collaborated with Adobe to make their color profiles accessible in Lightroom. It’s not a jpeg-only feature like the filters of basically any other camera company. I can shoot raws and apply the Classic Chrome filter, which is my favorite of them all. I found Classic Chrome especially pleasing for my indoor shots. Often I find that skin tones, at least Asian skin tones, would be too yellow or oversaturated. Classic Chrome helps mute those colors to an extent. Yes, I know that these color profiles can be simulated with some work in Lightroom to begin with, but when you’re going through thousands of photos and you’re a casual like myself, having a 1-click solution is pretty convenient. I shot the entire wedding (not as the wedding photographer, just a friend) with the Fuji X100F, mostly because it looked classier alongside a suit, but I also knew that the Ricoh would not cut it for my indoor/evening shots. I get usable shots up to 6400 with the Fuji and an extra stop from the aperture (f2 vs f2.8). That’s 2 stops better than the Ricoh if you consider 3200 the maximum usable ISO for it (I do). For example, the last shot above I don’t think I would have been able to capture with the Ricoh.
I’ve already said the Fuji is the better camera and that it’s the most fun and inspiring to shoot, so it’s a pretty easy decision, right? Not quite. The truth is, the Fuji isn’t really any more portable than my Sony, nor is it better. It doesn’t fit in my pocket, I still need to carry it with a strap or a bag, and it ended up being much heavier than I thought it would be. The whole reason I was looking for a new camera was that I wanted something smaller and lighter to live alongside my Sony. The X100F is a camera that I’d be using instead of the Sony in most cases. Furthermore, for 1300 dollars I could spend a week in Hawaii. As much as I love this camera, it’s not the most practical as a secondary camera.
Ricoh GR II
The first thing I thought when I unboxed the Ricoh is “Damn, this thing is light“. Not only that, but IT FITS COMFORTABLY IN MY POCKET! In the pocket, it barely feels heavier than my phone. Even though it’s smaller than the Fuji, it feels MUCH more ergonomic in the hand. The grip is extremely secure and with the low weight and price, I am not afraid of dropping it at all. Did I mention that all the controls are accessible with one hand? I ended up taking the strap off and took it with me on a roller coaster, as seen below. The Fuji and Sony are too heavy and expensive for me to consider doing this.
Both cameras have a built in ND filter. I believe the Fuji has 1 more stop (3 vs 2), but the Ricoh has a handy Auto-ND setting. Often with the Fuji I forget to turn it on and off when moving between indoor and outdoor environments, which was very common at Disney parks. With the Ricoh, I rarely had to worry about my settings. Both cameras also feature a leaf shutter. I don’t do much flash photography at this time so I can’t take advantage of the flash sync speed, but both cameras are ninjas in terms of shutter sound with the Fuji having an edge. The Ricoh sounds like a mouseclick, but the Fuji is practically silent, even without going into electronic shutter mode.
At first I hesitated on buying this camera since it’s quite dated. The sensor is the same as the GR I from 2013 and I prefer the 35mm equivalent focal length of the X100F over the 28 of the GR. What pushed me to get it is that I feel being able to take a camera with an APS-C-sized sensor out of my pocket is insanely cool and convenient. At under 600 dollars I could buy the GR II and sell it for its eventual successor for less than the price of the X100F.
The autofocus is decent under good lighting conditions, but in low light it lags behind the Fuji. From what I know the Ricoh only has contrast-detect autofocus whereas the Fuji has both contrast and phase detection. Many GR owners purchase the camera for its snap focus feature, which can be considered a form of zone focusing. I have not had a chance to take it out for any street photography yet so I can’t speak too much for how well it works in practice.
Comparing the images I get between the two cameras, the Ricoh is sharper but the Fuji gives better colors off the bat and gives me some cleaner files. The Fuji will naturally give more shallow depth of field due to the longer focal length combined with its larger aperture, but it’s not so much that you should be buying the Fuji just for that. One thing to note is that when I was looking at the images in camera the Fuji images looked MUCH better due to it having a much superior screen, but when I loaded them up in Lightroom and applied some presets on top, the difference was much smaller. In fact, Lightroom edits are much more prominent on the Ricoh’s DNG files. I’ve heard that Lightroom does not handle Fuji’s RAF files optimally, but I haven’t had a chance to switch over to Capture One at this time to verify.
The Decision
It ended up being a clear decision that was tough to make, but I will be returning the Fuji X100F and plan on upgrading to the GR III or Fuji X70 successor whenever they’re released. It is tough because I enjoy shooting the Fuji most out of the 3 cameras, but it is not the most practical choice for me. I own the Sony since it’s the best camera of the 3 and I have a lot of vintage lenses for it that I enjoy using. The Fuji and Sony would also cannibalize each other’s use cases due to comparable performance and dimensions, but I’d lose out on flexibility and IQ with the Fuji. If I were to only own one camera and lens, I’d probably choose the X100F. For a single compact camera, it is the right combination of image quality, looks, convenient focal length, and size. Fuji just does so many things right and I know I’ll own one in the future.
The GR II, on the other hand, is a camera that I’d throw in my pocket as I go about my daily life, something I can easily take everywhere, and as such, I can get pictures with it that I can’t with any other camera. I know I won’t get the shallow depth of field that I love with my full frame lenses, but not having it forces me to use composition to isolate my subjects and trains me to become a better photographer. I only hope that the next generation will have better low light performance (my dream: f/2 AND better high ISO performance) and this will be the perfect camera for me.
Perhaps I can use the cash I get back from returning the Fuji to spend a weekend in Palm Springs 🙂
Purchase Links:
Any assistance given to keep this website up and running is much appreciated. As of right now, the only method for this website is to be self sustainable is through the affiliate links. If you are interested in purchasing gear, please consider using my links. Thanks!
Fujifilm X100F 24.3 MP APS-C Digital Camera – Silver
Fujifilm X100F 24.3 MP APS-C Digital Camera – Black
Ricoh GR II Digital Camera with 3-Inch LCD (Black)
Sony Alpha a7II Mirrorless Digital Camera – Body Only
Sony SEL28F20 FE 28mm f/2-22 Standard-Prime Lens for Mirrorless Cameras (SD Card Bundle)
Notes
Fuji X100F:
Pros:
- Absolutely love the colors and built-in film simulations. The fact that I can load them as color profiles in Lightroom when editing the RAWs is a really nice touch from Fuji/Adobe
- Build quality is superb and it just looks attractive. I feel like a wannabe Leica user (not that I’ve ever used one).
- Hybrid viewfinder
- Built-in ND and leaf shutter
- Dead silent shutter
- 35mm equivalent, F2, good high ISO performance (somewhat usable shots up to 6400)
- 24MP
- Faster autofocus

Cons:
- Not significantly more portable than my Sony since I still need a strap or bag.
- There is no Auto-ND setting that I could find, which the Ricoh GR has. I often forget to turn it on/off when moving between outdoor and indoor environments
- When I am in auto shutter speed and have my aperture wide open, the shutter only goes up to 1/1000. The max should be 1/4000. Not sure if this is by design or if it’s configurable.
- Incredibly expensive, especially for a secondary camera. Cost about as much as my vacation
- I am having trouble uploading the .RAF files to Google Photos. There are many photos that I don’t bother editing because they’re not that great, but I want to be able to see EVERYTHING on Google Photos.
- I’ve heard that Adobe handles .RAF files suboptimally, but I don’t have another editor like Capture One to compare with.
- Not the greatest grip, especially with the weight. Would not trust myself holding it on a roller coaster for example.
- Can’t use it with one hand, although the manual dials are really nice.
Ricoh GR II:
Pros:
- FITS COMFORTABLY IN MY POCKET!!!!
- Featherweight
- 28mm equivalent
- Built-in ND and leaf shutter with Auto-ND
- Snap focusing (google it)
- Sharper lens than the Fuji, especially wide open and close up
- Extremely affordable.
- Very good grip. Along with its affordable price and light weight, I’m not afraid of dropping it. I ended up taking the strap off and held it on a roller coaster with no fear.
- The camera is designed to be used in one hand. Every button and thumb rest is in a very well thought out location.
- Shoots RAWs in DNG format.

Cons:
- Weak low light performance. F2.8 and usable shots only up to ISO 3200.
- Poor low light autofocus.
- Dated sensor and lower megapixel count (16MP)
- Negligible subject isolation unless really up close
- Shutter is quiet, but still a clicking sound
- Lack of a viewfinder or articulating screen (I would consider the Fuji X70 if you want this).
- Less attractive design (could be a Pro to some people)
- Lower max shutter speed
- No MF focus ring.
|
package com.bosssoft.hr.train.jsp.example.tag;
import javax.servlet.jsp.tagext.TagSupport;
/**
 * @description: Defines the <boss:userTag /> tag
* @author: Administrator
* @create: 2020-05-29 13:50
* @since
**/
public class UserTag extends TagSupport {
}
|
RECOORD: A recalculated coordinate database of 500+ proteins from the PDB using restraints from the BioMagResBank State-of-the-art methods based on CNS and CYANA were used to recalculate the nuclear magnetic resonance (NMR) solution structures of 500+ proteins for which coordinates and NMR restraints are available from the Protein Data Bank. Curated restraints were obtained from the BioMagResBank FRED database. Although the original NMR structures were determined by various methods, they all were recalculated by CNS and CYANA and refined subsequently by restrained molecular dynamics (CNS) in a hydrated environment. We present an extensive analysis of the results in terms of various quality indicators generated by PROCHECK and WHAT_CHECK. On average, the quality indicators for packing and Ramachandran appearance moved one standard deviation closer to the mean of the reference database. The structural quality of the recalculated structures is discussed in relation to various parameters, including the number of restraints per residue, NOE completeness, and positional root mean square deviation (RMSD). Correlations between pairs of these quality indicators were generally low; for example, there is a weak correlation between the number of restraints per residue and the Ramachandran appearance according to WHAT_CHECK (r = 0.31). The set of recalculated coordinates constitutes a unified database of protein structures in which potential user- and software-dependent biases have been kept as small as possible. The database can be used by the structural biology community for further development of calculation protocols, validation tools, structure-based statistical approaches, and modeling. The RECOORD database of recalculated structures is publicly available from http://www.ebi.ac.uk/msd/recoord. Proteins 2005. © 2005 Wiley-Liss, Inc.
|
The Denver Post has fired writer Terry Frei after he tweeted that he was “very uncomfortable with a Japanese driver winning the Indianapolis 500 during Memorial Day weekend.”
After Takuma Sato won the Indy 500 on Sunday, writer Frei expressed his discomfort while clarifying that it was “nothing specifically personal.” A huge backlash ensued, as many people took the comment extremely personally.
Frei deleted the tweet, apologized and said in a lengthy apology that his father, former University of Oregon coach Gerald L. “Jerry” Frei, had fought against the Japanese in World War II, flying 67 missions in all.
Also Read: Frank Deford, Sportswriter Who Found Human Stories Behind Wins and Loss, Dies at 78
The elder Frei died in 2001, and Terry Frei said he had visited his grave and saluted him Sunday, the day of Sato’s Indy 500 win.
“I am sorry, I made a mistake, and I understand 72 years have passed since the end of World War II and I do regret people with whom I am very probably very closely aligned with politically and philosophically have been so offended,” Frei said.
The Post issued tweeted Monday that Frei was “no longer an employee.”
Also Read: 10 Women Who Have Left Fox News Shows, From Megyn Kelly to Laurie Dhue (Photos)
“We apologize for the disrespectful and unacceptable tweet that was sent out by one of our reporters,” the paper said. “The tweet doesn’t represent what we believe nor what we stand for.”
Terry Frei is also an author whose latest book, “Olympic Affair,” is set against the backdrop of the 1936 Berlin Olympics. It focuses on U.S. decathlon champion Glenn Morris’ affair with German actress and film director Leni Riefenstahl, who came to be remembered as a Nazi propagandist.
Here is the Post’s statement:
UPDATE: The Denver Post's statement on Terry Frei https://t.co/HPYG08nOe9 (corrects typo from earlier version) pic.twitter.com/3ROSPSsELE — The Denver Post (@denverpost) May 29, 2017
|
Imaging Ferroelectric Domains via Charge Gradient Microscopy Enhanced by Principal Component Analysis Local domain structures of ferroelectrics have been studied extensively using various modes of scanning probes at the nanoscale, including piezoresponse force microscopy (PFM) and Kelvin probe force microscopy (KPFM), though none of these techniques measure the polarization directly, and the fast formation kinetics of domains and screening charges cannot be captured by these quasi-static measurements. In this study, we used charge gradient microscopy (CGM) to image the ferroelectric domains of lithium niobate based on the current measured during fast scanning, and applied principal component analysis (PCA) to enhance the signal-to-noise ratio of the noisy raw data. We found that the CGM signal increases linearly with the scan speed while decreasing with temperature following a power law, consistent with the proposed imaging mechanisms of scraping and refilling of surface charges within domains, and of polarization change across domain walls. We then, based on the CGM mappings, estimated the spontaneous polarization and the density of surface charges with order-of-magnitude agreement with literature data. The study demonstrates that PCA is a powerful method in the imaging analysis of scanning probe microscopy (SPM), with which quantitative analysis of noisy raw data becomes possible.

Introduction
Lithium niobate (LiNbO3) is a uniaxial ferroelectric crystal with a large spontaneous polarization (Ps = 80 ± 5 μC cm⁻²) aligned along the crystallographic Z-axis. Because of its versatile ferroelectric properties, LiNbO3 has been used in a wide range of applications, including electro-optics, nonlinear optics, ferroelectric data storage, and microelectromechanical devices. Most of these applications involve ferroelectric domains with 180° domain walls, created by applying an external electric field to produce antiparallel (180°) domains with +Ps and −Ps polarizations, and the large spontaneous polarization of LiNbO3 often results in screening charges from the surrounding environment, so that the bound charges on the crystal surface can be compensated and the electrostatic energy minimized. Local domain structures and the dynamics of domain walls in LiNbO3 have been studied extensively using various modes of scanning probe microscopy (SPM) at the nanoscale, including piezoresponse force microscopy (PFM), Kelvin probe force microscopy (KPFM) and time-resolved Kelvin probe force microscopy (tr-KPFM), and electrostatic force microscopy (EFM). Because of experimental complications, all these SPM techniques are usually performed at slow or moderate scan speeds, less than 10 Hz per line over a 10 μm line, with which the pixel time of imaging is much longer than the time scale of the ferroelectric dynamics. Therefore, the fast formation kinetics of domains and domain walls and the evolution of the screening charges cannot be captured by such quasi-static measurements. Furthermore, none of these techniques measure the polarization directly, and thus the data interpretation is often challenging. For example, PFM images ferroelectric domains through the piezoelectric strain, and it is now well known that multiple electromechanical mechanisms contribute to the piezoresponse signal measured by PFM. Recently, charge gradient microscopy (CGM) has been developed to study ferroelectric domains and characterize their surface charges.
It operates by mechanically scraping the screening charges on the surface using a conductive scanning probe, and collecting the resultant current using conductive atomic force microscopy (cAFM). Note that these scraped charges can be compensated and refilled by the conductive probe, and since the probe is virtually grounded under cAFM, the refilled charges can be measured as a current signal, making imaging possible. Because of the fast-moving probe, the refilling process is rapid compared to other mechanisms, enabling the study of dynamic processes of domains and domain walls. More importantly, the current measured can be directly related to polarization, unlike the other SPM techniques discussed earlier. In this paper, we study the ferroelectric domain structure of LiNbO3 using CGM, and investigate the effects of scan speed and temperature on CGM signals. We rely on principal component analysis (PCA) to reduce the dimensionality of the data and enhance their signal-to-noise ratio, and demonstrate that the CGM signal increases linearly with the scan speed while decreasing with the temperature following a power law. The study suggests that the current measured under CGM arises from the scraped and refilled surface charges within a domain, and from the polarization change experienced by the scanning probe across a domain wall, enabling us to estimate the spontaneous polarization and surface charge density of LiNbO3. Experimental methods. We used a periodically poled LiNbO3 (PPLN, Asylum Research, USA) sample in our study, which is mounted on a metal puck and has dimensions of 3 × 3 × 0.5 mm³, with the poled domains approximately 10 μm in width. All the experiments were carried out on Asylum Research atomic force microscopes (Cypher and MFP-3D) using a conductive diamond probe (CDT-NCHR-10, Nanosensors, Inc.) with a spring constant around 80 N/m, as schematically shown in Fig. 1. Principal component analysis. A series of CGM current mappings were acquired under different conditions, including different scan speeds and temperatures, and the data were post-processed by principal component analysis to enhance the signal-to-noise ratio using MATLAB®, as detailed in the supplementary information (SI). Under PCA, a set of p current images (variables) of m-by-n pixels (observations) is represented as a weighted sum of orthogonal eigenvectors and their corresponding component (loading) images; the full expression is given in the SI. Results and discussions. In order to map the domain structure and identify the polarization direction of PPLN, we first performed a slow off-resonance PFM scan, since the absolute phase of the piezoresponse at the contact resonance is not well defined, and resonance-enhanced imaging at higher frequencies can change the piezoresponse due to the dynamics of the cantilever or instrumental lags. Away from the resonance, however, the polarization direction can be deduced directly from the measured piezoresponse phase signal. The domain pattern of PPLN is revealed by the PFM amplitude mapping in Fig. 2(a), wherein domains of comparable amplitude are observed, separated by domain walls with much reduced response. This is evident from the PFM phase mapping in Fig. 2(b). Given the positive d33 coefficient of PPLN, the phase should be around 180° and 0° for upward- and downward-polarized domains, respectively, which are labelled by ⊙ and ⊗ signs in Fig. 2(b). While powerful for domain imaging, PFM does not yield any information on the local polarization value. The CGM signals acquired in a separate scan but on the same area as PFM, on the other hand, reveal interesting current mappings (Fig. 2(d,e)) that match PFM well.
First of all, there are current spikes when passing over domain walls, and the sign of such spikes is reversed between trace and retrace passes, and between entering and exiting a particular domain, which can be seen more clearly from the line scans in Fig. 2. This is indeed what we observe in Fig. 2(d,e), confirming that the imaging mechanism within domains is scraping and refilling of surface charge by the virtually grounded CGM probe. When a domain wall is crossed, for example from a downward-polarized domain to an upward-polarized one, the net polarization difference is +2Ps, resulting in a positive current spike, which is reversed when going from the upward polarization to the downward one. This also explains why the current spikes reverse polarity between trace and retrace passes. These observations confirm that the imaging mechanism at domain walls is caused by the change of polarization experienced by the probe. As such, two different imaging mechanisms exist for CGM, which were observed by Hong et al. as well. We also note that Schroder et al. reported unexpected photoinduced current along domain walls in lithium niobate single crystals, though the mechanism is different there. To further understand the imaging mechanism of CGM, we investigated the effect of scan speed, ranging from ~75 μm/s to ~3 mm/s, on CGM signals. The raw CGM mappings acquired during the trace pass under slow, intermediate, and fast scan speeds are presented in Fig. 3(a-c), showing in general enhanced CGM signals under higher scan speeds. However, with currents on the order of pA, the signals are rather noisy and it is difficult to draw a quantitative conclusion. Traditionally, the evolution of a set of AFM images was compared through their mean and standard deviation, which can be unreliable given the noisy CGM measurements. Principal component analysis (PCA), a powerful tool for background and noise subtraction, was therefore employed to analyze the data. The PCA eigenvalues drop rapidly after the first one, as shown in Fig. 3(d), suggesting that the first mode has the highest possible variance and contains the most information in this set of data. The first eigenvector captures the average spectrum of the data as a function of the discrete set of scan speeds, as shown in Fig. 3(e), and its corresponding component image (1st PCA loading) shows the spatial distribution of the CGM data (Fig. 3(f)). The higher component images of PCA are shown in Fig. S1, confirming that the first mode is indeed sufficient to represent the data. The noise reduction is evident in the first component image, while the corresponding eigenvector exhibits a linear correlation between scan speed and the average current signal. Such a linear relationship is consistent with the proposed imaging mechanisms, as the current under a constant change in charge is inversely proportional to time, and thus linear in scan speed. We then investigated the origin of the current signals further by performing temperature-dependent CGM, acquired while the temperature was continuously decreasing from 83 °C to 30 °C. The raw CGM current images acquired at three distinct temperatures are shown in Fig. 4(a-c), showing higher signal at the lower temperature. To see this more clearly, this set of temperature-dependent CGM data was also analyzed by PCA. The normalized eigenvalues are plotted in Fig. 4(d).
It is observed that the second and higher modes of PCA have variances orders of magnitude smaller than that of the first mode, as confirmed by the higher-mode PCA component images in Fig. S2. As such, the first mode contains the most information, and the power-law decrease with increasing temperature exhibited by the first eigenvector, as shown in Fig. 4(e), is sufficient to represent the trend of the data. The corresponding component image is shown in Fig. 4(f), again with a much enhanced signal-to-noise ratio. This temperature dependence can be understood again from the imaging mechanism of CGM: the increased temperature reduces the spontaneous polarization and removes the screening charges, and thus reduces the CGM signal. Indeed, Tong et al. discussed that the surface of PPLN is screened via short-range adsorption of ambient humidity molecules, and the rise in temperature results in evaporation of water and a local decrease in the humidity, which significantly suppresses the CGM signal. Furthermore, when the relative ambient humidity is less than 30%, no CGM signal is observed in our experiments. Based on the CGM mappings, the spontaneous polarization is estimated in μC cm⁻² (refer to SI section 2.2 and Fig. S4), comparable to the previously measured value of Ps = 80 ± 5 μC cm⁻² for LiNbO3 crystals. Moreover, the surface charge density |σ| is estimated to be around 2.84 μC cm⁻² (for details refer to SI section 2.3 and Fig. S5). Both the spontaneous polarization and the surface charge density estimated from experimental data appear to be relatively low, perhaps because of the partial screening of polarization charges and the incomplete scraping and refilling of surface charges under the moving CGM probe. Concluding Remarks. In summary, we have used CGM and PCA to image ferroelectric domains of PPLN, and found that the CGM signal increases linearly with the scan speed while decreasing with the temperature following a power law. The observations are consistent with the proposed imaging mechanisms of scraping and refilling of surface screening charge within domains, and polarization change across domain walls, enabling us to estimate the spontaneous polarization and the density of surface charges with order-of-magnitude agreement with literature data. The study demonstrates the value of PCA, which helps us reduce the noise level and enhance the signal-to-noise ratio, making quantitative analysis of noisy raw data possible. Quantitative analysis of spontaneous polarization and surface charge density. We estimate the spontaneous polarization and the surface screening charge density |σ| as follows. Calculation of collected charges over domains and domain walls. A simple example of this analysis is visualized in Fig. S3, where trace and retrace line scans of a CGM image are shown in Fig. S3(a) as a function of scan distance. With the CGM probe speed given, the absolute values of these signals as a function of scan time were calculated and are shown in Fig. S3(b). The area under these curves is the collected charge, which was scraped over domains and domain walls. To separate these two mechanisms of charge scraping, the signals were partitioned by thresholding the current. Estimation of spontaneous polarization based on domain wall charge. The retrace signal is mostly dominated by the domain wall charges, as shown in Fig. S4(a). The image was masked by replacing all the absolute values less than 5.5 pA with zero, which is shown in Fig. S4. Estimation of surface charge density based on domain charge. As discussed in the manuscript, the trace signal contains the scraped charge over the domains. To remove the domain wall current spikes, the trace CGM image (shown in Fig. S5(a)) is masked by considering only absolute current values less than the threshold of 7 pA.
We estimate the extent of surface screening by comparing the estimated domain charge density |σ| with the ideal surface charge of a ferroelectric surface, which should be equal to the polarization charge (Ps = 80 μC cm⁻²), and found a screening ratio of 3.55%.
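To make the PCA step described above concrete, the following is a minimal Python/NumPy sketch of the decomposition (the paper used MATLAB; the array names and the synthetic data here are illustrative assumptions, not the authors' code). A stack of p current images of m-by-n pixels is reshaped so that pixels are the observations and images are the variables; the SVD then yields the eigenvalue spectrum, the first eigenvector across the scan conditions, and the first component (loading) image.
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 8, 64, 64                          # p current images of m-by-n pixels
stripes = np.tile(np.sin(np.linspace(0, 4 * np.pi, n)), (m, 1))  # domain-like pattern
speeds = np.linspace(1, 40, p)               # stand-in scan-speed axis (arbitrary units)
images = speeds[:, None, None] * stripes[None, :, :] \
         + 5.0 * rng.normal(size=(p, m, n))  # linear-in-speed signal plus noise

# Pixels are observations, images are variables: shape (m*n, p)
X = images.reshape(p, -1).T
X = X - X.mean(axis=0)

# SVD: rows of Vt are the eigenvectors over the p scan conditions,
# and U[:, k] is the k-th component (loading) image
U, S, Vt = np.linalg.svd(X, full_matrices=False)
eigenvalues = S**2 / (X.shape[0] - 1)
print("normalized eigenvalues:", np.round(eigenvalues / eigenvalues.sum(), 3))

first_eigenvector = Vt[0]                    # trend across scan speeds (~linear here)
first_component = U[:, 0].reshape(m, n)      # denoised spatial map of the domains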
|
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
# we create an empty directed graph
G = nx.DiGraph()
# we add 3 nodes with labels Input,A,B
G.add_nodes_from(["Input","A","B"])
G.add_edge("Input","A")
G.add_edge("A","B",weight=1)
G.add_edge("B","A",weight=-1)
# we display the graph
pos = nx.spring_layout(G)
edge_labels = nx.get_edge_attributes(G,'weight')
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
nx.draw_networkx(G, pos, node_size=1500, with_labels=True)
plt.show()
# compute the adjacency matrix of the network (and check it!)
A_matrix = nx.adjacency_matrix(G, nodelist=None)
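# quick check: print the dense adjacency matrix; with nodes ordered by
# insertion (Input, A, B) the expected entries follow from the edges above
# (the unweighted Input->A edge defaults to weight 1)
print(A_matrix.todense())
# [[ 0  1  0]
#  [ 0  0  1]
#  [ 0 -1  0]]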
|
Mean-Field Models for EEG/MEG: From Oscillations to Waves. Neural mass models have been used since the 1970s to model the coarse-grained activity of large populations of neurons. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations they are phenomenological in nature, and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. Here we consider a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for both synaptic and gap-junction interactions. The mean-field model takes a similar form to a standard neural mass model, with an additional dynamical equation to describe the evolution of within-population synchrony. As well as reviewing the origins of this next generation mass model we discuss its extension to describe an idealised spatially extended planar cortex. To emphasise the usefulness of this model for EEG/MEG modelling we show how it can be used to uncover the role of local gap-junction coupling in shaping large scale synaptic waves. Introduction. The use of mathematics has many historical successes, especially in the fields of physics and engineering, where mathematical concepts have been put to good use to address challenges far beyond the context in which they were originally developed. Physicists in particular are well aware of "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" (Wigner 1960). One recent breakthrough in the field of large-scale brain modelling has come about because of advances in obtaining exact mean-field reductions of certain classes of coupled oscillator networks via the so-called Ott-Antonsen (OA) ansatz (Ott and Antonsen 2008). This is especially important because the mathematical step from microscopic to macroscopic dynamics has proved elusive in all but a few special cases. Indeed, many of the current models used to describe coarse-grained neural activity, such as the Wilson-Cowan (Wilson and Cowan 1972), Jansen-Rit (Jansen and Rit 1995), or Liley model, are phenomenological in nature. Nonetheless they have been used extensively to study and explore the potential mechanisms that coordinate brain rhythms underlying cognitive processing and large scale neuronal communication (Fries 2005). For example, such neural mass models have recently been used to understand cross-frequency coupling between brain areas, to understand how patterns of functional connectivity may arise in brain imaging studies, and are a key ingredient of the Virtual Brain project that aims to deliver the first open simulation of the human brain based on individual large-scale connectivity. Moreover they have been used to uncover how hyper- and hypo-synchrony of neuronal network firing may underpin brain dysfunction including epilepsy. Making use of the OA reduction, Luke and colleagues were able to obtain exact asymptotic dynamics for networks of pulse-coupled theta-neurons (Ermentrout and Kopell 1986). Although the theta-neuron model is simplistic, it is able to capture some of the essential features of cortical firing patterns, such as low firing rates.
As such, this mean-field reduction is a candidate for a new type of cortical neural mass model that makes a stronger connection to biological reality than the phenomenological models mentioned above. The theta-neuron is formally equivalent to the quadratic integrate-and-fire (QIF) model, a mainstay of many studies in computational neuroscience, e.g. Dipoppa and Gutkin. Interestingly, an alternative to the OA approach has been developed by Montbrió et al. that allows for an equivalent reduction of networks of pulse-coupled QIF neurons, and establishes an interesting duality between the two approaches. In the OA approach the complex Kuramoto order parameter is a fundamental macroscopic variable and the population firing rate is a function of the degree of dynamically evolving within-population synchrony. Alternatively, in the approach of Montbrió et al. average voltage and firing rate couple dynamically to describe emergent population behaviour. Given that both approaches describe the same overall system exactly (at least in the thermodynamic limit of an infinite number of neurons) there must be an equivalence between the two macroscopic descriptions. Montbrió et al. have further shown that this relationship takes the form of a conformal map between the two physical perspectives. This correspondence is very useful when dealing with different types of neuroimaging modality. For example, when looking at power spectrograms from electro- or magneto-encephalograms (EEG/MEG), it is useful to contemplate the Kuramoto order parameter since changes in coherence (synchrony) of spike trains are likely to manifest as changes in power. On the other hand the local field potential recorded by an extracellular electrode may more accurately reflect the average population voltage. A model with a perspective on both, simply by a mathematical change of viewpoint, is not only useful for describing experimental data, it may also help the brain imaging community develop new approaches that can exploit a non-intuitive link between seemingly disparate macroscopic variables. Importantly, for this to be relevant to the real world some further features of neurobiology need to be incorporated, as purely pulsatile coupling is not expected to capture all of the rich behaviour seen in brain oscillations and waves. In particular, synaptic processing and gap-junction coupling at the level of localised populations of neurons, and axonal delays at the larger tissue scale, are all well known to make a major contribution to brain rhythms, both temporal and spatio-temporal (Nunez and Srinivasan 2005; Buzsáki 2011). Fortunately, these biological extensions, which generalise the initial theta-neuron and QIF network models with pulsatile coupling, are natural and easily accommodated. Work in this area has already progressed, e.g. with theoretical work by Laing and by Pietras et al. on how to treat gap junctions, and by Coombes and Byrne on the inclusion of realistic synaptic currents (governed by reversal potentials and dynamic conductance changes). Recent work by Byrne et al. has also considered the inclusion of finite action potential speeds. In this paper we consider a synthesis of modelling work to date on developing a new class of mean-field models fit for use in complementing neuroimaging studies, and present some new results emphasising the important role of local gap-junction coupling in shaping brain rhythms and waves.
Even without the inclusion of gap junctions, a first major success of this so-called next generation neural mass and field modelling approach has been in explaining the phenomenon of beta-rebound. Here a sharp decrease in neural oscillatory power in the EEG/MEG beta band (around 15 Hz) is observed during movement, followed by an increase above baseline on movement cessation. Standard neural mass models cannot readily reproduce this phenomenon, as they cannot track changes of synchrony within a population. On the other hand the next-generation models treat population coherence as fundamental, and are able to track and describe changes in synchrony in a way consistent with movement-related beta decrease, followed by an increase above baseline upon movement termination (post-movement beta rebound). Moreover, these models are capable of explaining the abnormal beta-rebound seen in patients with schizophrenia. Beta decrease and rebound are special cases of event related synchrony/desynchrony (ERS/ERD), as measured by changes in power at given frequencies in EEG/MEG recordings (Pfurtscheller and da Silva 1999), and as such this class of model clearly has wider applicability than standard neural mass models, which cannot describe ERD/ERS because their level of coarse-graining does not allow one to interrogate the degree of within-population synchrony. By merging this new dynamical model of neural tissue with anatomical connectome data it has also been possible to gain a perspective on whole brain dynamics, and preliminary work in Byrne et al. has given insight into how patterns of resting state functional connectivity can emerge and how they might be disrupted by transcranial magnetic stimulation. Despite the success of the next generation models that include synaptic processing it is well to recognise the importance of direct electrical communication between neurons that can arise via gap junctions. Without the need for receptors to recognise chemical messengers, gap junctions are much faster than chemical synapses at relaying signals. The communication delay for a chemical synapse is typically in the range 1-100 ms, while that for an electrical synapse may be only about 0.2 ms. Gap junctions have long been thought to be involved in the synchronisation of neurons (Bennett and Zukin 2004) and are believed to contribute to both normal and abnormal physiological brain rhythms, including epilepsy (Velazquez and Carlen 2000). In the "Neural Mass Model" section we introduce the mathematical description for the microscopic spiking cell dynamics as a network of QIF neurons with both synaptic and gap-junction coupling. We present the corresponding mean-field ordinary differential equation model with a focus on the bifurcation properties of the model under variation of key parameters, including the level of population excitability and the strength of gap-junction coupling. A simple cortical model built from two sub-populations, one excitatory and the other inhibitory, is shown to produce robust oscillations via a Hopf bifurcation. The derivation of the macroscopic equations of motion is deferred to a technical appendix. This new class of neural mass model is used as a building block in the "Neural Field Model" section to construct a continuum model of cortical tissue in the form of an integro-differential neural field model. Here, long-range connections are mediated by action potentials giving rise to space-dependent axonal delays.
For computational ease we reformulate the neural field as a brain-wave partial differential equation, and pose it on idealised one- and two-dimensional spatial domains. A Turing analysis is performed to determine the onset of instabilities that lead to novel patterned states, including bulk oscillations and periodic travelling waves. These theoretical predictions, again with details deferred to a technical appendix, are confirmed against direct numerical simulations. Moreover, beyond bifurcation we show that the tissue model can support rich rotating structures, as well as localised states with dynamic cores. Finally, in the "Discussion" section we outline further applications and extensions of the work presented in this paper. Neural Mass Model. Here we describe a new class of neural mass model that can be derived from a network of spiking neurons. The microscopic dynamics of choice is the QIF neuron model, which is able to replicate many of the properties of cortical cells, including a low firing rate. In contrast to the perhaps more well studied linear or leaky IF model it is also able to represent the shape of an action potential. This is important when considering electrical synapses, whereby neurons directly "feel" the shape of action potentials from other neurons to which they are connected. An electrical synapse is an electrically conductive link between two adjacent nerve cells that is formed at a fine gap between the pre- and post-synaptic cells known as a gap junction and permits a direct electrical connection between them. They are now known to be ubiquitous throughout the human brain, being found in the neocortex (Galarreta and Hestrin 1999), hippocampus (Fukuda and Kosaka 2000), the inferior olivary nucleus in the brain stem, the spinal cord, the thalamus (Hughes and Crunelli 2007), and have recently been shown to form axo-axonic connections between excitatory cells in the hippocampus (on mossy fibers). It is common to view the gap junction as nothing more than a channel that conducts current according to a simple ohmic model. For two neurons with voltages v_i and v_j the current flowing into cell i from cell j is proportional to v_j − v_i. This gives rise to a state-dependent interaction. In contrast, chemical synaptic currents are better modelled with event-driven interactions. If we denote the mth firing time of neuron j by T_j^m, then the current received by neuron i if connected to neuron j would be proportional to ∑_{m∈ℤ} s(t − T_j^m), where s is a temporal shape that describes the typical rise and fall of a post-synaptic response. This is often taken to be the Green's function of a linear differential operator Q, so that Qs = δ, where δ is a Dirac delta spike. Throughout the rest of this paper we shall take s(t) = α² t e^{−αt} H(t), where H is a Heaviside step function. In this case the operator Q is second order in time and given by Q = (1 + α⁻¹ d/dt)², where α⁻¹ is the time-to-peak of the synapse. We are now in a position to consider a heterogeneous network of N quadratic integrate-and-fire neurons with voltage v_i and both gap-junction and synaptic coupling: τ dv_i/dt = v_i² + η_i + (κ_v/N) ∑_{j=1}^N (v_j − v_i) + (κ_s/N) ∑_{j=1}^N ∑_{m∈ℤ} s(t − T_j^m). Here, firing times are defined implicitly by v_j(T_j^m) = v_th. The network nodes are subject to reset: v_i → v_r at times T_i^m. The parameter τ is the membrane time constant. The strengths of gap-junction and synaptic coupling are κ_v and κ_s respectively. The background inputs η_i are random variables drawn from a Lorentzian distribution with median η₀ and half-width Δ.
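For direct simulations of such a network, Lorentzian-distributed drives can be sampled with the inverse-CDF formula η_i = η₀ + Δ tan(π(u_i − 1/2)), with u_i uniform on (0, 1); a brief Python sketch with illustrative parameter values follows.
import numpy as np

rng = np.random.default_rng(1)
N, eta0, delta = 1000, 1.0, 0.5                 # illustrative values
u = rng.uniform(size=N)
eta = eta0 + delta * np.tan(np.pi * (u - 0.5))  # Lorentzian (Cauchy) samples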
The value of η₀ can be thought of as setting the level of excitability, and Δ as the degree of heterogeneity in the network. The larger η₀ is, the more neurons would fire if uncoupled, and the larger Δ is, the more dissimilar the inputs are. A schematic of a QIF network and its reduction to a neural field model is shown in Fig. 1, with details of the neural field formulation described in the "Neural Field Model" section. The mean-field reduction of the network model can be achieved by using the approach of Montbrió et al. This is described in detail in Appendix 1, and is valid for globally coupled cells in the thermodynamic limit N → ∞. The network behaviour can be summarised by the instantaneous mean firing rate R(t) (the fraction of neurons firing at time t), the average membrane potential V(t), and the synaptic activity U(t). The synaptic activity U is driven by the mean firing rate according to QU = R, with mean-field dynamical equations for (R, V) of the form τ dR/dt = Δ/(πτ) + 2RV − κ_v R and τ dV/dt = V² + η₀ − (πτR)² + κ_s U. Interestingly, this (R, V) perspective on population dynamics can be mapped to one that tracks the degree of within-population synchrony described by the complex Kuramoto order parameter Z according to the conformal map Z = (1 − W*)/(1 + W*), where W = πτR + iV and W* is the complex conjugate of W. The corresponding dynamics for Z is given in Appendix 1. Alternatively, one can evolve the model for (R, V, U) and then obtain results about the synchrony |Z| by the use of the conformal map. The mean-field model accurately describes the underlying spiking network (Fig. 2). A network of 1000 synaptically and electrically coupled QIF neurons (blue) was simulated and compared to the mean-field dynamics (red). The finite size fluctuations are most apparent for the membrane potential V. However, the overall behaviour is similar. As expected, increasing the population size reduces the finite size fluctuations. A previous instance of this model, without gap-junction coupling and with synaptic reversal potentials, was applied to describe beta-rebound, as seen in real MEG data. Beta-rebound is a special case of event-related desynchronisation and synchronisation, whereby power in the beta band decreases at movement initiation and rebounds above baseline after movement termination. For our model, which does not incorporate synaptic reversal potentials, we find that gap-junction coupling is important for beta-rebound. In particular, our results suggest that there is a delicate balance between too little gap-junction coupling and too much gap-junction coupling (Fig. 3). A temporally filtered square pulse of length 400 ms and magnitude 3 A was applied to the model at t = 0 ms to mirror a movement. For intermediate values of gap-junction coupling, there is a reduction in beta power at movement onset (0 ms), followed by a sharp increase in power shortly after movement termination (500 ms). The transient increase in beta band power presents with high within-population synchrony, confirming the link between rebound and synchronisation. With weak gap-junction coupling, the system does not oscillate, and as such, beta rebound is not possible. With strong gap-junction coupling, the system oscillates, but after movement termination the system returns, almost immediately, to its original behaviour. The population is overly synchronised at steady state, and as such, a transient of high synchrony is not possible. Single Population: Bifurcation Analysis. We first consider a single excitatory population (κ_s > 0).
In contrast to a scalar rate model with self-feedback (as an exemplar of a single population model), the next generation model has at least two variables (to describe either the synchrony Z, or the pair (R, V)) and thus is of high enough dimension to support oscillations in time (Fig. 4). (Fig. 1 caption: Model schematic. At each point in a two-dimensional spatial continuum there resides a density of QIF neurons whose mean-field dynamics are described by the triple (R, V, U), where R represents population firing rate, V the average membrane potential, and U the synaptic activity. The non-local interactions are described by a kernel w, taken to be a function of the distance between two points. The space-dependent delays arising from signal propagation along axonal fibres are determined in terms of the speed of the action potential c.) Examining the profile of these oscillations, we observe that the peaks and troughs of the firing rate R and the synchrony |Z| roughly coincide. This indicates, rather unsurprisingly, that when a population is highly synchronised the population firing rate will be high. As the strength of gap-junction coupling κ_v is decreased the system undergoes a Hopf bifurcation and oscillations disappear (Fig. 5). Note that to the right of the Hopf bifurcation the amplitude and frequency of the oscillations increase with κ_v. Increasing the level of excitability η₀ also leads to oscillatory behaviour. A continuation of the Hopf bifurcation in κ_v and η₀ is shown for different values of Δ (Fig. 5). The system oscillates for parameter values to the right of these curves. Remembering that Δ sets the level of heterogeneity, we note the window for oscillations gets smaller as the heterogeneity of the network is increased. Excitatory-Inhibitory Network: Bifurcation Analysis. The single population model can be easily extended to a two-population network, consisting of an excitatory and an inhibitory population, labelled by E and I respectively. Synaptic coupling is present both within and between populations, while gap-junction coupling only exists between neurons in the same population. The augmented system of equations, describing the mean firing rate R and the average membrane potential V of each population, as well as 4 distinct synaptic variables U for each of the synaptic connections, is presented in Appendix 2. The excitatory-inhibitory network possesses a rich repertoire of dynamics. For example, it is possible to generate bursts of high frequency and high amplitude activity at a slow burst rate (Fig. 6). This pattern of activity is typical in epileptic seizures. Decreasing the gap-junction coupling strengths κ_v^E and κ_v^I results in smoother, lower amplitude oscillations, more in line with healthy brain oscillations. We note that κ_v^E and κ_v^I are not the only parameters that can change the profile of the oscillations; reducing η₀^E (the median background drive to the excitatory population) can also eradicate the seizure-like oscillations. Next we examine the bifurcation structure of the excitatory-inhibitory network for different combinations of gap-junction coupling strengths κ_v^E and κ_v^I (Fig. 7). With no gap-junction coupling in either population (Fig. 7(a)), the amplitude of oscillation increases significantly for this branch of oscillatory solutions. An additional branch of oscillatory solutions emerges for low η₀^I, with moderate amplitude oscillations for the firing rate of the inhibitory population R_I and low amplitude oscillations for the excitatory population R_E.
Interestingly, the two oscillatory solutions co-exist for η₀^I ≈ −7.5 to 2.5. Jansen and Rit (Jansen and Rit 1995) demonstrated that transitions between seizure-like and healthy brain activity could be achieved within a neural mass framework. With a good understanding of the behaviour of the spatially clamped system, we move on to consider the spatially extended neural field model. Neural Field Model. Brain waves are inherently dynamical phenomena and come in a vast variety of forms that can be observed with a wide range of neuroimaging modalities. For example, at the mesoscopic scale it is possible to observe a rich repertoire of wave patterns, as seen in voltage-sensitive dye imaging data from the primary visual cortex of the awake monkey, and in local field potential signals across the primary motor cortex of monkeys. At the whole brain scale they can manifest as EEG alpha oscillations propagating over the scalp, and as rotating waves (defined as a significant increase in phase offset with rotation about a wave center) seen during human sleep spindles with intracranial electrocorticogram recordings.
The mass model (defined locally) incorporates gap junctions, while the only coupling between masses is via synaptic currents. The model in Laing treats a linear array with nearest neighbour electrical interactions (representing cells that touch) as well as allowing for interactions beyond nearest neighbour. For large scale brain modelling it is more natural to view the brain as a network of synaptically interacting neural masses, each with its own local synaptic and gap-junction currents, with longer range interactions mediated only by synaptic currents. In this section we shall work with the explicit choices of structural connectivity w(x) = (|x| − 1)e −|x| in 1D and w(r) = (r∕2 − 1)e −r ∕(2 ) in 2D (where x represents distance in 1D, and r represents radial distance in 2D). For convenience we have chosen spatial units so that the scale of exponential delay is unity, though note that typical values for the decay of excitatory connections between cortical areas (at least in macaque monkeys) is ∼ 10 mm (). Both of the above kernel shapes have an inverted wizard hat shape and are balanced in the sense that the integral over the whole domain is zero. They also allow for a reformulation of the neural field model as a partial differential equation, as detailed in Appendix 3. The resulting brain-wave equation is very amenable to numerical simulation using standard (e.g. finite difference) techniques. Before we do this, it is first informative to determine some of the patterning properties of the neural field model using a Turing instability analysis. Below we outline the results of the analysis and discuss the ensuing patterns for the neural field model in both 1D and 2D. One Spatial Dimension Turing instability analysis, originally proposed by Turing in 1952(Turing 1952, is a mechanism for exploring the emergence of patterns in spatio-temporal system, including neural fields. Similar to the bifurcation analysis for the neural mass model, it allows us to determine the parameter values for which oscillations and patterns occur. Bulk oscillations, whereby synchronous activity across the spatial domain varies uniformly at the same rate, emerge at a Hopf bifurcation. Static patterns, which do not change with time, emerge at a Turing bifurcation. Dynamic patterns, that oscillate in time and space, emerge at a Turing-Hopf bifurcation. The 1D neural field model, given in Appendix 3 by, supports both bulk oscillations and spatio-temporal patterns. Using the inverted wizard hat connectivity kernel (longrange excitation and short-range inhibition), we find Hopf and Turing-Hopf bifurcations (Fig. 8 left). See Appendix 4 for details of the analysis. For the chosen parameter values and weak gap-junction coupling ( v ≲ 0.8 ), the spatiallyuniform steady state is always stable and neither patterns nor oscillations exist. Increasing the median background drive 0 moves the Hopf and Turing-Hopf curves down in the c-V plane, allowing for oscillations and patterns in the absence of gap junctions ( v = 0 ). For slow action potential speeds ( c ≲ 0.2 ), the system first undergoes a Hopf bifurcation as v is increased and bulk oscillations emerge (Fig. 8I). As v is increased further, the system undergoes a Turing-Hopf bifurcation and standing waves emerge (Fig. 8II). For faster action potential speeds ( c ≳ 0.2 ), the Turing-Hopf bifurcation occurs before the Hopf, and we see periodic travelling and standing waves between the two bifurcations (Fig. 8III). 
To assess the role of gap junctions, we fixed the action potential speed c = 0.11 and explored the dynamics of the synchrony variable |Z| for different gap-junction coupling strengths κ_v (Fig. 9). For weak gap-junction coupling (I), there is a regular standing wave and the level of synchronisation is low. As κ_v is increased, bulk oscillations emerge and the level of synchronisation increases (II). Increasing κ_v further leads to the emergence of mixed dynamics, with both spatial and temporal patterning. The tissue is now highly synchronised, confirming the belief that gap-junction coupling increases the level of synchronisation. For a standard wizard hat coupling kernel (long-range inhibition and short-range excitation) the neural field model can undergo a Turing bifurcation, as well as Hopf and Turing-Hopf bifurcations (see Supplementary material 1 panel (a)). Changing the sign of the synaptic coupling strength κ_s changes the coupling to long-range inhibition and short-range excitation. When Turing and Hopf instabilities occur simultaneously, interesting patterns emerge. In particular, we see stationary bumps where the activity at the centre of the bump oscillates in both space and time (see Supplementary material 1 panel (b)). We will discuss the two-dimensional version of such patterns in more detail below. Two Spatial Dimensions. A Turing analysis was also performed for the 2D neural field equation, given in Appendix 3, and a very similar bifurcation structure was found (see Supplementary material 2 panel (a)). As expected, close to the Hopf bifurcation the activity of the tissue oscillates in time, but no spatial pattern emerges (see Supplementary material 3). Near the Turing-Hopf bifurcation we see both planar waves (Supplementary material 4) and radial waves (Supplementary material 5) depending on initial conditions. Close to the intersection of the Turing-Hopf and Hopf bifurcations we see mixed spatio-temporal dynamics (Supplementary material 6). Away from bifurcation, more interesting patterns emerge. We fix the action potential speed c = 1 and vary the gap-junction coupling strength κ_v to assess how gap-junction coupling affects patterning. For weak gap-junction coupling, we observe rotating waves with source and sink dynamics where the waves collide with each other (Fig. 10). The domain shown contains 12 rotating cores. Periodic boundary conditions were used. Hence, the cores at the edge of the domain wrap around to those on the other side. Supplementary material 7 shows the temporal evolution of the synchrony variable |Z|, from which the cores and rotations are readily observed. The direction of rotation alternates, such that every second core rotates clockwise/anti-clockwise. As the gap-junction coupling strength κ_v is increased, robust spirals emerge at the centre of the rotating cores. The spiral is tightly wound with a diffused tail of high amplitude activity that propagates into the rest of the domain and interacts with the other rotating waves (Fig. 11). The time course of a point close to the centre of a rotating core (green dot) depicts higher amplitude oscillations for the firing rate R, mean membrane potential V and level of synchronisation |Z| when compared to the simulations for lower gap-junction coupling strength κ_v (Fig. 10). In addition, the peaks in R are sharper and the minimum level of synchrony |Z| is substantially higher. The temporal evolution for the full tissue can be seen in Supplementary material 8.
We again note that increasing the gap-junction coupling strength increases the level of synchronisation across the tissue. For κ_v = 0.695, the synchrony variable oscillates between 0.02 and 0.36. For κ_v = 0.8, it oscillates between 0.12 and 0.56. Increasing κ_v further does not change the overall dynamics, but does result in higher levels of synchronisation. This supports the hypothesis that gap-junction coupling leads to more synchronous activity. As mentioned in the "One Spatial Dimension" section, for a regular wizard hat connectivity kernel (short-range excitation and long-range inhibition) the neural field supports static Turing patterns: periodic bumps of high activity in 1D and a periodic lattice of high-activity spots in 2D. When a Hopf instability coincides with this Turing instability, patterns form at the centre of these localised states. In 2D, patterns of concentric circles can appear within spots when the two bifurcations coincide (Fig. 12). Activity within a localised state can oscillate in time, while the activity in the surround is constant with a low firing rate. These patterns are reminiscent of chimeras (Abrams and Strogatz 2004; Kuramoto and Battogtokh 2002; Laing 2009a, b), as seen in networks of coupled oscillators, where a fraction of the oscillators are phase-locked or silent while the others oscillate incoherently. Note how the peaks in firing rate coincide with peaks in synchrony. However, in the surround synchrony is high, but the firing rate is minuscule. This indicates that the neurons are also synchronised at rest. A video illustrating how these exotic patterns evolve on the entire spatial domain is provided in Supplementary material 9, and the bifurcation diagram is given in Supplementary material 2 panel (b). The patterns presented here persist with refinement of the numerical mesh, so we are confident that the relatively sharp changes between spatial points are not just a numerical artefact. When not perfectly synchronised, the relative timing of oscillations across single areas or distant regions in the cortex can give rise to a range of flexible phase offsets which can manifest as travelling waves of various shapes, including plane, radial and spiral waves. The numerical simulations presented above highlight the ease with which these can be generated within the next generation neural field model with local gap-junction currents, and in particular spiral waves. The latter are thought to be highly relevant to status epilepticus, characterised by the formation of spiral waves that emerge after wavefront annihilation and exhibit complex interactions. Discussion. Mean-field models have proven invaluable in understanding neural dynamics. Although phenomenological in nature, coarse-grained neural mass/field models have proven particularly useful in describing neurophysiological phenomena, such as EEG/MEG rhythms (Zhang 1996), cortical waves, binocular rivalry (Laing and Chow 2002; Bressloff and Webber 2012), working memory and visual hallucinations (Ermentrout and Cowan 1979). The exclusion of synchrony in standard neural mass/field models prohibits them from describing event-related synchronisation and desynchronisation: the increase and decrease of oscillatory EEG/MEG power due to changes in synchrony within the neural tissue. Here we presented and analysed a recently developed neural mass/field model that incorporates within-population synchrony.
In contrast to other reductive approaches for describing the behaviour of populations of spiking neurons, the one described here is exact (in the thermodynamic limit) for realistic event-driven models of (non-instantaneous) synaptic processing. For example, the spike-density formalism for reducing networks of linear integrate-and-fire neurons requires a moment closure approximation (Ly and Tranchina 2007), whilst Fokker-Planck approaches for describing renewal-type spiking neurons often only reduce after the truncation of some eigenfunction expansion. The mean-field model presented here has previously been applied to real MEG data in Byrne et al. (at the neural mass level) and used with MRI-derived structural connectivity in Byrne et al. (for a network of neural masses), though not with the inclusion of gap junctions. The main benefit of such a model is that it is derived from a population of interacting spiking neurons, with the QIF model incorporating a reasonable representation of the action potential shape. This further allows for the inclusion of realistic gap junctions at the cellular level. Gap junctions are known to promote synchrony within neural tissue (Watanabe 1958; Bennett 1977) and the strength of these connections has been linked to the excessive synchronisation driving epileptic seizures. Nonetheless, it is also important to recognise the important effects that the extracellular space has on seizure dynamics, as discussed in Wei et al. Recent work by Martinet et al. has emphasised the usefulness of bringing models to bear on this problem, and coupled the Steyn-Ross neural field model to a simple dynamics for local extracellular potassium concentration. Here, gap junctions are modelled by appending a diffusive term to a standard neural field, and increases in the local extracellular potassium concentration act to decrease the inhibitory-to-inhibitory gap-junction diffusion coefficient (to model the closing of gap junctions caused by the slow acidification of the extracellular environment late in seizure). A more refined version of this phenomenological approach would be to replace the Steyn-Ross model with the neural field described here. This would allow a more principled study of how slow changes in the extracellular environment could initiate wave propagation, leading to waves that travel, collide, and annihilate. Indeed, simulations of the next-generation neural field model (without coupling to the extracellular space) have already shown such rich transient dynamics, including seizure-like oscillations (and their dependence on the strength of gap-junction coupling). It would be interesting to explore this further, and in particular the transitions whereby spatio-temporal wave patterns are visited in sequence. This has already been the topic of a major modelling study by Roberts et al., who considered a variety of more traditional neural mass models in a connectome inspired network using the 998-node Hagmann et al. dataset with a single fixed axonal delay. A similar computational study, with a focus on spiral waves and sinks/sources from which activity emanates/converges, could also be undertaken using the alternative neural mass model presented here, and with the further inclusion of space-dependent axonal delays.
Moreover, electrical stimulation can easily be integrated into the model, by returning to the microscopic voltage dynamics (which ensure current balance) and including a time-dependent drive, say A(t), which could represent a pattern of applied transcranial direct current. This modifies the background drive in the mean-field model according to η₀ → η₀ + A(t). In Byrne et al. this approach was used to determine the effects of transcranial magnetic stimulation (with an induced electrical form for A(t)) on patterns of network functional connectivity. Finally, it is well to note the assumption throughout our modelling study that chemical and electrical synapses operate independently. However, there is now accumulating evidence to suggest that this might not be the case (Pereda 2014). For example, neurotransmitter modulators released by nearby synaptic terminals can regulate the synaptic strength of co-localised chemical and electrical synapses through the activation of G protein-coupled metabotropic receptors. All of the above are topics of ongoing investigation and will be reported upon elsewhere. Appendix 1: Mean-Field Reduction. Consider a heterogeneous network of N quadratic integrate-and-fire neurons with voltage v_i and both gap-junction and synaptic coupling, as given in the main text. Here, the mth firing time of the jth neuron is defined implicitly by v_j(T_j^m) = v_th. The network nodes are subject to reset: v_i → v_r at times T_i^m. The strengths of gap-junction and synaptic coupling are κ_v and κ_s respectively. The function s(t) represents the shape of a post-synaptic response (to a Dirac delta spike) and will be taken to be the Green's function of a linear differential operator Q. For an alpha-function s(t) = α² t e^{−αt} H(t), where H is a Heaviside function, Q = (1 + α⁻¹ d/dt)², whilst for an exponential response s(t) = α e^{−αt} H(t), Q = (1 + α⁻¹ d/dt). The background drives η_i are random variables drawn from a Lorentzian distribution g(η) = (Δ/π) / ((η − η₀)² + Δ²), with median η₀ and half-width Δ. The threshold v_th and reset v_r values are taken to be ∞ and −∞, respectively. To derive the mean-field equations we follow closely the exposition by Montbrió et al. Consider the thermodynamic limit N → ∞ with a distribution of voltage values ρ(v|η, t). The continuity equation for ρ is ∂ρ/∂t + ∂(ρ v̇)/∂v = 0, where v̇ denotes the single-neuron velocity; the average voltage and population firing rate are obtained from ρ by integrating vρ over voltage and by evaluating the flux through threshold, respectively. We now assume a solution ρ(v|η, t) of the Lorentzian form ρ(v|η, t) = (1/π) x(η, t) / [(v − y(η, t))² + x²(η, t)]. For a fixed η the firing rate r(η, t) can be calculated as ρ(v → ∞|η, t) v̇(v → ∞|η, t), from which we may establish that r(η, t) = x(η, t)/(πτ). By exploiting the structure of ρ, with poles at v± = y ± ix, a contour integration shows that the average voltage for fixed η is y(η, t), where the voltage integral is interpreted as a Cauchy principal value (PV). After averaging over the distribution of single neuron drives given by g(η) we obtain the macroscopic rate R(t) and voltage V(t). For fixed η, substitution of the Lorentzian ansatz into the continuity equation and balancing powers of v shows that x and y obey two coupled differential equations that can be written as a single complex equation for w(η, t) = x(η, t) + iy(η, t). After evaluating the integrals using contour integration (and using the fact that g has poles at η± = η₀ ± iΔ) the coupled equations for (R, V) can be found, as given in the main text. The complex quantity W = πτR + iV is known to be related to the Kuramoto order parameter Z by the conformal map Z = (1 − W*)/(1 + W*), where W* is the complex conjugate of W. The evolution equation for Z is the complex differential equation obtained by applying this map to the (R, V) dynamics, with QU = R(Z). Appendix 2: Interacting Sub-populations. Consider an excitatory population labelled by E coupled to an inhibitory one labelled by I.
In this case there are four distinct synaptic inputs with connection strengths κ_s^{ab}, a, b ∈ {E, I}, with κ_s^{aE} > 0 and κ_s^{aI} < 0. Each population has a background drive drawn from a Lorentzian with median η₀^a and half-width Δ_a, a ∈ {E, I}. Generalising the mean-field model derived in Appendix 1, for gap-junction coupling only within a given sub-population, we obtain one (R, V) pair per population together with the four synaptic variables U_{ab}. For a second-order synapse with time-scale α_{ab}⁻¹ we would set Q_{ab} = (1 + α_{ab}⁻¹ d/dt)². Note that in a slightly more general setting that would allow for electrical connections between excitatory and inhibitory sub-populations, the within-population gap-junction terms would be replaced by their cross-population counterparts. Appendix 3: Brain Wave Equation. A simple continuum model for an effective single population dynamics can be written in the form of the mass model with drive ψ = w ⊗ R. The symbol ⊗ is used to describe spatial interaction within the neural field model, while w represents structural connectivity. For example, in the plane we might consider (R, V, U) = (R(r, t), V(r, t), U(r, t)), with r ∈ ℝ² and t ≥ 0, and ψ given by the delayed integral in the main text, where c represents the speed of an action potential. We note that ψ can be written as a convolution: ψ = G ⊗ R, where G(r, t) = w(r) δ(t − r/c). For certain choices of w it is possible to exploit this convolution structure to obtain a PDE model, often referred to as a brain-wave equation (Nunez 1974; Jirsa and Haken 1997).
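As a companion to the equations above, the following is a minimal Python sketch that integrates the single-population (R, V, U) mean-field model with a second-order alpha synapse and recovers the synchrony |Z| through the conformal map. The parameter values, and the exact placement of τ and the coupling terms (in particular the −κ_v R term), are illustrative assumptions consistent with the description above rather than the authors' code.
import numpy as np

# Illustrative parameters (assumed values, not taken from the paper)
tau, delta, eta0 = 1.0, 0.5, 1.0      # time constant, heterogeneity, median drive
kappa_v, kappa_s = 0.8, 1.0           # gap-junction and synaptic strengths
alpha = 1.0                           # inverse time-to-peak of the alpha synapse
dt, steps = 1e-3, 40000

R, V, U, Udot = 0.1, -1.0, 0.0, 0.0   # Udot is the auxiliary synaptic variable
Zmod = np.zeros(steps)

for k in range(steps):
    # (R, V) mean-field equations; the -kappa_v*R placement is a sketch of
    # how the gap junctions enter, as described in the text
    dR = (delta / (np.pi * tau) + 2.0 * R * V - kappa_v * R) / tau
    dV = (V**2 + eta0 - (np.pi * tau * R)**2 + kappa_s * U) / tau
    # second-order alpha synapse: (1 + alpha^-1 d/dt)^2 U = R
    dU = Udot
    dUdot = alpha**2 * (R - U) - 2.0 * alpha * Udot
    R, V, U, Udot = R + dt * dR, V + dt * dV, U + dt * dU, Udot + dt * dUdot
    # synchrony via the conformal map Z = (1 - W*)/(1 + W*), W = pi*tau*R + iV
    W = np.pi * tau * R + 1j * V
    Zmod[k] = abs((1.0 - W.conjugate()) / (1.0 + W.conjugate()))

print("final R = %.3f, V = %.3f, |Z| = %.3f" % (R, V, Zmod[-1]))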
|
class BTerrain:
"""
Functions for creating Blender meshes from DTM objects
This class contains functions that convert DTM objects to Blender meshes.
Its main responsibility is to triangulate a mesh from the elevation data in
the DTM. Additionally, it attaches some metadata to the object and creates
a UV map for it so that companion ortho-images drape properly.
This class provides two public methods: `new()` and `reload()`.
`new()` creates a new object[1] and attaches a new mesh to it.
`reload()` replaces the mesh that is attached to an already existing
object. This allows us to retain the location and orientation of the parent
object's coordinate system but to reload the terrain at a different
resolution.
Notes
----------
[1] If you're unfamiliar with Blender, one thing that will help you in
reading this code is knowing the difference between 'meshes' and
'objects'. A mesh is just a collection of vertices, edges and
faces. An object may have a mesh as a child data object and
contains additional information, e.g. the location and orientation
of the coordinate system its child-meshes are reckoned in terms of.
"""
@staticmethod
def new(dtm, name='Terrain'):
"""
Loads a new terrain
Parameters
----------
dtm : DTM
name : str, optional
The name that will be assigned to the new object, defaults
to 'Terrain' (and, if an object named 'Terrain' already
exists, Blender will automatically extend the name of the
new object to something like 'Terrain.001')
Returns
----------
obj : bpy_types.Object
"""
bpy.ops.object.add(type="MESH")
obj = bpy.context.object
obj.name = name
# Fill the object data with a Terrain mesh
obj.data = BTerrain._mesh_from_dtm(dtm)
# Add some meta-information to the object
metadata = BTerrain._create_metadata(dtm)
BTerrain._setobjattrs(obj, **metadata)
# Center the mesh to its origin (the UV map for draping ortho
# images was already created in _mesh_from_dtm)
BTerrain._center(obj)
return obj
@staticmethod
def reload(obj, dtm):
"""
Replaces an existing object's terrain mesh
This replaces an object's mesh with a new mesh, transferring old
materials over to the new mesh. This is useful for reloading DTMs
at different resolutions but maintaining textures/location/rotation.
Parameters
-----------
obj : bpy_types.Object
An already existing Blender object
dtm : DTM
Returns
----------
obj : bpy_types.Object
"""
old_mesh = obj.data
new_mesh = BTerrain._mesh_from_dtm(dtm)
# Copy any old materials to the new mesh
for mat in old_mesh.materials:
new_mesh.materials.append(mat.copy())
# Swap out the old mesh for the new one
obj.data = new_mesh
# Update out-dated meta-information
metadata = BTerrain._create_metadata(dtm)
BTerrain._setobjattrs(obj, **metadata)
# Center the mesh to its origin (the UV map for draping ortho
# images was already created in _mesh_from_dtm)
BTerrain._center(obj)
return obj
@staticmethod
def _mesh_from_dtm(dtm, name='Terrain'):
"""
Creates a Blender *mesh* from a DTM
Parameters
----------
dtm : DTM
name : str, optional
The name that will be assigned to the new mesh, defaults
to 'Terrain' (and, if an object named 'Terrain' already
exists, Blender will automatically extend the name of the
new object to something like 'Terrain.001')
Returns
----------
mesh : bpy_types.Mesh
Notes
----------
* We are switching coordinate systems from the NumPy to Blender.
Numpy: Blender:
+ ----> (0, j) ^ (0, y)
| |
| |
v (i, 0) + ----> (x, 0)
"""
# Create an empty mesh
mesh = bpy.data.meshes.new(name)
# Get the xy-coordinates from the DTM, see docstring notes
y, x = np.indices(dtm.data.shape).astype('float64')
x *= dtm.mesh_scale
y *= -1 * dtm.mesh_scale
# Create an array of 3D vertices
vertices = np.dstack([x, y, dtm.data]).reshape((-1, 3))
# Drop vertices with NaN values (used in the DTM to represent
# areas with no data)
vertices = vertices[~np.isnan(vertices).any(axis=1)]
# Calculate the faces of the mesh
triangulation = Triangulate(dtm.data)
faces = triangulation.face_list()
# Fill the mesh
mesh.from_pydata(vertices, [], faces)
mesh.update()
# Create a new UV layer
mesh.uv_textures.new("HiRISE Generated UV Map")
# We'll use a bmesh to populate the UV map with values
bm = bmesh.new()
bm.from_mesh(mesh)
bm.faces.ensure_lookup_table()
uv_layer = bm.loops.layers.uv[0]
# Iterate over each face in the bmesh
num_faces = len(bm.faces)
w = dtm.data.shape[1]
h = dtm.data.shape[0]
for face_index in range(num_faces):
# Iterate over each loop in the face
for loop in bm.faces[face_index].loops:
# Get this loop's vertex coordinates
vert_coords = loop.vert.co.xy
# And calculate it's uv coordinate. We do this by dividing the
# vertice's x and y coordinates by:
#
# d + 1, dimensions of DTM (in "posts")
# mesh_scale, meters/DTM "post"
#
# This has the effect of mapping the vertex to its
# corresponding "post" index in the DTM, and then mapping
# that value to the range [0, 1).
u = vert_coords.x / ((w + 1) * dtm.mesh_scale)
v = 1 + vert_coords.y / ((h + 1) * dtm.mesh_scale)
loop[uv_layer].uv = (u, v)
bm.to_mesh(mesh)
return mesh
@staticmethod
def _center(obj):
"""Move object geometry to object origin"""
bpy.context.scene.objects.active = obj
bpy.ops.object.origin_set(center='BOUNDS')
@staticmethod
def _setobjattrs(obj, **attrs):
for key, value in attrs.items():
obj[key] = value
@staticmethod
def _create_metadata(dtm):
"""Returns a dict containing meta-information about a DTM"""
return {
'PATH': dtm.path,
'MESH_SCALE': dtm.mesh_scale,
'DTM_RESOLUTION': dtm.terrain_resolution,
'BIN_SIZE': dtm.bin_size,
'MAP_SIZE': dtm.map_size,
'MAP_SCALE': dtm.map_scale * dtm.unit_scale,
'UNIT_SCALE': dtm.unit_scale,
'IS_TERRAIN': True,
'HAS_UV_MAP': True
}
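For orientation, here is a minimal usage sketch. The DTM loader and its arguments are assumptions for illustration; only BTerrain.new() and BTerrain.reload() above are real.

# Hypothetical usage sketch: 'DTM.load' and its arguments are assumed
# for illustration; the actual loader lives elsewhere in the add-on.
dtm = DTM.load('/path/to/dtm.IMG', terrain_resolution=0.1)
obj = BTerrain.new(dtm, name='Terrain')

# Later, reload the same object at a different resolution while keeping
# its materials, location and rotation.
coarser = DTM.load('/path/to/dtm.IMG', terrain_resolution=0.05)
BTerrain.reload(obj, coarser)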
|
Exploring Working Relationships between Union Representatives and School Management Teams in the Rural Public Schools of South Africa: Implications for School Management Abstract The present study explores the working relationship between members of school management teams (SMTs) and union representatives in four rural public junior secondary schools in the Eastern Cape Province of South Africa. The study was conducted through a qualitative research methodology, using a case study design in four selected junior secondary schools and focus group interviews. Two sets of questions, arranged in two interview schedules, were posed to participants. Eleven SMT members and seven union representatives took part in the study. Separate focus group interviews were conducted with the SMT group and the union representative group in each school. The findings highlighted the role that SMT members and union representatives played, jointly or separately, in ensuring the smooth running of their schools, as well as in addressing tensions between staff and management and among staff members to enhance effective work in each school. The findings include issues of bias among SMT members against some union representatives or members of their unions, and the role of SMT members in consultation processes aimed at improving the working relationship with union members on matters of mutual interest. The study also showed a need for school managers and union representatives to know, interpret, understand and implement labor-related legislation in the same way to ensure consistency and stability in their schools. Recommendations to improve the relationship between school management teams and union representatives were made.
|
Junior Cody Campbell showed off his school spirit while supporting the football team.
School spirit put on its true game face this season with the Screamin’ Scots more alive than ever.
The Screamin’ Scots are a group of energetic Carlmont students who attend sporting events, both home and away, to cheer with school pride and motivate the players to reach their fullest potential.
Their enthusiastic energy at football games has “definitely [been] motivating,” said sophomore Jake Kumamoto from the Junior Varsity football team.
As for this year’s football season, the Screamin’ Scots have been “more alive than ever. The passion and pride that they provide just makes the games all that better. Screamin’ Scots is such a great way for students to exhibit their appreciation of our school. It brings everyone together and reminds us that we are all one united team,” said senior Kiana Yekrang.
As we all know, football season has come to an end at Carlmont for the 2013-2014 school year.
However, the Screamin’ Scots are nowhere near finished. Upcoming sporting events will be the Screamin’ Scots’ comeback.
“We won’t stop. We’re here to stay,” said Yekrang.
|
P374 Front-loading infliximab dosing regimen improves outcomes in Crohn's Disease perianal fistulas Higher infliximab maintenance trough levels are found to be associated with higher perianal fistula healing rates in Crohn's Disease. The relevance of accelerated front-loading dosing in perianal fistula has not been determined. This study aimed to establish whether front loading with a higher-dose infliximab regimen at induction is associated with better fistula outcomes. Crohn's Disease patients with perianal fistulas treated with infliximab in a tertiary referral centre IBD unit were included. Patients were categorised as standard or front-loading based on an induction infliximab dose of 5 mg/kg or 10 mg/kg, respectively. The target for infliximab maintenance trough levels was >10 mcg/mL. The primary outcome was the need for reintervention (defined as repeat abscess drainage, seton re-insertion, diverting stoma or proctectomy) at 12 months post initiation of treatment. Secondary outcomes were the proportions of patients having clinical healing, radiologic healing, and combined clinical and radiologic healing. Drug levels post induction and post first maintenance were evaluated. The proportions of patients needing dose escalation or de-escalation were also assessed. Chi-squared or Fisher exact tests were used to compare categorical variables, and Kaplan-Meier survival curves were plotted for reintervention-free survival. Seventy-nine patients were included in the analysis (males:females 36:43; median age with fistula = 31). Seventeen (22%) patients received a front-loading dose of 10 mg/kg, while sixty-two (79%) patients received the standard dose of 5 mg/kg, among whom 70% had dose escalation. Need for reintervention was significantly lower in patients who received a front-loading dosing schedule compared to the standard dose (2/17 (12%) vs 26/62 (42%), p=0.02) (Figure 1). Higher clinical fistula healing rates were observed with front-loading dosing (82% vs 45%, p=0.008). Radiologic healing in those who had follow-up Magnetic Resonance Imaging (MRI) was not significantly different between the groups (8/17 (47%) vs 20/62 (32%), p=0.133). Target therapeutic infliximab levels at maintenance were achieved in 14/17 (82%) of the patients receiving front-loading dosing, while this was achieved in only 2 of the 43 (5%) tested patients with standard dosing. Reintervention rates were higher in those patients with suboptimal maintenance levels (17/44 (39%) vs 2/16 (12.5%), p=0.01). The Kaplan-Meier curves confirm that the front-loading dosing regimen reduces the need for reintervention (p=0.03) (Figure 2). Front loading with a higher infliximab dose achieves better fistula healing and reduces the need for reintervention in Crohn's disease with perianal fistula.
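As a quick check of the headline comparison, the reintervention counts reported above can be fed to a Fisher exact test. This is an illustrative sketch only: the counts come from the abstract, while SciPy is an assumed tool, not one the authors name.

from scipy.stats import fisher_exact

# Reintervention at 12 months, from the abstract:
# front-loading 2/17, standard 26/62.
table = [[2, 17 - 2],
         [26, 62 - 26]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")  # should land near the reported p=0.02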
|
Overall, 260 women and 202 men were examined comparatively during duodenal ulcer exacerbation. 53.8% of the women demonstrated an atypical disease course. Fibrogastroscopy is a method for early diagnosis of duodenal ulcer in women. Compared with men, the ulcers in women were smaller and shallower, and the acid-peptic factor was less pronounced. A four-week treatment comprising diet, antacids, gastrozepine and repair agents brought about ulcer healing in 82% of the women. The single and daily doses of antacids administered to women were 1.5 times lower than in men. The use of H2-blockers is indicated in women with a severe course of peptic ulcer. The inclusion of antibacterial agents is indicated if the mucous membrane of the antral part of the stomach demonstrates Campylobacter pylori.
|
The role of drugs, especially injectables, in medical treatment is extremely important. In Japan, accidents associated with injections were not uncommon. Post-injection paralysis of the radial and sciatic nerves was linked to problems in textbooks, which described injection sites in regions where these nerves pass. Then, in the 1970s, quadriceps femoris muscle contracture and deltoid muscle contracture became a social problem. When the author examined the osmotic pressure of various injectables, some were found to deviate from the physiological range. When 335 injectables were evaluated for their potential to cause hemolysis, many marketed for intramuscular injection gave strong hemolytic reactions, which attracted attention. Next, 89 types of injectables were tested for their potential to cause muscle damage by injection into the femoral muscle of rabbits, and severe muscular damage was observed with antipyretics, analgesics, and antibiotics. The group lawsuit of Yamanashi Prefecture was settled at the Tokyo High Court in 1989, with the pharmaceutical company agreeing to pay over 2.95 billion yen in settlement. Giving drugs to patients occupies an important place among medical acts. Medical disputes concerning accidents involving wrong drugs and wrong usage are common. Obviously, health care workers should exercise great caution in their duties, but without measures addressing medical equipment and drugs, such as containers and seals, this type of accident will not be reduced. Through giving numerous speeches and lectures on medical accidents over the years, the author felt strongly the need for teaching materials, such as video tapes, that can be used for self-learning. In 1999, the video entitled "Medical Accident: Learning from Actual Cases" was finally completed, which consists of 6 parts (1. General considerations, 2. Blood transfusion, 3. Drug administration, 4. Surgery, 5. Examinations, 6. Management). For each area, typical incidents are presented as re-enacted video recordings. Then the criminal, civil and administrative responsibility of the doctor or nurse concerned is examined in the form of questionnaires. Subsequently, the author also published a commentary book for the video that provides commentaries on the cases.
|
Using the relic dark energy hypothesis to investigate the physics of cosmological expansion We use the trans-Planckian hypothesis about dark energy, as outlined by Mersini et al., to investigate some of the physics of cosmological expansion. We find that a parametric oscillator equation frequency we can use leads to a time-dependent frequency whose dispersion-relationship behavior mimics the Epstein functions used by Mersini et al. in the initial phases of cosmological expansion, but that it is very difficult to meet the trans-Planckian assumption of a vanishing of this same frequency at ultra-high momentum values with known physics. That being the case, a numerical algorithm is proposed to reconstruct the scale factor for cosmological expansion, and with it more of the physics of the trans-Planckian hypothesis at trans-Planckian momentum values. I. INTRODUCTION We investigate whether using dark energy 1 from the tail modes of ultra-high momentum contributions of the universe leads to useful cosmological expansion models. We assert that the answer is yes, provided that we pick fitting scale factors for cosmological expansion. Here the brackets contain scalar sectors, a traceless, transverse tensor $h_{ij}$ (plus its complex conjugate), that we associate with gravitational waves. We can use this tensor to define a quantity $T$ for each mode $k$ with a Fourier-type decomposition, which includes additional information via an assumed polarization tensor. We make a linkage to the trans-Planckian hypothesis by setting this as the square of the frequency derived by Mersini et al. 1 and the Epstein formula they used to obtain the desired behavior of density functionals in a given ratio. They attempt to reconstruct a frequency range via modification of a hypothesis by Magueijo and Smolin 3 with respect to an alteration of the special-relativity energy hypothesis advanced to fit cosmic ray data. However, that, plus a modification of the Magueijo and Smolin 3 hypothesis, did not work out so well. Note that this is independent of the slow-roll hypothesis needed for cosmological potential-field systems. 4 We are referring to the slow-roll condition, where $H$ is the Hubble expansion rate, which is a requirement of realistic inflation models. 4 Note that the slow-roll requirement is for scalar fields dominant in the early phases of inflationary cosmology. In this situation, dynamics given by dominating gravitational-wave perturbations lead to considerations of a momentum spread, and not the scalar-field inflation model, which is useful in the initial phases of cosmological inflation. Still, the breakdown of the dispersion-relationship model given as an example will lead us to evaluate this problem via the numerical reconstruction procedure that we outline herein. The modified energy expression appears to have insurmountable problems, one of which shows up in the beta coefficient in the denominator becoming too large. This is in tandem with Lemoine, Martin, and Uzan. 5,6 However, we assert that the trans-Planckian hypothesis is a useful means to winnow appropriate scale-factor behavior, and should be viewed as a test-bed for finding models that appropriately characterize the expected expansion physics as the universe evolves beyond today's environment. III. DESCRIPTION OF PROCEDURE USED TO OBTAIN THE ENERGY DENSITY RATIO. What Mersini et al. 1 did was to use ultra-low dispersion-relationship values at ultra-high momentum values to obtain ultra-low energy values, which were, and still remain, allegedly frozen.
We also have a specific tail-mode energy region picked to obtain $H(k)$. 1 We then have an energy calculation for the tail modes, 1 which is about 122 orders of magnitude smaller than the Planck-scale value. Here, the tail modes of energy are chosen as frozen 1 during any expansion of the universe. This is for energy modes in the frozen frequency regions. iv. There exists a correspondence principle: at energy scales much smaller than $E_P$, conventional special and general relativity are true. That is, they hold to first order in the ratio of energy scales to $E_P$. We can now consider how to fashion these principles into predictions of energy values that we can use to obtain dispersion relationships. Magueijo and Smolin 3 obtained a modified relationship between energy and mass, which fit the Epstein function behavior for energy but which also violated the slow-roll requirements of potential fields in cosmology. Needless to say, we found it instructive to try to come up with an Ansatz, see the discussion of Eq. (), that appeared to give correct behavior in obtaining the ratio of Eq. (). We found it useful to work instead with the following: the factor of 11 was merely put in to help form a dispersion relationship whose limiting behavior leads to the same result as spoken of with the modified Epstein function, 2 assuming that the relevant ratio is approximately unity, so that it gives the values seen in Fig. 1. Note how the cut-off value of momentum $k_P$ as a quantity in the dispersion behavior leads to the results seen in Fig. 1. [Fig. 1 about here] We can contrast this dispersion behavior with 1 a value $\cong 11 \cdot 10^{3} + 3$, whereas we would prefer to find one with a very different lower bound than the behavior seen in Eq. (). If we pick $\equiv 10^{-10}$ as suggested by T. Jacobson 7 to try to solve the cosmic ray problem, we then find that Eq. () approaches unity. Appendix II shows us that we still could not match the beta coefficient values needed to solve the cosmic ray problem of special relativity. V. CONFRONTING THE NUMERICAL RECONSTRUCTION ALGORITHM NECESSARY TO RETRIEVE PHYSICS. At the onset of this article, we noted the necessity of retrieving this information. At low momentum values, the ratio $\dot{a}/a$ is a vanishingly small contribution that grows with increasing time increments. So we can, using Fig. 1, set up qualitative bounds on the behavior of the scale factor $a$ and use it to reconstruct useful physics, assuming that the trans-Planckian hypothesis is legitimate. Qualitatively, as the momentum gets larger, the scale-factor expansion of the universe continues to grow. This leads to a numerical differential equation for $a$, given what we know of the Epstein function behavior shown in Fig. 1. As we observe, we have the completely expected phenomenon of a slowly expanding scale factor $a$. What is of very significant import is determining the physics of the large hump in Fig. 1, that is, the inflection point where the net frequency turns from an increasing function of momentum to one that decreases asymptotically. I believe that this has been insufficiently explored in the literature and bears significantly upon the necessary and sufficient conditions for the validity of the trans-Planckian hypothesis. Furthermore, note that this hump occurs well before the defining frequency. VI. CONCLUSION We found that the dispersion relationship given in Eq. () could not be made to satisfy the requirements needed to solve the cosmic ray problem.
We have thereby established that perhaps the analytical criteria used to derive the behavior of the dispersion relationship are not necessarily the optimal way to extract physical information from the trans-Planckian hypothesis. 3,6,7 Accordingly, in the last section, a necessary and sufficient set of conditions for a numerical simulation was proposed that would: 1. Give an idea of the relative importance of the time variation of momentum in our cosmology models. APPENDIX I: THE BOGOLIUBOV FUNCTION USED We followed Bastero-Gil and Mersini's 5 assumption of negligible deviations from a strictly thermal universe, and we proved it in our Bogoliubov coefficient calculation. This led to us picking the thermality coefficient 5 $B$ to be quite small. In addition, the ratio of conformal times given by $C$ had little impact upon Eq. () of the main text. Also, if we are working with the conformal case of $\nu = 1/6$ appearing in 5: we begin with 5 an expression valid where $k_1 < k_P$, with $k_1$ in the trans-Planckian regime. Then, we obtain:
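Since Fig. 1 hinges on a dispersion relation that rises at low momentum, turns over at a hump, and then decays toward zero at trans-Planckian momenta, a toy numerical sketch can locate that inflection. The Epstein-like profile and all parameter values below are stand-in assumptions, not the paper's actual equations.

import numpy as np

# Toy stand-in for an Epstein-like dispersion: omega^2(k) grows like k^2
# at low momentum and is exponentially suppressed above an assumed
# turnover scale k_c. Purely illustrative; not the paper's Eq.

k = np.linspace(0.01, 10.0, 2000)       # momentum in assumed units of k_P
k_c, d = 2.0, 0.5                       # assumed turnover scale and width
omega2 = k ** 2 / (1.0 + np.exp((k - k_c) / d))

omega = np.sqrt(omega2)
k_hump = k[np.argmax(omega)]            # where omega stops rising and starts falling
print(f"toy dispersion peaks at k ~ {k_hump:.2f} (units of k_P)")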
|
#include <iostream>
#include <algorithm>
using namespace std;

// Returns true if any triplet in arr[0..n-1] sums to k.
// Sort once, then for each i run the classic two-pointer scan over the
// remaining subarray: O(n^2) time after an O(n log n) sort.
bool TripletSum(int arr[], int n, int k) {
    sort(arr, arr + n);
    for (int i = 0; i < n - 2; i++) {
        int j = i + 1;
        int y = n - 1;
        while (j < y) {
            int sum = arr[i] + arr[j] + arr[y];
            if (sum == k)
                return true;
            else if (sum > k)
                y--;   // sum too big: shrink from the right
            else
                j++;   // sum too small: grow from the left
        }
    }
    return false;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n, k;
        cin >> n >> k;
        int arr[n];   // variable-length array: common in competitive code, non-standard C++
        for (int i = 0; i < n; i++) {
            cin >> arr[i];
        }
        cout << TripletSum(arr, n, k) << endl;   // bool prints as 1 or 0
    }
    return 0;
}
|
On upper bounds for the minimum rank of regular classes of (0,1)-matrices Let be the set of all (0,1)-matrices of order with constant line sum, and let be the minimum rank over. It is known that, where is the rank of a recursively defined matrix. Brualdi, Manber and Ross showed that if and only if. In this note, we study the equality conditions for the upper bounds. We prove that if and only if, and then answer a question posed by Pullman and Stanford. Moreover, we show that only if satisfies one of the following three relations: (i), or ; (ii), ; (iii), and. On the other hand, we show that for and.
|
An Intelligent and Pervasive Surveillance System for Home Security Domotics is a promising area for intelligent and pervasive applications that aim at achieving a better quality of life. Harnessing modern technologies is valuable in many contexts, in particular in the home surveillance scenario, where people's safety or security might be threatened. Modern home security systems endorse monitoring as well as control functions in a remote fashion, e.g. via devices such as laptops, PDAs, or cell phones, thus implementing the pervasive computing paradigm; moreover, intelligence is now often embedded into modern applications, e.g. surveillance systems can adapt to the environment through a self-learning algorithm. This work presents an intelligent and pervasive surveillance system for home and corporate security based on the ZigBee protocol, which detects and classifies intrusions while discarding false positives, and also provides remote control and live camera streaming. Results of tests in different environments show the effectiveness of the proposed system.
|
LEGO recently made architecture news with their BIG-designed "LEGO House," a museum and "experience center". Image Courtesy of LEGO Group
LEGO has long been recognized by architects as a key inspiration in the world of creative building - but the Danish toy company's influence over the construction industry may be about to get a whole lot more direct. Yesterday, LEGO announced the establishment of its own sustainable materials research center, with an investment of 1 billion Danish kroner ($150 million US), which will seek sustainable alternatives to the plastic used in its products and packaging.
The LEGO Sustainable Materials Center will be established at LEGO's headquarters in Billund, Denmark over the coming year, when the company expects to recruit over 100 specialists in materials science. LEGO also intends to continue and expand its collaborations with organizations such as the World Wildlife Fund (WWF) in the search for sustainable alternatives.
"Our mission is to inspire and develop the builders of tomorrow," said LEGO Group owner Kjeld Kirk Kristiansen. "We believe that our main contribution to this is through the creative play experiences we provide to children. The investment announced is a testament to our continued ambition to leave a positive impact on the planet, which future generations will inherit."
Though the research center's primary concern is to find sustainable materials for the 60 billion parts LEGO produces per year, its findings will inevitably have impacts on other sectors that work with plastics - including architecture. The LEGO Group has said that it will "continuously report" on its progress as it aims to replace all the materials it uses with sustainable alternatives by 2030.
|
package jun.prospring5.ch5;

public class DefaultSimpleBean implements SimpleBean {

    private long dummy = 0;

    @Override
    public void advised() {
        this.dummy = System.currentTimeMillis();
    }

    @Override
    public void unadvised() {
        this.dummy = System.currentTimeMillis();
    }
}
|
// graphanything/src/main/java/info/puzz/graphanything/models2/GraphEntry.java
package info.puzz.graphanything.models2;

import java.io.Serializable;

import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.experimental.Accessors;

@NoArgsConstructor
@Data
@Accessors(chain = true)
public class GraphEntry implements Serializable {

    public static final int COLUMNS_NO = 5;

    public Long _id;
    public long graphId;
    public long created;
    public String comment;

    public Double value0;
    public Double value1;
    public Double value2;
    public Double value3;
    public Double value4;
    /*
    public Double value5;
    public Double value6;
    public Double value7;
    public Double value8;
    public Double value9;
    */

    String message;

    /**
     * Stupid, but it works.
     */
    public GraphEntry set(int i, Double val) {
        if (i < 0 || i >= COLUMNS_NO) { // index must name one of the existing columns
            throw new Error("Invalid index:" + i);
        }
        switch (i) {
            case 0: value0 = val; break;
            case 1: value1 = val; break;
            case 2: value2 = val; break;
            case 3: value3 = val; break;
            case 4: value4 = val; break;
            /*
            case 5: value5 = val;
            case 6: value6 = val;
            case 7: value7 = val;
            case 8: value8 = val;
            case 9: value9 = val;
            */
        }
        return this;
    }

    // Returns the i-th value, or the given default when it is unset.
    public Double getOr(int i, double dflt) {
        Double res = get(i);
        if (res == null) {
            return dflt;
        }
        return res.doubleValue();
    }

    public Double get(int i) {
        switch (i) {
            case 0: return value0;
            case 1: return value1;
            case 2: return value2;
            case 3: return value3;
            case 4: return value4;
            /*
            case 5: return value5;
            case 6: return value6;
            case 7: return value7;
            case 8: return value8;
            case 9: return value9;
            */
            default: throw new Error("Invalid index:" + i);
        }
    }
}
|
Combining Patent Law Expertise with R&D for Patenting Performance Drawing on the resource-based view (RBV), this paper examines how the combination or bundling of resources influences firm-patenting performance. We hypothesize that firm-patenting output depends not only on research and development (R&D); furthermore, this effect is moderated by the firm's level of top management team (TMT) patent law background and industry-patenting pressures. However, our hypothesis of a complementary relationship between patent law expertise and R&D was not supported; instead, we found evidence of a counterintuitive (weak) negative interaction between these two variables. Our findings shed light on how the combination of other resources with R&D affects firm-patenting performance, and advance the integration of complementary organizational perspectives with the RBV.
|
// Get the data count of a specific attribute.
// Returns 0 if the buffer for the given attribute does not exist.
const uinteger ArrayBuffer::GetDataCount(AttributeSemantic attribSemantic) const {
    if (mAttribFeats[attribSemantic] == NULL)
        return 0;
    return mAttribFeats[attribSemantic]->dataCount;
}
|
Extending CSR in SMEs' upstream supply chains: a dynamic capabilities perspective ABSTRACT In recent years, corporate social responsibility (CSR) has benefited from a renewed interest in supply chains. However, little scholarly attention has been paid to CSR practices in small- and medium-sized enterprises' (SMEs') supply chains that lie beyond the first-tier supplier. Drawing from the dynamic capabilities perspective, the purpose of this study is twofold: first, to examine how SMEs extend CSR into their multi-tier supply chain (MSC); and second, to investigate the drivers of and the barriers to this process. A multiple-case study was conducted to examine six triadic relationships of SMEs including sub-suppliers. We performed within-case and cross-case analyses. The study shows that SMEs use co-evolving or reflexive control capabilities to extend CSR to the SME first-tier supplier. The findings reveal that, in contrast, SMEs use active delegation, supply chain re-conceptualization capabilities or a 'don't bother' approach to extend CSR to the SME second-tier supplier. In addition, our study shows that the type of internal drivers (instrumental or normative) determines the dynamic capabilities implemented at the first-tier and second-tier supplier levels, while external drivers are weak and the barriers are primarily internal.
|
/*
* Copyright 2021 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.hateoas.mediatype.html;
import org.springframework.hateoas.mediatype.InputTypeFactory;
import org.springframework.lang.Nullable;
/**
* An {@link InputTypeFactory} based on {@link HtmlInputType}.
*
* @author <NAME>
* @since 1.3
*/
class HtmlInputTypeFactory implements InputTypeFactory {

    /*
     * (non-Javadoc)
     * @see org.springframework.hateoas.mediatype.InputTypeFactory#getInputType(java.lang.Class)
     */
    @Nullable
    @Override
    public String getInputType(Class<?> type) {

        HtmlInputType inputType = HtmlInputType.from(type);

        return inputType == null ? null : inputType.value();
    }
}
|
DUBLIN--(BUSINESS WIRE)--The "Global Gel Batteries Market 2018-2022" report has been added to ResearchAndMarkets.com's offering.
The gel batteries market will register a CAGR of more than 3% through 2022.
With the growing subscription of 4G users globally, the need for installation of new telecom towers will also increase. These factors will boost the demand for energy storage devices in telecommunication industry during the forecast period, thus driving growth in the gel batteries market.
As gel batteries are low-maintenance and provide longer quality service, they are well suited for operation in extreme temperatures. This will enhance the productivity and adoption of gel batteries. Thus, the deployment of gel batteries in microgrids acts as one of the prominent factors in the growth of the global gel batteries market.
Gel batteries can, however, be easily replaced by AGM and other flooded batteries, owing to the advantages associated with those alternatives, such as cost-effectiveness and better performance.
|
// app/src/main/java/com/verNANDo57/rulebook_educational/pdflib/pdfium/util/SizeF.java
package com.verNANDo57.rulebook_educational.pdflib.pdfium.util;

import androidx.annotation.NonNull;

public class SizeF {

    private final float width;
    private final float height;

    public SizeF(float width, float height) {
        this.width = width;
        this.height = height;
    }

    public float getWidth() {
        return width;
    }

    public float getHeight() {
        return height;
    }

    @Override
    public boolean equals(final Object obj) {
        if (obj == null) {
            return false;
        }
        if (this == obj) {
            return true;
        }
        if (obj instanceof SizeF) {
            final SizeF other = (SizeF) obj;
            return width == other.width && height == other.height;
        }
        return false;
    }

    @NonNull
    @Override
    public String toString() {
        return width + "x" + height;
    }

    @Override
    public int hashCode() {
        return Float.floatToIntBits(width) ^ Float.floatToIntBits(height);
    }

    public Size toSize() {
        return new Size((int) width, (int) height);
    }
}
|